# Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function


## Abstract


## 1. Introduction

## 2. Theoretical Considerations of the Digital Filter

#### 2.1. EMD Algorithm

The EMD decomposes the signal into intrinsic mode functions (IMFs), ${c}_{i}(t)$. The IMF, ${c}_{i}(t)$, is defined under the conditions that: (i) the numbers of extrema (maxima plus minima) and zero crossings in the entire data series must either be equal or differ by at most one; and (ii) at any point, the mean value of the envelope defined by the local maxima and the envelope defined by the local minima must be zero [10]. These conditions are met if $|n{U}_{i}(t)+n{L}_{i}(t)-n{Z}_{i}(t)|\le 1$, where $n{U}_{i}(t)$, $n{L}_{i}(t)$ and $n{Z}_{i}(t)$ are the numbers of maxima (upper peaks), minima (lower peaks) and zero crossings of the EMD, respectively. The EMD algorithm can be represented as follows.
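Condition (i) can be checked directly on sampled data. Below is a minimal pure-Python sketch (the function names are illustrative, not from the paper); condition (ii) additionally requires envelope construction, which happens during sifting:

```python
# Check IMF condition (i): the number of extrema and the number of
# zero crossings must be equal or differ by at most one. Pure-Python
# sketch; function names are illustrative, not from the paper.

def count_extrema_and_zero_crossings(c):
    n_max = sum(1 for k in range(1, len(c) - 1) if c[k-1] < c[k] > c[k+1])
    n_min = sum(1 for k in range(1, len(c) - 1) if c[k-1] > c[k] < c[k+1])
    n_zero = sum(1 for k in range(1, len(c)) if c[k-1] * c[k] < 0)
    return n_max + n_min, n_zero

def is_imf_candidate(c):
    n_extrema, n_zero = count_extrema_and_zero_crossings(c)
    return abs(n_extrema - n_zero) <= 1
```

A zero-mean oscillation passes this check, while a one-sided ripple riding on a positive offset (many extrema, no zero crossings) fails it.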

**Step 1:** Spline the datasets, denoted ${x}_{k}(t)$, by interpolating the upper and lower envelopes, ${U}_{i}(t)$ and ${L}_{i}(t)$, which are constructed as follows:

- (1) When constructing the upper and lower envelopes, calculate the parabola coefficients of $a{x}^{2}+bx+c$ using x(k − 1), x(k) and x(k + 1);
- (2) If the second-degree coefficient, a, equals zero, x(k) is certainly not an extremum, so the sliding window moves on to the next discrete value of x(k);
- (3) If the first-degree coefficient, b, equals zero, x(k) is an extremum, either a maximum or a minimum depending on the sign of a. We then calculate the top of this parabola from ${t}_{\text{top}}=\frac{-b}{2a}$; the same applies to the bottom part of the parabolic curves;
- (4) Repeat (2) and (3), stopping after x(k + n) has been processed.
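The parabola scan above can be sketched as follows: fitting $a{t}^{2}+bt+c$ through the three points in local coordinates t = −1, 0, 1 gives closed-form coefficients, and the vertex ${t}_{\text{top}}=-b/2a$ locates the envelope point. This is an illustrative sketch, not the paper's implementation:

```python
# Three-point parabola scan from Step 1: fit a*t^2 + b*t + c through
# x(k-1), x(k), x(k+1) in local coordinates t = -1, 0, 1 and place an
# envelope point at the vertex t_top = -b / (2a). Illustrative sketch,
# not the paper's implementation.

def parabola_extrema(x):
    maxima, minima = [], []
    for k in range(1, len(x) - 1):
        a = (x[k-1] - 2.0 * x[k] + x[k+1]) / 2.0
        b = (x[k+1] - x[k-1]) / 2.0
        if a == 0.0:              # locally linear: x(k) is not an extremum
            continue
        t_top = -b / (2.0 * a)    # vertex of the fitted parabola
        if abs(t_top) <= 1.0:     # vertex lies inside the sliding window
            value = a * t_top ** 2 + b * t_top + x[k]
            (maxima if a < 0.0 else minima).append((k + t_top, value))
    return maxima, minima
```

The returned positions are sub-sample (k + t_top), which is the point of fitting a parabola rather than taking the raw sample maxima.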

**Step 2:** Average the maxima and minima envelopes in order of the time series, which is represented as

**Step 3:** Subtract the mean of the local extrema (maxima and minima), ${m}_{i}(t)$, from ${x}_{i}(t)$ in order of the time series; the new decomposed signal is

**Step 4:** Repeat Steps 1 through 3 k times until ${h}_{1k}(t)$ equals ${c}_{1}(t)$. Using Equation (5), where ${h}_{1(k-1)}(t)-{m}_{1k}={h}_{1k}(t)$, we designate ${c}_{1}(t)$ as the first IMF.
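Steps 2 through 4 amount to the following sifting loop. As a simplification, this sketch uses piecewise-linear envelopes through the extrema rather than the cubic-spline envelopes usual in EMD, and a fixed iteration cap in place of a formal stopping criterion:

```python
# Sifting sketch for Steps 2-4 (illustrative, not the paper's code):
# envelopes here are piecewise-linear through the extrema instead of
# cubic splines, and a fixed iteration cap replaces a stopping criterion.

def find_extrema(x):
    maxima = [(k, x[k]) for k in range(1, len(x) - 1) if x[k-1] < x[k] > x[k+1]]
    minima = [(k, x[k]) for k in range(1, len(x) - 1) if x[k-1] > x[k] < x[k+1]]
    return maxima, minima

def envelope(points, n):
    # piecewise-linear curve through (index, value) points, padded flat
    # to both ends of the series
    env = [0.0] * n
    pts = [(0, points[0][1])] + points + [(n - 1, points[-1][1])]
    for (ka, va), (kb, vb) in zip(pts, pts[1:]):
        if kb == ka:
            env[ka] = va
            continue
        for i in range(ka, kb + 1):
            env[i] = va + (vb - va) * (i - ka) / (kb - ka)
    return env

def sift(x, max_iter=10):
    h = list(x)
    for _ in range(max_iter):
        maxima, minima = find_extrema(h)
        if len(maxima) < 2 or len(minima) < 2:
            break                                        # no longer siftable
        upper = envelope(maxima, len(h))                 # U_i(t)
        lower = envelope(minima, len(h))                 # L_i(t)
        m = [(u + l) / 2.0 for u, l in zip(upper, lower)]  # Step 2: mean envelope
        h = [hv - mv for hv, mv in zip(h, m)]            # Step 3: h <- h - m
    return h                                             # candidate IMF c_1(t)
```

Applied to an oscillation riding on an offset, the loop strips the offset into the mean envelope and returns a zero-mean candidate IMF.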

**Step 5:** Find the other IMFs by calculating the first residual, which is given by

$${r}_{1}(t)=x(t)-{c}_{1}(t)$$

The residual is then treated as the new data and the sifting is repeated, yielding successive IMFs and the final residual ${r}_{n}(t)$. This procedure is represented by

$${r}_{n}(t)={r}_{n-1}(t)-{c}_{n}(t)$$

Having decomposed the signal into the IMFs and the residual ${r}_{n}(t)$, the analytical data can be expressed in terms of the Hilbert amplitude and instantaneous frequency, as follows:

$$x(t)=\mathrm{Re}{\displaystyle \sum _{i=1}^{n}{a}_{i}(t){e}^{j\int {\omega }_{i}(t)dt}}$$
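The residual bookkeeping of Step 5 can be sketched as below. The sifting itself is abstracted behind a stand-in `extract_imf` (a 3-point moving-average detrend here, not true sifting) purely to exercise the recursion ${r}_{i}(t)={r}_{i-1}(t)-{c}_{i}(t)$ and the reconstruction identity:

```python
# Residual recursion of Step 5. The sifting is abstracted behind a
# stand-in extract_imf (a 3-point moving-average detrend, not true
# sifting) purely to exercise r_i(t) = r_{i-1}(t) - c_i(t) and the
# reconstruction identity x(t) = sum_i c_i(t) + r_n(t).

def extract_imf(r):
    padded = [r[0]] + list(r) + [r[-1]]
    trend = [(padded[i] + padded[i+1] + padded[i+2]) / 3.0 for i in range(len(r))]
    return [ri - ti for ri, ti in zip(r, trend)]

def emd(x, n_imfs=3):
    imfs, residual = [], list(x)
    for _ in range(n_imfs):
        c = extract_imf(residual)                            # c_i(t)
        residual = [ri - ci for ri, ci in zip(residual, c)]  # r_i = r_{i-1} - c_i
        imfs.append(c)
    return imfs, residual
```

Whatever sifting is plugged in, the telescoping subtraction guarantees that the IMFs plus the final residual reconstruct the input.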

#### 2.2. Creating the aIMF Algorithm

The sifting stops when the residual ${r}_{n}(t)$ becomes either over-distorted or a monochromatic function from which no further IMF can be decomposed [11]. Another study confirmed that the real complex quantity of all the decomposed IMFs, i.e., IMF1 to IMF7, which are in the amplitude-frequency domain, shifted to the lower region [19].

The aIMF algorithm subtracts the average of the IMFs, ${c}_{a}(t)$, from the original nonlinear, nonstationary time series datasets, ${x}_{n}(t)$. Thus, a new digital filter is created, which is given by

$$\widehat{x}(t)={x}_{n}(t)-{c}_{a}(t),\phantom{\rule{1em}{0ex}}{c}_{a}(t)=\frac{1}{n}{\displaystyle \sum _{i=1}^{n}{c}_{i}(t)}$$

where each ${c}_{i}(t)$ is a function of the aIMF algorithm. As with the EMD, the decomposition stops when ${c}_{n}(t)$ becomes either over-distorted or a monochromatic function from which no further IMF can be decomposed.
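The aIMF filtering step, averaging the IMFs into ${c}_{a}(t)$ and subtracting it from the input, can be sketched as follows (the function name and the list-of-lists IMF representation are assumptions for illustration):

```python
# aIMF filtering as described above: average the IMFs into c_a(t) and
# subtract it from the input. The function name and the list-of-lists
# IMF representation are assumptions for illustration.

def aimf_filter(x, imfs):
    n = len(imfs)
    c_a = [sum(c[t] for c in imfs) / n for t in range(len(x))]  # c_a(t)
    return [x[t] - c_a[t] for t in range(len(x))]               # filtered signal
```

Any EMD implementation that returns the IMFs as equal-length sequences can feed this filter.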

#### 2.3. Theoretical Considerations of the WT Algorithm

where ${c}_{J,i}$ represents the information in the signal on the coarsest level, ${\varphi }_{J,i}$ is the scaling function and ${d}_{J,i}$ represents the details (wavelet coefficients) at the different scales necessary to reconstruct the function at the fine scale 0, at which the wavelet and scaling functions are compactly supported.

Noise reduction is performed by thresholding the wavelet coefficients ${d}_{J,i}$, which are normally affected at different levels of the scale j. This is given by Li [24]
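As an illustration of wavelet-coefficient thresholding, here is a one-level Haar transform with soft thresholding of the details ${d}_{J,i}$; the paper's WT denoiser would use a deeper decomposition and its own choice of mother wavelet and threshold:

```python
import math

# One-level Haar DWT with soft thresholding of the detail (wavelet)
# coefficients d_{J,i}: a minimal stand-in for the WT denoiser; the
# paper's filter would use a deeper decomposition and its chosen
# mother wavelet.

def haar_denoise(x, threshold):
    assert len(x) % 2 == 0
    s2 = math.sqrt(2.0)
    approx = [(x[2*i] + x[2*i+1]) / s2 for i in range(len(x) // 2)]
    detail = [(x[2*i] - x[2*i+1]) / s2 for i in range(len(x) // 2)]
    # soft threshold: shrink details toward zero, zeroing the small,
    # noise-dominated coefficients
    detail = [math.copysign(max(abs(d) - threshold, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):
        out += [(a + d) / s2, (a - d) / s2]   # inverse transform
    return out
```

Small pairwise differences (treated as noise) are removed; large ones (treated as signal structure) survive, merely shrunk by the threshold.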

#### 2.4. Theoretical Considerations of the EKF Algorithm

The EKF operates on a nonlinear state-space model of the form

$${x}_{k+1}=f({x}_{k},{u}_{k})+{w}_{k}$$

$${y}_{k}=h({x}_{k})+{v}_{k}$$

where ${x}_{k+1}$ is the state of the system at the next time step of the original datasets ${x}_{k}$; k is the time index; ${u}_{k}$ is the driving function, which may represent a control signal or distribution function; ${w}_{k}$ is a noise, independent and identically distributed (i.i.d.) as N(0,Q), where Q is the covariance (matrix) of the state; ${v}_{k}$ is another noise, i.i.d. N(0,R), where R is the covariance (matrix) of the measurement noise; ${y}_{k}$ is the measured output; and f(.) and h(.) are the state equation and output equation, respectively. The state and output functions in this case are nonlinear. Thus, Equations (27) and (28) are nominal states that are known (predicted) ahead of time, which are represented by

We then linearise the system by taking the Jacobians ${A}_{k}$ and ${C}_{k}$ of f(.) and h(.) with respect to ${x}_{k}$, and we obtain the following equations

We define the residual as the difference between the actual measurement ${y}_{k}$ and the nominal measurement ${\overline{y}}_{k}$, which is given by

where ${K}_{k}$ is the Kalman gain, ${P}_{k}$ is the covariance of the estimation error and I is the identity matrix.

where ${Q}_{k}$ and ${R}_{k}$ are relevant to x. In the execution mode, the measurement update (output state) adjusts the projected estimate based on an actual measurement at that time. It should be noted that the EKF applies Equations (30), (31) and (37) to update the prediction mode after the state changes periodically, and the Kalman gain ${K}_{k}$ determines how the observer responds to the difference between its estimated output and the noisy measurement. To simplify this, we can present the EKF as the system block diagram shown in Figure 3.
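The predict/update cycle described above can be sketched for a scalar system; the model f(x) = 0.9x, h(x) = x², and the noise covariances Q, R are illustrative assumptions, not the paper's model:

```python
# Scalar EKF predict/update cycle as described above. The model
# f(x) = 0.9x, h(x) = x^2 and the noise covariances Q, R are
# illustrative assumptions, not the paper's model.

def ekf_step(x_est, P, y, Q=0.01, R=0.1):
    f = lambda x: 0.9 * x          # state equation (assumed)
    h = lambda x: x * x            # output equation (assumed)
    A = 0.9                        # Jacobian df/dx
    # prediction (time update)
    x_pred = f(x_est)
    P_pred = A * P * A + Q
    # measurement update
    C = 2.0 * x_pred               # Jacobian dh/dx at the predicted state
    K = P_pred * C / (C * P_pred * C + R)     # Kalman gain
    x_new = x_pred + K * (y - h(x_pred))      # correct by the residual
    P_new = (1.0 - K * C) * P_pred
    return x_new, P_new
```

When the measurement exactly matches the prediction, the residual is zero and the correction vanishes; otherwise the gain decides how far the estimate moves toward the noisy measurement.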

#### 2.5. Theoretical Considerations of the PF Algorithm

In the PF, we estimate the state ${x}_{i}$ indirectly using the observations ${y}_{i}$, and we can specify a simple model as follows

We need to estimate ${x}_{k}$, which is equivalent to ${x}_{i}$, the original datasets, given all observations up to that point, ${y}_{1:n}$. Alternatively, we need to find the posterior distribution $p({x}_{1:n}|{y}_{1:n})$. Using Bayes' rule, we end up with two steps as follows:

- (i) Update step:$$p({x}_{k}|{y}_{1:k})=\frac{h({y}_{k}|{x}_{k})p({x}_{k}|{y}_{1:k-1})}{p({y}_{k}|{y}_{1:k-1})}$$
- (ii) Prediction step:$$p({x}_{k}|{y}_{1:k-1})={\displaystyle \int f({x}_{k}|{x}_{k-1})p({x}_{k-1}|{y}_{1:k-1})d{x}_{k-1}}$$
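For a finite (discrete) state space, the two steps above reduce to sums, which makes the recursion easy to sketch as a grid filter (the transition matrix and likelihood vector here are illustrative assumptions):

```python
# Discrete-state (grid) version of the Bayes recursion above: prediction
# applies the transition kernel f, update multiplies by the likelihood h
# and normalises. Matrix/vector inputs are illustrative assumptions.

def predict(posterior, transition):
    # p(x_k | y_{1:k-1}) = sum_j f(x_k | x_j) p(x_j | y_{1:k-1})
    n = len(posterior)
    return [sum(transition[j][i] * posterior[j] for j in range(n))
            for i in range(n)]

def update(prior, likelihood):
    # p(x_k | y_{1:k}) is proportional to h(y_k | x_k) p(x_k | y_{1:k-1})
    unnorm = [l * p for l, p in zip(likelihood, prior)]
    z = sum(unnorm)        # the normaliser p(y_k | y_{1:k-1})
    return [u / z for u in unnorm]
```

The particle filter approximates exactly this recursion when the state space is continuous and the integrals have no closed form.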

- (1) Assume $p({x}_{k-1}|{y}_{k-1})$ is the posterior probability distribution at k − 1, where the transition state (state equation) is
- (2) Resample $p({x}_{k}|{x}_{k-1})$, the prior probability distribution at k − 1, using the bootstrap algorithm;
- (3) Find a weight by Monte Carlo integration using $p({x}_{k-1}|{y}_{k-1})$ and $p({x}_{k}|{x}_{k-1})$ to obtain ${\displaystyle \int p\left({x}_{k}|{x}_{k-1}\right)p\left({x}_{k-1}|{y}_{k-1}\right)d{x}_{k-1}}\approx {\displaystyle \sum _{i}{w}_{k-1}^{i}p\left({x}_{k}|{x}_{k-1}^{i}\right)}$;
- (4) From (3), use the weights to estimate the particles at k − 1 and obtain ${\widehat{S}}_{k-1}=\left\{({\widehat{x}}_{k-1}^{i},{\widehat{w}}_{k-1}^{i});i=1,2,\mathrm{...},N\right\}$;
- (5) Update the likelihood function $p({y}_{k}|{x}_{k})$ with ${\widehat{x}}_{k-1}^{i}$ and ${\widehat{w}}_{k-1}^{i}$;
- (6) From (5), use the new updated likelihood to update $p({x}_{k}|{y}_{k})$, the posterior probability distribution at k, using a normalised weight from time to time. The weight can either be the average of all the weights or the last weight only;
- (7) Finally, we obtain $p({x}_{k}|{y}_{k})$, the posterior probability distribution at k.
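The steps above correspond to one iteration of the bootstrap particle filter, sketched below with an assumed Gaussian state and measurement model (the coefficients are illustrative):

```python
import math
import random

# One bootstrap-particle-filter iteration following steps (1)-(7).
# The state model x_k = 0.9 x_{k-1} + N(0, q) and Gaussian likelihood
# around the observation y are illustrative assumptions.

def pf_step(particles, y, rng, q=0.5, r=0.5):
    # (1)-(2) propagate each particle through the transition p(x_k | x_{k-1})
    particles = [0.9 * x + rng.gauss(0.0, q) for x in particles]
    # (3)-(5) weight each particle by the likelihood p(y_k | x_k)
    weights = [math.exp(-0.5 * ((y - x) / r) ** 2) for x in particles]
    total = sum(weights)
    weights = [w / total for w in weights]      # (6) normalise the weights
    # (7) resample to an unweighted particle set approximating p(x_k | y_k)
    return rng.choices(particles, weights=weights, k=len(particles))
```

Iterating the step with a fixed observation pulls the particle cloud toward the region the likelihood favours, balanced against the state equation's own dynamics.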

## 3. Simulation and Results

The performance of each algorithm was measured using MSE, MAE, MAPE, R^{2}, AIC, BIC and the Accuracy count, which is the sum of the upward and downward movements of all the underlying local signals after they have transited the reversion points. See the flowchart of the simulation and results in Figure 5.
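The loss estimators can be sketched as follows. The exact AIC/BIC form the paper uses is not spelled out, so the common Gaussian-likelihood form n·ln(MSE) plus a parameter penalty is assumed, and `accuracy_count` interprets the Accuracy count as directional agreement:

```python
import math

# Loss estimators from Section 3 for a fitted series y_hat against
# observations y. The AIC/BIC form n*ln(MSE) + penalty (with p model
# parameters) is an assumption; the paper does not spell out its form.

def metrics(y, y_hat, p=1):
    n = len(y)
    err = [a - b for a, b in zip(y, y_hat)]
    mse = sum(e * e for e in err) / n
    mae = sum(abs(e) for e in err) / n
    mape = 100.0 * sum(abs(e / a) for a, e in zip(y, err)) / n
    mean_y = sum(y) / n
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    r2 = 1.0 - sum(e * e for e in err) / ss_tot
    aic = n * math.log(mse) + 2 * p
    bic = n * math.log(mse) + p * math.log(n)
    return {"MSE": mse, "MAE": mae, "MAPE": mape, "R2": r2, "AIC": aic, "BIC": bic}

def accuracy_count(y, y_hat):
    # share of steps where the fit moves in the same up/down direction
    # as the observations (one reading of the paper's Accuracy count)
    agree = sum(1 for k in range(1, len(y))
                if (y[k] - y[k-1]) * (y_hat[k] - y_hat[k-1]) > 0)
    return 100.0 * agree / (len(y) - 1)
```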

#### 3.1. Adding White-Gaussian Noise

Six sets of sine waves were added to the original datasets ${x}_{k}(t)$ in time series. The distribution of the added noise is similar to white-Gaussian noise, as shown in Figure 6.

**Figure 6.** Six sets of sine waves at frequencies of 1, 5 and 10 Hz, with amplitudes of 10% and 20% of the exchange rates.

Here, ${x}_{k}$ represented the EUR-USD exchange-rate datasets, whose nonlinear and nonstationary characteristics were verified by all the testing methods, and ${w}_{k}$ represented the six sets of sine waves created in Section 3.1. The equation used to add white Gaussian noise to the original datasets can be rearranged as follows

where ${x}_{kg}(t)$ is the original signal with the noise, ${x}_{k}(t)$ is the original datasets and ${\epsilon}_{kg}(t)$ is the added noise, i.i.d. N(0,1), at frequencies of 1, 5 and 10 Hz with 100% random distribution over ${x}_{k}(t)$. To verify that Equation (43) is nonlinear, we tested ${x}_{kg}(t)$ with different parameters of the added noise. All of the p-values generated by the testing methods mentioned earlier were less than 0.05, implying that the characteristics of ${x}_{kg}(t)$ were nonlinear and nonstationary. ${x}_{kg}(t)$ was then used as the input signal to estimate the performance of the following algorithms: (i) aIMF, (ii) WT, (iii) EKF and (iv) PF.
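The sinusoidal noise injection of Section 3.1 can be sketched as below; the sample rate, zero phase, and amplitude scaling by the signal's range are assumptions for illustration:

```python
import math

# Sinusoidal noise injection as in Section 3.1: a sine at freq_hz with
# amplitude equal to a fraction of the signal's range is added to x_k(t).
# The 100 Hz sample rate and zero phase are assumptions for illustration.

def add_sine_noise(x, freq_hz, amplitude_frac, sample_rate=100.0):
    scale = amplitude_frac * (max(x) - min(x))
    return [xi + scale * math.sin(2.0 * math.pi * freq_hz * k / sample_rate)
            for k, xi in enumerate(x)]
```

Running it once per frequency (1, 5, 10 Hz) and amplitude (10%, 20%) would reproduce the six noise sets of Figure 6.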

#### 3.2. Simulation and Performance Measurements of the aIMF Algorithm

The performance of the aIMF algorithm was evaluated using MSE, MAE, MAPE, R^{2}, AIC, BIC and the Accuracy count.

As shown in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, R^{2} is high, ranging from 0.8533 to 0.9228, whereas AIC and BIC reach −6305.47 and −6974.09, respectively. The Accuracy count ranged from 57.69% to 79.66%. As a result, the performance of the aIMF algorithm is acceptable.

In a further simulation, the MSE, MAE, MAPE, R^{2}, AIC, BIC and Accuracy count were 0.00014, 0.01019, 0.88981, 0.99820, −15723.93, −15706.68 and 95.86%, respectively, outperforming the simulations with 100% random distribution.

**Table 1.**Performance measurements of original datasets using noise distribution at 100% with 1 Hz and 10% amplitude of the original signals.

EUR-USD | MSE | MAE | MAPE | R^{2} | AIC | BIC | Accuracy count (%) |
---|---|---|---|---|---|---|---|
aIMF | 0.00399 | 0.05636 | 4.77093 | 0.8962 | −6305.47 | −6288.22 | 77.34 |
WT | 0.00506 | 0.05402 | 5.54677 | 0.7231 | −6877.81 | −7857.63 | 53.82 |
EKF | 0.00548 | 0.06432 | 5.44475 | 0.8701 | −5783.21 | −5765.96 | 47.13 |
PF | 4.52658 | 1.65756 | 140.015 | 0.0062 | −1060.50 | −1043.25 | 50.32 |

**Table 2.**Performance measurements of original datasets using noise distribution at 100% with 1 Hz and 20% amplitude of the original signals.

EUR-USD | MSE | MAE | MAPE | R^{2} | AIC | BIC | Accuracy count (%) |
---|---|---|---|---|---|---|---|
aIMF | 0.00296 | 0.04838 | 4.08815 | 0.9228 | −6991.34 | −6974.09 | 79.66 |
WT | 0.05439 | 0.05311 | 5.79178 | 0.6759 | −6397.83 | −6804.50 | 54.23 |
EKF | 0.02043 | 0.12765 | 10.7989 | 0.6385 | −3407.66 | −3390.41 | 48.51 |
PF | 4.43079 | 1.64779 | 139.747 | 0.0067 | −1061.73 | −1044.48 | 49.76 |

**Table 3.**Performance measurements of original datasets using noise distribution at 100% with 5 Hz and 10% amplitude of the original signals.

EUR-USD | MSE | MAE | MAPE | R^{2} | AIC | BIC | Accuracy count (%) |
---|---|---|---|---|---|---|---|
aIMF | 0.00522 | 0.06303 | 5.32770 | 0.8760 | −5891.33 | −5874.08 | 74.92 |
WT | 0.00599 | 0.05636 | 4.77151 | 0.8962 | −6303.39 | −6286.15 | 55.51 |
EKF | 0.00503 | 0.06159 | 5.22073 | 0.8807 | −5980.49 | −5963.24 | 47.65 |
PF | 4.47322 | 1.64897 | 139.488 | 0.0096 | −1067.39 | −1050.14 | 50.24 |

**Table 4.**Performance measurements of original datasets using noise distribution at 100% with 5 Hz and 20% amplitude of the original signals.

EUR-USD | MSE | MAE | MAPE | R^{2} | AIC | BIC | Accuracy count (%) |
---|---|---|---|---|---|---|---|
aIMF | 0.00596 | 0.06415 | 5.42792 | 0.8534 | −5501.99 | −5484.74 | 64.41 |
WT | 0.00571 | 0.07660 | 4.78754 | 0.8761 | −6890.19 | −6897.21 | 56.13 |
EKF | 0.01870 | 0.12247 | 10.3683 | 0.6637 | −3575.62 | −3558.38 | 48.00 |
PF | 4.50770 | 1.64377 | 138.983 | 0.0074 | −1063.27 | −1046.02 | 50.58 |

**Table 5.**Performance measurements of original datasets using noise distribution at 100% with 10 Hz and 10% amplitude of the original signals.

EUR-USD | MSE | MAE | MAPE | R^{2} | AIC | BIC | Accuracy count (%) |
---|---|---|---|---|---|---|---|
aIMF | 0.00527 | 0.06444 | 5.44396 | 0.8710 | −5799.61 | −5782.36 | 67.26 |
WT | 0.00508 | 0.06890 | 5.79883 | 0.8948 | −6273.51 | −6745.29 | 56.82 |
EKF | 0.00406 | 0.05546 | 4.69540 | 0.9017 | −6429.89 | −6412.64 | 51.66 |
PF | 4.47322 | 1.64897 | 139.487 | 0.0096 | −1067.39 | −1050.14 | 50.24 |

**Table 6.**Performance measurements of original datasets using noise distribution at 100% with 10 Hz and 20% amplitude of the original signals.

EUR-USD | MSE | MAE | MAPE | R^{2} | AIC | BIC | Accuracy count (%) |
---|---|---|---|---|---|---|---|
aIMF | 0.00580 | 0.06618 | 5.59678 | 0.8533 | −5501.09 | −5483.84 | 57.69 |
WT | 0.00521 | 0.06003 | 4.57416 | 0.6941 | −6843.56 | −6899.78 | 53.34 |
EKF | 0.01490 | 0.11046 | 9.34639 | 0.7140 | −3951.51 | −3934.26 | 51.62 |
PF | 4.61609 | 1.67868 | 141.868 | 0.0049 | −1057.44 | −1040.19 | 50.45 |

#### 3.3. Simulation and Performance Measurements of the WT Algorithm

The R^{2} of the WT algorithm is below standard, ranging from 0.8534 to 0.8948, whereas AIC and BIC are relatively high, up to −6890.19 and −6286.15, respectively; the Accuracy count ranged from 53.82% to 56.13%. As a result, we rated the performance of the WT algorithm as relatively low and hardly acceptable.

**Figure 8.**Plots of the original datasets, original datasets + noise, and the Wavelet Transform (WT) algorithm.

#### 3.4. Simulation and Performance Measurements of the EKF Algorithm

The R^{2} of the EKF algorithm ranges from 0.6637 to 0.9017, whereas AIC and BIC reach −6429.89 and −6412.64, respectively, and the Accuracy count ranged from 47.13% to 51.66%. As a result, the performance of the EKF algorithm is unacceptable.

#### 3.5. Simulation and Performance Measurements of the PF Algorithm

The R^{2} of the PF algorithm is unacceptable, ranging from 0.0049 to 0.0096, whereas AIC and BIC are relatively too low, i.e., −1067.39 and −1046.02, respectively; the Accuracy count ranged from 49.76% to 50.58%. As a result, the performance of the PF algorithm is also unacceptable.

**Figure 10.**Plots of the original datasets, original datasets + noise, and the Particle Filter (PF) algorithm.

#### 3.6. Discussion

For the aIMF algorithm, the MSE, MAE, MAPE, R^{2} and Accuracy count were 8.20211E−05, 0.00719, 0.57085, 0.9980 and 99.95%, respectively. It should be noted that the original datasets, the EUR-USD exchange rates, contained a certain level of noise rather than pure signal only. In real applications, exchange-rate data normally fluctuate before the closing hours of trading, driven by retailers and speculators who want to manipulate the price. These manipulations are always executed with low trading volumes but still change the price at the end of trading hours. In the financial community, we deem these trades noise. Finally, the figures from the last loss estimators were similar to the results in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6. Hence, it is safe to assume that the proposed aIMF algorithm can remove the unwanted signals.

**Figure 11.** Graphs (**a**)–(**c**) show plots of IMF1, IMF5 and IMF7; the local extrema of the higher-order IMFs moved toward the normal distribution.

#### 3.7. Robustness Test

The robustness of the aIMF algorithm was tested on further exchange-rate datasets using the same loss estimators: MSE, MAE, MAPE, R^{2}, AIC, BIC and the Accuracy count. Having simulated all the loss estimators indicated in Table 7, Table 8 and Table 9 under the same conditions used to test EUR-USD as input, the results shared the same trend, with few deviations from each other. This served to confirm that the aIMF algorithm performed significantly better when filtering nonlinear nonstationary time series, i.e., the EUR-JPY, EUR-CHF and EUR-GBP exchange rates, followed by the WT and EKF algorithms. Moreover, we rejected using the PF algorithm to reduce noise in nonlinear nonstationary time series data. Additionally, we discovered that the EKF and WT algorithms must be optimised in the areas of resampling and building the mother wavelet, respectively.

**Table 7.** Performance measurements of original dataset, EUR-JPY, using noise distribution at 10% with 10 Hz and 20% amplitude of the original signals.

EUR-JPY | MSE | MAE | MAPE | R^{2} | AIC | BIC | Accuracy count (%) |
---|---|---|---|---|---|---|---|
aIMF | 0.00424 | 0.057854 | 8.29746 | 0.5219 | −6782.23 | −6764.98 | 63.85 |
WT | 0.00626 | 0.05676 | 8.67169 | 0.5514 | −6839.68 | −682.42 | 53.46 |
EKF | 0.00516 | 0.063797 | 9.15559 | 0.4046 | −6272.94 | −6255.69 | 49.50 |
PF | 3.37133 | 1.42611 | 204.696 | 0.0020 | −5074.33 | −5057.08 | 50.19 |

**Table 8.** Performance measurements of original dataset, EUR-CHF, using noise distribution at 10% with 10 Hz and 20% amplitude of the original signals.

EUR-CHF | MSE | MAE | MAPE | R^{2} | AIC | BIC | Accuracy count (%) |
---|---|---|---|---|---|---|---|
aIMF | 0.03772 | 0.145457 | 0.420782 | 0.9983 | −1086.41 | −1069.16 | 88.96 |
WT | 0.08124 | 0.1745481 | 0.53478 | 0.5905 | −6852.24 | −6699.54 | 53.20 |
EKF | 0.37038 | 0.176222 | 0.519026 | 0.5588 | −11843.6 | −11860.9 | 45.71 |
PF | 1218.50 | 34.52886 | 101.8893 | 0.4421 | −12388.2 | −12405.5 | 50.84 |

**Table 9.** Performance measurements of original dataset, EUR-GBP, using noise distribution at 10% with 10 Hz and 20% amplitude of the original signals.

EUR-GBP | MSE | MAE | MAPE | R^{2} | AIC | BIC | Accuracy count (%) |
---|---|---|---|---|---|---|---|
aIMF | 0.00424 | 0.057854 | 8.306478 | 0.5191 | −6802.20 | −6784.98 | 64.92 |
WT | 0.00826 | 0.065946 | 9.319482 | 0.6435 | −6699.65 | −6382.98 | 51.08 |
EKF | 0.00517 | 0.063851 | 9.171988 | 0.4055 | −6309.97 | −6292.68 | 49.20 |
PF | 3.33789 | 1.435658 | 206.7395 | 0.0006 | −5104.53 | −5087.27 | 48.68 |

- (i) An Intel(R) Xeon(R) server with 2 × 2.4 GHz E5620 CPUs, 3.99 GB RAM and a 64-bit Microsoft Windows operating system was configured as the main processor.
- (ii) A Sony Vaio L2412M1EB desktop with an Intel Core i5, 2.5 GHz, 8 GB RAM and a 64-bit Microsoft Windows operating system was used as the front-end connection to the Bloomberg data terminal via web access using a Citrix client.
- (iii) Application programs were written as R scripts, with some amendments to suit the requirements.

## 4. Conclusion

## Acknowledgments

## References

- Huttunen, J.M.J.; Lehikoinen, A.; Hämäläinen, J.; Kaipio, J. Importance sampling approach for the nonstationary approximation error method. Inverse Probl.
**2002**, 26, 1–16. [Google Scholar] - Brown, R.G.; Hwang, P.Y.C. Introduction to Random Signals and Applied Kalman Filtering, 3rd ed.; John Wiley & Sons, Inc.: New Jersey, NJ, USA, 1992; pp. 128–140. [Google Scholar]
- Calabrese, A.; Paninski, L. Kalman filter mixture model for spike sorting of nonstationary data. J. Neurosci. Methods
**2011**, 196, 159–169. [Google Scholar] [CrossRef] [PubMed] - Sanjeev Arulampalam, M.; Maskell, S.; Gordon, N.; Clapp, T. A tutorial on particle filters for online nonlinear/non-gaussian bayesian tracking. IEEE Trans. Signal Process.
**2002**, 50, 174–187. [Google Scholar] [CrossRef] - Caron, F.; Davy, M.; Duflos, E.; Vanheeghe, P. Particle filtering for multi-sensor data fusion with switching observation models: Applications to land vehicle positioning. IEEE Trans. Signal Process.
**2007**, 55, 2703–2719. [Google Scholar] [CrossRef] - Gustafsson, F.; Gunnarsson, F.; Bergman, N.; Forssel, U. Particle filters for positioning, navigation and tracking. IEEE Trans. Signal Process.
**2002**, 50, 425–437. [Google Scholar] [CrossRef] - Thrun, S.; Burgard, B.; Fox, D. Probabilistic Robotics; MIT Press: Massachusetts, MA, USA, 2005; pp. 39–117. [Google Scholar]
- Huang, N.E.; Attoh-Okine, N.O. The Hilbert Transform in Engineering; CRC Press, Taylor & Francis Group: Florida, FL, USA, 2005. [Google Scholar]
- Wu, Z.; Huang, N.E. Ensemble empirical mode decomposition: A noise-assisted data analysis method. Adv. Adapt. Data Anal.
**2009**, 1, 1–41. [Google Scholar] [CrossRef] - Huang, W.; Shen, Z.; Huang, N.E.; Fung, Y.C. Nonlinear indicial response of complex nonstationary oscillations as pulmonary hypertension responding to step hypoxia. Proc. Natl. Acad. Sci. USA
**1996**, 96, 1834–1839. [Google Scholar] [CrossRef] - Huang, N.E.; Shen, Z.; Long, S.R. The empirical mode decomposition and the hilbert spectrum for nonlinear and nonstationary time series analysis. Proc. R. Soc. Lond. A
**1998**, 454, 903–995. [Google Scholar] [CrossRef] - Huang, N.E.; Shen, S.S.P. Hilbert-Huang Transform and Its Applications; World Scientific Publishing Company: Singapore, 2005; pp. 2–26. [Google Scholar]
- Wang, L.; Koblinsky, C.; Howden, S.; Huang, N.E. Interannual variability in the South China Sea from expendable bathythermograph data. J. Geophys. Res.
**1990**, 104, 509–523. [Google Scholar] [CrossRef] - Datig, M.; Schlurmann, T. Performance and limitations of the Hilbert-Huang Transformation (HHT) with an application to irregular water waves. Ocean Eng.
**2004**, 31, 1783–1834. [Google Scholar] [CrossRef] - Guhathakurta, K.; Mukherjee, I.; Roy, A. Empirical mode decomposition analysis of two different financial time series and their comparison, Chaos. Solut. Fractals
**2008**, 37, 1214–1227. [Google Scholar] [CrossRef] - Cohen, L. Frequency analysis. IEEE Trans. Signal Process.
**1995**, 55, 2703–2719. [Google Scholar] - Kaslovsky, D.N.; Meyer, F.G. Noise corruption of empirical mode decomposition and its effect on instantaneous frequency. Adv. Adapt. Data Anal.
**2010**, 2, 373–396. [Google Scholar] [CrossRef] - Flandrin, P.; Rilling, G.; Goncalves, P. Empirical mode decomposition as a filter Bank. IEEE Signal Process. Lett.
**2004**, 11, 112–114. [Google Scholar] [CrossRef] - Peng, Z.K.; Tse, P.W.; Chu, F.L. An improved Hilbert-Huang transform and its application in vibration signal analysis. J. Sound Vib.
**2005**, 286, 187–205. [Google Scholar] [CrossRef] - Gilks, W.R.; Richardson, S.; Spiegelhalter, D.J. Morkov Chain Monte Carlo in Practice; Chapman & Hall/CRC: Florida, FL, USA, 1996; p. 1. [Google Scholar]
- Heisenberg, W. Physical Principles of the Quantum Theory; The University of Chicago Press: Chicago, IL, USA, 1930. [Google Scholar]
- Kaiser, G. A Friendly Guide to Wavelets; Birkhause: Boston, MA, USA, 1994; pp. 44–45. [Google Scholar]
- Helong, L.; Xiaoyan, D.; Hongliang, D. Structural damage detection using the combination method of EMD and wavelet analysis. Mech. Syst. Signal Process.
**2007**, 21, 298–306. [Google Scholar] - Li, L. On the block thresholding wavelet estimators with censored data. J. Multivar. Anal.
**2008**, 9, 1518–1543. [Google Scholar] [CrossRef] - Stein, C. Estimation of the mean of a multivariate normal distribution. Annu. Stat.
**1981**, 9, 1135–1151. [Google Scholar] [CrossRef] - Dang, D.S.; Weifeng, T.; Feng, Q. EMD- and LWT-based stochastic noise eliminating method for fiber optic gyro. Measurement
**2010**, 44, 2190–2193. [Google Scholar] [CrossRef] - Palma, R.; Vmoraru, C.; Woo, J. Pahikkala, Parseval Equality, version 8; 2012. Available online: http://planetmath.org/?op=getobj;from=objects;id=4717 (accessed on 12 December 2012).
- Randhawa, S.; Li, J.J.S. Adaptive Order Spline Interpolation for Edge-Preserving Colour Filter Array Demosaicking. In Proceeding of the 2011 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Noosa, QLD, Australia, 6–8 December 2011; pp. 666–671. [CrossRef]

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).

## Share and Cite

**MDPI and ACS Style**

Premanode, B.; Vongprasert, J.; Toumazou, C.
Noise Reduction for Nonlinear Nonstationary Time Series Data using Averaging Intrinsic Mode Function. *Algorithms* **2013**, *6*, 407-429.
https://doi.org/10.3390/a6030407
