Article

An Adaptive Derivative Estimator for Fault-Detection Using a Dynamic System with a Suboptimal Parameter

by
Manuel Schimmack
1,2 and
Paolo Mercorelli
1,*
1
Institute of Product- and Processinnovation, Leuphana University of Lueneburg, Volgershall 1, D-21339 Lueneburg, Germany
2
Institute for Electrical Engineering and Information Technology, Kiel University, Kaiserstraße 2, D-24143 Kiel, Germany
*
Author to whom correspondence should be addressed.
Algorithms 2019, 12(5), 101; https://doi.org/10.3390/a12050101
Submission received: 20 December 2018 / Revised: 11 April 2019 / Accepted: 22 April 2019 / Published: 10 May 2019

Abstract:
This paper deals with the approximation of the first derivative of a signal using a first-order dynamic system. After formulating the problem, a proposition and a theorem are proven for a possible approximation structure, which consists of a dynamic system. In particular, a proposition based on a Lyapunov approach is proven to show the convergence of the approximation. The theorem is constructive and directly shows the suboptimality condition in the presence of noise. Based on these two results, an adaptive algorithm is conceived to calculate the derivative of a signal with convergence in infinite time. Results are compared with an approximation of the derivative using an adaptive Kalman filter (KF).

1. Introduction

The derivative estimation of a measured signal is of considerable importance in signal processing, numerical analysis, control engineering, and failure diagnostics, among other fields [1]. Derivatives and structures using derivatives of signals are widely used in industrial applications, for example in PD controllers, which are applied in practice across many different fields.
In applications, signals are corrupted by measurement and process noise, therefore a filtering procedure needs to be implemented. A number of different approaches have been proposed based on least-squares polynomial fitting or interpolation for off-line applications [1,2]. Another common approach is based on high-gain observers [3,4,5]. These observers adjust the model by weighting the observer output deviations from the output of the system to be controlled.
In [6], a sliding mode control (SMC) using an extended Kalman filter (EKF) as an observer for stimulus-responsive polymer fibers acting as soft actuators was proposed. Because of the slow velocity of the fiber, the EKF produces poor estimation results. Therefore, a derivative approximation structure was proposed to estimate the velocity through the measurement of the position; this approach also realizes the approximation of the derivative using a high-gain observer. The methods presented in the applications cited above approximate the derivative over an infinite horizon of time. In this sense, the proposed differentiator is an asymptotic estimator of the derivative. Several researchers have studied this problem by applying the SMC approach.
This paper emphasizes some mathematical aspects of an algorithm that has been used in past practical applications, for instance in [7,8]. In particular, in [7] this algorithm is used in designing a velocity observer. In [8], a similar algorithm is used for parameter identification in an application involving a synchronous motor.
In recent years, real-time robust exact differentiation has become a central problem of output-feedback high-order sliding mode (HOSM) control design. Even the most modern differentiators [9] do not provide exact differentiation with finite-time convergence, and they do not consider noise.
The derivatives may be calculated by successive implementation of a robust exact first-order differentiator [10] with finite-time convergence but without considering noise, as in [11,12]. In [13], an arbitrary-order finite-time-convergent exact robust differentiator is constructed based on the HOSM technique.
In [10], the proposed differentiator provides proportionality of the maximal differentiation error to the square root of the maximal deviation of the measured input signal from the base signal. Such an order of the differentiation error is shown to be the best possible when the only information known about the base signal is an upper bound on the Lipschitz constant of its derivative. According to Theorem 2 therein, the proposed algorithm requires knowledge of the maximal Lipschitz constant to produce the optimal approximation. More recently, in [14], sliding mode (SM)-based differentiation was shown to be exact for a large class of functions and robust with respect to noise; in this case, too, the Lipschitz constant must be known to apply the algorithm. Those methods, which use the maximal Lipschitz constant, perform an approximation over a finite horizon of time. In practical applications, the presence of noise and faults does not allow a Lipschitz constant to be set, since the noise is not distinguishable from the input signal. In this sense, it appears impossible to apply these algorithms in real applications.
This paper proposes an approximated derivative structure for such types of applications, so that spikes, noise, and any other kind of undesired signals arising from differentiation can be reduced. The problem of approximating the derivative is thus formulated in the presence of white Gaussian noise and over an infinite horizon of time, as in the KF. Therefore, the comparison is made only with the approximation of the derivative performed by an adaptive KF. After the problem formulation, this paper proves a proposition which allows us to build this possible approximation of the derivative using a dynamic system.
The paper is structured as follows. In Section 2, the problem formulation and a possible solution are proposed. How to approximate a derivative controller using an adaptive KF is presented in Section 3. The results of the simulations are discussed in Section 4, and the conclusion closes the paper.

2. An Approximated Derivative Structure

When derivative structures are used, imprecision occurs; this imprecision is due to spikes, which generate power dissipation. The idea is to find an approximated structure for general derivatives as they occur in mathematical calculations, and as they are often used in technical problems such as proportional-derivative controllers. The following formulation states the problem mathematically.
Problem 1.
Assume the following derivative is given:

$$ r(t) = \frac{dy(t)}{dt}. \tag{1} $$

Function $y(t) \in \mathbb{R}$ is the function to be differentiated, where $t \in \mathbb{R}$ represents the time variable. The aim of the proposed approach is to look for an approximating expression $\hat{r}(t) = \hat{r}\big(y(t), k_{app}\big)$, where $k_{app}$ is a parameter, such that:

$$ \lim_{k_{app} \to +\infty} e_r(t) = r(t) - \hat{r}(t) = 0, \tag{2} $$

where $r(t)$ represents the real derivative function.
Proposition 1.
Considering (1), there exists a function $M > 0$ such that, if the following dynamic system is considered:

$$ \frac{d\hat{r}(t)}{dt} = -M\left(\hat{r}(t) - \frac{dy(t)}{dt}\right), \tag{3} $$

where $y(t)$ represents a twice-differentiable real function and $\hat{r}(t)$ the approximated derivative function, and if

$$ \left|\frac{d\hat{r}(t)}{dt}\right| > \left|\frac{dr(t)}{dt}\right| \quad \forall t \tag{4} $$

and

$$ M > 0 \quad \forall t, \tag{5} $$

then

$$ \lim_{t \to +\infty} e_r(t) = 0. \tag{6} $$
Proof. 
Considering the following approximating dynamic system:

$$ \frac{d\hat{r}(t)}{dt} = -M\left(\hat{r}(t) - \frac{dy(t)}{dt}\right), \tag{7} $$

where $M$ can be a function of $y(t)$ or a parameter with $M \in \mathbb{R}$, if

$$ e_r(t) = r(t) - \hat{r}(t), \tag{8} $$

then

$$ \frac{de_r(t)}{dt} = \frac{dr(t)}{dt} - \frac{d\hat{r}(t)}{dt}. \tag{9} $$

After inserting (7) into (9), it follows that

$$ \frac{de_r(t)}{dt} = \frac{dr(t)}{dt} + M\left(\hat{r}(t) - \frac{dy(t)}{dt}\right); \tag{10} $$

if (1) is taken into consideration, then (10) becomes:

$$ \frac{de_r(t)}{dt} + M\,e_r(t) = \frac{dr(t)}{dt}. \tag{11} $$

If the following Lyapunov function is considered:

$$ V\big(e_r(t)\big) = \frac{1}{2}\,e_r^2(t), \tag{12} $$

and considering that:

$$ \frac{dV\big(e_r(t)\big)}{dt} = e_r(t)\,\frac{de_r(t)}{dt}, \tag{13} $$

according to (11), it is possible to write the following expression:

$$ e_r(t) = \frac{\dfrac{dr(t)}{dt} - \dfrac{de_r(t)}{dt}}{M}, \tag{14} $$

and thus from (13), it follows that:

$$ \frac{dV\big(e_r(t)\big)}{dt} = \frac{\dfrac{de_r(t)}{dt}\dfrac{dr(t)}{dt} - \left(\dfrac{de_r(t)}{dt}\right)^2}{M}. \tag{15} $$
Considering (9) and multiplying it by $\frac{dr(t)}{dt}$, then

$$ \frac{de_r(t)}{dt}\frac{dr(t)}{dt} = \left(\frac{dr(t)}{dt}\right)^2 - \frac{d\hat{r}(t)}{dt}\frac{dr(t)}{dt}, \tag{16} $$

and

$$ \left|\frac{de_r(t)}{dt}\frac{dr(t)}{dt}\right| \le \left|\frac{d\hat{r}(t)}{dt}\frac{dr(t)}{dt}\right|, \tag{17} $$

if

$$ \left|\frac{d\hat{r}(t)}{dt}\right| > \left|\frac{dr(t)}{dt}\right| \quad \forall t, $$

as stated by the hypothesis in (4). Then

$$ \frac{d\hat{r}(t)}{dt}\frac{dr(t)}{dt} > \left(\frac{dr(t)}{dt}\right)^2 \quad \forall t, \tag{18} $$

and thus

$$ \frac{d\hat{r}(t)}{dt}\frac{dr(t)}{dt} > 0 \quad \forall t. \tag{19} $$

Considering that

$$ M > 0 \quad \forall t, \tag{20} $$

as stated by the hypothesis in (5), then:

$$ \frac{dV\big(e_r(t)\big)}{dt} < 0 \quad \forall t. \tag{21} $$

Thus, (11) is uniformly asymptotically stable and (6) is proven. □
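As a quick numerical check of Proposition 1, the first-order system (7) can be integrated by forward Euler for a test signal. The sketch below uses $y(t) = \sin(t)$ and, purely for illustration, drives the model with the exact derivative $dy/dt$ (a need that the discrete algorithm of Proposition 2 removes); the terminal error decreases monotonically as $M$ grows, as the proposition states.

```python
import math

def terminal_error(M, t_s=1e-4, t_end=2.0):
    """Integrate d r_hat/dt = -M*(r_hat - dy/dt) by forward Euler for
    y(t) = sin(t) and return |r_hat - cos(t_end)|, i.e., the error of the
    approximated derivative at the final time."""
    r_hat, t = 0.0, 0.0
    for _ in range(int(t_end / t_s)):
        # the exact dy/dt = cos(t) drives this toy model (illustration only)
        r_hat += t_s * (-M * (r_hat - math.cos(t)))
        t += t_s
    return abs(r_hat - math.cos(t))

errors = [terminal_error(M) for M in (10.0, 100.0, 1000.0)]
# the residual error behaves like |d^2y/dt^2|/M, so it shrinks as M grows
assert errors[0] > errors[1] > errors[2] and errors[2] < 0.01
```

The residual error scales roughly as $1/M$, which is why the limit in (2) is taken for $k_{app} \to +\infty$.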
Proposition 2.
The dynamic system

$$ \begin{aligned} \frac{d\eta(t)}{dt} &= -k_{app}\,\eta(t) - k_{app}^2\,y(t) \\ \hat{r}(t) &= \eta(t) + k_{app}\,y(t), \end{aligned} \tag{23} $$

where function $\eta(t) \in \mathbb{R}$, solves the problem defined in Problem 1.
A supplementary variable is defined as:

$$ \eta(t) = \hat{r}(t) - N\big(y(t)\big), \tag{24} $$

where $N\big(y(t)\big)$ is a function to be designed with $N\big(y(t)\big) \in \mathbb{R}$. Let

$$ M\,\frac{dy(t)}{dt} = \frac{dN\big(y(t)\big)}{dt} = \frac{dN\big(y(t)\big)}{dy(t)}\,\frac{dy(t)}{dt}. \tag{25} $$

If $N\big(y(t)\big) = k_{app}\,y(t)$, then $M = k_{app}$. The asymptotic stability is then always guaranteed for $k_{app} > 0$, and the rate of convergence can also be specified by $k_{app} > 0$. From (24), the second part of (23) follows:

$$ \hat{r}(t) = \eta(t) + k_{app}\,y(t). \tag{26} $$

Differentiating (26), it follows that

$$ \frac{d\eta(t)}{dt} = \frac{d\hat{r}(t)}{dt} - k_{app}\,\frac{dy(t)}{dt}. \tag{27} $$

Considering (27) with $\frac{d\hat{r}(t)}{dt} = -k_{app}\big(\hat{r}(t) - \frac{dy(t)}{dt}\big)$ from (7) with $M = k_{app}$, combined with (26), the first part of (23) follows:

$$ \frac{d\eta(t)}{dt} = -k_{app}\,\eta(t) - k_{app}^2\,y(t). \tag{28} $$

If (23) is discretized by the forward Euler method, the following expression is obtained:

$$ \begin{aligned} \eta(k) &= (1 - t_s\,k_{app})\,\eta(k-1) - t_s\,k_{app}^2\,y(k-1) \\ \hat{r}(k) &= \eta(k) + k_{app}\,y(k), \end{aligned} \tag{29} $$

which is a discrete difference equation, where $t_s$ indicates the sampling time with $t_s \in \mathbb{R}$, and $\hat{r}(k)$, $\eta(k)$, and $y(k)$ are discrete variables with $k \in \mathbb{N}$.
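The discrete recursion can be implemented in a few lines. In the following sketch, the signal, the gain $k_{app}$, and the sampling time are illustrative choices; the script checks that, after the transient, the filter output tracks the true derivative of a sampled sine wave.

```python
import math

def derivative_estimator(y, t_s, k_app):
    """First-order approximated-derivative filter:
    eta(k)   = (1 - t_s*k_app)*eta(k-1) - t_s*k_app**2*y(k-1),
    r_hat(k) = eta(k) + k_app*y(k)."""
    eta = -k_app * y[0]            # initialize so that r_hat starts at 0
    r_hat = [eta + k_app * y[0]]
    for k in range(1, len(y)):
        eta = (1.0 - t_s * k_app) * eta - t_s * k_app**2 * y[k - 1]
        r_hat.append(eta + k_app * y[k])
    return r_hat

t_s, k_app = 1e-3, 200.0           # illustrative values; t_s*k_app < 2 for stability
y = [math.sin(k * t_s) for k in range(5000)]
r_hat = derivative_estimator(y, t_s, k_app)
# after the transient, r_hat tracks the true derivative cos(t)
err = max(abs(r_hat[k] - math.cos(k * t_s)) for k in range(1000, 5000))
assert err < 0.03
```

The pole $1 - t_s k_{app}$ must lie inside the unit circle, so $t_s k_{app} < 2$ is required for stability of the recursion.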
Figure 1 presents a graphical representation of the proposed algorithm structure with the discrete input signal y ( k ) , the recursive calculator for the parameter a 2 of the linear least squares method (LSM) and the differential estimator with the discrete approximated derivative function r ^ ( k ) .
Transforming the equations represented by (29) with the $\mathcal{Z}$-transform, the following forms are obtained:

$$ \begin{aligned} H(z) &= -t_s\,k_{app}^2\,z^{-1}\,Y(z) + (1 - t_s\,k_{app})\,z^{-1}\,H(z), \\ \hat{R}(z) &= H(z) + k_{app}\,Y(z), \end{aligned} \tag{30} $$

which yields

$$ \hat{R}(z) = \frac{k_{app}\,(1 - z^{-1})\,Y(z)}{1 - (1 - t_s\,k_{app})\,z^{-1}}. \tag{31} $$
As described earlier, the objective of a minimum variance approach is to minimize the variation of an output of a system with respect to a desired output signal in the presence of noise. The following theorem gives a result to determine a suboptimal k app to achieve a defined suboptimality.
Theorem 1.
Considering

$$ e_r(t) = r(t) - \hat{r}(t), \tag{32} $$

and according to the forward Euler discretization by $\mathcal{Z}$-transform of (1), it follows that

$$ R(z) = \frac{1 - z^{-1}}{t_s}\,Y(z). \tag{33} $$

Then it is possible to find a unique value of the parameter $k_{app} = \frac{1 - a_2}{t_s}$ in (31) which guarantees a suboptimal minimum of $e_r(t)$ at each $k$, where $a_2$ is a parameter to be calculated recursively using the linear least squares method (LSM).
Proof. 
Assuming the following model:

$$ e_r(k) = a_1 e_r(k-1) + a_2 e_r(k-2) + b_1 r(k-1) + b_2 r(k-2) + n(k) + c_1 n(k-1) + c_2 n(k-2), \tag{34} $$

where coefficients $a_1$, $a_2$, $b_1$, $b_2$, and $c_1$, $c_2$ belong to $\mathbb{R}$ and need to be estimated, and $n(k)$ denotes white noise. At the next sample, (34) becomes:

$$ e_r(k+1) = a_1 e_r(k) + a_2 e_r(k-1) + b_1 r(k) + b_2 r(k-1) + n(k+1) + c_1 n(k) + c_2 n(k-1). \tag{35} $$

The prediction at time $k$ is:

$$ \hat{e}_r(k+1/k) = a_1 e_r(k) + a_2 e_r(k-1) + b_1 r(k) + b_2 r(k-1) + c_1 n(k) + c_2 n(k-1). \tag{36} $$

Considering that

$$ J = E\{e_r^2(k+1/k)\} = E\big\{[\hat{e}_r(k+1/k) + n(k+1)]^2\big\} \tag{37} $$

and assuming that the noise is not correlated with signal $e_r(k)$, it follows that

$$ E\big\{[\hat{e}_r(k+1/k) + n(k+1)]^2\big\} = E\big\{[\hat{e}_r(k+1/k)]^2\big\} + E\big\{[n(k+1)]^2\big\} = E\big\{[\hat{e}_r(k+1/k)]^2\big\} + \sigma_n^2, $$

where $\sigma_n^2$ is defined as the variance of the white noise. The goal is to find $\hat{r}(k)$ such that:

$$ \hat{e}_r(k+1/k) = 0. \tag{38} $$

It is possible to write (34) as:

$$ n(k) = e_r(k) - a_1 e_r(k-1) - a_2 e_r(k-2) - b_1 r(k-1) - b_2 r(k-2) - c_1 n(k-1) - c_2 n(k-2). \tag{39} $$
Considering the effect of the noise as follows:

$$ c_1 n(k-1) + c_2 n(k-2) \approx c_1 n(k-1), \tag{40} $$

and transforming (39) using the $\mathcal{Z}$-transform, then

$$ N(z) = E_r(z) - a_1 z^{-1} E_r(z) - a_2 z^{-2} E_r(z) - b_1 z^{-1} R(z) - b_2 z^{-2} R(z) - c_1 z^{-1} N(z), \tag{41} $$

and

$$ N(z) = \frac{(1 - a_1 z^{-1} - a_2 z^{-2})\,E_r(z)}{1 + c_1 z^{-1}} - \frac{(b_1 z^{-1} + b_2 z^{-2})\,R(z)}{1 + c_1 z^{-1}}, \tag{42} $$

where $z \in \mathbb{C}$ represents the well-known complex variable. The approximation in (40) is equivalent to considering the following assumption:

$$ c_2 \ll c_1. \tag{43} $$
In other words, the assumption stated in (43) means that the noise model of (39) is assumed to be a model of the first order. Considering the $\mathcal{Z}$-transform of (36) with $c_1 n(k-1) + c_2 n(k-2) \approx c_1 n(k-1)$, then

$$ z\,\hat{E}_r(z) = a_1 E_r(z) + a_2 z^{-1} E_r(z) + b_1 R(z) + b_2 z^{-1} R(z) + c_1 N(z). \tag{44} $$

Considering that

$$ N(z) = \frac{(1 - a_1 z^{-1} - a_2 z^{-2})\,E_r(z)}{1 + c_1 z^{-1}} - \frac{(b_1 z^{-1} + b_2 z^{-2})\,R(z)}{1 + c_1 z^{-1}} \tag{45} $$

and inserting (45) into (44), it follows that

$$ z\,\hat{E}_r(z) = a_1 E_r(z) + a_2 z^{-1} E_r(z) + b_1 R(z) + b_2 z^{-1} R(z) + c_1\,\frac{(1 - a_1 z^{-1} - a_2 z^{-2})\,E_r(z)}{1 + c_1 z^{-1}} - c_1\,\frac{(b_1 z^{-1} + b_2 z^{-2})\,R(z)}{1 + c_1 z^{-1}}. \tag{46} $$
According to (38), $\hat{E}_r(z) = 0$; multiplying (46) by $1 + c_1 z^{-1}$ and rearranging, the following expression is obtained:

$$ -c_1 E_r(z) = a_1 E_r(z) + c_1 a_1 z^{-1} E_r(z) + a_2 z^{-1} E_r(z) + c_1 a_2 z^{-2} E_r(z) + b_1 R(z) + c_1 b_1 z^{-1} R(z) + b_2 z^{-1} R(z) + c_1 b_2 z^{-2} R(z) - c_1 \big(a_1 z^{-1} + a_2 z^{-2}\big)\,E_r(z) - c_1 \big(b_1 z^{-1} + b_2 z^{-2}\big)\,R(z). \tag{47} $$

From (47), using $b_2 z^{-1} \approx b_2$, it follows that

$$ R(z) = -\frac{(a_1 + c_1 + a_2 z^{-1})\,E_r(z)}{b_1 + b_2}. \tag{48} $$
Considering that

$$ E_r(z) = R(z) - \hat{R}(z), \tag{49} $$

relation (48) becomes

$$ R(z) = -\frac{(a_1 + c_1 + a_2 z^{-1})\,\big(R(z) - \hat{R}(z)\big)}{b_1 + b_2}, \tag{50} $$

and the derivative approximation according to the forward method is

$$ R(z) = \frac{1 - z^{-1}}{t_s}\,Y(z). \tag{51} $$

It follows that

$$ \frac{1 - z^{-1}}{t_s}\,Y(z) = -\frac{(a_1 + c_1 + a_2 z^{-1})\left(\dfrac{1 - z^{-1}}{t_s}\,Y(z) - \hat{R}(z)\right)}{b_1 + b_2}, \tag{52} $$

thus

$$ (1 - z^{-1})\left[(b_1 + b_2)\,t_s^{-1} + (a_1 + c_1 + a_2 z^{-1})\,t_s^{-1}\right]Y(z) = (a_1 + c_1 + a_2 z^{-1})\,\hat{R}(z), \tag{53} $$

and finally,

$$ \hat{R}(z) = \frac{(1 - z^{-1})\left[(b_1 + b_2)\,t_s^{-1} + (a_1 + c_1 + a_2 z^{-1})\,t_s^{-1}\right]Y(z)}{a_1 + c_1 + a_2 z^{-1}}, \tag{54} $$

which can be written as

$$ \hat{R}(z) = \frac{-(1 - z^{-1})\left[(b_1 + b_2)\,t_s^{-1} + (a_1 + c_1 + a_2 z^{-1})\,t_s^{-1}\right]Y(z)}{-a_1 - c_1 - a_2 z^{-1}}. \tag{55} $$

Recalling (31),

$$ \hat{R}(z) = \frac{k_{app}\,(1 - z^{-1})\,Y(z)}{1 - (1 - t_s\,k_{app})\,z^{-1}}, \tag{56} $$
and comparing (55) with (56), the denominator constraints are

$$ -a_1 - c_1 = 1 \tag{57} $$

and

$$ a_2 = 1 - t_s\,k_{app}, \tag{58} $$

together with

$$ k_{app} = \frac{1 - a_2}{t_s}. \tag{59} $$

Parameter $a_2$ can be calculated by the LSM.
The numerator constraint is the following:

$$ (b_1 + b_2)\,t_s^{-1} + (a_1 + c_1 + a_2 z^{-1})\,t_s^{-1} = k_{app}. \tag{60} $$

Considering the conditions of the denominator, we obtain

$$ (b_1 + b_2)\,t_s^{-1} + \big({-1} + (1 - t_s\,k_{app})\,z^{-1}\big)\,t_s^{-1} = k_{app}. \tag{61} $$

This yields

$$ b_1 + b_2 = k_{app}\,t_s + 1 - (1 - t_s\,k_{app})\,z^{-1}. \tag{62} $$

Since, in our context, $k_{app}$ is a function of time, $k_{app} = k_{app}(t)$, it is possible to write in the $\mathcal{Z}$-domain:

$$ b_1(z) + b_2(z) = k_{app}(z)\,t_s + 1 - \big(1 - t_s\,k_{app}(z)\big)\,z^{-1}, \tag{63} $$

and thus, considering the inverse $\mathcal{Z}$-transform,

$$ b_1(k) + b_2(k) = k_{app}(k)\,t_s + 1 - \big(1 - t_s\,k_{app}(k-1)\big). \tag{64} $$

If $t_s$ is small enough, $k_{app}(k) \approx k_{app}(k-1)$. This implies

$$ b_1(k) + b_2(k) = 0. \tag{65} $$
 □
Remark 1.
Conditions (57), (59), and (60) guarantee that signal $\hat{r}(t)$ equals the derivative of $y(t)$. Nevertheless, to obtain the rejection of the noise, coefficients $b_1$, $b_2$, and $c_1$ should be adaptively calculated using the LSM. In our tests, in order to reduce the computational load, the following conditions were considered: $c_1 = b_1 = b_2 = 0$.
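The recursive LSM update for $a_2$ is not spelled out above; the sketch below is one common recursive least squares formulation, under the illustrative assumption that $a_2$ is fitted as the coefficient of the scalar model $y(k) \approx a_2\,y(k-1)$, from which $k_{app} = (1 - a_2)/t_s$ follows. The regressor choice and the forgetting factor are assumptions, not the authors' exact scheme.

```python
def rls_a2(y, lam=0.99):
    """Scalar recursive least squares fit of y(k) ~ a2*y(k-1) with
    forgetting factor lam; a2 then sets k_app = (1 - a2)/t_s."""
    a2, P = 0.0, 1e6                          # estimate and its (scalar) covariance
    for k in range(1, len(y)):
        phi = y[k - 1]                        # regressor
        g = P * phi / (lam + phi * P * phi)   # gain
        a2 += g * (y[k] - phi * a2)           # update with the prediction error
        P = (P - g * phi * P) / lam
    return a2

t_s = 1e-3
y = [1.0]
for _ in range(500):
    y.append(0.95 * y[-1])                    # synthetic signal with known pole 0.95
a2 = rls_a2(y)
k_app = (1.0 - a2) / t_s
assert abs(a2 - 0.95) < 1e-6
assert abs(k_app - 50.0) < 1e-3
```

On the noise-free synthetic signal, the recursion recovers the pole exactly, so the mapping $k_{app} = (1 - a_2)/t_s$ of Theorem 1 can be checked directly.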

3. Using an Adaptive Kalman Filter to Approximate a Derivative Controller

Assume that the polynomial approximating the signal is of the first order:

$$ \hat{y}(t) = p_0 + p_1\,t, \tag{66} $$

in which $\hat{y}(t)$ represents the polynomial approximation of the signal $y(t)$. The following adaptive Kalman filter (KF) can be implemented, in which the constant parameters $p_0$ and $p_1$ should be calculated at each sampling time. The following state representation is obtained:

$$ \begin{bmatrix} \dot{\hat{y}}(t) \\ \dot{\hat{r}}(t) \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} \hat{y}(t) \\ \hat{r}(t) \end{bmatrix}. \tag{67} $$

It should be noted that $\dot{\hat{y}}(t) = \hat{r}(t)$ represents the approximation of the derivative of the signal as proposed in Proposition 1. In this sense, according to the following general notation, with $x(t) \in \mathbb{R}^2$ and $y(t) \in \mathbb{R}$:

$$ \begin{aligned} \dot{x}(t) &= A\,x(t) \\ y(t) &= H\,x(t), \end{aligned} \tag{68} $$

where

$$ x(t) = \begin{bmatrix} \hat{y}(t) \\ \hat{r}(t) \end{bmatrix} \quad \text{and} \quad A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \tag{69} $$

and $H = [1 \;\; 0]$. Considering the discretization of system (68), the following discrete system is obtained:

$$ \begin{aligned} x(k/k-1) &= A_d\,x(k-1/k-1) + w(k) \\ y(k) &= H\,x(k/k) + v(k), \end{aligned} \tag{70} $$

where the process noise $w(k)$ has covariance matrix $Q_w$ and the measurement noise $v(k)$ has covariance $\zeta$. The discrete form of matrix $A$ of (69) is denoted by $A_d$. If the forward Euler method with the sampling time $t_s$ is applied, the following matrix is obtained:

$$ A_d = \begin{bmatrix} 1 & t_s \\ 0 & 1 \end{bmatrix}. \tag{71} $$
The a priori predicted state is

$$ x(k/k-1) = A_d\,x(k-1/k-1), \tag{72} $$

and the a priori predicted covariance matrix is

$$ P(k/k-1) = A_d\,P(k-1/k-1)\,A_d^T + Q_w. \tag{73} $$

The following equations state the correction (a posteriori prediction) of the adaptive KF:

$$ \begin{aligned} K(k) &= P(k/k-1)\,H^T\,\big(H\,P(k/k-1)\,H^T + \zeta\big)^{-1}, \\ x(k/k) &= x(k/k-1) + K(k)\,\big(y(k) - H\,x(k/k-1)\big), \\ P(k/k) &= P(k/k-1) - K(k)\,H\,P(k/k-1), \end{aligned} \tag{74} $$

where $K(k)$ is the Kalman gain.
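The prediction and correction cycle above can be sketched as follows; the numeric values of $t_s$, $R_{2,2}$, and $\zeta$ are illustrative assumptions, and a noise-free sine measurement is used only to check that the velocity state converges to the true derivative.

```python
import math

# Constant-velocity Kalman filter with H = [1 0] and A_d = [[1, t_s], [0, 1]].
t_s = 1e-3
x = [0.0, 0.0]                      # state estimate [position, velocity]
P = [[1.0, 0.0], [0.0, 1.0]]        # estimate covariance
R22 = 1.0                           # process-noise variance on the velocity row of Q_w
zeta = 1e-4                         # measurement-noise covariance (scalar)

est = []
for k in range(4000):
    y = math.sin(k * t_s)           # noise-free measurement for this check
    # a priori prediction: x = A_d x,  P = A_d P A_d^T + Q_w
    x = [x[0] + t_s * x[1], x[1]]
    P = [[P[0][0] + t_s * (P[0][1] + P[1][0]) + t_s**2 * P[1][1],
          P[0][1] + t_s * P[1][1]],
         [P[1][0] + t_s * P[1][1],
          P[1][1] + R22]]
    # a posteriori correction
    S = P[0][0] + zeta              # innovation covariance H P H^T + zeta
    K = (P[0][0] / S, P[1][0] / S)  # Kalman gain
    innov = y - x[0]
    x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
    P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
    est.append(x[1])

# after the transient, the velocity state tracks the true derivative cos(t)
err = max(abs(est[k] - math.cos(k * t_s)) for k in range(2000, 4000))
assert err < 0.05
```

Since $H = [1\;0]$, the matrix inverses of (74) reduce to scalar divisions, which keeps the sketch free of linear-algebra dependencies.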
Remark 2.
It should be noted that matrix $Q_w$ (process noise covariance) has the following structure:

$$ Q_w = \begin{bmatrix} 0 & 0 \\ 0 & R_{2,2} \end{bmatrix}, \tag{75} $$

in which $R_{2,2}$ states a squared variance. The reason for this structure is that the first equation corresponding to matrix $A$ of (69) is a definition of the velocity, and in this sense, no uncertainty should be set. The second equation assumes a constant velocity, which is a convenient condition but of course not exactly true; in this case, an uncertainty variable must be set. According to our experience, a signal with very wide dynamics needs a very wide uncertainty variable to be set.
Remark 3.
Parameter $R_{2,2}$ is adapted by directly using the definition of the covariance as follows:

$$ R_{2,2} = \big(e_a(N) - \mathrm{mean}(e_a(1\!:\!N))\big)^2, \tag{76} $$

in which the mean value over the last $N$ samples of the error $e_a(k)$ is calculated.
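As a minimal sketch of this adaptation rule, with the window length $N$ and the error history $e_a$ as assumptions:

```python
def adapt_r22(e_a):
    """Covariance-based adaptation of R_{2,2}: squared deviation of the most
    recent error sample from the mean of the last N samples."""
    n = len(e_a)
    mean = sum(e_a) / n
    return (e_a[-1] - mean) ** 2

# with errors [0, 0, 2] the mean is 2/3, so R_{2,2} = (2 - 2/3)^2
assert adapt_r22([0.0, 0.0, 2.0]) == (2.0 - 2.0 / 3.0) ** 2
```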

4. Results and Discussion

This section presents the results obtained using a derivative realized through the adaptive KF and through the proposed algorithm. The results are compared with the exact mathematical derivative. In order to reduce the computational load in the case study, the following conditions were considered: $c_1 = b_1 = b_2 = 0$. This approach also allows us to compare the results with a polynomial adaptive KF method.
Figure 2 shows an ideal measured position signal $y(t)$; neither noise nor faults are present in the measured data. Figure 3 shows the approximated derivative of the measured sine function, and a detail of this result is shown in Figure 4. From this graphical representation, it is visible that the adaptive KF performs better than the proposed algorithm structure.
Figure 5 shows the position signal used, $y_{noise}(t)$, on which noise is superposed, and the graphical representation of the resulting velocity is shown in Figure 6. The signal $y_{noise}(t)$ is represented in Figure 7 together with a fault in its measurement. Figure 8 shows the approximated derivative of the measured signal, and details of this result are shown in Figure 9. Also in the presence of faults, and using a more appropriate adaptation of the adaptive KF, the two methods offer similar results.
Table 1 presents the results of different input signals and the estimation errors of both adaptive estimators based on the Euclidean norm.
The adaptive derivative estimator shows worse results than the derivative obtained through the adaptive KF in the absence of noise and faults. In the presence of noise, and using a more appropriate adaptation of the adaptive KF, the two methods offer similar results. The proposed algorithm structure shows better results in the presence of faults.
Figure 10 shows part of a chirp input signal that starts from a frequency of 200 Hz and reaches 2 kHz within 0.3 s. Because of the presence of the derivative, the amplitude of the signal to be approximated changes linearly as a function of the frequency over time. The low-frequency part of the chirp input signal is presented in Figure 11, and the high-frequency part is shown in Figure 12; these figures also show the resulting tracking of the polynomial adaptive KF and of the proposed algorithm structure.
A fault is defined as an abrupt change in the amplitude over time, and in this sense is characterized by a high amplitude and high frequencies. In this context, it is clear that the proposed algorithm can better localize the faults because it is tuned through the choice of t s on the desired signal.
The proposed algorithm structure does not need to be tuned or, in other words, is tuned using the sampling time, whose upper bound is in general fixed by the Shannon theorem. The quantity $1/k_{app}$ represents the time constant of the approximating derivative filter, and $k_{app}$ is calculated adaptively through the least squares method. The simulations show that, once tuned, the KF gives better results at high frequencies than the proposed algorithm structure, but in the frequency range for which $t_s$ is consistently chosen, the proposed algorithm shows results similar to those offered by the KF.

5. Conclusions

This paper dealt with the approximation of the first derivative of an input signal using a first-order dynamic system in order to avoid spikes and noise. It presented in detail an adaptive derivative estimator for fault detection using a suboptimal dynamic system. After formulating the problem, a proposition and a theorem were proven for a possible approximation structure consisting of a dynamic system. In particular, a proposition based on a Lyapunov approach was proven to show the convergence of the approximation. The proven theorem is constructive and directly shows the suboptimality condition in the presence of noise. A comparison of simulation results with the derivative realized using an adaptive KF and with the exact mathematical derivative was presented. It was shown that the proposed adaptive suboptimal auto-tuning algorithm structure does not depend on the setting of the parameters. Based on these results, an adaptive algorithm was conceived to calculate the derivative of an input signal with convergence in infinite time. The proposed algorithm showed worse results than the derivative obtained through the adaptive KF in the case without noise and faults. In the presence of noise, and using a more appropriate version of the adaptive KF, the two methods offer similar results. In the presence of faults, the proposed algorithm structure showed better results.

Author Contributions

Conceptualization, M.S. and P.M.; software, M.S. and P.M.; validation, M.S. and P.M.; formal analysis, M.S. and P.M.; investigation, M.S. and P.M.; resources, M.S. and P.M.; data curation, M.S. and P.M.; writing–original draft preparation, M.S. and P.M.; writing–review and editing, M.S. and P.M.; visualization, M.S. and P.M.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

$a_2$: adaptive least squares parameter
$x(t)$: state vector of the Kalman filter
$e_r(t)$: derivative error
$k$: discrete time variable
$K(k)$: Kalman gain
$k_{app}$: parameter
$n(t)$: noise signal
$N(z)$: $\mathcal{Z}$-transformed noise signal
$Q_w$: process noise covariance matrix
$r(t)$: derivative function
$\hat{r}(t)$: approximated derivative function
$R(z)$: $\mathcal{Z}$-transformed derivative function
$\hat{R}(z)$: $\mathcal{Z}$-transformed approximated derivative function
$Y(z)$: $\mathcal{Z}$-transformed signal to be differentiated, with or without noise
$y(t)$: signal to be differentiated
$\hat{y}(t)$: polynomial expression of the signal
$y_{noise}(t)$: position signal with noise
$t_s$: sampling time
$\zeta$: measurement noise covariance

References

  1. Rafael Morales Fernando, R.J.D.G.; López, J.C. Real-Time Algebraic Derivative Estimations Using a Novel Low-Cost Architecture Based on Reconfigurable Logic. Sensors 2014, 14, 9349–9368.
  2. Ibrir, S.; Diop, S. A numerical procedure for filtering and efficient high-order signal differentiation. Int. J. Appl. Math. Comput. Sci. 2004, 14, 201–208.
  3. Esfandiari, F.; Khalil, H.K. Output feedback stabilization of fully linearizable systems. Int. J. Control 1992, 56, 1007–1037.
  4. Dabroom, A.M.; Khalil, H.K. Discrete-time implementation of high-gain observers for numerical differentiation. Int. J. Control 1999, 72, 1523–1537.
  5. Chitour, Y. Time-varying high-gain observers for numerical differentiation. IEEE Trans. Autom. Control 2002, 47, 1565–1569.
  6. Schimmack, M.; Mercorelli, P. A sliding mode control using an extended Kalman filter as an observer for stimulus-responsive polymer fibres as actuator. Int. J. Model. Identif. Control 2017, 27, 84–91.
  7. Mercorelli, P. Robust feedback linearization using an adaptive PD regulator for a sensorless control of a throttle valve. Mechatron. J. IFAC 2009, 19, 1334–1345.
  8. Mercorelli, P. A Decoupling Dynamic Estimator for Online Parameters Identification of Permanent Magnet Three-Phase Synchronous Motors. In Proceedings of the IFAC 16th Symposium on System Identification, Brussels, Belgium, 11–13 July 2012; Volume 45, pp. 757–762.
  9. Dabroom, A.; Khalil, H.K. Numerical differentiation using high-gain observers. In Proceedings of the 36th IEEE Conference on Decision and Control, San Diego, CA, USA, 12 December 1997; Volume 5, pp. 4790–4795.
  10. Levant, A. Robust exact differentiation via sliding mode technique. Automatica 1998, 34, 379–384.
  11. Levant, A. Universal single-input-single-output (SISO) sliding-mode controllers with finite-time convergence. IEEE Trans. Autom. Control 2001, 46, 1447–1451.
  12. Yu, X.; Xu, J. Nonlinear derivative estimator. Electron. Lett. 1996, 32, 1445–1447.
  13. Levant, A. Higher order sliding modes and arbitrary-order exact robust differentiation. In Proceedings of the 2001 European Control Conference (ECC), Porto, Portugal, 4–7 September 2001; pp. 996–1001.
  14. Levant, A.; Livne, M.; Yu, X. Sliding-Mode-Based Differentiation and Its Application. In Proceedings of the 20th IFAC World Congress, Toulouse, France, 9–14 July 2017; Volume 50, pp. 1699–1704.
Figure 1. Graphical representation of the algorithm structure and its components. LSM: linear least squares method.
Figure 2. Graphical representation of the position y ( t ) .
Figure 3. Graphical representation of the resulting velocity and its estimations from Figure 2.
Figure 4. Detailed representation of Figure 3 and its estimations.
Figure 5. Graphical representation of the position y ( t ) with noise.
Figure 6. Detailed representation of Figure 5 and its estimations.
Figure 7. Graphical representation of the position y ( t ) with noise and faults.
Figure 8. Graphical representation of the resulting velocity from Figure 7 and its estimations with noise and faults.
Figure 9. Detailed representation of Figure 7 with its estimations.
Figure 10. Graphical representation of chirp input signal with a starting frequency of 200 Hz.
Figure 11. Detailed representation of the low-frequency part of the chirp input signal and its estimations from Figure 10.
Figure 12. Detailed representation of the high-frequency part of the chirp input signal and its estimations from Figure 10.
Table 1. Overview of the estimation errors of different input signals y ( t ) using the Euclidean norm.
| Input Signal | Adaptive Kalman Filter | Adaptive Derivative Estimator |
| --- | --- | --- |
| $\sin^2(t)$ | $4.04 \times 10^{-3}$ | 13.79 |
| $\sin^2(t)$ with noise | 12.06 | 13.95 |
| $\sin^2(t)$ with noise & $\sin(t)$ | 32.34 | 32.12 |
| $\sin^2(t)$ with noise & $\sin(t)$ & fault | 159.0 | 34.73 |
| $\sin^3(t)$ | $1.4 \times 10^{-3}$ | 4.29 |
| $\sin^3(t)$ with noise | 12.05 | 4.41 |
| $\sin^3(t)$ with noise & $\sin(t)$ | 32.34 | 22.98 |
| $\sin^3(t)$ with noise & $\sin(t)$ & fault | 159.0 | 27.10 |
