Article

Multi-Features Fusion for Fault Diagnosis of Pedal Robot Using Time-Speed Signals

by Yuhao Zhu, Zeyu Fu, Zhuang Fu, Xi Chen and Qi Wu
1 State Key Laboratory of Mechanical System and Vibration, Shanghai Jiao Tong University, Shanghai 200240, China
2 Shanghai Engineering Research Center of Civil Aircraft Health Monitoring, Shanghai Aircraft Customer Service Co., Ltd., Shanghai 200241, China
3 School of Electronic, Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
* Author to whom correspondence should be addressed.
Sensors 2019, 19(1), 163; https://doi.org/10.3390/s19010163
Submission received: 3 December 2018 / Revised: 23 December 2018 / Accepted: 27 December 2018 / Published: 4 January 2019
(This article belongs to the Section Intelligent Sensors)

Abstract

In order to realize the automation of pollutant emission tests of vehicles, a pedal robot is designed to drive the vehicle in place of a human operator. Sometimes, the actual time-speed curve of the vehicle deviates beyond the upper or lower limit of the worldwide light-duty test cycle (WLTC) target curve, which causes a fault. In this paper, a new fault diagnosis method is proposed and applied to the pedal robot. Since principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and Autoencoder cannot extract feature information adequately when they are used alone, three types of feature components extracted by PCA, t-SNE, and Autoencoder are fused to form a nine-dimensional feature set. Then, the feature set is reduced into three-dimensional space via Treelet Transform. Finally, the fault samples are classified by a Gaussian process classifier. Compared with the methods using only one algorithm to extract features, the proposed method has the minimum standard deviation, 0.0078, and almost the maximum accuracy, 98.17%. The accuracy of the proposed method is only 0.24% lower than that without Treelet Transform, but the processing time is 6.73% less than that without Treelet Transform. These results indicate that the multi-features fusion model and the Treelet Transform method are quite effective. Therefore, the proposed method is quite helpful for the fault diagnosis of the pedal robot.

1. Introduction

Before vehicle road tests or quality inspection, pollutant emission tests must be carried out. Over the past decade, these tests have been conducted by people who drive vehicles and follow the worldwide light-duty test cycle (WLTC) target curve reported in Reference [1]. As shown in Figure 1, it is a time-speed curve that lasts 1800 s. In fact, when a vehicle is driven, the actual curve often deviates from the target curve. Moreover, the noise in the test environment is hard for operators to tolerate, which may cause human error. In addition, research shows that although the deviation caused by robot control is less than that caused by human control, the number of deviations still needs to be reduced to almost zero. Generally, when the deviation between the actual curve and the target curve is greater than 2 km/h, a fault occurs. Different control parameters of the pedal robot are needed for different types of faults. Therefore, how to identify fault types remains the most challenging part of robot control optimization.
Traditional fault diagnosis is performed by skilled engineers in the time domain [2], which is difficult and time-consuming, especially in the case of large and/or high-dimensional samples. As the data size increases, the analysis becomes quite complex, and it is necessary to reduce the data size by feature extraction [3]. So far, many studies on feature extraction methods have been reported for the dynamic characteristics of fault signals [4]. One of the traditional feature extraction techniques is the Fourier transform [5], but a Fourier coefficient represents a component that lasts for all time; therefore, Fourier analysis is less suitable for nonstationary signals [6]. Other algorithms, such as local mean decomposition, the correlation coefficient, and kurtosis, can also extract features, and these features are generally used directly as the input of the classifier. Such methods are fast, but they cannot present adequate information about the fault signals. The Autoencoder, a deep learning algorithm, can extract more abstract features from the raw input, and major feature components can also be obtained via the t-distributed stochastic neighbor embedding (t-SNE) algorithm or the principal component analysis (PCA) method. These extracted feature components, or representations, can be fused into a new matrix that contains more local major feature information. Thus, this work proposes a fusion methodology of multi-feature learning models via Autoencoder, t-SNE, and PCA.
PCA generates a set of orthogonal bases that capture the directions of the largest variance in the training data [7]. Dunia et al. [8] used PCA for sensor fault identification via reconstruction. The disadvantage of PCA is that it can only extract linear features, while t-SNE can handle nonlinear problems. t-SNE, proposed in Reference [9], is a very effective nonlinear dimension reduction technique that has been widely used in recent years. It is most commonly used to generate two-dimensional embeddings of high-dimensional data in order to simplify cluster recognition [10]. However, PCA and t-SNE need to decompose the problem into different parts and merge the results in the final stage. In addition, most features in machine learning need to be recognized by experts, which is very troublesome. Deep learning does not have the problems mentioned above and can obtain representations with sensitive and valuable features by learning automatically from the original feature set [11]. Moreover, because of its deep model [12], deep learning has stronger adaptability and stability. Therefore, an Autoencoder-based learning algorithm is used to extract abstract features from the raw input. The Autoencoder model is also a special kind of neural network; it is able to learn a richer representation of the input data as well as represent the target output [13]. The aim of this model is to reconstruct the input from the learnt representation [14]. Although the Autoencoder can learn features automatically, it also has some disadvantages: for example, when the training set is small, or the training samples do not follow the same distribution as the test samples, the accuracy of feature extraction is reduced. Clearly, PCA, t-SNE, and Autoencoder have their own advantages and shortcomings in feature learning. In order to make full use of their advantages, this work proposes a novel additive combination of the feature representations from PCA, t-SNE, and Autoencoder, so that the fused features contain richer information about the fault signals.
After feature extraction, the features are combined into a high-dimensional feature set. In order to speed up the calculation and visualize the dimension reduction results, it is necessary to reduce the dimension of the feature set to three. Treelet Transform is a data dimension reduction technique [15]. It generates sparse components, which are able to reveal the inherent structure of the data [16]. After dimension reduction, a pattern classification method is needed. The Gaussian process classifier, based on Bayesian theory, is a machine learning method [17] that can effectively classify nonlinear samples [18]. In this work, a Gaussian process is used to learn the extracted features and produce the fault detection results.
In this paper, a new fault diagnosis method based on a multi-feature learning model is proposed. This method can identify fault types and help optimize the control of the pedal robot. The proposed model is compared with some typical feature extraction algorithms in terms of classification accuracy. The structure of this paper is as follows: Section 2 gives the proposed methodology, Section 3 gives the experiment setup, Section 4 presents the results and discussion, and Section 5 concludes the paper.

2. Methodology

The proposed fault diagnosis method for the pedal robot mainly includes three steps: feature extraction, dimension reduction, and fault classification. First, fault samples are extracted from the curve samples. Second, three-dimensional features are extracted from the fault samples by PCA, t-SNE, and Autoencoder, respectively. Next, since PCA, t-SNE, and Autoencoder cannot extract feature information adequately when they are used alone, and the types of features extracted by them are different, the features are combined into a nine-dimensional feature set to reduce information loss. Then, the feature set is reduced to three dimensions by Treelet Transform, so that the calculation is accelerated and the fault samples can be visualized in three-dimensional space. Finally, after classification by the Gaussian process classifier, the accuracy and result of the fault diagnosis are obtained.

2.1. Data Preprocessing

A fault sample contains two curves, namely, the target curve and the actual curve. The segment spanning ten seconds before and after the midpoint of the deviation was extracted as a fault sample. Since the curves are sampled at 0.1 s intervals, each curve contributes 200 points. In order to extract more features, data integration was necessary: the 200 points of the actual curve were combined with the 200 points of the corresponding target curve to form a 400-dimensional feature. In addition, this study also collected the time-displacement curves of the pedal robot's two end-effectors. Therefore, the 400-dimensional feature and the 200 points of each end-effector displacement at the corresponding times were combined to form an 800-dimensional feature, which was used as a fault sample.
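As a concrete illustration, the following minimal sketch assembles one fault sample from the four curves described above. It assumes the curves are available as NumPy arrays sampled at 0.1 s; the function and array names (build_fault_sample, pedal_left, pedal_right, mid_idx) are illustrative and not taken from the original implementation.

```python
import numpy as np

DT = 0.1              # sampling interval of the curves, in seconds
HALF_WINDOW = 100     # 10 s before and after the deviation midpoint -> 100 points each side

def build_fault_sample(actual, target, pedal_left, pedal_right, mid_idx):
    """Concatenate four 200-point segments around a deviation midpoint
    into one 800-dimensional fault sample (assumed array layout)."""
    lo, hi = mid_idx - HALF_WINDOW, mid_idx + HALF_WINDOW
    segments = [actual[lo:hi],       # 200 points of the actual time-speed curve
                target[lo:hi],       # 200 points of the WLTC target curve
                pedal_left[lo:hi],   # 200 points of one end-effector displacement
                pedal_right[lo:hi]]  # 200 points of the other end-effector displacement
    return np.concatenate(segments)  # shape (800,)
```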

2.2. Feature Extraction

This work explores feature extraction by using PCA, t-SNE, and Autoencoder to extract three-dimensional features, respectively; these three three-dimensional features are then combined into a nine-dimensional feature matrix.

2.2.1. Feature Extraction by PCA

The process of feature extraction by PCA consists of the following five stages:
Stage 1: Each sample has 800 dimensions, and the 11,430 samples constitute a sample array
$$x_i = (x_{i1}, x_{i2}, \ldots, x_{ip})^T, \quad i = 1, 2, \ldots, 11430; \; p = 800,$$
then, the sample elements are normalized as follows:
$$Z_{ij} = \frac{x_{ij} - \bar{x}_j}{s_j}, \quad i = 1, 2, \ldots, 11430; \; j = 1, 2, \ldots, 800,$$
where $\bar{x}_j = \frac{1}{11430}\sum_{i=1}^{11430} x_{ij}$ and $s_j^2 = \frac{1}{11430-1}\sum_{i=1}^{11430}(x_{ij} - \bar{x}_j)^2$; thus the normalized matrix $Z$ is obtained.
Stage 2: Calculate the correlation coefficient matrix of the normalized matrix $Z$:
$$R = \frac{Z^T Z}{11430 - 1}.$$
Stage 3: Solve for the eigenvalues of the matrix $R$; 800 eigenvalues $\lambda_j$ are obtained, and the number $m$ of retained principal components is determined by the following criterion [19]:
$$\frac{\sum_{j=1}^{m} \lambda_j}{\sum_{j=1}^{800} \lambda_j} \geq 0.85.$$
For each $\lambda_j$, $j = 1, 2, \ldots, m$, solve the system of equations $R b = \lambda_j b$ to obtain the unit eigenvector $b_j^0$.
Stage 4: Convert the normalized indicator variables into principal components:
$$U_{ij} = z_i^T b_j^0, \quad j = 1, 2, \ldots, m,$$
where $U_1$ is the first principal component, $U_2$ is the second principal component, and $U_j$ is the $j$-th principal component.
Stage 5: Comprehensive evaluation of m principal components.
In this paper, the cumulative contribution of the first three eigenvalues was 99.37%, which indicates that the first three dimensions already contain enough information about the original samples. Thus, they were extracted as the features of the high-dimensional fault samples.
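For reference, the following sketch reproduces these stages with scikit-learn. It assumes the fault samples are stacked in an 11,430 × 800 array X (random placeholder data is used here), and the 0.85 threshold is shown only as a diagnostic, since in this dataset the first three components already suffice.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# X: the 11430 x 800 fault-sample matrix (placeholder data here)
X = np.random.rand(11430, 800)

# Stage 1: normalize each of the 800 columns (zero mean, unit variance)
Z = StandardScaler().fit_transform(X)

# Stages 2-4: eigen-decomposition of the correlation matrix and projection
pca = PCA(n_components=3)            # the first three components are retained
features_pca = pca.fit_transform(Z)  # shape (11430, 3)

# Stage 3 check: cumulative contribution of the retained eigenvalues
print(pca.explained_variance_ratio_.cumsum())
```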

2.2.2. Feature Extraction by t-SNE

The $d$-dimensional input dataset is denoted by $\chi = \{x_1, x_2, \ldots, x_n\} \subset \mathbb{R}^d$. Before applying the t-SNE algorithm, PCA is used to initialize the data. t-SNE computes an $s$-dimensional embedding of the points in $\chi$, denoted by $y = \{y_1, y_2, \ldots, y_n\} \subset \mathbb{R}^s$, where $s < d$. The joint probability $p_{ij}$ measuring the similarity between $x_i$ and $x_j$ is computed as
$$p_{j|i} = \frac{\exp(-\|x_i - x_j\|^2 / 2\sigma_i^2)}{\sum_{k \neq i} \exp(-\|x_i - x_k\|^2 / 2\sigma_i^2)}, \quad \text{and} \quad p_{ij} = \frac{p_{j|i} + p_{i|j}}{2n}.$$
The bandwidth of the Gaussian kernel, $\sigma_i$, is chosen such that the perplexity of $P_i$ matches a user-defined value, where $P_i$ is the conditional distribution over all other data points given $x_i$. The similarity between $y_i$ and $y_j$ in the low-dimensional embedding is defined as
$$q_{ij} = \frac{(1 + \|y_i - y_j\|^2)^{-1}}{\sum_{k \neq l}(1 + \|y_k - y_l\|^2)^{-1}}.$$
t-SNE finds the points $\{y_1, y_2, \ldots, y_n\}$ which minimize the Kullback-Leibler divergence between the joint distribution $P$ of points in the input space and the joint distribution $Q$ of points in the embedding space:
$$C(y) = KL(P \,\|\, Q) = \sum_{i \neq j} p_{ij} \log \frac{p_{ij}}{q_{ij}}.$$
The points $y$ are initialized randomly, and the cost function $C(y)$ is minimized using gradient descent. In this paper, $d = 800$ and $s = 3$; thus, a three-dimensional feature was extracted from the high-dimensional feature set.
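A corresponding sketch using scikit-learn's TSNE is given below. The PCA initialization and the three output dimensions follow the description above, while the perplexity value and random seed are assumptions, since they are not reported in the text.

```python
import numpy as np
from sklearn.manifold import TSNE

# Z: the normalized 11430 x 800 sample matrix (random placeholder here)
Z = np.random.rand(11430, 800)

tsne = TSNE(n_components=3,      # s = 3 target dimensions
            init='pca',          # PCA initialization, as described above
            perplexity=30.0,     # perplexity that fixes each sigma_i (value assumed)
            random_state=0)
features_tsne = tsne.fit_transform(Z)    # shape (11430, 3)
```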

2.2.3. Feature Extraction by Autoencoder

An Autoencoder learns a feed-forward hidden representation $h(x)$ of the input $x$, from which a reconstruction $\hat{x}$ is obtained that is as close as possible to $x$:
$$h(x) = g(b + \omega x),$$
$$\hat{x} = \mathrm{sigm}(c + \nu h(x)),$$
where $\omega$ and $\nu$ are weight matrices, $b$ and $c$ are bias vectors, $g$ is a nonlinear activation function, and $\mathrm{sigm}(a) = 1/(1 + \exp(-a))$. The loss function used in this paper is the cross-entropy loss:
$$l(x) = -\sum_{d=1}^{D} \left[ x_d \log \hat{x}_d + (1 - x_d)\log(1 - \hat{x}_d) \right].$$
Training the Autoencoder corresponds to optimizing the parameters to reduce the average loss on the training examples. In this paper, the hidden layer h ( x ) has three neurons, which represent the features extracted from the input.
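The following PyTorch sketch illustrates such an Autoencoder with a three-neuron hidden layer and a cross-entropy reconstruction loss. The sigmoid encoder activation, the rescaling of inputs to [0, 1], the optimizer, and the number of epochs are assumptions, as they are not specified in the text.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Single hidden layer with three neurons; sigmoid output so that the
    cross-entropy reconstruction loss applies (inputs assumed scaled to [0, 1])."""
    def __init__(self, n_in=800, n_hidden=3):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())  # h(x) = g(b + wx)
        self.decoder = nn.Sequential(nn.Linear(n_hidden, n_in), nn.Sigmoid())  # x_hat = sigm(c + v h(x))

    def forward(self, x):
        h = self.encoder(x)
        return self.decoder(h), h

# X01: the 11430 x 800 fault-sample matrix rescaled to [0, 1] (placeholder here)
X01 = torch.rand(11430, 800)

model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # optimizer and learning rate assumed
loss_fn = nn.BCELoss()                                      # cross-entropy reconstruction loss l(x)

for epoch in range(50):                                     # number of epochs assumed
    x_hat, _ = model(X01)
    loss = loss_fn(x_hat, X01)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

with torch.no_grad():
    _, features_ae = model(X01)              # the three hidden activations are the features
features_ae = features_ae.numpy()            # shape (11430, 3)
```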

2.3. Dimension Reduction by Treelet Transform

After using PCA, t-SNE, and Autoencoder to extract three-dimensional features, respectively, the features are combined into a nine-dimensional feature set. Thus, an 11,430 × 9 feature set matrix is extracted from the 11,430 × 800 fault sample matrix. Then, this feature set is reduced to a three-dimensional feature set by Treelet Transform.
Stage 1: Normalize the features extracted by PCA, t-SNE, and Autoencoder, respectively.
Stage 2: Define the clustering levels $l = 0, 1, 2, \ldots, L$ ($L_{\max} = p - 1$), where the dimension $p = 9$. At level $l = 0$, each signal $x_k$ is represented by the original variables $x^{(0)} = [s_{0,1}, s_{0,2}, \ldots, s_{0,p}]$, where $s_{0,k} = x_k$. Initialize the similarity matrix and the covariance matrix by the following equation:
$$\rho_{ij} = \frac{\Sigma_{ij}}{\sqrt{\Sigma_{ii}\,\Sigma_{jj}}},$$
with $\Sigma_{ij} = E[(s_i - E s_i)(s_j - E s_j)]$,
where $\rho_{ij}$ are the correlation coefficients entering the similarity matrix $M_{ij} = |\rho_{ij}| + \lambda|\Sigma_{ij}|$. Initialize the basis matrix $B_0 = [\phi_{0,1}, \phi_{0,2}, \ldots, \phi_{0,p}]$ as an identity matrix of size $p \times p$, and initialize the active index set $\delta = \{1, 2, \ldots, p\}$. For the $L$-level Treelet Transform, let $l = 1, 2, \ldots, L$ and repeat the following stages:
Stage 3: At each level of the tree, the two most similar variables are selected according to the similarity matrix $M^{(l-1)}$; denote the two variables with the maximum correlation coefficient as $\alpha$ and $\beta$:
$$(\alpha, \beta) = \arg\max_{i, j \in \delta} M_{ij}^{(l-1)}.$$
Stage 4: Find the Jacobi rotation matrix
$$J(\alpha, \beta, \theta_l) = \begin{bmatrix} 1 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots & & \vdots \\ 0 & \cdots & c & \cdots & -s & \cdots & 0 \\ \vdots & & \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & s & \cdots & c & \cdots & 0 \\ \vdots & & \vdots & & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & 0 & \cdots & 1 \end{bmatrix},$$
where $c = \cos(\theta_l)$ and $s = \sin(\theta_l)$, and the rotation angle $\theta_l$ is chosen such that
$$|\theta_l| \leq \frac{\pi}{4}, \quad \Sigma^{(l)} = J^T \Sigma^{(l-1)} J, \quad \Sigma_{\alpha\beta}^{(l)} = \Sigma_{\beta\alpha}^{(l)} = 0.$$
This transformation corresponds to a change of basis $B_l = B_{l-1} J$ and new coordinates $x^{(l)} = J^T x^{(l-1)}$. Update the similarity matrix $M^{(l)} = J^T M^{(l-1)} J$ accordingly.
Stage 5: Multi-resolution analysis. If $\Sigma_{\alpha\alpha}^{(l)} \geq \Sigma_{\beta\beta}^{(l)}$, define the detail (difference) coordinate as $d_l = x_\beta^{(l)}$ and the sum coordinate as $s_l = x_\alpha^{(l)}$, and remove the index $\beta$ from the active set $\delta$. Define the scaling and detail functions $\phi_l$ and $\psi_l$ as columns $\alpha$ and $\beta$ of the basis matrix $B_l$, respectively.
Stage 6: Repeat Stages 3 to 5 until the highest level $l = L$ is reached. The final orthogonal basis matrix $B$ is then obtained as
$$B = [\phi_L, \psi_1, \psi_2, \ldots, \psi_L]^T,$$
where $\phi_L$ is the scaling vector at the highest level and $\psi_i$ is the detail vector at level $i$. The orthonormal treelet decomposition at level $l$ is then
$$x = \sum_{i=1}^{p-l} s_{l,i}\,\phi_{l,i} + \sum_{i=1}^{l} d_i\,\psi_i.$$
Stage 7: Normalize the obtained eigenvectors after Treelet Transform.
In this paper, the input was the 11,430 × 9 feature set matrix, and the maximum level was $L = 6$. The output was the set of three-dimensional normalized feature vectors. The score of the remaining three dimensions was 94.61%, which indicated that almost all the information had been retained from the feature set matrix. Therefore, the remaining three dimensions were fed into the classifier as a representative of the high-dimensional feature set matrix.
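A compact NumPy sketch of the stages above is given below for orientation. It measures similarity by the absolute correlation only (i.e., it omits the $\lambda|\Sigma_{ij}|$ term), skips the normalization of Stages 1 and 7, and returns the coordinates of the sum variables that survive after $L = 6$ merges, which is how three retained dimensions arise for $p = 9$. Function and variable names are illustrative.

```python
import numpy as np

def treelet_features(F, n_levels=6):
    """Minimal treelet sketch: at each level, rotate the most correlated pair of
    active ('sum') variables with a Jacobi rotation and drop the lower-variance
    coordinate as a detail.  The surviving sum coordinates are returned."""
    n, p = F.shape
    X = F.copy()
    C = np.cov(X, rowvar=False)          # covariance matrix Sigma
    active = list(range(p))              # delta: indices of current sum variables

    for _ in range(n_levels):
        # Stage 3: most similar pair among active variables (by |correlation|)
        corr = C / np.sqrt(np.outer(np.diag(C), np.diag(C)))
        best, pair = -1.0, None
        for i in active:
            for j in active:
                if i < j and abs(corr[i, j]) > best:
                    best, pair = abs(corr[i, j]), (i, j)
        a, b = pair

        # Stage 4: rotation angle that zeroes C[a, b], with |theta| <= pi/4
        if abs(C[a, a] - C[b, b]) < 1e-12:
            theta = np.pi / 4 * np.sign(C[a, b])
        else:
            theta = 0.5 * np.arctan(2 * C[a, b] / (C[a, a] - C[b, b]))
        c, s = np.cos(theta), np.sin(theta)
        J = np.eye(p)
        J[a, a], J[a, b], J[b, a], J[b, b] = c, -s, s, c

        # change of basis: new coordinates and updated covariance
        X = X @ J
        C = J.T @ C @ J

        # Stage 5: the lower-variance coordinate becomes a detail and leaves delta
        drop = b if C[a, a] >= C[b, b] else a
        active.remove(drop)

    return X[:, active]                  # the remaining sum coordinates

# Usage with the fused feature set (each block standardized beforehand):
# F = np.hstack([features_pca, features_tsne, features_ae])   # 11430 x 9
# features_treelet = treelet_features(F, n_levels=6)          # 11430 x 3
```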

2.4. Fault Classification by Gaussian Process Classifier

After feature extraction and dimension reduction, an 11,430 × 3 matrix of fault samples is obtained. Then, these fault samples are trained and grouped into four classes by the Gaussian process classifier.
Assume the dataset is $D = \{(x_i, y_i) \mid i = 1, \ldots, N\}$, where $x$ is the input vector of dimension $d$ and $y$ is the class label $+1/-1$. The $N \times d$ input matrix is denoted as $X$. Predictions for new inputs $x_*$ are made from the given training data using the GP model. GP binary classification is performed by first computing the distribution over the latent function $f_*$ corresponding to the test case,
$$p(f_* \mid X, y, x_*) = \int p(f_* \mid X, x_*, f)\, p(f \mid X, y)\, df,$$
where $p(f \mid X, y) = p(y \mid f)\, p(f \mid X)/p(y \mid X)$ is the posterior over the latent variables. The probabilistic prediction is then made by
$$\bar{\pi}_* = p(y_* = +1 \mid X, y, x_*) = \int \sigma(f_*)\, p(f_* \mid X, y, x_*)\, df_*.$$
In this work, $N = 11430$ and $d = 3$. Since the outputs are divided into four classes, a one-against-the-rest multi-classification scheme is used: the class with the largest number of samples forms the first class, and the other three sample sets together form the second class. This completes one binary classification and gives the classification probability of the largest class. This step is then repeated on the other three sample sets. Finally, one probability is obtained for every sample in each binary classification, and each sample is assigned to the class with the maximum probability. The accuracy is computed by comparing the predicted labels with the expert experience-based labels.
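The one-against-the-rest scheme described above can be reproduced with scikit-learn's GaussianProcessClassifier, as sketched below. The RBF kernel, the train/test split, and the small random placeholder data standing in for the 11,430 × 3 Treelet features and the expert labels are assumptions made for illustration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import train_test_split

# Small random placeholders; in the real pipeline these would be the
# 11430 x 3 Treelet features and the expert experience-based labels (1..4).
rng = np.random.default_rng(0)
features = rng.normal(size=(500, 3))
labels = rng.integers(1, 5, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.1, random_state=0)   # split ratio assumed

# One binary GP classifier per class against the remaining three classes;
# each sample is then assigned to the class with the highest predicted probability.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),  # kernel choice assumed
                                multi_class='one_vs_rest')
gpc.fit(X_train, y_train)
proba = gpc.predict_proba(X_test)            # per-class probabilities
y_pred = gpc.classes_[proba.argmax(axis=1)]  # maximum-probability class
print('test accuracy:', (y_pred == y_test).mean())
```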

3. Experiment

In order to realize the automation of pollutant emission tests without changing or disassembling the structure of the vehicle, a pedal robot is designed to drive the vehicle in place of a human driver. As shown in Figure 2, the main components of the pedal robot are servo motors, sliders, and end-effectors. The servo motor pushes the end-effector by controlling the slider. In order to improve the stability of control, the end-effector is designed to imitate the shape of a human foot. As shown in Figure 3, the robot system consists of three parts: the control box, the manipulator, and the end-effector.
As shown in Figure 4, the operation process is as follows: first, the control strategy of the robot is transmitted to the Advanced RISC Machine (ARM) by the computer. Next, the ARM sends control signals to the Field Programmable Gate Array (FPGA). Then, the servo motor is controlled by the servo driver. Finally, the servo motor pushes the end-effector to drive the brake pedal or the acceleration pedal of the vehicle. During emission testing, the pedal robot controls the vehicle so that its time-speed curve follows the WLTC curve. The actual speed of the vehicle is fed back to the computer every 0.1 s via the Controller Area Network (CAN) bus. Therefore, the faults can be analyzed according to the feedback information.
In this work, 50 different vehicles are used. After each vehicle test, the control parameters of the robot are adjusted according to the type and number of the faults, and the test is repeated. On average, each vehicle undergoes about seven optimization rounds, after which the number of faults in one test is reduced to almost zero. The fault behavior differs from vehicle to vehicle. By testing 50 vehicles and testing each vehicle several times, the usability of the method and the accuracy of fault prediction are both improved.

4. Results and Discussion

In this work, 346 pollutant emission tests are carried out, and 11,430 fault samples are obtained. Figure 5 shows the sample entropy of 800 samples. Based on expert experience, the fault samples are classified into four types, and each fault sample has a type label. The expert experience-based labels reflect the judgment of experienced human operators, who can correctly identify the type of a fault by comparing the actual curve with the target curve. Figure 6 shows the schematic diagram of the four types of faults. According to the time-speed curve, faults can be classified into four types: acceleration fault, deceleration fault, convex fault, and concave fault.
Gaussian process classification is quite time-consuming: if the dimension of the extracted features is high, the total processing time is long, but if the dimension is too low, information is lost. After many tests, it was found that the first three dimensions already contain enough information, so reducing the features to three dimensions not only shrinks the data and accelerates the calculation, but also retains the feature information and the classification accuracy; three dimensions are also convenient for visualization. During feature extraction, three-dimensional features are extracted from the fault samples by PCA, t-SNE, and Autoencoder, respectively. After dimension reduction by Treelet Transform, the fused feature set is reduced to a three-dimensional feature set matrix. Figure 7 shows the 3D surface graphs of the three-dimensional features extracted by PCA, t-SNE, Autoencoder, and Treelet Transform, respectively; the x, y, and z coordinates are the three extracted feature vectors.
The fault samples are classified into four types by Gaussian process classifier. Since ten-fold cross-validation has good performance [20], it is used in this paper. The average accuracy and standard deviation comparison between different methods is shown in Table 1. Method 4 is the method proposed in this paper, which has the minimum standard deviation and almost the maximum accuracy.
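As a usage note, the ten-fold protocol behind one row of Table 1 could be reproduced along the following lines; this is a sketch that reuses the gpc classifier and the placeholder features and labels from the previous section, and the reported numbers are those of the paper, not of this placeholder code.

```python
from sklearn.model_selection import cross_val_score

# Ten-fold cross-validation of the GP classifier on the reduced features;
# the mean and standard deviation of `scores` correspond to the Accuracy and
# Standard Deviation columns of Table 1 (here computed on placeholder data).
scores = cross_val_score(gpc, features, labels, cv=10)
print('accuracy: %.4f +/- %.4f' % (scores.mean(), scores.std()))
```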
As shown in Table 1, when PCA, t-SNE, and Autoencoder are each used alone to extract the features fed to the Gaussian process classifier, the accuracy of the Autoencoder is higher than that of t-SNE and PCA. This shows that the deep learning method is more effective than the other two feature extraction methods. Since PCA, t-SNE, and Autoencoder cannot extract feature information adequately when used alone, and the feature types extracted by them are different, these features are combined into a nine-dimensional feature set as the input of the Gaussian process classifier. The accuracy of this method (Method 5) is 98.41%, which is much higher than that of PCA, t-SNE, or Autoencoder alone. This shows that information loss can be effectively prevented by multi-features fusion.
In addition, as shown in Table 1, the processing time of Method 4 is shorter than that of Method 5, because the higher the dimension, the longer the time required for Gaussian process classification. The accuracy of Method 4 is only 0.24% lower than that of Method 5, but the processing time of one sample in Method 4 is 6.73% less than that of Method 5. This clearly shows that Treelet Transform can not only reduce the dimension of the data and accelerate the calculation, but also retain the feature information and the classification accuracy. Therefore, the hybrid multi-model and Treelet Transform methodology is more effective than the other methods.
Furthermore, since the feature set is reduced to three dimensions, the fault samples can be visualized in three-dimensional space. Some samples are randomly selected to make the figure clearer. As shown in Figure 8, the same three-dimensional axes are used in all the sub-figures. The dots of Method 4 show clearer boundaries between classes than those of Method 1, Method 2, and Method 3, which also indicates that the method proposed in this paper performs well.

5. Conclusions

In order to realize the automation of pollutant emission tests, a pedal robot is designed to drive the vehicle in place of a human driver. The actual time-speed curve of the vehicle should follow the WLTC target curve; however, sometimes the actual curve deviates beyond the upper or lower limit of the target curve, which causes a fault. In this paper, a new fault diagnosis method is proposed and applied to the fault diagnosis of the pedal robot. A total of 346 pollutant emission tests were carried out, and 11,430 fault samples were obtained for fault diagnosis. To find more effective feature information in the fault samples, a deep learning algorithm, the Autoencoder, is used for feature extraction, and the results show that it is better than the other feature extraction algorithms. Since PCA, t-SNE, and Autoencoder cannot extract feature information adequately when they are used alone, and the feature types they extract are different, three types of feature components extracted by PCA, t-SNE, and Autoencoder are fused to form a nine-dimensional feature set. Then, the feature set is reduced into three-dimensional space via Treelet Transform. Finally, the fault samples are classified into four types by the Gaussian process classifier. The experimental results show that the proposed method has the minimum standard deviation, 0.0078, and almost the maximum accuracy, 98.17%. Compared with the methods using only one algorithm to extract features, the proposed method has much higher classification accuracy, which shows that information loss can be effectively prevented by multi-features fusion.
Moreover, Treelet Transform is used to reduce the dimension of the feature set, so that the fault samples can be visualized in three-dimensional space. The accuracy of the proposed method is only 0.24% lower than that without Treelet Transform, but the processing time of one sample is 6.73% less than that without Treelet Transform. This clearly shows that Treelet Transform can not only reduce the dimension of the data and accelerate the calculation, but also retain the feature information and the classification accuracy, which indicates that the hybrid multi-model and Treelet Transform method is quite effective. Therefore, the proposed method can efficiently identify the type of the faults, which helps optimize the control of the pedal robot in pollutant emission tests.

Author Contributions

Y.Z. designed the experiments and the methodology; Z.F. performed the experiments and analyzed the data; X.C. and Q.W. validated the established model in this work and performed writing—review and editing; Z.F. contributed the project administration and the funding acquisition; Y.Z. wrote the paper.

Funding

This work was partially supported by the National Natural Science Foundation of China (Grant nos. 61473192, U1401240 and 61075086); and the Project of Science and Technology Commission of Shanghai Municipality (Grant no. 17441901000).

Acknowledgments

We want to gratefully acknowledge Yao Wang for his help and feedback provided on this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Limits and Measurement Methods for Emissions from Light-Duty Vehicles. GB18352.6 – 2016. 2016, pp. 57–67. Available online: https://www.chinesestandard.net/PDF/English.aspx/GB18352.6-2016 (accessed on 2 January 2019).
  2. Chen, S.; Qi, Z.; Chen, D.H.; Guo, L.F.; Peng, W.W. Investigation of Bayesian network for reliability analysis and fault diagnosis of complex systems with real case applications. Adv. Mech. Eng. 2017, 9. [Google Scholar] [CrossRef]
  3. Chang, S.; Leng, M.; Wu, H.; Thompson, J. Aircraft ice accretion prediction using neural network and wavelet packet transform. Aircr. Eng. Aerosp. Technol. 2016, 88, 128–136. [Google Scholar] [CrossRef]
  4. Cheng, Y.; Yuan, H.; Liu, H.; Lu, C. Fault diagnosis for rolling bearing based on sift-kpca and svm. Eng. Comput. 2017, 34, 53–65. [Google Scholar] [CrossRef]
  5. Thatoi, D.N.; Das, H.C.; Parhi, D.R. Review of techniques for fault diagnosis in damaged structure and engineering system. Adv. Mech. Eng. 2012, 3, 181–197. [Google Scholar] [CrossRef]
  6. Zhang, M.; Li, K.; Hu, Y. Classification of power quality disturbances using wavelet packet energy and multiclass support vector machine. Int. J. Comput. Math. Electr. Electron. Eng. 2013, 31, 424–442. [Google Scholar] [CrossRef]
  7. Turk, M.; Pentland, A. Eigenfaces for recognition. J. Cognit. Neurosci. 1991, 3, 71. [Google Scholar] [CrossRef] [PubMed]
  8. Dunia, R.; Qin, S.J.; Edgar, T.F.; Mcavoy, T.J. Identification of faulty sensors using principal component analysis. AIChE J. 2010, 42, 2797–2812. [Google Scholar] [CrossRef]
  9. Van der Maaten, L.; Hinton, G. Visualizing Data using t-SNE. J. Mach. Learn. Res. 2008, 9, 2579–2605. [Google Scholar]
  10. Linderman, G.C.; Steinerberger, S. Clustering with t-SNE, Provably. arXiv 2017, arXiv:1706.02582v1. [Google Scholar]
  11. Lecun, Y.; Bengio, Y.; Hinton, G. Review: Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  12. Li, H. Deep learning for natural language processing: Advantages and challenges. Natl. Sci. Rev. 2018, 5, 24–26. [Google Scholar] [CrossRef]
  13. Vincent, P.; Larochelle, H.; Lajoie, I.; Bengio, Y. Manzagol, stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Mach. Learn. Res. 2010, 11, 3371–3408. [Google Scholar]
  14. Akshay, S.; Maneet, S.; Richa, S.; Mayank, V. Residual codean autoencoder for facial attribute analysis. Pattern Recognit. Lett. 2018, in press. [Google Scholar]
  15. Zhao, C.; Zhang, W.; Yang, Y.; Yao, S. Treelet-based clustered compressive data aggregation for wireless sensor networks. IEEE Trans. Veh. Technol. 2015, 64, 4257–4267. [Google Scholar] [CrossRef]
  16. Assi, N.; Moskal, A.; Slimani, N.; Viallon, V.; Chajes, V.; Freisling, H. A treelet transform analysis to relate nutrient patterns to the risk of hormonal receptor-defined breast cancer in the european prospective investigation into cancer and nutrition (epic). Public Health Nutr. 2016, 19, 1–13. [Google Scholar] [CrossRef] [PubMed]
  17. Hassan, T.; Asghari, O.; Emery, X. Application of plurigaussian simulation to delineate the layout of alteration domains in Sungun copper deposit. Cent. Eur. J. Geosci. 2013, 5, 514–522. [Google Scholar] [Green Version]
  18. Ghahramani, Z. Probabilistic machine learning and artificial intelligence. Nature 2015, 521, 452–459. [Google Scholar] [CrossRef] [PubMed]
  19. Yuan, L.; Bai, Y.S. KNN fault detection based on principle component research. J. Shenyang Univ. Chem. Technol. 2018, 32, 366–371. [Google Scholar]
  20. Kohavi, R. A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection. In Proceedings of the International Joint Conference on Artificial Intelligence, Montreal, QC, Canada, 20–25 August 1995. [Google Scholar]
Figure 1. Curve of the worldwide light-duty test cycle (WLTC).
Figure 2. The axonometric drawing of pedal robot designed in this work. The numbers denote the following: 1—end-effectors of the robot, 2—sliders, and 3—servo motors.
Figure 3. The robot system designed in this work. The numbers denote the following: 1—control box, 2—emergency stop button, 3—interface to the computer, 4—interface to the power supply, 5—servo motor, 6—slider, 7—end-effectors of the robot, and 8—brake pedal of the vehicle.
Figure 4. Procedures of robot control.
Figure 5. The sample entropy of 800 samples.
Figure 6. The schematic diagram of four types of faults: (a) Acceleration fault; (b) Deceleration fault; (c) Convex fault; and (d) Concave fault.
Figure 7. 3D surface graph of three-dimensional features extracted by each algorithm. (a) Three-dimensional features extracted by principal component analysis (PCA); (b) Three-dimensional features extracted by t-distributed stochastic neighbor embedding (t-SNE); (c) Three-dimensional features extracted by Autoencoder; (d) Three-dimensional features extracted by Treelet Transform.
Figure 8. Distribution of 4 classes of faults. Class 1—convex fault; Class 2—concave fault; Class 3—acceleration fault; and Class 4—deceleration fault. (a) Distribution of Method 4; (b) Distribution of Method 3; (c) Distribution of Method 2; (d) Distribution of Method 1.
Table 1. Comparison among different methods.
Method | Feature Extraction Algorithm | Dimension Reduction Algorithm | Accuracy | Standard Deviation | Processing Time of One Sample
1 | PCA | None | 87.60% | 0.0197 | 9 ms
2 | t-SNE | None | 91.53% | 0.0112 | 72 ms
3 | Autoencoder | None | 93.28% | 0.0136 | 23 ms
4 | PCA + t-SNE + Autoencoder | Treelet Transform | 98.17% | 0.0078 | 97 ms
5 | PCA + t-SNE + Autoencoder | None | 98.41% | 0.0094 | 104 ms
