Article

Parametric Fault Diagnosis of Analog Circuits Based on a Semi-Supervised Algorithm

1 College of Mechanical and Electrical Engineering, Henan Agricultural University, Zhengzhou 450002, China
2 Department of Communication, National Digital Switching System Engineering and Technology R&D Center (NDSC), Zhengzhou 450002, China
* Author to whom correspondence should be addressed.
Symmetry 2019, 11(2), 228; https://doi.org/10.3390/sym11020228
Submission received: 10 January 2019 / Revised: 6 February 2019 / Accepted: 12 February 2019 / Published: 14 February 2019
(This article belongs to the Special Issue Symmetry in Engineering Sciences)

Abstract

The parametric fault diagnosis of analog circuits is crucial for condition-based maintenance (CBM) in prognostics and health management. In order to improve the diagnosis rate of parametric faults in engineering applications, a semi-supervised machine learning algorithm was used to classify parametric faults. A lifting wavelet transform was used to extract fault features, a locality preserving projection (LPP) algorithm was adopted to optimize Fisher linear discriminant analysis, and a semi-supervised cooperative training algorithm was utilized for fault classification. In the proposed method, fault values were randomly selected as training samples within the parametric fault intervals, both to improve the generalization of the model and to raise the fault diagnosis rate. After semi-supervised dimensionality reduction and semi-supervised classification were applied, the diagnosis rate was slightly higher than that of existing models trained with a fixed value of the analyzed component.

1. Introduction

Analog circuits are extensively used in consumer electronics, industrial systems, and aerospace applications. However, the services provided by analog circuits are severely threatened by parametric faults. Therefore, parametric fault diagnosis and fault location in analog circuits have become highly active research fields. Feature extraction, dimensionality reduction, and the selection of classification algorithms are the main research topics in parametric fault diagnosis of analog circuits.
Fault feature extraction is the precondition and foundation for the design of subsequent classifiers. Due to the tolerance and nonlinearity of electronic components, the original signals overlap in both the traditional time domain and the frequency domain. Fault feature extraction based on signal processing is therefore an active topic: the Hilbert–Huang transform (HHT) [1], wavelets [2,3,4], and the wavelet packet transform [5] can obtain time-frequency features for fault diagnosis in analog circuits. Rényi's entropy [6], conditional entropy [4,7], and cross-wavelet singular entropy [8] are used for fault feature extraction, since entropy measures the uncertainty and variation of information. In order to reflect fault information from different perspectives, statistical properties of fractional transform signals have been proposed as fault features [9], for example, distance, mean, standard deviation, skewness, kurtosis, entropy, median, third central moment, and centroid. The modified binary bat algorithm (MBBA) with chaos and the Doppler effect has been used to obtain an optimized feature subset [10].
Due to the high dimensionality of fault features and the complexity of the classifier, it is necessary to reduce the dimension before feeding the fault features to the classifier. Current dimensionality reduction methods can be classified into two groups: linear and nonlinear. For linear dimensionality reduction, principal component analysis (PCA) [11] and linear discriminant analysis (LDA) [12] are commonly used, where PCA mainly maximizes the mutual information between the original high-dimensional data and the projected low-dimensional data. LDA, also called Fisher linear discriminant analysis (FDA), obtains the optimal projection vectors by maximizing the trace ratio of between-class scatter to within-class scatter.
The key to parametric fault diagnosis is the selection and optimization of the classifier. Classification algorithms such as the back propagation (BP) neural network (NN) [13], neuromorphic analyzers [14], the extreme learning machine (ELM) [15,16,17], the decision tree support vector machine (DTSVM) [18], the quantum clustering-based multi-valued quantum fuzzification decision tree (QC-MQFDT) [19], and the Gaussian Bernoulli deep belief network (GB-DBN) [20] have been used in fault diagnosis of analog circuits. Their average fault diagnosis rates exceed 90% on fixed training and test samples.
All of the above methods address electronic component parameter deviations of ±50%. However, because the models in these references were trained with a single fixed parameter value, their fault diagnosis rates are low in engineering practice. In order to improve the fault diagnosis rate in analog circuits and the generalization ability of the trained models, this paper presents a new method that randomly selects component parameters within the range of parametric variation as unlabeled samples, while representative samples are labeled by experts. Semi-supervised learning (SSL) has received significant attention over the past decade from the computer vision and machine learning research communities [21,22]. During dimensionality reduction, both labeled and unlabeled samples are considered for semi-supervised dimensionality reduction. A semi-supervised dimensionality reduction algorithm based on locality preserving projection (LPP), which optimizes FDA, is proposed to extract circuit features. Then, a semi-supervised cooperative training algorithm is used to diagnose the faults.
The remainder of this paper is organized as follows. First, we outline the lifting wavelet feature extraction method in Section 2. Then, the semi-supervised LPP-optimized FDA algorithm is introduced in Section 3. This is followed by the semi-supervised random forest algorithm, which is elaborated on in Section 4. Afterward, the framework for analog circuit fault diagnosis, the detailed experimental process, and the analysis of results are given in Section 5. Finally, the paper is concluded.

2. Lifting Wavelet Transform

The lifting wavelet transform, also known as the second-generation wavelet transform, improves on the Laurent polynomial convolution algorithm associated with the Euclidean algorithm [23,24]. The lifting wavelet transform uses simple scalar multiplications to replace the convolution operations of the original wavelet transform. It simplifies computation, realizes the integer wavelet transform, and solves the boundary problem. The lifting scheme divides the transformation process into three phases: split/merge, prediction, and update [25,26,27]. According to parity, the split stage divides the input signal $s_i$ into two groups, $s_{i-1}$ and $d_{i-1}$, where the split function is $F(s_i) = (s_{i-1}, d_{i-1})$. The prediction stage uses the forecast $P(s_{i-1})$ of the even sequence $s_{i-1}$ to predict $d_{i-1}$, and the residual replaces $d_{i-1}$, i.e., $d_{i-1} = d_{i-1} - P(s_{i-1})$. The decomposition and prediction process is then repeated on the approximation signal, so that the original signal $s_i$ can be represented as $\{s_n, d_n, \ldots, s_1, d_1\}$. In order to maintain the global characteristics of the signal $s_i$ in the update stage, $Q(s_{i-1}) = Q(s_i)$, an operator $U$ acting on $d_{i-1}$ is introduced to update $s_{i-1}$, i.e., $s_{i-1} = s_{i-1} + U(d_{i-1})$. The reconstruction process is exactly the opposite, as shown in Figure 1.
In the lifting algorithm, the split operator F, the prediction operator P, and the update operator U are expressed as follows:
$$F(s_i) = (\mathrm{even}_{i-1}, \mathrm{odd}_{i-1}), \qquad d_{i-1} = \mathrm{odd}_{i-1} - P(\mathrm{even}_{i-1}), \qquad s_{i-1} = \mathrm{even}_{i-1} + U(d_{i-1}).$$
The three steps of the reconstruction process of the lifting transform, i.e., restore update, restore prediction, and merge, are given as follows:
$$\mathrm{even}_{i-1} = s_{i-1} - U(d_{i-1}), \qquad \mathrm{odd}_{i-1} = d_{i-1} + P(\mathrm{even}_{i-1}), \qquad s_i = \mathrm{merge}(\mathrm{even}_{i-1}, \mathrm{odd}_{i-1}).$$
In this model, different prediction operators $P$ and update operators $U$ can be used to construct the required wavelet functions, for example, the Haar wavelet, the db2 wavelet, and so on.
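To make the split/predict/update scheme concrete, the following Python sketch implements one level of the Haar lifting transform, for which the prediction operator is P(even) = even and the update operator is U(d) = d/2 (the function names are ours, not from the paper):

```python
import numpy as np

def haar_lifting_forward(s):
    """One level of the Haar lifting transform: split, predict, update."""
    even, odd = s[0::2], s[1::2]   # split by parity
    d = odd - even                 # predict: P(even) = even, keep the residual
    a = even + d / 2               # update: U(d) = d/2, preserves the signal mean
    return a, d

def haar_lifting_inverse(a, d):
    """Reconstruction: restore update, restore prediction, merge."""
    even = a - d / 2               # restore update
    odd = d + even                 # restore prediction
    s = np.empty(a.size + d.size)
    s[0::2], s[1::2] = even, odd   # merge
    return s

x = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 16.0, 18.0])
approx, detail = haar_lifting_forward(x)
assert np.allclose(haar_lifting_inverse(approx, detail), x)  # perfect reconstruction
```

Cascading `haar_lifting_forward` on the approximation output yields the multi-level decomposition $\{s_n, d_n, \ldots, s_1, d_1\}$ described above.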

3. Local Fisher Discriminant Analysis (LFDA)

Let $x_i \in \mathbb{R}^d$ $(i = 1, 2, \ldots, n)$ denote the $d$-dimensional samples, where $y_i \in \{1, 2, \ldots, c\}$ is the corresponding label. According to the definitions of the between-class scatter matrix and the within-class scatter matrix, we can obtain
$$S_b = \sum_{i=1}^{c} n_i (\mu_i - \mu)(\mu_i - \mu)^T,$$
$$S_w = \sum_{i=1}^{c} \sum_{x_k \in \mathrm{class}\,i} (\mu_i - x_k)(\mu_i - x_k)^T,$$
where $S_b$ is the between-class scatter matrix, $S_w$ is the within-class scatter matrix, $n_i$ is the number of samples in class $i$, $\mu_i = \frac{1}{n_i} \sum_{x \in \mathrm{class}\,i} x$ is the mean of the samples in class $i$, $\mu = \frac{1}{n} \sum_{i=1}^{n} x_i$ is the mean of all samples, and $(\mu_i - \mu)(\mu_i - \mu)^T$ is the covariance matrix describing the relationship between class $i$ and the overall sample set. The diagonal elements of this matrix represent the variance of the class relative to all samples; similarly, the off-diagonal elements represent the covariance between the class mean and the overall mean, i.e., the degree of correlation or redundancy between the class and the overall samples. The lower the coupling between classes, the higher the degree of aggregation within each class, i.e., the smaller the within-class scatter matrix and, thus, the larger the between-class scatter matrix.
The Fisher discriminant expression is shown as
$$J_{\mathrm{fisher}}(\varphi) = \frac{\varphi^T S_b \varphi}{\varphi^T S_w \varphi},$$
where $\varphi$ is a $d$-dimensional projection vector; FDA selects, as the projection direction, the vector $\varphi$ for which $J_{\mathrm{fisher}}(\varphi)$ reaches its maximum value. This means that the projected samples have the maximum between-class scatter and the minimum within-class scatter.
$$W_{\mathrm{opt}} = \arg\max_{W} \frac{|W^T S_b W|}{|W^T S_w W|} = [w_1, w_2, \ldots, w_n].$$
These formulas find a projection matrix $W_{\mathrm{opt}}$ consisting of the optimal discriminant vectors, which are the eigenvectors corresponding to the largest eigenvalues of $S_b \varphi = \lambda S_w \varphi$. The number of projection axes is $d \le c - 1$.
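As an illustration, the generalized eigenproblem $S_b \varphi = \lambda S_w \varphi$ can be solved directly with NumPy; this is a minimal sketch, in which the small ridge term added to $S_w$ is our own choice to keep it invertible when features are collinear:

```python
import numpy as np

def fda_projection(X, y, r):
    """Top-r Fisher discriminant directions from S_b w = lambda * S_w w."""
    n, d = X.shape
    mu = X.mean(axis=0)
    S_b = np.zeros((d, d))
    S_w = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        diff = (mu_c - mu)[:, None]
        S_b += Xc.shape[0] * diff @ diff.T    # between-class scatter
        S_w += (Xc - mu_c).T @ (Xc - mu_c)    # within-class scatter
    # small ridge keeps S_w invertible; then solve S_w^{-1} S_b w = lambda w
    M = np.linalg.solve(S_w + 1e-6 * np.eye(d), S_b)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:r]].real            # at most c - 1 useful directions
```

Only the $c - 1$ leading eigenvectors carry discriminant information, since $S_b$ has rank at most $c - 1$.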
FDA is a traditional linear supervised dimensionality reduction method [28,29,30]; however, its dimensionality reduction effect is not suitable for multimodal (multi-peak) sample data. For dimensionality reduction of multimodal samples, the first requirement is to preserve the local structure of the data. LPP achieves a good dimensionality reduction effect by preserving the local structure of the data [31], but it can only be used in unsupervised settings, so the label information of the samples cannot be taken into account. In addition, because the between-class scatter matrix is not full rank, FDA can only map the data to a low-dimensional space whose dimension is less than the number of classes.
LPP is a classical manifold learning algorithm. Its main idea is to study the local neighborhood structure of the samples in the high-dimensional space and to preserve this manifold structure after dimensionality reduction [32]. That is, LPP minimizes the weighted sum of squared distances between adjacent samples in the low-dimensional space, and the solution is obtained by computing generalized eigenvalues. In this way, samples that are close before projection remain close after projection:
$$\min \frac{1}{2} \sum_{i,j} (y_i - y_j)^2 S_{ij}.$$
Let $A$ be the affinity matrix of the total sample set; then $A_{i,j}$ represents the correlation between the two samples $x_i$ and $x_j$ when they lie within each other's $k$-nearest neighborhoods.
$$A_{i,j} = \exp\left( -\frac{\| x_i - x_j \|^2}{\sigma_i \sigma_j} \right),$$
where $\sigma_i$ represents the local scale of the sample $x_i$, determined by $\sigma_i = \| x_i - x_i^{(k)} \|$, and $x_i^{(k)}$ is the $k$-th nearest neighbor of $x_i$. Since $A_{i,j} \in [0, 1]$, the closer $x_i$ and $x_j$ are, the larger $A_{i,j}$ is.
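This locally scaled affinity can be computed directly; a short NumPy sketch (the function name and the default k = 7 are our illustrative choices):

```python
import numpy as np

def local_scaling_affinity(X, k=7):
    """Affinity A_ij = exp(-||x_i - x_j||^2 / (sigma_i * sigma_j)),
    where sigma_i is the distance from x_i to its k-th nearest neighbor."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    # column 0 of the row-wise sort is x_i itself, so index k gives the k-th neighbor
    sigma = np.sort(D, axis=1)[:, k]
    return np.exp(-D ** 2 / (sigma[:, None] * sigma[None, :]))
```

The result is symmetric, has ones on the diagonal, and decays toward zero for distant pairs, exactly as the formula above requires.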
When many scattered aggregation points of the same class exist in the sample space, a mapping error might occur in the FDA algorithm. Since LPP is an unsupervised dimensionality reduction method that does not consider class information, overlaps arise when dealing with samples that have similar positions but different classes. In order to overcome the shortcomings of these two methods, local Fisher discriminant analysis (LFDA) has been proposed to compute local between-class and within-class scatter [33,34,35]. Taking advantage of its ability to preserve local information, LPP is applied to FDA to ensure the dimensionality reduction effect on multimodal data and to improve the efficiency of feature extraction.
The pairwise formulations of FDA are expressed as
$$S^{(w)} = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} w_{i,j}^{(w)} (x_i - x_j)(x_i - x_j)^T,$$
$$S^{(b)} = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} w_{i,j}^{(b)} (x_i - x_j)(x_i - x_j)^T,$$
where
$$w_{i,j}^{(w)} = \begin{cases} 1/n_l & \text{if } y_i = y_j = l, \\ 0 & \text{if } y_i \ne y_j, \end{cases}$$
$$w_{i,j}^{(b)} = \begin{cases} 1/n - 1/n_l & \text{if } y_i = y_j = l, \\ 1/n & \text{if } y_i \ne y_j. \end{cases}$$
The expressions of LFDA are defined analogously as
$$S^{(w)} = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} w_{i,j}^{(w)} (x_i - x_j)(x_i - x_j)^T,$$
$$S^{(b)} = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} w_{i,j}^{(b)} (x_i - x_j)(x_i - x_j)^T,$$
where
$$w_{i,j}^{(w)} = \begin{cases} A_{i,j}/n_l & \text{if } y_i = y_j = l, \\ 0 & \text{if } y_i \ne y_j, \end{cases}$$
$$w_{i,j}^{(b)} = \begin{cases} A_{i,j}\,(1/n - 1/n_l) & \text{if } y_i = y_j = l, \\ 1/n & \text{if } y_i \ne y_j. \end{cases}$$
The LFDA projection matrix is then obtained as
$$T_{\mathrm{LFDA}} = \arg\max_{T \in \mathbb{R}^{d \times r}} \operatorname{tr}\!\left[ \left( T^T S^{(w)} T \right)^{-1} T^T S^{(b)} T \right].$$
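Putting the affinity-weighted scatter matrices and the projection step together, a minimal NumPy sketch of LFDA might look as follows. The affinity matrix $A$ is assumed precomputed from the $A_{i,j}$ defined earlier in this section, and the ridge term is our own addition for numerical stability:

```python
import numpy as np

def lfda_projection(X, y, A, r):
    """LFDA sketch: affinity-weighted local scatter matrices, then the trace-ratio
    maximizer approximated via the eigenvectors of S_w^{-1} S_b."""
    n, d = X.shape
    counts = {c: int(np.sum(y == c)) for c in np.unique(y)}
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for i in range(n):
        for j in range(n):
            diff = (X[i] - X[j])[:, None]
            outer = diff @ diff.T
            if y[i] == y[j]:
                n_l = counts[y[i]]
                S_w += 0.5 * (A[i, j] / n_l) * outer            # same-class, weighted by A
                S_b += 0.5 * A[i, j] * (1.0 / n - 1.0 / n_l) * outer
            else:
                S_b += 0.5 * (1.0 / n) * outer                  # different-class pairs
    M = np.linalg.solve(S_w + 1e-6 * np.eye(d), S_b)
    vals, vecs = np.linalg.eig(M)
    order = np.argsort(vals.real)[::-1]
    return vecs[:, order[:r]].real
```

The double loop is written for clarity rather than speed; the same matrices can be vectorized for larger sample sets.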

4. Semi-Supervised Random Forest Algorithm

The semi-supervised cooperative training algorithm [36,37] assumes that there are two independent groups of data with the same distribution. These two groups of labeled data are trained to obtain two classifiers, which then label samples for each other to accomplish semi-supervised learning.
A random forest is an ensemble classifier composed of multiple decision trees [38,39,40,41]; it is a strong classifier formed by combining several weak classifiers through voting. In practice, it is difficult to divide a data set into two disjoint subsets. Instead, a training data subset is sampled from the training data by the bootstrap method, and an attribute subset is randomly selected to keep the trees and the nodes of each decision tree random. The semi-supervised random forest introduces the cooperative training idea of semi-supervised learning into the random forest algorithm, training two random forest classifiers, $H_1$ and $H_2$, on the labeled data, as shown in Figure 2. The two classifiers are used to predict the unlabeled samples, where the consistency of the predicted labels is taken as the confidence degree of each sample. Unlabeled samples with a confidence degree greater than a preset threshold are added to the training samples of the other classifier; each classifier is then retrained, and the procedure iterates until all samples are labeled.
The training sample set $X$ consists of the labeled sample set $X_L = \{x_1, x_2, \ldots, x_l\}$ and the unlabeled set $X_U = \{x_{l+1}, x_{l+2}, \ldots, x_u\}$. The threshold of the classification model is defined as $\theta$. Unlabeled samples whose confidence degrees are greater than $\theta$ are added to the new training set. The number of decision trees in a random forest is set to an odd number, and the size of the characteristic attribute subset of each decision tree is given as $\log_2 M + 1$, where $M$ denotes the number of attributes of the dataset. Unlabeled samples whose prediction consistency across the decision trees of a random forest exceeds the threshold are added to the other classifier's labeled samples, and this is iterated repeatedly until all samples are labeled. Finally, the semi-supervised random forest classification models $H_1(x)$ and $H_2(x)$ are established.
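The co-training loop can be sketched compactly, assuming scikit-learn's RandomForestClassifier as the base learner; the threshold, tree count, and the stratified-bootstrap construction of the two views are our illustrative choices, not the paper's exact settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def stratified_bootstrap(X, y, rng):
    """Bootstrap resample within each class so every class appears in the view."""
    idx = np.concatenate([rng.choice(np.flatnonzero(y == c), np.sum(y == c), replace=True)
                          for c in np.unique(y)])
    return X[idx], y[idx]

def co_train_random_forests(X_l, y_l, X_u, theta=0.9, max_rounds=10, n_trees=11, seed=0):
    """Co-training sketch: two random forests, trained on bootstrap views of the
    labeled set, repeatedly hand each other confidently labeled unlabeled samples."""
    rng = np.random.default_rng(seed)
    X1, y1 = stratified_bootstrap(X_l, y_l, rng)   # view for classifier H1
    X2, y2 = stratified_bootstrap(X_l, y_l, rng)   # view for classifier H2
    remaining = np.arange(len(X_u))                # indices still unlabeled
    for _ in range(max_rounds):
        h1 = RandomForestClassifier(n_estimators=n_trees, random_state=0).fit(X1, y1)
        h2 = RandomForestClassifier(n_estimators=n_trees, random_state=1).fit(X2, y2)
        if remaining.size == 0:
            break
        p1 = h1.predict_proba(X_u[remaining]).max(axis=1)
        p2 = h2.predict_proba(X_u[remaining]).max(axis=1)
        conf1 = remaining[p1 >= theta]             # samples H1 is confident about
        conf2 = remaining[p2 >= theta]             # samples H2 is confident about
        if conf1.size == 0 and conf2.size == 0:
            break                                  # no confident samples left
        if conf1.size:                             # H1 teaches H2
            X2 = np.vstack([X2, X_u[conf1]])
            y2 = np.concatenate([y2, h1.predict(X_u[conf1])])
        if conf2.size:                             # H2 teaches H1
            X1 = np.vstack([X1, X_u[conf2]])
            y1 = np.concatenate([y1, h2.predict(X_u[conf2])])
        remaining = np.setdiff1d(remaining, np.union1d(conf1, conf2))
    return h1, h2
```

An odd `n_trees` matches the paper's requirement that the number of decision trees be odd, so that majority voting cannot tie.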

5. Experimental Results and Discussion

The electronic components in analog circuits mainly include resistors, capacitors, inductors, and integrated circuits. When a component parameter changes within a certain range, the topology of the circuit does not change, but the output features of the circuit do. These include the voltage and current of the time domain response, and the corresponding amplitude and phase in the frequency domain. In effect, parametric fault diagnosis of analog circuits is the location and separation of the electronic components whose parameters have changed. A parametric fault refers to the degradation of a component parameter beyond a certain extent. Since the degraded parameter values are continuous, there is in theory an unlimited number of possible component values; therefore, a large number of unlabeled component values are generated randomly in the parametric fault zone of the electronic components, and some samples are labeled by experts. After the features of the tested circuits are extracted using the lifting wavelet, the unlabeled samples are used as k-nearest neighbors of the labeled samples to reduce the dimension of the features. Then, the semi-supervised random forest algorithm is used to train the fault classifier, and the test samples are put into the classifier after feature extraction to locate the faulty components. The fault diagnosis process is illustrated in Figure 3.

5.1. Sallen–Key Band-Pass Filter Circuit

The Sallen–Key band-pass filter circuit was the same as that in References [3,8,9,14,17], and was composed of five resistors, two capacitors, and one operational amplifier. Its nominal values, for a center frequency of 25 kHz, are shown in Figure 4. The tolerances of the resistors and capacitors were set to 5% and 10%, respectively. A component sensitivity analysis was performed on the circuit under test (CUT) to identify the critical single faults. The sensitivity ranking of the discrete components with respect to the center frequency was R3, C2, R2, and C1 [14]. Therefore, the parametric faults of these four components were mainly considered, as in References [3,8,9,14,17]. The input signal was a monopulse signal with an amplitude of 5 V and a pulse width of 10 μs. Eight single-fault modes were considered, as shown in Table 1.
As seen from the parameter sweep curves in Figure 5, the response curve differed across the parametric fault range of each component. From the sensitivity analysis, the higher the sensitivity of a component, the greater the change of the output. It can be seen from Figure 5a that changes in the C1 parameter had little effect on the output, whereas the parameters in Figure 5b,c had a great influence on it. Therefore, it was necessary to randomly select the parameters within the parametric fault range. From the time domain response curves of the output, it can be seen that some response characteristic curves were very similar; thus, the degree of distinction was low. The lifting wavelet transform was used to extract the fault features. Selecting three cases with similar time domain response curves, feature extraction with a three-layer Haar lifting wavelet was carried out. As seen in Figure 6, the three kinds of time domain original signals were very similar; thus, it was very difficult to classify them using time domain feature extraction. However, after the three-layer lifting wavelet transform, the approximation and detail coefficients of the three layers were clearly different. Therefore, the three layers of detail coefficients were selected for feature extraction.
Within the parametric ranges shown in Table 1, eight types of single faults and 100 component parameter values per type were randomly selected. The transient response curve in the time domain had 2000 dimensions, meaning that the data amount was 9 × 100 × 2000. After the lifting wavelet transform, the three layers of detail coefficients were selected as the features, which had 250 dimensions. Experts randomly labeled 40% of the faulty data, and the k-nearest neighbors of the unlabeled data were selected. In this experiment, LFDA was used to reduce the dimensions to eight, so the data amount was reduced to 9 × 100 × 8. Then, the data were put into the semi-supervised random forest for classification, where the number of attributes of the dataset was eight. By setting the number of decision trees of the random forest to an odd number and the feature attribute subset of each decision tree to 4, two semi-supervised random forest classification models, $H_1(x)$ and $H_2(x)$, were established on the labeled sample set and used to predict the same unlabeled samples. The fault diagnosis rate and its comparison with existing methods are shown in Table 2.
According to engineering statistics, eight types of double faults of electronic components were considered, as shown in Table 3. Within the range of the two faulty elements, 100 groups of fault components were randomly generated, and each fault value was further analyzed using the Monte Carlo method, with tolerances of 5% for resistors and 10% for capacitors under a Gaussian distribution. After the time domain response signals of the 100 × 100 set were extracted, the transient response curves were as depicted in Figure 7. The fault diagnosis rates are shown in Table 4, where the average was 98.6%.

5.2. Three-Opamp Active Band-Stop Filter Circuit

The three-operational-amplifier active band-stop filter circuit was the same as that in References [18,42], and was composed of 12 resistors, four capacitors, and three operational amplifiers. Its nominal values are shown in Figure 8, with tolerance ranges of 5% and 10% for resistors and capacitors, respectively. The input was a monopulse signal with a 5-V peak and a 10-μs pulse width. In Reference [18], only single faults were diagnosed. Thus, the single, double, and mixed fault models were taken from Reference [42], and eight common single failures and three combinations of them were selected, as shown in Table 5.
For each fault range, 100 fault values were randomly selected, and the remaining components were analyzed using the Monte Carlo method to generate 100 combinations, with tolerances of 5% for the other resistors and 10% for the other capacitors under a Gaussian distribution. In this way, 100 × 100 sets of transient response signals were extracted in the time domain, where each transient response curve had 2000 dimensions, meaning that the data amount was 13 × 100 × 100 × 2000. Through the three-layer lifting wavelet transform, the three layers of detail coefficients were selected for feature extraction, each set of which had 250 dimensions; thus, the amount of data was reduced to 13 × 100 × 100 × 250. Experts randomly labeled 30% of the faulty data, and the k-nearest neighbors of the unlabeled data were selected. LFDA was used to reduce the dimensions to eight, so the data amount was further reduced to 13 × 100 × 100 × 8. Then, the data were put into the semi-supervised random forest for classification, where the number of attributes of the dataset was 13. By setting the number of decision trees of the random forest to an odd number and the feature attribute subset to 4, two semi-supervised random forest classification models, $H_1(x)$ and $H_2(x)$, were established on the labeled sample sets and used to predict the same unlabeled samples. As illustrated in Table 6, the average fault diagnosis rate of the proposed method was 98.2%, which is higher than the 93.08% achieved in Reference [42].

6. Conclusions

In this paper, a semi-supervised random forest algorithm for parametric fault diagnosis in analog circuits was proposed. The difficulty in diagnosing analog circuit parametric faults lies in the continuous changes of the parameter values. Existing fault diagnosis models trained with fixed fault component values cannot adapt to engineering applications. The variation of fault parameters produces a large number of fault samples, yet the labeled fault samples are limited. Therefore, a semi-supervised learning algorithm was used to exploit the unlabeled samples with the aid of the labeled ones. In order to improve the accuracy of the semi-supervised classification algorithm, LFDA was utilized for feature dimensionality reduction after feature extraction with the lifting wavelet, taking both labeled and unlabeled samples into account. Two circuits were then used to validate the proposed method, which diagnosed single, multiple, and mixed faults while improving the generalization ability, whereby the fault diagnosis rate is slightly higher than that of existing methods. Future work will cover the implementation of more complex analog circuits and the development of test assemblies.

Author Contributions

L.W. and D.Z. conceived and designed this work; H.Z. and W.Z. collected and analyzed the data; L.W. drafted the manuscript; H.T. revised the manuscript. All authors read and approved the final manuscript.

Funding

This research was supported by the National Natural Science Fund (31501213), the Science and Technology Key project of Henan Province (172102310244, 172102310696) and (182102110250, 182102110356), and the China Postdoctoral Science Foundation (2017M612399).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Tang, S.; Li, Z.; Chen, L. Fault Detection in Analog and Mixed-Signal Circuits by Using Hilbert-Huang Transform and Coherence Analysis; Elsevier: Amsterdam, The Netherlands, 2015. [Google Scholar]
  2. Wang, Y.H.; Yan, Y.Z.; Signal, S. Wavelet-based feature extraction in fault diagnosis for biquad high-pass filter circuit. Math. Probl. Eng. 2016, 2016, 1–13. [Google Scholar] [CrossRef]
  3. Zhang, C.L.; He, Y.G.; Yuan, L.F. A Novel Approach for Diagnosis of Analog Circuit Fault by Using GMKL-SVM and PSO. J. Electron. Test.-Theory Appl. 2016, 32, 531–540. [Google Scholar] [CrossRef]
  4. Long, Y.; Xiong, Y.J.; He, Y.G. A new switched current circuit fault diagnosis approach based on pseudorandom test and preprocess by using entropy and Haar wavelet transform. Analog Integr. Circuits Signal Process. 2017, 91, 445–461. [Google Scholar] [CrossRef]
  5. Li, J.M. The Application of Dual-Tree Complex Wavelet Packet Transform in Fault Diagnosis. Agro Food Ind. Hi-Tech 2017, 28, 406–410. [Google Scholar]
  6. Xie, X.; Li, X.; Bi, D.; Zhou, Q.; Xie, S.; Xie, Y. Analog Circuits Soft Fault Diagnosis Using Rényi’s Entropy. J. Electron. Test. 2015, 31, 217–224. [Google Scholar] [CrossRef]
  7. Long, T.; Jiang, S.; Luo, H.; Deng, C. Conditional entropy-based feature selection for fault detection in analog circuits. Dyna 2016, 91, 309–318. [Google Scholar] [CrossRef]
  8. He, W.; He, Y.; Li, B.; Zhang, C. Analog Circuit Fault Diagnosis via Joint Cross-Wavelet Singular Entropy and Parametric t-SNE. Entropy 2018, 20, 604. [Google Scholar] [CrossRef]
  9. Song, P.; He, Y.; Cui, W. Statistical property feature extraction based on FRFT for fault diagnosis of analog circuits. Analog Integr. Circuits Signal Process. 2016, 87, 427–436. [Google Scholar] [CrossRef]
  10. Zhao, D.; He, Y. A novel binary bat algorithm with chaos and Doppler effect in echoes for analog fault diagnosis. Analog Integr. Circuits Signal Process. 2016, 87, 437–450. [Google Scholar] [CrossRef]
  11. Prieto-Moreno, A.; Llanes-Santiago, O.; García-Moreno, E. Principal components selection for dimensionality reduction using discriminant information applied to fault diagnosis. J. Process Control 2015, 33, 14–24. [Google Scholar] [CrossRef]
  12. Haddad, R.Z.; Strangas, E.G. On the Accuracy of Fault Detection and Separation in Permanent Magnet Synchronous Machines Using MCSA/MVSA and LDA. IEEE Trans. Energy Convers. 2016, 31, 924–934. [Google Scholar] [CrossRef]
  13. Sugiyama, M. Dimensionality Reduction of Multimodal Labeled Data by Local Fisher Discriminant Analysis. J. Mach. Learn. Res. 2007, 8, 1027–1061. [Google Scholar]
  14. Spina, R.; Upadhyaya, S. Linear circuit fault diagnosis using neuromorphic analyzers. IEEE Trans. Circuits Syst. II Analog Digit. Signal Process. 1997, 44, 188–196. [Google Scholar] [CrossRef]
  15. Jia, W.; Zhao, D.; Shen, T.; Ding, S.; Zhao, Y.; Hu, C. An optimized classification algorithm by BP neural network based on PLS and HCA. Appl. Intell. 2015, 43, 1–16. [Google Scholar] [CrossRef]
  16. Yuan, Z.; He, Y.; Yuan, L. Diagnostics Method for Analog Circuits Based on Improved KECA and Minimum Variance ELM. IOP Conf. Ser. Mater. Sci. Eng. 2017. [Google Scholar] [CrossRef]
  17. Yu, W.X.; Sui, Y.; Wang, J. The Faults Diagnostic Analysis for Analog Circuit Based on FA-TM-ELM. J. Electron. Test. 2016, 32, 1–7. [Google Scholar] [CrossRef]
  18. Ma, Q.; He, Y.; Zhou, F. A new decision tree approach of support vector machine for analog circuit fault diagnosis. Analog Integr. Circuits Signal Process. 2016, 88, 455–463. [Google Scholar] [CrossRef]
  19. Cui, Y.Q.; Shi, J.Y.; Wang, Z.L. Analog circuit fault diagnosis based on Quantum Clustering based Multi-valued Quantum Fuzzification Decision Tree (QC-MQFDT). Measurement 2016, 93, 421–434. [Google Scholar] [CrossRef]
  20. Liu, Z.B.; Jia, Z.; Vong, C.M. Capturing High-Discriminative Fault Features for Electronics-Rich Analog System via Deep Learning. IEEE Trans. Ind. Inform. 2017, 13, 1213–1226. [Google Scholar] [CrossRef]
  21. Zhuang, L.; Zhou, Z.; Gao, S.; Yin, J.; Lin, Z.; Ma, Y. Label Information Guided Graph Construction for Semi-Supervised Learning. IEEE Trans. Image Process. 2017, 26, 4182–4192. [Google Scholar] [CrossRef]
  22. Zhou, X.; Prasad, S. Active and Semisupervised Learning with Morphological Component Analysis for Hyperspectral Image Classification. IEEE Geosci. Remote Sens. Lett. 2017, 26, 1–5. [Google Scholar] [CrossRef]
  23. Guoming, S.; Houjun, W.; Hong, L. Analog circuit fault diagnosis using lifting wavelet transform and SVM. J. Electron. Meas. Instrum. 2010, 24, 17–22. [Google Scholar]
  24. Qing, Y.; Feng, T.; Dazhi, W.; Dongsheng, W.; Anna, W. Real-time fault diagnosis approach based on lifting wavelet and recursive LSSVM. Chin. J. Sci. Instrum. 2011, 32, 596–602. [Google Scholar]
  25. Pan, H.; Siu, W.C.; Law, N.F. A fast and low memory image coding algorithm based on lifting wavelet transform and modified SPIHT. Signal Process. Image Commun. 2008, 23, 146–161. [Google Scholar] [CrossRef]
  26. Hou, X.; Yang, J.; Jiang, G.; Qian, X. Complex SAR Image Compression Based on Directional Lifting Wavelet Transform with High Clustering Capability. IEEE Trans. Geosci. Remote Sens. 2013, 51, 527–538. [Google Scholar] [CrossRef]
  27. Roy, A.; Misra, A.P. Audio signal encryption using chaotic Hénon map and lifting wavelet transforms. Eur. Phys. J. Plus 2017, 132, 524. [Google Scholar] [CrossRef]
  28. Chiang, L.H.; Kotanchek, M.E.; Kordon, A.K. Fault diagnosis based on Fisher discriminant analysis and support vector machines. Comput. Chem. Eng. 2004, 28, 1389–1401. [Google Scholar] [CrossRef]
  29. Yin, Y.; Hao, Y.; Bai, Y.; Yu, H. A Gaussian-based kernel Fisher discriminant analysis for electronic nose data and applications in spirit and vinegar classification. J. Food Meas. Charact. 2017, 11, 24–32. [Google Scholar] [CrossRef]
  30. Li, C.; Jiang, K.; Zhao, X.; Fan, P.; Wang, X.; Liu, C. Spectral identification of melon seeds variety based on k-nearest neighbor and Fisher discriminant analysis. In Proceedings of the AOPC 2017: Optical Spectroscopy and Imaging, Beijing, China, 4–6 June 2017. [Google Scholar]
  31. Wang, Z.; Ruan, Q.; An, G. Facial expression recognition using sparse local Fisher discriminant analysis. Neurocomputing 2016, 174, 756–766. [Google Scholar] [CrossRef]
  32. Yu, Q.; Wang, R.; Li, B.N.; Yang, X.; Yao, M. Robust Locality Preserving Projections With Cosine-Based Dissimilarity for Linear Dimensionality Reduction. IEEE Access 2017, 5, 2676–2684. [Google Scholar] [CrossRef]
  33. Sugiyama, M.; Idé, T.; Nakajima, S.; Sese, J. Semi-supervised local Fisher discriminant analysis for dimensionality reduction. Mach. Learn. 2010, 78, 35. [Google Scholar] [CrossRef]
  34. Wang, S.; Lu, J.; Gu, X.; Du, H.; Yang, J. Semi-supervised linear discriminant analysis for dimension reduction and classification. Pattern Recognit. 2016, 57, 179–189. [Google Scholar] [CrossRef]
  35. Cheng, G.; Zhu, F.; Xiang, S.; Wang, Y.; Pan, C. Semisupervised Hyperspectral Image Classification via Discriminant Analysis and Robust Regression. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 9, 595–608. [Google Scholar] [CrossRef]
  36. Blum, A.; Mitchell, T. Combining labeled and unlabeled data with co-training. In Proceedings of the Eleventh Annual Conference on Computational Learning Theory, Madison, WI, USA, 24–26 July 1998; pp. 92–100. [Google Scholar]
  37. Zhao, J.H.; Wei-Hua, L.I. One of semi-supervised classification algorithm named Co-S3OM based on cooperative training. Appl. Res. Comput. 2013, 30, 3237–3239. [Google Scholar]
  38. Díaz-Uriarte, R.; De Andres, S.A. Gene selection and classification of microarray data using random forest. BMC Bioinform. 2006, 7, 3. [Google Scholar] [CrossRef] [PubMed]
  39. Belgiu, M.; Drăguţ, L. Random forest in remote sensing: A review of applications and future directions. ISPRS J. Photogramm. Remote Sens. 2016, 114, 24–31. [Google Scholar] [CrossRef]
  40. Li, C.; Sanchez, R.V.; Zurita, G.; Cerrada, M.; Cabrera, D.; Vásquez, R.E. Gearbox fault diagnosis based on deep random forest fusion of acoustic and vibratory signals. Mech. Syst. Signal Process. 2016, 76, 283–293. [Google Scholar] [CrossRef]
  41. Mellor, A.; Boukir, S.; Haywood, A.; Jones, S. Exploring issues of training data imbalance and mislabelling on random forest performance for large area land cover classification using the ensemble margin. ISPRS J. Photogramm. Remote Sens. 2015, 105, 155–168. [Google Scholar] [CrossRef]
  42. Jiang, Y.; Wang, Y.; Luo, H. Fault diagnosis of analog circuit based on a second map SVDD. Analog Integr. Circuits Signal Process. 2015, 85, 395–404. [Google Scholar] [CrossRef]
Figure 1. The decomposition and reconstruction of the lifting wavelet transform.
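The split–predict–update structure of Figure 1 can be illustrated with the Haar lifting steps used for the three-layer decomposition shown later in Figure 6. The sketch below is our own minimal reconstruction of a Haar lifting transform, not the authors' code; function names are illustrative.

```python
import numpy as np

def haar_lifting_forward(x):
    """One level of Haar lifting: split -> predict -> update.
    Returns (approximation, detail) coefficients."""
    even, odd = x[0::2], x[1::2]
    detail = odd - even            # predict: odd samples from even neighbours
    approx = even + detail / 2.0   # update: preserve the running mean
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order for perfect reconstruction."""
    even = approx - detail / 2.0
    odd = even + detail
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

def decompose(x, levels=3):
    """Three-layer decomposition as in Figure 6: recurse on the approximation."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_lifting_forward(a)
        coeffs.append(d)
    return a, coeffs
```

Because every lifting step is invertible by construction, the inverse transform recovers the input exactly, which is the property that makes the lifting scheme attractive for fast feature extraction.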
Figure 2. Semi-supervised random forest classifier.
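The co-training idea behind Figure 2 follows Blum and Mitchell [36]: two classifiers trained on different feature views label each other's most confident unlabeled samples. The following is a simplified sketch with scikit-learn random forests as base learners; the even feature-view split, the confidence threshold, and the function name are our illustrative choices, not the paper's exact algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def co_train(X_lab, y_lab, X_unlab, n_rounds=5, conf=0.9, seed=0):
    """Co-training sketch: two random forests on disjoint feature views
    exchange confidently pseudo-labeled samples for a few rounds."""
    half = X_lab.shape[1] // 2
    views = [slice(0, half), slice(half, None)]  # two disjoint feature views
    X, y, U = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(n_rounds):
        clfs = [RandomForestClassifier(n_estimators=50, random_state=seed)
                .fit(X[:, v], y) for v in views]
        if U.shape[0] == 0:
            break
        newly = np.zeros(U.shape[0], dtype=bool)
        pseudo = np.empty(U.shape[0], dtype=y.dtype)
        for clf, v in zip(clfs, views):
            proba = clf.predict_proba(U[:, v])
            confident = proba.max(axis=1) >= conf
            pseudo[confident] = clf.classes_[proba.argmax(axis=1)][confident]
            newly |= confident
        if not newly.any():
            break  # no view is confident enough; stop early
        X = np.vstack([X, U[newly]])            # grow the labeled set
        y = np.concatenate([y, pseudo[newly]])  # with pseudo-labels
        U = U[~newly]
    return clfs
```

At prediction time, the two forests' class probabilities can be averaged, each forest seeing only its own feature view.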
Figure 3. Parametric fault diagnosis process of an analog circuit based on semi-supervised learning.
Figure 4. Sallen–Key band-pass filter.
Figure 5. Representative response curves of parametric faults in different fault modes: (a) F1, three representative response curves of C1 within the range 5 nF (1 + 50%) to 5 nF (1 + 100%); (b) F6, three representative response curves of R2 within the range 3 kΩ (1 − 80%) to 3 kΩ (1 − 50%); (c) F8, three representative response curves of R3 within the range 2 kΩ (1 − 80%) to 2 kΩ (1 − 50%).
Figure 6. Three types of fault signals with low fault differentiation and their three-layer Haar lifting wavelet decompositions: (a) lifting wavelet feature extraction of F0; (b) lifting wavelet feature extraction of F4; (c) lifting wavelet feature extraction of F6.
Figure 7. Typical transient response curves of the double faults.
Figure 8. Three-opamp active band-stop filter circuit.
Table 1. Single fault in Sallen–Key band-pass filter.
| Fault ID | Fault Mode | Nominal | Faulty Value and Variation Percentage |
|---|---|---|---|
| F0 | normal | --- | --- |
| F1 | C1↑ | 5 nF | 5 nF (1 + 50%) to 5 nF (1 + 100%) |
| F2 | C1↓ | 5 nF | 5 nF (1 − 80%) to 5 nF (1 − 50%) |
| F3 | C2↑ | 5 nF | 5 nF (1 + 50%) to 5 nF (1 + 100%) |
| F4 | C2↓ | 5 nF | 5 nF (1 − 80%) to 5 nF (1 − 50%) |
| F5 | R2↑ | 3 kΩ | 3 kΩ (1 + 50%) to 3 kΩ (1 + 100%) |
| F6 | R2↓ | 3 kΩ | 3 kΩ (1 − 80%) to 3 kΩ (1 − 50%) |
| F7 | R3↑ | 2 kΩ | 2 kΩ (1 + 50%) to 2 kΩ (1 + 100%) |
| F8 | R3↓ | 2 kΩ | 2 kΩ (1 − 80%) to 2 kΩ (1 − 50%) |
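As described in the abstract, training fault values are drawn at random within each parametric interval of Table 1 rather than being fixed. A hypothetical sketch of that sampling step is shown below; the dictionary layout and function name are ours, with the interval bounds taken from Table 1.

```python
import random

# Parametric fault intervals from Table 1:
# fault ID -> (component, nominal value, lower multiplier, upper multiplier)
FAULT_INTERVALS = {
    "F1": ("C1", 5e-9, 1.50, 2.00),  # C1 up:   5 nF * [1.5, 2.0]
    "F2": ("C1", 5e-9, 0.20, 0.50),  # C1 down: 5 nF * [0.2, 0.5]
    "F5": ("R2", 3e3, 1.50, 2.00),   # R2 up:   3 kOhm * [1.5, 2.0]
    "F6": ("R2", 3e3, 0.20, 0.50),   # R2 down: 3 kOhm * [0.2, 0.5]
}

def sample_fault_value(fault_id, rng=random):
    """Draw one training fault value uniformly inside the fault interval."""
    _, nominal, lo, hi = FAULT_INTERVALS[fault_id]
    return nominal * rng.uniform(lo, hi)
```

Sampling across the whole interval, instead of fixing a single deviation such as +50%, is what gives the trained model its broader generalization over fault severities.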
Table 2. The single-fault diagnosis results of Sallen–Key band-pass filter.
| Fault ID | Fault Type | Nominal | Method 1 [9] (Fault Value / Accuracy) | Method 2 [3] (Fault Value / Accuracy) | Method 3 [17] (Fault Value / Accuracy) | Proposed Method (Fault Value / Accuracy) |
|---|---|---|---|---|---|---|
| F0 | normal | --- | --- / 97.2% | --- / 99% | --- / 100% | --- / 100% |
| F1 | C1↑ | 5 nF | 7.5 nF / 99% | 10 nF / 100% | 7.5 nF / 95% | 7.5 nF to 10 nF / 100% |
| F2 | C1↓ | 5 nF | 2.5 nF / 100% | 2.5 nF / 100% | 2.5 nF / 100% | 1 nF to 2.5 nF / 100% |
| F3 | C2↑ | 5 nF | 7.5 nF / 96% | 10 nF / 100% | 7.5 nF / 90% | 7.5 nF to 10 nF / 100% |
| F4 | C2↓ | 5 nF | 2.5 nF / 97% | 2.5 nF / 100% | 2.5 nF / 100% | 1 nF to 2.5 nF / 100% |
| F5 | R2↑ | 3 kΩ | 4.5 kΩ / 98% | 6 kΩ / 99.3% | 4.5 kΩ / 100% | 4.5 kΩ to 6 kΩ / 98% |
| F6 | R2↓ | 3 kΩ | 1.5 kΩ / 100% | 1.5 kΩ / 99.3% | 1.5 kΩ / 100% | 0.6 kΩ to 1.5 kΩ / 95% |
| F7 | R3↑ | 2 kΩ | 3 kΩ / 100% | 4 kΩ / 100% | 3 kΩ / 95% | 3 kΩ to 4 kΩ / 100% |
| F8 | R3↓ | 2 kΩ | 1 kΩ / 98.6% | 1 kΩ / 100% | 1 kΩ / 100% | 0.4 kΩ to 1 kΩ / 100% |
Table 3. Double faults of Sallen–Key band-pass filter.
| Fault ID | Fault Mode | Nominal | Faulty Value and Variation Percentage |
|---|---|---|---|
| F0 | --- | --- | --- |
| F1 | C1↑, C2↑ | 5 nF, 5 nF | 5 nF (1 + 50%) to 5 nF (1 + 100%); 5 nF (1 + 50%) to 5 nF (1 + 100%) |
| F2 | C1↓, C2↓ | 5 nF, 5 nF | 5 nF (1 − 80%) to 5 nF (1 − 50%); 5 nF (1 − 80%) to 5 nF (1 − 50%) |
| F3 | R2↑, R3↑ | 3 kΩ, 2 kΩ | 3 kΩ (1 + 50%) to 3 kΩ (1 + 100%); 2 kΩ (1 + 50%) to 2 kΩ (1 + 100%) |
| F4 | R2↓, R3↓ | 3 kΩ, 2 kΩ | 3 kΩ (1 − 80%) to 3 kΩ (1 − 50%); 2 kΩ (1 − 80%) to 2 kΩ (1 − 50%) |
| F5 | R2↑, C1↑ | 3 kΩ, 5 nF | 3 kΩ (1 + 50%) to 3 kΩ (1 + 100%); 5 nF (1 + 50%) to 5 nF (1 + 100%) |
| F6 | R2↑, C2↓ | 3 kΩ, 5 nF | 3 kΩ (1 + 50%) to 3 kΩ (1 + 100%); 5 nF (1 − 80%) to 5 nF (1 − 50%) |
| F7 | R3↓, C1↑ | 2 kΩ, 5 nF | 2 kΩ (1 − 80%) to 2 kΩ (1 − 50%); 5 nF (1 + 50%) to 5 nF (1 + 100%) |
| F8 | R3↓, C2↓ | 2 kΩ, 5 nF | 2 kΩ (1 − 80%) to 2 kΩ (1 − 50%); 5 nF (1 − 80%) to 5 nF (1 − 50%) |
Table 4. The double fault diagnosis result of Sallen–Key band-pass filter.
| Fault ID | F0 | F1 | F2 | F3 | F4 | F5 | F6 | F7 | F8 |
|---|---|---|---|---|---|---|---|---|---|
| Accuracy | 100% | 100% | 100% | 100% | 96% | 100% | 100% | 98% | 94% |
Table 5. The fault modes of three-opamp active band-stop filter circuit.
| Fault ID | Fault Mode | Nominal | Faulty Value and Variation Percentage |
|---|---|---|---|
| F0 | normal | --- | --- |
| F1 | C4 open | 10 nF | 100 MΩ |
| F2 | R1↑ | 15 kΩ | 15 kΩ (1 + 20%) to 15 kΩ (1 + 50%) |
| F3 | R2↑ | 15 kΩ | 15 kΩ (1 + 20%) to 15 kΩ (1 + 50%) |
| F4 | C2↓ | 10 nF | 10 nF (1 − 50%) to 10 nF (1 − 20%) |
| F5 | C3↑ | 10 nF | 10 nF (1 + 50%) to 10 nF (1 + 100%) |
| F6 | R8↓ | 5.65 kΩ | 5.65 kΩ (1 − 80%) to 5.65 kΩ (1 − 50%) |
| F7 | R9↑ | 10 kΩ | 10 kΩ (1 + 50%) to 10 kΩ (1 + 100%) |
| F8 | R10↓ | 10 kΩ | 10 kΩ (1 − 80%) to 10 kΩ (1 − 50%) |
| F9 | R11↑ | 10 kΩ | 10 kΩ (1 + 50%) to 10 kΩ (1 + 100%) |
| F10 | R5↓, R6↑, C2↓ | 31 kΩ, 31 kΩ, 10 nF | 31 kΩ (1 − 50%) to 31 kΩ (1 − 20%); 31 kΩ (1 + 20%) to 31 kΩ (1 + 50%); 10 nF (1 − 50%) to 10 nF (1 − 20%) |
| F11 | R8↓, R9↑, C3↑ | 5.65 kΩ, 10 kΩ, 10 nF | 5.65 kΩ (1 − 50%) to 5.65 kΩ (1 − 20%); 10 kΩ (1 + 20%) to 10 kΩ (1 + 50%); 10 nF (1 + 20%) to 10 nF (1 + 50%) |
| F12 | R10↓, R11↑ | 10 kΩ, 10 kΩ | 10 kΩ (1 − 50%) to 10 kΩ (1 − 20%); 10 kΩ (1 + 20%) to 10 kΩ (1 + 50%) |
Table 6. The fault diagnosis results of three-opamp active band-stop filter circuit.
| Fault ID | Fault Type | Nominal | Method 1 [42] Fault Value | Proposed Method Fault Value |
|---|---|---|---|---|
| F0 | normal | --- | --- | --- |
| F1 | C4 open | 10 nF | C4 open | 100 MΩ |
| F2 | R1↑ | 15 kΩ | 15 kΩ (1 + 20%) | 15 kΩ (1 + 20%) to 15 kΩ (1 + 50%) |
| F3 | R2↑ | 15 kΩ | 15 kΩ (1 + 20%) | 15 kΩ (1 + 20%) to 15 kΩ (1 + 50%) |
| F4 | C2↓ | 10 nF | 10 nF (1 − 20%) | 10 nF (1 − 50%) to 10 nF (1 − 20%) |
| F5 | C3↑ | 10 nF | 10 nF (1 + 50%) | 10 nF (1 + 50%) to 10 nF (1 + 100%) |
| F6 | R8↓ | 5.65 kΩ | 5.65 kΩ (1 − 50%) | 5.65 kΩ (1 − 80%) to 5.65 kΩ (1 − 50%) |
| F7 | R9↑ | 10 kΩ | 10 kΩ (1 + 50%) | 10 kΩ (1 + 50%) to 10 kΩ (1 + 100%) |
| F8 | R10↓ | 10 kΩ | 10 kΩ (1 − 50%) | 10 kΩ (1 − 80%) to 10 kΩ (1 − 50%) |
| F9 | R11↑ | 10 kΩ | 10 kΩ (1 + 50%) | 10 kΩ (1 + 50%) to 10 kΩ (1 + 100%) |
| F10 | R5↓, R6↑, C2↓ | 31 kΩ, 31 kΩ, 10 nF | 31 kΩ (1 − 20%); 31 kΩ (1 + 20%); 10 nF (1 − 20%) | 31 kΩ (1 − 50%) to 31 kΩ (1 − 20%); 31 kΩ (1 + 20%) to 31 kΩ (1 + 50%); 10 nF (1 − 50%) to 10 nF (1 − 20%) |
| F11 | R8↓, R9↑, C3↑ | 5.65 kΩ, 10 kΩ, 10 nF | 5.65 kΩ (1 − 80%); 10 kΩ (1 + 20%); 10 nF (1 + 20%) | 5.65 kΩ (1 − 50%) to 5.65 kΩ (1 − 20%); 10 kΩ (1 + 20%) to 10 kΩ (1 + 50%); 10 nF (1 + 20%) to 10 nF (1 + 50%) |
| F12 | R10↓, R11↑ | 10 kΩ, 10 kΩ | 10 kΩ (1 − 20%); 10 kΩ (1 + 20%) | 10 kΩ (1 − 50%) to 10 kΩ (1 − 20%); 10 kΩ (1 + 20%) to 10 kΩ (1 + 50%) |
| Average fault diagnosis rate | | | 93.08% | 98.2% |

Wang, L.; Zhou, D.; Tian, H.; Zhang, H.; Zhang, W. Parametric Fault Diagnosis of Analog Circuits Based on a Semi-Supervised Algorithm. Symmetry 2019, 11, 228. https://doi.org/10.3390/sym11020228
