
A Novel Semi-Supervised Method of Electronic Nose for Indoor Pollution Detection Trained by M-S4VMs

College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
* Author to whom correspondence should be addressed.
Sensors 2016, 16(9), 1462; https://doi.org/10.3390/s16091462
Submission received: 16 July 2016 / Revised: 2 September 2016 / Accepted: 5 September 2016 / Published: 10 September 2016
(This article belongs to the Section Physical Sensors)

Abstract
An electronic nose (E-nose), a device intended to detect odors or flavors, has been widely used in many fields. Many labeled samples are needed to build an ideal E-nose classification model. However, labeled samples are not easy to obtain, and in the real world gas samples are often complex and unlabeled. It is therefore desirable for an E-nose not only to classify unlabeled samples, but also to use those samples to refine its classification model. In this paper, we first introduce a semi-supervised learning algorithm called S4VMs and extend it to a multi-classification algorithm to classify samples for an E-nose. We then enhance its performance by feeding the unlabeled samples it has classified back into its model, and by using an optimization algorithm called quantum-behaved particle swarm optimization (QPSO) to find the optimal parameters for classification. Comparisons with other semi-supervised learning algorithms show that our multi-classification algorithm performs well in the classification system of an E-nose after learning from unlabeled samples.

1. Introduction

Air pollution attracts increasing attention as people grow more aware of air quality issues. As a result, it is important to detect indoor air pollution effectively. An electronic nose (E-nose) is a device comprising a gas sensor array and a processing unit that runs artificial intelligence algorithms. It is commonly used for gas analysis problems [1,2,3] and has proven effective. According to previous studies, the E-nose has been applied in many fields such as environmental monitoring [4,5], food detection [6,7,8], dangerous object detection [9], disease diagnosis [10,11,12,13], and aerospace applications [14].
As a branch of gas detection, indoor gas pollution has caused many health problems for people who spend a large amount of their time indoors: inhaling too much polluted air, even unconsciously, can harm health in many ways. Moreover, indoor pollution gases such as formaldehyde, toluene, and carbon monoxide are hard to detect and classify by conventional means, so finding an effective way to detect indoor gas pollution has become a hot topic. Our previous research has shown that the E-nose performs well in analyzing these indoor pollution gases [15,16].
To improve the performance of the E-nose, researchers have proposed many strategies. One focus is on finding new materials to build more advanced sensor arrays, because sensor arrays usually have limitations when applied to different fields: a sensor array may respond very quickly to one type of gas yet lack sensitivity to another. As a result, many sensor arrays have been proposed to improve the performance of the E-nose, such as electrochemical, metal oxide, conducting polymer, and acoustic wave sensor arrays [17]. Each has advantages in specific fields. For example, metal oxide sensors respond very well to some gases at sub-ppm levels, while electrochemical sensors are robust and have low power consumption. Electrochemical sensors can also operate at room temperature, whereas other types of sensors often require specific operating temperatures. Moreover, new types of sensor arrays such as colorimetric and optical sensors have also been applied to the E-nose [18,19] and can improve its performance significantly. On the other hand, improving the effectiveness of data processing can also improve E-nose performance. Data processing can be roughly divided into two stages: feature extraction and classification. In practical E-nose analysis, the dimensionality of the sample data can be very high, which makes the data hard to handle and may increase running time. Thus, feature extraction has been introduced as a valid method to reduce the dimensionality of the samples. Many effective feature extraction methods have been applied to the E-nose; for example, principal component analysis (PCA) [20,21] uses an orthogonal transformation to convert a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables, which can significantly reduce the dimensionality of the target samples.
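As an illustration of the dimensionality reduction described above, the following is a minimal PCA sketch using scikit-learn. The six-sensor response matrix here is synthetic and hypothetical, not the authors' actual data pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical E-nose data: 100 samples x 60 features
# (e.g., time-series features extracted from a sensor array).
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 60))

# Keep enough principal components to explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                       # fewer than 60 columns
print(pca.explained_variance_ratio_.sum())   # at least 0.95
```

In practice the retained components would feed the downstream classifier in place of the raw sensor features.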
Another way to enhance the efficiency of an E-nose system is to use advanced analysis algorithms. In the past, researchers often used genetic algorithm [22,23,24] as the major algorithm for classification. However, as samples become more and more complex, these algorithms have been replaced due to their longer analysis times and lower efficiencies. By contrast, many new algorithms such as support vector machines (SVMs) [25,26] and artificial neural networks (ANNs) [27,28] have been introduced to this field.
As a mature classification technique [29,30], the SVM performs well on binary (two-class) problems. However, in gas classification the number of gases is often more than two, which increases the complexity of the task, so a single SVM is rarely used to solve multi-class problems directly. Instead, it is common to combine several binary SVM classifiers to solve a given multi-class problem. Researchers have proposed many strategies, such as winner-takes-all (WTA-SVM), the one-versus-one method implemented by max-wins voting (MWV-SVM), the directed acyclic graph procedure (DAG-SVM) [31], and error-correcting codes [32].
On the other hand, an SVM often requires parameter tuning to reach its best performance. Thus, particle swarm optimization (PSO) [33,34] and QPSO [35] have been applied as optimization algorithms to find the best parameters for the classifiers. Our previous work has shown that these optimization methods can markedly improve the classification rate of the E-nose [36].
However, in the real world it is not easy for an E-nose to classify different gases in the presence of unlabeled samples, because real-world samples are more complex than the labeled samples used in the laboratory. Moreover, in sampling experiments the label information of samples is sometimes lost through operator error, for example when the label information is not written on the tag, which wastes experimental samples. On the other hand, unlabeled samples carry plenty of information that can effectively enhance the performance of the E-nose [37]. Additionally, unlabeled samples are often easier to obtain and require less time to train the E-nose, and training with them yields performance only slightly worse than training with labeled samples. Thus, adding unlabeled samples is an alternative way to enhance E-nose performance.
Researchers have explored many methods in order to make full use of unlabeled samples as well as minimize the risk of error accumulation. These methods can be roughly divided into three classes:
(1)
Active learning: this method selects the most informative data from which to learn, and can thus achieve good accuracy with fewer training labels [38,39].
(2)
Transfer learning: these techniques draw knowledge from related but different tasks to achieve better accuracy on the main task [40,41,42]. However, they often need sufficient labeled data to provide accurate knowledge.
(3)
Semi-supervised learning (SSL): this learning paradigm uses a small amount of labeled data, with the help of a large amount of unlabeled data, to determine the labels of the data [43,44].
Given the practical situation, we chose semi-supervised learning to improve E-nose performance, because many unlabeled samples are easy to obtain. The rest of this paper is organized as follows: Section 2 introduces the E-nose system, the experimental procedure, and the data set; Section 3 presents the theory of the S4VMs technique and our enhanced algorithm; Section 4 reports the results of multi-classification S4VMs (M-S4VMs) used to train the E-nose classification system to distinguish the target pollution gases, and compares it with other semi-supervised algorithms. Finally, we draw our conclusions in Section 5.

2. E-Nose System and Experiments

A detailed description of the E-nose system and the experimental procedure can be found in our previous research [37]. Here we only describe the details that differ from the previous experiment.
In this experiment, we selected three common indoor pollution gases to distinguish: carbon monoxide (CO), toluene (C7H8), and formaldehyde (CH2O). We applied a spectrophotometric method and gas chromatography (GC) to determine the concentrations of these three gases. The true concentrations of the three gases are shown in Table 1.
Another difference in this paper is the data set. To prove the efficiency of the semi-supervised algorithm in complex multi-gas classification, we set different proportions of labeled and unlabeled data. The total data was divided equally into two groups, used for training and testing, respectively, so there were 501 samples of complex multi-gases in the training set. We then varied the proportion of data used as labeled data, adjusting the unlabeled portion accordingly to keep the total constant. We tested the algorithm over a wide range of unlabeled data rates, from 10% to 90%. Typical examples of these data sets are shown in Table 2, Table 3 and Table 4.
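The labeled/unlabeled split described above can be sketched as follows. This is a generic illustration of the experimental setup, not the authors' actual preprocessing code; the helper name and the placeholder arrays are hypothetical:

```python
import numpy as np

def split_labeled_unlabeled(X, y, unlabeled_rate, seed=0):
    """Split a training set into labeled and unlabeled parts.

    `unlabeled_rate` is the fraction of training samples whose labels
    are withheld (varied from 10% to 90% in the experiments).
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    n_unlabeled = int(round(n * unlabeled_rate))
    unlab, lab = idx[:n_unlabeled], idx[n_unlabeled:]
    # y[unlab] is returned only so accuracy on the unlabeled
    # part can be evaluated afterward.
    return X[lab], y[lab], X[unlab], y[unlab]

# Example: 501 training samples, 30% unlabeled.
X = np.zeros((501, 6))
y = np.arange(501) % 3
Xl, yl, Xu, yu = split_labeled_unlabeled(X, y, 0.30)
print(len(yl), len(yu))  # 351 150
```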

3. M-S4VMs Technique

3.1. S4VMs

S4VMs is an enhanced algorithm derived from S3VMs, a branch of transductive support vector machines (TSVM). Yufeng Li and Zhihua Zhou [45] have demonstrated that S4VMs exploits unlabeled samples better than S3VMs and TSVM, and carries lower risk when using unlabeled samples. The principle of S4VMs is as follows:
First, let $\{\hat{y}_t\}_{t=1}^{T}$ denote the label assignments of multiple low-density separators, let $y^*$ be the ground-truth label assignment, and let $y^{svm} \in \{\pm 1\}^u$ denote the predictions of the inductive SVM on the unlabeled data. For each label assignment $y \in \{\pm 1\}^u$, $\mathrm{earn}(y, y^*, y^{svm})$ and $\mathrm{lose}(y, y^*, y^{svm})$ are, respectively, the accuracy gained and lost relative to the inductive SVM. The next step is to improve the performance over the inductive SVM through $y$; this step can be cast as the optimization problem in Equation (1):
$$\max_{y \in \{\pm 1\}^u} \; \mathrm{earn}(y, y^*, y^{svm}) - \lambda\, \mathrm{lose}(y, y^*, y^{svm}) \quad (1)$$
In this equation, $\lambda$ is a parameter that trades off how much risk is undertaken during the process. To simplify notation, we write $J(y, \hat{y}, y^{svm}) = \mathrm{earn}(y, \hat{y}, y^{svm}) - \lambda\, \mathrm{lose}(y, \hat{y}, y^{svm})$.
To solve Equation (1), we must know $y$, $y^*$, and $y^{svm}$. Here $y^{svm}$ is the inductive SVM's prediction on the unlabeled samples, but the ground truth $y^*$ is unknown, which makes Equation (1) impossible to evaluate directly. Thus, we assume that the ground-truth assignment $y^*$ can be realized by a low-density separator in $\{\hat{y}_t\}_{t=1}^{T}$, i.e., $y^* \in \mathcal{M} = \{\hat{y}_t\}_{t=1}^{T}$. We define $\bar{y}$ to optimize the worst-case improvement over the inductive SVM; $\bar{y}$ is then given by Equation (2):
$$\bar{y} = \arg\max_{y \in \{\pm 1\}^u} \; \min_{\hat{y} \in \mathcal{M}} \; J(y, \hat{y}, y^{svm}) \quad (2)$$
Here is a theorem which shows that the hypothesis is correct. This theorem is shown below:
Theorem 1. 
If $y^* \in \{\hat{y}_t\}_{t=1}^{T}$ and $\lambda \ge 1$, the accuracy of $\bar{y}$ is never worse than that of $y^{svm}$.
From Theorem 1 we obtain Proposition 1: if $y^* \in \{\hat{y}_t\}_{t=1}^{T}$ and $\lambda \ge 1$, the accuracy of $y$ is never worse than that of $y^{svm}$, as long as $y$ satisfies $\min_{\hat{y} \in \mathcal{M}} J(y, \hat{y}, y^{svm}) \ge 0$.
On the other hand, $\mathrm{earn}(y, y^*, y^{svm})$ and $\mathrm{lose}(y, y^*, y^{svm})$ are linear functions of $y$ and can be expressed as Equations (3) and (4):
$$\mathrm{earn}(y, y^*, y^{svm}) = \sum_{j=1}^{u} I(y_j = y_j^*)\, I(y_j^* \ne y_j^{svm}) = \sum_{j=1}^{u} \frac{1 + y_j y_j^*}{2} \cdot \frac{1 - y_j^{svm} y_j^*}{2} \quad (3)$$
$$\mathrm{lose}(y, y^*, y^{svm}) = \sum_{j=1}^{u} I(y_j \ne y_j^*)\, I(y_j^* = y_j^{svm}) = \sum_{j=1}^{u} \frac{1 - y_j y_j^*}{2} \cdot \frac{1 + y_j^{svm} y_j^*}{2} \quad (4)$$
Without loss of generality, let $J(y, \hat{y}_t, y^{svm}) = c_t^{\top} y + d_t$. Then Equation (2) can be expressed as Equation (5):
$$\max_{\theta,\, y \in \{\pm 1\}^u} \; \theta \quad \text{s.t.} \quad \theta \le c_t^{\top} y + d_t, \; t = 1, \ldots, T \quad (5)$$
Although Equation (5) is an integer linear program, Proposition 1 implies that we do not need the optimal solution to achieve our goal, so a simple heuristic is used to solve it. In particular, we relax the integer constraint on $y$ in Equation (5) to $[-1, 1]^u$, solve the resulting convex linear program, and project the solution back to the nearest integer solution. If $y^{svm}$ attains a larger objective value than this integer solution, the output is replaced by $y^{svm}$. The final solution therefore satisfies Proposition 1.
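The relax-and-project heuristic can be sketched numerically. The following is a minimal illustration using SciPy's LP solver, assuming the $c_t$ and $d_t$ coefficients are already available as arrays; the function name and data shapes are our own, not from the original implementation:

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_labels(C, d, y_svm):
    """Heuristic for max_y min_t (c_t . y + d_t) over y in {-1,+1}^u.

    Relax y to [-1, 1]^u, solve the resulting linear program, project
    back to the nearest sign vector, and fall back to the inductive
    SVM labels if they achieve a better worst-case objective.
    C: (T, u) matrix whose rows are c_t; d: (T,) vector; y_svm: (u,) signs.
    """
    T, u = C.shape
    # Variables z = [theta, y_1..y_u]; maximize theta == minimize -theta.
    obj = np.concatenate(([-1.0], np.zeros(u)))
    A_ub = np.hstack([np.ones((T, 1)), -C])      # theta - c_t.y <= d_t
    bounds = [(None, None)] + [(-1.0, 1.0)] * u  # relaxed integer constraint
    res = linprog(obj, A_ub=A_ub, b_ub=d, bounds=bounds, method="highs")
    y = np.sign(res.x[1:])
    y[y == 0] = 1                                # break ties toward +1
    # Keep y^svm when it attains a larger worst-case objective.
    obj_y = np.min(C @ y + d)
    obj_svm = np.min(C @ y_svm + d)
    return y if obj_y >= obj_svm else y_svm
```

By construction the returned assignment never has a smaller worst-case objective than the inductive SVM's labels, which is what Proposition 1 requires.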
It is not difficult to incorporate prior knowledge about the low-density separators into this framework. To constrain Equation (5), we employ a dual variable $\alpha$. By the Karush–Kuhn–Tucker (KKT) conditions, Equation (5) can be replaced by Equation (6):
$$\max_{y \in \{\pm 1\}^u} \; \min_{\alpha} \; \sum_{t=1}^{T} \alpha_t \left( c_t^{\top} y + d_t \right) \quad (6)$$
where $\alpha_t$ can be interpreted as the probability that $\hat{y}_t$ coincides with the ground-truth solution. Thus, when prior knowledge supplies the probabilities $\alpha$, the optimal $y$ for Equation (6) can be learned using the known $\alpha$.
Now let $h(f, \hat{y})$ denote the objective function minimized by S3VMs:
$$h(f, \hat{y}) = \|f\|_{\mathcal{H}}^2 + C_1 \sum_{i=1}^{l} \ell\big(y_i, f(x_i)\big) + C_2 \sum_{j=1}^{u} \ell\big(\hat{y}_j, f(\hat{x}_j)\big) \quad (7)$$
To obtain multiple large-margin low-density separators $\{f_t\}_{t=1}^{T}$ and the corresponding label assignments $\{\hat{y}_t\}_{t=1}^{T}$, we construct Equation (8), which minimizes Equation (7) over all separators while enforcing diversity:
$$\min_{\{f_t,\, \hat{y}_t \in \mathcal{B}\}_{t=1}^{T}} \; \sum_{t=1}^{T} h(f_t, \hat{y}_t) + M\, \Omega\big(\{\hat{y}_t\}_{t=1}^{T}\big) \quad (8)$$
where $T$ is the number of separators, $M$ is a large constant enforcing large diversity, and $\Omega$ is a penalty on the lack of diversity among the separators. It is not hard to see that the optimization in Equation (8) favors separators with not only large margins but also large diversity.
Then, we take $\Omega\big(\{\hat{y}_t\}_{t=1}^{T}\big)$ to be a sum of pairwise terms, which can be expressed as Equation (9):
$$\Omega\big(\{\hat{y}_t\}_{t=1}^{T}\big) = \sum_{1 \le t \ne \tilde{t} \le T} I\!\left( \frac{\hat{y}_t^{\top} \hat{y}_{\tilde{t}}}{u} \ge 1 - \varepsilon \right) \quad (9)$$
In this equation, $I$ is the indicator function and $\varepsilon \in [0, 1]$ is a constant; other penalty terms are also applicable.
To make the formulation concrete, suppose that $f$ is a linear model, $f(x) = w^{\top} \phi(x) + b$, where $\phi(x)$ is the feature mapping induced by the kernel $k$. Then Equation (8) becomes Equation (10):
$$\min_{\{w_t, b_t, \hat{y}_t \in \mathcal{B}\}_{t=1}^{T}} \; \sum_{t=1}^{T} \left( \frac{1}{2} \|w_t\|^2 + C_1 \sum_{i=1}^{l} \xi_i + C_2 \sum_{j=1}^{u} \hat{\xi}_j \right) + M \sum_{1 \le t \ne \tilde{t} \le T} I\!\left( \frac{\hat{y}_t^{\top} \hat{y}_{\tilde{t}}}{u} \ge 1 - \varepsilon \right) \quad (10)$$
$$\text{s.t.} \quad y_i \big( w_t^{\top} \phi(x_i) + b_t \big) \ge 1 - \xi_i, \; \xi_i \ge 0; \qquad \hat{y}_{t,j} \big( w_t^{\top} \phi(\hat{x}_j) + b_t \big) \ge 1 - \hat{\xi}_j, \; \hat{\xi}_j \ge 0;$$
$$i = 1, \ldots, l, \quad j = 1, \ldots, u, \quad t = 1, \ldots, T$$
where $\hat{y}_{t,j}$ is the $j$-th entry of $\hat{y}_t$. Since Equation (10) is non-convex, its implementation is presented as follows.
Simulated annealing (SA) [46,47], as shown by Kirkpatrick and V. Černý, is an effective method for finding global solutions of objective functions with multiple local minima. SA replaces the current solution by a random nearby solution, with a probability controlled by a global parameter (the temperature) and the difference in objective values. When the global parameter is large, large random changes to the current solution are accepted; as it approaches zero, the changes to the current solution gradually diminish. Laarhoven and Aarts [48] have shown that the probability of converging to the global solution approaches one as the SA process is extended.
Following Sindhwani [49], a deterministic local search is used to mitigate the slow convergence of the original SA. Specifically, once $\{\hat{y}_t\}_{t=1}^{T}$ is fixed, multiple individual SVM subroutines solve for $\{w_t, b_t\}_{t=1}^{T}$; and when $\{w_t, b_t\}_{t=1}^{T}$ is fixed, $\{\hat{y}_t\}_{t=1}^{T}$ is updated by a local binary search. These two steps are repeated until convergence.
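The alternating structure of this local search can be sketched for a single separator. The following is a simplified illustration, with a plain scikit-learn SVC standing in for the S3VM subproblem; the balance constraint, diversity penalty, and multiple separators of the full S4VMs procedure are deliberately omitted:

```python
import numpy as np
from sklearn.svm import SVC

def alternating_s3vm(Xl, yl, Xu, n_iter=10, seed=0):
    """Alternating local-search sketch for one separator:
    (a) with the guessed labels of the unlabeled data fixed, train an
    SVM on labeled + pseudo-labeled data; (b) with the SVM fixed,
    re-guess the unlabeled labels from its predictions; repeat until
    the assignment stops changing.
    """
    rng = np.random.default_rng(seed)
    y_hat = rng.choice([-1, 1], size=len(Xu))   # random initial assignment
    for _ in range(n_iter):
        clf = SVC(kernel="rbf", C=1.0).fit(
            np.vstack([Xl, Xu]), np.concatenate([yl, y_hat]))
        y_new = clf.predict(Xu)
        if np.array_equal(y_new, y_hat):        # converged
            break
        y_hat = y_new
    return clf, y_hat
```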

3.2. Multi-Classifier Strategy

However, this form of S4VMs is only a binary (two-class) classifier. When it is used for gas sample classification, a multi-class classifier usually needs to be constructed.
According to Kai-Bo Duan and S. Sathiya Keerthi [50], two popular methods for doing this are as follows:
(1)
Winner-takes-all strategy SVM (WTA-SVM): this method needs M binary classifiers, one per class. The i-th classifier, with output function ρ<sub>i</sub>, is trained with the examples of class w<sub>i</sub> as the positive class and all other examples as the negative class. A new sample is then assigned to the class whose classifier produces the largest output value, so classifying a sample requires M binary classifiers and M evaluations.
(2)
Max-wins voting strategy SVM (MWV-SVM): this method requires constructing M(M−1)/2 binary classifiers, one for every pair of distinct classes. The binary classifier c<sub>ij</sub> is trained with samples of class w<sub>i</sub> as positive and samples of class w<sub>j</sub> as negative. For a new sample x, if c<sub>ij</sub> assigns x to class w<sub>i</sub>, the vote for w<sub>i</sub> is incremented by one; otherwise the vote for w<sub>j</sub> is incremented. After all M(M−1)/2 binary classifiers have voted, MWV assigns x to the class with the largest number of votes.
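The two strategies above can be illustrated with scikit-learn's built-in multi-class wrappers on synthetic three-class data standing in for the three target gases (this is an illustration of the strategies, not the paper's S4VMs implementation):

```python
from sklearn.datasets import make_blobs
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier
from sklearn.svm import SVC

# Three well-separated synthetic classes standing in for
# carbon monoxide, toluene, and formaldehyde.
X, y = make_blobs(n_samples=150, centers=3, cluster_std=0.5, random_state=0)

# WTA: M one-vs-rest classifiers; the largest decision value wins.
wta = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)

# MWV: M(M-1)/2 pairwise classifiers; the class with most votes wins.
mwv = OneVsOneClassifier(SVC(kernel="rbf")).fit(X, y)

print(wta.score(X, y), mwv.score(X, y))
```

For M = 3 classes, WTA trains 3 classifiers and MWV trains 3 pairwise classifiers; the difference in classifier count grows quadratically with M.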

3.3. M-S4VMs Technique

Based on the original S4VMs, we first construct a multi-class classifier to classify the three different gases. Although S4VMs can handle unlabeled data, we want the algorithm not only to use labeled data to train its classification model, but also to incorporate unlabeled data into its learning process. Thus we propose a novel method to achieve this. First, we train S4VMs on the labeled data as usual and classify the unlabeled data with this model. Second, S4VMs assigns labels to the unlabeled data and the error rate is calculated. Then, we add the unlabeled data, with their assigned labels, to the labeled data one by one. Finally, we retrain S4VMs on the enlarged labeled set to obtain a new model for classifying the test data.
To gain better performance of S4VMs, we also used an optimization method called QPSO to find the best fit parameters for M-S4VMs. It has been demonstrated that QPSO has fairly good performance in global optimization.
We call our enhanced S4VMs M-S4VMs; the steps of the M-S4VMs algorithm (Algorithm 1) are as follows:
Algorithm 1 (M-S4VMs algorithm):
Step 1: 
Randomly generate initialized parameters of M-S4VMs;
Step 2: 
Train the M-S4VMs on the labeled data, use this model to classify the unlabeled samples, and then add the unlabeled data, with their assigned labels, to the labeled data. Retrain the M-S4VMs and compute the error rate by classifying the test data. Feed the error rate back to QPSO;
Step 3: 
Compute the best fitness from the error rate, and update the particles to obtain modified parameters;
Step 4: 
Return new parameters to M-S4VMs;
Step 5: 
Repeat Step 2 to Step 4 until the error rate meets the threshold or the number of iterations reaches a preset limit;
Step 6: 
Output the best fitness and the classification accuracy rate.
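The core of Step 2 — train on labeled data, pseudo-label the unlabeled data, merge, and retrain — can be sketched as follows. A plain SVC stands in for the S4VMs base learner here, and the function name and synthetic data are hypothetical:

```python
import numpy as np
from sklearn.svm import SVC

def self_training_step(Xl, yl, Xu, Xtest, ytest, C=1.0, gamma="scale"):
    """One M-S4VMs-style training step (Step 2 of Algorithm 1),
    with a plain SVC standing in for the S4VMs base learner.

    Train on the labeled data, pseudo-label the unlabeled data,
    merge, retrain, and return the test error rate that would be
    fed back to QPSO.
    """
    base = SVC(C=C, gamma=gamma).fit(Xl, yl)
    pseudo = base.predict(Xu)                        # label the unlabeled data
    X_all = np.vstack([Xl, Xu])
    y_all = np.concatenate([yl, pseudo])
    model = SVC(C=C, gamma=gamma).fit(X_all, y_all)  # retrain on merged set
    error_rate = 1.0 - model.score(Xtest, ytest)
    return model, error_rate
```

In the full algorithm, QPSO repeatedly calls a step like this with candidate (C, gamma) pairs and keeps the pair with the lowest returned error rate.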

4. Results and Discussion

In this section, the first step is to decide which multi-class strategy to apply. We designed two different S4VMs, with WTA and MWV, to test which is better in the current situation. Each test was run ten times to reduce accidental error. Table 5 and Table 6 show the outcomes of these two methods and their classification rates for formaldehyde, toluene, and carbon monoxide. Figure 1 shows the accuracy rate and error rate of WTA-S4VMs and MWV-S4VMs. Figure 2 shows the classification rates of the three gases and the samples that were wrongly labeled.
It is clear that WTA-S4VMs performs better than MWV-S4VMs, so WTA-S4VMs was selected as the multi-classifier for M-S4VMs. We then applied our optimization methods to WTA-S4VMs. Table 7 illustrates the performance and the classification rates of these two methods, and Table 8 shows the classification rates at different unlabeled sample rates. Figure 3, Figure 4, Figure 5 and Figure 6 show the classification rates of WTA-S4VMs and M-S4VMs at different unlabeled rates, as well as the classification rates of the three gases with their wrongly labeled sample rates. Table 9 and Figure 7 show the performance of M-S4VMs at unlabeled rates from 10% to 90%, where the total number of samples is constant while the numbers of labeled and unlabeled samples vary with the unlabeled rate.
It is easy to see that with unlabeled samples added into the training samples, the performance of S4VMs gains obvious improvement. Additionally, the classification rate of target gases is also improved. This is because the unlabeled data also contains a lot of useful information about classifying different gases. When it is added into the training samples, S4VMs has more samples to modify its prediction model for classification. However, the hidden information in unlabeled samples is limited. In Table 7 it is clear that the improved accuracy from adding unlabeled samples declines as the unlabeled rate increases from 50% to 75%.
In Table 9 and Figure 7, it is obvious that the performance of M-S4VMs decreases quickly in the first classification, but then increases and maintains an ideal level in the second classification. This is because the labeled samples are sufficient at first, so the performance of M-S4VMs in the first classification is quite good. As the labeled samples decrease, however, the performance of M-S4VMs also declines, showing that with insufficient labeled samples M-S4VMs cannot reach its full potential. On the other hand, the unlabeled samples increase at the same time, and M-S4VMs can add the unlabeled samples it has classified to the training samples to modify its model, which clearly improves its performance in the second classification. However, when the unlabeled rate is too large, the error accumulation of semi-supervised algorithms damages the performance of M-S4VMs, as can be observed at the 80% and 90% unlabeled rates. In contrast, without unlabeled samples added to the training set, the accuracy of the classification continues to decline.
When M-S4VMs is applied to the E-nose system, two parameters (the penalty coefficient and the kernel radius of the SVM) need to be optimized. QPSO [51] is a global optimization algorithm that has been proven effective in searching for the best parameters for classifiers. Because M-S4VMs has two parameters, we set the dimension to two and the swarm size to ten. The flow chart of the algorithm is shown in Figure 8 below.
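A minimal QPSO sketch for this two-parameter search is given below. The update rule (mean-best position, local attractor, and a contraction-expansion coefficient β) follows the standard QPSO formulation; the function names are ours, and a toy quadratic replaces the real E-nose error rate as the fitness:

```python
import numpy as np

def qpso(fitness, bounds, n_particles=10, n_iter=30, beta=0.75, seed=0):
    """Minimal QPSO sketch for tuning two SVM parameters.

    `fitness` maps a point to the error rate to minimize;
    `bounds` is a (dim, 2) array of [low, high] rows.
    """
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, dim))
    pbest = x.copy()
    pcost = np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pcost)]                      # global best
    for _ in range(n_iter):
        mbest = pbest.mean(axis=0)                   # mean best position
        phi = rng.uniform(size=(n_particles, dim))
        p = phi * pbest + (1 - phi) * g              # local attractors
        u = rng.uniform(size=(n_particles, dim))
        sign = np.where(rng.uniform(size=(n_particles, dim)) < 0.5, -1, 1)
        x = np.clip(p + sign * beta * np.abs(mbest - x) * np.log(1 / u), lo, hi)
        cost = np.array([fitness(p_) for p_ in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        g = pbest[np.argmin(pcost)]
    return g, pcost.min()

# Example: minimize a toy quadratic instead of the real E-nose error rate.
best, val = qpso(lambda z: (z[0] - 1) ** 2 + (z[1] + 2) ** 2,
                 np.array([[-5.0, 5.0], [-5.0, 5.0]]))
```

In the actual system, the lambda would be replaced by a function that trains M-S4VMs with the candidate (C, kernel radius) pair and returns the test error rate.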
As a novel classification algorithm, M-S4VMs must be compared with other semi-supervised algorithms; thus, we selected meanS3vm [52], M-training [53,54], and SR [55,56] for comparison. We also added two conventional nonlinear supervised methods, BP-ANN and SVM, to the comparison. Table 10 and Table 11 show the outcomes of the six algorithms and their classification rates for the target gases. Figure 9 and Figure 10 illustrate the accuracy of the semi-supervised algorithms, as well as the classification of the target gases and their wrongly labeled sample rates. Figure 11 shows the accuracy of the two conventional nonlinear supervised methods.
It is clear that M-S4VMs performs better than the other algorithms. Among the four semi-supervised algorithms, M-S4VMs has the best performance in both the minimum and the maximum classification rates, and it also has the best average classification rate.
We also compared this method with the SR and MeanS3vm methods with respect to running time. The results are shown in Table 12 below.
It is clear that meanS3vm has the longest classification running time, while BP-ANN took the least time to complete its classification. Within these six methods, M-S4VMs also performs very well. Although its running time is slightly longer than SR and BP-ANN, we find that M-S4VMs has a better accuracy rate after classification.
As for the classification rates of the target gases, M-S4VMs performs well in classifying toluene and carbon monoxide. Conversely, although meanS3vm has the best performance in classifying formaldehyde and carbon monoxide, it performs poorly on toluene, where M-S4VMs is significantly better. Over all three target gases, M-S4VMs performs better than SR, although it is slightly worse than SR at classifying formaldehyde. Compared with the two conventional nonlinear supervised methods, M-S4VMs clearly outperforms SVM and BP-ANN not only in total classification but also in the classification of each gas except toluene, where BP-ANN only slightly exceeds M-S4VMs.

5. Conclusions

An E-nose, consisting of a sensor array and an artificial intelligence algorithm, can identify typical patterns in gas samples. To detect different gases precisely, it is essential to train the E-nose with enough samples. In general, researchers use labeled samples to train the E-nose, which helps it attain ideal accuracy. However, this usually requires a large number of labeled samples and a long training time. By contrast, in the real world, unlabeled samples are easier to obtain and require less time than labeled samples when training an E-nose. Thus, introducing unlabeled samples into E-nose training has become a hot topic.
In this paper, we focus on making full use of unlabeled samples based on S4VMs. Since the original S4VMs can only solve binary classification problems, we first turn S4VMs into a multi-class classifier using a popular multi-classifier SVM construction strategy. We then propose a novel method that adds unlabeled samples to the training samples to improve the classification ability of S4VMs. This approach not only uses labeled samples to train the E-nose to classify unlabeled samples, but also uses those unlabeled samples to revise the model, which improves the performance of the E-nose. The experiment with these multi-gases and the comparison with other semi-supervised algorithms have shown that this method can improve the performance of S4VMs and extract classification information from the unlabeled samples. However, the information in unlabeled samples is limited, so the classification rate declines when the number of unlabeled samples grows too large; this is also clear in our experimental results, where the accuracy of the algorithm gradually declines once the unlabeled rate becomes too high.
In conclusion, adding unlabeled samples to the training samples is an effective way to enhance the performance of the E-nose, especially when labeled samples are insufficient. Moreover, unlabeled samples increase the variety of the training samples, which helps the E-nose perform better in real-world applications. Additionally, unlabeled samples are easier to obtain and use, which can reduce the cost of the E-nose. Some problems remain, however; for example, reducing the speed of error accumulation needs further research: unlabeled samples may be wrongly classified, and as such errors accumulate, the E-nose's accuracy declines as the number of unlabeled samples increases. Nevertheless, S4VMs has been shown to carry a lower classification risk than other semi-supervised methods when it encounters unlabeled samples. All of these results make it clear that M-S4VMs is an effective semi-supervised method for an E-nose used to classify carbon monoxide, formaldehyde, and toluene.

Acknowledgments

The work is supported by the Program for New Century Excellent Talents in University (No. [2013] 47), the National Natural Science Foundation of China (No. 61372139, No. 61101233, No. 60972155), the Fundamental Research Funds for the Central Universities (No. XDJK2015C073), the Science and Technology personnel training program Fund of Chongqing (No. Cstc2013kjrc-qnrc40011), Chongqing Postdoctoral Science Foundation Special Funded Project (Grant No. xm2015020) and the Fundamental Research Funds for the Central Universities (No. SWU115009).

Author Contributions

Tailai Huang was in charge of the project management and proposed the algorithm. Pengfei Jia was responsible for the data analysis and the discussion of the results. Shukai Duan provided valuable advice about the revised manuscript. Peilin He, Jia Yan, and Lidan Wang were involved in discussions and the experimental analysis.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Ciosek, P.; Wróblewski, W. The analysis of sensor array data with various pattern recognition techniques. Sens. Actuators B Chem. 2006, 114, 85–93. [Google Scholar] [CrossRef]
  2. Liu, Q.; Wang, H.; Li, H.; Zhang, J.; Zhuang, S.; Zhang, F. Impedance sensing and molecular modeling of an olfactory biosensor based on chemosensory proteins of honeybee. Biosens. Bioelectron. 2012, 40, 174–179. [Google Scholar] [CrossRef] [PubMed]
  3. Sohn, J.H.; Hudson, N.; Gallagher, E.; Dunlop, M.; Zeller, L.; Atzeni, M. Implementation of an electronic nose for continuous odor monitoring in a poultry shed. Sens. Actuators B Chem. 2008, 133, 60–69. [Google Scholar] [CrossRef]
  4. Ameer, Q.; Adeloju, S.B. Polypyrrole-based electronic noses for environmental and industrial analysis. Sens. Actuators B Chem. 2005, 106, 541–552. [Google Scholar]
  5. Lamagna, A.; Reich, S.; Rodríguez, D.; Boselli, A.; Cicerone, D. The use of an electronic nose to characterize emissions from a highly polluted river. Sens. Actuators B Chem. 2008, 131, 121–124. [Google Scholar] [CrossRef]
  6. Loutfi, A.; Coradeschi, S.; Mani, G.K.; Shankar, P.; Rayappan, J.B. Electronic noses for food quality: A review. J. Food Eng. 2015, 144, 103–111. [Google Scholar] [CrossRef]
  7. Gobbi, E.; Falasconi, M.; Zambotti, G.; Sberveglieri, V.; Pulvirenti, A.; Sberveglieri, G. Rapid diagnosis of Enterobacteriaceae, in vegetable soups by a metal oxide sensor based electronic nose. Sens. Actuators B Chem. 2015, 207, 1104–1113. [Google Scholar] [CrossRef]
  8. Hui, G.; Wang, L.; Mo, Y.; Zhang, L. Study of grass carp (Ctenopharyngodon idellus) quality predictive model based on electronic nose. Sens. Actuators B Chem. 2012, 166–167, 301–308. [Google Scholar]
  9. Norman, A.; Stam, F.; Morrissey, A.; Hirschfelder, M.; Enderlein, D. Packaging effects of a novel explosion-proof gas sensor. Sens. Actuators B Chem. 2003, 95, 287–290. [Google Scholar] [CrossRef]
  10. Green, G.C.; Chan, A.C.; Dan, H.; Lin, M. Using a metal oxide sensor (MOS)-based electronic nose for discrimination of bacteria based on individual colonies in suspension. Sens. Actuators B Chem. 2011, 152, 21–28. [Google Scholar] [CrossRef]
  11. Chapman, E.A.; Thomas, P.S.; Stone, E.; Lewis, C.; Yates, D.H. A breath test for malignant mesothelioma using an electronic nose. Eur. Respir. J. 2012, 40, 448–454. [Google Scholar] [CrossRef] [PubMed]
  12. Jia, P.; Tian, F.; He, Q.; Fan, S.; Liu, J.; Yang, S.X. Feature extraction of wound infection data for electronic nose based on a novel weighted KPCA. Sens. Actuators B Chem. 2014, 201, 555–566. [Google Scholar] [CrossRef]
  13. D’Amico, A.; Di, N.C.; Falconi, C.; Martinelli, E.; Paolesse, R.; Pennazza, G. Detection and identification of cancers by the electronic nose. Expert Opin. Med. Diagn. 2012, 6, 175–185. [Google Scholar] [CrossRef] [PubMed]
  14. Young, R.C.; Buttner, W.J.; Linnell, B.R.; Ramesham, R. Electronic nose for space program applications. Sens. Actuators B Chem. 2003, 93, 7–16. [Google Scholar] [CrossRef]
  15. Zhang, L.; Tian, F.; Liu, S.; Guo, J.; Hu, B.; Ye, Q. Chaos based neural network optimization for concentration estimation of indoor air contaminants by an electronic nose. Sens. Actuators A Phys. 2013, 189, 161–167. [Google Scholar] [CrossRef]
  16. Zhang, L.; Tian, F.; Peng, X.; Dang, L.; Li, G.; Liu, S. Standardization of metal oxide sensor array using artificial neural networks through experimental design. Sens. Actuators B Chem. 2013, 177, 947–955. [Google Scholar] [CrossRef]
  17. Deshmukh, S.; Bandyopadhyay, R.; Bhattacharyya, N.; Pandey, R.A.; Jana, A. Application of electronic nose for industrial odors and gaseous emissions measurement and monitoring-An overview. Talanta 2015, 144, 329–340. [Google Scholar] [CrossRef] [PubMed]
  18. Khulal, U.; Zhao, J.; Hu, W.; Chen, Q. Intelligent evaluation of total volatile basic nitrogen (TVB-N) content in chicken meat by an improved multiple level data fusion model. Sens. Actuators B Chem. 2017, 238, 337–345. [Google Scholar] [CrossRef]
  19. Chen, Q.; Hu, W.; Su, J.; Li, H.; Ouyang, Q.; Zhao, J. Nondestructively sensing of total viable count (TVC) in chicken using an artificial olfaction system based colorimetric sensor array. J. Food Eng. 2015, 168, 259–266. [Google Scholar] [CrossRef]
  20. Zheng, S.; Ren, W.; Huang, L. Geoherbalism evaluation of Radix Angelica sinensis, based on electronic nose. J. Pharm. Biomed. Anal. 2015, 105, 101–106. [Google Scholar] [CrossRef] [PubMed]
  21. Yu, H.; Wang, J.; Xiao, H.; Liu, M. Quality grade identification of green tea using the eigenvalues of PCA based on the E-nose signals. Sens. Actuators B Chem. 2009, 140, 378–382. [Google Scholar] [CrossRef]
  22. Gardner, J.W.; Boilot, P.; Hines, E.L. Enhancing electronic nose performance by sensor selection using a new integer-based genetic algorithm approach. Sens. Actuators B Chem. 2005, 106, 114–121. [Google Scholar] [CrossRef]
  23. Banerjee, R.; Khan, N.S.; Mondal, S.; Tudu, B.; Bandyopadhyay, R.; Bhattacharyya, N. Features extraction from electronic nose employing genetic algorithm for black tea quality estimation. In Proceedings of the International Conference on Advanced Electronic Systems, Pilani, India, 21–23 September 2013; pp. 64–67.
  24. Jiang, M.J.; Liu, Y.X.; Yang, J.X.; Yu, W.J. A model of classification for e-nose based on genetic algorithm. Appl. Mech. Mater. 2013, 475–476, 952–955. [Google Scholar] [CrossRef]
  25. Nosov, A.V. An Introduction to Support Vector Machines; China Machine Press: Beijing, China, 2005; pp. 1–28. [Google Scholar]
  26. Platt, J. A fast algorithm for training support vector machines. J. Inf. Technol. 1998, 2, 1–28. [Google Scholar]
  27. Haugen, J.E.; Kvaal, K. Electronic nose and artificial neural network. Meat Sci. 1998, 49, S273–S286. [Google Scholar] [CrossRef]
  28. Hong, H.K.; Kwon, C.H.; Kim, S.R.; Yun, D.H.; Lee, K.; Sung, Y.K. Portable electronic nose system with gas sensor array and artificial neural network. Sens. Actuators B Chem. 2000, 66, 49–52. [Google Scholar] [CrossRef]
  29. Boser, B.E.; Guyon, I.M.; Vapnik, V.N. A training algorithm for optimal margin classifiers. In Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, Pittsburgh, PA, USA, 27–29 July 1992; pp. 144–152.
  30. Vapnik, V.N. Statistical Learning Theory. Encycl. Sci. Learn. 2010, 41, 3185–3185. [Google Scholar]
  31. Platt, J.C.; Cristianini, N.; Shawe-Taylor, J. Large margin DAGs for multiclass classification. Adv. Neural Inf. Process. Syst. 2010, 12, 547–553. [Google Scholar]
  32. Dietterich, T.G.; Bakiri, G. Solving multiclass learning problems via error-correcting output codes. J. Artif. Intell. Res. 1995, 2, 263–286. [Google Scholar]
  33. Yan, J.; Tian, F.; Feng, J.; Jia, P.; He, Q.; Shen, Y. A PSO-SVM Method for parameters and sensor array optimization in wound infection detection based on electronic nose. J. Comput. 2012, 7, 2663–2670. [Google Scholar] [CrossRef]
  34. He, Q.; Yan, J.; Shen, Y.; Bi, Y.; Ye, G.; Tian, F.; Wang, Z. Classification of Electronic Nose Data in Wound Infection Detection Based on PSO-SVM Combined with Wavelet Transform. Intell. Autom. Soft Comput. 2012, 18, 967–979. [Google Scholar] [CrossRef]
  35. Yan, J. Hybrid feature matrix construction and feature selection optimization-based multi-objective QPSO for electronic nose in wound infection detection. Sensor Rev. 2016, 36, 23–33. [Google Scholar] [CrossRef]
  36. Jia, P.; Tian, F.; Fan, S.; He, Q.; Feng, J.; Yang, S.X. A novel sensor array and classifier optimization method of electronic nose based on enhanced quantum-behaved particle swarm optimization. Sensor Rev. 2014, 34, 304–311. [Google Scholar] [CrossRef]
  37. Jia, P.; Huang, T.; Duan, S.; Ge, L.; Yan, J.; Wang, L. A novel semi-supervised electronic nose learning technique: M-Training. Sensors 2016, 16, 370. [Google Scholar] [CrossRef] [PubMed]
  38. Active Learning Literature Survey. Available online: http://s3.amazonaws.com/academia.edu.documents/30743174/settles_active_learning.pdf?AWSAccessKeyId=AKIAJ56TQJRTWSMTNPEA&Expires=1473420623&Signature=1HWPhuu2akUY9WNJyKgQ6e0aR7c%3D&response-content-disposition=inline%3B%20filename%3DActive_learning_literature_survey.pdf (accessed on 9 September 2016).
  39. Schohn, G.; Cohn, D. Less is more: Active learning with support vector machines. In Proceedings of the International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013; pp. 839–846.
  40. Pan, S.J.; Yang, Q. A Survey on Transfer Learning. IEEE Trans. Knowl. Data Eng. 2010, 22, 1345–1359. [Google Scholar] [CrossRef]
  41. Yang, H.; King, I.; Lyu, M.R. Multi-task Learning for one-class classification. In Proceedings of the International Joint Conference on Neural Networks, Barcelona, Spain, 18–23 July 2010; pp. 1–8.
  42. Yang, H.; Lyu, M.R.; King, I. Efficient online learning for multitask feature selection. ACM Trans. Knowl. Discov. Data. 2013, 7, 1693–696. [Google Scholar] [CrossRef]
  43. Zhou, Z.H.; Li, M. Semi-supervised learning by disagreement. Knowl. Inf. Syst. 2010, 24, 415–439. [Google Scholar] [CrossRef]
  44. Xu, Z.; King, I. Introduction to semi-supervised learning. Synth. Lect. Artif. Intell. Mach. Learn. 2009, 3, 130. [Google Scholar]
  45. Zhou, Z.H. Towards making unlabeled data never hurt. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 37, 1. [Google Scholar] [CrossRef] [PubMed]
  46. Kirkpatrick, S. Optimization by simulated annealing: Quantitative studies. J. Stat. Phys. 1984, 34, 975–986. [Google Scholar] [CrossRef]
  47. Černý, V. Thermodynamical approach to the traveling salesman problem: An efficient simulation algorithm. J. Optim. Theory Appl. 1985, 45, 41–51. [Google Scholar]
  48. Hajek, B. A tutorial survey of theory and applications of simulated annealing. In Proceedings of the 24th IEEE Conference on Decision & Control, Fort Lauderdale, FL, USA, 11–13 December 1985; pp. 755–760.
  49. Sindhwani, V.; Keerthi, S.S.; Chapelle, O. Deterministic annealing for semi-supervised kernel machines. In Proceedings of the 23rd International Conference on Machine Learning, Pittsburgh, PA, USA, 25–29 June 2006; pp. 841–848.
  50. Duan, K.B.; Keerthi, S.S. Which is the best multiclass SVM method? An empirical study. Multi. Classif. Syst. 2005, 3541, 278–285. [Google Scholar]
  51. Jia, P.; Duan, S.; Yan, J. An enhanced quantum-behaved particle swarm optimization based on a novel computing way of local attractor. Information 2015, 6, 633–649. [Google Scholar] [CrossRef]
  52. Li, Y.F.; Kwok, J.T.; Zhou, Z.H. Semi-supervised learning using label mean. In Proceedings of the 26th Annual International Conference on Machine Learning, Montreal, QC, Canada, 14–18 June 2009; pp. 633–640.
  53. Goldman, S.A.; Zhou, Y. Enhancing supervised learning with unlabeled data. In Proceedings of the Seventeenth International Conference on Machine Learning, Stanford, CA, USA, 29 June–2 July 2000; pp. 327–334.
  54. Angluin, D.; Laird, P. Learning from noisy examples. Mach. Learn. 1988, 2, 343–370. [Google Scholar] [CrossRef]
  55. Semi-Supervised Regression Using Spectral Techniques. Available online: https://www.ideals.illinois.edu/bitstream/handle/2142/11232/SemiSupervised%20Regression%20using%20Spectral%20Techniques.pdf?sequence=2&isAllowed=y (accessed on 8 September 2016).
  56. Cai, D.; He, X.; Han, J. Spectral regression: A unified subspace learning framework for content-based image retrieval. In Proceedings of the International Conference on Multimedia 2007, Augsburg, Germany, 24–29 September 2007; pp. 455–482.
Figure 1. Total accuracy of the two multi-classification methods.
Figure 2. Accuracy of the two multi-classification methods. (a) WTA-S4VMs; (b) MWV-S4VMs.
Figure 3. Total accuracy rate of the two algorithms at different unlabeled rates. The panels show the total classification results for the three target gases by M-S4VMs and WTA-S4VMs when the unlabeled data account for 50% (a), 25% (b), and 75% (c). At every unlabeled rate, M-S4VMs performs better than WTA-S4VMs.
Figure 4. Results of the two methods at a 50% unlabeled rate. The panels show the classification results for the three target gases by M-S4VMs (a) and WTA-S4VMs (b) when the unlabeled data account for 50%. M-S4VMs performs better than WTA-S4VMs, especially in the classification of carbon monoxide.
Figure 5. Results of the two methods at a 25% unlabeled rate. The panels show the classification results for the three target gases by M-S4VMs (a) and WTA-S4VMs (b) when the unlabeled data account for 25%. M-S4VMs achieves a higher accuracy rate than WTA-S4VMs at this rate.
Figure 6. Results of the two methods at a 75% unlabeled rate. The panels show the classification results for the three target gases by M-S4VMs (a) and WTA-S4VMs (b) when the unlabeled data account for 75%. M-S4VMs shows a clear improvement over WTA-S4VMs in the classification of carbon monoxide at this rate.
Figure 7. Performance of M-S4VMs at different unlabeled rates. Note: data1 (blue) represents the accuracy of the first classification without unlabeled samples, and data2 (red) represents the accuracy of M-S4VMs with unlabeled samples in the second classification.
Figure 8. Flow chart of the algorithm.
Figure 9. Total accuracy rate of the four semi-supervised algorithms.
Figure 10. Accuracy rates of the four semi-supervised algorithms. The panels show the classification results for the three target gases with M-S4VMs (a); M-training (b); meanS3vm (c); SR (d). M-S4VMs gives the best overall performance: although the other algorithms also perform well on some individual gases, M-S4VMs is better across all three.
Figure 11. Accuracy rates of the two conventional nonlinear supervised methods. The panels show the classification results for the three target gases with BP-ANN (a) and SVM (b). Both algorithms perform well on specific gases, but their performance degrades when applied to the other gases.
Table 1. Concentration of the target gases.

Gases             Concentration Range (ppm)
Carbon monoxide   [4, 12]
Toluene           [0.0668, 0.1425]
Formaldehyde      [0.0565, 1.2856]
Table 2. Amount of samples in a data set with 50% unlabeled rate.

Gases             Training Set   Unlabeled Set   Test Set
Carbon monoxide   116            116             116
Toluene           132            132             132
Formaldehyde      253            253             253
All-3             501            501             501
Table 3. Amount of samples in a data set with 75% unlabeled rate.

Gases             Training Set   Unlabeled Set   Test Set
Carbon monoxide   58             174             116
Toluene           66             198             132
Formaldehyde      126            380             253
All-3             250            752             501
Table 4. Amount of samples in a data set with 25% unlabeled rate.

Gases             Training Set   Unlabeled Set   Test Set
Carbon monoxide   174            58              116
Toluene           198            66              132
Formaldehyde      380            126             253
All-3             752            250             501
Table 5. Outcome of WTA-S4VMs and MWV-S4VMs in multi-gases.

            Min      Max      Average
WTA-S4VMs   0.8524   0.8724   0.8692
MWV-S4VMs   0.8276   0.8678   0.8438
Note: the accuracy rate is defined as Accuracy = (acc1 × n1 + acc2 × n2 + acc3 × n3)/(n1 + n2 + n3), where acc1, acc2, and acc3 represent the accuracies for formaldehyde, toluene, and carbon monoxide, respectively, and n1, n2, and n3 are the corresponding sample numbers.
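As an illustration only, this sample-count-weighted accuracy can be sketched in a few lines of Python; the per-gas accuracies below are invented example values (the counts are the 50% test-set sizes), not results from the tables:

```python
def weighted_accuracy(accs, counts):
    """Overall accuracy as the sample-count-weighted mean of per-gas accuracies:
    (acc1*n1 + acc2*n2 + acc3*n3) / (n1 + n2 + n3)."""
    assert len(accs) == len(counts)
    return sum(a * n for a, n in zip(accs, counts)) / sum(counts)

# Hypothetical per-gas accuracies (formaldehyde, toluene, carbon monoxide)
# with the test-set sample counts from Table 2 (253, 132, 116).
print(round(weighted_accuracy([0.90, 0.96, 0.80], [253, 132, 116]), 4))  # → 0.8927
```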
Table 6. Outcome of WTA-S4VMs and MWV-S4VMs in multi-gases.

            Formaldehyde   Toluene   Carbon Monoxide
WTA-S4VMs   0.8575         0.9599    0.8049
MWV-S4VMs   0.8176         0.8998    0.8978
Note: the accuracy rate is defined as accuracy = L/N, where L is the number of labels given by the classifier that match the true labels and N is the total number of labels.
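The per-gas accuracy in this table is simply the fraction of correct predictions; a minimal sketch, using made-up example labels, is:

```python
def accuracy(predicted, true_labels):
    """Accuracy = L / N: the number of predicted labels that match
    the true labels, divided by the total number of labels."""
    assert len(predicted) == len(true_labels)
    matches = sum(p == t for p, t in zip(predicted, true_labels))
    return matches / len(true_labels)

# Toy example: 3 of the 4 predicted labels match the true labels.
print(accuracy([0, 1, 1, 2], [0, 1, 2, 2]))  # → 0.75
```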
Table 7. Outcome of the total accuracy rate at different unlabeled rates.

            Min      Max      Average   Improvement   Unlabeled Rate
WTA-S4VMs   0.8424   0.8724   0.8592    5%            50%
M-S4VMs     0.8972   0.9140   0.9002
WTA-S4VMs   0.8324   0.8679   0.8579    3%            25%
M-S4VMs     0.8772   0.9070   0.8895
WTA-S4VMs   0.8324   0.8624   0.8592    3%            75%
M-S4VMs     0.8872   0.9143   0.8867
Table 8. Outcome of the target gases accuracy rate at different unlabeled rates.

            Formaldehyde   Toluene   Carbon Monoxide   Unlabeled Rate
WTA-S4VMs   0.8343         0.9588    0.7104            50%
M-S4VMs     0.8742         0.9599    0.9188
WTA-S4VMs   0.8575         0.9599    0.8049            25%
M-S4VMs     0.8739         0.9639    0.9438
WTA-S4VMs   0.8636         0.9602    0.7722            75%
M-S4VMs     0.8687         0.9599    0.8537
Table 9. Outcome of M-S4VMs at different unlabeled rates.

Unlabeled Rate   Accuracy1   Accuracy2
10%              0.9527      0.9200
20%              0.9262      0.9343
30%              0.9086      0.9210
40%              0.8438      0.9037
50%              0.8728      0.9140
60%              0.8430      0.8960
70%              0.8271      0.8999
80%              0.7896      0.8589
90%              0.7435      0.7576
Note: accuracy1 represents the accuracy of the first classification without unlabeled samples, and accuracy2 represents the accuracy of M-S4VMs with unlabeled samples.
Table 10. Outcomes of the six semi-supervised algorithms.

             Min      Max      Average
M-S4VMs      0.8967   0.9166   0.9102
M-training   0.8633   0.8755   0.8702
meanS3vm     0.7354   0.7632   0.7448
SR           0.8437   0.8637   0.8535
BP-ANN       0.8425   0.8764   0.8525
SVM          0.8430   0.8258   0.8335
Table 11. Outcomes of the classification rates of target gases.

             Formaldehyde   Toluene   Carbon Monoxide
M-S4VMs      0.8742         0.9599    0.9188
M-training   0.8687         0.8555    0.8733
meanS3vm     0.9317         0.5682    0.9277
SR           0.8998         0.8988    0.9188
BP-ANN       0.7222         0.9697    0.8412
SVM          0.6652         0.8223    0.7932
Table 12. Running time of each algorithm for classifying the target gases (in seconds).

Algorithm    Running Time (s)
M-S4VMs      35.410543
M-training   48.008512
meanS3vm     179.690567
SR           21.677869
BP-ANN       18.648126
SVM          42.281541

Share and Cite

MDPI and ACS Style

Huang, T.; Jia, P.; He, P.; Duan, S.; Yan, J.; Wang, L. A Novel Semi-Supervised Method of Electronic Nose for Indoor Pollution Detection Trained by M-S4VMs. Sensors 2016, 16, 1462. https://doi.org/10.3390/s16091462
