Article

Wasserstein Distance Learns Domain Invariant Feature Representations for Drift Compensation of E-Nose

School of Communication and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
* Authors to whom correspondence should be addressed.
Sensors 2019, 19(17), 3703; https://doi.org/10.3390/s19173703
Submission received: 2 August 2019 / Revised: 17 August 2019 / Accepted: 19 August 2019 / Published: 26 August 2019
(This article belongs to the Collection Multi-Sensor Information Fusion)

Abstract

The electronic nose (E-nose), an instrument that combines a gas sensor array with a corresponding pattern recognition algorithm, is used to detect the type and concentration of gases. However, sensor drift occurs in realistic application scenarios of the E-nose, which shifts the data distribution in feature space and causes a decrease in prediction accuracy. Drift compensation algorithms are therefore receiving increasing attention in the E-nose field. In this paper, a novel method, Wasserstein Distance Learned Feature Representations (WDLFR), is put forward for drift compensation based on domain invariant feature representation learning. It regards a neural network as a domain discriminator that measures the empirical Wasserstein distance between the source domain (data without drift) and the target domain (drift data). The WDLFR minimizes the Wasserstein distance by optimizing the feature extractor in an adversarial manner; the Wasserstein distance offers good gradients and a generalization bound for domain adaption. Finally, experiments are conducted on a real E-nose dataset from the University of California, San Diego (UCSD). The experimental results demonstrate that the proposed method outperforms all compared drift compensation methods and that the WDLFR significantly reduces the effect of sensor drift.

1. Introduction

The electronic nose (E-nose), also known as machine olfaction, consists of a gas sensor array and corresponding pattern recognition algorithms and is used to identify gases. Zhang et al. [1] and He et al. [2] used the E-nose for air quality monitoring. Yan et al. [3] utilized the E-nose to analyze disease. Rusinek et al. [4] used it for quality control of food. An increasing number of E-nose systems are being developed for real applications because they are convenient to use, fast, and cheap. However, the sensor drift of the E-nose remains a serious problem that degrades the performance of E-nose systems and is receiving more and more attention. For most chemical sensors, the sensitivity may be influenced by many factors, such as environmental conditions (temperature, humidity, pressure), self-aging, poisoning, etc. A change in sensor sensitivity causes the sensor responses to fluctuate when the E-nose is exposed to the same gas at different times, which is called sensor drift [5]. In this paper, we mainly focus on drift compensation of the sensor.
A number of methods have been applied to cope with the sensor drift of the E-nose. Among signal preprocessing methods [6,7], frequency analysis and baseline manipulation have been adopted to compensate each sensor response. From the perspective of component correction, Artursson et al. [8] proposed a principal component analysis (PCA) based method to correct the entire sensor response and suppress the sensor drift, and orthogonal signal correction (OSC) was proposed in [9,10] for drift compensation of the E-nose. From the angle of the classifier, Vergara et al. [5] proposed an ensemble strategy to enhance the robustness of the classifier and address the sensor drift, and Dang et al. [11] proposed an improved support vector machine ensemble (ISVMEN) method, which improved classification accuracy under drift. In addition, some adaptive methods are also used to handle sensor drift, such as the self-organizing map (SOM) [12] and domain adaption methods [13].
The above methods can suppress the drift to a certain extent, but their effects are limited by weak generalization to drift data. Sensor drift causes the distribution of previously collected samples (data without drift) to differ from that of samples collected later (drift data). This produces a large decrease in classification accuracy when a model trained on drift-free data is directly applied to test samples with drift. Therefore, in the sensor research and pattern recognition communities, finding a drift compensation algorithm with good robustness and adaptability remains challenging.
In these cases, domain adaption techniques are a proper solution to the problem of inconsistent data distributions between the source and the target domain samples. These techniques have broad applications in many research fields, including natural language processing and machine vision [14,15,16]. In the drift compensation setting, the data without drift is viewed as the source domain, and the drift data is considered the target domain. Several scholars have already applied domain adaption techniques to drift compensation. An intuitive idea is to reduce the difference between domain distributions at the feature level, i.e., to learn domain-invariant feature representations. The geodesic flow kernel (GFK) method for drift compensation was presented by Gong et al. [17]; it models the domain shift by integrating an infinite number of subspaces that describe the change in geometric and statistical properties from the source domain to the target domain. An advancement of the GFK for drift compensation, domain adaption by shifting covariance (DASC), was presented in [18]. These methods reduce the sensor drift to a certain extent. However, they project the source and the target samples into separate subspaces, and a single subspace per domain is not sufficient to represent the difference of distributions across domains. In this paper, we are committed to learning domain invariant feature representations in a common feature space, and some scholars have made relevant contributions along this line. A domain regularized component analysis (DRCA) method was proposed by Zhang et al. [19] to map all samples from the two domains to the same feature subspace; it measures the distribution discrepancy between the source and the target domain using the maximum mean discrepancy (MMD). However, a linear mapping cannot strictly ensure "drift-less" properties in the E-nose. Yan et al. [20] minimized the distance between the source and target domain feature representations by maximizing the independence between data features and domain features (the device label and acquisition time of a sample), which mitigates the sensor drift. A domain correction based on kernel transformation (DCKT) method was proposed in [21]; it aligns the distributions of the two domains by mapping all samples to a high-dimensional reproducing kernel space, reducing the sensor drift. Some algorithms from recent deep learning research are also applicable to guiding feature representation learning: representative features of the E-nose were extracted with autoencoders [22,23], and adversarial domain adaption methods were adopted to reduce the discrepancy across domains [24,25,26]. Arjovsky et al. [26] utilized the Wasserstein distance to achieve a great breakthrough in generative modeling. However, there is little research on using the Wasserstein distance to reduce drift in the E-nose.
Inspired by the Wasserstein GAN (WGAN) [26] and spectrally normalized GANs (SN-GANs) [27], a new drift compensation algorithm called Wasserstein Distance Learned Feature Representations (WDLFR) is proposed in this paper. First, the WDLFR measures the distribution discrepancy across domains using the Wasserstein distance, estimating the empirical Wasserstein distance between the feature representations of the source and the target domain by learning an optimal domain discriminator. Then, the WDLFR minimizes the estimated empirical Wasserstein distance by updating a feature extractor network in an adversarial manner. Finally, to make the extracted feature representations class-distinguished, the WDLFR incorporates the supervision information of the source domain samples into the feature representation learning; that is, the learned feature representations are both domain-invariant and class-distinguished. Empirical studies on an E-nose dataset from the University of California, San Diego (UCSD) demonstrate that the proposed WDLFR outperforms the comparison approaches.
The rest of this paper is organized as follows. The basis of the proposed method is presented in Section 2. Section 3 details the proposed WDLFR approach based on the domain invariant feature representation learning. The experiments and results are discussed in Section 4. Finally, Section 5 concludes this paper.

2. Related Work

This section gives a brief introduction to the Wasserstein distance, which forms the basis of the proposed method.

Wasserstein Distance

Wasserstein distance is used to measure the distance between two probability distributions and is defined as
$$W[P, Q] = \inf_{\gamma \in \Gamma[P, Q]} \iint \gamma(x, y)\, \rho(x, y)\, dx\, dy, \quad (1)$$
where $\rho(x, y)$ is a cost function representing the cost of transporting mass from instance $x$ to $y$; common choices are based on the $\ell_p$ norm, such as the 1-norm $\|x - y\|_1$ and the 2-norm $\|x - y\|_2$. Owing to the equivalence of norms, the Wasserstein distances induced by these costs are close to one another. $\gamma \in \Gamma(P, Q)$ indicates that $\gamma$ is a joint distribution satisfying the constraints $\int \gamma(x, y)\, dy = P(x)$ and $\int \gamma(x, y)\, dx = Q(y)$ simultaneously, where $P$ and $Q$ are the marginal distributions. In fact, the Wasserstein distance metric arises in the problem of optimal transport: $\gamma(x, y)$ is viewed as a randomized scheme for transporting goods from a random location $x$ to another random location $y$, subject to the marginal constraints $x \sim P$ and $y \sim Q$. If the cost of transporting a unit of goods from $x \sim P$ to $y \sim Q$ is given by $\rho(x, y)$, then $W[P, Q]$ is the minimum expected transport cost.
The Kantorovich-Rubinstein theorem shows that the dual form of Wasserstein distance [28] can be written as follows
$$W[P, Q] = \sup_{\|f\|_L \le 1} \mathbb{E}_{x \sim P}[f(x)] - \mathbb{E}_{x \sim Q}[f(x)], \quad (2)$$
where the Lipschitz constraint is used to limit the rate of change of the function value and is defined as $\|f\|_L = \sup |f(x) - f(y)| / \rho(x, y)$. In this paper, for simplicity, Equation (2) is viewed as the final Wasserstein distance, and [29] has shown that the Wasserstein distance has a good gradient and generalization bound for domain adaption under the Lipschitz constraint.
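To make Equation (2) concrete, the following minimal check (our illustration, not part of the original experiments) uses SciPy, whose wasserstein_distance implements the closed-form one-dimensional solution under the 1-norm cost:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
p_samples = rng.normal(loc=0.0, scale=1.0, size=5000)  # samples from P
q_samples = rng.normal(loc=2.0, scale=1.0, size=5000)  # samples from Q, mean shifted by 2

# For two equal-variance Gaussians, the 1-Wasserstein distance equals the
# mean shift, so the estimate should be close to 2.0.
print(wasserstein_distance(p_samples, q_samples))
```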

3. Wasserstein Distance Learned Feature Representations (WDLFR)

3.1. Problem Definition

In domain adaption techniques, the source and the target domain are denoted by "S" and "T", respectively. We have a training set $X_S = \{(x_i^s, y_i^s)\}_{i=1}^{n_s}$ of $n_s$ labeled samples from the source domain $D_S$, and a testing set $X_T = \{x_j^t\}_{j=1}^{n_t}$ of $n_t$ unlabeled samples from the target domain $D_T$. It is assumed that the source and the target domain share the same feature space but have different marginal distributions ($P_{x^s}$ and $P_{x^t}$, respectively). The purpose of domain adaption is to reduce the divergence between the two domains so that the classifier trained on the source domain can be directly applied to the target domain.

3.2. Domain Invariant Feature Representation Learning

The sensor drift of the E-nose leads to inconsistent data distributions between the previously collected samples (source domain) and the later collected samples (target domain), which means that a model trained on source domain samples may be highly biased on the target domain. To solve this problem, a new method (WDLFR) is proposed in this paper. It learns feature representations that are invariant to the change of domain by minimizing the empirical Wasserstein distance between the source and the target feature representations in an adversarial training manner.
The adversarial method is composed of two parts, a feature extractor and a domain discriminator, each implemented by a neural network. The feature extractor network learns the feature representations of the source and the target domain, and the domain discriminator estimates the empirical Wasserstein distance between the feature representations of the two domains. First, considering a sample $x \in \mathbb{R}^m$ from either domain, the feature extractor network learns a function $f_g: \mathbb{R}^m \to \mathbb{R}^d$ that maps the sample to a $d$-dimensional representation with network parameters $\theta_g$. The feature representation is computed as $h = f_g(x)$, and the feature representation distributions of the source and the target domain are $P_{h^s}$ and $P_{h^t}$, respectively. Therefore, the Wasserstein distance between the feature representation distributions $P_{h^s}$ and $P_{h^t}$ can be expressed via Equation (2):
$$W[P_{h^s}, P_{h^t}] = \sup_{\|f\|_L \le 1} \mathbb{E}_{h \sim P_{h^s}}[f(h)] - \mathbb{E}_{h \sim P_{h^t}}[f(h)]. \quad (3)$$
For the function $f$, we can train a domain discriminator, as suggested in [28], to learn a function $f_w: \mathbb{R}^d \to \mathbb{R}$ that maps a feature representation to a real number with corresponding network parameters $\theta_w$. The Wasserstein distance can then be reformulated as
$$W[P_{h^s}, P_{h^t}] = \sup_{\|f_w\|_L \le 1} \mathbb{E}_{h \sim P_{h^s}}[f_w(h)] - \mathbb{E}_{h \sim P_{h^t}}[f_w(h)]. \quad (4)$$
According to the feature extractor network, the feature representations of the source and the target domain are $h^s = f_g(x^s)$ and $h^t = f_g(x^t)$, respectively. The Wasserstein distance between the feature representation distributions of the source and the target domain can thus be written as
$$W[P_{h^s}, P_{h^t}] = \sup_{\|f_w\|_L \le 1} \mathbb{E}_{h \sim P_{h^s}}[f_w(h)] - \mathbb{E}_{h \sim P_{h^t}}[f_w(h)] = \sup_{\|f_w\|_L \le 1} \mathbb{E}_{x \sim P_{x^s}}[f_w(f_g(x))] - \mathbb{E}_{x \sim P_{x^t}}[f_w(f_g(x))]. \quad (5)$$
If the domain discriminator function $f_w$ satisfies the Lipschitz constraint with Lipschitz norm bounded by 1, the empirical Wasserstein distance can be approximated by maximizing the domain discriminator loss $l_{wd}$ with respect to the parameters $\theta_w$,
$$\max_{\theta_w} l_{wd}, \quad (6)$$
where the domain discriminator loss $l_{wd}$ is
$$l_{wd}(x^s, x^t) = \frac{1}{n_s} \sum_{x^s \in X_S} f_w(f_g(x^s)) - \frac{1}{n_t} \sum_{x^t \in X_T} f_w(f_g(x^t)). \quad (7)$$
Here, the question arises of how to enforce the Lipschitz constraint. A weight clipping method was presented in [26], which limits all weight parameters of the domain discriminator to the range [−c, c] after each gradient update. However, [30] pointed out that weight clipping easily causes vanishing and exploding gradients. Gulrajani et al. [30] proposed a more suitable gradient penalty method to make the domain discriminator satisfy the Lipschitz constraint, which obtains good results in most cases. However, the linear gradient interpolation method can only ensure that the Lipschitz constraint is satisfied in a small region, and interpolations between samples with different labels may not satisfy the constraint; these disadvantages are pointed out in [27]. As suggested in [27], a more reasonable approach is to normalize the weight parameters $\theta_w$ by spectral normalization after each gradient update. The merit of spectral normalization is that the domain discriminator $f_w$ satisfies the Lipschitz constraint no matter how the parameters $\theta_w$ change. Therefore, spectral normalization is used here to make the domain discriminator $f_w$ satisfy the Lipschitz constraint.
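As a rough illustration of the spectral normalization idea (our sketch, following the power-iteration scheme of [27] rather than any code from this paper), each weight matrix of the discriminator is divided by an estimate of its largest singular value after every update:

```python
import numpy as np

def spectral_normalize(W, u=None, n_power_iterations=1, eps=1e-12):
    """Estimate sigma(W), the largest singular value of W, by power iteration
    and return W / sigma(W), whose spectral norm is approximately 1."""
    if u is None:
        u = np.random.randn(W.shape[0])
    for _ in range(n_power_iterations):
        v = W.T @ u
        v = v / (np.linalg.norm(v) + eps)
        u = W @ v
        u = u / (np.linalg.norm(u) + eps)
    sigma = u @ W @ v        # Rayleigh-quotient estimate of the top singular value
    return W / sigma, u      # u is carried across updates, as in SN-GANs

# Dividing every layer's weights by sigma bounds each layer's Lipschitz constant
# by roughly 1, so the composed discriminator f_w stays approximately 1-Lipschitz.
```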
Now the Wasserstein distance is continuous and differentiable almost everywhere, so an optimal domain discriminator can be trained first. Then, fixing the optimal domain discriminator parameters and minimizing the Wasserstein distance, the feature extractor network learns feature representations with reduced domain discrepancy. Therefore, the feature representations can be estimated by solving the minimax problem
$$\min_{\theta_g} \max_{\theta_w} l_{wd}. \quad (8)$$
Finally, by iteratively learning feature representations with lower Wasserstein distance, the adversarial training learns domain invariant feature representations.
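In code, the inner maximization of Equation (8) is gradient ascent on the empirical loss of Equation (7). A minimal TensorFlow sketch of ours (with f_g and f_w standing for any Keras models of compatible shapes) might look as follows:

```python
import tensorflow as tf

def empirical_wd(f_g, f_w, x_s, x_t):
    # Equation (7): mean critic score on source features minus target features.
    return tf.reduce_mean(f_w(f_g(x_s))) - tf.reduce_mean(f_w(f_g(x_t)))

def critic_ascent_step(f_g, f_w, x_s, x_t, opt_w):
    # One gradient-ascent step on l_wd w.r.t. the discriminator parameters
    # theta_w only; the feature extractor f_g is held fixed here.
    with tf.GradientTape() as tape:
        loss = -empirical_wd(f_g, f_w, x_s, x_t)  # minimizing -l_wd maximizes l_wd
    grads = tape.gradient(loss, f_w.trainable_variables)
    opt_w.apply_gradients(zip(grads, f_w.trainable_variables))
```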

3.3. Combining with Supervision Signals

The final purpose of this paper is to ensure that the classifier of the source domain performs well in the target domain. The above domain adaption method alone may not learn the optimal feature representations: the WDLFR method learns domain invariant feature representations and guarantees their transferability, so the source domain classifier is applicable to the target domain, but the learned domain invariant feature representations are not sufficiently class-distinguished. Therefore, the supervision information of the training set from the source domain is integrated into the domain invariant feature representation learning, as suggested in [24]. The overall framework of the algorithm is shown in Figure 1.
Next, the combination of the feature representation learning and the classifier is introduced. Several layers can be added after the feature extractor network as the classifier. The classifier $f_c: \mathbb{R}^d \to \mathbb{R}^l$ computes the Softmax prediction with network parameters $\theta_c$, where $l$ is the number of classes. The Softmax prediction is mainly used in multi-classification problems; it divides the whole space according to the number of classes to ensure that the classes are separable. Finally, the empirical loss of the classifier in the source domain is given by
$$l_c(x^s, y^s) = \min_{\theta_c} \sum_{i=1}^{n_s} L(f_c(x_i^s), y_i^s), \quad (9)$$
where $L(f_c(x_i^s), y_i^s)$ is the cross-entropy between the predicted probability distribution and the one-hot encoding of the class label for the labeled source data,
$$L(f_c(x_i^s), y_i^s) = -\sum_{k=1}^{l} 1(y_i^s = k) \cdot \log f_c(f_g(x_i^s))_k. \quad (10)$$
Here $1(y_i^s = k)$ is an indicator function, and $f_c(f_g(x_i^s))_k$ denotes the $k$-th dimension of the distribution $f_c(f_g(x_i^s))$. Therefore, the final empirical loss of the source domain classifier is expressed as
$$l_c(x^s, y^s) = \min_{\theta_c} \left( -\frac{1}{n_s} \sum_{i=1}^{n_s} \sum_{k=1}^{l} 1(y_i^s = k) \cdot \log f_c(f_g(x_i^s))_k \right). \quad (11)$$
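Equation (11) is the standard categorical cross-entropy averaged over the source batch; as a quick numerical check (with made-up Softmax outputs, not data from the paper), TensorFlow's built-in loss reproduces it:

```python
import numpy as np
import tensorflow as tf

# Softmax outputs f_c(f_g(x_s)) for 3 samples and l = 6 classes (made-up values).
probs = tf.constant([[0.70, 0.10, 0.05, 0.05, 0.05, 0.05],
                     [0.10, 0.60, 0.10, 0.10, 0.05, 0.05],
                     [0.20, 0.20, 0.30, 0.10, 0.10, 0.10]])
y_s = tf.constant([0, 1, 2])  # integer labels; the indicator 1(y = k) picks one term

loss = tf.keras.losses.SparseCategoricalCrossentropy()(y_s, probs)
# Equation (11) by hand: -(log 0.70 + log 0.60 + log 0.30) / 3
print(float(loss), -(np.log(0.70) + np.log(0.60) + np.log(0.30)) / 3)
```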
Finally, the overall objective function is obtained by combining Equations (8) and (11):
$$\min_{\theta_g, \theta_c} \left( l_c + \lambda \max_{\theta_w} l_{wd} \right), \quad (12)$$
where $\lambda$ is a coefficient parameter that controls the balance between class-distinguished and transferable feature representation learning. The process of the WDLFR is shown in Algorithm 1.
The WDLFR algorithm can be implemented using standard back-propagation with two nested iterations. Within a mini-batch containing labeled source data and unlabeled target data, the domain discriminator is first trained to an optimal point by gradient ascent. The mini-batch gradient ascent method divides all samples of the two domains into several batches and updates the network parameters batch by batch, which reduces the computational complexity; in other words, it divides the training set into several small training sets. Then, to reduce the distribution discrepancy across domains, we simultaneously minimize the estimated empirical Wasserstein distance and the classification loss computed on the labeled source samples to update the feature extractor network. Finally, the learned feature representations are domain-invariant and class-distinguished, since the parameter $\theta_g$ receives gradients from both the domain discriminator and the classifier.
The sensor drift changes the features of the collected data and thereby makes the data distributions differ. Domain adaption techniques can reduce the difference in distributions between domains. Therefore, the proposed WDLFR method can be used to reduce the drift of the E-nose.
Algorithm 1 The Proposed WDLFR Method: Wasserstein Distance Learned Feature Representations Combined with the Classifier
Require: Labeled source data $X_S$, unlabeled target data $X_T$, mini-batch size $m$, total training iterations $n$, training steps of the domain discriminator $k$, coefficient parameter $\lambda$, learning rate of the domain discriminator $\alpha$, learning rate of the feature representation learning and classifier $\beta$.
1. Initialize the feature extractor, domain discriminator, and classifier with random weights $\theta_g$, $\theta_w$, $\theta_c$
2. Repeat (for $n$ total training iterations):
3.  Sample $m$ instances $\{(x_i^s, y_i^s)\}_{i=1}^{m}$ from $X_S$ and $m$ instances $\{x_i^t\}_{i=1}^{m}$ from $X_T$
4.  For $i = 1, \dots, k$ do
5.   $h^s \leftarrow f_g(x^s)$, $h^t \leftarrow f_g(x^t)$
6.   $\theta_w \leftarrow \theta_w + \alpha \nabla_{\theta_w} l_{wd}(x^s, x^t)$
7.   Calculate the spectral normalization weights $\bar{W}_{SN}$ and apply them to $\theta_w$
8.   $\theta_w \leftarrow \theta_w + \alpha \nabla_{\theta_w} l_{wd}(x^s, x^t)$
9.  End for
10. $\theta_c \leftarrow \theta_c - \beta \nabla_{\theta_c} l_c(x^s, y^s)$
11. $\theta_g \leftarrow \theta_g - \beta \nabla_{\theta_g} [l_c(x^s, y^s) + \lambda\, l_{wd}(x^s, x^t)]$
12. Until $\theta_g$, $\theta_w$, $\theta_c$ converge
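The authors' code is not given here, so the following is a compact TensorFlow sketch of Algorithm 1 under our own assumptions: network sizes are taken from Section 4.2 below, spectral normalization uses an exact SVD rather than the power iteration of [27], and all hyperparameters are placeholders.

```python
import numpy as np
import tensorflow as tf

def build_networks(input_dim=128, feat_dim=200, n_classes=6):
    # Sizes as reported in Section 4.2: 128 -> 200 feature extractor,
    # 200 -> 10 -> 1 domain discriminator, 200 -> 6 Softmax classifier.
    f_g = tf.keras.Sequential(
        [tf.keras.layers.Dense(feat_dim, activation="relu", input_shape=(input_dim,))])
    f_w = tf.keras.Sequential(
        [tf.keras.layers.Dense(10, activation="relu", input_shape=(feat_dim,)),
         tf.keras.layers.Dense(1)])
    f_c = tf.keras.Sequential(
        [tf.keras.layers.Dense(n_classes, activation="softmax", input_shape=(feat_dim,))])
    return f_g, f_w, f_c

def spectral_normalize(model):
    # Step 7: divide each kernel by its largest singular value so that the
    # discriminator f_w stays roughly 1-Lipschitz after every update.
    for layer in model.layers:
        sigma = tf.linalg.svd(layer.kernel, compute_uv=False)[0]
        layer.kernel.assign(layer.kernel / sigma)

def train_wdlfr(x_src, y_src, x_tgt, n=1000, k=5, m=32, lam=0.1,
                alpha=1e-4, beta=1e-4, seed=0):
    f_g, f_w, f_c = build_networks()
    opt_w = tf.keras.optimizers.Adam(alpha)
    opt_gc = tf.keras.optimizers.Adam(beta)
    ce = tf.keras.losses.SparseCategoricalCrossentropy()
    rng = np.random.default_rng(seed)
    for _ in range(n):
        idx_s = rng.choice(len(x_src), m)        # step 3: sample mini-batches
        idx_t = rng.choice(len(x_tgt), m)
        xs, ys, xt = x_src[idx_s], y_src[idx_s], x_tgt[idx_t]
        for _ in range(k):                       # steps 4-9: train the discriminator
            with tf.GradientTape() as tape:
                l_wd = (tf.reduce_mean(f_w(f_g(xs)))
                        - tf.reduce_mean(f_w(f_g(xt))))
                neg_l_wd = -l_wd                 # minimizing -l_wd == ascent on l_wd
            grads = tape.gradient(neg_l_wd, f_w.trainable_variables)
            opt_w.apply_gradients(zip(grads, f_w.trainable_variables))
            spectral_normalize(f_w)
        with tf.GradientTape() as tape:          # steps 10-11: update f_c and f_g
            h_s = f_g(xs)
            l_c = ce(ys, f_c(h_s))
            l_wd = tf.reduce_mean(f_w(h_s)) - tf.reduce_mean(f_w(f_g(xt)))
            loss = l_c + lam * l_wd              # Equation (12)
        var_gc = f_g.trainable_variables + f_c.trainable_variables
        opt_gc.apply_gradients(zip(tape.gradient(loss, var_gc), var_gc))
    return f_g, f_c
```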

4. Experiments

In this section, the real sensor drift benchmark dataset of the E-nose from UCSD is used to evaluate the effectiveness of the WDLFR method, and the experimental results of the proposed method are compared with those of other drift compensation algorithms for the E-nose.

4.1. Sensor Drift Benchmark Dataset

The real sensor drift benchmark dataset, spanning three years, was collected by Vergara et al. [5] at UCSD. The sensor array of the E-nose was composed of 16 chemical sensors, each of which provided 8 features per sample, so each sample has a 128-dimensional (16 × 8) feature vector. The E-nose was used to measure six gases (acetone, acetaldehyde, ethanol, ethylene, ammonia, and toluene) at different concentrations. A total of 13,910 samples were gathered over 36 months from January 2008 to February 2011 and split into 10 batches in chronological order. The sensor responses of the tenth batch were deliberately collected after the E-nose had been powered off for five months. During those five months the sensors were exposed to serious and irreversible contamination, so the operating temperature of the chemical sensor array in the sensor chamber could not return to its normal range. Consequently, the collected samples suffer from serious drift, and the tenth batch exhibits the most serious drift of all batches. The collection period and the number of samples per class for each batch are summarized in Table 1. More information on the dataset can be found in [5].
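As a practical aside (our sketch, not part of the original paper), assuming the libsvm-formatted files batch1.dat through batch10.dat distributed with the UCI copy of this dataset, a single batch can be loaded with scikit-learn:

```python
from sklearn.datasets import load_svmlight_file

def load_batch(path):
    # Each line holds one sample: a gas label 1..6 followed by 128 indexed
    # features (16 sensors x 8 features per sensor).
    X, y = load_svmlight_file(path, n_features=128)
    return X.toarray(), y.astype(int) - 1  # labels remapped to 0..5

X1, y1 = load_batch("batch1.dat")
print(X1.shape)  # (445, 128) for Batch 1, per Table 1
```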
To observe the data distribution discrepancy of all batches more intuitively, the two-dimensional principal component scatter points of the original data are plotted in Figure 2. It is clearly observed that the 2D subspace distribution of Batch 1 is significantly inconsistent with the other batches due to the impact of the sensor drift. If Batch 1 is used as the source domain for training and Batch b, b = 2, …, 10 (the target domain) for testing, the recognition accuracy will be heavily biased. One likely reason is that this setup violates a basic assumption of machine learning: the training set and the test set should follow the same or similar probability distributions. In this case, the distributions of the two domains can be aligned by learning domain invariant feature representations.
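A Figure 2 style diagnostic is straightforward to reproduce; the following is a minimal sketch of ours, reusing the hypothetical load_batch helper assumed above:

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

X1, _ = load_batch("batch1.dat")    # source domain
Xb, _ = load_batch("batch10.dat")   # target domain with the most serious drift

pca = PCA(n_components=2).fit(X1)   # fit PC1/PC2 on the source batch
for X, name in [(X1, "Batch 1 (source)"), (Xb, "Batch 10 (target)")]:
    pc = pca.transform(X)
    plt.scatter(pc[:, 0], pc[:, 1], s=5, label=name)
plt.xlabel("PC1"); plt.ylabel("PC2"); plt.legend(); plt.show()
```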

4.2. WDLFR Implementation Details

In this paper, all experiments are performed using TensorFlow, and the training model is optimized using the Adam optimizer. The advantage of the Adam optimizer is that the learning rate at each iteration has a clear range, which makes the parameter updates very stable. Under the best-performing configuration, the networks of the WDLFR method are constructed as follows. The feature extractor network contains an input layer of 128 neurons and an output layer of 200 neurons. The domain discriminator has an input layer of 200 nodes, one hidden layer of 10 nodes, and an output layer of 1 node. The classifier is composed of an input layer of 200 nodes and an output layer of 6 nodes. All activation functions are ReLU, except for the Softmax at the classifier output. After normalizing all samples from the source and target domains, the samples are first fed into the feature extractor network. Then, the extracted source domain features are fed into the classifier, while the source and target domain features are fed into the domain discriminator to estimate the Wasserstein distance. Finally, the feature extractor network is updated by training the classifier and the domain discriminator at the same time. As a result, the features extracted by the feature extractor network are domain invariant, and the distribution consistency between the source and target domains is greatly improved.

4.3. The Experiment Results and Analysis

Classification accuracy is used as the criterion for judging drift reduction; the goal of aligning the source and target distributions with the WDLFR method is precisely to improve the classifier's performance. All experiments are conducted under Experimental Settings 1 and 2. To verify the effectiveness of the proposed WDLFR method, it is compared with principal component analysis (PCA) [8], linear discriminant analysis (LDA) [31], domain regularized component analysis (DRCA) [19], and SVM ensembles (SVM-rbf, SVM-comgfk). The two settings, sketched in code below, are:
Setting 1: Take Batch 1 as the source domain for training the model, and test on Batch b, b = 2, 3, …, 10.
Setting 2: Take Batch b as the source domain for training the model, and test on Batch (b + 1), b = 1, 2, 3, …, 9.
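For concreteness, the two protocols reduce to lists of batch pairs. The sketch below is ours, assuming a hypothetical helper train_and_score(src, tgt) that wraps the train_wdlfr sketch from Section 3 (training on batch src and returning accuracy on batch tgt):

```python
# Setting 1: Batch 1 -> Batch b; Setting 2: Batch b -> Batch b+1.
setting1 = [(1, b) for b in range(2, 11)]
setting2 = [(b, b + 1) for b in range(1, 10)]

for name, tasks in [("Setting 1", setting1), ("Setting 2", setting2)]:
    accs = [train_and_score(src, tgt) for src, tgt in tasks]  # hypothetical helper
    print(f"{name}: average accuracy = {sum(accs) / len(accs):.2f}%")
```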
Under Setting 1, the first batch, with labeled data, is used as the source domain, and the b-th batch (b = 2, 3, …, 10), with unlabeled data, is considered the target domain. If learning domain invariant feature representations from a pair of domains is regarded as one task, a total of nine tasks with pair-wise batches (Batch 1 vs. Batch b, b = 2, 3, …, 10) are implemented under Experimental Setting 1. Following the PCA scatter points in Figure 2, the 2D principal component scatter points of the nine tasks after applying the proposed WDLFR method are shown in Figure 3. Comparing Figure 2 and Figure 3, the distribution discrepancy between the source and target data is greatly reduced, and the distribution consistency is markedly improved. Therefore, the classifier trained on the source data is applicable to the target data.
To represent the effect of the proposed WDLFR method on drift suppression intuitively, the recognition results of all compared algorithms under Experimental Setting 1 are reported in Table 2. First, the average recognition accuracy of the WDLFR is the best, reaching 82.55%, which is 4.92% higher than the second-best method (DRCA). Second, the recognition accuracy on Batch 10 is the lowest of all batches. One likely reason is that the data of Batch 10 were gathered after the E-nose had been powered off for five months, so Batch 10 experiences the most serious drift and its marginal distribution is difficult to align with that of Batch 1. Overall, the proposed WDLFR method is effective for drift compensation. In addition, to reflect the effectiveness of each method intuitively, the recognition accuracy bar chart of each method under Experimental Setting 1 is drawn in Figure 4a, from which it can be clearly seen that the recognition accuracy of the WDLFR for most batches is much higher than that of the other approaches. Since the proposed WDLFR method adopts mini-batch gradient ascent for training, the mini-batch size yielding the highest accuracy for each task is given in Table 3.
Under Experimental Setting 2, the b-th batch is used as the source data and the (b + 1)-th batch as the target data, b = 1, 2, 3, …, 9; i.e., the classification model is trained on Batch b and tested on Batch (b + 1). The comparison results for each algorithm are reported in Table 4, and the corresponding parameters (mini-batch sizes) are shown in Table 5. From Table 4, the proposed algorithm achieves the highest average recognition accuracy, reaching 83.08%, which is 8.86% higher than the second-best method (DRCA). The recognition accuracy bar chart of each method is drawn in Figure 4b; the WDLFR attains the highest performance for most batches. Overall, the comparison results under Settings 1 and 2 confirm that the proposed WDLFR method substantially advances drift reduction and demonstrate its effectiveness and competitiveness.

5. Conclusions

To solve the issue of inconsistent distributions caused by sensor drift in the E-nose, a novel drift compensation algorithm (WDLFR) is proposed in this paper. The WDLFR effectively reduces the distribution discrepancy by exploiting the good gradient property and generalization bound of the Wasserstein distance, and thereby reduces the drift of the E-nose. The characteristics of the WDLFR are as follows: (1) the feature extractor network and the domain discriminator are trained in an adversarial manner, so the features extracted by the feature extractor eventually fool the domain discriminator, yielding domain invariant feature representations; (2) the feature extractor is combined with a classifier to make the learned domain invariant feature representations class-distinguished. Finally, to verify the effectiveness of the WDLFR, we experiment on the E-nose dataset from UCSD; with the proposed WDLFR method, the classification accuracy is better than that of the other comparison algorithms.
In the future, we will extend this work from the perspective of an adaptive classifier, which establishes a residual relationship between the source and the target domain classifiers and combines the feature extractor network with the adaptive classifier to compensate for the drift.

Author Contributions

The work presented here was implemented under the collaboration of all authors. C.L. and Z.L. conceived and designed experiments; Y.T. and C.L. performed the experiments; Z.L., C.L. analyzed the experimental data; Y.T. wrote the paper; Z.L., C.L., J.X., H.Y. participated in paper revision and made many suggestions.

Funding

This work was supported by the Science and Technology Research Program of Chongqing Municipal Education Commission (Grant No. KJQN201800617), Foundation and Frontier Research Project of Chongqing Municipal Science and Technology Commission (Grant No. cstc2018jcyjAX0549).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Zhang, L.; Tian, F.; Kadri, C. On-line sensor calibration transfer among electronic nose instruments for monitoring volatile organic chemicals in indoor air quality. Sens. Actuators B Chem. 2011, 160, 899–909. [Google Scholar] [CrossRef]
  2. He, J.; Xu, L.; Wang, P.; Wang, Q. A high precise E-nose for daily indoor air quality monitoring in living environment. Integr. VLSI J. 2016, 58, 3124–3140. [Google Scholar] [CrossRef]
  3. Yan, K.; Zhang, D.; Wu, D. Design of a Breath Analysis System for Diabetes Screening and Blood Glucose Level Prediction. IEEE Trans. Biomed. Eng. 2014, 61, 2787–2795. [Google Scholar] [CrossRef] [PubMed]
  4. Rusinek, R.; Gancarz, M.; Krekora, M.; Nawrocka, A. A Novel Method for Generation of a Fingerprint Using Electronic Nose on the Example of Rapeseed Spoilage. J. Food Sci. 2018, 84, 51–58. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  5. Vergara, A.; Vembu, S.; Ayhan, T.; Ryan, M.A.; Homer, M.L.; Huerta, R. Chemical gas sensor drift compensation using classifier ensembles. Sens. Actuators B Chem. 2012, 166, 320–329. [Google Scholar] [CrossRef]
  6. Guney, S.; Atasoy, A. An electronic nose system for assessing horse mackerel freshness. In Proceedings of the International Symposium on Innovations in Intelligent Systems & Applications, Trabzon, Turkey, 2–4 July 2012; pp. 112–134. [Google Scholar]
  7. Marco, S.; Gutierrez-Galvez, A. Signal and data processing for machine olfaction and chemical sensing: A review. IEEE Sens. J. 2012, 12, 3189–3214. [Google Scholar] [CrossRef]
  8. Artursson, T.; Eklov, T.; Lundstrom, I.; Mårtensson, P.; Sjöström, M.; Holmberg, M. Drift correction for gas sensors using multivariate methods. J. Chemom. 2000, 14, 711–723. [Google Scholar] [CrossRef]
  9. Feng, J.; Tian, F.; Jia, P.; He, Q.; Shen, Y.; Fan, S. Improving the performance of electronic nose for wound infection detection using orthogonal signal correction and particle swarm optimization. Sens. Rev. 2014, 34, 389–395. [Google Scholar] [CrossRef]
  10. Padilla, M.; Perera, A.; Montoliu, I.; Chaudry, A.; Persaud, K.C.; Marco, S. Drift compensation of gas sensor array data by Orthogonal Signal Correction. Chemom. Intell. Lab. Syst. 2010, 100, 28–35. [Google Scholar] [CrossRef]
  11. Dang, L.; Tian, F.; Zhang, L.; Kadri, C.; Yin, X.; Peng, X.; Liu, S. A novel classifier ensemble for recognition of multiple indoor air contaminants by an electronic nose. Sens. Actuators A Phys. 2014, 207, 67–74. [Google Scholar] [CrossRef]
  12. Zuppa, M.; Distante, C.; Siciliano, P.; Persaud, K.C. Drift counteraction with multiple self-organising maps for an electronic nose. Sens. Actuators B Chem. 2004, 98, 305–317. [Google Scholar] [CrossRef]
  13. De Vito, S.; Fattoruso, G.; Pardo, M.; Tortorella, F.; Di Francia, G. Semi-Supervised Learning Techniques in Artificial Olfaction: A Novel Approach to Classification Problems and Drift Counteraction. IEEE Sens. J. 2012, 12, 3215–3224. [Google Scholar] [CrossRef]
  14. Duan, L.; Xu, D.; Tsang, W.H.; Luo, J. Visual Event Recognition in Videos by Learning from Web Data. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 34, 1667–1680. [Google Scholar] [CrossRef] [PubMed]
  15. Duan, L.; Tsang, I.W.; Xu, D. Domain Transfer Multiple Kernel Learning. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 465–479. [Google Scholar] [CrossRef] [PubMed]
  16. Duan, L.; Xu, D.; Chang, S.F. Exploiting web images for event recognition in consumer videos: A multiple source domain adaptation approach. In Proceedings of the IEEE Conference of the Computer Vision & Pattern Recognition (CVPR 2012), Providence, RI, USA, 16–21 June 2012; Volume 8, pp. 1338–1345. [Google Scholar]
  17. Sha, F.; Shi, Y.; Gong, B.; Grauman, K. Geodesic flow kernel for unsupervised domain adaptation. In Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 16–21 June 2012; IEEE Computer Society: Washington, DC, USA, 2012; Volume 22, pp. 2066–2073. [Google Scholar]
  18. Cui, Z.; Li, W.; Xu, D.; Shan, S.; Chen, X.; Li, X. Flowing on Riemannian Manifold: Domain Adaptation by Shifting Covariance. IEEE Trans. Cybern. 2014, 44, 2264–2277. [Google Scholar]
  19. Lei, Z.; Yan, L.; He, Z.; Liu, J.; Deng, P.; Zhou, X. Anti-drift in E-nose: A subspace projection approach with drift reduction. Sens. Actuators B Chem. 2017, 253, 407–417. [Google Scholar]
  20. Yan, K.; Kou, L.; Zhang, D. Domain Adaptation via Maximum Independence of Domain Features. IEEE Trans. Cybern. 2016, 32, 408–422. [Google Scholar]
  21. Tao, Y.; Xu, J.; Liang, Z.; Xiong, L.; Yang, H. Domain Correction Based on Kernel Transformation for Drift Compensation in the E-Nose System. Sensors 2018, 18, 3209. [Google Scholar] [CrossRef]
  22. Längkvist, M.; Loutfi, A. Unsupervised feature learning for electronic nose data applied to bacteria identification in blood. In Proceedings of the NIPS Workshop Deep Learn and Unsupervised Feature Learn, Granada, Spain, 16 December 2011; pp. 1–7. [Google Scholar]
  23. Längkvist, M.; Coradeschi, S.; Loutfi, A.; Rayappan, J.B.B. Fast classification of meat spoilage markers using nanostructured ZnO thin films and unsupervised feature learning. Sensors 2013, 13, 1578–1592. [Google Scholar] [CrossRef]
  24. Ganin, Y.; Ustinova, E.; Ajakan, H.; Germain, P.; Larochelle, H.; Laviolette, F.; Marchand, M.; Lempitsky, V. Domain-Adversarial Training of Neural Networks. J. Mach. Learn. Res. 2015, 17, 2030–2096. [Google Scholar]
  25. Tzeng, E.; Hoffman, J.; Saenko, K.; Darrell, T. Adversarial Discriminative Domain Adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 22–25 July 2017; pp. 96–110. [Google Scholar]
  26. Arjovsky, M.; Chintala, S.; Bottou, L. Wasserstein GAN. arXiv 2017, 45, 67–82. [Google Scholar]
  27. Miyato, T.; Kataoka, T.; Koyama, M.; Yoshida, Y. Spectral Normalization for Generative Adversarial Networks. In Proceedings of the ICLR, Vancouver, BC, Canada, 30 April–3 May 2018; pp. 23–40. [Google Scholar]
  28. Villani, C. Optimal Transport. In Grundlehren Der Mathematischen Wissenschaften; Springer: Berlin, Germany, 2009; Volume 338, pp. 960–973. [Google Scholar]
  29. Shen, J.; Qu, Y.; Zhang, W.; Yu, Y. Wasserstein Distance Guided Representation Learning for Domain Adaptation; Association for the Advancement of Artificial Intelligence: New Orleans, LA, USA, 2017; Volume 32, pp. 4058–4065. [Google Scholar]
  30. Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A. Improved Training of Wasserstein GANs. arXiv 2017, 32, 99–112. [Google Scholar]
  31. Ye, J.; Janardan, R.; Li, Q. Two-dimensional linear discriminant analysis. Adv. Neural Inf. Process. Syst. 2005, 67, 1569–1576. [Google Scholar]
Figure 1. Wasserstein Distance Learned Feature Representations (WDLFR) combined with the classifier.
Figure 2. Two-dimensional principal component (PC1, PC2) scatter points of 10 batches of data by principal component analysis (PCA).
Figure 3. Two-dimensional principal component scatter points of the source and target domain feature representations after using the proposed WDLFR method.
Figure 4. Recognition accuracy bar chart under Experimental Setting 1 and Setting 2.
Table 1. Sensor drift benchmark dataset.

Batch ID   | Month  | Acetone | Acetaldehyde | Ethanol | Ethylene | Ammonia | Toluene
Batch 1    | 1, 2   | 90      | 98           | 83      | 30       | 70      | 74
Batch 2    | 3~10   | 164     | 334          | 100     | 109      | 532     | 5
Batch 3    | 11~13  | 365     | 490          | 216     | 240      | 275     | 0
Batch 4    | 14, 15 | 64      | 43           | 12      | 30       | 12      | 0
Batch 5    | 16     | 28      | 40           | 20      | 46       | 63      | 0
Batch 6    | 17~20  | 514     | 574          | 110     | 29       | 606     | 467
Batch 7    | 21     | 649     | 662          | 360     | 744      | 630     | 568
Batch 8    | 22, 23 | 30      | 30           | 40      | 33       | 143     | 18
Batch 9    | 24, 30 | 61      | 55           | 100     | 75       | 78      | 101
Batch 10   | 36     | 600     | 600          | 600     | 600      | 600     | 600
Table 2. Recognition Accuracy (%) under Experimental Setting 1. The bold font in the original typesetting marks the highest recognition accuracy of a batch among all compared algorithms.

Methods     | Batch 2 | Batch 3 | Batch 4 | Batch 5 | Batch 6 | Batch 7 | Batch 8 | Batch 9 | Batch 10 | Average
PCA-SVM     | 82.40   | 84.80   | 80.12   | 75.13   | 73.57   | 56.16   | 48.64   | 67.45   | 49.14    | 68.60
LDA-SVM     | 47.27   | 57.76   | 50.93   | 62.44   | 41.48   | 37.42   | 68.37   | 52.34   | 31.17    | 49.91
SVM-rbf     | 74.36   | 61.03   | 50.93   | 18.27   | 28.26   | 28.81   | 20.07   | 34.26   | 34.47    | 38.94
SVM-comgfk  | 74.47   | 70.15   | 59.78   | 75.09   | 73.99   | 54.59   | 55.88   | 70.23   | 41.85    | 64.00
DRCA        | 89.15   | 92.69   | 87.58   | 95.94   | 86.52   | 60.25   | 62.24   | 72.34   | 52.00    | 77.63
WDLFR       | 86.41   | 93.38   | 80.75   | 93.40   | 94.48   | 74.65   | 79.59   | 77.66   | 62.64    | 82.55
Table 3. Corresponding parameter setting (mini-batch size) of the WDLFR under Experimental Setting 1.

Batch ID        | 2  | 3  | 4  | 5  | 6  | 7  | 8  | 9  | 10
Mini-batch size | 12 | 12 | 32 | 16 | 32 | 64 | 14 | 16 | 16
Table 4. Recognition Accuracy (%) under Experimental Setting 2. The bold font in the original typesetting marks the highest recognition accuracy of a batch among all compared algorithms.

Methods     | 1 → 2 | 2 → 3 | 3 → 4 | 4 → 5 | 5 → 6 | 6 → 7 | 7 → 8 | 8 → 9 | 9 → 10 | Average
PCA-SVM     | 82.40 | 98.87 | 83.23 | 72.59 | 36.70 | 74.98 | 58.16 | 84.04 | 30.61  | 69.06
LDA-SVM     | 47.27 | 46.72 | 70.81 | 85.28 | 48.87 | 75.15 | 77.21 | 62.77 | 30.25  | 60.48
SVM-rbf     | 74.36 | 87.83 | 90.06 | 56.35 | 42.52 | 83.53 | 91.84 | 62.98 | 22.64  | 68.01
SVM-comgfk  | 74.47 | 73.75 | 78.51 | 64.26 | 69.97 | 77.69 | 82.69 | 85.53 | 17.76  | 69.40
DRCA        | 89.15 | 98.11 | 95.03 | 69.54 | 50.87 | 78.94 | 65.99 | 84.04 | 36.31  | 74.22
WDLFR       | 86.41 | 92.13 | 96.89 | 90.36 | 74.57 | 83.70 | 89.12 | 84.68 | 46.42  | 83.08
Table 5. Corresponding parameter setting (mini-batch size) of the WDLFR under Experimental Setting 2.

Batch ID        | 1 → 2 | 2 → 3 | 3 → 4 | 4 → 5 | 5 → 6 | 6 → 7 | 7 → 8 | 8 → 9 | 9 → 10
Mini-batch size | 12    | 16    | 32    | 32    | 12    | 64    | 14    | 12    | 16

