Article

A Mixed Gas Component Identification and Concentration Estimation Method for Unbalanced Gas Sensor Array Samples

1 Harbin University of Science and Technology, Harbin 150080, China
2 The Fourth Nineteenth Research Institute of China Electronics Technology Group Corporation, Harbin 150001, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(19), 6254; https://doi.org/10.3390/s25196254
Submission received: 18 August 2025 / Revised: 3 October 2025 / Accepted: 6 October 2025 / Published: 9 October 2025
(This article belongs to the Special Issue Gas Sensing for Air Quality Monitoring)

Abstract

Component identification and concentration estimation of gas mixture components are important for gas detection. However, the accuracy of traditional gas identification decreases when the samples are unbalanced or too few. In this paper, a method based on sample expansion is proposed to solve this problem. Firstly, the ADASYN-ELM method is proposed to identify the composition of a gas mixture: KPCA is used to extract features from the sensor signals, the ADASYN method is used to expand the samples, and the PSO and GA algorithms are used to optimize the parameters of the ELM classification model to complete the qualitative analysis. Secondly, the S-SMOTE-MLSSVR method is put forward for quantitative estimation: the S-SMOTE method is used to expand the samples, and the MLSSVR regression model is optimized by the PSO and GA algorithms to complete the quantitative analysis. The results show that the accuracy rate after sample expansion is generally higher, and the MAPE and RMSE are generally lower, than before sample expansion, indicating that sample expansion has a positive effect on the classification and concentration estimation of mixed gases when samples are extremely unbalanced or too few.

1. Introduction

Gases usually exist as mixtures in industrial and living environments, so identifying the composition and analyzing the concentration of gas mixtures is particularly important, and gas mixture detection has become an active research field. There are three methods for gas detection. The first is sensory evaluation, an evaluation method based on the reaction of human sensory organs to gas; however, it is limited by human subjectivity and the olfactory system, and some gases can also harm the human body. The second is chemical analysis, such as spectroscopy, gas chromatography and mass spectrometry, which uses advanced analytical equipment to measure the composition and concentration of gases; however, the difficulty of sampling, the complexity of operation, the high cost of equipment and the poor real-time performance limit its application to some extent. The last is to use gas sensors, which are low-cost and easy to operate; however, in complex environments or when many types of gases are present, it is difficult to detect gases with a single sensor. In addition, other factors such as the sensitivity of sensing materials to target gases, the reproducibility of sensor arrays and the limited number of discriminable gases also restrict the application of sensors [1,2,3].
Due to the limitations of the above methods, Gardner published a review formally proposing the concept of the “electronic nose” in 1994 [4]. The electronic nose, also known as an artificial olfactory system, is a bionic detection technology that simulates the working mechanism of biological olfaction. It uses sensor array technology, which offers fast response, high sensitivity, low cost and easy processing. Electronic nose technology not only avoids the subjectivity of sensory analysis but also overcomes the complexity and expense of chemical analysis methods. It is widely applied to gas analysis and detection in fields such as pollution control [5,6], medical technology [7,8], oil exploration [9], food safety [10,11], agricultural science [12,13] and environmental science [14,15], among many other applications [16,17,18]. Therefore, electronic nose technology has become an important research direction in the field of gas detection. The electronic nose system is mainly composed of a gas sensor array, a signal preprocessing module and pattern recognition, as shown in Figure 1.
The sensor array is composed of various types of sensors and converts chemical signals into electrical signals via A/D conversion. Signal preprocessing mainly removes noise from, extracts features from and processes the sensor response signals. Pattern recognition performs classification and concentration estimation of the measured gases using machine learning algorithms. At present, the key research directions of the electronic nose are improving the performance of gas sensors and applying various machine learning methods to them [19]. Owing to the nonlinear response of MOS gas sensors, it is difficult to improve mixed gas classification and concentration prediction by relying solely on the selection of gas-sensitive materials; appropriate machine learning algorithms are therefore needed [20]. The machine learning algorithm plays an extremely important role, and its accuracy, time efficiency and anti-interference ability all affect the decision result. Among many research results, the literature [21] argues that more intelligent pattern recognition technology is needed to realize the potential of electronic nose technology, and the literature [22] proposes that reasonable improvement of algorithms is an important support for the development of machine olfaction.
Before pattern recognition, data sets need preprocessing and feature extraction, which provides reasonable data sets for the subsequent recognition models. The main steps of data preprocessing are data cleaning, data reduction and data transformation. Data cleaning “cleans” the data by filling in missing values, smoothing noisy data, smoothing or removing outliers and resolving data inconsistencies. The averaging method can solve the problem of missing data by filling the average of all data into the position to be compensated [23]. Alternatively, based on a proximity principle, the nearest-location data can be selected as the compensation data [24], or a regression model can be built and its predicted value used as the compensation value [25]. Although data reduction represents the data set by a much smaller one, it closely maintains data integrity; mining on the reduced data set is more efficient and produces nearly identical analysis results. Common strategies are dimensionality reduction and dimension transformation; common lossy dimension transformation methods include principal component analysis (PCA), linear discriminant analysis (LDA) and singular value decomposition (SVD) [26,27]. Data transformation includes normalization, discretization and sparse processing of data for the purpose of mining. In the literature [28], data standardization and baseline processing were used as data processing methods to identify nitrogen dioxide and sulfur dioxide in air pollution. Feature extraction methods include PCA and LDA: when the data set is small, PCA extracts features better than LDA, but when the data set is large, LDA performs better than PCA [29].
In classification studies of mixed gases, researchers mostly adopt machine learning methods [30,31,32,33,34], and the classification algorithm needs to be selected according to the characteristics of the samples [35,36,37,38,39,40,41]. Sunny used four thin-film sensors to form a sensor array to identify a gas mixture and estimate its concentration; PCA was used to extract features of the response signals, and ANN and SVM were used for category recognition, achieving good recognition results [42]. Zhao studied the recognition of gas mixture components of organic volatiles: an array of four sensors was used to identify formaldehyde against background gases of acetone, ethanol and toluene, PCA was used for dimensionality reduction, and MLP, SVM and ELM were used as classifiers, among which SVM achieved the best effect and ELM required the shortest training time [43]. Jang used SVM and a paired-graph scheme combined with a sensor array of semiconductor sensors to classify CH4 and CO, obtaining high recognition accuracy [44]. Jung used a sensor array to collect gas and then compared SVM with a fuzzy ARTMAP network; the recognition time of SVM was shorter [45]. Zhao adopted a weighted discriminant extreme learning machine (WDELM) as the classification method; WDELM assigns a different weight to each specific sample through a flexible weighting strategy, which enables it to perform classification tasks under unbalanced class distributions [46].
For gas concentration analysis there are multiple regression [47], neural network [48,49], SVR [50] and other methods [51,52,53,54,55,56]. Zhang used the WCCNN-BiLSTM model to automatically extract time–frequency-domain features of the dynamic response signals from the original signals to identify unknown gases, while the time-domain characteristics of the steady-state response signals were automatically extracted by a many-to-many GRU model to accurately estimate the gas concentration [57]. Piotr proposed an improved cluster-based ensemble model to predict ozone: each improved spiking neural network was trained on a separate set of time series, and the ensemble model provided better prediction results [58]. Liang used the AdaBoost classification algorithm to classify local features of the infrared spectrum and carried out local PLS modeling according to the different features to predict the concentration of a gas mixture; this method addresses the difficult identification and inaccurate quantitative analysis of alkane mixture components in traditional methods [59]. Adak used the multiple linear regression (MVLR) algorithm to predict the concentrations of a two-gas mixture; the relative errors for acetone and methanol were below 6% and 17%, respectively [60].
Based on the above literature, most methods are suitable for data sets with relatively balanced categories. When the sample numbers are extremely unbalanced, traditional methods that take the overall classification accuracy as the learning objective pay too much attention to the majority categories, which degrades the classification or regression performance on the minority class samples; as a result, traditional machine learning does not work well on extremely unbalanced data sets. Secondly, the PCA method often used in the literature solves linear problems, while most problems in real environments are nonlinear. In addition, machine learning algorithms often have many parameters that are difficult to determine; in the literature, the parameters of algorithms such as neural networks or ELM are often obtained by trial and error or by experience, yet parameter selection plays a crucial role in the performance evaluation of an algorithm. When a learning model without optimal parameters is used to detect mixed gas, it cannot be reasonably compared with other algorithms under the evaluation criteria. To solve the above problems, this paper presents a gas mixture detection method suitable for the electronic nose under unbalanced conditions. For the problem of extremely unbalanced and too few samples, the sample expansion methods SMOTE, ADASYN, B-SMOTE, S-SMOTE and CSL-SMOTE are applied to artificially synthesize new samples, so as to alleviate the problem. For nonlinear problems, the Kernel Principal Component Analysis (KPCA) method is used for feature extraction, extending PCA to nonlinear problems through the kernel trick.
To solve the problem of many, hard-to-determine parameters, the PSO and GA optimization methods are used to optimize the parameters of the classification and regression models, which helps the classification and regression methods identify and classify the mixed gas and estimate its concentration.
The rest of the paper is structured as follows. In Part II, the methods of feature extraction, sample expansion, classification recognition and concentration detection are briefly introduced. In Part III, a new method for detecting mixed gas based on the electronic nose is introduced in detail. The verification experiment is carried out in Part IV, and the experimental results are analyzed and discussed. Part V is the summary and outlook.

2. Methods

2.1. Kernel Principal Component Analysis

The KPCA maps the linearly inseparable sample input space into a separable high-dimensional feature space through a nonlinear mapping Φ(·) and performs PCA in this high-dimensional space. Compared with PCA, which solves linear problems, KPCA uses the kernel trick to extend PCA to nonlinear problems [12].
Let X = [x_1, x_2, …, x_N] ∈ ℝ^{M×N} be the preprocessed observation matrix containing N samples, where x_i ∈ ℝ^M is the i-th observed sample of dimension M. The covariance matrix of the samples X mapped into the high-dimensional feature space is
C = (1/N) Φ(X)[Φ(X)]^T = (1/N) Σ_{i=1}^{N} Φ(x_i)Φ(x_i)^T    (1)
where Σ_{i=1}^{N} Φ(x_i) = 0 is assumed and Φ(·) is the nonlinear mapping.
Eigenvalue decomposition of the covariance matrix C gives
λv = Cv = (1/N) Σ_{i=1}^{N} Φ(x_i)Φ(x_i)^T v    (2)
where λ and v denote, respectively, the eigenvalues and eigenvectors of the covariance matrix C; v is an eigenvector in the feature space, i.e., the direction of a principal component. There exists a coefficient vector α = (α_1, α_2, …, α_N)^T that represents the eigenvector v linearly:
v = Σ_{i=1}^{N} α_i Φ(x_i) = Φ(X)α    (3)
Substituting (3) into (2) and multiplying both sides by Φ(X)^T gives
λ Φ(X)^T Φ(X) α = (1/N) Φ(X)^T Φ(X) Φ(X)^T Φ(X) α    (4)
Define K = Φ(X)^T Φ(X); then K is an N × N symmetric positive semidefinite matrix with
K_ij = K(x_i, x_j) = Φ(x_i)^T Φ(x_j)    (5)
where K_ij is the element in row i and column j of the matrix K. Combining Equations (3)–(5), the eigenvalue problem is converted to
Nλα = Kα    (6)
where Nλ is an eigenvalue of K. Performing principal component analysis in the feature space amounts to solving the eigenproblem of Formula (6), which yields the eigenvalues λ_1 ≥ λ_2 ≥ … ≥ λ_N and the corresponding eigenvectors α^1, α^2, …, α^N. The cumulative contribution rate of the first p components is
r_CCR = (Σ_{i=1}^{p} λ_i / Σ_{j=1}^{N} λ_j) × 100%    (7)
where p is the number of retained principal components.
The k-th feature of a newly observed sample x is obtained by projecting Φ(x) onto v^k, where v^k is the eigenvector of the k-th principal direction in the feature space:
t_k = ⟨v^k, Φ(x)⟩ = Σ_{i=1}^{N} α_i^k ⟨Φ(x_i), Φ(x)⟩, k = 1, 2, …, p    (8)
where t_k is the projection of Φ(x) onto v^k. When the centering condition Σ_{i=1}^{N} Φ(x_i) = 0 is not satisfied, K is replaced by the centered kernel matrix K̃:
K̃ = K − I_N K − K I_N + I_N K I_N    (9)
where I_N is the N × N matrix in which every element is 1/N.
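As a concrete illustration, the derivation above (kernel matrix, centering, eigenproblem of Formula (6) and projection of new samples) can be sketched in NumPy. The RBF kernel, the function name `kpca` and the parameter `gamma` are illustrative choices, not part of the method description.

```python
import numpy as np

def kpca(X, p=2, gamma=0.5):
    """Sketch of KPCA: RBF kernel matrix, centering, eigenproblem
    and projection. X is N x M, one sample per row."""
    N = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    one = np.full((N, N), 1.0 / N)                 # the matrix I_N with entries 1/N
    Kc = K - one @ K - K @ one + one @ K @ one     # centered kernel matrix
    eigvals, eigvecs = np.linalg.eigh(Kc)          # solves N*lambda*alpha = K*alpha
    idx = np.argsort(eigvals)[::-1][:p]            # keep the p largest eigenvalues
    lam, alpha = eigvals[idx], eigvecs[:, idx]
    alpha = alpha / np.sqrt(np.maximum(lam, 1e-12))  # normalize so ||v^k|| = 1
    return Kc @ alpha                              # projections t_k for each sample
```

In practice, p can instead be chosen by thresholding the cumulative contribution rate of the sorted eigenvalues.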

2.2. The Safe-Level-SMOTE Method

The SMOTE method can alleviate the over-fitting caused by random oversampling, but it considers only the minority class and ignores the overlap between the synthesized samples and the majority class samples; therefore, most researchers tend to adopt the improved Safe-Level-SMOTE method [61,62,63,64]. Before synthesizing new minority samples, the Safe-Level-SMOTE method assigns a safe level to each minority sample and generates the new samples closer to positions with a high safe level. This alleviates the quality problem of SMOTE-synthesized samples and the problem of fuzzy class boundaries. A schematic diagram of minority class samples synthesized by the Safe-Level-SMOTE algorithm is shown in Figure 2.
The process of the Safe-Level-SMOTE method is as follows:
(1)
Find the k nearest neighbors of p, denote the number of those neighbors belonging to D as sl(p), and denote one chosen neighbor as n.
(2)
Find the k nearest neighbors of n, and denote the number of those neighbors belonging to D as sl(n).
(3)
Set the ratio sl_ratio = sl(p)/sl(n).
where D is the minority class sample set and p is a sample in D.
Case 1: sl_ratio = ∞ and sl(p) = 0, i.e., the k neighbors of the minority sample p are all majority samples; no synthetic data is generated in this case.
Case 2: sl_ratio = ∞ and sl(p) ≠ 0, i.e., sl(n) = 0, so that n lies among the majority class samples; in this case p is duplicated.
Case 3: sl_ratio = 1, i.e., sl(p) = sl(n); a new sample is synthesized between n and p in the same way as SMOTE.
Case 4: sl_ratio > 1, i.e., sl(p) > sl(n); the number of minority samples around p is greater than that around n, so p is considered the safe side, and SMOTE with β ∈ [0, 1/sl_ratio] is used between p and n to synthesize a new sample whose position is biased toward p.
Case 5: sl_ratio < 1, i.e., sl(p) < sl(n); the number of minority samples around p is less than that around n, so n is considered the safe side, and a new sample is synthesized between p and n with β ∈ [1 − sl_ratio, 1], biased toward n.
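The five cases above can be sketched as follows. This is a minimal NumPy illustration assuming Euclidean k-NN and one synthesis attempt per minority sample; the function name and signature are illustrative, not the authors' implementation.

```python
import numpy as np

def safe_level_smote(X_min, X_maj, k=3, seed=0):
    """Sketch of Safe-Level-SMOTE: one synthesis attempt per minority sample."""
    rng = np.random.default_rng(seed)
    X_all = np.vstack([X_min, X_maj])
    n_min = len(X_min)

    def safe_level(x):
        # safe level = number of minority samples among the k nearest neighbours
        d = np.linalg.norm(X_all - x, axis=1)
        nn = np.argsort(d)[1:k + 1]                # skip the point itself
        return int(np.sum(nn < n_min)), nn

    synthetic = []
    for p in X_min:
        sl_p, nn = safe_level(p)
        n = X_all[rng.choice(nn)]                  # a random neighbour of p
        sl_n, _ = safe_level(n)
        if sl_n == 0 and sl_p == 0:                # Case 1: all-majority region, skip
            continue
        if sl_n == 0:                              # Case 2: ratio infinite, duplicate p
            beta = 0.0
        else:
            ratio = sl_p / sl_n
            if ratio == 1:                         # Case 3: plain SMOTE
                beta = rng.uniform(0.0, 1.0)
            elif ratio > 1:                        # Case 4: biased towards p
                beta = rng.uniform(0.0, 1.0 / ratio)
            else:                                  # Case 5: biased towards n
                beta = rng.uniform(1.0 - ratio, 1.0)
        synthetic.append(p + beta * (n - p))
    return np.array(synthetic)
```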

2.3. Adaptive Synthetic Sampling Approach (ADASYN)

Many classification problems face sample imbalance, and most algorithms do not classify well in this case. Researchers usually adopt the SMOTE method to address the issue of sample imbalance. Although SMOTE is better than random oversampling, it still has problems: generating the same number of new samples for every minority sample may increase the overlap between classes and create valueless samples. Therefore, the ADASYN method [65], an improvement of SMOTE, is adopted. Its basic idea is to adaptively generate minority class samples according to their distribution. This not only reduces the learning bias caused by class imbalance but also adaptively shifts the decision boundary toward the samples that are difficult to learn. New samples are then artificially synthesized from the minority class samples and added to the data set.
The process of the ADASYN method is as follows:
Input: training data set D_tr with m samples {(x_i, y_i)}, i = 1, 2, …, m, where x_i is a sample in the n-dimensional feature space X and y_i ∈ Y = {−1, 1} is the class label of x_i. m_s and m_l denote the minority and majority sample sizes, respectively, so m_s ≤ m_l and m_s + m_l = m.
The algorithm process:
(1)
Calculate the imbalance degree:
d = m_s / m_l, d ∈ (0, 1]    (10)
(2)
If d < d_th (d_th is the preset threshold of the maximum tolerable imbalance ratio):
(a)
Calculate the number of synthetic samples to generate for the minority class:
G = (m_l − m_s) × β    (11)
where β ∈ [0, 1] is a parameter specifying the balance level desired after synthesis; β = 1 means the new data set is completely balanced after synthesis.
(b)
For each minority sample x_i, find its K nearest neighbors by Euclidean distance in the n-dimensional space and compute the ratio r_i, defined as
r_i = Δ_i / K, i = 1, 2, …, m_s    (12)
where Δ_i is the number of majority class samples among the K neighbors of x_i, so r_i ∈ [0, 1].
(c)
Normalize r_i by r̂_i = r_i / Σ_{i=1}^{m_s} r_i, so that r̂_i is a density distribution (Σ_i r̂_i = 1).
(d)
Calculate the number of synthetic samples needed for each minority sample x_i:
g_i = r̂_i × G    (13)
where G is the total number of synthetic minority samples from Formula (11).
(e)
For each minority sample x_i, synthesize g_i samples by the following steps:
Loop from 1 to g_i:
(i) randomly select a minority sample x_zi from the K neighbors of x_i; (ii) synthesize the sample s_i = x_i + (x_zi − x_i) × λ, where (x_zi − x_i) is the difference vector in the n-dimensional space and λ ∈ [0, 1] is a random number.
End Loop
As can be seen from the above steps, the key idea of the ADASYN method is to use a density distribution as the criterion to adaptively decide the number of synthetic samples for each minority class sample. From a physical perspective, the weights measure the learning difficulty of the different minority class samples. The data set obtained by ADASYN not only reduces the imbalance of the data distribution (to the balance level defined by the β coefficient) but also forces the learning method to focus on the samples that are difficult to learn.
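The steps (a)–(e) above can be sketched in NumPy as follows; the Euclidean k-NN search, function name and signature are illustrative choices.

```python
import numpy as np

def adasyn(X_min, X_maj, beta=1.0, K=5, seed=0):
    """Sketch of ADASYN following steps (a)-(e)."""
    rng = np.random.default_rng(seed)
    X_all = np.vstack([X_min, X_maj])
    n_min = len(X_min)
    G = int((len(X_maj) - n_min) * beta)           # (a) total synthetic count

    r = np.empty(n_min)
    minority_nn = []
    for i, x in enumerate(X_min):                  # (b) majority ratio r_i
        d = np.linalg.norm(X_all - x, axis=1)
        nn = np.argsort(d)[1:K + 1]                # skip x itself
        r[i] = np.sum(nn >= n_min) / K             # indices >= n_min are majority
        minority_nn.append(nn[nn < n_min])
    r_hat = r / r.sum()                            # (c) density distribution
    g = np.rint(r_hat * G).astype(int)             # (d) per-sample counts

    synthetic = []
    for i, x in enumerate(X_min):                  # (e) interpolate new samples
        cand = minority_nn[i] if len(minority_nn[i]) else np.arange(n_min)
        for _ in range(g[i]):
            x_z = X_min[rng.choice(cand)]
            synthetic.append(x + rng.uniform() * (x_z - x))
    return np.array(synthetic)
```

With β = 1, the number of synthetic samples is approximately m_l − m_s, so the expanded training set is roughly balanced.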

2.4. The Multi-Output Least Squares Support Vector Regression Machine (MLSSVR)

Support vector regression (SVR) is a traditional machine learning method based on solving a convex quadratic programming problem. Its basic idea is to map the input vector to a high-dimensional feature space through a predetermined nonlinear mapping and then perform linear regression in that space, thereby achieving nonlinear regression in the original space [19]. Least squares support vector regression (LSSVR) is an improved version that replaces the inequality constraints of SVR with equality constraints, and MLSSVR is a generalization of LSSVR to the multi-output case.
Suppose a data set {(x_i, y_i)}, i = 1, …, l, where x_i ∈ ℝ^n is the input vector and y_i ∈ ℝ is the output value. A nonlinear mapping φ: ℝ^n → ℝ^{n_h} maps the input to the n_h-dimensional feature space, and the regression function is constructed as
f(x) = φ(x)^T w + b    (14)
where w ∈ ℝ^{n_h} is the weight vector and b ∈ ℝ is the offset coefficient.
To find the best regression function, the norm ‖w‖² = w^T w is minimized. The problem reduces to the following constrained optimization:
min_{w, b} S(w, b) = ½ w^T w, s.t. y = Z^T w + b 1_l    (15)
where y ∈ ℝ^l is the vector composed of the y_i, Z = [φ(x_1), φ(x_2), …, φ(x_l)] ∈ ℝ^{n_h × l}, and 1_l = (1, 1, …, 1)^T ∈ ℝ^l. By introducing the slack variables ξ_i ∈ ℝ, the minimization problem (15) can be transformed into
min_{w, b} J(w, ξ) = ½ w^T w + (γ/2) ξ^T ξ, s.t. y = Z^T w + b 1_l + ξ    (16)
where ξ = (ξ_1, ξ_2, …, ξ_l)^T ∈ ℝ^l is the vector of slack variables and γ ∈ ℝ_+ is a regularization parameter.
In the multi-output case, for a given training set {(x_i, y_i)}, i = 1, …, l, x_i ∈ ℝ^n is the input vector, y_i ∈ ℝ^m is the output vector, and X ∈ ℝ^{l×n} and Y ∈ ℝ^{l×m} are the block matrices composed of the x_i and y_i, respectively. The purpose of MLSSVR is to map the n-dimensional input x_i ∈ ℝ^n to the m-dimensional output y_i ∈ ℝ^m. As in the single-output case, the regression function is
f(x) = φ(x)^T W + b^T    (17)
where W = (w_1, w_2, …, w_m) ∈ ℝ^{n_h × m} is the matrix of weight vectors and b = (b_1, b_2, …, b_m)^T ∈ ℝ^m is the vector of offset coefficients. W and b are found by minimizing the constrained objective
min_{W, b} J(W, Ξ) = ½ trace(W^T W) + (γ/2) trace(Ξ^T Ξ), s.t. Y = Z^T W + repmat(b^T, l, 1) + Ξ    (18)
where Ξ = (ξ_1, ξ_2, …, ξ_m) ∈ ℝ^{l×m} is the matrix of slack vectors. Solving this problem yields W and b, and hence the nonlinear mapping. Following the hierarchical Bayes viewpoint, the weight vector w_i ∈ ℝ^{n_h} (i = 1, …, m) can be decomposed into two parts:
w_i = w_0 + v_i    (19)
where w_0 ∈ ℝ^{n_h} is the mean vector and the v_i (i = 1, …, m) are difference vectors; w_0 and the v_i reflect the commonality and the differences between the outputs, i.e., w_0 carries the general characteristics of the output and v_i carries the specific information of the i-th output component. Equation (18) is then equivalent to the following problem:
min_{w_0, V, b} S(w_0, V, Ξ) = ½ w_0^T w_0 + (λ/2m) trace(V^T V) + (γ/2) trace(Ξ^T Ξ), s.t. Y = Z^T W + repmat(b^T, l, 1) + Ξ    (20)
where V = (v_1, v_2, …, v_m) ∈ ℝ^{n_h × m}, W = (w_0 + v_1, w_0 + v_2, …, w_0 + v_m) ∈ ℝ^{n_h × m}, Z = [φ(x_1), φ(x_2), …, φ(x_l)] ∈ ℝ^{n_h × l}, and λ, γ ∈ ℝ_+ are two regularization parameters.
The Lagrange function corresponding to Equation (20) is
L(w_0, V, b, Ξ, A) = S(w_0, V, Ξ) − trace(A^T (Z^T W + repmat(b^T, l, 1) + Ξ − Y))    (21)
where A = (α_1, α_2, …, α_m) ∈ ℝ^{l×m} is the matrix of Lagrange multiplier vectors.
According to the Karush–Kuhn–Tucker (KKT) optimality conditions, the following linear equations are obtained:
∂L/∂w_0 = 0 ⇒ w_0 = Σ_{i=1}^{m} Z α_i
∂L/∂V = 0 ⇒ V = (m/λ) Z A
∂L/∂b = 0 ⇒ A^T 1_l = 0_m
∂L/∂Ξ = 0 ⇒ A = γ Ξ
∂L/∂A = 0 ⇒ Z^T W + repmat(b^T, l, 1) + Ξ − Y = 0_{l×m}    (22)
Eliminating W and Ξ from Equation (22) gives the linear matrix equation
[ 0_{m×m}  P^T ; P  H ] [ b ; α ] = [ 0_m ; y ]    (23)
where P = blockdiag(1_l, 1_l, …, 1_l) ∈ ℝ^{ml×m} (m blocks), H = Ω + (1/γ) I_{ml} + (m/λ) Q ∈ ℝ^{ml×ml}, K = Z^T Z ∈ ℝ^{l×l}, Ω = repmat(K, m, m) ∈ ℝ^{ml×ml}, Q = blockdiag(K, K, …, K) ∈ ℝ^{ml×ml}, α = (α_1^T, α_2^T, …, α_m^T)^T ∈ ℝ^{ml} and y = (y_1^T, y_2^T, …, y_m^T)^T ∈ ℝ^{ml}. Since the coefficient matrix of Equation (23) is not positive definite, it can be rewritten in the following form:
[ S  0_{m×ml} ; 0_{ml×m}  H ] [ b ; H^{−1} P b + α ] = [ P^T H^{−1} y ; y ]    (24)
where S = P^T H^{−1} P ∈ ℝ^{m×m} is a positive definite matrix; it is not difficult to see that the system (24) is positive definite. The solution b and α of Equation (23) is obtained in three steps:
(1) solve η and μ from H η = P and H μ = y; (2) compute S = P^T η; (3) solve b = S^{−1} η^T y and α = μ − η b.
The corresponding regression function is
f(x) = φ(x)^T W + b^T = φ(x)^T repmat(w_0, 1, m) + φ(x)^T V + b^T = φ(x)^T repmat(Σ_{i=1}^{m} Z α_i, 1, m) + (m/λ) φ(x)^T Z A + b^T    (25)
This article uses the most common RBF kernel function, as follows:
k(x, x_j) = exp(−g ‖x − x_j‖²)    (26)
where g = 1/(2σ²) and σ ∈ ℝ_+ is the kernel width.
The MIMO model differs from the MISO algorithm in its input–output mapping and parameter types. When using this method, an optimization algorithm is needed to optimize the parameters of the model.
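The three-step solution of the linear system and the resulting regression function can be sketched directly in NumPy. The kernel and regularization values below are illustrative; in this paper γ and λ are tuned by PSO/GA.

```python
import numpy as np

def rbf(Xa, Xb, g=1.0):
    """RBF kernel matrix, Eq. (26)."""
    sa = np.sum(Xa**2, axis=1)[:, None]
    sb = np.sum(Xb**2, axis=1)[None, :]
    return np.exp(-g * (sa + sb - 2.0 * Xa @ Xb.T))

def mlssvr_train(X, Y, gamma=1e5, lam=1.0, g=1.0):
    """Solve the MLSSVR linear system by the three-step procedure."""
    l, m = Y.shape
    K = rbf(X, X, g)
    # H = Omega + (1/gamma) I + (m/lam) Q, Omega = repmat(K), Q = blockdiag(K)
    H = np.tile(K, (m, m)) + np.eye(m * l) / gamma + (m / lam) * np.kron(np.eye(m), K)
    P = np.kron(np.eye(m), np.ones((l, 1)))        # blockdiag(1_l, ..., 1_l)
    y = Y.T.reshape(-1)                            # stacked (y_1^T, ..., y_m^T)^T
    eta = np.linalg.solve(H, P)                    # step 1: H eta = P
    mu = np.linalg.solve(H, y)                     #          H mu = y
    S = P.T @ eta                                  # step 2
    b = np.linalg.solve(S, eta.T @ y)              # step 3
    alpha = mu - eta @ b
    return alpha.reshape(m, l), b

def mlssvr_predict(X_train, X_test, alpha, b, lam=1.0, g=1.0):
    """Evaluate the regression function on new inputs."""
    m = alpha.shape[0]
    Kt = rbf(X_test, X_train, g)
    common = Kt @ alpha.sum(axis=0)                # the shared w_0 term
    return common[:, None] + (m / lam) * (Kt @ alpha.T) + b
```

With a large γ the slack term becomes small, so the model nearly interpolates the training outputs.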

3. The Improvement Method

This paper proposes a method of gas mixture identification and concentration detection based on sample expansion. The flow chart of gas identification and concentration detection using the proposed method is shown in Figure 3.
The qualitative analysis of the gas mixture is divided into five steps: data preprocessing, feature extraction, stratified cross-validation, sample expansion, and parameter optimization with qualitative identification.
Step 1: The raw signal is preprocessed to eliminate the difference caused by the baseline to the raw data.
Step 2: The KPCA is used to extract the features of the preprocessed signal. When the cumulative contribution rate of n feature values reaches the set threshold, the first n features are selected to represent the original features.
Step 3: After KPCA feature extraction, use hierarchical five-fold cross-validation to divide the data into five mutually exclusive subsets on average. In each experiment, one subset is selected as the test set, the other four subsets are combined as the training set, and the average of the five results is used as the estimation of the algorithm accuracy.
Step 4: In the training set, the ADASYN method is used to artificially synthesize a few class samples in the class imbalance, and the generated new samples are put into the training set to form a new training set.
Step 5: After sample expansion on the class-unbalanced data set, the ELM method is adopted as the classification method, PSO and GA are used to optimize the parameters of the classification method, and the classification model is obtained. The test set is input into the classification model to identify the gas mixture.
The quantitative analysis of a gas mixture component is divided into four steps: data preprocessing, sample expansion, parameter optimization and quantitative estimation.
Step 1: The original signal is preprocessed to eliminate the influence of the baseline.
Step 2: Arrange the samples in ascending order of concentration in the preprocessed sample set, and cross-select samples as the training set and the test set. In the training set, the S-SMOTE method is used to synthesize artificial samples, and the generated samples are put into the training set to form a new training set.
Step 3: After sample expansion, the MLSSVR method is used as the regression method, and the PSO and GA methods are used to optimize its parameters to obtain the regression model with optimal parameters.
Step 4: Input the test set into the regression model to obtain the estimation of the mixed gas concentration, and use the mean absolute percentage error (MAPE) and root mean square error (RMSE) as the evaluation criteria.
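The two evaluation criteria of Step 4 can be written directly as:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```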

4. The Experiment and Results

4.1. The Experimental Platform and Data Set

In order to verify the validity of the sample-expansion-based gas mixture identification and concentration detection method proposed in this paper, validation experiments were performed on a publicly available UCI data set. The data set was collected on the gas delivery platform facility of the Chemical Signals Laboratory at the BioCircuits Institute, University of California San Diego. The system consists of a data acquisition platform, a power control module and a chemical delivery system. The sensor array includes 16 chemical sensors of four different types (Figaro Company, Meadows, IL, USA): TGS-2600, TGS-2602, TGS-2610 and TGS-2620 (four units of each type). These sensors are integrated with custom signal conditioning and control electronics. Throughout the experiment, the working temperature of the sensors is controlled and their working voltage remains at 5 V. The sensor array continuously records conductivity at a sampling frequency of 100 Hz. The array is placed in a 60 mL measurement chamber, into which the gas sample is injected at a constant flow rate of 300 mL/min. The flow control system is based on three different polyethylene branches. To control the gas flow of each branch separately while keeping the total flow constant, each branch is connected to a different pressurized gas cylinder through a mass flow controller (MFC) system. The first branch controls the flow of dry air provided by Airgas in a pressurized cylinder; the other two branches can be freely connected to any pressurized gas cylinder. The three branches converge to produce the required gas mixture. Finally, the generated mixture is continuously circulated through the measurement chamber and collected by the exhaust system. To obtain accurate and repeatable data, the system is fully operated under computerized control, as shown in Figure 4.
The data set includes two binary gas mixtures: ethylene and methane, and ethylene and CO. In this paper, part of the data set was selected for study, with 161 samples in total. The composition of the experimental sample set is shown in Table 1: there are five gas types (CO, ethylene, ethylene and methane mixture, ethylene and CO mixture, and methane), labeled with category labels 1 to 5, together with the sample size of each gas type.

4.2. The Preprocessing

The relative difference method, shown in Formula (27), is adopted for baseline correction of the signals: it compensates for drift in the chemical sensor array, reduces noise in the sensor responses, and removes the offset between the baseline and the raw data, ensuring the reliability of the data.
T_i = (S_i − S_o) / S_o
where S_o is the baseline value, S_i is the recorded value of the sensor, and T_i is the effective value of the sensor.
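This correction can be sketched in a few lines of NumPy; the sensor readings and baseline below are illustrative values, not taken from the data set:

```python
import numpy as np

def relative_difference(raw, baseline):
    """Relative-difference preprocessing: T_i = (S_i - S_o) / S_o."""
    raw = np.asarray(raw, dtype=float)
    return (raw - baseline) / baseline

# A sensor whose baseline conductivity S_o is 2.0 (illustrative values)
corrected = relative_difference([2.0, 2.5, 3.0], baseline=2.0)
```

Because the result is dimensionless, responses from sensors with different absolute baselines become directly comparable.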

4.3. The Extended Sample

The data set for qualitative analysis is divided into training and test sets using stratified five-fold cross-validation. In the training set, each gas-mixture sample is labeled with its gas type and the sensor-array response is taken as the input; the sample expansion methods are then used to synthesize artificial samples, which are added to the training set to form a new training set. The expansion amounts of the different sample expansion methods are shown in Table 2.
The concentration data set of the mixed gas is arranged in ascending order and samples are alternately selected for the training and test sets. Each sample is labeled with its mixed-gas type, and the sensor-array response and the concentration are taken as inputs; the sample expansion methods are then used to synthesize artificial samples, which are added to the training set to form a new training set. The expansion amounts for each gas under the different sample expansion methods are shown in Table 3.

4.4. The Qualitative Identification

4.4.1. The Stratified K-Fold Cross-Validation

Stratified five-fold cross-validation divides the data into five equal subsets in which the class proportions match those of the original data set. In each run, one subset is selected as the test set and the other four are combined as the training set; the average of the five results is used as the estimate of the algorithm's accuracy. After KPCA feature extraction, the data set is divided into five mutually exclusive subsets of equal size (D1–D5). In each run, one subset from D1 to D5 is selected as the test set and the remaining four subsets are combined as the training set; for example, if D1 is the test set, D2 to D5 form the training set. The results of the five runs are averaged. The data allocation of the stratified five-fold cross-validation is shown in Figure 5.
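The split described above can be sketched with scikit-learn's StratifiedKFold; the per-class sample counts here are hypothetical (chosen only to sum to the 161 samples used in this paper), and the feature matrix is a random placeholder:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Hypothetical class sizes for the five gas types, summing to 161 samples
y = np.repeat([1, 2, 3, 4, 5], [40, 35, 30, 36, 20])
X = np.random.default_rng(0).normal(size=(len(y), 16))  # placeholder 16-sensor features

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
folds = list(skf.split(X, y))

# Every test fold preserves the class proportions of the full data set
for train_idx, test_idx in folds:
    fold_props = np.bincount(y[test_idx], minlength=6)[1:] / len(test_idx)
```

Each of the five test folds keeps roughly the same class ratio as the whole set, which is what makes the accuracy estimate reliable for unbalanced classes.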

4.4.2. The Feature Extraction

After preprocessing, the maximum of each sensor's effective value is selected for each sample to form the data set, and KPCA is used to extract features. In the selection of kernel functions, the polynomial kernel and the neural-network kernel have very strict requirements on parameter selection and are prone to producing ill-conditioned kernel matrices with negative eigenvalues and eigenvectors, which causes the construction of the KPCA transformation model to fail. In comparison, the radial basis kernel yields a transformation matrix with excellent positive definiteness and is suitable over a wide range of parameters. Therefore, in this study, the radial basis kernel was selected as the transformation kernel, and its parameter σ² was left at its default value. The cumulative contribution threshold was set to 95%. When the number of eigenvalues is 5, the cumulative contribution rate exceeds 95%, indicating that the first five principal components can roughly represent all the data, as shown in Figure 6. Therefore, the original features can be characterized by the first five eigenvalues.
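The KPCA step can be sketched directly in NumPy. This is a minimal RBF-kernel PCA with eigenvalue-based component selection, not the exact toolchain used in the paper; the σ² value and the input data are illustrative:

```python
import numpy as np

def rbf_kpca(X, sigma2=1.0, contribution=0.95):
    """Kernel PCA with an RBF kernel; keep the leading components whose
    eigenvalues reach the cumulative-contribution threshold."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma2))
    # Center the kernel matrix in feature space
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    vals, vecs = vals[::-1], vecs[:, ::-1]       # sort eigenvalues descending
    vals = np.clip(vals, 0, None)                # guard tiny negative round-off
    k = int(np.searchsorted(np.cumsum(vals) / vals.sum(), contribution) + 1)
    alphas = vecs[:, :k] / np.sqrt(vals[:k])     # normalized eigenvectors
    return Kc @ alphas, k                        # projected samples, n_components

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 16))                    # illustrative 16-sensor feature vectors
Z, n_components = rbf_kpca(X, sigma2=10.0)
```

The number of retained components is simply the first index at which the cumulative eigenvalue contribution crosses the threshold, mirroring the 95% rule used above.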
In the training set, SMOTE, Borderline-SMOTE (B_SMOTE), Safe-Level-SMOTE (S_SMOTE), Cost-Sensitive-Learning SMOTE (CSL_SMOTE) and ADASYN were used to expand the samples. The distribution of the classes over the first three principal components is shown in Figure 7a, and the samples synthesized by the S_SMOTE method are shown in Figure 7b, where the blue points are the synthetic artificial samples of each class. As the figure shows, the newly generated samples of each class stay within their original categories and do not overlap with other classes, indicating that the synthetic-sample method effectively alleviates the class imbalance. The PSO and GA methods were then used to optimize the parameters of the MRVM, SVM, ELM and SOFTMAX classifiers, and the classification models with optimal parameters were obtained to identify and classify the mixed gas. The results of the MRVM, SVM, ELM and SOFTMAX methods are shown in Table 4, Table 5, Table 6 and Table 7 and Figure 8, Figure 9, Figure 10 and Figure 11. It can be seen from the figures and tables that the accuracy after sample expansion is higher than before expansion, indicating that the sample expansion methods are beneficial to the classification of unbalanced gas-mixture components.
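All of these variants share the same core mechanism: a synthetic sample is placed on the segment between a minority sample and one of its k nearest minority-class neighbours. The sketch below is a minimal SMOTE-style interpolation; the safe-level weighting of S_SMOTE and the adaptive density weighting of ADASYN are deliberately omitted, and the data are random placeholders:

```python
import numpy as np

def smote_like(X_min, n_new, k=5, rng=None):
    """Minimal SMOTE-style oversampling: each synthetic sample lies on the
    segment between a random minority sample and one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    X_min = np.asarray(X_min, dtype=float)
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=2)
    np.fill_diagonal(d, np.inf)                     # a point is not its own neighbour
    nn = np.argsort(d, axis=1)[:, :k]               # k nearest neighbours per point
    base = rng.integers(len(X_min), size=n_new)     # seed samples
    nbr = nn[base, rng.integers(k, size=n_new)]     # random neighbour of each seed
    gap = rng.random((n_new, 1))                    # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[nbr] - X_min[base])

rng = np.random.default_rng(1)
minority = rng.normal(size=(10, 3))                 # placeholder minority class
synthetic = smote_like(minority, n_new=20, k=3, rng=2)
```

Because each synthetic point is a convex combination of two real minority samples, it stays inside the minority class's own region of feature space, which is why the expanded classes in Figure 7b do not spill into neighbouring classes.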

4.4.3. The Classification of Mixed Gases Based on SOFTMAX Method

The results of the SOFTMAX classification method are shown in Table 4 and Figure 8; the K value is the number of nearest neighbors used by the sample expansion method.
Table 4. The classification of mixed gases based on SOFTMAX method.
| SOFTMAX | SMOTE | ADASYN | B_SMOTE | S_SMOTE | CSL_SMOTE |
|---|---|---|---|---|---|
| K | 5 | 5 | 2 | 5 | 5 |
| Unexpanded accuracy (%) | 93.6 | 93.55 | 96.8 | 90.3 | 96.8 |
| Expanded accuracy (%) | 96.8 | 100 | 100 | 96.8 | 100 |
Figure 8. The classification of mixed gases based on SOFTMAX method.

4.4.4. The Classification of Mixed Gases Based on MRVM Method

When the MRVM method is used to classify the mixed gas, the PSO and GA methods are used to optimize the kernel parameter b in MRVM; the kernel function is Gaussian. The results of the MRVM classification method are shown in Table 5 and Figure 9.
Table 5. The classification of mixed gases based on MRVM method.
| MRVM | SMOTE | ADASYN | B_SMOTE | S_SMOTE | CSL_SMOTE |
|---|---|---|---|---|---|
| K | 5 | 5 | 2 | 5 | 5 |
| b (PSO) | 59.7177 | 74.3593 | 93.5487 | 28.0625 | 4.7429 |
| Unexpanded accuracy (%) | 93.6 | 96.8 | 96.8 | 90.3 | 93.6 |
| Expanded accuracy (%) | 96.8 | 96.8 | 100 | 93.6 | 100 |
| b (GA) | 71.6119 | 27.9014 | 74.6933 | 35.3773 | 89.9384 |
| Unexpanded accuracy (%) | 96.8 | 93.6 | 96.8 | 90.32 | 96.8 |
| Expanded accuracy (%) | 100 | 100 | 96.8 | 100 | 97.8 |
Figure 9. The classification of different methods: (a) The classification of gas mixtures based on the PSO-MRVM method; (b) the classification of mixed gases based on the GA-MRVM method.

4.4.5. The Classification of Mixed Gases Based on SVM Method

When the SVM method was used to classify the gas mixture, the PSO and GA methods were used to optimize the kernel parameter g and penalty parameter C in SVM; the kernel function is Gaussian. The results of the SVM classification method are shown in Table 6 and Figure 10.
Table 6. The classification of mixed gases based on SVM method.
| SVM | SMOTE | ADASYN | B_SMOTE | S_SMOTE | CSL_SMOTE |
|---|---|---|---|---|---|
| K | 5 | 5 | 2 | 5 | 5 |
| C (PSO) | 6.7118 | 14.0453 | 5.2927 | 9.8601 | 6.9446 |
| g (PSO) | 3.9934 | 1.7689 | 5.5391 | 4.8145 | 4.8246 |
| Unexpanded accuracy (%) | 93.5 | 93.5 | 93.5 | 96.8 | 93.5 |
| Expanded accuracy (%) | 100 | 96.8 | 96.8 | 96.8 | 96.8 |
| C (GA) | 24.0598 | 19.0548 | 3.7400 | 9.8601 | 99.5516 |
| g (GA) | 3.0456 | 4.3361 | 19.8891 | 0.8145 | 13.2976 |
| Unexpanded accuracy (%) | 96.88 | 93.5 | 93.75 | 96.88 | 96.8 |
| Expanded accuracy (%) | 96.88 | 100 | 100 | 96.88 | 96.8 |
Figure 10. The classification of different methods: (a) The classification of mixed gases based on the PSO-SVM method; (b) the classification of mixed gases based on GA-SVM method.

4.4.6. The Classification of Mixed Gases Based on ELM Method

When the ELM method is used to classify the mixed gas, the PSO and GA methods are used to optimize the weight matrix w (8 × 16) and the bias b (8 × 1) of the ELM. The results of the ELM method are shown in Table 7 and Figure 11.
Table 7. The classification of mixed gases based on ELM method.
| Methods | SMOTE | ADASYN | B_SMOTE | S_SMOTE | CSL_SMOTE |
|---|---|---|---|---|---|
| K | 5 | 5 | 2 | 5 | 5 |
| Unexpanded accuracy (%) (PSO) | 90.3 | 93.6 | 90.3 | 93.55 | 96.8 |
| Expanded accuracy (%) (PSO) | 96.8 | 100 | 100 | 96.77 | 100 |
| Unexpanded accuracy (%) (GA) | 93.6 | 90.3 | 90.3 | 93.6 | 93.6 |
| Expanded accuracy (%) (GA) | 96.8 | 96.7 | 100 | 96.77 | 96.8 |
Figure 11. The classification of different methods: (a) The classification of mixed gases based on the PSO-ELM method; (b) the classification of mixed gases based on GA-ELM method.

4.4.7. The Comparison of Performance Based on Different Method Classifications

When classifying gas mixture components, the PSO and GA methods are used to optimize the parameters of the ELM, SVM, MRVM and SOFTMAX classification methods. To evaluate the overall performance difference of the classification methods across the sample expansion methods, the expansion differences obtained under the two optimization algorithms are averaged for each sample expansion method (for example, the SMOTE extended difference for ELM is the statistical average over PSO-ELM and GA-ELM). The extended mean difference in Figure 12 is the average of the extended differences over SMOTE, ADASYN, B_SMOTE, S_SMOTE and CSL_SMOTE. The extended mean difference of ELM is 2.6% larger than that of SVM, 2.1% larger than that of MRVM, and 1% larger than that of SOFTMAX. Therefore, ELM has the largest extended mean difference, indicating that this classification method has the best overall performance under the proposed sample expansion methods.
GA-SMOTE-ELM extended difference = Accuracy(GA-SMOTE-ELM) − Accuracy(GA-ELM)
SMOTE extended difference = (GA-SMOTE-ELM extended difference + PSO-SMOTE-ELM extended difference) / 2
ELM extended difference = (SMOTE extended difference + ADASYN extended difference + B_SMOTE extended difference + S_SMOTE extended difference + CSL_SMOTE extended difference) / 5
It can be seen from Figure 8, Figure 9, Figure 10 and Figure 11 that the sample expansion methods perform differently under different classification methods. To evaluate the overall performance difference of the sample expansion methods across the classification methods, the expansion differences obtained under the two optimization algorithms are averaged for each classification method (for example, the ELM extended difference is the statistical average over PSO-ELM and GA-ELM for the same sample expansion method); the extended mean difference in Figure 13 is the average of the extended differences over ELM, SVM, MRVM and SOFTMAX. As the figure shows, every expansion mean difference is greater than 0, and ADASYN's expansion mean difference is 0.1% larger than that of B_SMOTE, 1.4% larger than that of S_SMOTE, 1.7% larger than that of SMOTE, and 2.5% larger than that of CSL_SMOTE. The ADASYN method therefore has the largest expansion mean difference, indicating that this sample expansion method has the best overall performance across the classification methods considered.
GA-SMOTE-ELM extended difference = Accuracy(GA-SMOTE-ELM) − Accuracy(GA-ELM)
SMOTE-ELM extended difference = (GA-SMOTE-ELM extended difference + PSO-SMOTE-ELM extended difference) / 2
SMOTE extended difference = (ELM extended difference + SVM extended difference + MRVM extended difference + SOFTMAX extended difference) / 4

4.5. The Quantitative Analysis

In the quantitative analysis of the mixed gas, PSO and GA are used to optimize the corresponding parameters in the regression models, and the regression model with optimal parameters is obtained to estimate the concentration of the mixed gas. To evaluate the effectiveness of this method, appropriate evaluation indexes are needed; in this paper, the mean absolute percentage error (MAPE) and root mean square error (RMSE) are selected as the evaluation indexes of the prediction model, as shown in Equations (34) and (35). In Table 8, Table 9, Table 10, Table 11, Table 12, Table 13, Table 14, Table 15 and Table 16, the MAPE and RMSE of the mixed-gas samples augmented by sample expansion are both lower than before expansion, indicating that the sample expansion method is beneficial to concentration estimation on class-imbalanced data sets.
MAPE = (1/n) Σᵢ₌₁ⁿ |(ŷᵢ − yᵢ) / yᵢ| × 100%
RMSE = √( (1/n) Σᵢ₌₁ⁿ (ŷᵢ − yᵢ)² )
where y i is the true concentration value, y ^ i is the predicted concentration value, and n is the number of samples to be measured.
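Both indexes are one-liners in NumPy; the concentrations below are hand-checkable illustrative values:

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.mean(np.abs((y_pred - y_true) / y_true)) * 100

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.sqrt(np.mean((y_pred - y_true) ** 2))

m = mape([100, 200], [110, 190])   # (10% + 5%) / 2 = 7.5
r = rmse([100, 200], [110, 190])   # sqrt((100 + 100) / 2) = 10.0
```

Note that MAPE is undefined when a true concentration is zero, which is why the single-gas columns of a mixture (true concentration 0) are excluded from the error tables.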

4.5.1. The Quantitative Analysis of Mixed Gases Based on SVR Method

When the SVR method is used for the quantitative analysis of the mixed gas, the PSO and GA methods are used to optimize the penalty coefficient C and kernel parameter g in SVR; the kernel function is Gaussian. The results of the quantitative analysis of mixed gases based on the SVR method are shown in Table 8, Table 9 and Table 10.
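As a sketch of this parameter search, a minimal global-best PSO is shown below. The objective is a stand-in quadratic (minimized at C = 10, g = 0.5) rather than the actual cross-validated SVR error, and all swarm settings and bounds are illustrative, not the paper's:

```python
import numpy as np

def pso(objective, bounds, n_particles=20, n_iters=60,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)            # keep particles in bounds
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Stand-in objective: pretend the validation error is minimized at C=10, g=0.5
best, err = pso(lambda p: (p[0] - 10) ** 2 + (p[1] - 0.5) ** 2,
                bounds=[(0.01, 100), (0.01, 20)])
```

In the actual workflow, the objective would evaluate the model's validation MAPE for a candidate (C, g) pair, so each swarm iteration trains and scores the regressor several times.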
Table 8. The estimation of mixed gas concentration based on the parameter-optimal SMOTE-SVR model.
| Gas composition | CO | Eth. | Met. | Eth. (Eth.–CO mix) | CO (Eth.–CO mix) | Eth. (Eth.–Met. mix) | Met. (Eth.–Met. mix) |
|---|---|---|---|---|---|---|---|
| Number of tests | 15 | 34 | 18 | 8 | 8 | 5 | 5 |
| Mixed sample size | 0 | 0 | 0 | 25 | 25 | 29 | 29 |
| C (PSO) | 33.2384 | 100 | 17.4843 | 50.5712 | 81.671 | 15.0525 | 15.0712 |
| g (PSO) | 0.01 | 0.0817 | 0.01 | 1.0522 | 1.0678 | 0.1 | 7.5242 |
| RMSE (unexpanded) | 0.3336 | 11.8565 | 0.382 | 59.0951 | 1.2374 | 7.7482 | 1.3761 |
| RMSE (expanded) | \ | \ | \ | 16.1838 | 0.3614 | 6.1206 | 0.6009 |
| MAPE (unexpanded) | 0.0267 | 0.0342 | 0.0269 | 0.175 | 0.1088 | 0.0653 | 0.1585 |
| MAPE (expanded) | \ | \ | \ | 0.0464 | 0.0269 | 0.0424 | 0.0458 |
| C (GA) | 27.1136 | 99.5452 | 46.3705 | 8.6435 | 99.1343 | 87.3844 | 17.1865 |
| g (GA) | 0.0422 | 0.2859 | 0.037 | 0.9762 | 0.3067 | 0.0809 | 18.3491 |
| RMSE (unexpanded) | 0.2906 | 12.1342 | 0.2717 | 60.1283 | 0.903 | 17.7985 | 1.4123 |
| RMSE (expanded) | \ | \ | \ | 16.6925 | 0.1512 | 10.0719 | 0.904 |
| MAPE (unexpanded) | 0.0202 | 0.0272 | 0.0114 | 0.1751 | 0.0932 | 0.0736 | 0.1626 |
| MAPE (expanded) | \ | \ | \ | 0.0496 | 0.0126 | 0.0677 | 0.0942 |
Table 9. The estimation of mixed gas concentration based on the parameter-optimal ADASYN-SVR model.
| Gas composition | CO | Eth. | Met. | Eth. (Eth.–CO mix) | CO (Eth.–CO mix) | Eth. (Eth.–Met. mix) | Met. (Eth.–Met. mix) |
|---|---|---|---|---|---|---|---|
| Number of tests | 15 | 34 | 18 | 8 | 8 | 5 | 5 |
| Mixed sample size | 0 | 0 | 0 | 25 | 25 | 29 | 29 |
| C (PSO) | 33.2384 | 100 | 17.4843 | 78 | 60.3982 | 77.906 | 12.4882 |
| g (PSO) | 0.01 | 0.0817 | 0.01 | 0.1 | 0.1 | 1.3654 | 0.2321 |
| RMSE (unexpanded) | 0.3336 | 11.8565 | 0.382 | 70.2749 | 0.9247 | 7.7482 | 1.3761 |
| RMSE (expanded) | \ | \ | \ | 33.6168 | 0.3545 | 6.1206 | 0.6009 |
| MAPE (unexpanded) | 0.0267 | 0.0342 | 0.0269 | 0.1721 | 0.0937 | 0.2867 | 0.0501 |
| MAPE (expanded) | \ | \ | \ | 0.0686 | 0.0167 | 0.0558 | 0.0082 |
| C (GA) | 27.1136 | 99.5452 | 46.3705 | 31.5813 | 43.7329 | 90.7568 | 46.2463 |
| g (GA) | 0.0422 | 0.2859 | 0.037 | 0.9524 | 0.3248 | 0.3658 | 0.1222 |
| RMSE (unexpanded) | 0.2906 | 12.1342 | 0.2717 | 60.4922 | 0.906 | 7.7482 | 1.3761 |
| RMSE (expanded) | \ | \ | \ | 43.391 | 0.1591 | 6.1206 | 0.6009 |
| MAPE (unexpanded) | 0.0202 | 0.0272 | 0.0114 | 0.1752 | 0.0932 | 0.1127 | 0.0841 |
| MAPE (expanded) | \ | \ | \ | 0.0971 | 0.0101 | 0.0514 | 0.0122 |
Table 10. The estimation of mixed gas concentration based on the parameter-optimal S_SMOTE-SVR model.
| Gas composition | CO | Eth. | Met. | Eth. (Eth.–CO mix) | CO (Eth.–CO mix) | Eth. (Eth.–Met. mix) | Met. (Eth.–Met. mix) |
|---|---|---|---|---|---|---|---|
| Number of tests | 15 | 34 | 18 | 8 | 8 | 5 | 5 |
| Mixed sample size | 0 | 0 | 0 | 25 | 25 | 29 | 29 |
| C (PSO) | 33.2384 | 52.8475 | 17.4843 | 100 | 42.4828 | 100 | 22.3426 |
| g (PSO) | 0.01 | 1.1183 | 0.01 | 0.1586 | 0.0694 | 0.01 | 0.2 |
| RMSE (unexpanded) | 0.3336 | 11.8565 | 0.382 | 70.43 | 0.9518 | 7.0667 | 0.5797 |
| RMSE (expanded) | \ | \ | \ | 4.2233 | 0.5892 | 6.5782 | 0.5106 |
| MAPE (unexpanded) | 0.0267 | 0.0342 | 0.0269 | 0.1757 | 0.0936 | 0.0677 | 0.0662 |
| MAPE (expanded) | \ | \ | \ | 0.0101 | 0.0369 | 0.0534 | 0.0596 |
| C (GA) | 27.1136 | 99.5452 | 46.3705 | 38.6996 | 53.4205 | 100 | 53.0962 |
| g (GA) | 0.0422 | 0.2859 | 0.037 | 0.5453 | 1.1621 | 0.3564 | 2.9062 |
| RMSE (unexpanded) | 0.2906 | 12.1342 | 0.2717 | 68.2089 | 1.2667 | 15.8155 | 1.1962 |
| RMSE (expanded) | \ | \ | \ | 0.0135 | 0.9363 | 5.2303 | 0.6657 |
| MAPE (unexpanded) | 0.0202 | 0.0272 | 0.0114 | 0.1724 | 0.1119 | 0.1097 | 0.1347 |
| MAPE (expanded) | \ | \ | \ | 0.0135 | 0.0544 | 0.049 | 0.0472 |

4.5.2. The Quantitative Analysis of Mixed Gases Based on ELM Method

When the ELM method is used for the quantitative analysis of the mixed gas, the PSO and GA methods are used to optimize the weight matrix W (8 × 16) and the bias b (8 × 1) of the ELM. The results of the quantitative analysis of mixed gases based on the ELM method are shown in Table 11, Table 12 and Table 13.
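The ELM itself is compact: random input weights and biases, a sigmoid hidden layer, and output weights solved in closed form by a pseudo-inverse. Below is a minimal regression sketch on synthetic data, with dimensions chosen to echo the 8 × 16 weight matrix; it is an illustration, not the paper's implementation:

```python
import numpy as np

def elm_fit(X, y, n_hidden=8, seed=0):
    """Extreme learning machine: random input weights W and bias b,
    output weights beta solved by least squares (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    W = rng.uniform(-1, 1, size=(n_hidden, X.shape[1]))   # e.g. 8 x 16
    b = rng.uniform(-1, 1, size=n_hidden)                 # e.g. 8 x 1
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))              # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ y                          # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W.T + b)))
    return H @ beta

rng = np.random.default_rng(3)
X = rng.normal(size=(120, 16))                            # placeholder 16-sensor features
y = X[:, 0] * 2.0 + X[:, 1] + rng.normal(scale=0.05, size=120)
W, b, beta = elm_fit(X, y, n_hidden=8)
pred = elm_predict(X, W, b, beta)
```

Because only beta is learned, the whole "training" step is one pseudo-inverse; PSO and GA are used in this paper to pick W and b instead of leaving them purely random.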
Table 11. The estimation of mixed gas concentration based on the parameter-optimal SMOTE-ELM model.
| Gas composition | CO | Eth. | Met. | Eth. (Eth.–CO mix) | CO (Eth.–CO mix) | Eth. (Eth.–Met. mix) | Met. (Eth.–Met. mix) |
|---|---|---|---|---|---|---|---|
| Number of tests | 15 | 34 | 18 | 8 | 8 | 5 | 5 |
| Mixed sample size | 0 | 0 | 0 | 25 | 25 | 29 | 29 |
| Unexpanded RMSE (GA) | 0.0779 | 7.5928 | 0.0834 | 4.2883 | 0.7327 | 1.5992 | 1.1006 |
| Expanded RMSE (GA) | \ | \ | \ | 0.5538 | 0.7063 | 5.4086 | 0.6163 |
| Unexpanded MAPE (GA) | 0.0072 | 0.0242 | 0.0068 | 0.0135 | 0.065 | 0.0125 | 0.1345 |
| Expanded MAPE (GA) | \ | \ | \ | 0.0012 | 0.0275 | 0.0304 | 0.066 |
| Unexpanded RMSE (PSO) | 0.1059 | 8.0137 | 0.0699 | 2.5519 | 1.2904 | 2.215 | 1.0358 |
| Expanded RMSE (PSO) | \ | \ | \ | 1.4328 | 0.3904 | 1.6374 | 0.7018 |
| Unexpanded MAPE (PSO) | 0.0056 | 0.0265 | 0.0036 | 0.0063 | 0.112 | 0.013 | 0.1288 |
| Expanded MAPE (PSO) | \ | \ | \ | 0.0041 | 0.0246 | 0.0126 | 0.0694 |
Table 12. The estimation of mixed gas concentration based on the parameter-optimal ADASYN-ELM model.
| Gas composition | CO | Eth. | Met. | Eth. (Eth.–CO mix) | CO (Eth.–CO mix) | Eth. (Eth.–Met. mix) | Met. (Eth.–Met. mix) |
|---|---|---|---|---|---|---|---|
| Number of tests | 15 | 34 | 18 | 8 | 8 | 5 | 5 |
| Mixed sample size | 0 | 0 | 0 | 25 | 25 | 29 | 29 |
| Unexpanded RMSE (GA) | 0.0889 | 7.2484 | 0.1110 | 5.748 | 0.8527 | 2.4868 | 1.0435 |
| Expanded RMSE (GA) | \ | \ | \ | 2.3989 | 0.8407 | 2.5609 | 0.9546 |
| Unexpanded MAPE (GA) | 0.0063 | 0.0212 | 0.0078 | 0.0189 | 0.0826 | 0.0151 | 0.1264 |
| Expanded MAPE (GA) | \ | \ | \ | 0.0149 | 0.0614 | 0.021 | 0.0846 |
| Unexpanded RMSE (PSO) | 0.0937 | 6.6633 | 0.0505 | 3.0731 | 1.4314 | 1.4958 | 0.7338 |
| Expanded RMSE (PSO) | \ | \ | \ | 3.4044 | 1.1403 | 1.6688 | 0.5138 |
| Unexpanded MAPE (PSO) | 0.0067 | 0.0217 | 0.0029 | 0.0079 | 0.1183 | 0.0135 | 0.0916 |
| Expanded MAPE (PSO) | \ | \ | \ | 0.0078 | 0.0631 | 0.0118 | 0.0448 |
Table 13. The estimation of mixed gas concentration based on the parameter-optimal S_SMOTE-ELM model.
| Gas composition | CO | Eth. | Met. | Eth. (Eth.–CO mix) | CO (Eth.–CO mix) | Eth. (Eth.–Met. mix) | Met. (Eth.–Met. mix) |
|---|---|---|---|---|---|---|---|
| Number of tests | 15 | 34 | 18 | 8 | 8 | 5 | 5 |
| Mixed sample size | 0 | 0 | 0 | 25 | 25 | 29 | 29 |
| Unexpanded RMSE (GA) | 0.0774 | 8.2827 | 0.1082 | 2.9036 | 1.0441 | 2.8884 | 1.0022 |
| Expanded RMSE (GA) | \ | \ | \ | 0.0785 | 0.0176 | 2.5923 | 0.4906 |
| Unexpanded MAPE (GA) | 0.005 | 0.0254 | 0.0085 | 0.0083 | 0.0926 | 0.0208 | 0.1202 |
| Expanded MAPE (GA) | \ | \ | \ | 0.0001 | 0.0008 | 0.0188 | 0.0465 |
| Unexpanded RMSE (PSO) | 0.0902 | 6.325 | 0.0059 | 1.6848 | 1.0424 | 2.2192 | 0.7259 |
| Expanded RMSE (PSO) | \ | \ | \ | 0.1514 | 0.18 | 2.1365 | 0.3481 |
| Unexpanded MAPE (PSO) | 0.0074 | 0.0189 | 0.0852 | 0.0051 | 0.1038 | 0.0118 | 0.087 |
| Expanded MAPE (PSO) | \ | \ | \ | 0.0003 | 0.0112 | 0.0128 | 0.0329 |

4.5.3. The Quantitative Analysis of Mixed Gases Based on MLSSVR Method

When the MLSSVR method is used for the quantitative analysis of the mixed gas, the PSO and GA methods are used to optimize the parameters gamma, lambda and p in MLSSVR; the kernel function is Gaussian. The results of the quantitative analysis of mixed gases based on the MLSSVR method are shown in Table 14, Table 15 and Table 16.
Table 14. The estimation of mixed gas concentration based on the parameter-optimal SMOTE-MLSSVR model.
| Gas composition | CO | Eth. | Met. | Eth. (Eth.–CO mix) | CO (Eth.–CO mix) | Eth. (Eth.–Met. mix) | Met. (Eth.–Met. mix) |
|---|---|---|---|---|---|---|---|
| Number of tests | 15 | 34 | 18 | 8 | 8 | 5 | 5 |
| Mixed sample size | 0 | 0 | 0 | 25 | 25 | 29 | 29 |
| Gamma (PSO) | 20.3199 | 132.619 | 17.8837 | 15.3982 | 15.3982 | 71.6237 | 71.6237 |
| Lambda (PSO) | 0.01 | 0.01 | 0.01 | 0.1 | 0.1 | 0.1 | 0.1 |
| P (PSO) | 0.051 | 0.2146 | 0.0399 | 0.7856 | 0.7856 | 0.0364 | 0.0364 |
| Unexpanded RMSE | 0.2796 | 11.1042 | 0.2467 | 33.9165 | 1.2903 | 37.5702 | 4.4949 |
| Expanded RMSE | \ | \ | \ | 14.2842 | 0.6261 | 10.1838 | 0.6122 |
| Unexpanded MAPE | 0.019 | 0.0268 | 0.0146 | 0.0941 | 0.1074 | 0.2846 | 0.5559 |
| Expanded MAPE | \ | \ | \ | 0.036 | 0.0359 | 0.054 | 0.0681 |
| Gamma (GA) | 99.9857 | 45.3705 | 99.4991 | 15.3982 | 15.3982 | 97.2574 | 97.2574 |
| Lambda (GA) | 0.0731 | 69.4503 | 0.1449 | 0.1 | 0.1 | 0.0107 | 0.0107 |
| P (GA) | 0.0547 | 0.1193 | 0.0395 | 0.7856 | 0.7856 | 0.0194 | 0.0194 |
| Unexpanded RMSE | 0.2784 | 16.5005 | 0.2468 | 33.9363 | 1.2845 | 17.9652 | 0.9144 |
| Expanded RMSE | \ | \ | \ | 13.6607 | 1.3361 | 6.6101 | 0.3365 |
| Unexpanded MAPE | 0.0189 | 0.0559 | 0.0145 | 0.0943 | 0.1066 | 0.1362 | 0.1031 |
| Expanded MAPE | \ | \ | \ | 0.0344 | 0.0761 | 0.0385 | 0.0369 |
Table 15. The estimation of mixed gas concentration based on the parameter-optimal ADASYN-MLSSVR model.
| Gas composition | CO | Eth. | Met. | Eth. (Eth.–CO mix) | CO (Eth.–CO mix) | Eth. (Eth.–Met. mix) | Met. (Eth.–Met. mix) |
|---|---|---|---|---|---|---|---|
| Number of tests | 15 | 34 | 18 | 8 | 8 | 5 | 5 |
| Mixed sample size | 0 | 0 | 0 | 25 | 25 | 29 | 29 |
| Gamma (PSO) | 20.3199 | 132.619 | 17.8837 | 1.4685 | 1.4685 | 100 | 100 |
| Lambda (PSO) | 0.01 | 0.01 | 0.01 | 0.1 | 0.1 | 0.1 | 0.1 |
| P (PSO) | 0.051 | 0.2146 | 0.0399 | 12.2774 | 12.2774 | 0.1203 | 0.1203 |
| Unexpanded MAPE | 0.019 | 0.0268 | 0.0146 | 0.0941 | 0.1074 | 0.1898 | 0.4132 |
| Expanded MAPE | \ | \ | \ | 0.1181 | 0.0785 | 0.0781 | 0.0574 |
| Unexpanded RMSE | 0.2796 | 11.1042 | 0.2467 | 33.9165 | 1.2903 | 27.1887 | 3.3801 |
| Expanded RMSE | \ | \ | \ | 42.4597 | 1.2937 | 11.8 | 0.4922 |
| Gamma (GA) | 99.9857 | 45.3705 | 99.4991 | 15.3982 | 15.3982 | 96.3928 | 96.3928 |
| Lambda (GA) | 0.0731 | 69.4503 | 0.1449 | 0.1 | 0.1 | 9.5367 | 9.5367 |
| P (GA) | 0.0547 | 0.1193 | 0.0395 | 0.7856 | 0.7856 | 0.0188 | 0.0188 |
| Unexpanded MAPE | 0.0189 | 0.0559 | 0.0145 | 0.0943 | 0.1065 | 0.1363 | 0.0369 |
| Expanded MAPE | \ | \ | \ | 0.1188 | 0.1814 | 0.0373 | 0.033 |
| Unexpanded RMSE | 0.2784 | 16.5005 | 0.2468 | 33.9367 | 1.2843 | 17.9737 | 0.3623 |
| Expanded RMSE | \ | \ | \ | 42.7694 | 1.8754 | 4.8422 | 0.3491 |
Table 16. The estimation of mixed gas concentration based on the parameter-optimal S_SMOTE-MLSSVR model.
| Gas composition | CO | Eth. | Met. | Eth. (Eth.–CO mix) | CO (Eth.–CO mix) | Eth. (Eth.–Met. mix) | Met. (Eth.–Met. mix) |
|---|---|---|---|---|---|---|---|
| Number of tests | 15 | 34 | 18 | 8 | 8 | 5 | 5 |
| Mixed sample size | 0 | 0 | 0 | 25 | 25 | 29 | 29 |
| Gamma (PSO) | 20.3199 | 132.619 | 17.8837 | 100 | 100 | 100 | 100 |
| Lambda (PSO) | 0.01 | 0.01 | 0.01 | 0.1 | 0.1 | 0.1 | 0.1 |
| P (PSO) | 0.051 | 0.2146 | 0.0399 | 3.06292 | 3.06292 | 0.1778 | 0.1778 |
| Unexpanded RMSE | 0.2796 | 11.1042 | 0.2467 | 33.9141 | 1.2739 | 27.1887 | 3.3801 |
| Expanded RMSE | \ | \ | \ | 3.06292 | 0.0575 | 8.9217 | 0.7676 |
| Unexpanded MAPE | 0.019 | 0.0268 | 0.0146 | 0.0947 | 0.1063 | 0.1898 | 0.4132 |
| Expanded MAPE | \ | \ | \ | 0.0076 | 0.0032 | 0.0789 | 0.0949 |
| Gamma (GA) | 99.9857 | 45.3705 | 99.4991 | 99.8776 | 99.8776 | 95.6688 | 95.6688 |
| Lambda (GA) | 0.0731 | 69.4503 | 0.1449 | 0.0011 | 0.0011 | 0.0203 | 0.0203 |
| P (GA) | 0.0547 | 0.1193 | 0.0395 | 0.5661 | 0.5661 | 0.0869 | 0.0869 |
| Unexpanded MAPE | 0.0189 | 0.0559 | 0.0145 | 0.0943 | 0.1067 | 0.1362 | 0.0671 |
| Expanded MAPE | \ | \ | \ | 0.0044 | 0.0009 | 0.08 | 0.0369 |
| Unexpanded RMSE | 0.2784 | 16.5005 | 0.2468 | 33.9331 | 1.2847 | 17.9612 | 0.6 |
| Expanded RMSE | \ | \ | \ | 2.2973 | 0.0238 | 9.8236 | 0.3366 |

4.5.4. The Comparison of Performance Based on Different Quantitative Analysis Methods

It can be seen from the tables above that the MAPE and RMSE after sample expansion are generally lower than before expansion, indicating that the sample expansion methods are effective for estimating the concentration of the mixed gas. To highlight the positive effect of the sample expansion methods on the various regression methods, the difference between the MAPE (or RMSE) before and after sample expansion is used; the larger this difference, the better the effect of the sample expansion method, as shown in Figure 14 and Figure 15. In Figure 14, under the MAPE criterion, the SMOTE sample expansion method performs best on PSO-SVR, and the S_SMOTE sample expansion method performs best on PSO-ELM and GA-MLSSVR. In Figure 15, under the RMSE criterion, the SMOTE sample expansion method performs best on PSO-SVR, and the S_SMOTE sample expansion method performs best on PSO-ELM and PSO-MLSSVR.
As shown in Figure 14 and Figure 15 above, the same regression method performs differently under different sample expansion methods. To find the sample expansion method most conducive to concentration estimation, the expansion differences of the two optimization algorithms under the same regression method are statistically averaged, as shown in Table 17 and Table 18. For example, the SVR expansion difference in Table 17 is the statistical average of PSO-SVR and GA-SVR under the same sample expansion, and the expansion mean difference in the table is the statistical average of the expansion differences of SVR, ELM and MLSSVR; Table 18 is similar. As can be seen from Figure 16a, under the MAPE criterion, the expansion mean difference of S_SMOTE is 5.85% larger than that of ADASYN and 9.43% larger than that of SMOTE, and the expansion mean difference of ADASYN is 3.58% larger than that of SMOTE, so S_SMOTE has the best overall performance. As can be seen from Figure 16b, under the RMSE criterion, the expansion mean difference of S_SMOTE is 2.6% larger than that of ADASYN and 8.67% larger than that of SMOTE, while ADASYN's expansion mean difference is 6.07% larger than SMOTE's, so S_SMOTE again has the best overall performance. Under both evaluation criteria, therefore, S_SMOTE is the best sample expansion method.
GA-SMOTE-SVR expansion difference = MAPE(GA-SVR) − MAPE(GA-SMOTE-SVR)
SVR expansion difference = (GA-SMOTE-SVR expansion difference + PSO-SMOTE-SVR expansion difference) / 2
SMOTE expansion difference = (SVR expansion difference + ELM expansion difference + MLSSVR expansion difference) / 3
To more clearly highlight the positive effect of the sample expansion methods on the various regression methods, the difference between the MAPE (or RMSE) before and after sample expansion is again used; the larger this difference, the better the performance of the regression method under the same sample expansion, as shown in Figure 17 and Figure 18. In Figure 17, under the MAPE criterion, the PSO-MLSSVR regression method performs best on the SMOTE and ADASYN sample expansion methods, and GA-MLSSVR performs best on the S_SMOTE method. In Figure 18, under the RMSE criterion, the PSO-MLSSVR regression method performs best on the SMOTE and S_SMOTE methods, and the PSO-SVR method performs best on the ADASYN method.
As shown in Figure 17 and Figure 18 above, different regression methods perform differently under the same sample expansion method. To find the best regression method under sample expansion, the expansion differences of the two optimization algorithms under the same sample expansion method are statistically averaged, as shown in Table 19 and Table 20. For example, the SVR expansion difference in Table 19 is the statistical average of PSO-SVR and GA-SVR under the same regression method, and the expansion mean difference is the statistical average of the expansion differences of SMOTE, ADASYN and S_SMOTE; Table 20 is similar. It can be seen from Figure 19a that, under the MAPE criterion, MLSSVR's expansion mean difference is 26.04% larger than SVR's and 33.09% larger than ELM's, and SVR's expansion mean difference is 7.05% larger than ELM's, so MLSSVR has the best overall performance. It can be seen from Figure 19b that, under the RMSE criterion, SVR's expansion mean difference is 12.53% larger than MLSSVR's and 12.53% larger than ELM's, and MLSSVR's expansion mean difference is 11.84% larger than ELM's, so SVR has the best overall performance. Overall, however, the MLSSVR regression method is slightly better than the SVR regression method under sample expansion when both evaluation criteria are considered.
GA-SMOTE-SVR expansion difference = MAPE(GA-SVR) − MAPE(GA-SMOTE-SVR)
SMOTE-SVR expansion difference = (GA-SMOTE-SVR expansion difference + PSO-SMOTE-SVR expansion difference) / 2
SVR expansion difference = (SMOTE-SVR expansion difference + ADASYN-SVR expansion difference + S_SMOTE-SVR expansion difference) / 3
From the above concentration estimation results, it can be seen that the sample expansion method and regression method proposed in this paper are effective for estimating the concentration of the mixed gas. Figure 20 shows the estimation of mixed gas concentration based on the PSO-S_SMOTE-MLSSVR method, in which the S_SMOTE method is used to synthesize artificial samples. The concentration estimation method based on sample expansion therefore performs better.

5. Conclusions

This work proposes a method for component identification and concentration estimation of mixed gases based on sample expansion, aimed at alleviating the problems of extremely unbalanced gas sample numbers and too few samples. For qualitative identification of mixed-gas components, the ADASYN-ELM gas identification method is proposed. In this method, KPCA is used to extract the characteristic values of the sensor-array signal and obtain features of the gas-mixture components, which addresses the nonlinearity of the MOS gas sensors' response to gas. The ADASYN method is then used to expand the samples to address the problem of too few samples. The PSO and GA optimization algorithms are used to optimize the parameters of the ELM classification method, resolving the difficulty of determining the model's many parameters, and the optimal-parameter classification model is obtained for the qualitative analysis of the mixed gas. For quantitative estimation of mixed-gas concentrations, the S-SMOTE-MLSSVR concentration estimation method is put forward: the S-SMOTE method is first used to expand the samples, the parameters of the MLSSVR regression method are optimized by the PSO and GA algorithms, and the regression model with the best parameters is obtained. To verify the effectiveness of the proposed methods, a public data set was used for experimental validation. The results show that, for the classification part, the accuracy after sample expansion is generally higher than before expansion; for the quantitative estimation part, the MAPE and RMSE after sample expansion are generally lower than before expansion. This indicates that the sample expansion method has a positive effect on the classification and concentration estimation of mixed gases with extremely unbalanced and too few samples.

Author Contributions

Conceptualization, J.S.; methodology, W.X.; software, W.X.; validation, J.S.; formal analysis, Y.L.; investigation, M.Z.; resources, Y.L.; data curation, Y.G.; writing—original draft preparation, J.S.; writing—review and editing, J.S.; visualization, Y.G.; supervision, M.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Author Mingjun Zhou was employed by the Fourth Nineteenth Research Institute of China Electronics Technology Group Corporation. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Figure 1. The workflow of the electronic nose system.
Figure 2. The diagram of Safe-Level-SMOTE synthesis of minority class samples.
Figure 3. The mixture gas identification and concentration detection method based on sample expansion.
Figure 4. Experimental system diagram.
Figure 5. The five-fold hierarchical cross-validation data allocation.
Figure 6. The contribution rate arranged in the histogram.
Figure 7. The sample distribution maps of different methods: (a) the distribution of various data sets with the first three principal components; (b) the synthetic sample with the S-SMOTE method.
Figure 12. The comparison of performance based on different method classifications.
Figure 13. The comparison of performance based on different sample augmentation methods.
Figure 14. The performance of different sample expansion methods based on MAPE evaluation criteria.
Figure 15. The performance of different sample expansion methods based on RMSE evaluation criteria.
Figure 16. The comparison of MAPE and RMSE performance of different sample expansion methods.
Figure 17. The performance of different classification methods based on MAPE evaluation criteria.
Figure 18. The performance of different classification methods based on RMSE evaluation criteria.
Figure 19. The comparison of MAPE and RMSE performance of different classification methods.
Figure 20. The sample distribution maps of different methods.
Table 1. The data set introduction.

Label  Gas                Sample Size
1      CO                 30
2      Ethylene           68
3      CO–Ethylene        17
4      Methane–Ethylene   10
5      Methane            36

Table 2. The expansion amounts of each gas based on different sample expansion methods for qualitative analysis.

Label  Gas            SMOTE  ADASYN  B_SMOTE  S_SMOTE  CSL_SMOTE
1      CO             0      0       0        0        13
2      Eth.           0      0       0        0        0
3      Met.           0      0       0        0        23
4      CO and Eth.    25     23      22       25       25
5      Met. and Eth.  29     28      28       29       29

Table 3. The expansion amounts of each gas based on different sample expansion methods for concentration analysis.

Label  Gas            SMOTE  ADASYN  S_SMOTE
1      CO             0      0       0
2      Eth.           0      0       0
3      Met.           0      0       0
4      CO and Eth.    25     27      30
5      Met. and Eth.  29     34      35
Table 17. The comparison of MAPE performance of different sample expansion methods.

Methods                         SMOTE   ADASYN  S_SMOTE
expansion difference (SVR)      0.4762  0.4614  0.3960
expansion difference (ELM)      0.2448  0.1508  0.3906
expansion difference (MLSSVR)   0.0403  0.2566  0.2578
expansion mean difference       0.2538  0.2896  0.3481

Table 18. The comparison of RMSE performance of different sample expansion methods.

Methods                         SMOTE   ADASYN  S_SMOTE
expansion difference (SVR)      0.5573  0.4860  0.4776
expansion difference (ELM)      0.2635  0.1208  0.4586
expansion difference (MLSSVR)   0.4054  0.0236  0.3681
expansion mean difference       0.4088  0.3481  0.4348

Table 19. The comparison of MAPE performance of different classification methods.

Methods                         SVR     ELM     MLSSVR
expansion difference (SMOTE)    0.3774  0.1580  0.4722
expansion difference (ADASYN)   0.0235  0.0116  0.2735
expansion difference (S_SMOTE)  0.0473  0.0672  0.4836
expansion mean difference       0.1494  0.0789  0.4098

Table 20. The comparison of RMSE performance of different classification methods.

Methods                         SVR     ELM     MLSSVR
expansion difference (SMOTE)    0.3841  0.1055  0.3736
expansion difference (ADASYN)   0.4052  0.0638  0.0426
expansion difference (S_SMOTE)  0.3162  0.2051  0.3134
expansion mean difference       0.3685  0.1248  0.2432

Lin, Y.; Shi, J.; Xia, W.; Zhou, M.; Gao, Y. A Mixed Gas Component Identification and Concentration Estimation Method for Unbalanced Gas Sensor Array Samples. Sensors 2025, 25, 6254. https://doi.org/10.3390/s25196254
