Article

An Analog Sensor Signal Processing Method Susceptible to Anthropogenic Noise Based on Improved Adaptive Singular Spectrum Analysis

1 National Key Laboratory for Electronic Measurement Technology, North University of China, Taiyuan 030051, China
2 Taiyuan Satellite Launch Center TSLC, Taiyuan 030027, China
3 State Key Laboratory of Geodesy and Earth's Dynamics, Innovation Academy for Precision Measurement Science and Technology, Chinese Academy of Sciences, Wuhan 430077, China
* Author to whom correspondence should be addressed.
Sensors 2025, 25(5), 1598; https://doi.org/10.3390/s25051598
Submission received: 9 February 2025 / Revised: 24 February 2025 / Accepted: 3 March 2025 / Published: 5 March 2025
(This article belongs to the Special Issue Signal Processing and Machine Learning for Sensor Systems)

Abstract

Sensor measurements are often affected by complex ambient noise, which complicates signal processing tasks. The singular spectrum analysis (SSA) algorithm, while widely used, faces challenges such as the difficulty of determining the number of decomposition layers, requiring iterative adjustments that reduce precision and increase processing time. This paper proposes an improved adaptive singular spectrum analysis (ASSA) algorithm that integrates a deep residual network (Deep Res-Net) for automatic recognition. A comprehensive interference signal database was constructed to train the Deep Res-Net, with common interferences reproduced by combining different signals, giving the network greater frequency resolution. Meanwhile, a novel correlation detection reconstruction method based on a clustering algorithm for adaptive signal classification was developed to suppress background noise and extract meaningful signals. ASSA addresses the challenge of determining the optimal number of decomposition layers, eliminating manual parameter tuning and enhancing the measurement efficiency of sensor systems. In the experiments, magnetotelluric (MT) observation data with complex interferences were used to demonstrate the performance of ASSA, and promising results with an RMSE of 0.2 were obtained. The experiments also showed that the accuracy of ASSA was improved by 14% compared with other signal extraction algorithms, indicating that ASSA can also achieve excellent results in other data processing fields.

Graphical Abstract

1. Introduction

In the field of sensor measurement, analog signal sensors have important and wide application scenarios [1]. However, in the actual measurement process, the sensor signal can be weak and highly susceptible to anthropogenic interference, such as power line interference, switch interference, drift current interference, and vehicle noise [2]. Moreover, because analog sensor data are typically non-linear and non-stationary, they cannot be processed well by methods based on linearity and stationarity assumptions [3,4].
In the realm of signal processing, mitigating noise interference and extracting pertinent components has become a research hotspot. In recent years, singular spectrum analysis (SSA) has gained attention as an effective method for dealing with nonlinear signals [5], as it requires neither parametric models nor stationarity assumptions [6]. Nevertheless, traditional SSA encounters challenges in accurately discerning useful components within intricate signals, as it struggles to distinguish between the singular values of useful and noisy signals. This paper presents an improved adaptive singular spectrum analysis (ASSA) algorithm to improve the effectiveness of signal recognition and extraction, especially when processing signals embedded in complex background noise.
Conventional techniques for processing non-stationary signals predominantly rely on mathematical and statistical methodologies. Researchers have explored many ways to address this challenge, including variational mode decomposition [7], adaptive chirp mode decomposition, and neural networks [8]. In the quest to efficiently mitigate interferences, early efforts often resorted to the least squares method for impedance estimation in MT analysis [9], which proves effective in attenuating Gaussian noise [10,11]. Sharma et al. introduced SSA into the automatic recognition of electrocardiogram (ECG) signals [12]. Wang et al. combined multichannel SSA with affinity propagation to reduce the interference of ranging sensors [13]. All of the above methods have limitations: they only guarantee good performance within certain signal intervals and cannot achieve satisfactory results in global signal processing [14].
Singular spectrum analysis (SSA) is a powerful technique for global time series analysis [15], which is widely applied in the realms of time series analysis [16], classification problems [17], fault recognition [18,19] and non-stationary signal decomposition [20]. Based on the idea of phase space reconstruction, SSA can extract different components of a complex signal using singular value decomposition (SVD) [21]. It can effectively separate nonstationary signals with large temporal differences when the window length of the trajectory matrix and the number of decomposition layers are properly set. Harmouche et al. combined SSA with an unsupervised classification algorithm and proposed sliding SSA [22], which can effectively extract two sinusoidal signals with similar frequencies. Zhang et al. introduced SSA for feature signal extraction when dealing with electric current magnetic interference in magnetic sensors [23]. Lin et al. employed SSA to estimate the modal parameters of engineering structural systems [24].
However, deficient or excessive decomposition will occur under improper prior and posterior parameters for SSA. Singular spectrum decomposition (SSD) is a newer adaptive method for decomposing non-linear and non-stationary time series into narrow-band components [25]. SSD was developed from SSA; it adaptively selects the embedding dimension and decomposes the original signal into several singular spectral components from high frequency to low frequency. Xu et al. utilized improved singular spectrum decomposition (SSD) and a singular-value energy autocorrelation coefficient spectrum to extract rolling bearing fault features [26]. SSD is highly adaptable, but it cannot decompose similar components [27]. In the above methods, the singular values still need to be adjusted manually, the time spent on tuning cannot be quantified, and the results may be sub-optimal. In addition, the resolution of similar frequencies by the above methods still needs to be improved.
To effectively decompose interference and preserve critical MT components, this paper proposes an enhanced SSA method incorporating automatic aggregation classification. A comprehensive noise database, encompassing typical MT noise types, is constructed to train a Deep Res-Net, endowing the model with superior frequency resolution capabilities. Building on this foundation, an intelligent signal classification and recognition framework is developed using the Deep Res-Net to accurately identify analog sensor signals that are susceptible to anthropogenic interference. The exceptional frequency resolution of the Deep Res-Net significantly enhances the algorithm's overall adaptability. Furthermore, a novel correlation detection signal reconstruction method based on K-means clustering is introduced. By employing a correlation matrix model to constrain clustering, this approach eliminates manual parameter tuning, ensuring globally optimal extraction results and improving the accuracy of target signal extraction. Simulation experiments demonstrate that ASSA achieves superior frequency resolution, with a more than 14% increase in accuracy compared to conventional signal extraction algorithms. Validation with MT observation data reveals that ASSA achieves a root mean square error (RMSE) of 0.2, demonstrating its high precision in processing analog sensor signals.
The rest of this paper is organized as follows: Section 2 introduces the methodology, Section 3 describes the experiments and data analysis, and Section 4 gives the conclusions.

2. Methodology

2.1. Basic Theory of Singular Spectrum Analysis and K-Means Clustering Algorithm

SSA theory was proposed in the second half of the 20th century, and it is a rapidly developing method of time series analysis [28]. The SSA algorithm comprises two primary stages. The initial phase, termed decomposition, involves the transformation of a time series into a trajectory matrix, often referred to as a Hankel matrix. Subsequently, singular value decomposition (SVD) is employed on the trajectory matrix to effect a decomposition into elementary rank-one matrix components. The subsequent stage, termed reconstruction, organizes the matrix components into groups and reverts the grouped matrix decomposition to a decomposition of the original series via a process known as diagonal averaging. Figure 1 shows the process.
In the first stage, the trajectory matrix is formed by taking equal-length subsequences from the input time series x, where the length of each subsequence is determined by the window length L, and it is usual to take $L \le N/2$. The resulting trajectory matrix is shown below.
$$X = [X_1, X_2, \ldots, X_K] = (x_{ij})_{i,j=1}^{L,K} = \begin{bmatrix} x_1 & x_2 & x_3 & \cdots & x_K \\ x_2 & x_3 & x_4 & \cdots & x_{K+1} \\ x_3 & x_4 & x_5 & \cdots & x_{K+2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_L & x_{L+1} & x_{L+2} & \cdots & x_N \end{bmatrix}$$
Then, the singular value decomposition of X is performed using Equation (1) to obtain singular values.
$$X = U \Sigma V^{T} \tag{1}$$
Set $S = XX^{T}$, and denote the eigenvalues of S by $\lambda_1, \lambda_2, \ldots, \lambda_L$. In Equation (1), $U \in \mathbb{R}^{L \times L}$, $\Sigma \in \mathbb{R}^{L \times K}$, and $V \in \mathbb{R}^{K \times K}$; $\Sigma$ is the singular value matrix, which is a diagonal matrix, and U and V are the left and right singular vector matrices, composed of the eigenvectors of $XX^{T}$ and $X^{T}X$, respectively.
Equation (1) can also be expressed as vectors:
$$X = \sum_{i=1}^{r} \sigma_i u_i v_i^{T} = X_1 + X_2 + \cdots + X_r \tag{2}$$
where r is the rank of the matrix X, which is also the number of non-zero singular values, and $\sigma_i$ denotes the singular values, sorted in descending order in $\Sigma$. Therefore, the submatrices can be expressed as $X_i = \sigma_i u_i v_i^{T} = \sqrt{\lambda_i}\, u_i v_i^{T}$.
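As a concrete illustration of the decomposition stage, the following Python sketch embeds a series into a trajectory matrix and applies SVD; the test signal, sampling rate, and window length are illustrative assumptions rather than data from this paper.

```python
import numpy as np

def embed_trajectory(x, L):
    """Embed a 1-D series of length N into an L x K Hankel trajectory matrix (K = N - L + 1)."""
    K = len(x) - L + 1
    return np.column_stack([x[i:i + L] for i in range(K)])

# Illustrative two-tone test signal with mild Gaussian noise
t = np.arange(0, 2.0, 1 / 500.0)
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 25 * t) + 0.05 * np.random.randn(t.size)

L = t.size // 4                                    # window length, kept below N/2
X = embed_trajectory(x, L)                         # L x K trajectory matrix
U, s, Vt = np.linalg.svd(X, full_matrices=False)   # Equation (1): X = U * Sigma * V^T

# Elementary rank-one matrix of Equation (2): X_1 = sigma_1 * u_1 * v_1^T
X_1 = s[0] * np.outer(U[:, 0], Vt[0])
print(X.shape, s[:4])
```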
The second stage, or grouping part, starts with eigentriple grouping, i.e., grouping the collections $(\sqrt{\lambda_i}, u_i, v_i)$, with the aim of dividing X into m linearly independent submatrices. The procedure is as follows:
Firstly, the index set $\{1, 2, \ldots, r\}$ is divided into m disjoint subsets $I_1, I_2, \ldots, I_m$.
Let $I = \{i_1, i_2, \ldots, i_p\}$, so that the resultant matrix $X_I$ corresponding to the group I is defined as $X_I = X_{i_1} + X_{i_2} + \cdots + X_{i_p}$.
Then, the resultant matrices can be calculated from $I_1$ to $I_m$.
Consequently, Equation (2) facilitates the decomposition, which can be expressed as follows:
$$X = X_{I_1} + X_{I_2} + X_{I_3} + \cdots + X_{I_m} \tag{3}$$
The last step is diagonal averaging, which transforms each $X_{I_j}$ into a new series of length N. Let Y be the matrix $X_{I_j}$ with elements $y_{ij}$, $1 \le i \le L$, $1 \le j \le K$, and $N = L + K - 1$. Set $L^{*} = \min(L, K)$ and $K^{*} = \max(L, K)$; $y_{ij}^{*} = y_{ij}$ if $L < K$ and $y_{ij}^{*} = y_{ji}$ otherwise. Applying the diagonal averaging of Equation (4), the matrix Y is transformed into the new series $y_1, y_2, \ldots, y_N$.
$$y_k = \begin{cases} \dfrac{1}{k} \displaystyle\sum_{m=1}^{k} y_{m,\,k-m+1}^{*}, & 1 \le k < L^{*} \\[6pt] \dfrac{1}{L^{*}} \displaystyle\sum_{m=1}^{L^{*}} y_{m,\,k-m+1}^{*}, & L^{*} \le k \le K^{*} \\[6pt] \dfrac{1}{N-k+1} \displaystyle\sum_{m=k-K^{*}+1}^{N-K^{*}+1} y_{m,\,k-m+1}^{*}, & K^{*} < k \le N \end{cases} \tag{4}$$
After applying diagonal averaging to the resultant matrices, the reconstructed sequences $\tilde{x}^{(k)} = (\tilde{x}_1^{(k)}, \tilde{x}_2^{(k)}, \ldots, \tilde{x}_N^{(k)})$ are obtained. Thus, the initial series $x_1, \ldots, x_N$ can be represented by the sum of the m reconstructed sequences.
$$x_n = \sum_{k=1}^{m} \tilde{x}_n^{(k)}, \quad n = 1, 2, \ldots, N \tag{5}$$
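The diagonal averaging of Equation (4) is simply an average over each anti-diagonal of the matrix. A minimal sketch, written as a plain double loop rather than the piecewise closed form above, is shown below; the toy series only checks that hankelization recovers a series from its own trajectory matrix.

```python
import numpy as np

def diagonal_average(Y):
    """Average the anti-diagonals of an L x K matrix into a series of length N = L + K - 1 (Equation (4))."""
    L, K = Y.shape
    series = np.zeros(L + K - 1)
    counts = np.zeros(L + K - 1)
    for i in range(L):
        for j in range(K):
            series[i + j] += Y[i, j]
            counts[i + j] += 1
    return series / counts

# Toy check: the trajectory matrix of a short series is hankelized back to that series
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
Y = np.column_stack([x[i:i + 3] for i in range(3)])   # 3 x 3 trajectory matrix of x
print(diagonal_average(Y))                            # -> [1. 2. 3. 4. 5.]
```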
In light of the above, the effect of SSA is closely related to the number of decomposition layers and the window size. However, these parameters need to be tested constantly in the face of complex signals, which results in the failure of SSA to effectively separate the frequency components of complex signals.
Therefore, the clustering algorithm is introduced to provide the above parameters adaptively. When SVD is finished, K-means clustering is added to cluster the singular values, so that components with a similar frequency are effectively extracted.
As a widely used unsupervised learning algorithm, K-means can cluster unlabeled input data into different groups. Its principle is to partition the feature matrix X of a set of N samples into K disjoint clusters. The mean of all the data in a cluster is often called the "centroid" of the cluster.
In the K-means algorithm, the number of clusters, K, is a hyperparameter that must be specified by the user. The core task of K-means is to find K optimal centroids and assign each data point to the cluster represented by its nearest centroid. In general, after the hyperparameter K is set, K centroids are randomly initialized; in practice, K points from the sample are usually selected as the initial centroids. Equation (6) defines the distance between a sample $x_i$ and a centroid $\mu_k$.
$$d(x_i, \mu_k) = \lVert x_i - \mu_k \rVert^{2} \tag{6}$$
For each sample $x_i$, the value of k that minimizes $d(x_i, \mu_k)$ is then found, which determines the centroid of the cluster to which $x_i$ belongs. The quality of the clustering is evaluated by summing the distances between each sample and its assigned centroid, which can be expressed by Equation (7).
$$J(c_1, c_2, \ldots, c_m, \mu_1, \mu_2, \ldots, \mu_k) = \frac{1}{m} \sum_{i=1}^{m} \lVert x_i - \mu_{c_i} \rVert^{2} \tag{7}$$
where $c_i$ represents the index k of the centroid of the cluster to which the i-th sample belongs.
By iteratively reassigning samples and updating the centroids, the value of J in Equation (7) is minimized, thus completing the clustering process.
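The sketch below shows how the singular values produced by the decomposition can be grouped with scikit-learn's KMeans; the singular values and the choice K = 3 are illustrative assumptions (in ASSA, K is supplied by the Deep Res-Net described in Section 2.2).

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical singular values: two strong components, two mid components, and noise-level values
singular_values = np.array([9.8, 9.6, 4.1, 4.0, 0.30, 0.28, 0.25, 0.22])

K = 3
km = KMeans(n_clusters=K, n_init=10, random_state=10).fit(singular_values.reshape(-1, 1))
for k in range(K):
    members = np.where(km.labels_ == k)[0]
    print(f"cluster {k}: singular value indices {members.tolist()}")
```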
The flow chart of SSA combined with the K-means clustering algorithm is shown in Algorithm 1. In this work, the hyperparameter k of K-means clustering was obtained from a neural network described in the next part, the random state was set to 10, and other hyperparameters were set to default values.
Algorithm 1. Process of SSA combined with K-means clustering algorithm.
Input: Pending sequence Y(t)
1: Start: Set window length L; if L > N/2, set L = N/2 − 1; determine the number of clusters K;
2: Embed Y(t) into trajectory matrix X;
3: Perform singular value decomposition using Equation (1), and sort the singular values in descending order;
4: Perform K-means clustering, choose K singular values μ 1 , μ 2 , , μ k as clustering centroids;
5:  When new centroid is different from the original one;
6:    for i = 1 to m;
7:      Calculate the distance between the i-th sample and each centroid according to Equation (6), take the centroid with the smallest $d(x_i, \mu_k)$, and denote its index as $c_i$;
8:    end;
9:    for i = 1 to K;
10:         Calculate the mean of the coordinate of all sample points belonging to the current centroid, and take the mean value as the new centroid.
11:  end;
12: end;
13: Screen for valid classifications based on correlations, selecting clusters with correlations greater than 0.99;
14: Filter the eigentriples corresponding to the singular values of each clustered cluster based on indexing;
15: Reconstruct the signal components with different frequencies according to Equation (4);
16: end.
Output: Reconstructed signals

2.2. Deep Residual Neural Networks

In order to extract meaningful components from the original signal, the K-means clustering algorithm is employed within the decomposition phase of SSA. However, determining the appropriate value of K is often challenging in practical applications. Thus, a deep residual neural network (Deep Res-Net) was applied to enhance performance, which is basically a convolutional neural network (CNN). In this work, it was optimized using residual subnets with soft thresholds.
CNNs are best known for computer vision, but they can also be applied to time series processing in the form of one-dimensional CNNs (1-D CNNs). The difference between a 1-D CNN and a 2-D CNN is the dimension of the convolution kernel. While 2D-CNNs have achieved remarkable recognition accuracy across various applications, they often necessitate the transformation of 1D data into 2D formats and typically require extensive datasets for training to mitigate the risk of overfitting. In contrast, 1D-CNNs offer a more compact architecture, enabling effective training on limited datasets of 1D signals. Furthermore, 1D-CNNs can be directly applied to raw signals without the need for extensive pre-processing or post-processing [29,30,31]. Given these advantages, 1D-CNNs were selected in this work to achieve an optimal balance between computational efficiency and accuracy. Figure 2 shows the basic 1-D CNN procedure.
In this work, ordinary CNN was optimized to achieve a better performance in 1-D signal recognition. The convolutional layer stands as the pivotal component distinguishing a CNN from traditional fully connected (FC) neural networks in machine learning architectures, and it also greatly reduces the number of trained parameters. This is accomplished by employing convolution instead of matrix multiplication, where the convolution kernels can possess fewer parameters compared to the transformation matrix in the FC layer. The relationship between the input and the convolution kernel can be expressed as follows:
$$y_j = \sum_{i \in M_j} x_i * k_{ij} + b_j \tag{8}$$
where $x_i$ is the i-th channel of the input feature map, $y_j$ is the j-th channel of the output feature map, $k$ is the convolutional kernel, $b$ is the bias, and $M_j$ is the collection of input channels used to calculate the j-th channel of the output feature map [32].
In the training process of CNN, theoretically, the model achieves better results as the number of network layers increases, which leads to a problem: vanishing or exploding gradients [33]. This problem has been usually addressed by normalized initialization and adding intermediate normalization layers [34,35]. Another approach to address this issue is through the utilization of deep residual networks. These networks represent an appealing variant of CNNs, leveraging identity shortcuts to alleviate the challenges associated with parameter optimization [36].
Research shows that the accuracy of CNN improves as the net gets deeper, but it faces the problem of exploding or vanishing gradients and degradation [37]. To improve the performance of CNN, residual blocks and soft thresholds are applied in this work. Residual blocks create a shortcut from input to output, allowing layers with poor performance to be skipped.
Compared to designing filters manually, using a gradient descent algorithm enables the net learn itself; therefore, combining soft thresholds with deep learning can be a good way to eliminate noise-related information and construct highly discriminative features [38]. The soft threshold function can be expressed as follows:
$$y = \operatorname{soft}(x, \tau) = \begin{cases} x + \tau, & x < -\tau \\ 0, & \lvert x \rvert \le \tau \\ x - \tau, & x > \tau \end{cases} \tag{9}$$
where x is the input feature, τ is the threshold, which is a positive parameter, and the output feature is denoted by y.
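Equation (9) is the standard soft-threshold (shrinkage) operator; a short NumPy version is given below for reference.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft threshold of Equation (9): shrink |x| by tau and zero out values inside [-tau, tau]."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), 0.5))   # -> [-1.5  0.   0.   1. ]
```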
The basic principle of the residual block is shown in Figure 3, and a residual block can be represented as follows:
$$x_{l+1} = f(x_l) + F(x_l, W_l) \tag{10}$$
In Equation (10), $f(x_l)$ is the direct mapping (identity shortcut) part and $F(x_l, W_l)$ is the residual part, corresponding to the upper and lower branches in Figure 3, respectively.
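As a sketch of how such a residual block with a soft threshold can be realized for 1-D signals, the PyTorch module below combines two convolutional layers, a learned per-channel threshold, and the identity shortcut of Equation (10). The channel count, kernel size, and the squeeze-and-excitation style threshold estimation are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class ResidualShrinkageBlock1D(nn.Module):
    """Illustrative 1-D residual block with soft thresholding (Figure 3 / Equation (10))."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        # Estimate a per-channel shrinkage factor from the feature magnitudes
        self.tau_net = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, x):
        f = self.body(x)                                          # residual branch F(x_l, W_l)
        mag = f.abs().mean(dim=2)                                 # (batch, channels)
        tau = (self.tau_net(mag) * mag).unsqueeze(-1)             # per-channel threshold tau
        f = torch.sign(f) * torch.clamp(f.abs() - tau, min=0.0)   # soft threshold, Equation (9)
        return x + f                                              # identity shortcut, Equation (10)

block = ResidualShrinkageBlock1D(channels=16)
y = block(torch.randn(2, 16, 1200))                               # two length-1200 test signals
print(y.shape)                                                    # torch.Size([2, 16, 1200])
```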
Therefore, the whole flowchart showing the residual improved CNN is shown in Figure 4. The overall flow of ASSA is shown in Algorithm 2.
Algorithm 2. Overall workflow of the ASSA algorithm.
Input: Measured data sequence Y(t)
1: Start: Deep Res-Net recognition process of Y(t);
2: Deep Res-Net outputs noise signal category K and target signal category T, respectively;
3: Set window length L; if L > N/2, set L = N/2 − 1;
4: Embed Y(t) into trajectory matrix X;
5: Perform singular value decomposition using Equation (1), and sort the singular values in descending order;
6: Perform K-means clustering, choosing K + T singular values $\mu_1, \mu_2, \ldots, \mu_{K+T}$ as clustering centroids;
7: Obtain the clusters $C_1, C_2, \ldots, C_{K+T}$, calculate the mean of the i-th cluster, $C_{i,\mathrm{mean}} = \frac{1}{\mathrm{len}(C_i)} \sum_{j=1}^{\mathrm{len}(C_i)} c_{ij}$, where $c_{ij}$ is the j-th element of $C_i$, and sort the means in ascending order;
8: Set threshold $T_s$;
9: if $C_{i,\mathrm{mean}} < T_s$;
10:    for i, j < K + T;
11:        if i ≠ j
12:            Correlation matrix $Cor\_m_{i,j} = 1 - C_{i,\mathrm{mean}} / C_{j,\mathrm{mean}}$;
13:        end
14:    Valid cluster Cv = where (Cor_m < 0.01)
15:    end
16: end
17: Filter the eigentriples corresponding to the singular values of each clustered cluster based on Cv;
18: Reconstruct the signal components with different frequencies, according to Equation (4);
19: end.
Output: Reconstructed signals
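One reading of steps 7 to 14 of Algorithm 2 is sketched below: clusters whose mean singular values fall below the threshold and differ from another cluster's mean by less than 1% are treated as belonging to the same valid group. The cluster contents and the threshold Ts are assumed values used only for illustration.

```python
import numpy as np

# Assumed cluster contents (singular values already grouped by K-means) and an assumed threshold
clusters = [np.array([9.8, 9.6]), np.array([4.10, 4.05]), np.array([4.08, 4.02]), np.array([0.30, 0.28])]
Ts = 5.0

means = np.array([c.mean() for c in clusters])
order = np.argsort(means)                          # step 7: sort cluster means in ascending order
sorted_means = means[order]

n = len(sorted_means)
cor_m = np.ones((n, n))
for i in range(n):
    if sorted_means[i] >= Ts:                      # step 9: only clusters below the threshold are screened
        continue
    for j in range(n):
        if i != j:                                 # step 11
            cor_m[i, j] = 1.0 - sorted_means[i] / sorted_means[j]   # step 12: correlation matrix entry

valid_pairs = np.argwhere(np.abs(cor_m) < 0.01)    # step 14: clusters with nearly identical means
print("valid cluster index pairs (in sorted order):", valid_pairs.tolist())
```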

3. Experiments and Result Analysis

To validate the effectiveness of the proposed algorithm, simulation experiments and practical experiments were carried out. For the simulation experiments, a complex signal with different frequency components was constructed, and measured MT data were used in the practical experiments. The execution platform for these experiments was a 12th Gen Intel(R) Core(TM) i9-12950HX CPU (2.30 GHz) with an RTX 3070 Ti 8 GB GPU and 32 GB of RAM, running the Windows 11 x64 operating system. Software platform: Python 3.9.

3.1. Experiments of Frequency Resolution Performance

In order to verify the ability of the proposed algorithm to recognize and decompose multiple superimposed frequency components, a complex signal was constructed. The simulation signal s1 consists of sinusoidal signals with frequencies of 10, 15, 25, 35, 55 and 65 Hz and amplitudes from one to six, respectively. Gaussian noise was also added so that the simulation signal had a signal-to-noise ratio (SNR) of 3 dB. The original simulation signals are shown in Figure 5a, and the noisy simulation signals and the combined signal are shown in Figure 5b. The decomposition result is shown in Figure 6. The root mean square error (RMSE) was used for quantitative analysis, as shown in Table 1.
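The simulation signal s1 described above can be generated as follows; the 1200 Hz sampling rate and 10 s duration are assumptions carried over from the later experiments in this section.

```python
import numpy as np

fs, duration = 1200, 10
t = np.arange(0, duration, 1 / fs)

freqs = [10, 15, 25, 35, 55, 65]                   # component frequencies in Hz
amps = [1, 2, 3, 4, 5, 6]                          # corresponding amplitudes
s1 = sum(a * np.sin(2 * np.pi * f * t) for a, f in zip(amps, freqs))

snr_db = 3                                         # scale Gaussian noise to give an SNR of 3 dB
noise_power = s1.var() / (10 ** (snr_db / 10))
noisy_s1 = s1 + np.sqrt(noise_power) * np.random.randn(t.size)
```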
From Figure 6, it can be seen that most of these signals were well reconstructed, with slight distortion in the range of about 100 sampling points at the beginning and end of the signal. It can also be seen that the amplitudes of the restored signals have a slight attenuation, but the overall levels are still maintained, and the higher the signal frequency, the more stable the reconstructed amplitude.
Through this experiment, it has been proven that the proposed method can effectively identify signals with close frequencies, and the signals are well reconstructed.

3.2. Experiments of Target Signal Recognition

The experimental network architecture comprises five residual subnetworks, each integrating two convolutional layers with ReLU and Sigmoid activation functions. The network begins with a primary convolutional layer, followed by five additional convolutional layers dedicated to channel parameter optimization. Through systematic hyperparameter tuning, the network achieved optimal performance under the following configuration: cross-entropy served as the loss function, filter dimensions were progressively scaled from 16 to 256, a kernel of size 3 was used with He-normal initialization, the l2 regularization coefficient was set to 10−4, and the learning rate was maintained at 0.001. This configuration demonstrated superior performance in temporal efficiency and model precision in the following experiments.
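For orientation, the reported configuration maps roughly onto the PyTorch sketch below; the optimizer, the exact layer stack, and the number of output classes are assumptions (the paper reports hyperparameters, not code), with He-normal initialization rendered as kaiming_normal_ and the l2 coefficient as weight_decay.

```python
import torch
import torch.nn as nn

# Illustrative classifier head; the real network uses five residual subnetworks (see Figure 4)
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv1d(16, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(256, 4),                              # assumed number of target-signal classes
)
for m in model.modules():
    if isinstance(m, nn.Conv1d):
        nn.init.kaiming_normal_(m.weight)           # "He-normal" initialization

criterion = nn.CrossEntropyLoss()                   # cross-entropy loss
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```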
In actual application scenarios, the signal measured by the sensor is not an ideal sinusoidal signal; it is usually a complex composite signal containing many frequency components. As mentioned above, MT observation data are susceptible to anthropogenic interference, and it is necessary to carry out simulation experiments with the corresponding interferences.
Therefore, a complex non-stationary signal x was constructed to verify the efficiency of the proposed method. The sawtooth function and square functions were used to generate large-scale triangular and square waves, and a single-sided attenuation pulse signal was used to simulate vibration interference. Based on different combinations of the above signals, different types of interference can be simulated so as to build a general interference database.
The simulation signal x has a sampling rate of 1200 Hz and sampling time of 10 s, x = s + ns, where s is a non-stationary signal, and ns is Gaussian background noise with a SNR of 3. For comprehensive analysis and discussion, the following four typical target signals were selected for comparative experiments:
Triangle: $s_1 = \mathrm{sawtooth}(2\pi \cdot 50 (t - 1))$
Oscillating attenuation: $s_2 = \exp(-10(t - 1)) \cos(2\pi \cdot 50 (t - 1))$
Pulstran: $s_3 = \mathrm{pulstran}(t - 0.25,\, d,\, \mathrm{gauspuls},\, 10,\, 0.5)$
Dual-frequency: $s_4 = \exp(-10(t - 7)) \sin(2\pi \cdot 50 (7 - t) + \pi/2)$
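A hedged Python analogue of these four target signals is sketched below using scipy.signal; the MATLAB-style pulstran call is approximated by summing delayed Gaussian pulses, and the delay vector d, the triangle-wave width, the onset masking of the decaying terms, and the noise level are assumptions.

```python
import numpy as np
from scipy import signal

fs, duration = 1200, 10
t = np.arange(0, duration, 1 / fs)

s1 = signal.sawtooth(2 * np.pi * 50 * (t - 1), width=0.5)                                         # triangle wave
s2 = np.where(t >= 1, np.exp(-10 * (t - 1)) * np.cos(2 * np.pi * 50 * (t - 1)), 0.0)              # oscillating attenuation
d = np.arange(0.25, duration, 0.5)                                                                # assumed pulse delays
s3 = sum(signal.gausspulse(t - di, fc=10, bw=0.5) for di in d)                                    # pulse train
s4 = np.where(t >= 7, np.exp(-10 * (t - 7)) * np.sin(2 * np.pi * 50 * (7 - t) + np.pi / 2), 0.0)  # dual-frequency
x = s1 + s2 + s3 + s4 + 0.5 * np.random.randn(t.size)                                             # composite noisy signal
```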
During the experiments, all target signals were set as training data for the Deep Res-Net; the trained network was then applied to provide parameters for clusters of singular values. The recognition accuracy is up to 100%, which greatly enhances the accuracy of the K-means clustering algorithm and reduces the debugging time. The results are shown in Figure 7.
Figure 7a shows the reconstructed triangle wave signal. It can be seen that the reconstructed signal waveform is well preserved, with a slight attenuation in amplitude, and the signal amplitude rises after the 800th sampling point. Figure 7b shows the reconstructed oscillating attenuation signal, which has an amplitude attenuation of 0.4 at the initial time, but the overall target signal is well extracted. Figure 7c shows the reconstructed pulstran signal. The portion of the signal with an amplitude greater than 0 is extracted well, while the portion with an amplitude less than 0 shows slight distortion and is not smooth enough at the 800th sampling point. Figure 7d shows the reconstructed dual-frequency signal. The overall signal waveform is well extracted, although the background noise in the portion of the signal with an amplitude of 0 was not removed completely.
In order to better demonstrate the effectiveness of the proposed method, the RMSE was used to evaluate the extraction of the target signals by ASSA. Conventional SSA with manually adjusted parameters and the orthogonal matching pursuit (OMP) algorithm were included for comparison, and the results are shown in Figure 8 and Table 2.
Figure 8 illustrates the performance of the SSA with manually adjusted parameters and OMP algorithms in extracting target signals, revealing varying degrees of distortion. For the triangular wave signal in Figure 8a, outliers are evident at the beginning and end, and the overall signal amplitude is unstable. While the general profile of the pulstran signal was reconstructed, it remains insufficiently accurate, with noticeable noise at the end. The dual-frequency signal was partially restored, but harmonic interference persists, and some target signal frequencies were not preserved. In Figure 8b, the triangle wave and oscillation attenuation signals are effectively reconstructed. However, the pulstran and dual-frequency signals exhibit significant distortion, with noise reduction remaining inadequate.
From Table 2, it can be seen that compared with conventional SSA and the OMP sparse signal extraction method, the proposed ASSA method demonstrates superior performance in recognizing and extracting typical interference signals. SSA, however, requires extensive manual parameter adjustments, often yielding suboptimal results with unquantifiable time expenditure. Without taking into account the debugging time, the processing speed of SSA is similar to that of ASSA. OMP suffers from the complexities of constructing over-complete dictionaries, leading to increased computational time and high hardware requirements. The ASSA method effectively addresses these limitations, offering a more efficient and robust solution.

3.3. Measured Data Experiments

In order to verify the effect of the proposed method in practical application, experiments were carried out using measured data from the V8 MT system, developed by the Phoenix Corporation of Canada. The sensing system is shown in Figure 9.
In practical applications of the MT method, the shape of the target signal is difficult to know in advance, and complex noise interferences occur during the measurement process, so the target signal cannot be extracted effectively.
Before the experiment, additional work was carried out to collect and summarize the interference signals of the measurement area, including power frequency interference, stray current, electronic switch, motor noise, vehicle interference, etc., and the noise signal database was built during the simulation. Some typical interference signals are shown in Figure 10. From Figure 10, it can be seen that power line interference, usually originating from the power system, typically appears as a 50 Hz sine wave and its harmonics, with the main energy concentrated in the middle of the signal. The main energy of the switch interference signal is concentrated at the bottom. The drift current interference signal is a kind of low-frequency, random or periodic interference, and vehicle interference is the vibration and noise caused by engines, motors or tires, which manifests as the superposition of multiple low-frequency signals and some random transient pulses.
The Deep Res-Net was trained as follows: A simulation program generated 1000 noise signals, each with a sampling rate of 1200 Hz and a duration of 10 s. Because the amplitude of interference noise hardly exceeds 10 in real application, its amplitude starts at 0 and increases in 0.01 steps. These noise signals were then superimposed onto the target signal without noise and were finally used to establish a training dataset with dimensions of 1.2 × 103 by 5000. The test set was similarly constructed by superimposing a variety of noise types onto a denoised target signal. Given that the sampling rate of the measurement equipment used in this experiment was also 1200 Hz, the measurement signals were segmented into 10 s intervals to maintain data size consistency. Each segment was processed individually to ensure both the proper functioning of the algorithm and the similarity between the training data and the measurement data.
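A minimal sketch of this dataset construction is given below; the clean target waveform and the white-noise generator are placeholders, since the paper draws its interference from the constructed noise database rather than from Gaussian noise alone.

```python
import numpy as np

fs, duration, n_examples = 1200, 10, 1000
t = np.arange(0, duration, 1 / fs)
target = np.sin(2 * np.pi * 50 * t)                  # placeholder noise-free target signal

X_train, y_train = [], []
for i in range(n_examples):
    amplitude = 0.01 * i                             # noise amplitude grows from 0 in 0.01 steps
    noise = amplitude * np.random.randn(t.size)      # placeholder interference waveform
    X_train.append(target + noise)                   # superimpose noise onto the clean target
    y_train.append(0)                                # class label of the underlying target type

X_train = np.asarray(X_train)                        # one 10 s segment per row
```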
Then, the noise signal database was employed in ASSA to eliminate interferences in the MT method so as to extract the target signal completely. In addition, the historical measurement data were also incorporated into the deep Res-Net training, which was used to identify the target signal type after the noise interference was eliminated to provide suitable parameters for SSA.
Typical interferences in the MT methods included power frequency interference, stray current, electronic switch, motor noise, vehicle interference, etc.
The test results are shown in Figure 11.
Figure 11 demonstrates that the original observed signal was significantly impacted by interference, leading to substantial disturbances. In contrast, the signal extracted using ASSA exhibits no distortion, achieves an exceptional disturbance elimination effect, and delivers a high signal-to-noise ratio (SNR). These improvements render the extracted signals highly suitable for subsequent inversion calculations.
In Figure 11a, the blue line represents the signal obtained from the first measurement, while the light blue and green lines correspond to the two useful signals extracted by ASSA, with root mean square errors (RMSE) of 0.2047 and 0.2902 and SNRs of 12 dB and 11 dB, respectively. Similarly, in Figure 11b, the blue line represents the second MT observation signal, and the green line indicates the target signal extracted by ASSA, which achieved an RMSE of 0.2962 and an SNR of 13 dB. Details are provided in Table 3.
For comparison, the conventional SSA and OMP algorithms were used to process the second measurement signal, and the results are shown in Figure 12 and Table 4:
As can be seen from Table 4, the processing time of traditional SSA is very close to that of ASSA if the manual parameter adjustment time is not taken into account. Although the extraction accuracy of the OMP algorithm was superior to that of ASSA, it consumed far more time because of the need to build a huge redundant dictionary.
In order to verify the generalization ability of the algorithm, another measured data experiment was carried out. The conducted experiment focused on the measurement data processing of an MEMS gyroscope, specifically utilizing a CRM100 gyroscope for data acquisition on a rotating platform. The zero position of the CRM100 gyroscope is 1.65 after conversion. Pictures of the CRM100 gyroscope and rotating platform are shown in Figure 13a,b, respectively.
Unlike MT data, the gyroscope’s noise is predominantly characterized by random walk noise, which is primarily manifested as Gaussian noise and weak current noise, mainly represented as weak damped noise. The model training process follows a similar methodology to that employed for MT data processing. Given the selected instrument’s sampling rate of 1 kHz, a noise dataset was constructed with a sampling rate of 1 kHz, a sampling duration of 10 s, and an amplitude range from 0 to 1. This dataset was then integrated with effective signals to form the training and test sets. During data processing, the gyroscope data was segmented into 10 s frames and processed sequentially. The results are shown in Figure 14 and Table 5.
It can be seen from Table 5 that ASSA has better signal processing performance in a similar processing time. The experiment proved that the proposed algorithm has good generalization ability.

4. Conclusions

This paper has presented an innovative adaptive singular spectrum analysis (ASSA) approach, integrating K-means clustering, Deep Res-Net, and a signal reconstruction method based on correlation detection to enhance signal recognition and noise extraction performance. By addressing the limitations of traditional SSA, which relies on manually adjusted parameters, the proposed method achieves automatic parameter optimization and robust adaptability. The ASSA framework is particularly advantageous for handling complex noise in multi-signal environments, providing higher extraction accuracy and faster processing speeds.
Through experiments, the performance of the proposed ASSA based on K-means clustering and a Deep Res-Net was verified, confirming its accuracy in recognizing multiple signals with similar frequencies; its ability to identify and extract complex noise was also verified through simulation experiments. Compared with traditional SSA with manually adjusted parameters and the sparse decomposition signal extraction method, ASSA has higher extraction accuracy and stronger adaptability.
The experimental results show that this method is effective in practical applications. Compared with conventional data processing methods, ASSA eliminates the need for manual parameter adjustment and adaptively determines the optimal number of singular spectral decomposition layers, resulting in higher extraction accuracy and faster extraction speed. Furthermore, another advantage of ASSA is its versatility, as it can be applied to sensor measurement systems susceptible to complex noise, such as vibration detection and attitude measurement in high-dynamic environments. It can also be readily integrated with other data analysis programs.

Author Contributions

Conceptualization, S.G.; Data curation Z.G., K.F. and C.Z. (Chenming Zhang); Investigation, J.L. and S.G.; Methodology, S.G., Z.G., J.S. and K.F.; Resources, C.Z. (Chunxing Zhang); Software Z.G. and W.H.; Supervision, J.L.; Validation, S.G., K.F. and W.H.; Writing—original draft, Z.G. and S.G.; Writing—review & editing, Z.G., C.Z. (Chunxing Zhang), C.Z. (Chenming Zhang) and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was sponsored by Key Technologies R&D Program (2016YFC0303100), the National High Technology Research and Development Program of China (2014AA06A603), and the National Natural Science Foundation of China (61531001).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gao, Z.; Ge, S.; Li, J.; Feng, K. Inertial Navigation Trajectory and Attitude Prediction Based on Improved Hidden Markov Model. In Proceedings of the 2023 IEEE 16th International Conference on Electronic Measurement & Instruments (ICEMI), Harbin, China, 9–11 August 2023; pp. 383–389. [Google Scholar]
  2. Cai, J. A de-noising method of magnetotelluric signals based on the generalized S-transform. J. Appl. Geophys. 2024, 223, 105349. [Google Scholar] [CrossRef]
  3. Li, J.; Liu, Y.; Tang, J.; Ma, F. Magnetotelluric noise suppression via convolutional neural network. Geophysics 2023, 88, WA361–WA375. [Google Scholar] [CrossRef]
  4. Pukhova, V.M.; Kustov, T.V.; Ferrini, G. Time-frequency analysis of non-stationary signals. In Proceedings of the 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering (EIConRus), Moscow and St. Petersburg, Russia, 29 January–1 February 2018; pp. 1141–1145. [Google Scholar]
  5. Akaniro, O.G.; Sanei, S. Singular Spectrum Analysis of Non-stationary Signals. In Proceedings of the 2020 3rd International Conference on Emerging Trends in Electrical, Electronic and Communications Engineering (ELECOM), Balaclava, Mauritius, 25–27 November 2020; pp. 18–21. [Google Scholar]
  6. Yang, D.; Wang, H.; Wang, T.; Lu, G. Piezoelectric Active Sensing-Based Pipeline Corrosion Monitoring Using Singular Spectrum Analysis. Sensors 2024, 24, 4192. [Google Scholar] [CrossRef] [PubMed]
  7. Li, J.; Zhang, X.; Tang, J. Noise suppression for magnetotelluric using variational mode decomposition and detrended fluctuation analysis. J. Appl. Geophys. 2020, 180, 104127. [Google Scholar] [CrossRef]
  8. Pal, U.; Chattopadhyay, P.B.; Sarraf, Y.; Halder, S. Optimizing noise reduction in layered-earth magnetotelluric data for generating smooth models with artificial neural networks. Acta Geophys. 2024, 1–31. [Google Scholar] [CrossRef]
  9. Lin, W.; Yang, B.; Han, B.; Hu, X. A Review of Subsurface Electrical Conductivity Anomalies in Magnetotelluric Imaging. Sensors 2023, 23, 1803. [Google Scholar] [CrossRef]
  10. Pedersen, J.; Hermance, J.F. Least squares inversion of one-dimensional magnetotelluric data: An assessment of procedures employed by Brown University. Surv. Geophys. 1986, 8, 45. [Google Scholar] [CrossRef]
  11. Cadzow, J.A. Least Squares, Modeling, and Signal Processing. Digit. Signal Process. 1994, 4, 19. [Google Scholar] [CrossRef]
  12. Sharma, L.D.; Bhattacharyya, A. A Computerized Approach for Automatic Human Emotion Recognition Using Sliding Mode Singular Spectrum Analysis. IEEE Sens. J. 2021, 21, 26931–26940. [Google Scholar] [CrossRef]
  13. Wang, X.; Zhang, S.; Chen, S.; Hou, L.; Zhu, L. An Antijamming Method Based on Multichannel Singular Spectrum Analysis and Affinity Propagation for UWB Ranging Sensors. IEEE Sens. J. 2023, 23, 11869–11878. [Google Scholar] [CrossRef]
  14. Eriksen, T.; Rehman, N.U. Data-driven nonstationary signal decomposition approaches: A comparative analysis. Sci. Rep. 2023, 13, 1798. [Google Scholar] [CrossRef] [PubMed]
  15. Hassani, H. Singular spectrum analysis: Methodology and comparison. J. Data Sci. 2007, 5, 19. [Google Scholar] [CrossRef]
  16. Jain, S.; Panda, R.; Tripathy, R.K. Multivariate Sliding-Mode Singular Spectrum Analysis for the Decomposition of Multisensor Time Series. IEEE Sens. Lett. 2020, 4, 7002404. [Google Scholar] [CrossRef]
  17. Bayati, F.; Trad, D. 3-D Data Interpolation and Denoising by an Adaptive Weighting Rank-Reduction Method Using Multichannel Singular Spectrum Analysis Algorithm. Sensors 2023, 23, 577. [Google Scholar] [CrossRef]
  18. Du, W.; Zhou, J.; Wang, Z.; Li, R.; Wang, J. Application of Improved Singular Spectrum Decomposition Method for Composite Fault Diagnosis of Gear Boxes. Sensors 2018, 18, 3804. [Google Scholar] [CrossRef]
  19. Liao, Z.; Song, L.; Chen, P.; Guan, Z.; Fang, Z.; Li, K. An Effective Singular Value Selection and Bearing Fault Signal Filtering Diagnosis Method Based on False Nearest Neighbors and Statistical Information Criteria. Sensors 2018, 18, 2235. [Google Scholar] [CrossRef]
  20. Gu, J.; Hung, K.; Ling, B.W.-K.; Chow, D.H.-K.; Zhou, Y.; Fu, Y.; Pun, S.H. Generalized singular spectrum analysis for the decomposition and analysis of non-stationary signals. J. Frankl. Inst. 2024, 361, 106696. [Google Scholar] [CrossRef]
  21. Zhou, R.; Han, J.; Guo, Z.; Li, T. De-Noising of Magnetotelluric Signals by Discrete Wavelet Transform and SVD Decomposition. Remote Sens. 2021, 13, 4932. [Google Scholar] [CrossRef]
  22. Harmouche, J.; Fourer, D.; Auger, F.; Borgnat, P.; Flandrin, P. The Sliding Singular Spectrum Analysis: A Data-Driven Nonstationary Signal Decomposition Tool. IEEE Trans. Signal Process. 2018, 66, 251–263. [Google Scholar] [CrossRef]
  23. Zhang, C.; Du, C.; Peng, X.; Han, Q.; Guo, H. An Aeromagnetic Compensation Method for Suppressing the Magnetic Interference Generated by Electric Current with Vector Magnetometer. Sensors 2022, 22, 6151. [Google Scholar] [CrossRef]
  24. Lin, C.-S.; Wu, Y.-X. Singular Spectrum Analysis for Modal Estimation from Stationary Response Only. Sensors 2022, 22, 2585. [Google Scholar] [CrossRef]
  25. Bonizzi, P.; Karel, J.M.H.; Meste, O.; Peeters, R.L.M. Singular spectrum decomposition: A new method for time series decomposition. Adv. Adapt. Data Anal. 2014, 06, 1450011. [Google Scholar] [CrossRef]
  26. Xu, W.; Shen, Y.; Jiang, Q.; Zhu, Q.; Xu, F. Rolling bearing fault feature extraction via improved SSD and a singular-value energy autocorrelation coefficient spectrum. Meas. Sci. Technol. 2022, 33, 085112. [Google Scholar] [CrossRef]
  27. Kazemi, M.; Rodrigues, P.C. Robust singular spectrum analysis: Comparison between classical and robust approaches for model fit and forecasting. Comput. Stat. 2023. [Google Scholar] [CrossRef]
  28. Golyandina, N. Particularities and commonalities of singular spectrum analysis as a method of time series analysis and signal processing. WIREs Comput. Stat. 2020, 12, e1487. [Google Scholar] [CrossRef]
  29. Kiranyaz, S.; Ince, T.; Abdeljaber, O.; Avci, O.; Gabbouj, M. 1-D Convolutional Neural Networks for Signal Processing Applications. In Proceedings of the ICASSP 2019—2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 8360–8364. [Google Scholar]
  30. Ince, T.; Kiranyaz, S.; Eren, L.; Askar, M.; Gabbouj, M. Real-Time Motor Fault Detection by 1-D Convolutional Neural Networks. IEEE Trans. Ind. Electron. 2016, 63, 7067–7075. [Google Scholar] [CrossRef]
  31. Abdeljaber, O.; Avci, O.; Kiranyaz, S.; Gabbouj, M.; Inman, D.J. Real-time vibration-based structural damage detection using one-dimensional convolutional neural networks. J. Sound Vib. 2017, 388, 154–170. [Google Scholar] [CrossRef]
  32. Zhao, M.; Kang, M.; Tang, B.; Pecht, M. Deep Residual Networks With Dynamically Weighted Wavelet Coefficients for Fault Diagnosis of Planetary Gearboxes. IEEE Trans. Ind. Electron. 2018, 65, 4290–4300. [Google Scholar] [CrossRef]
  33. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, Sardinia, Italy, 13–15 May 2010; p. 8. [Google Scholar]
  34. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the International Conference on Machine Learning, Lille, France, 6–11 July 2015. [Google Scholar]
  35. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; p. 9. [Google Scholar]
  36. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; p. 9. [Google Scholar]
  37. Zagoruyko, S.; Komodakis, N. Wide Residual Networks. arXiv 2016, arXiv:1605.07146. [Google Scholar]
  38. Zhao, M.; Zhong, S.; Fu, X.; Tang, B.; Pecht, M. Deep Residual Shrinkage Networks for Fault Diagnosis. IEEE Trans. Ind. Inform. 2020, 16, 4681–4690. [Google Scholar] [CrossRef]
Figure 1. Flow chart of SSA.
Figure 2. Illustration of basic CNN.
Figure 3. Basic principle of the residual block.
Figure 4. Diagram of Deep Res-Net for 1-D signal recognition.
Figure 5. Simulation signals. (a) Original signals; (b) Simulation signals with background noise and combined signals.
Figure 6. Decomposition results of sinusoidal signals with different frequencies and amplitudes.
Figure 7. Decomposition and reconstruction results of simulated typical interference noise. (a) Comparison of triangle wave signal before and after reconstruction; (b) Comparison of oscillating attenuation signal before and after reconstruction; (c) Comparison of pulstran signal before and after reconstruction; (d) Comparison of dual-frequency signal before and after reconstruction.
Figure 8. Comparison of simulation experiment results. (a) Simulation signal extraction results of SSA with manually adjusted parameters; (b) Simulation signal extraction results of OMP method.
Figure 9. Sensing system schematic of the V8 MT system.
Figure 10. Time domain waveform of typical noise in the general noise database. (a) Powerline interference; (b) Switch interference; (c) Drift current interference; (d) Vehicle interference.
Figure 11. Signal recognition and extraction results of measured MT data. (a) First measurement extraction results; (b) Second measurement extraction results.
Figure 12. Second measurement extraction results by SSA and OMP.
Figure 13. Pictures of CRM100 gyroscope and rotating platform. (a) CRM100 gyroscope; (b) Rotating platform.
Figure 14. Processing results of gyroscope data.
Table 1. RMSE and processing time of each target frequency signal extraction.

Signal | RMSE | Processing Time (s)
Reconstructed signal 1 | 3.64 × 10−2 | 2.81
Reconstructed signal 2 | 3.45 × 10−2 | 2.73
Reconstructed signal 3 | 4.60 × 10−2 | 2.72
Reconstructed signal 4 | 4.26 × 10−2 | 2.99
Reconstructed signal 5 | 1.84 × 10−2 | 2.73
Reconstructed signal 6 | 3.64 × 10−2 | 2.72
Table 2. RMSE and processing time of each target signal extraction.

Signal | RMSE (ASSA) | RMSE (SSA) | RMSE (OMP) | Time (s, ASSA) | Time (s, SSA) | Time (s, OMP)
Triangle wave | 0.1041 | 0.1512 | 0.4262 | 24.86 | 21.75 | 1075.91
Oscillating attenuation | 0.0356 | 0.0473 | 0.0475 | 23.53 | 23.37 | 1075.91
Pulstran | 0.1889 | 0.2163 | 0.2565 | 24.62 | 20.58 | 1075.91
Dual-frequency | 0.1552 | 0.2367 | 0.2743 | 23.81 | 20.79 | 1075.91
Table 3. Experimental results of measured data processed by ASSA.

Reconstructed Signal | RMSE | SNR (dB) | Processing Time (s)
Reconstructed signal 1 (First measurement) | 0.2047 | 12 | 32.75
Reconstructed signal 2 (First measurement) | 0.2902 | 11 | 32.68
Reconstructed signal (Second measurement) | 0.2962 | 13 | 33.57
Table 4. Experimental results of measured data processed by SSA and OMP.

Algorithm | RMSE | Processing Time (s)
Conventional SSA | 0.4906 | 29.36
OMP | 0.1342 | 4347.62
Table 5. Experimental results of gyroscope measured data.

Algorithm | RMSE | Processing Time (s)
ASSA | 3.57 × 10−4 | 4.2
VMD | 4.02 × 10−4 | 3.3
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Gao, Z.; Ge, S.; Li, J.; Huang, W.; Feng, K.; Zhang, C.; Zhang, C.; Sun, J. An Analog Sensor Signal Processing Method Susceptible to Anthropogenic Noise Based on Improved Adaptive Singular Spectrum Analysis. Sensors 2025, 25, 1598. https://doi.org/10.3390/s25051598