Article

Filtering-Based Regularized Sparsity Variable Step-Size Matching Pursuit and Its Applications in Vehicle Health Monitoring

1 School of Electronic and Information Engineering, Beihang University, Beijing 100191, China
2 Hefei Innovation Research Institute, Beihang University, Beijing 100191, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2021, 11(11), 4816; https://doi.org/10.3390/app11114816
Submission received: 18 April 2021 / Revised: 20 May 2021 / Accepted: 22 May 2021 / Published: 24 May 2021
(This article belongs to the Section Electrical, Electronics and Communications Engineering)

Abstract

Recent years have witnessed real-time vehicle health monitoring gaining importance. Conventional monitoring schemes face formidable challenges from the massive signals generated, which place an extremely heavy burden on storage and transmission. To address these signal sampling and transmission issues, compressed sensing (CS), which performs signal sampling and compression simultaneously, has emerged as a promising solution for vehicle health monitoring. Signal reconstruction is regarded as the most critical part of CS, and greedy reconstruction has become a research hotspot. However, existing approaches either require prior knowledge of the sparse signal or suffer from high computational complexity. To exploit the structure of the sparse signal, this paper first introduces an initial estimation approach for the signal sparsity level. It then presents a novel greedy reconstruction algorithm that requires no prior information about the sparsity level while maintaining good reconstruction performance. The proposed algorithm integrates regularization and adaptive variable step-size strategies and further performs filtering. To verify the efficiency of the algorithm, typical voltage disturbance signals generated by the vehicle power system are taken as trial data. Preliminary simulation results demonstrate that the proposed algorithm achieves superior performance compared to existing methods.

1. Introduction

To acquire an accurate signal from its samples, the sampling rate should be at least the Nyquist rate, i.e., twice the signal’s highest frequency component. However, a high sampling rate demands expensive hardware resources. As signal bandwidths increase rapidly, this places tremendous pressure on the hardware system. Moreover, traditional sampling methods can waste substantial computation and storage resources and may even lead to large recovery errors.
Compressed sensing (CS) theory, proposed by D. Donoho, E. Candes and T. Tao, successfully addresses these issues. According to the theory, if a signal is sparse in a certain domain, a measurement matrix uncorrelated with the sparse basis can be employed for low-dimensional projection, and a high-precision reconstruction can be accomplished through convex optimization algorithms or matching pursuit methods [1]. B. Adcock provided mathematical explanations for CS by constructing frameworks and generalization concepts [2].
Compared with traditional signal acquisition and processing, CS samples signals efficiently at a much lower rate, reduces the number of measurements that must be transmitted or processed, and recovers high-resolution signals by exploiting signal sparsity [3,4]. Because of its superior performance in signal sampling and compression, CS has been widely employed in wireless communications [5], image and data processing [6,7], data compression [8], and signal denoising [9,10], especially in the real-time monitoring and analysis of vehicle signal quality [11,12,13].
A vehicle is an organically coupled system composed of sophisticated subsystems. With the explosive growth of ubiquitous vehicles, large numbers of sophisticated and complex devices are continually integrated into vehicle electronics. This creates an essential need to acquire sufficiently high-quality status signals for vehicle health monitoring. To provide reliable guarantees for the healthy operation of a vehicle, real-time monitoring of status signals in the vehicle system is a promising solution. Owing to the large variety of vehicle signals and the requirement for long-term supervision, real-time monitoring generates huge amounts of data, which imposes extremely high requirements on signal sampling and storage. The conventional monitoring scheme for vehicles is depicted in Figure 1a. The original signals are sampled and stored at a high sampling rate, which generates massive signals and imposes a heavy burden on storage and transmission [14]. Transmitting such sampled signals to the data processing center requires expensive hardware resources and causes large time delays. CS theory, which performs signal sampling and compression simultaneously, is well suited to quality analysis of vehicle systems. Real-time vehicle health monitoring using CS is shown in Figure 1b. By contrast, only a small number of measurements need to be transmitted after the original signals are sensed and compressed. The data center generally has powerful computing resources to reconstruct the original signal. Transmitting these measurements demands limited resources and is extremely fast, which benefits subsequent analysis and diagnosis.
Reconstruction refers to the procedure of recovering the original signal from the low-dimensional measurement signal, and it is the critical part of CS. Reconstruction algorithms fall mainly into two directions, i.e., convex optimization algorithms and iterative greedy methods [8]. Convex optimization algorithms, such as basis pursuit (BP), incur high computational complexity, which limits their practical applicability. By contrast, iterative greedy methods, proposed to approximate the l0-norm minimization, offer low computational complexity and clear interpretability, and have attracted much attention.
Conventional greedy methods require prior knowledge of the signal sparsity level and can be computationally expensive [14,15,16,17,18]. Sparsity adaptive matching pursuit (SAMP) does not rely on the sparsity level [19]. Given the potential performance advantages of SAMP, an increasing number of researchers have explored adaptive matching pursuit algorithms [20,21,22,23,24,25]. However, these algorithms do not employ a preselection step, i.e., they make no initial estimation of the sparsity level during the initial phase, which leads to poor reconstruction performance. In addition, they rarely adopt an appropriate variable step size and do not further filter the results for higher accuracy. Although these algorithms can perform reconstruction with little prior information about the signal sparsity level, their reconstruction accuracy and efficiency still need substantial improvement, especially for vehicle health monitoring.
To overcome the problems mentioned above, we propose a novel greedy algorithm called filtering-based regularized sparsity variable step-size matching pursuit (FRSVssMP), which is capable of signal reconstruction without requiring prior knowledge of the sparsity level. Building on the proposed initial estimation approach for the signal sparsity level, the algorithm integrates regularization and adaptive variable step-size strategies, and further filters the reconstruction results. To verify the efficiency of FRSVssMP in reconstruction, we conducted extensive experiments on typical vehicle signals. The results demonstrate that the proposed FRSVssMP surpasses other greedy algorithms in both accuracy and efficiency. The proliferation of smart transportation has significantly promoted the explosive growth of vehicles. Owing to their sophisticated and complex electronics, vehicles are inevitably susceptible to various faults. In particular, with the rapid development of intelligent sensors, large amounts of heterogeneous vehicle status data impose heavy pressure on health monitoring. To ensure the reliable operation of vehicles and identify potential faults efficiently, FRSVssMP can serve as a promising paradigm to facilitate the transmission and analysis of such data. The contributions of this paper are three-fold, as follows:
  • Firstly, we put forward an initial estimation approach for the signal sparsity level to exploit the structure of the sparse signal.
  • Secondly, we propose a novel iterative greedy algorithm relying on no prior information of signal sparsity level. The algorithm integrates strategies of regularization and variable adaptive step size. Further, the original signal can be reconstructed precisely with the filtering mechanism.
  • Thirdly, extensive numerical simulations are conducted to verify the performance of the proposed algorithm for real-time health monitoring. Experimental results demonstrate that FRSVssMP significantly outperforms the state-of-the-art greedy algorithms.
The remainder of this paper is organized as follows. The basic model of CS and representative greedy algorithms are illustrated in Section 2. Section 3 gives the detailed description of sparsity level estimation approach and the proposed reconstruction algorithm. Simulation results and discussions are presented with typical disturbance signals, including synthetic sparse signals and real sparse signals in monitoring for the vehicle power system in Section 4, followed by the conclusion in Section 5.

2. Theoretical Basis and Related Work

This section briefly describes the framework of CS [1], including sparse representation of signals, low-dimensional measurement, and reconstruction. Considering the importance of reconstruction in CS, a number of greedy reconstruction algorithms have been proposed to solve this challenging problem. We therefore also review some representative greedy reconstruction algorithms.

2.1. The Framework of CS

2.1.1. Sparse Representation of Signals

Consider a sparse signal $X \in \mathbb{R}^{N\times 1}$ with sparsity level K (K << N) and an orthogonal basis matrix $\Psi \in \mathbb{R}^{N\times N}$, whose i-th column vector is denoted by $\psi_i \in \mathbb{R}^{N\times 1}$. X may not be sparse itself, but it can be represented sparsely in this domain. Therefore, X can be expressed as:

$$X = \sum_{i=1}^{N}\theta_i \psi_i = \Psi\theta, \tag{1}$$

where $\theta \in \mathbb{R}^{N\times 1}$ is a sparse vector. A signal is called K-sparse if the number of non-zero elements in θ is K.
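To make the sparse-representation model concrete, the following sketch builds a K-sparse θ and forms X = Ψθ. All values are illustrative; a random orthonormal matrix stands in for a real transform such as a DCT basis:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 256, 8

# A random orthonormal matrix stands in for the orthogonal basis Psi
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

# K-sparse coefficient vector theta
theta = np.zeros(N)
support = rng.choice(N, size=K, replace=False)
theta[support] = rng.standard_normal(K)

# X = Psi @ theta: dense in the signal domain, K-sparse in the Psi domain
X = Psi @ theta

# Because Psi is orthonormal, theta is recovered exactly by Psi^T X
assert np.allclose(Psi.T @ X, theta)
```

Orthonormality of Ψ is what makes the coefficient vector uniquely recoverable as $\theta = \Psi^T X$.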

2.1.2. Low Dimensional Measurement

Select a measurement matrix $\Phi \in \mathbb{R}^{M\times N}$ uncorrelated with the sparse basis to obtain the measurements. $Y \in \mathbb{R}^{M\times 1}$ is the M-dimensional measurement vector, acquired by linear random projections:

$$Y = \Phi X = \Phi\Psi\theta = A\theta, \tag{2}$$

where $A \in \mathbb{R}^{M\times N}$ denotes the sensing matrix with $M \geq K \cdot \log(N/K)$ [1].
Since the signal X is sparse, it can be reconstructed precisely from the measurement vector provided that A satisfies the restricted isometry property (RIP), i.e., for all K-sparse signals X there exists a restricted isometry constant (RIC) $\delta_K$ (0 < $\delta_K$ < 1) such that

$$(1-\delta_K)\|X\|_2^2 \leq \|AX\|_2^2 \leq (1+\delta_K)\|X\|_2^2, \tag{3}$$

where K denotes the order of the RIP [26].
The measurement matrix has a direct influence on compression and reconstruction performance. Conventional measurement matrices include the Gaussian matrix, the Bernoulli matrix, etc.
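As a sketch of the measurement step (the dimensions and the constant in $M \geq cK\log(N/K)$ are illustrative choices, not values from the paper), one can draw a Gaussian Φ and empirically inspect the RIP-style eigenvalue bound on one random K-column submatrix:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 400, 5
M = int(np.ceil(4 * K * np.log(N / K)))   # M >= c*K*log(N/K); c = 4 chosen here

# Gaussian measurement matrix, scaled so columns have unit expected norm
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# K-sparse test signal and its M-dimensional measurement Y = Phi X
X = np.zeros(N)
X[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Y = Phi @ X

# Empirical RIP check for one random K-column submatrix: the eigenvalues of
# Phi_S^T Phi_S should lie within [1 - delta_K, 1 + delta_K] for some delta_K < 1
S = rng.choice(N, K, replace=False)
eig = np.linalg.eigvalsh(Phi[:, S].T @ Phi[:, S])
delta_emp = max(1.0 - eig.min(), eig.max() - 1.0)
```

Checking one submatrix is only a sanity check, not a proof: the RIP requires the bound to hold for every K-column submatrix simultaneously.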

2.1.3. Reconstruction

The most powerful method to recover the sparse signal is solving the l0-norm minimization problem. The sparse coefficient θ can then be reconstructed from Y by:

$$\min \|\theta\|_0 \quad \text{s.t.} \quad Y = \Phi\Psi\theta. \tag{4}$$
The l0-norm minimization makes the result as sparse as possible. Nevertheless, this optimization problem is NP-hard. The literature shows that the l0-norm minimization problem and the l1-norm one are equivalent under certain conditions. Thus, Equation (4) can be transformed into its convex approximation:

$$\min \|\theta\|_1 \quad \text{s.t.} \quad Y = \Phi\Psi\theta. \tag{5}$$

However, solving this optimization still requires high computational complexity [26]. Currently, researchers mainly focus on constructing low-complexity approaches, such as greedy recovery algorithms.
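Problem (5) is a linear program. As an illustration (not the paper's method; sizes and seed are arbitrary), it can be solved with SciPy's `linprog` by splitting θ into nonnegative parts:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
M, N, K = 40, 100, 4
A = rng.standard_normal((M, N)) / np.sqrt(M)
theta = np.zeros(N)
theta[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ theta

# Write theta = u - v with u, v >= 0; then  min ||theta||_1  s.t.  A theta = y
# becomes the LP:  min 1^T (u; v)  s.t.  [A, -A](u; v) = y,  (u; v) >= 0
c = np.ones(2 * N)
res = linprog(c, A_eq=np.hstack([A, -A]), b_eq=y, bounds=(0, None), method="highs")
theta_hat = res.x[:N] - res.x[N:]
```

For these comfortable dimensions (K << M << N), basis pursuit recovers θ exactly, but each solve is far more expensive than a greedy iteration, which motivates the greedy methods reviewed next.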

2.2. Greedy Reconstruction Algorithms

Various iterative greedy algorithms have been developed for reconstruction. Their basic idea is to identify signal support entries iteratively by correlating the measurement vector with columns of the sensing matrix. Depending on whether the signal sparsity level is known, greedy algorithms fall into two categories. The first category requires the signal sparsity level as prior information to control the iterations, e.g., OMP [14]. The second requires no prior knowledge of the sparsity level and performs reconstruction by adjusting step sizes, e.g., SAMP [19] and AStMP [20].
Mallat first proposed matching pursuit (MP), in which one atom is added to the support set in each iteration, but there is no guarantee that the residual is orthogonal to the selected atoms. Orthogonal matching pursuit (OMP) avoids re-selecting the same atom but still selects only one atom per iteration, which results in low reconstruction efficiency [14]. To select multiple atoms per iteration, stagewise orthogonal matching pursuit (StOMP) with a threshold principle [15] and regularized orthogonal matching pursuit (ROMP) with a regularization strategy [16] were proposed. The computational complexity of these two methods is considerably lower than that of BP. However, they suffer from large reconstruction errors, and excellent reconstruction performance is attained only when measurements are plentiful. Therefore, compressive sampling matching pursuit (CoSaMP) and subspace pursuit (SP), both including pruning, were introduced [17]. The expanded subspace pursuit achieves blind sparse reconstruction by incorporating a simple backtracking technique based on SP [18]. The innovation of those approaches lies in pruning the selected atoms in the support set to exclude wrong atoms. Unfortunately, a constant selection size for all iterations easily introduces redundant atoms. In addition, the aforementioned methods depend heavily on prior information about the sparsity level.
Generally speaking, the signal sparsity level is unknown. Hence, algorithms in the second category are widely applicable and have proven effective. These algorithms divide reconstruction into several phases. In each phase, the cardinality W of the signal support set F does not change, and multiple iterations are conducted. When F does not meet the terminating criterion, the step size S is increased to enter the next phase. As W increases, the actual sparsity level K can be approximated without prior information about the signal sparsity.
Researchers proposed sparsity adaptive matching pursuit (SAMP), which does not rely on the signal sparsity and achieves reconstruction by gradually approximating the actual sparsity level in increments of a fixed step size [19]. However, the fixed step size results in either low reconstruction efficiency or low reconstruction accuracy. Several adaptive matching pursuit algorithms were subsequently proposed on the basis of SAMP. Adaptive step-size matching pursuit (AStMP) precisely estimates the signal sparsity level by combining subspace pursuit with a variable step size in each iteration [20]. Adaptive regularized compressive sampling matching pursuit (ARCoSaMP) chooses the support set adaptively and exploits a regularization process to improve reconstruction accuracy [21]. An improved reconstruction method utilizes a fuzzy threshold method and a double-threshold iterative method to improve precision [22]. A modified sparsity adaptive matching pursuit (MSAMP) achieves adaptive sparsity realization and index-set backtracking optimization [23]. More recently, a fast global matching pursuit (FGMP) solves the l0 minimization with global matching pursuit strategies [24]. To address the issue of step-size selection, constrained backtracking matching pursuit (CBMP) uses two kinds of composite strategies to estimate the true support set [25]. However, these algorithms have some obvious disadvantages. For example, some of them do not apply a specific selection strategy when choosing elements of the correlation vector, which limits accuracy to some extent. Additionally, the variable step sizes adopted above are relatively simple and do not take multiple phases in a single iteration into consideration. Moreover, these methods do not further filter the results for higher accuracy, and none of them employs a preselection step, which may lead to poor reconstruction performance.
In contrast with these algorithms, the novel reconstruction algorithm proposed in this paper integrates strategies of regularization and variable adaptive step size without requiring any prior information of the signal sparsity. Furthermore, optimized reconstruction results can be achieved by a filtering step. Therefore, it can perform reconstruction more accurately and efficiently.

3. The Proposed Algorithm and Its Applications

In this section, we propose a novel greedy algorithm, FRSVssMP, which is capable of signal reconstruction without requiring prior knowledge of the signal sparsity. Building on the proposed initial estimation approach for the signal sparsity level, the algorithm integrates regularization and adaptive variable step-size strategies, and further filters the reconstruction results. Since the initial step size, which noticeably affects recovery performance, is closely related to the signal sparsity, this section first presents a new approach for initially estimating the signal sparsity level. In the following parts, regularization [16], the adaptive variable step size [19,20,21], and the filtering mechanism are introduced, followed by a detailed description of the steps.
Regularization: As illustrated in [16], regularization selects the set of atoms with maximum energy among those whose correlations (the absolute values obtained by multiplying column vectors of the sensing matrix with the residual) satisfy the condition that the maximum is less than twice the minimum.
Sparsity variable step-size adaptation: As illustrated in [20], an adaptive variable step size is a powerful means of improving reconstruction accuracy and efficiency. In this paper, we elaborate a new variable step-size pattern based on comparing thresholds and relative errors. The adaptive step-size idea in [20,21] adopts only one stage of variant step sizes. By contrast, our algorithm leverages two stages of variant step sizes to further approximate the true signal sparsity level. In the initial phase, a large step size is adopted for rapid approximation. Once the precision reaches a sufficiently high value, a small step size is used for accurate approximation. Adaptation is reflected in the setting of the small step size, which is self-tuning according to the characteristics of the signals.
Filtering mechanism: Furthermore, the filtering mechanism is proposed to exclude incorrect supports. After the aforementioned steps, the sparse signal can be obtained from the confirmed support set by the least squares method. However, without any prior information about the signal sparsity, we are still unable to verify whether the estimated sparsity level is correct, since the algorithm cannot traverse every possible sparsity. After numerous experiments, we find that in many applications there is a high probability that the estimated sparsity level exceeds the actual level. In this case, we further filter the results for the sake of higher recovery accuracy. The filtering mechanism removes the atom that exerts minimal influence on reconstruction performance in each iteration until the residual can no longer be diminished.
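The filtering mechanism can be sketched as follows (an illustrative numpy helper, not the authors' reference code): starting from a possibly overestimated support, it repeatedly drops the atom with the smallest least-squares coefficient magnitude as long as the residual does not grow:

```python
import numpy as np

def filter_support(A, y, support, tol=1e-10):
    """Filtering step (sketch): drop the least-contributing atom while the
    residual does not grow. tol guards against floating-point noise when the
    residual is already near zero."""
    support = list(support)

    def residual(s):
        coef, *_ = np.linalg.lstsq(A[:, s], y, rcond=None)
        return np.linalg.norm(y - A[:, s] @ coef), coef

    r, coef = residual(support)
    while len(support) > 1:
        # Candidate support: remove the atom with the smallest |coefficient|
        drop = int(np.argmin(np.abs(coef)))
        trial = support[:drop] + support[drop + 1:]
        r_new, coef_new = residual(trial)
        if r_new <= r + tol:          # keep pruning only while the residual holds
            support, r, coef = trial, r_new, coef_new
        else:
            break
    return support
```

In the noiseless case, atoms outside the true support carry (near-)zero least-squares coefficients, so they are pruned first, and removing a true atom would raise the residual and stop the loop.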
Since the initial sparsity level estimation exerts an important influence on the subsequent steps, an initial estimation approach for the sparsity level is introduced. Then, the complete steps of the proposed algorithm are illustrated in detail.

3.1. An Initial Estimation Approach for the Signal Sparsity Level

Consider a sparse signal $X \in \mathbb{R}^{N\times 1}$ with sparsity level K (K << N). The true support set of the sparse signal is denoted by Λ, so $|\Lambda| = K$. $y \in \mathbb{R}^{M\times 1}$ is the measurement vector and A is the sensing matrix. Suppose $\nu = A^T y$; the i-th element $\nu_i$ of ν is the inner product of $A_i$ (the i-th column of A) and the measurement vector y, i.e., $\nu_i = \langle A_i, y\rangle$. The indices of the $k_0$ (1 ≤ $k_0$ ≤ N) largest absolute values in $\{\nu_i\}$ form the identified support set $\Lambda_0$, i.e., $|\Lambda_0| = k_0$. δ denotes the restricted isometry constant. We then have Proposition 1.
Proposition 1:
Suppose A satisfies the RIP with parameters $(2K, \delta_{2K})$. For $k_0 \leq K$,

$$\|A_{\Lambda_0}^T y\|_2^2 \leq \frac{(1+\delta_{2K})^2}{1-\delta_K}\|y\|_2^2. \tag{6}$$
Proof of Proposition 1:
Let $\Lambda_K$ be the set of indices of the K (1 ≤ K ≤ N) largest absolute values in $\{\nu_i\}$. If $k_0 \leq K$, i.e., $\Lambda_0 \subseteq \Lambda_K$, we have

$$\|A_{\Lambda_0}^T y\|_2^2 \leq \|A_{\Lambda_K}^T y\|_2^2. \tag{7}$$

The measurement vector y can be written as $y = A_\Lambda \theta_\Lambda$. Thus,

$$\|A_{\Lambda_K}^T y\|_2 = \|A_{\Lambda_K}^T A_\Lambda \theta_\Lambda\|_2 \leq \|A_{\Lambda_K}^T A_\Lambda\|_2 \, \|\theta_\Lambda\|_2, \tag{8}$$

where $|\Lambda_K| = |\Lambda| = K$ and $|\Lambda_K \cup \Lambda| \leq 2K$.
Let $\Omega = \Lambda_K \cup \Lambda$. Since the spectral norm of a submatrix does not exceed that of the original matrix, we have $\|A_{\Lambda_K}^T A_\Lambda\|_2 \leq \|A_\Omega^T A_\Omega\|_2$.
By the RIP, the singular values of $A_\Lambda$ lie between $\sqrt{1-\delta_K}$ and $\sqrt{1+\delta_K}$, where 0 < $\delta_K$ < 1. Letting $\lambda(A_\Lambda^T A_\Lambda)$ denote the eigenvalues of $A_\Lambda^T A_\Lambda$, we have $1-\delta_K \leq \lambda(A_\Lambda^T A_\Lambda) \leq 1+\delta_K$.
A satisfies the RIP with parameters $(2K, \delta_{2K})$, so $\lambda(A_\Omega^T A_\Omega) \leq 1+\delta_{2K}$. Thus,

$$\|A_{\Lambda_K}^T A_\Lambda\|_2 \leq \|A_\Omega^T A_\Omega\|_2 \leq 1+\delta_{2K}. \tag{9}$$

By the RIP, $(1-\delta_K)\|\theta_\Lambda\|_2^2 \leq \|y\|_2^2$, so

$$\|\theta_\Lambda\|_2^2 \leq \frac{\|y\|_2^2}{1-\delta_K}. \tag{10}$$

Hence,

$$\|A_{\Lambda_K}^T y\|_2^2 = \|A_{\Lambda_K}^T A_\Lambda \theta_\Lambda\|_2^2 \leq \|A_{\Lambda_K}^T A_\Lambda\|_2^2 \, \|\theta_\Lambda\|_2^2 \leq \frac{(1+\delta_{2K})^2 \|y\|_2^2}{1-\delta_K}. \tag{11}$$

Thus,

$$\|A_{\Lambda_0}^T y\|_2^2 \leq \frac{(1+\delta_{2K})^2}{1-\delta_K}\|y\|_2^2. \tag{12}$$

This proves Proposition 1. Correspondingly, the contrapositive of Proposition 1 can be inferred, which is also true. □
Proposition 2:
Suppose A satisfies the RIP with parameters $(2K, \delta_{2K})$. If the inequality

$$\|A_{\Lambda_0}^T y\|_2^2 > \frac{(1+\delta_{2K})^2}{1-\delta_K}\|y\|_2^2 \tag{13}$$

holds, then $k_0 > K$.
The RIC $\delta_K \in (0, 1)$ increases monotonically with the sparsity level K, i.e., for any two positive integers $K_1$ and $K_2$ ($K_1 < K_2$), one has $\delta_{K_1} \leq \delta_{K_2}$ [20]. Thus, Proposition 2 can be rewritten as follows.
Proposition 3:
Suppose A satisfies the RIP with parameters $(2K, \delta_{2K})$. If the inequality

$$\|A_{\Lambda_0}^T y\|_2^2 > \frac{\|y\|_2^2}{(1-\delta_{2K})^3} \tag{14}$$

holds, then $k_0 > K$.
Proof of Proposition 3:
By the monotonicity of the RIC, since 0 < δ < 1, we obtain

$$\frac{(1+\delta_{2K})^2}{1-\delta_K}\|y\|_2^2 \leq \frac{(1+\delta_{2K})^2}{1-\delta_{2K}}\|y\|_2^2 \leq \frac{\left(1+\delta_{2K}+\delta_{2K}^2+o(\delta_{2K}^2)\right)^2}{1-\delta_{2K}}\|y\|_2^2, \tag{15}$$

based on the Taylor expansion

$$\frac{1}{1-\delta_{2K}} = 1+\delta_{2K}+\delta_{2K}^2+o(\delta_{2K}^2). \tag{16}$$

Thus,

$$\frac{(1+\delta_{2K})^2}{1-\delta_K}\|y\|_2^2 \leq \left(\frac{1}{1-\delta_{2K}}\right)^2 \cdot \frac{1}{1-\delta_{2K}}\|y\|_2^2 = \frac{\|y\|_2^2}{(1-\delta_{2K})^3}. \tag{17}$$

Combined with Proposition 2, we conclude that Proposition 3 is true. □
Y. Tsaig et al. indicated that reconstruction performs better when the sparsity level is around M/4 [27]. To make a preliminary estimation of the sparsity, we preset the initial $k_0$ to M/4 and apply Proposition 3 for discrimination. According to [20,28], a bound on $\delta_{2K}$ is computed as the value of $\delta_{2K}$ and substituted into (14):
(1) If $\|A_{\Lambda_0}^T y\|_2^2 > \|y\|_2^2/(1-\delta_{2K})^3$ holds, then $k_0$ is larger than the actual sparsity level K. Decrease $k_0$ by 1 until inequality (14) no longer holds; this $k_0$ is taken as the estimate of the actual sparsity level K.
(2) If $\|A_{\Lambda_0}^T y\|_2^2 > \|y\|_2^2/(1-\delta_{2K})^3$ does not hold, increase $k_0$ by 1 until inequality (14) holds; this $k_0$ is taken as the estimate of the actual sparsity level K.
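The discrimination procedure above can be sketched in Python as follows. The fixed `delta_2k` value and the test data are illustrative assumptions; the paper obtains a bound on $\delta_{2K}$ from [20,28]:

```python
import numpy as np

def estimate_sparsity(A, y, delta_2k=0.3):
    """Initial sparsity-level estimate (sketch): start from k0 = M/4 and move
    k0 until the Proposition 3 test ||A_{Lambda0}^T y||^2 > ||y||^2/(1-d)^3 flips."""
    M, N = A.shape
    k0 = max(1, M // 4)
    order = np.argsort(np.abs(A.T @ y))[::-1]     # atoms sorted by |correlation| with y
    thr = np.dot(y, y) / (1.0 - delta_2k) ** 3

    def stat(k):
        # ||A_{Lambda0}^T y||^2 for the k most correlated atoms
        return np.linalg.norm(A[:, order[:k]].T @ y) ** 2

    if stat(k0) > thr:
        while k0 > 1 and stat(k0) > thr:          # case (1): k0 too large, shrink
            k0 -= 1
    else:
        while k0 < N and stat(k0) <= thr:         # case (2): k0 too small, grow
            k0 += 1
    return k0
```

The returned $k_0$ then seeds the initial step size, as described next.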
The initial step size can then be set properly based on this preliminary estimate of the sparsity level. After numerous experiments, we suggest that good reconstruction performance is achieved when the initial step size is set to $k_0/2$, where $k_0$ is the estimated initial sparsity level; the experiments below follow this rule. Owing to (14), the initial sparsity level obtained by this approach exceeds the actual level, which means that the sparsity obtained via the variable step size will, with high probability, be slightly larger than the actual level. This problem is well handled by the subsequent filtering mechanism.

3.2. Description of FRSVssMP

Having described the initial estimation approach for the sparsity level, we now sketch FRSVssMP. After the initial sparsity-level estimation establishes the initialization, the algorithm mainly consists of two components: iterations and filtering.
In the iterations part, leveraging the strategies of regularization and sparsity variable step-size adaptation, FRSVssMP iteratively identifies the support set of the sparse signal by correlating the measurement or residual with the columns of the sensing matrix. More specifically, during each iteration, the residuals acquired in contiguous phases are compared and a relative threshold is applied to adjust the step size. Through several rapid approximations with a large step size and precise approximations with a small step size, a superior support set of the sparse signal is attained. The sparse signal is then obtained from the confirmed support set by the least squares method.
In contrast with existing methods, the sparsity variable step-size adaptation in our algorithm considers both the residuals and the relative threshold. Discriminant criteria are designed from the comparison of the old and new residuals. If the relative error computed from the old and new residuals in contiguous phases is larger than the given relative threshold, the cardinality of the estimated support set is increased by a full step size to approximate the true signal sparsity level; the algorithm thus accomplishes a rapid reconstruction by saving time with a large step size. If the relative error is smaller than the given relative threshold, the cardinality of the estimated support set is increased by less than a full step size; in other words, the algorithm performs a precise reconstruction by approximating the true signal sparsity with a small step size.
Furthermore, in the filtering part, the obtained result is filtered to exclude atoms that exert minimal influence on reconstruction until the residual can no longer be diminished. We can thereby further prevent incorrect atoms from degrading performance and acquire a more appropriate sparse signal.
Following the process described above, we present the pseudocode of the proposed FRSVssMP algorithm in detail, using the same symbols as Section 2. Let S be the initial step size, L the estimated sparsity level, $k_0$ the estimated initial sparsity level, $r_t$ the residual, and t the iteration number. ∅ represents the empty set, and $\Lambda_t$ is the set of nonzero (column) indices with L elements at the t-th iteration. $a_j$ denotes the j-th column of A. Let $A_t$ be the set of columns of A selected according to the index set (the number of columns is $L_t$). The regularization function is denoted regularize(u, l), where u is the set of selected atoms and l is the number of candidate atoms in u to be regularized. The function max(v, i) selects the largest i elements of v. The steps of the proposed Algorithm 1 are as follows.
Algorithm 1. FRSVssMP for Compressed Sensing
Input: $A \in \mathbb{R}^{M\times N}$, $y \in \mathbb{R}^{M}$, S
Output: $\hat{\theta}$
1: Initialization: $r_0 = y$, $\Lambda_0 = \varnothing$, $L = S = k_0/2$, $t = 1$
2: Steps are divided into iterations and filtering:
3: (Iterations)
4: while $t \leq M$ do
5:    $u = \mathrm{abs}(A^T r_{t-1})$, $u_L = \mathrm{regularize}(u, L)$
6:    $S_k = \{j \mid u_j \in u_L\}$, $C_k = \Lambda_{t-1} \cup S_k$, $A_t = \{A_j \mid j \in C_k\}$
7:    $\hat{\theta}_t = \arg\min_{\theta_t}\|y - A_t\theta_t\| = (A_t^T A_t)^{-1} A_t^T y$
8:    $\hat{\theta}_t^L = \max(\hat{\theta}_t, L)$, $\Lambda_t^L = \{j \mid \hat{\theta}_t(j) \in \hat{\theta}_t^L\}$
9:    $A_t^L = \{A_j \mid j \in \Lambda_t^L\}$, $F = \Lambda_t^L$
10:   $r_{new} = y - A_t^L \left((A_t^L)^T A_t^L\right)^{-1} (A_t^L)^T y$
11:   if $\|r_{new}\|_2 < \|r_{t-1}\|_2$ then
12:     if ($r_{new} = 0$) or ($\|r_{new}\|_2 \leq \lambda$) then
13:       break
14:     else
15:       $\Lambda_t = F$, $r_t = r_{new}$, $t = t + 1$
16:     end if
17:   else
18:     if $\|r_{new} - r_{t-1}\|_2 / \|r_{t-1}\|_2 \geq \eta$ then
19:       $L = L + S$, $t = t + 1$
20:     else
21:       $L = L + \alpha S$, $t = t + 1$
22:     end if
23:   end if
24: end while
25: $\hat{\theta}_t^F = \hat{\theta}_t$, $\Lambda_F = \{j \mid \hat{\theta}_t(j) \in \hat{\theta}_t^F\}$, $A_F = \{A_j \mid j \in \Lambda_F\}$, $F = \Lambda_F$
26: (Filtering)
27: while $I \geq 0$ do
28:   $r = y - A_F (A_F^T A_F)^{-1} A_F^T y$, $I = \mathrm{card}(F) - 1$
29:   $v = \mathrm{abs}\left((A_F^T A_F)^{-1} A_F^T y\right)$, $v_i = \max(v, I)$
30:   $\Lambda_p = \{j \mid v_j \in v_i\}$, $A_p = \{A_j \mid j \in \Lambda_p\}$, $P = \Lambda_p$
31:   $res = y - A_p (A_p^T A_p)^{-1} A_p^T y$
32:   if $\|r\|_2 \geq \|res\|_2$ then
33:     $F = P$, $r = res$
34:   else
35:     break
36:   end if
37: end while
38: Output: $\hat{\theta} = (A_F^T A_F)^{-1} A_F^T y$
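To make Algorithm 1 concrete, here is a simplified, runnable Python sketch. It is not the authors' reference implementation: the regularize step is approximated by plain top-L correlation selection, and all parameter defaults are illustrative:

```python
import numpy as np

def frsvssmp(A, y, k0, lam=1e-6, eta=0.5, alpha=0.5):
    """Simplified sketch of FRSVssMP: adaptive support growth plus filtering."""
    M, N = A.shape
    S = max(1, k0 // 2)        # initial step size S = k0/2
    L = S                      # current estimated sparsity level
    r = y.copy()
    Lam = np.array([], dtype=int)

    def ls(idx):
        idx = np.asarray(idx, dtype=int)
        coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
        return coef, y - A[:, idx] @ coef

    for _ in range(M):
        u = np.abs(A.T @ r)
        Sk = np.argsort(u)[::-1][:L]                  # preselection (top-L stands in for regularize)
        Ck = np.union1d(Lam, Sk).astype(int)          # merge with the previous support
        coef, _ = ls(Ck)
        F = Ck[np.argsort(np.abs(coef))[::-1][:L]]    # prune back to L atoms
        _, r_new = ls(F)
        if np.linalg.norm(r_new) < np.linalg.norm(r):
            Lam, r = F, r_new
            if np.linalg.norm(r) <= lam:              # iteration-terminating test
                break
        else:
            rel = np.linalg.norm(r_new - r) / np.linalg.norm(r)
            L += S if rel >= eta else max(1, int(alpha * S))   # variable step size

    # Filtering: drop the least-contributing atom while the residual does not grow
    Fs = list(Lam)
    coef, res = ls(Fs)
    while len(Fs) > 1:
        drop = int(np.argmin(np.abs(coef)))
        trial = Fs[:drop] + Fs[drop + 1:]
        coef_t, res_t = ls(trial)
        if np.linalg.norm(res_t) <= np.linalg.norm(res) + 1e-10:
            Fs, coef, res = trial, coef_t, res_t
        else:
            break
    theta = np.zeros(N)
    theta[Fs] = coef
    return theta
```

On an easy noiseless instance (K well below M), this sketch recovers the sparse vector exactly: the iteration loop drives the residual toward zero, and the filtering loop then strips the redundant atoms introduced by the overestimated sparsity level.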
As illustrated above, four parameters need to be considered during the iterations: an initial step size S, an iteration-terminating parameter λ, a step-size transformation parameter η, and a step-size adaptation parameter α. These parameters noticeably influence reconstruction performance.
The initial step size S is set to $k_0/2$ during initialization for a rapid approximation. The iteration-terminating parameter λ is used to judge whether the iterations should stop; its value is usually set to 10−6 in practice. The step-size transformation parameter η is adopted when comparing old and new residuals in contiguous phases to decide how to change the estimated sparsity level. According to the signal characteristics, the step-size adaptation parameter α adjusts the step increment to accomplish an accurate approximation with a small step size. Generally, the value of α is a number between 0.4 and 0.6, depending on the signal.

3.3. Theoretical Analysis and Performance of FRSVssMP

The proposed FRSVssMP first employs the initial sparsity-level estimation approach to initialize the iterations. It then integrates regularization and adaptive variable step-size strategies, and further filters the reconstruction results.
Regularization is employed to ensure the correctness of the identified support set: it implements the initial selection of atoms, so the accuracy of the selected atoms is significantly improved. The fixed step size is replaced by an adaptive variable step size for high efficiency; the two stages of variant step sizes leveraged in FRSVssMP achieve higher reconstruction accuracy with fewer iterations. Furthermore, the proposed filtering mechanism excludes incorrect supports by removing the atoms that contribute least to the reconstruction. Applying filtering iteratively, we can further confirm the signal sparsity level and improve accuracy. In conclusion, with the initial sparsity-level estimation approach and these mechanisms, FRSVssMP achieves superior reconstruction accuracy over other methods with lower computational complexity.

3.4. Applications of FRSVssMP in Vehicle Health Monitoring

FRSVssMP is applicable for scenarios requiring high reconstruction accuracy and efficiency, especially for real-time health monitoring of vehicle status signals.
Reliable operations of vehicle systems are relevant to all aspects of vehicle applications. With the development of technology, high-power devices, nonlinear electrical components and large numerous of new electronics are continuously integrated to vehicles, which brings significant challenges for vehicle health monitoring [29]. Hence, real-time monitoring of vehicle status signals plays an increasingly important role.
Vehicle status signals reflect the reliable operation of vehicle systems. Because there are numerous status signals and sampling runs over long periods, large amounts of data must be sensed, transmitted, and stored [30]. CS can be well exploited for efficient sampling and data compression. Under the proposed real-time health monitoring and analysis scheme for vehicle systems, the compressed status data are transmitted to the monitoring center for reconstruction and information exchange, which requires extremely high reconstruction accuracy and efficiency. Applying FRSVssMP to the reconstruction process is therefore highly significant, as it markedly decreases reconstruction time while maintaining high accuracy.

4. Simulation and Discussion

In this section, a series of experiments is conducted on typical voltage disturbance signals sensed from the vehicle power system in disturbance monitoring. The performance of the proposed algorithm is evaluated and compared with that of state-of-the-art methods.
As an essential subsystem of vehicles, the vehicle power system plays an important role in generating, regulating, storing and distributing electrical energy for the whole vehicle. The reliable operation of this system is the foundation for the successful completion of the vehicle mission. Power quality disturbances widely exist in the vehicle power system, and the voltage impulse is one of the most common types of disturbance. Monitoring of voltage disturbance signals is a primary concern in power quality improvement. When impulse disturbance signals exceed a certain threshold, the performance of the power system inevitably degrades, which seriously affects the reliable operation of all vehicle systems. Therefore, it is necessary to monitor and analyze the voltage disturbance signals in real time. The complete implementation of health monitoring of vehicle status signals with FRSVssMP is depicted in Figure 2.
Reconstruction mainly consists of initial estimation for the sparsity level, iterations and filtering steps, which coincides with the previous description. After acquiring the reconstructed signals, we can conduct further analysis on vehicle operating conditions.
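The filtering step of the pipeline can be sketched as below. The specific criterion (dropping the atoms whose least-squares coefficients contribute least, then re-solving on the reduced support) is an assumption consistent with the description in Section 3.3, not the exact FRSVssMP procedure; the function name is hypothetical.

```python
import numpy as np

def filter_support(A, y, support, n_drop=1):
    """Drop the n_drop atoms with the smallest least-squares coefficient
    magnitudes (i.e., least contribution to the reconstruction) and
    re-fit the signal on the reduced support."""
    support = np.asarray(support)
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    keep = np.argsort(-np.abs(coef))[: len(support) - n_drop]
    kept = np.sort(support[keep])
    coef_new, *_ = np.linalg.lstsq(A[:, kept], y, rcond=None)
    return kept, coef_new
```

On a toy orthonormal dictionary, an atom with a near-zero coefficient is removed and the remaining coefficients are re-estimated exactly.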

4.1. Synthetic Data and Real-World Data

We evaluate the reconstruction performance of the proposed method using synthetic sparse data and real-world sparse data, respectively. The real-world data are non-negative voltage disturbance signals sampled from the vehicle power system, which are very common impulse signals in this system. They are sparse in the time domain, with approximately 10% of the elements nonzero. For convenience of explanation, the values of all non-zero elements are normalized to [0, 1]. The voltage disturbance signals can therefore be represented as one-dimensional vectors.
Given the characteristics of real disturbance signals, we further generate Gaussian sparse data in a similar way. Specifically, suppose the one-dimensional sparse signal contains K nonzero coefficients (K is the sparsity level) at random locations, with each nonzero entry independently drawn from a normalized Gaussian distribution. Synthetic sparse signals serve as an essential supplement to the real sparse signals for verifying the completeness of FRSVssMP. Figure 3a,b depict examples of one-dimensional synthetic signals and original real signals, respectively, where the signal length is set to 128.
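The synthetic data described above can be generated as follows (a sketch; the random seed, the helper name, and the default k are illustrative assumptions — the text fixes only the construction: K nonzero Gaussian coefficients at random locations).

```python
import numpy as np

def synthetic_sparse_signal(n=128, k=13, rng=None):
    """K-sparse signal of length n: k nonzero coefficients at random
    locations, each drawn from a standard (normalized) Gaussian."""
    rng = rng or np.random.default_rng(0)
    x = np.zeros(n)
    support = rng.choice(n, size=k, replace=False)  # random support set
    x[support] = rng.standard_normal(k)
    return x

x = synthetic_sparse_signal()
```

Here k = 13 mirrors the roughly 10% nonzero fraction of the real disturbance signals at length 128.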
In this paper, essential experiments were conducted to illustrate the superior performance of FRSVssMP against state-of-the-art algorithms, including StOMP, CoSaMP, SAMP, AStMP [20] and CBMP [25]. Typical synthetic and real-world voltage disturbance signals from the vehicle power system were selected as trial data. Performance evaluation metrics include reconstruction time and success rate. Reconstruction time T, in seconds, refers to the average time required for reconstruction. Reconstruction success rate is defined as the ratio of the number of successful trials to the total number of trials. To be specific, suppose N is the signal length and M denotes the length of the measurement vector; then the root-mean-square (RMS) error can be expressed as:
RMS = √( ∑_{i=0}^{N−1} (S_i − Ŝ_i)² / ∑_{i=0}^{N−1} S_i² ) × 100%,
where S ∈ R^{N×1} is the original signal and Ŝ ∈ R^{N×1} denotes the reconstructed signal. A successful trial is one with an RMS less than 10−6. In our simulations, each experiment was repeated for 5000 trials, and the sensing matrix A ∈ R^{M×N} was generated as a zero-mean random Gaussian matrix with columns normalized to unit l2 norm. Experiments were implemented in MATLAB R2020a on Windows 10 (Microsoft, Redmond, WA, USA) with an Intel Core i7 processor (3.6 GHz) and 4 GB RAM.
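The evaluation setup translates directly into code. The sketch below (function names are ours) builds the column-normalized Gaussian sensing matrix and computes the RMS metric and success rate exactly as defined above.

```python
import numpy as np

def gaussian_sensing_matrix(m, n, rng=None):
    """Zero-mean random Gaussian sensing matrix with unit l2-norm columns."""
    rng = rng or np.random.default_rng(0)
    a = rng.standard_normal((m, n))
    return a / np.linalg.norm(a, axis=0)

def rms_percent(s, s_hat):
    """Relative root-mean-square reconstruction error, in percent."""
    return float(np.linalg.norm(s - s_hat) / np.linalg.norm(s) * 100.0)

def success_rate(trials):
    """Percentage of (s, s_hat) pairs with RMS below the 1e-6 threshold."""
    ok = sum(1 for s, s_hat in trials if rms_percent(s, s_hat) < 1e-6)
    return 100.0 * ok / len(trials)
```

A perfect reconstruction gives RMS = 0, while reconstructing everything as zero gives RMS = 100%, so the success threshold of 10−6 demands essentially exact recovery.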

4.2. Performance of the Proposed Initial Sparsity Level Estimation Approach

Firstly, the proposed initial sparsity level estimation was evaluated against AStMP [20], which shows state-of-the-art performance in initial sparsity estimation. In this simulation, real signals of length N = 128 were used, and the measurement length M and sparsity level K were fixed at M = 64 and K = 32, respectively.
Initial estimations of the sparsity level in different experiments are shown in Figure 4a. The performance of FRSVssMP and AStMP is compared for different δk. The blue dotted line indicates the actual sparsity level (K = 32). According to Figure 4a, FRSVssMP achieves a superior initial estimation of K over AStMP, as its estimated level is closer to the actual level for the same δk. In addition, by choosing an appropriate δk, the initial estimation of K closely approximates the actual level.
As illustrated in Section 3, with the estimated sparsity level k0, we set the initial step size to k0/2 for the subsequent process. To demonstrate the superiority of this setting, Figure 4b shows the estimated sparsity level obtained by different algorithms during the iterations, with a focus on the selection of the initial step size. The step size adapts across iterations, and finally the sparsity level is accurately approximated. Taking advantage of the initial step size k0/2, FRSVssMP approximates the actual sparsity level more efficiently than SAMP, SASP and AStMP, as the number of iterations required is significantly reduced.

4.3. Reconstruction Performance versus the Sparsity Level

Then, we compared the reconstruction performance of FRSVssMP with that of the existing algorithms versus the sparsity level K. In this simulation, for both synthetic and real sparse signals, the signal length and measurement length were set to N = 256 and M = 128. The parameters S, λ and α for FRSVssMP were set to S = k0/2, λ = 10−6 and α = 0.5 throughout our experiments, where k0 is the estimated initial sparsity level. The empirical results suggest that the step-size transformation parameter η is critical to performance: for synthetic sparse signals, η was set to 0.2, while for real sparse signals we set η = 0.25.

4.3.1. Reconstruction Success Rates

Figure 5a,b depict reconstruction success rates versus the sparsity level K for synthetic and real signals. The horizontal axis represents different K drawn from [5, 60], and the vertical axis shows the corresponding success rates (in percentages).
For a small K, high reconstruction success rates are achieved by FRSVssMP and the competing algorithms. As the sparsity level becomes large, the success rates decrease considerably, and the differences between the algorithms become significant. For both synthetic and real signals, FRSVssMP presents a considerable performance advantage over the greedy algorithms in the first category, such as StOMP and CoSaMP, due to its use of a sparsity-adaptive variable step size. Moreover, FRSVssMP shows a slight superiority over SAMP, AStMP and CBMP on the whole, because it exploits the underlying information of the sparsity level and filters out incorrect atoms. CBMP achieves the second-best performance, followed by AStMP, SAMP, CoSaMP, and StOMP.

4.3.2. Reconstruction Time

Figure 6a,b show reconstruction time versus the sparsity level K for synthetic and real signals. The greedy algorithms in the first category were omitted since they took significantly longer. The horizontal axis denotes different K drawn from [45, 60], and the vertical axis indicates the corresponding reconstruction time (in seconds). The initial step size selected according to the initial estimation approach may not work well in greedy algorithms with a fixed step size (such as SAMP). To make the comparison fair, reconstruction time was evaluated under similar success rates for all algorithms. After a series of experiments, to match the success rates obtained by FRSVssMP, the step size was set to 5 for SAMP, and appropriate parameters were likewise employed for CBMP.
It can be observed that, for both synthetic and real signals, FRSVssMP requires less reconstruction time than SAMP and CBMP, which reveals that FRSVssMP is more efficient at the same sparsity level. This is because FRSVssMP selects a more appropriate initial step size via the initial sparsity level estimation approach and employs a two-stage variable step-size mechanism. Moreover, as the sparsity level grows, the reconstruction time increases; however, FRSVssMP shows better robustness, since its time increase is smaller than that of the other algorithms, which means it works well in complex scenarios.

4.3.3. Performance of FRSVssMP and SAMP versus Variable Step Sizes

Additionally, to illustrate the influence of variable step sizes on reconstruction performance, FRSVssMP was compared with SAMP for different variable step sizes S, i.e., 5 and 10. Figure 7a,b demonstrate success rates versus variable step sizes for synthetic and real signals. The horizontal axis represents the different sparsity levels, and the vertical axis shows the corresponding success rates (in percentages).
We can observe that, as the sparsity level becomes larger, the success rates decrease. For a small K, there is no significant difference among the variable step sizes in terms of reconstruction success rates. However, for a large K, as the variable step size becomes larger, the success rates decrease noticeably. The reason is that a small variable step size approximates the actual sparsity level much more precisely. It can be concluded that, for both types of sparse signals, FRSVssMP achieves better success rates than SAMP with the same step size (5 or 10).

4.4. Reconstruction Performance versus the Measurement Length

In this subsection, we compared the reconstruction performance of FRSVssMP with that of the competing algorithms versus the measurement length M. In this simulation, for both synthetic and real sparse signals, the signal length and sparsity level were set to N = 1024 and K = 20, respectively. Other parameters were set in the same way as in the previous part.

4.4.1. Reconstruction Success Rates

Figure 8a,b illustrate reconstruction success rates versus the measurement length M for synthetic and real signals. The horizontal axis represents different M, and the vertical axis shows the corresponding success rates (in percentages). For better comparison, the measurement lengths for synthetic sparse signals were drawn from [75, 120], while those for real sparse signals were drawn from [90, 140].
As shown in Figure 8, success rates grow considerably as the measurement length increases. Meanwhile, as M becomes larger, the differences between the reconstruction success rates of the algorithms become more and more obvious. For both synthetic and real sparse signals, FRSVssMP clearly outperforms all the greedy algorithms in the first category, especially for a large M. FRSVssMP makes a more accurate estimation of the sparse signals via the initial sparsity level estimation approach, as a result of the increasing information acquired from the measurement vectors. FRSVssMP also performs better for almost all measurement lengths than AStMP and CBMP because of the accurate initial sparsity level estimation along with the filtering mechanism. As the nearest rival to FRSVssMP, CBMP shows superiority over the other algorithms thanks to its two composite strategies, followed by AStMP, SAMP, CoSaMP, and StOMP.

4.4.2. Reconstruction Time

Figure 9a,b depict reconstruction time versus the measurement length M for synthetic and real signals. Similarly, in this experiment, the greedy algorithms in the first category were omitted because they took considerably longer. The horizontal axis represents different M, and the vertical axis shows the corresponding reconstruction time (in seconds). For better comparison, the measurement lengths for synthetic signals were drawn from [75, 95], while those for real signals were drawn from [90, 110]. To ensure success rates similar to those of FRSVssMP, the step size was set to 4 for SAMP, and appropriate parameters were selected for CBMP.
As can be seen from Figure 9, for both synthetic and real sparse signals, the reconstruction time achieved by FRSVssMP is less than that of SAMP and CBMP across the measurement lengths. This implies that FRSVssMP is more efficient than the competing algorithms at the same M, because of the initial sparsity level estimation approach and the sparsity-adaptive variable step-size mechanism. In addition, FRSVssMP shows superior robustness, as the range of its reconstruction time is smaller than those of the competing methods.

4.4.3. Performance of FRSVssMP and SAMP versus Variable Step Sizes

To illustrate the influence of variable step sizes on reconstruction performance, FRSVssMP was compared with SAMP for different M. Figure 10a,b demonstrate success rates versus variable step sizes for synthetic and real sparse signals. The horizontal axis represents different M, and the vertical axis shows the corresponding success rates (in percentages).
We can find that, for different M, there is not much difference in reconstruction success rates between the variable step sizes. As the variable step size grows, the success rates decrease slightly, since a small variable step size approximates the actual sparsity level more easily. It can be concluded that, for both types of sparse signals, FRSVssMP has an advantage over SAMP in terms of success rates with the same step size.

5. Conclusions

With the explosive growth of ubiquitous vehicles, CS has been widely employed in signal sensing and compression in vehicle engineering. This paper firstly introduces an initial estimation approach for the signal sparsity level, then proposes a novel greedy reconstruction algorithm, i.e., FRSVssMP, for real-time vehicle health monitoring. The proposed algorithm first selects an appropriate initial step size via the initial sparsity level estimation approach. It then integrates regularization and variable adaptive step-size strategies, and further performs filtration. Relying on no prior information of the sparsity level, FRSVssMP achieves excellent reconstruction performance. FRSVssMP is applicable to scenarios requiring high reconstruction accuracy and efficiency, especially vehicle health monitoring. Since various faults exert serious impacts on the operation of vehicles and may even lead to traffic accidents, health monitoring for vehicles has extremely high requirements. FRSVssMP markedly improves the reconstruction accuracy and efficiency for status data, which benefits subsequent analysis and diagnosis.
Simulation results demonstrate that, taking typical voltage disturbance signals in vehicle power system disturbance monitoring as trial data, the proposed FRSVssMP outperforms the state-of-the-art greedy algorithms in terms of both reconstruction success rate and time. Furthermore, we demonstrate that the reconstruction performance of greedy algorithms depends strongly on the variable step size. During the iterations, several parameters, i.e., the initial step size, the iteration-terminating parameter, the step-size transformation parameter, and the step-size adaptive parameter, critically influence the reconstruction. How to scientifically select appropriate values for these parameters will be the subject of our future work.

Author Contributions

Conceptualization, H.L.; methodology, H.L.; formal analysis, H.L.; writing—original draft preparation, H.L.; writing—review and editing, H.Z. and W.F.; supervision, H.Z. and W.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was fully supported by the National Natural Science Foundation of China under Grant Numbers 61901015 and 91438116.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare that there is no conflict of interest regarding the publication of this paper.

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  2. Adcock, B.; Hansen, A.C.; Poon, C.; Roman, B. Breaking the coherence barrier: A new theory for compressed sensing. Forum Math. Sigma 2017, 5, 1–84.
  3. Cho, S.; Park, S.; Cha, G.; Oh, T. Development of image processing for crack detection on concrete structures through terrestrial laser scanning associated with the octree structure. Appl. Sci. 2018, 8, 2373.
  4. Sun, T.; Li, J.; Blondel, P. Direct under-sampling compressive sensing method for underwater echo signals and physical implementation. Appl. Sci. 2019, 9, 4596.
  5. Lei, Z.; Yang, P.; Zheng, L.; Xiong, H.; Ding, H. Frequency hopping signals tracking and sorting based on dynamic programming modulated wideband converters. Appl. Sci. 2019, 9, 2906.
  6. Wei, Z.; Zhang, J.; Xu, Z.; Liu, Y. Optimization methods of compressively sensed image reconstruction based on single-pixel imaging. Appl. Sci. 2020, 10, 3288.
  7. Liu, H.; Zhao, H.; Feng, W. Regularized sparsity variable step-size adaptive matching pursuit algorithm for compressed sensing. J. Beijing Univ. Aeronaut. Astronaut. 2017, 43, 2109.
  8. Li, L.; Fang, Y.; Liu, L.; Peng, H.; Kurths, J.; Yang, Y. Overview of compressed sensing: Sensing model, reconstruction algorithm, and its applications. Appl. Sci. 2020, 10, 5909.
  9. Liu, R.; Shu, M.; Chen, C. ECG signal denoising and reconstruction based on basis pursuit. Appl. Sci. 2021, 11, 1591.
  10. Li, X.; Dong, L.; Li, B.; Lei, Y.; Xu, N. Microseismic signal denoising via empirical mode decomposition, compressed sensing, and soft-thresholding. Appl. Sci. 2020, 10, 2191.
  11. Wu, Z.; Zhang, Q.; Cheng, L.; Tan, S. A new method of two-stage planetary gearbox fault detection based on multi-sensor information fusion. Appl. Sci. 2019, 9, 5443.
  12. Li, Y.; Song, B.; Kang, X.; Du, X.; Guizani, M. Vehicle-type detection based on compressed sensing and deep learning in vehicular networks. Sensors 2018, 18, 4500.
  13. Yang, Y.; Nagarajaiah, S. Robust data transmission and recovery of images by compressed sensing for structural health diagnosis. Struct. Control Health Monit. 2017, 24, e1856.
  14. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
  15. Donoho, D.L.; Tsaig, Y.; Drori, I.; Starck, J.L. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory 2012, 58, 1094–1121.
  16. Needell, D.; Vershynin, R. Signal recovery from incomplete and inaccurate measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 2010, 4, 310–316.
  17. Needell, D.; Tropp, J.A. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2008, 26, 301–321.
  18. Han, X.; Zhao, G.; Li, X.; Shu, T.; Yu, W. Sparse signal reconstruction via expanded subspace pursuit. J. Appl. Remote Sens. 2019, 13, 046501.
  19. Do, T.T.; Gan, L.; Nguyen, N.; Tran, T.D. Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In Proceedings of the Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 3–6 November 2009; pp. 581–587.
  20. Fu, Y.; Liu, S.; Ren, C. Adaptive step-size matching pursuit algorithm for practical sparse reconstruction. Circuits Syst. Signal Process. 2017, 36, 2275–2291.
  21. Wang, H.; Du, W.; Xu, L. A new sparse adaptive channel estimation method based on compressive sensing for FBMC/OQAM transmission network. Sensors 2016, 16, 966.
  22. Hu, Y.; Zhao, L. A fuzzy selection compressive sampling matching pursuit algorithm for its practical application. IEEE Access 2019, 7, 144101–144124.
  23. Liao, C.C.; Chen, T.S.; Wu, A.Y. Real-time multi-user detection engine design for IoT applications via modified sparsity adaptive matching pursuit. IEEE Trans. Circuits Syst. I Regul. Pap. 2019, 66, 2987–3000.
  24. Li, D.; Wu, Z. A fast global matching pursuit algorithm for sparse reconstruction by l0 minimization. Signal Image Video Process. 2020, 14, 277–284.
  25. Bi, X.; Leng, L.; Kim, C.; Liu, X.; Du, Y.; Liu, F. Constrained backtracking matching pursuit algorithm for image reconstruction in compressed sensing. Appl. Sci. 2021, 11, 1435.
  26. Eldar, Y.C.; Kutyniok, G. Compressed Sensing: Theory and Applications; Cambridge University Press: Cambridge, UK, 2012.
  27. Tsaig, Y.; Donoho, D.L. Extensions of compressed sensing. Signal Process. 2006, 86, 549–571.
  28. Cai, T.T.; Wang, L.; Xu, G. New bounds for restricted isometry constants. IEEE Trans. Inf. Theory 2010, 56, 4388–4394.
  29. Yoo, A.; Shin, S.; Lee, J.; Moon, C. Implementation of a sensor big data processing system for autonomous vehicles in the C-ITS environment. Appl. Sci. 2020, 10, 7858.
  30. He, Y.; Ma, W.; Ma, Z.; Fu, W.; Chen, C.; Yang, C.F.; Liu, Z. Using unmanned aerial vehicle remote sensing and a monitoring information system to enhance the management of unauthorized structures. Appl. Sci. 2019, 9, 4954.
Figure 1. Comparison of different monitoring schemes for vehicles. (a) The conventional monitoring scheme; (b) Real-time vehicle health monitoring using CS.
Figure 2. The implementation of vehicle health monitoring with FRSVssMP.
Figure 3. Examples of one-dimensional sparse data. (a) Synthetic signals; (b) Real signals.
Figure 4. Performance of the initial sparsity level estimation approach. (a) The initial estimation of sparsity level in different experiments; (b) The estimated sparsity level vs. iterations.
Figure 5. Reconstruction success rates vs. the sparsity level. (a) Synthetic signals; (b) Real signals.
Figure 6. Reconstruction time vs. the sparsity level. (a) Synthetic signals; (b) Real signals.
Figure 7. Reconstruction success rates vs. variable step sizes. (a) Synthetic signals; (b) Real signals.
Figure 8. Reconstruction success rates vs. the measurement length. (a) Synthetic signals; (b) Real signals.
Figure 9. Reconstruction time vs. the measurement length. (a) Synthetic signals; (b) Real signals.
Figure 10. Reconstruction success rates vs. variable step sizes. (a) Synthetic signals; (b) Real signals.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Share and Cite

MDPI and ACS Style

Liu, H.; Zhao, H.; Feng, W. Filtering-Based Regularized Sparsity Variable Step-Size Matching Pursuit and Its Applications in Vehicle Health Monitoring. Appl. Sci. 2021, 11, 4816. https://doi.org/10.3390/app11114816
