Article

A Comparative Study of Computational Methods for Compressed Sensing Reconstruction of EMG Signal

DII—Dipartimento di Ingegneria dell’Informazione, Università Politecnica delle Marche, Via Brecce Bianche 12, I-60131 Ancona, Italy
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2019, 19(16), 3531; https://doi.org/10.3390/s19163531
Submission received: 12 July 2019 / Revised: 1 August 2019 / Accepted: 9 August 2019 / Published: 13 August 2019

Abstract
Wearable devices offer a convenient means to monitor biosignals in real time at relatively low cost, and provide continuous monitoring without causing any discomfort. Among the signals that contain critical information about human body status, the electromyography (EMG) signal is particularly useful for monitoring muscle functionality and activity during sport, fitness, or daily life. In particular, surface electromyography (sEMG) has proven to be a suitable technique in several health monitoring applications, thanks to its non-invasiveness and ease of use. However, recording EMG signals from multiple channels yields a large amount of data that increases the power consumption of wireless transmission, thus reducing the sensor lifetime. Compressed sensing (CS) is a promising data acquisition solution that takes advantage of the signal sparseness in a particular basis to significantly reduce the number of samples needed to reconstruct the signal. As a large variety of algorithms have been developed in recent years within this framework, it is of paramount importance to assess their performance in order to meet the stringent energy constraints imposed in the design of low-power wireless body area networks (WBANs) for sEMG monitoring. The aim of this paper is to present a comprehensive comparative study of computational methods for CS reconstruction of EMG signals, giving some useful guidelines for the design of efficient low-power WBANs. For this purpose, four of the most common reconstruction algorithms used in practical applications have been deeply analyzed and compared both in terms of accuracy and speed, and the sparseness of the signal has been estimated in three different bases. A wide range of experiments is performed on real-world EMG biosignals coming from two different datasets, giving rise to two independent case studies.

1. Introduction

Surface electromyography (sEMG) is a technique to capture and measure the electrical potential at the skin surface due to muscle activity [1,2]. The EMG signal registered in a muscle is the collective action potential of all muscular fibers of the motor unit, which work together since they are stimulated by the same motor neuron. The muscular contraction is generated by a stimulus that propagates from the brain cortex to the target muscle as an electrical potential, named action potential (AP). The sEMG signal is frequently used for the evaluation of muscle functionality and activity, thanks to the non-invasiveness and ease of this technique [3,4,5]. Common applications are fatigue analysis [6] in rehabilitation exercises [5,7], postural control [8], musculoskeletal disorder analysis [9], gait analysis [10], movement recognition [11], gesture recognition [12], and prosthetic control [13,14,15], to cite only a few. Among these applications, monitoring and automatic recognition of human activities are of particular interest both for sport and fitness and for healthcare of elderly and impaired people [16,17]. Wireless body area networks (WBANs) provide an effective and relatively low-cost solution for biosignal monitoring in real time [18,19]. A WBAN typically consists of one or more low-power, miniaturized, lightweight devices with wireless communication capabilities that operate in the proximity of a human body [20]. However, power consumption represents a major problem for the design and widespread adoption of such devices. A large part of the device power consumption is required for the wireless transmission of the signals, which are recorded from multiple channels at a high sampling rate [21]. Standard compression protocols have a high computational complexity, and their implementation in the sensor nodes would add significant overhead to the power consumption.
Compressed sensing (CS) techniques, which rely on the sparsity property of many natural signals, have been successfully applied to long-term WBAN signal monitoring, since CS significantly saves transmit power by reducing the sampling rate [22,23,24,25,26]. Recent studies have applied CS to sEMG signals for gesture recognition, an innovative application field of sEMG signal analysis [27,28]. In this context, CS is of great importance in reducing the size of transmitted sEMG data while still being able to reconstruct good-quality signals and recognize hand movements. The fundamental idea behind CS is that, rather than first sampling at a high rate and then compressing the sampled data, as usually done in standard techniques, we would like to find ways to directly sense the data in compressed form, i.e., at a lower sampling rate. To make this possible, CS relies on the concept of sparsity, which implies that certain classes of signals, sparse signals, when expressed in a proper basis have only a small number of non-zero coordinates. The CS field grew out of the seminal work of Candes, Romberg, Tao and Donoho [29,30,31,32,33,34,35], who showed that a finite-dimensional sparse signal can be recovered from a number of samples much smaller than its length. The CS paradigm combines two fundamental stages, encoding and reconstruction. The reconstruction of a signal acquired with CS represents the most critical and expensive stage, as it involves an optimization that seeks the best solution to an underdetermined set of linear equations with no prior knowledge of the signal except that it is sparse when represented in a proper basis. To obtain the best performance in the reconstruction of the undersampled signal, a large variety of algorithms have been developed in recent years [36].
In the class of computational techniques for solving sparse approximation problems, two approaches are computationally practical and lead to provably exact solutions under well-defined conditions: convex optimization and greedy algorithms. Convex optimization is the original CS reconstruction approach, formulated as a linear programming problem [37]. Unlike convex optimization, greedy algorithms try to solve the reconstruction problem in a less exact manner. In this class, the most common algorithms used in practical applications are orthogonal matching pursuit (OMP) [38,39,40,41,42], compressed sampling matching pursuit (CoSaMP) [43,44], and normalized iterative hard thresholding (NIHT) [45]. All these algorithms are applicable in principle to a generic signal; however, in the design and implementation of a sensor architecture it is of paramount importance to assess the performance with reference to the specific signal to be acquired. Additionally, the performance of the algorithms can vary very widely, so that a comparative study demonstrating the practicability of such algorithms is welcome to designers of low-power WBANs for biosignal monitoring [26].
The aim of this paper is to explore the trade-off in the choice of a compressed sensing algorithm, belonging to the classes of techniques previously described, to be applied in EMG sensor applications. Thus, the ultimate goal of the paper is to present a comparative study of computational methods for CS reconstruction of EMG signals in real-world EMG signal acquisition systems, leading to efficient, low-power WBANs. For example, a useful application of this comparative study can be the selection of the best algorithm to be applied in EMG-based gesture recognition. In addition, the effect of the basis used for reconstruction on signal sparseness has been analyzed for three different bases.
This paper is organized as follows. Section 2 summarizes the basic concepts of CS theory. Section 3 is mainly focused on CS reconstruction algorithms and, in particular, gives a complete description of four algorithms: Convex Optimization, OMP, CoSaMP, and NIHT. Section 4 reports a comparative study of the performance of the four algorithms when applied to real-world EMG signals.

2. Compressed Sensing Background

In this section, we provide an overview of the basic concepts of CS theory. In Table 1, for ease of reference, a list of the main symbols and definitions used throughout the text is reported. Some of these are currently adopted in the literature, while other specific operators will be defined later.
CS theory asserts that, rather than acquiring the entire signal and then compressing it, as is usually done in standard compression techniques, it is possible to capture only the useful information at rates smaller than the Nyquist sampling rate.
The CS paradigm combines two fundamental stages, encoding and reconstruction.
In the encoding stage the N-dimensional input signal f is encoded into an M-dimensional set of measurements y through a linear transformation by the M × N measurement matrix Φ, where y = Φf. In this way, with M < N, the CS data acquisition system directly translates analog data into a compressed digital form.
In the reconstruction stage, given by f = Ψx, the signal f to be recovered is assumed to be sparse in some basis Ψ = [Ψ_1, …, Ψ_N], in the sense that all but a few coefficients x_i are zero, and the sparsest solution x (fewest significant non-zero x_i) is sought. The reconstruction algorithms exhibit better performance when the signal to be reconstructed is exactly k-sparse in the basis Ψ, i.e., with x_i ≠ 0 for i ∈ Λ, |Λ| = k. Thus, in some algorithms the N − k elements of x that give negligible contributions are discarded. To this end the following operator is defined

\Lambda = \mathrm{supp}_{k,\Psi}(x) : \; |\Lambda| = k, \;\; \gamma_i > \gamma_j, \;\; \gamma_i = |x_i| \, \|\Psi_i\|_2, \quad \text{for } i \in \Lambda, \; j \notin \Lambda,

which selects the set Λ of the k indexes corresponding to the largest values of |x_i| ‖Ψ_i‖_2. The set Λ so derived represents the so-called set of sparsity. Another useful definition in this context is the operator F(x, Λ), which returns a vector with the same elements of x in the sub-set Λ and zero elsewhere, formally

F(x,\Lambda)\big|_{\Lambda} = x\big|_{\Lambda}, \quad F(x,\Lambda)\big|_{I_N \setminus \Lambda} = 0, \quad I_N = \{1, 2, \ldots, N\},

where I_N ∖ Λ denotes the difference of the two sets I_N and Λ. The consecutive application of the two operators gives rise to a k-sparse vector obtained from x by keeping only the components with the largest values of |x_i| ‖Ψ_i‖_2; it will be synthetically denoted by [x]_k and called the reduced operator. Thus

[x]_k = F(x, \mathrm{supp}_{k,\Psi}(x)).
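As a minimal illustration, the two operators can be sketched in NumPy (function names and the test values are ours, not from the paper):

```python
import numpy as np

def reduced_operator(x, Psi, k):
    """Reduced operator [x]_k: keep the k entries of x with the largest
    gamma_i = |x_i| * ||Psi_i||_2 (the set of sparsity) and zero the rest."""
    gamma = np.abs(x) * np.linalg.norm(Psi, axis=0)  # column norms of the basis
    support = np.argsort(gamma)[-k:]                 # Lambda = supp_{k,Psi}(x)
    xk = np.zeros_like(x)
    xk[support] = x[support]                         # F(x, Lambda)
    return xk, support
```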
A natural formulation of the recovery problem is within an l_0-norm minimization framework, which seeks a solution x of the problem

\min_{x \in \mathbb{R}^N} \|x\|_0 \quad \text{subject to} \quad y = \Phi \Psi x,

where ‖·‖_0 is a counting function that returns the number of non-zero components of its argument. Unfortunately, the l_0-minimization problem is NP-hard, and hence cannot be used in practical applications. A way to avoid this computationally intractable formulation is to consider an l_1-minimization problem.
It has been shown [33] that when x is the solution to the convex approximation problem

\min_{x \in \mathbb{R}^N} \|x\|_1 \quad \text{subject to} \quad y = \Phi \Psi x

then the reconstruction f = Ψx is exact. More specifically, only M measurements in the Φ domain, selected uniformly at random, are needed to reconstruct the signal, provided M satisfies the inequality

M \geq C \, \nu^2(\Phi, \Psi) \, S \log N,

where N represents the signal size, S the index of sparseness, C a constant, and ν(Φ, Ψ) the coherence between the sensing basis Φ and the representation basis Ψ. Coherence measures the largest correlation between any two elements of Φ and Ψ and is given by ν(Φ, Ψ) = max_{k,j} |⟨Φ_k, Ψ_j⟩|, with ν²(Φ, Ψ) ∈ [1, N]. Random matrices are largely incoherent (ν = 1) with any fixed basis Ψ. Therefore, since the smaller the coherence the fewer samples are needed, random matrices are the best choice for the sensing basis.
The usually adopted performance metric to measure the reduction in the data required to represent the signal f is the compression ratio CR, defined as

CR = \frac{N}{M},

that is, the ratio between the lengths of the original and compressed signal vectors. Sparsity, instead, is usually defined as

S_N = \frac{k}{N}.

Sometimes it is more convenient to define sparsity with respect to the dimension M, thus giving S_M = k/M. Obviously, the two quantities are related by S_M = CR · S_N.

3. The Algorithms

As the CS sampling framework includes two main activities, encoding and reconstruction, some specific algorithms must be derived for this purpose.

3.1. Encoding

The CS encoder uses a linear transformation to project the vector f onto the lower-dimensional vector y, through the measurement matrix Φ. In addition to being incoherent with respect to the basis Ψ, the measurement matrix Φ must allow a practical encoder implementation. One widely used approach is to use Bernoulli random matrices, Φ(i,j) = ±1. This choice avoids multiplications in the matrix-vector product y = Φf. Moreover, simple, fast and low-power digital and analog hardware implementations of the encoder are possible [26].
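A minimal NumPy sketch of this encoding stage (function names and the fixed seed are our illustrative assumptions) might be:

```python
import numpy as np

def bernoulli_matrix(m, n, seed=0):
    """Random +/-1 Bernoulli measurement matrix Phi; hardware-friendly
    because the products in y = Phi f reduce to signed additions."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(m, n))

def cs_encode(f, Phi):
    """Encode an N-sample frame f into M < N measurements y = Phi f."""
    return Phi @ f
```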

3.2. Basis Matrix Ψ

A wide range of basis matrices Ψ can be adopted in Equation (4); three of the most familiar bases will be used in this paper, namely DCT, Haar, and Daubechies' wavelet (DBW). Although the DCT does not seem to provide an adequately sparse representation of the EMG signal, it was used in one of the two case studies because of the signal pre-filtering performed during acquisition, as will be explained in Section 4. Additionally, other recent works [46,47] have demonstrated the validity of the DCT basis for CS applied to the EMG signal. The matrix Ψ for Haar and DB4 was built using a parallel filter bank wavelet implementation [48].
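Since the filter-bank construction of [48] is not reproduced here, the following sketch builds the orthonormal Haar basis by the classic Kronecker recursion instead; it is only meant to show what a valid Ψ for Equation (4) looks like, not the construction actually used by the authors:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar basis of size n x n (n must be a power of two),
    built by the recursion H_{2m} = (1/sqrt(2)) [H_m kron [1,1]; I_m kron [1,-1]]."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    h = np.array([[1.0]])
    while h.shape[0] < n:
        top = np.kron(h, [1.0, 1.0])                    # scaling (average) rows
        bot = np.kron(np.eye(h.shape[0]), [1.0, -1.0])  # detail (difference) rows
        h = np.vstack([top, bot]) / np.sqrt(2.0)
    return h.T  # columns are the basis vectors Psi_i
```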

3.3. Reconstruction

CS reconstruction algorithms can be divided into two classes: convex optimization and greedy algorithms.

3.3.1. Convex Optimization

L1-minimization

The CS theory asserts that when f is sufficiently sparse, the recovery via l 1 -minimization is provably exact. Thus, a fundamental algorithm for reconstruction is the convex optimization wherein the l 1 -norm of the reconstructed signal is an efficient measure of sparsity. The CS reconstruction process is described by Equation (5) which can be regarded as a linear programming problem. This approach is also known as basis pursuit.
By assuming f^{(k)} = [f((k−1)N + 1), …, f(kN)], k = 1, …, L, is a frame of the EMG signal to be reconstructed, Ψ = [Ψ_1, …, Ψ_N] an N × N basis matrix, and Φ an M × N Bernoulli matrix, the constraint in Equation (5) can be rewritten as

y = \Phi \Psi x = A x

with A = ΦΨ. Introducing the Lagrange function

L(x, \lambda) = \|x\|_1 + \lambda^T (A x - y),

where T denotes transposition, solving problem (5) is equivalent to determining the stationary point of L(x, λ) with respect to both x and λ. A usual technique for this problem is the projected-gradient algorithm, based on the following iterative scheme

x_{t+1} = x_t - \mu \left. \frac{\partial L}{\partial x} \right|_{x_t},

where μ is a parameter that regulates the convergence of the algorithm. By differentiating (10) we obtain

\frac{\partial L}{\partial x} = \mathrm{sgn}(x) + A^T \lambda;

then, combining Equations (11) and (12) with the constraint A x_t = y and assuming A A^T is invertible, it results in

\lambda = -(A A^T)^{-1} A \, \mathrm{sgn}(x_t).

Finally, the following iterative solution

x_{t+1} = x_t - \mu \left( I - A^T (A A^T)^{-1} A \right) \mathrm{sgn}(x_t)

is obtained. To make the convergence parameter μ independent of the signal power, the following normalized version of the algorithm can be adopted

x_{t+1} = x_t - \mu \frac{\|x_t\|_1}{N} P \, \mathrm{sgn}(x_t)

with P = I − A^{+} A and A^{+} = A^T (A A^T)^{-1}. To initialize the algorithm, the vector x_0 given by

x_0 = A^{+} y,

which solves the following l_2-minimization problem

x_0 = \arg\min_{x} \|x\|_2^2 \quad \text{s.t.} \quad A x = y,

has been chosen. The parameter μ determines the convergence of the algorithm; thus, to establish a proper choice of its value, a convergence criterion should be derived. However, a complete treatment of convergence is a difficult task and is out of the scope of this paper. To face this problem, the value of μ has been chosen using a semi-heuristic criterion that bounds the steady-state ripple, given by

\frac{\|x_{t+1} - x_t\|_2}{\|x_t\|_2} < \epsilon_{max}, \quad \forall t,

where ε_max specifies the desired accuracy. In such a way we obtain

\mu \leq \frac{\epsilon_{max} \, N \, \|x\|_2}{\|P \, \mathrm{sgn}(x)\|_2 \, \|x_0\|_1} < \frac{\epsilon_{max} \, N}{\|P \, \mathrm{sgn}(x)\|_2},

which can be reduced to the more practical condition

\mu \leq \frac{\epsilon_{max} \, N}{\|P \, \mathrm{sgn}(x_0)\|_2}.
An optimized version of the algorithm with a reduced number of products can be derived as follows. Let us rewrite Equation (15) as

x_{t+1} = x_t - \mu \frac{\|x_t\|_1}{N} q_t

where

q_t = P s_t = \sum_{j=1}^{N} P_j \, s_t(j)

and

s_t = [s_t(1), \ldots, s_t(N)] = \mathrm{sgn}(x_t).

The variation of q from (t − 1) to t,

q_t - q_{t-1} = \sum_{j=1}^{N} P_j \, [s_t(j) - s_{t-1}(j)],

only depends on

\Delta s_t(j) = s_t(j) - s_{t-1}(j) = \begin{cases} 2 & \text{for } s_{t-1}(j) = -1, \; s_t(j) = 1 \\ -2 & \text{for } s_{t-1}(j) = 1, \; s_t(j) = -1 \\ 0 & \text{for } s_{t-1}(j) = s_t(j) \end{cases} \quad j = 1, \ldots, N,

which can be rewritten in compact form as

\Delta s_t(j) = 2 \, s_t(j) \, v_t(j), \quad j = 1, \ldots, N,

where

v_t(j) = \begin{cases} 1 & \text{for } s_t(j) \neq s_{t-1}(j) \\ 0 & \text{for } s_t(j) = s_{t-1}(j) \end{cases}.

By defining

w_t(j) = (s_t(j) + 1)/2 \in \{0, 1\},

Equation (27) is equivalent to

v_t(j) = w_t(j) \oplus w_{t-1}(j),

where ⊕ denotes the exclusive OR. Finally, from Equations (24), (26) and (29), and defining the set Ω_t = { j : v_t(j) = 1 }, we have

q_t = q_{t-1} + 2 \sum_{j \in \Omega_t} P_j \, s_t(j).

Thus, the summation in Equation (30) is extended only to the terms for which a sign change from s_{t−1} to s_t occurs, thereby reducing the number of products required at each step.
A pseudo-code of the L 1 algorithm is reported as Algorithm 1.
Algorithm 1 L1-minimization
 Input: A = ΦΨ, y, k
 Initialize: P = [p_1 ⋯ p_N] = I − A⁺A, x_0 = A⁺y, t = 0
 Output: k-sparse coefficient vector x
while t < N_iter do
   s_t = sgn(x_t), w_t = (s_t + 1)/2  // reduce s_t to the binary vector w_t
  if t > 0 then
     v_t = w_t ⊕ w_{t−1}
     Ω_t = { j : v_t(j) = 1 }  // indices where the sign changed from s_{t−1} to s_t
     q_t = q_{t−1} + 2 Σ_{j∈Ω_t} p_j s_t(j)
  else
     q_t = P s_0
     μ = ε_max N / ‖q_0‖_2
  end if
   x_{t+1} = x_t − μ (‖x_t‖_1 / N) q_t
   t = t + 1
end while
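A compact NumPy sketch of the non-optimized iteration may help fix ideas; the iteration count and the value of ε_max below are illustrative choices of ours, not the authors' settings:

```python
import numpy as np

def l1_projected_gradient(A, y, n_iter=300, eps_max=1e-2):
    """Sketch of the normalized projected-gradient L1 iteration of Algorithm 1.
    Each step moves x along -P sgn(x), which decreases ||x||_1 while keeping
    the constraint A x = y satisfied (since A P = 0)."""
    n = A.shape[1]
    A_pinv = np.linalg.pinv(A)       # A^+ = A^T (A A^T)^{-1} for full row rank A
    P = np.eye(n) - A_pinv @ A       # projector onto the null space of A
    x = A_pinv @ y                   # x_0: minimum-l2 feasible solution
    mu = eps_max * n / np.linalg.norm(P @ np.sign(x))  # practical step bound
    for _ in range(n_iter):
        x = x - mu * (np.linalg.norm(x, 1) / n) * (P @ np.sign(x))
    return x
```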

3.3.2. Greedy Algorithms

Orthogonal Matching Pursuit (OMP)

This algorithm solves for the k-sparse coefficient vector x, i.e., with x_i ≠ 0 for i ∈ Λ, |Λ| = k. The algorithm tries to find the k directions corresponding to the non-zero x-components, starting from the residual r_0 given by the measurements y. At each step t ≤ k the column a_j of A that is most strongly correlated with the residual is selected. Then the best coefficients x_t are found by solving the following l_2-minimization problem

x_t = \arg\min_{x} \|y - A_t x\|_2,

thus giving

x_t = A_t^{+} y.

Finally, the residual, i.e., the difference between the actual measurements and their current approximation A_t x_t, is updated. A mathematical description of the algorithm is reported in Algorithm 2.
Algorithm 2 OMP
 Input: A = ΦΨ, y, k
 Initialize: r_0 = y // residual
 A_0 = ∅ // set of selected columns
 t = 1
 Output: k-sparse coefficient vector x
while t ≤ k do
   λ_t = arg max_j |a_j^T r_{t−1}|  // find the column of A that is most strongly correlated with the residual
   A_t = [A_{t−1} a_{λ_t}]  // merge the new column
   x_t = A_t⁺ y  // find the best coefficients x_t from (31)
   r_t = y − A_t x_t  // update the residual
   t = t + 1
end while
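For concreteness, OMP can be sketched in a few lines of NumPy (the least-squares fit plays the role of the pseudo-inverse A_t⁺; names are ours):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit sketch (Algorithm 2): greedily pick the
    column most correlated with the residual, then re-fit all selected
    coefficients by least squares and update the residual."""
    n = A.shape[1]
    x = np.zeros(n)
    residual = y.astype(float)
    support = []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # most correlated column
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A[:, support] @ coef
    return x
```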

Compressive Sampling Matching Pursuit (CoSaMP)

Unlike OMP, the CoSaMP algorithm tries to find the k_S columns of A that are most strongly correlated with the residual, thus making a correction of the reconstruction on the basis of the residual achieved at each step. The k_S columns are determined by the selection step

W_t = \arg\max_{|W| = k_S} \|A_W^T r_t\|_1

where A_W is the sub-matrix of A made by the columns indexed by the set W, and r_t is the residual at the current iteration step t. The algorithm then proceeds to estimate the best coefficients h for approximating the residual with the new columns indexed by T_t = Λ_t ∪ W_t. As this step corresponds to a least-squares problem, it results in

h = A_{T_t}^{+} y.

Finally, the sparsity operator

x = F(h, \Lambda_{t+1})

with Λ_{t+1} = supp_{k,Ψ}(h) is applied to obtain the sparse vector x. At the end of each iteration the residual is updated with the new signal reconstruction A_{Λ_{t+1}} x_{Λ_{t+1}}. A pseudo-code of the algorithm is reported in Algorithm 3.

Normalized Iterative Hard Thresholding (NIHT)

The basic idea that underlies the NIHT algorithm is that the sparse components to be identified give a large contribution to the gradient of the residual. The algorithm tries to find these components by following the gradient of the residual r_t = A x_t − y, i.e.,

\tilde{x}_{t+1} = x_t - \mu_t \frac{\partial \|r_t\|_2^2}{\partial x_t},

thus obtaining

\tilde{x}_{t+1} = x_t - \mu_t A^T (A x_t - y).

The sparse vector x_{t+1} is derived at each iteration t by applying the reduced operator to the estimated vector x̃_{t+1},

\Lambda_{t+1} = \mathrm{supp}_{k,\Psi}(\tilde{x}_{t+1}),

x_{t+1} = F(\tilde{x}_{t+1}, \Lambda_{t+1}).
Algorithm 3 CoSaMP
 Input: A = ΦΨ, y, k
 Initialize: r_0 = y // residual
 t = 0
 Λ_0 = arg max_{|Λ|=k} ‖A_Λ^T r_0‖_1  // find the k columns of A that are most strongly correlated with residual r_0
 Output: k-sparse coefficient vector x
while t < N_iter do
   k_S = γk, γ ∈ [0, 1]  // number of new columns to be selected
   W_t = arg max_{|W|=k_S} ‖A_W^T r_t‖_1  // find the k_S columns of A that are most strongly correlated with residual r_t
   T_t = Λ_t ∪ W_t  // merge the new columns such that |T_t| = k + k_S
   h = A_{T_t}⁺ y  // find the best coefficients for residual approximation
   Λ_{t+1} = supp_{k,Ψ}(h)  // find the set of sparsity Λ_{t+1}
   x = F(h, Λ_{t+1})  // find the sparse vector x
   r_{t+1} = y − A_{Λ_{t+1}} x_{Λ_{t+1}}  // update the residual
   t = t + 1
end while
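A NumPy sketch of this CoSaMP variant follows; it assumes an orthonormal Ψ (so that supp_{k,Ψ} reduces to picking the k largest |h_i|), and γ, the iteration count, and all names are illustrative:

```python
import numpy as np

def cosamp(A, y, k, gamma=1.0, n_iter=20):
    """CoSaMP sketch following Algorithm 3: merge k_S = gamma*k new columns
    with the current support, least-squares fit on the merged set, then
    prune back to the k largest coefficients."""
    n = A.shape[1]
    k_s = max(1, int(gamma * k))
    support = set(np.argsort(np.abs(A.T @ y))[-k:].tolist())   # Lambda_0
    x = np.zeros(n)
    residual = y.astype(float)
    for _ in range(n_iter):
        new_cols = set(np.argsort(np.abs(A.T @ residual))[-k_s:].tolist())  # W_t
        merged = sorted(support | new_cols)                     # T_t
        coef, *_ = np.linalg.lstsq(A[:, merged], y, rcond=None)
        h = np.zeros(n)
        h[merged] = coef
        support = set(np.argsort(np.abs(h))[-k:].tolist())      # prune to k
        x = np.zeros(n)
        idx = sorted(support)
        x[idx] = h[idx]
        residual = y - A @ x                                    # update residual
    return x
```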
As in CoSaMP, the initialization is made by choosing the k columns of A that are most strongly correlated with the residual,

\Lambda_0 = \arg\max_{|\Lambda| = k} \|A_\Lambda^T y\|_1,

and then estimating the best coefficients

x_0 = A_{\Lambda_0}^{+} y

for residual approximation. A different step size has been used for each component of x_t by defining the step-size vector ρ as

\rho = \min_j \|a_j\|_2 \left[ \frac{1}{\|a_1\|_2}, \ldots, \frac{1}{\|a_N\|_2} \right],

thus normalizing the components of the gradient vector A^T(A x_t − y). In this way the update equation for x becomes

\tilde{x}_{t+1} = x_t - \mu_t q_t

where q_t = ρ ⊙ A^T(A x_t − y) is the normalized gradient vector and ⊙ denotes the element-wise product of vectors. The value of μ_t has been estimated by minimizing the residual, i.e., such that

\frac{\partial}{\partial \mu_t} \|A x_{t+1} - y\|_2^2 = 0

or

\frac{\partial}{\partial \mu_t} \|A (x_t - \mu_t q_t) - y\|_2^2 = 0.

A closed form of μ_t cannot be derived, as it depends on the set Λ_{t+1} selected after the update of x̃_{t+1}. To circumvent this problem an iterative approach has been used, starting from an initial estimate Λ̃_{t+1} of Λ_{t+1} to compute μ_t(Λ̃_{t+1}), and then updating Λ̃_{t+1} to the true value. In this way, from Equation (45) we obtain

\mu_t = \frac{q_{\Lambda_{t+1}}^T A^T (A x_{\Lambda_{t+1}} - y)}{q_{\Lambda_{t+1}}^T A^T A \, q_{\Lambda_{t+1}}} = \frac{w^T \epsilon}{w^T w}

where

w = A \, q_{\Lambda_{t+1}}, \quad \epsilon = A \, x_{\Lambda_{t+1}} - y, \quad \Lambda_{t+1} = \mathrm{supp}_{k,\Psi}(x_t - \mu_t q_t).
Algorithm 4 NIHT

 Input: A = ΦΨ, y, k
 Initialize:
 Λ_0 = arg max_{|Λ|=k} ‖A_Λ^T y‖_1  // find the k columns of A that are most strongly correlated with the residual
 x_0 = A_{Λ_0}⁺ y  // find the best coefficients for residual approximation
 t = 0
 Output: k-sparse coefficient vector x
while t < N_iter do
   r_t = A x_t − y  // update the residual
   ρ = min_j ‖a_j‖_2 [1/‖a_1‖_2, …, 1/‖a_N‖_2]  // step-size vector
   q_t = ρ ⊙ (A^T r_t)  // normalized gradient vector
   Λ̃_{t+1} = Λ_t  // initialize the estimate of the set of sparsity Λ_{t+1}
  if t > 0 then
   while (stop criterion on Λ̃_{t+1}) do
     x̃_{t+1} = x_t − μ_t(Λ̃_{t+1}) q_t  // update x_t with the step size μ_t given by (46)
     Λ̃_{t+1} = supp_{k,Ψ}(x̃_{t+1})  // update the set of sparsity
     x_{t+1} = F(x̃_{t+1}, Λ̃_{t+1})  // find the sparse vector x_{t+1}
   end while
  else
     x̃_{t+1} = x_t − μ_t(I_N) q_t
     x_{t+1} = F(x̃_{t+1}, supp_{k,Ψ}(x̃_{t+1}))  // find the sparse vector x_{t+1}
  end if
   t = t + 1
end while
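The following simplified NumPy sketch keeps the core of NIHT, an adaptive-step gradient move followed by hard thresholding, while omitting the column normalization ρ and the inner support-refinement loop of Algorithm 4; it assumes an orthonormal Ψ:

```python
import numpy as np

def niht(A, y, k, n_iter=100):
    """Simplified NIHT sketch: gradient step on the residual with a step size
    computed on the current support, then hard thresholding to the k largest
    entries (a plain top-k, i.e., orthonormal basis assumed)."""
    n = A.shape[1]
    x = np.zeros(n)
    support = np.argsort(np.abs(A.T @ y))[-k:]        # Lambda_0
    for _ in range(n_iter):
        g = A.T @ (y - A @ x)                         # negative gradient of ||r||^2
        g_s = np.zeros(n)
        g_s[support] = g[support]                     # gradient restricted to support
        denom = np.linalg.norm(A @ g_s) ** 2
        mu = np.dot(g_s, g_s) / denom if denom > 0 else 0.0
        x = x + mu * g                                # gradient step
        support = np.argsort(np.abs(x))[-k:]          # new set of sparsity
        x_k = np.zeros(n)
        x_k[support] = x[support]
        x = x_k                                       # hard threshold to k-sparse
    return x
```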

4. Comparative Study

To quantify the performance of the CS algorithms previously described, a comparative study has been conducted on two different sets of EMG signals, giving rise to case study A and case study B.
A similar study of CS applied to the EMG signal was performed in [49]. In that work sparsity is enforced on the signal with a time-domain thresholding technique, and the reconstruction SNR is measured with respect to the sparsified signal. In this work, to obtain an estimate of the overall information loss, after enforcing sparsity with the reduced operator [x]_k for each basis, we measured the SNR with respect to the original signal x.

4.1. Case Study A

The signals used in this study refer to three different muscles, namely biceps brachii, deltoideus medius, and triceps brachii. They were recorded with the sEMG acquisition set-up described in [16], following the protocol outlined in [50,51]. The EMG signal was high-pass filtered at 5 Hz and low-pass filtered at 500 Hz before being sampled at 2 kHz. The algorithms were applied to frames of length N = 1024, which is large enough to limit SNR variations among frames. In the simulations the index k and the compression factor CF = M/N, i.e., the inverse of CR, were varied. The performance has been measured through the following equivalent signal-to-noise ratio

SNR = 20 \log_{10} \frac{\|y\|_2}{\|y - y_{rec}\|_2},

where y_rec is the reconstructed signal, by averaging the results obtained over different frames.
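This metric translates directly into code; a one-line NumPy version (ours) is:

```python
import numpy as np

def reconstruction_snr(y, y_rec):
    """Equivalent SNR in dB of a reconstructed frame y_rec against the frame y."""
    return 20.0 * np.log10(np.linalg.norm(y) / np.linalg.norm(y - y_rec))
```

For instance, a uniform error of 1% of the signal amplitude corresponds to an SNR of 40 dB.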

4.1.1. Basis Selection

Figure 1 compares, for the three muscles, the reconstruction error in three frames extracted from the data set, as achieved with convex optimization using three different bases: DCT, Haar, and DB4. Since the signal was pre-filtered at 500 Hz, which can improve sparsity in the frequency domain, the DCT was worth testing.
Figures 2 and 3 report the SNR as a function of frame, sparsity, and iteration for the same muscles as in Figure 1. The DB4 basis clearly shows the best reconstruction performance in all the conditions considered in these figures.

4.1.2. Comparison of Algorithms Performance

As the ultimate goal of this paper is to study and compare CS methods for the reconstruction of EMG signals, extensive experimentation has been carried out with the algorithms previously described.
Figures 4 through 8 report the performance achieved with the four algorithms L1, OMP, CoSaMP, and NIHT under different experimental conditions. In particular, the behavior of the SNR as a function of sparsity S_M = k/M for the four algorithms and the three bases is shown in Figure 4, where a constant compression factor CF = 0.5 is used. Here the sparsity S_M with respect to the dimension M has been adopted, since for k > M the behavior is not of particular significance. These results make evident the superiority of DB4 over the other bases, as already pointed out in the previous figures. Concerning algorithm performance, all the algorithms show a pronounced peak near k/M = 0.4–0.5. This behavior is due to the fact that the SNR is measured with respect to the original signal, and as k/M decreases the fidelity between x and [x]_k deteriorates. Moreover, while for OMP, CoSaMP, and NIHT the SNR falls rapidly as k/M increases, for the L1 algorithm it remains nearly constant beyond the maximum.
Figure 5 reports the SNR as a function of sparsity S_M = k/M for different values of CF. In these cases, L1 and OMP give the best performance, showing similar behavior. Figure 6 depicts the SNR as a function of the compression factor CF for different values of S_M. Also in this case, L1 and OMP show the best performance.

4.1.3. Noise Tolerance

Real-data CS acquisition systems are inherently noisy; therefore, to simulate a more realistic situation, some experiments have been conducted with noise superimposed on the signal. The effect of a noisy signal y on the CS reconstruction corresponds to an error x_e in the sparse solution x, given in this case by

x = x_{NF} + x_e

where x_NF denotes the noise-free solution. The error term x_e can be particularized for the four algorithms as follows:

x_{e,L1} = A^{+} n, \quad x_{e,OMP} = A_{\Lambda_k}^{+} n, \quad x_{e,CoSaMP} = A_{T}^{+} n, \quad x_{e,NIHT} = A_{\Lambda_0}^{+} n,

where n is the noise superimposed on y. It is straightforward to show that the inequality

\frac{\|f - \Psi x\|_2}{\|f\|_2} \geq \left| \frac{\|f - \Psi x_{NF}\|_2}{\|f\|_2} - \frac{\|\Psi x_e\|_2}{\|f\|_2} \right|

holds, thus giving the relationship

SNR \simeq \frac{SNR_{NF} \, SNR_{noise}}{SNR_{NF} + SNR_{noise}},

where SNR_NF and SNR_noise = ‖f‖_2^2 / ‖Ψ x_e‖_2^2 refer to the x_NF and x_e components, respectively. For high noise levels the SNR degenerates to SNR_noise, thus worsening the noise-free performance. It is worth noticing that for L1 the SNR_noise is independent of k/M, as results from Equation (50) and the definition of SNR_noise. This implies that the noise contribution to the reconstruction does not depend on k, so that the SNR results almost constant with k/M. For the other algorithms x_e increases with k/M, so that a maximum of the SNR is expected. Figure 7 reports the SNR as a function of sparsity S_M for three values of superimposed noise, while Figure 8 is the noisy version of Figure 6, in which a value of SNR = 25 dB for the measurement signal y is used. The experimental results confirm the considerations stated above for L1, which shows the worst behavior when the measurement SNR decreases. As for OMP, CoSaMP, and NIHT, their performance is almost independent of k/M for low values of it, while it suddenly worsens when k/M exceeds a critical value of about 0.5. Finally, Figure 9 reports the computational cost and the execution time in MATLAB as functions of sparsity S_M = k/M. The execution time was computed using the MATLAB tic-toc functions. These figures clearly show that L1-minimization outperforms the other algorithms.

4.2. Case Study B

The EMG signals used in this case study come from PhysioBank [52], a large and growing archive of well-characterized digital recordings of physiological signals and related data for use by biomedical research community. In particular, the data come from the `Neuroelectric and Myoelectric Databases’ of PhysioBank archives. A class of this database, named `Examples of Electromyograms’ [53], has been used; it contains short EMG recordings from three subjects (one without neuromuscular disease, one with myopathy, one with neuropathy). The signals are sampled at a frequency of 4 kHz and the frame has a length N = 1024 , the same as case study A. As the signal from this dataset was not low-pass filtered, it contains all typical EMG frequency components, therefore this time we discarded DCT and Haar, using only DB4 basis. We chose to add this case study to analyse performances when signal has the lowest sparsity as possible which is the worst scenario for the reconstruction performance.
Figure 10 reports the execution time as a function of sparsity S M for the four algorithms. Figure 11 compares the SNR as a function of sparsity S M for three values of C F , as obtained with the four algorithms. As shown in these figures, the obtained results have a similar behaviour of those achieved in case study A.
Finally, based on the experimental results reported above, a qualitative assessment of the four reconstruction algorithms can be derived that captures the trade-offs involved in choosing a CS reconstruction algorithm for EMG sensor applications. To this end, Table 2 summarizes the performance of the four reconstruction algorithms in terms of accuracy, noise tolerance, and speed.
The L1-minimization algorithm performs excellently in terms of accuracy, noise tolerance, and speed, thus outperforming the other algorithms. Among the remaining three, CoSaMP shows the best trade-off between accuracy and speed.
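As a concrete reference for the greedy family compared above, here is a minimal textbook-style OMP sketch in Python/NumPy (an illustration under simplified assumptions, not the implementation benchmarked in this paper): at each iteration it selects the column most correlated with the residual and re-fits all selected coefficients by least squares.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily select up to k columns of A,
    then solve a least-squares problem restricted to the selected support."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # pick the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit all selected coefficients by least squares
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[:] = 0.0
        x[support] = coef
        residual = y - A @ x
    return x

# Toy check: recover a 5-sparse vector from m = 64 Gaussian measurements.
rng = np.random.default_rng(2)
m, n, k = 64, 256, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = omp(A, y, k)
```

With m well above the k·log(n) regime, as in this toy setup, OMP recovers the support exactly and the least-squares re-fit makes the reconstruction error negligible; its cost grows with k through the repeated least-squares solves, which is the speed penalty noted in Table 2.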

5. Conclusions

This paper presents a comprehensive comparative study of four of the most common algorithms for CS reconstruction of EMG signals, namely L1-minimization, OMP, CoSaMP, and NIHT. The study has been conducted on a wide range of EMG biosignals coming from two different datasets. Concerning accuracy, all the algorithms show a pronounced peak of SNR near k/M = 0.4–0.5. However, while for OMP, CoSaMP, and NIHT the SNR falls rapidly beyond this peak, for the L1 algorithm it remains nearly constant. As for the effect of noise on CS reconstruction, L1-minimization shows a behavior that is almost independent of k/M. The results on computational cost and execution time in MATLAB show that L1-minimization outperforms the other algorithms. Finally, Table 2 summarizes the performance of the four algorithms in terms of accuracy, noise tolerance, speed, and computational cost.

Author Contributions

Investigation, L.M., C.T., L.F., and P.C.; Methodology, L.M., C.T., L.F., and P.C.; Writing—original draft, L.M., C.T., L.F., and P.C.

Funding

This work was supported by a Università Politecnica delle Marche Research Grant.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analysis, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Naik, G.R.; Selvan, S.E.; Gobbo, M.; Acharyya, A.; Nguyen, H.T. Principal Component Analysis Applied to Surface Electromyography: A Comprehensive Review. IEEE Access 2016, 4, 4025–4037.
2. Merlo, A.; Farina, D.; Merletti, R. A fast and reliable technique for muscle activity detection from surface EMG signals. IEEE Trans. Biomed. Eng. 2003, 50, 316–323.
3. Fukuda, T.Y.; Echeimberg, J.O.; Pompeu, J.E.; Lucareli, P.R.G.; Garbelotti, S.; Gimenes, R.; Apolinário, A. Root mean square value of the electromyographic signal in the isometric torque of the quadriceps, hamstrings and brachial biceps muscles in female subjects. J. Appl. Res. 2010, 10, 32–39.
4. Nawab, S.H.; Roy, S.H.; Luca, C.J.D. Functional activity monitoring from wearable sensor data. In Proceedings of the 26th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, San Francisco, CA, USA, 1–5 September 2004; Volume 1, pp. 979–982.
5. Lee, S.Y.; Koo, K.H.; Lee, Y.; Lee, J.H.; Kim, J.H. Spatiotemporal analysis of EMG signals for muscle rehabilitation monitoring system. In Proceedings of the 2013 IEEE 2nd Global Conference on Consumer Electronics, Tokyo, Japan, 1–4 October 2013; pp. 1–2.
6. Biagetti, G.; Crippa, P.; Curzi, A.; Orcioni, S.; Turchetti, C. Analysis of the EMG Signal During Cyclic Movements Using Multicomponent AM–FM Decomposition. IEEE J. Biomed. Health Inform. 2015, 19, 1672–1681.
7. Chang, K.M.; Liu, S.H.; Wu, X.H. A wireless sEMG recording system and its application to muscle fatigue detection. Sensors 2012, 12, 489–499.
8. Ghasemzadeh, H.; Jafari, R.; Prabhakaran, B. A Body Sensor Network With Electromyogram and Inertial Sensors: Multimodal Interpretation of Muscular Activities. IEEE Trans. Inf. Technol. Biomed. 2010, 14, 198–206.
9. Du, W.; Omisore, M.; Li, H.; Ivanov, K.; Han, S.; Wang, L. Recognition of Chronic Low Back Pain during Lumbar Spine Movements Based on Surface Electromyography Signals. IEEE Access 2018, 6, 65027–65042.
10. Spulber, I.; Georgiou, P.; Eftekhar, A.; Toumazou, C.; Duffell, L.; Bergmann, J.; McGregor, A.; Mehta, T.; Hernandez, M.; Burdett, A. Frequency analysis of wireless accelerometer and EMG sensors data: Towards discrimination of normal and asymmetric walking pattern. In Proceedings of the 2012 IEEE International Symposium on Circuits and Systems, Seoul, Korea, 20–23 May 2012; pp. 2645–2648.
11. Zhang, X.; Chen, X.; Li, Y.; Lantz, V.; Wang, K.; Yang, J. A Framework for Hand Gesture Recognition Based on Accelerometer and EMG Sensors. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2011, 41, 1064–1076.
12. Rahimi, A.; Benatti, S.; Kanerva, P.; Benini, L.; Rabaey, J.M. Hyperdimensional biosignal processing: A case study for EMG-based hand gesture recognition. In Proceedings of the 2016 IEEE International Conference on Rebooting Computing (ICRC), San Diego, CA, USA, 17–19 October 2016; pp. 1–8.
13. Brunelli, D.; Tadesse, A.M.; Vodermayer, B.; Nowak, M.; Castellini, C. Low-cost wearable multichannel surface EMG acquisition for prosthetic hand control. In Proceedings of the 2015 6th International Workshop on Advances in Sensors and Interfaces (IWASI), Gallipoli, Italy, 18–19 June 2015; pp. 94–99.
14. Yang, D.; Jiang, L.; Huang, Q.; Liu, R.; Liu, H. Experimental Study of an EMG-Controlled 5-DOF Anthropomorphic Prosthetic Hand for Motion Restoration. J. Intell. Robot. Syst. 2014, 76, 427–441.
15. Oskoei, M.A.; Hu, H. Myoelectric control systems—A survey. Biomed. Signal Process. Control 2007, 2, 275–294.
16. Biagetti, G.; Crippa, P.; Falaschetti, L.; Turchetti, C. Classifier Level Fusion of Accelerometer and sEMG Signals for Automatic Fitness Activity Diarization. Sensors 2018, 18, 2850.
17. Roy, S.H.; Cheng, M.S.; Chang, S.S.; Moore, J.; Luca, G.D.; Nawab, S.H.; Luca, C.J.D. A Combined sEMG and Accelerometer System for Monitoring Functional Activity in Stroke. IEEE Trans. Neural Syst. Rehabil. Eng. 2009, 17, 585–594.
18. Varshney, U. Pervasive Healthcare and Wireless Health Monitoring. Mob. Netw. Appl. 2007, 12, 113–127.
19. Movassaghi, S.; Abolhasan, M.; Lipman, J.; Smith, D.; Jamalipour, A. Wireless Body Area Networks: A Survey. IEEE Commun. Surv. Tutor. 2014, 16, 1658–1686.
20. Cavallari, R.; Martelli, F.; Rosini, R.; Buratti, C.; Verdone, R. A Survey on Wireless Body Area Networks: Technologies and Design Challenges. IEEE Commun. Surv. Tutor. 2014, 16, 1635–1657.
21. Zhang, Y.; Zhang, F.; Shakhsheer, Y.; Silver, J.D.; Klinefelter, A.; Nagaraju, M.; Boley, J.; Pandey, J.; Shrivastava, A.; Carlson, E.J.; et al. A Batteryless 19 μW MICS/ISM-Band Energy Harvesting Body Sensor Node SoC for ExG Applications. IEEE J. Solid-State Circuits 2013, 48, 199–213.
22. Craven, D.; McGinley, B.; Kilmartin, L.; Glavin, M.; Jones, E. Compressed Sensing for Bioelectric Signals: A Review. IEEE J. Biomed. Health Inform. 2015, 19, 529–540.
23. Cao, D.; Yu, K.; Zhuo, S.; Hu, Y.; Wang, Z. On the Implementation of Compressive Sensing on Wireless Sensor Network. In Proceedings of the 2016 IEEE First International Conference on Internet-of-Things Design and Implementation (IoTDI), Berlin, Germany, 4–8 April 2016; pp. 229–234.
24. Ren, F.; Marković, D. A Configurable 12–237 kS/s 12.8 mW Sparse-Approximation Engine for Mobile Data Aggregation of Compressively Sampled Physiological Signals. IEEE J. Solid-State Circuits 2016, 51, 68–78.
25. Kanoun, K.; Mamaghanian, H.; Khaled, N.; Atienza, D. A real-time compressed sensing-based personal electrocardiogram monitoring system. In Proceedings of the 2011 Design, Automation Test in Europe, Grenoble, France, 14–18 March 2011; pp. 1–6.
26. Chen, F.; Chandrakasan, A.P.; Stojanovic, V.M. Design and Analysis of a Hardware-Efficient Compressed Sensing Architecture for Data Compression in Wireless Sensors. IEEE J. Solid-State Circuits 2012, 47, 744–756.
27. Mangia, M.; Paleari, M.; Ariano, P.; Rovatti, R.; Setti, G. Compressed sensing based on rakeness for surface ElectroMyoGraphy. In Proceedings of the 2014 IEEE Biomedical Circuits and Systems Conference (BioCAS), Cleveland, OH, USA, 17–19 October 2014; pp. 204–207.
28. Marchioni, A.; Mangia, M.; Pareschi, F.; Rovatti, R.; Setti, G. Rakeness-based Compressed Sensing of Surface ElectroMyoGraphy for Improved Hand Movement Recognition in the Compressed Domain. In Proceedings of the 2018 IEEE Biomedical Circuits and Systems Conference (BioCAS), Cleveland, OH, USA, 17–19 October 2018; pp. 1–4.
29. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
30. Candes, E.J.; Tao, T. Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies? IEEE Trans. Inf. Theory 2006, 52, 5406–5425.
31. Donoho, D.L.; Stark, P.B. Uncertainty Principles and Signal Recovery. SIAM J. Appl. Math. 1989, 49, 906–931.
32. Candes, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215.
33. Candes, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
34. Candes, E.J.; Wakin, M.B. An Introduction To Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
35. Qaisar, S.; Bilal, R.M.; Iqbal, W.; Naureen, M.; Lee, S. Compressive sensing: From theory to applications, a survey. J. Commun. Netw. 2013, 15, 443–456.
36. Tropp, J.A.; Wright, S.J. Computational Methods for Sparse Solution of Linear Inverse Problems. Proc. IEEE 2010, 98, 948–958.
37. Kim, S.; Koh, K.; Lustig, M.; Boyd, S.; Gorinevsky, D. An Interior-Point Method for Large-Scale ℓ1-Regularized Least Squares. IEEE J. Sel. Top. Signal Process. 2007, 1, 606–617.
38. Tropp, J.A. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inf. Theory 2004, 50, 2231–2242.
39. Tropp, J.A.; Gilbert, A.C. Signal Recovery From Random Measurements Via Orthogonal Matching Pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
40. Cai, X.; Zhou, Z.; Yang, Y.; Wang, Y. Improved Sufficient Conditions for Support Recovery of Sparse Signals Via Orthogonal Matching Pursuit. IEEE Access 2018, 6, 30437–30443.
41. Davis, G.; Mallat, S.; Avellaneda, M. Adaptive greedy approximations. Constr. Approx. 1997, 13, 57–98.
42. Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; pp. 40–44.
43. Needell, D.; Tropp, J. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harmon. Anal. 2009, 26, 301–321.
44. Dai, W.; Milenkovic, O. Subspace Pursuit for Compressive Sensing Signal Reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249.
45. Blumensath, T.; Davies, M.E. Normalized Iterative Hard Thresholding: Guaranteed Stability and Performance. IEEE J. Sel. Top. Signal Process. 2010, 4, 298–309.
46. Ravelomanantsoa, A.; Rabah, H.; Rouane, A. Compressed Sensing: A Simple Deterministic Measurement Matrix and a Fast Recovery Algorithm. IEEE Trans. Instrum. Meas. 2015, 64, 3405–3413.
47. Ravelomanantsoa, A.; Rouane, A.; Rabah, H.; Ferveur, N.; Collet, L. Design and Implementation of a Compressed Sensing Encoder: Application to EMG and ECG Wireless Biosensors. Circuits Syst. Signal Process. 2017, 36, 2875–2892.
48. Shukla, K.K.; Tiwari, A.K. Efficient Algorithms for Discrete Wavelet Transform: With Applications to Denoising and Fuzzy Inference Systems; Springer: Berlin/Heidelberg, Germany, 2013.
49. Dixon, A.M.R.; Allstot, E.G.; Gangopadhyay, D.; Allstot, D.J. Compressed Sensing System Considerations for ECG and EMG Wireless Biosensors. IEEE Trans. Biomed. Circuits Syst. 2012, 6, 156–166.
50. Biagetti, G.; Crippa, P.; Falaschetti, L.; Orcioni, S.; Turchetti, C. A portable wireless sEMG and inertial acquisition system for human activity monitoring. Lect. Notes Comput. Sci. 2017, 10209, 608–620.
51. Biagetti, G.; Crippa, P.; Falaschetti, L.; Orcioni, S.; Turchetti, C. Human Activity Monitoring System Based on Wearable sEMG and Accelerometer Wireless Sensor Nodes. BioMed. Eng. OnLine 2018, 17 (Suppl. 1), 132.
52. PhysioBank. Available online: https://physionet.org/physiobank/ (accessed on 19 March 2019).
53. Neuroelectric and Myoelectric Databases—Examples of Electromyograms. Available online: https://physionet.org/physiobank/database/emgdb/ (accessed on 19 March 2019).
Figure 1. Reconstruction error of the EMG signal as achieved with convex optimization, using the DCT, Haar, and DB4 bases in frames corresponding to three muscles: (a) biceps, (b) deltoideus, (c) triceps.
Figure 2. SNR vs. frame number, as achieved with convex optimization, using the DCT, Haar, and DB4 bases, for the same muscles as Figure 1.
Figure 3. SNR vs. algorithm iterations, as achieved with convex optimization, for the same bases and muscles as Figure 1.
Figure 4. Case Study A: SNR as a function of sparsity S_M = k/M with a compression factor C_F = 0.5, for the four algorithms, (a) L1, (b) OMP, (c) CoSaMP, (d) NIHT, using the same bases as Figure 1.
Figure 5. Case Study A: SNR as a function of sparsity S_M = k/M and compression factor C_F, for the four algorithms L1, OMP, CoSaMP, and NIHT, using the DB4 basis.
Figure 6. Case Study A: SNR as a function of compression factor C_F = M/N for the four algorithms and three values of sparsity S_M = k/M.
Figure 7. Case Study A: SNR as a function of sparsity S_M = k/M for the four algorithms and three values of noise superimposed on the signal.
Figure 8. Case Study A: SNR as a function of compression factor C_F = M/N for the four algorithms and three values of sparsity S_M = k/M. A value of SNR = 25 dB for the measurement signal y is used.
Figure 9. Case Study A: Execution time in MATLAB as a function of sparsity S_M = k/M.
Figure 10. Case Study B: Execution time in MATLAB as a function of sparsity S_M = k/M.
Figure 11. Case Study B: SNR as a function of sparsity S_M = k/M and compression factor C_F, for the four algorithms L1, OMP, CoSaMP, and NIHT, using the DB4 basis.
Table 1. Notation.

Symbol – Description
⊙ – element-wise product of two vectors, i.e., c = a ⊙ b = (a₁b₁, …, a_N b_N), with a = (a₁, …, a_N), b = (b₁, …, b_N)
⊕ – bitwise XOR between two binary arrays
x_{s_r} – circular shift of vector x by s_r samples
sgn(x) – element-wise sign function of a vector x
B† – pseudo-inverse of matrix B
Λ = supp(x) – support of x, the set of indices Λ = {j : x_j ≠ 0}
|Λ| = k – cardinality of the set Λ (the number k of elements in the set)
‖x‖₀ = |supp(x)| – ℓ₀-norm of x
‖x‖_p = (Σ_{i=1}^{n} |x_i|^p)^{1/p} – ℓ_p-norm of x (for some 0 < p < ∞)
x_Λ – sub-vector of x indexed by the set Λ
B_Λ – sub-matrix of B formed by the columns indexed by the set Λ
supp_{k,Ψ}(x) – returns a set Λ of k indices corresponding to the largest values |x_i| ‖Ψ_i‖₂, with Ψ = [Ψ₁, …, Ψ_N]
F(x, Λ) – returns a vector with the same elements as x on the sub-set Λ and 0 elsewhere
[x]_k = F(x, supp_{k,Ψ}(x)) – reduced operator
Table 2. Comparison of the four algorithms L1, OMP, CoSaMP, and NIHT in terms of accuracy, noise tolerance, speed, and computational cost.

Algorithm | Accuracy | Noise Tolerance | Speed | Computational Cost
L1 | Excellent | Excellent | Excellent | O(N² N_iter)
OMP | Good | Good | Bad | O(M k³)
CoSaMP | Fair | Fair | Good | O(M k² N_iter)
NIHT | Bad | Fair | Fair | O(M N k)

Share and Cite

MDPI and ACS Style

Manoni, L.; Turchetti, C.; Falaschetti, L.; Crippa, P. A Comparative Study of Computational Methods for Compressed Sensing Reconstruction of EMG Signal. Sensors 2019, 19, 3531. https://doi.org/10.3390/s19163531
