# Stochastic Gradient Matching Pursuit Algorithm Based on Sparse Estimation


## Abstract


## 1. Introduction

## 2. Preliminaries and Problem Statement

## 3. StoGradMP Algorithm

**Randomize:** The measurement matrix $\mathsf{\Phi}$ is randomly divided into blocks; that is, a set of row indices is drawn at random, and the rows of the measurement matrix corresponding to those indices form a block matrix ${\mathsf{\Phi}}_{{b}_{i}}$ of size ${b}_{i}\times n$. Then, according to Equation (10) and the block matrix, the sub-function ${f}_{{i}_{k}}({x}_{k})$ is evaluated.

**Proxy:** Compute the gradient ${G}_{k}$ of ${f}_{{i}_{k}}({x}_{k})$, where ${G}_{k}$ is an $n\times 1$ column vector.

**Identify:** Rank the absolute values of the gradient vector in descending order, select the $2K$ largest gradient coefficients, and find the column indices (atomic indices) of the measurement matrix corresponding to those coefficients; these indices form the preliminary index set ${P}_{k}$.

**Merge:** Form the candidate atomic index set ${C}_{k}$, which consists of the preliminary index set ${P}_{k}$ and the support index set ${S}_{k-1}$ from the previous iteration.

**Estimation:** Compute the transition estimate ${b}_{k}$ of the signal by the least-squares method.

**Prune:** Rank the absolute values of the transition estimate in descending order, keep the $K$ largest signal-estimate coefficients, and find the atomic indices of the measurement matrix corresponding to those coefficients; these indices form the support atomic index set ${S}_{k}$.

**Update:** Update the final signal estimate ${x}_{k}={b}_{k,S}$ at the current iteration, restricted to the support atomic index set ${S}_{k}$.

**Check:** If the ${l}_{2}$-norm of the signal residual is less than the tolerance of the StoGradMP algorithm, or if the loop index $k$ exceeds the maximum number of iterations, the iteration halts and the approximation $\widehat{x}={x}_{k}$ is output. Otherwise, the iteration continues until a halting condition is met.
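The steps above can be sketched in NumPy as follows. This is an illustrative sketch of a StoGradMP-style loop under the noiseless model, not the authors' implementation; the function name `stogradmp`, the default parameters, and the uniform block sampling are assumptions.

```python
import numpy as np

def stogradmp(Phi, u, K, b, tol=1e-6, max_iter=200, rng=None):
    """Illustrative sketch of the StoGradMP iteration described above."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = Phi.shape
    x = np.zeros(n)
    S = np.array([], dtype=int)                      # support index set S_{k-1}
    for _ in range(max_iter):
        # Randomize: draw a random block of b rows of the measurement matrix
        rows = rng.choice(m, size=b, replace=False)
        Phi_b, u_b = Phi[rows], u[rows]
        # Proxy: gradient of the block sub-function, an n x 1 vector
        G = -2.0 * Phi_b.T @ (u_b - Phi_b @ x)
        # Identify: column indices of the 2K largest |G| entries -> P_k
        P = np.argsort(np.abs(G))[::-1][:2 * K]
        # Merge: candidate set C_k = P_k ∪ S_{k-1}
        C = np.union1d(P, S).astype(int)
        # Estimation: least squares restricted to the candidate atoms
        bk = np.zeros(n)
        bk[C] = np.linalg.lstsq(Phi[:, C], u, rcond=None)[0]
        # Prune: keep the K largest coefficients -> support set S_k
        S = np.argsort(np.abs(bk))[::-1][:K]
        # Update: restrict the estimate to the support
        x = np.zeros(n)
        x[S] = bk[S]
        # Check: halt when the residual norm falls below the tolerance
        if np.linalg.norm(u - Phi @ x) < tol:
            break
    return x
```

On a well-conditioned Gaussian problem the loop typically terminates early: once the candidate set contains the true support, the least-squares step reproduces the signal exactly and the residual check fires.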

## 4. Proposed Algorithm

#### 4.1. Pre-Evaluation Strategy

#### 4.2. Adjustment Strategy

#### 4.3. Reliability Verification Condition

**Algorithm 1** Proposed algorithm

**Input:** measurement matrix $\mathsf{\Phi}\in {\mathbb{R}}^{m\times n}$, observation vector $u$, block size $b$, step-size $s$, isometry constant ${\delta}_{K}$, initial sparsity estimate ${K}_{0}=1$, halting tolerance $tol$, maximum number of iterations $\mathrm{maxIter}$.

**Output 1:** ${K}_{0}$, the sparsity estimate of the original signal, and $V$, the support atomic index set.

**Output 2:** $\widehat{x}={x}_{k}$, the $K$-sparse approximation of the signal $x$.

**Initialization:**
- $\widehat{x}=0$ {signal approximation}
- $k=0$ {loop index of loop 2}; $kk=0$ {loop index of loop 1}
- $done1=0$, $done2=0$ {flags of while loops 1 and 2}
- ${r}_{k}=u$ {residual}
- $M=\mathrm{floor}(m/b)$ {number of blocks}
- ${P}_{0}=\varnothing$ {preliminary index set}; ${C}_{0}=\varnothing$ {candidate index set}
- $V=\varnothing$ {support index set of loop 1}; ${S}_{0}=\varnothing$ {support index set of loop 2}
- $j=0$ {stage index}

**Part 1: Sparsity estimation.** While ($\neg done1$):
1. $kk=kk+1$.
2. **Compute the atom correlation:** $g={\mathsf{\Phi}}^{T}u$.
3. **Identify the support index set:** $V=\mathrm{max}(|g|,{K}_{0})$, the indices of the ${K}_{0}$ largest entries of $|g|$.
4. **Check the iteration condition:** if ${\parallel {\mathsf{\Phi}}_{V}^{T}u\parallel}_{2}>\frac{1-{\delta}_{K}}{\sqrt{1+{\delta}_{K}}}{\parallel u\parallel}_{2}$, set $done1=1$ and quit the iteration; otherwise set ${K}_{0}={K}_{0}+1$ to approach the true sparsity.

**Part 2: Recovery.** Set ${S}_{0}=V$ to initialize the support index set. While ($\neg done2$):
1. $k=k+1$.
2. **Randomize:** select a block matrix ${\mathsf{\Phi}}_{{b}_{{i}_{k}}}$ and the corresponding observations ${u}_{{b}_{{i}_{k}}}$.
3. **Compute the gradient:** ${G}_{k}=\nabla {f}_{{i}_{k}}({x}_{k})=-2{\mathsf{\Phi}}_{{b}_{{i}_{k}}}^{T}({u}_{{b}_{{i}_{k}}}-{\mathsf{\Phi}}_{{b}_{{i}_{k}}}{x}_{k-1})$.
4. **Identify the largest ${K}_{0}$ components:** ${P}_{k}=\mathrm{max}(|{G}_{k}|,{K}_{0})$.
5. **Merge to update the candidate index set:** ${C}_{k}={P}_{k}\cup {S}_{k-1}$. If the reliability verification condition of Section 4.3 holds, estimate the signal by the least-squares method, ${b}_{k}={\mathsf{\Phi}}_{{C}_{k}}^{+}u$; otherwise set ${b}_{k}=0$ and break.
6. **Prune to obtain the current support index set:** $S=\mathrm{max}(|{b}_{k}|,{K}_{0})$.
7. **Signal approximation by the support set:** ${x}_{k}={b}_{k,S}$, ${r}_{new}=u-\mathsf{\Phi}{x}_{k}$.
8. **Check the iteration condition:** if ${\parallel {r}_{new}\parallel}_{2}\le tol$ or $k>\mathrm{maxIter}$, set $done2=1$ and quit the iteration; else if ${\parallel {r}_{new}\parallel}_{2}\ge {\parallel {r}_{k-1}\parallel}_{2}$ (the sparsity adjustment condition), set $j=j+1$ and ${K}_{0}=j\cdot s$ to shift into the next stage and approach the real sparsity; otherwise update the residual ${r}_{k}={r}_{new}$ and the support index set ${S}_{k}=S$.

## 5. Proof of the Proposed Algorithm

**Proposition 5.1.**

**Proof.**

## 6. Discussion

## 7. Conclusions

## Author Contributions

## Acknowledgments

## Conflicts of Interest

## References


**Figure 2.**Reconstruction percentage of different step-sizes with different sparsities in different isometry constant conditions ($n=400$, $s\in [1,5,10,15]$, ${\delta}_{K}\in [0.1,0.2,0.3,0.4,0.5,0.6]$ and $m=170$, Gaussian signal).

**Figure 3.**Reconstruction percentage of different isometry constants with different sparsities in different step-size conditions ($n=400$, $s\in [1,5,10,15]$, ${\delta}_{K}\in [0.1,0.2,0.3,0.4,0.5,0.6]$ and $m=170$, Gaussian signal).

**Figure 4.**The average estimated sparsity of different isometry constants with different sparsities ($n=400$, ${\delta}_{K}\in [0.1,0.2,0.3,0.4,0.5,0.6]$ and $m=170$, Gaussian signal).

**Figure 5.** Reconstruction percentage of different algorithms with different sparsities in different real sparsity $K$ conditions ($n=400$, $s\in [1,5,10,15]$, ${\delta}_{K}=0.1$, $m=170$, $L\in [10,100]$, Gaussian signal).

**Figure 6.**Reconstruction percentage of different algorithms with different measurements in different real sparsity $K$ conditions ($n=400$, $s\in [1,5,10,15]$, ${\delta}_{K}=0.1$ and $m=2\ast K:5:300$, Gaussian signal).

**Figure 7.**The average runtime of different algorithms with different sparsities in different sparsity conditions. ($n=400$, $s\in [1,5,10,15]$, ${\delta}_{K}=0.1$ and $m=170$, Gaussian signal).

**Figure 8.** The average runtime of different algorithms with different measurements in different sparsity conditions ($n=400$, $s\in [1,5,10,15]$, ${\delta}_{K}=0.1$ and $m=2\ast K:5:300$, Gaussian signal).

**Figure 9.**The average mean square error of different algorithms with different $SNR$ levels in different real sparsity conditions ($n=400$, $s\in [1,5,10]$, ${\delta}_{K}=0.1$ and $m=170$, $SNR=10:5:50$, Gaussian signal).

**Figure 10.**Application in remote sensing image compressing and reconstructing with our proposed method.

**Figure 11.**Application in power quality signal compressing and reconstructing with our proposed method.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Zhao, L.; Hu, Y.; Liu, Y.
Stochastic Gradient Matching Pursuit Algorithm Based on Sparse Estimation. *Electronics* **2019**, *8*, 165.
https://doi.org/10.3390/electronics8020165
