# Improved Reconstruction Algorithm of Wireless Sensor Network Based on BFGS Quasi-Newton Method


## Abstract

This paper applies the L-BFGS quasi-Newton method to the problem of sparse signal reconstruction in wireless sensor networks based on compressed sensing. The algorithm avoids computing the matrix $H_k$ directly: by storing the step differences between $m$ adjacent iteration points, a matrix $H_k$ approximating the inverse of the Hessian matrix is constructed implicitly. This removes the drawback of BFGS, which must calculate and store $H_k$ explicitly, reduces the algorithm complexity, and improves the reconstruction rate. Finally, the experimental results show that the L-BFGS quasi-Newton method performs well in solving the sparse signal reconstruction problem in wireless sensor networks.

## 1. Introduction

The BFGS quasi-Newton method approximates the inverse of the Hessian matrix by a matrix $H_k$ updated by iteration, which reduces the amount of computation. However, the BFGS algorithm still needs to calculate and store the $n \times n$ matrix $H_k$; when the dimension $n$ is large, the savings in computation are limited. Therefore, this paper applies the L-BFGS [14] quasi-Newton algorithm to the problem of sparse signal reconstruction in wireless sensors based on compressed sensing. The L-BFGS algorithm does not calculate and store $H_k$ directly; it only stores $m$ ($m \ll n$) vector pairs $(s_k, u_k)$. This overcomes the shortcoming of the BFGS quasi-Newton algorithm, which must calculate and store the full $n \times n$ matrix $H_k$. In Section 5, we prove the feasibility of using the L-BFGS quasi-Newton algorithm to solve this reconstruction problem.

## 2. Signal Reconstruction Algorithm Based on Newton Method

In compressed sensing, reconstructing the sparse signal $x$ from the measurements $y = Ax$ is formulated as an $L_0$ norm minimization problem, and the signal is reconstructed by solving:

$$\min_{x}\ \Vert x\Vert_0 \quad \text{s.t.}\quad Ax = y,$$

where the $L_0$ norm is the number of nonzero elements of the vector.

The original signal can in principle be recovered by solving this $L_0$ norm minimization problem. However, the $L_0$ norm problem is NP-hard: its solution requires enumerating all permutations and combinations of the positions of non-zero values in the original signal. Under certain conditions, the $L_1$ norm minimization problem is the optimal convex approximation of the $L_0$ norm minimization problem [15], and the $L_1$ problem is simple to solve. Therefore, solving the $L_0$ norm minimization problem is transformed into solving the $L_1$ norm minimization problem:

$$\min_{x}\ \Vert x\Vert_1 \quad \text{s.t.}\quad Ax = y. \tag{3}$$

The $L_1$ norm minimization problem is a convex optimization problem, and there are many methods for solving convex optimization problems; Newton's method is one of them. Newton's method uses the first-order gradient and the second-order Hessian matrix to form a quadratic approximation of the objective function, and it converges faster than other convex optimization algorithms.
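As a numerical illustration (not part of the paper's method), problem (3) can be cast as a linear program and handed to an off-the-shelf solver. The sketch below, with illustrative sizes, seed, and the hypothetical helper name `basis_pursuit`, uses SciPy's `linprog` and the standard split $-t \le x \le t$:

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. Ax = y as a linear program.

    Split trick: minimize sum(t) over the stacked variable [x; t]
    subject to -t <= x <= t and Ax = y.
    """
    M, N = A.shape
    c = np.concatenate([np.zeros(N), np.ones(N)])   # objective: sum of t
    I = np.eye(N)
    # inequality constraints  x - t <= 0  and  -x - t <= 0
    A_ub = np.block([[I, -I], [-I, -I]])
    b_ub = np.zeros(2 * N)
    A_eq = np.hstack([A, np.zeros((M, N))])         # Ax = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * (2 * N))
    return res.x[:N]

rng = np.random.default_rng(0)
M, N, K = 12, 24, 3                                 # illustrative sizes
A = rng.standard_normal((M, N))
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true
x_hat = basis_pursuit(A, y)
```

Any optimum satisfies the measurements and has $L_1$ norm no larger than that of the true sparse signal, which is itself feasible.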

Problem (3) can be relaxed to the $L_1$ regularized least squares problem [16]:

$$\min_{x}\ \frac{1}{2}\Vert Ax-y\Vert_2^2+\lambda \Vert x\Vert_1. \tag{4}$$

Since $\Vert x\Vert_1$ is a nonsmooth convex function of $x$, the basic subgradient approach is not effective for problem (4). The $L_1$ minimization problem (3) can instead be solved through a suitable smooth form of problem (4): $\Vert x\Vert_1$ is replaced by a smooth approximation with smoothing parameter $\tau$, which transforms the $L_1$ regularized least squares problem (4) into an unconstrained smooth convex programming problem $\min_x F(x)$.

In the iterative solution of this problem, $x_k$ is the iteration point at step $k$, $x_{k+1}$ is the iteration point at step $k+1$, $s_k = x_{k+1}-x_k$ is the difference between adjacent iteration points, $g_k$ is the gradient of $F(x)$ at $x_k$, and $u_k = g_{k+1}-g_k$ is the corresponding gradient difference.
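The paper's exact smoothing function is not reproduced above; purely for illustration, a common smooth surrogate for $\Vert x\Vert_1$ is $\sum_i \sqrt{x_i^2+\tau^2}$, which is differentiable everywhere and approaches $\vert x_i\vert$ as $\tau \to 0$. A minimal sketch under that assumption:

```python
import numpy as np

def smooth_l1(x, tau):
    # sum of sqrt(x_i^2 + tau^2): smooth everywhere, -> ||x||_1 as tau -> 0
    return np.sum(np.sqrt(x**2 + tau**2))

def smooth_l1_grad(x, tau):
    # componentwise gradient x_i / sqrt(x_i^2 + tau^2), well-defined at x_i = 0
    return x / np.sqrt(x**2 + tau**2)
```

Since $\sqrt{a^2+\tau^2} \le \vert a\vert + \tau$, the approximation error is at most $N\tau$ for a vector of length $N$, so the surrogate tightens as $\tau_k \to 0$.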

## 3. Principles

The L-BFGS algorithm constructs a matrix $H_k$ to approximate the inverse of the Hessian matrix without directly calculating and storing the full matrix. Its complexity is lower than that of the BFGS algorithm, while the convergence speed of the Newton method is maintained. The constructed $H_k$ is positive definite, which ensures that the search direction is a descent direction of the objective function $F(x)$ at $x_k$.

The quasi-Newton method first constructs a matrix $B_k$ that approximates the Hessian matrix. $B_k$ needs to meet the following conditions:

1. $B_k$ is symmetric and positive definite, ensuring that the direction generated by the algorithm is a descent direction of the objective function $F(x)$ at $x_k$;
2. the update from $B_k$ to $B_{k+1}$ is relatively simple.

The search direction $d_k$ is obtained by solving the linear equation

$$B_k d_k = -g_k.$$

The BFGS update formula of $B_k$ is:

$$B_{k+1}=B_k-\frac{B_k s_k s_k^{T} B_k}{s_k^{T} B_k s_k}+\frac{u_k u_k^{T}}{u_k^{T} s_k}. \tag{7}$$

Once $B_{k+1}$ has been constructed, $H_{k+1}$ satisfies the condition ${B}_{k+1}={H}_{k+1}^{-1}$. Applying the Sherman-Morrison-Woodbury [19] formula twice, the update formula of the matrix $H_k$ is:

$$H_{k+1}=\left(I-\frac{s_k u_k^{T}}{u_k^{T}s_k}\right)H_k\left(I-\frac{u_k s_k^{T}}{u_k^{T}s_k}\right)+\frac{s_k s_k^{T}}{u_k^{T}s_k}, \tag{8}$$

where $H_k$ is the approximation of the inverse of the Hessian matrix and $I$ is the identity matrix of order $n$. $H_k$ should satisfy the following conditions:

1. $H_k$ satisfies the quasi-Newton equation ${B}_{k+1}{s}_{k}={u}_{k}$ or ${H}_{k+1}{u}_{k}={s}_{k}$;
2. $H_k$ is a positive definite symmetric matrix;
3. the update from $H_k$ to $H_{k+1}$ is a low-rank update.

$H_{k+1}$ remains positive definite when $H_k$ is positive definite. From Formula (8), an efficient recursive procedure is derived to compute the matrix-vector product $H_k g_k$ without ever forming $H_k$. For completeness, we describe this L-BFGS recursion in Algorithm 1.
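The quasi-Newton equation $H_{k+1}u_k = s_k$ and the symmetry of the update in Formula (8) can be checked numerically. A small sketch (dimensions and seed are illustrative; the pair $u_k = H s_k$ is chosen only to guarantee $u_k^T s_k > 0$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
Q = rng.standard_normal((n, n))
H = Q @ Q.T + n * np.eye(n)      # any symmetric positive definite H_k
s = rng.standard_normal(n)
u = H @ s                        # then u^T s = s^T H s > 0 (curvature condition)

rho = 1.0 / (u @ s)
I = np.eye(n)
# Formula (8): H_{k+1} = (I - rho s u^T) H_k (I - rho u s^T) + rho s s^T
H_next = (I - rho * np.outer(s, u)) @ H @ (I - rho * np.outer(u, s)) \
         + rho * np.outer(s, s)
```

Multiplying $H_{k+1}$ by $u$ annihilates the middle factor, leaving exactly $s$, so the secant condition holds by construction.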

**Algorithm 1: L-BFGS two-loop recursion**

1. Input: $[{s}_{k-m},{s}_{k-m+1},\dots ,{s}_{k-1}]$ and $[{u}_{k-m},{u}_{k-m+1},\dots ,{u}_{k-1}]$;
2. $q\leftarrow {g}_{k}$;
3. for $i=k-1,k-2,\dots ,k-m$ do
4. $\alpha_i \leftarrow \rho_i s_i^{T} q$, where $\rho_i = 1/(u_i^{T}s_i)$;
5. $q\leftarrow q-\alpha_i u_i$;
6. end
7. $r\leftarrow {H}_{k}^{(0)}q$;
8. for $i=k-m,k-m+1,\dots ,k-1$ do
9. $\beta \leftarrow \rho_i u_i^{T} r$;
10. $r\leftarrow r+(\alpha_i-\beta)s_i$;
11. end (now $r={H}_{k}{g}_{k}$, so the search direction is $d_k=-{H}_{k}{g}_{k}=-r$);
12. Return $r$.
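Algorithm 1 can be translated almost line-for-line into NumPy. This sketch (the seed matrix $H_k^{(0)}=\gamma I$ and the helper name `two_loop` are illustrative) returns $H_k g$ using only the stored pairs:

```python
import numpy as np

def two_loop(g, S, U, gamma=1.0):
    """L-BFGS two-loop recursion: returns H_k @ g using only the stored
    pairs S = [s_{k-m}, ..., s_{k-1}], U = [u_{k-m}, ..., u_{k-1}]
    and the diagonal seed H_k^(0) = gamma * I."""
    m = len(S)
    rho = [1.0 / (U[i] @ S[i]) for i in range(m)]
    alpha = [0.0] * m
    q = g.copy()
    for i in range(m - 1, -1, -1):        # first loop: newest to oldest
        alpha[i] = rho[i] * (S[i] @ q)
        q -= alpha[i] * U[i]
    r = gamma * q                          # apply the seed H_k^(0)
    for i in range(m):                     # second loop: oldest to newest
        beta = rho[i] * (U[i] @ r)
        r += (alpha[i] - beta) * S[i]
    return r                               # search direction is d_k = -r
```

The recursion is exactly equivalent to applying the update Formula (8) once per stored pair, which is how it can be tested.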

## 4. Algorithm

With the L-BFGS algorithm, $H_k$ no longer needs to be calculated and stored directly. Instead, the $m$ most recent vector pairs $(s_k, u_k)$ saved in memory are used to apply $H_k$ implicitly, so only matrices of size $m \times n$ need to be stored. This saves a large amount of computation and improves the reconstruction rate. The specific steps of the L-BFGS algorithm are given in Algorithm 2.

**Algorithm 2: Limited-memory BFGS algorithm (L-BFGS)**

1. Given: $\epsilon >0$, $k=0$, ${\rho}_{1}\in (0,{\scriptscriptstyle \frac{1}{2}})$, ${\rho}_{2}\in ({\rho}_{1},1)$;
2. Initialization: ${x}_{0}\in {R}^{n}$; ${H}_{0}=I$;
3. while $\Vert {g}_{k}\Vert >\epsilon $ do
4. compute the search direction $d_k=-H_k g_k$ by the two-loop recursion (Algorithm 1);
5. find the step size $\alpha_k$ by the Wolfe line search;
6. $x_{k+1}\leftarrow x_k+\alpha_k d_k$;
7. $s_k\leftarrow x_{k+1}-x_k$; $u_k\leftarrow g_{k+1}-g_k$;
8. store $(s_k,u_k)$; if $k\ge m$, discard $(s_{k-m},u_{k-m})$;
9. $k\leftarrow k+1$;
10. end
11. Return $x_k$.

The step size $\alpha_k$ is found by the Wolfe line search, a kind of inexact line search. When the Wolfe line search is used, $\alpha_k$ is required to satisfy the following two conditions:

$$F(x_k+\alpha_k d_k)\le F(x_k)+\rho_1 \alpha_k g_k^{T} d_k,$$
$$g(x_k+\alpha_k d_k)^{T} d_k\ge \rho_2 g_k^{T} d_k,$$

where the constants $\rho_1$ and $\rho_2$ satisfy $0<\rho_1<\rho_2<1$.
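The two Wolfe conditions are easy to check directly. A minimal sketch (the helper name `satisfies_wolfe` and the quadratic test function are illustrative, not from the paper):

```python
import numpy as np

def satisfies_wolfe(F, gradF, x, d, alpha, rho1=1e-4, rho2=0.9):
    """Check the two Wolfe conditions for step size alpha along direction d."""
    g_d = gradF(x) @ d
    armijo = F(x + alpha * d) <= F(x) + rho1 * alpha * g_d     # sufficient decrease
    curvature = gradF(x + alpha * d) @ d >= rho2 * g_d         # curvature condition
    return bool(armijo and curvature)

# simple quadratic F(x) = 0.5 ||x||^2, steepest descent from x = [1, 1]
F = lambda x: 0.5 * (x @ x)
gradF = lambda x: x
x = np.array([1.0, 1.0])
d = -gradF(x)
```

Here the exact minimizing step $\alpha = 1$ passes both conditions, while a tiny step like $\alpha = 0.01$ fails the curvature condition, which is exactly what rules out uselessly short steps.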

In summary, the L-BFGS algorithm does not store $H_k$ directly; it only stores matrices of size $m \times n$ and computes products with $H_k$ from the stored vectors. A higher reconstruction rate is obtained by discarding the older vectors and sacrificing a little reconstruction accuracy. Therefore, compared with the BFGS quasi-Newton method, the L-BFGS quasi-Newton method greatly reduces the computational complexity and reconstructs a sparse signal faster.

## 5. Verify the Feasibility of the L-BFGS Algorithm

The L-BFGS algorithm is feasible if the sequence it generates converges to the optimal solution to the $L_1$ norm minimization problem: when the solution is the optimal solution to the $L_1$ norm minimization problem, the sparse signal can be accurately reconstructed. In other words, Algorithm 2 is feasible when it achieves global convergence.

**Lemma 1.** *Let $B_k$ be symmetric and positive definite, and let $B_{k+1}$ be determined by the L-BFGS correction Equation (7). Then the necessary and sufficient condition for $B_{k+1}$ to be symmetric and positive definite is ${u}_{k}^{T}{s}_{k}>0$.*

Suppose that $B_0$ is symmetric and positive definite and that ${u}_{k}^{T}{s}_{k}>0$ ($\forall k\ge 0$) holds during the iteration. Then the matrix sequence $\{B_{k+1}\}$ generated by the L-BFGS correction formula is symmetric and positive definite, so the equation ${B}_{k}{d}_{k}=-{g}_{k}$ has a unique solution $d_k$. It is easy to deduce that $g_k^{T}d_k=-g_k^{T}B_k^{-1}g_k<0$, i.e., $d_k$ is a descent direction of the objective function $F(x)$ at $x_k$.
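Lemma 1 can be checked numerically: repeated BFGS updates (7) with $u_k^T s_k > 0$ keep $B_k$ symmetric and positive definite. A small sketch (dimensions, seed, and the model quadratic are illustrative; $u = Ms$ merely guarantees $u^{T}s = s^{T}Ms > 0$):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
B = np.eye(n)                          # B_0 symmetric positive definite
Q = rng.standard_normal((n, n))
M = Q @ Q.T + n * np.eye(n)            # SPD Hessian of a model quadratic
for _ in range(10):
    s = rng.standard_normal(n)
    u = M @ s                          # curvature condition u^T s > 0
    Bs = B @ s
    # Equation (7): B <- B - (B s s^T B)/(s^T B s) + (u u^T)/(u^T s)
    B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(u, u) / (u @ s)
```

After any number of such updates, $B$ should remain symmetric with strictly positive eigenvalues.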

**Lemma 2.** *If $H_k$ is symmetric and positive definite and ${u}_{k}^{T}{s}_{k}>0$, then the matrix $H_{k+1}$ generated by the update Formula (8) is symmetric and positive definite.*

**Proof.** By Formula (8) and the condition ${u}_{k}^{T}{s}_{k}>0$, $H_{k+1}$ is positive definite. The search direction generated by the algorithm is therefore guaranteed to be a descent direction of the objective function $F(x)$ at $x_k$, so the algorithm works normally. □

**Lemma 3.** *The gradient of the smooth approximation of the $L_1$ norm $\Vert x\Vert_1$ obtained by the smoothing technique is:*

**Proof.**

**Theorem 1.** *Let $\{x_k\}$ denote the sequence generated by the L-BFGS quasi-Newton iteration. When the smoothing parameter of $\Vert x\Vert_1$ satisfies $\tau_k\to 0$, $\{x_k\}$ is a bounded sequence. Let $x^{\ast}$ denote any limit point of $\{x_k\}$; then $x^{\ast}$ is the optimal solution to the $L_1$ norm minimization problem.*

**Proof.** Given the smoothing parameter $\tau_k$ at step $k$, let ${x}_{k}^{\ast}=\arg \underset{x}{\mathrm{min}}\,F_k(x)$ denote the minimum point of the objective function $F_k(x)$ at step $k$, and let $z$ denote any vector satisfying $Az=y$. Since $\tau_k\to 0$, $\{x_k\}$ is a bounded sequence. On the other hand, it can also be proved that the limit point $x^{\ast}$ of the sequence $\{x_k\}$ is feasible and is a KKT point of the $L_1$ norm minimization problem. Since $\Vert x\Vert_1$ is convex and problem (3) is a convex programming problem, the KKT condition is sufficient for optimality, so $x^{\ast}$ is the optimal solution to the $L_1$ norm minimization problem (3). It follows that the sparse signal of the wireless sensor network can be accurately reconstructed by the L-BFGS quasi-Newton method. □

**Corollary 1.** *Let the $L_1$ norm minimization problem have a unique optimal solution, and let $\{x_k\}$ denote the sequence generated by the L-BFGS iteration. When $k\to\infty$ and $\tau_k\to 0$, $x_k$ converges to $x^{\ast}$, where $x^{\ast}=\arg\min \{\Vert x\Vert_1 : Ax=y\}$.*

## 6. L-BFGS Quasi-Newton Method Steps

In the BFGS method, the matrix $H_k$ must be stored at every iteration. $H_k$ is very large, and computing it directly costs the algorithm reconstruction rate. According to Equation (8), each $H_k$ is obtained by iterating over the curvature information $(s_k, u_k)$. Since $H_k$ itself cannot be stored easily, the curvature information $(s_k, u_k)$ is stored instead. This kind of storage saves memory space and improves the speed of the algorithm.

When the number of iterations grows, however, all the pairs $(s_k, u_k)$ cannot be saved either. The reconstruction rate can be improved by discarding some of the original pairs $(s_k, u_k)$. Assume that the number of stored vector pairs is $m$. At iteration $m+1$, $(s_1, u_1)$ is thrown away; at iteration $m+2$, $(s_2, u_2)$ is thrown away. By analogy, only the latest $m$ pairs $(s_k, u_k)$ are retained. In this way, although some reconstruction accuracy is lost, memory is saved, the algorithm complexity is reduced, and the reconstruction rate is improved. The L-BFGS algorithm can therefore be understood as a further optimization of the BFGS algorithm.
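The sliding window of the $m$ newest pairs maps directly onto a bounded queue. A minimal sketch (pair contents are placeholder strings for illustration):

```python
from collections import deque

m = 3
pairs = deque(maxlen=m)       # oldest pair is discarded automatically
for k in range(1, 6):         # simulate 5 iterations producing (s_k, u_k)
    pairs.append((f"s_{k}", f"u_{k}"))
```

After five iterations only the pairs from iterations 3, 4, and 5 remain; the append at iteration $m+1$ silently evicted $(s_1, u_1)$, exactly as described above.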

The steps of the L-BFGS quasi-Newton method are as follows:

1. Select the initial iteration point $x_0$ and define the operating error $\epsilon$; the general value of $\epsilon$ is between 0 and 1. Initialize the $H$ matrix as the identity matrix $I$ of order $n$.
2. Compute the gradient $g_k$ at the current iteration point.
3. Compute $g_{k+1}$. If the norm of $g_{k+1}$ satisfies the defined operating error, namely $\Vert {g}_{k+1}\Vert \le \epsilon $, the cut-off condition is met and the algorithm is terminated; the iteration point $x_{k+1}$ is the optimal solution. If the cut-off condition is not met, go to step 4 and continue to solve for the optimal solution.
4. Compute the search direction $d_k$ and the step factor $\alpha_k$, and update $x_{k+1}=x_k+\alpha_k d_k$, ensuring that the search direction is correct. There are many methods to calculate the step factor $\alpha_k$; here, the inexact Wolfe line search is used, which guarantees a step size larger than 0. Along a correct search direction, $x_{k+1}$ gets closer and closer to the optimal solution.
5. Compute the new curvature information $(s_k, u_k)$ and delete $(s_{k-m}, u_{k-m})$. The L-BFGS quasi-Newton method retains $m$ pairs of data; when more than $m$ pairs accumulate, it deletes the curvature information from before the last $m$ iterations to improve the calculation speed. This is also the difference from the BFGS quasi-Newton method.
6. Set $k=k+1$ and start a new iteration.

In step 1, the initial value $x_0$ should be selected near the root to ensure the convergence of the iterative process; the most common way is to choose the initial value by using Newton's convergence theorem. In step 4, the correctness of the search direction $d_k$ must be ensured: if the search direction is not a descent direction at $x_k$, the iteration points deviate more and more from the optimal point, and the sparse signal cannot be accurately reconstructed. There are many ways to calculate the descent direction $d_k$; in this paper, the two-loop recursion is adopted to find the search direction, which ensures the accuracy of the search direction and is simple. In step 5, if $k\ge m$ is satisfied, the new curvature information is calculated and the pair $(s_{k-m}, u_{k-m})$ is discarded, giving up a small amount of reconstruction accuracy in exchange for a faster reconstruction rate; if $k\ge m$ is not satisfied, $(s_k, u_k)$ is calculated directly. In step 6, no re-initialization is required: the result of the previous iteration is the initial value, and the iteration continues until the optimal solution is obtained.
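Steps 1-6 can be sketched as a compact driver. This is an illustrative implementation, not the paper's code: the function name `lbfgs`, the default parameters, the scaled-identity seed $H_k^{(0)}$, and the small fallback step when SciPy's Wolfe search fails are all assumptions.

```python
import numpy as np
from collections import deque
from scipy.optimize import line_search

def lbfgs(F, gradF, x0, m=10, eps=1e-6, max_iter=200):
    """Minimal L-BFGS loop following steps 1-6: two-loop recursion for the
    search direction, Wolfe line search for the step size, and a deque that
    keeps only the m most recent curvature pairs."""
    x = x0.astype(float)
    g = gradF(x)
    pairs = deque(maxlen=m)               # step 5: keep only m newest (s, u)
    for _ in range(max_iter):
        if np.linalg.norm(g) <= eps:      # step 3: cut-off condition
            break
        # step 4a: d = -H_k g_k via the two-loop recursion
        q = g.copy()
        alphas = []
        for s, u in reversed(pairs):              # newest -> oldest
            a = (s @ q) / (u @ s)
            alphas.append(a)
            q -= a * u
        if pairs:                                 # scaled identity seed H_k^(0)
            s_last, u_last = pairs[-1]
            q *= (s_last @ u_last) / (u_last @ u_last)
        r = q
        for (s, u), a in zip(pairs, reversed(alphas)):  # oldest -> newest
            b = (u @ r) / (u @ s)
            r += (a - b) * s
        d = -r
        # step 4b: Wolfe line search for the step size
        alpha = line_search(F, gradF, x, d, gfk=g)[0]
        if alpha is None:
            alpha = 1e-4                          # conservative fallback
        x_new = x + alpha * d
        g_new = gradF(x_new)
        s, u = x_new - x, g_new - g
        if u @ s > 1e-12:                         # keep only safe pairs
            pairs.append((s, u))
        x, g = x_new, g_new
    return x
```

On a strongly convex quadratic, the loop drives the gradient norm below `eps` in a handful of iterations.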

## 7. Experimental Simulation

In the experiment, a random matrix $A \in R^{M\times N}$ with Gaussian distribution is used as the observation matrix, and a Gaussian sparse signal $x \in R^{N}$ with variable sparsity is used as the original signal. The value range of the sparsity $K$ is $[1,70]$.
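A toy version of this experimental setup can be run with SciPy's built-in L-BFGS solver standing in for the paper's implementation. Everything here is illustrative: the sizes, seed, regularization weight `lam`, smoothing parameter `tau`, and the hyperbolic surrogate for $\Vert x\Vert_1$ are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
M, N, K = 32, 64, 4                          # illustrative sizes, not the paper's
A = rng.standard_normal((M, N)) / np.sqrt(M) # Gaussian observation matrix
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
y = A @ x_true                               # noiseless measurements

lam, tau = 1e-3, 1e-3                        # assumed regularization / smoothing
def F(x):
    r = A @ x - y
    return 0.5 * (r @ r) + lam * np.sum(np.sqrt(x**2 + tau**2))
def gradF(x):
    return A.T @ (A @ x - y) + lam * x / np.sqrt(x**2 + tau**2)

# smoothed problem (4) solved by a limited-memory quasi-Newton method
res = minimize(F, np.zeros(N), jac=gradF, method="L-BFGS-B")
```

The solver should at least drive the objective below its value at the zero vector and fit the measurements closely, since the least-squares term dominates for small `lam`.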

## 8. Conclusions

In this paper, the L-BFGS quasi-Newton method is applied to the reconstruction of sparse signals in wireless sensor networks. The proposed algorithm does not need to calculate and store $H_k$ directly; only two matrices of size $m \times n$ need to be stored. By discarding some vectors and sacrificing some reconstruction accuracy, the algorithm greatly reduces the reconstruction time and improves the reconstruction rate compared with the BFGS algorithm. Simulation experiments compare the reconstruction accuracy and reconstruction time of the proposed algorithm, the BFGS algorithm, the NSHTP algorithm, and the BP algorithm. The experiments show that the reconstruction accuracy of the proposed algorithm is higher than that of the BP algorithm and the NSHTP algorithm but slightly lower than that of the BFGS algorithm. However, the reconstruction rate of the proposed algorithm is the fastest and most stable among the four algorithms, about twice that of the BFGS algorithm. Therefore, the proposed algorithm achieves a high reconstruction rate with competitive reconstruction accuracy and is a better reconstruction algorithm than the BFGS algorithm.


## References

1. Benelhouri, A.; Idrissi, S.H.; Antari, J. Evolutionary routing based energy-aware multi-hop scheme for lifetime maximization in heterogeneous WSNs. Simul. Model. Pract. Theory **2022**, 116, 102471–102487.
2. AlZobi, F.I.; AlZubi, A.A.; Yurii, K.; Alharbi, A.; Alanazi, J.M.; Smadi, S. An Optimal Scheme for WSN Based on Compressed Sensing. Comput. Mater. Contin. **2022**, 72, 1053–1069.
3. Deng, Q.; Zeng, H.; Zhang, J.; Tian, S.; Cao, J.; Li, Z.; Liu, A. Compressed sensing for image reconstruction via back-off and rectification of greedy algorithm. Signal Process. **2019**, 157, 280–287.
4. Sorokovikov, P.; Gornov, A. Combined non-convex optimization algorithms based on differential evolution, harmony search, firefly, and L-BFGS methods. IOP Conf. Ser.: Mater. Sci. Eng. **2021**, 1047, 012077.
5. Traoré, C.; Pauwels, E. Sequential convergence of AdaGrad algorithm for smooth convex optimization. Oper. Res. Lett. **2021**, 49, 452–458.
6. Alahari, R.K.; Satya, P.K.; Kishan, R. Low Complexity FFT Factorization for CS Reconstruction. Int. J. Eng. Adv. Technol. **2020**, 9, 438–442.
7. Mohimani, G.H.; Babaie-Zadeh, M.; Gorodnitsky, I.; Jutten, C. Sparse Recovery using Smoothed ℓ0 (SL0): Convergence Analysis. CoRR **2010**, 1001, 5073–5087.
8. Alaifari, R.; Daubechies, I.; Grohs, P.; Thakur, G. Reconstructing Real-Valued Functions from Unsigned Coefficients with Respect to Wavelet and Other Frames. J. Fourier Anal. Appl. **2017**, 23, 1480–1494.
9. Daubechies, I.; Friese, M.; Mol, C. An Iterative Thresholding Algorithm for Linear Inverse Problems with a Sparsity Constraint. Commun. Pure Appl. Math. **2004**, 57, 1413–1457.
10. Chen, S.S.; Donoho, D.L.; Saunders, M.A. Atomic Decomposition by Basis Pursuit. SIAM J. Sci. Comput. **1998**, 20, 33–61.
11. Hager, W.W.; Phan, D.D.; Zhang, H.C. Gradient-Based Methods for Sparse Recovery. Soc. Ind. Appl. Math. **2011**, 4, 146–165.
12. Ma, D.X.; Zhang, M.H.; Meng, X. Fast smooth l0 norm method for compressed sensing signal reconstruction. Sci. Technol. Eng. **2013**, 13, 2377–2381.
13. Chen, F.H.; Li, S.A. BFGS correction Quasi-Newton method for large-scale signal recovery. Chin. J. Math. **2015**, 35, 727–734.
14. Lu, L.B.; Wang, K.P.; Tan, H.D.; Li, Q.K. Three-dimensional magnetotelluric inversion using L-BFGS. Acta Geophys. **2020**, 68, 1049–1066.
15. Jin, D.; Yang, Y.; Ge, T.; Wu, D. A Fast Sparse Recovery Algorithm for Compressed Sensing Using Approximate l0 Norm and Modified Newton Method. Materials **2019**, 12, 1227.
16. Yang, H.; Yang, X.; Zhang, F.; Ye, Q. Robust Plane Clustering Based on L1-Norm Minimization. IEEE Access **2020**, 8, 29489–29500.
17. Nam, N.M.; An, N.T.; Rector, R.B.; Sun, J. Nonsmooth Algorithms and Nesterov's Smoothing Technique for Generalized Fermat-Torricelli Problems. SIAM J. Optim. **2014**, 24, 1815–1839.
18. Duan, C.; Liu, Y.; Xing, C.; Wang, Z. Infrared and Visible Image Fusion Using Truncated Huber Penalty Function Smoothing and Visual Saliency Based Threshold Optimization. Electronics **2021**, 11, 33.
19. Yang, H.; Zhong, J.; Ma, B.L. The Sherman-Morrison-Woodbury Formula of Matrix Core Inverse and its application. J. Jiangxi Univ. Sci. Technol. **2021**, 42, 98–102.
20. Meng, N.; Zhao, Y.-B. Newton-Step-Based Hard Thresholding Algorithms for Sparse Signal Recovery. IEEE Trans. Signal Process. **2020**, 68, 6594–6606.



## Share and Cite

**MDPI and ACS Style**

Lu, X.; Yang, C.; Wu, Q.; Wang, J.; Wei, Y.; Zhang, L.; Li, D.; Zhao, L.
Improved Reconstruction Algorithm of Wireless Sensor Network Based on BFGS Quasi-Newton Method. *Electronics* **2023**, *12*, 1267.
https://doi.org/10.3390/electronics12061267
