Article

A Novel Sparsity Adaptive Algorithm for Underwater Acoustic Signal Reconstruction

1 College of Mechanical and Electronic Engineering, Suzhou University, Suzhou 234000, China
2 College of Computer and Information, Hohai University, Nanjing 211100, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(13), 5018; https://doi.org/10.3390/s22135018
Submission received: 30 May 2022 / Revised: 23 June 2022 / Accepted: 27 June 2022 / Published: 3 July 2022
(This article belongs to the Section Sensor Networks)

Abstract

Most traditional algorithms for reconstructing underwater acoustic signals from low-dimensional compressed data assume that the sparsity is known in advance. To remove this assumption, a sparsity adaptive and variable step-size matching pursuit (SAVSMP) algorithm is proposed. Firstly, the algorithm uses the Restricted Isometry Property (RIP) criterion to estimate an initial sparsity value; it then adjusts this initial value with a curve fitting method to avoid under- or over-estimation, and finally approaches the true sparsity level with an adaptive step size. The algorithm selects atoms by a matching test and uses the least squares method to filter out unsuitable atoms, thereby achieving precise reconstruction of the underwater acoustic signal received by the sonar system. Experimental comparison shows that the proposed algorithm overcomes the drawbacks of existing algorithms, namely long computation time and low reconstruction quality.

1. Introduction

The traditional underwater acoustic array takes the cross-correlation output of the signals received by the sonar system as the test statistic of the reconstruction algorithm [1]. However, owing to multi-path propagation and noise in the real waveguide environment, the spatial correlation of the signals received by different array elements is easily attenuated [2,3]. How to improve the correlation of the received signals so as to achieve precise reconstruction for the underwater array is therefore a research hotspot. Li proposed a frequency-shift compensation method based on the waveguide invariant to improve the correlation of signals received by the sonar system [4,5]. However, the waveguide invariant changes with the topography in complex terrain. This problem can be alleviated by Compressed Sensing (CS) theory, which can effectively reduce signal distortion by setting a reasonable threshold to remove the small-feature part and retain the essential-feature part of the signal.
CS theory, established by Emmanuel Candès et al., shows that a sparse or compressible signal can be reconstructed exactly in some transform domain [6]. CS is an emerging sampling theory that allows signals to be acquired and reconstructed from sub-Nyquist measurements [7,8]. Recovering the original signal from the compressed observations is posed as a norm-minimization problem, and the $\ell_0$-norm formulation is NP-hard. It can instead be solved with a greedy algorithm that uses a progressive approximation strategy, of which the orthogonal matching pursuit (OMP) algorithm is the most representative [9]. Subsequently, many improved algorithms based on orthogonal matching pursuit have been studied [10,11,12]. Although the algorithms mentioned above can improve the spatial correlation of underwater acoustic signals and realize their detection and estimation, they require the sparsity as prior knowledge. Thong T. Do developed the sparsity adaptive matching pursuit (SAMP) algorithm [13], but its fixed step size is not flexible enough to avoid over- and under-estimation. Most of the improved variants reduce the step size once the algorithm starts to converge, but the improvement is limited. A sparsity adaptive subspace pursuit (SASP) algorithm was also proposed [14], but it requires many iterations and has high computational complexity, so it is not feasible for underwater target detection systems with strict real-time requirements.
Therefore, sparsity adaptive reconstruction of underwater acoustic signals needs further research and exploration; it is very important for signal processing to find an algorithm that balances reconstruction time and reconstruction accuracy.

2. Theoretical Analysis

2.1. Compressed Sensing

Assume that a real-valued discrete signal x of sparsity P can be represented by P significant coefficients over an N-dimensional basis, where $P \ll N$ and $x \in \mathbb{R}^N$ [15]. Ignoring background noise, compressed sensing can be described as recovering the high-dimensional signal x from a low-dimensional measurement y:
$y_{M \times 1} = \Phi_{M \times N}\, x_{N \times 1}$ (1)
where $y \in \mathbb{R}^M$ and $\Phi_{M \times N}$ is the sensing matrix with more columns N than rows M. Reconstructing x from y therefore amounts to solving an underdetermined system of equations with infinitely many solutions. Considering the sparsity constraint on x, the reconstruction problem can be written as the following minimization [16]:
$\arg\min_{x} \|x\|_0 \quad \text{s.t.} \quad y = \Phi x$ (2)
where $\|x\|_0$ is the $\ell_0$ norm of x. Since the $\ell_0$ norm is unstable, the solution of Equation (2) does not converge directly; the $\ell_0$-norm minimization problem is NP-hard, and greedy algorithms solve it by sequentially selecting atoms into the support set of the unknown signal. Another approach converts Equation (2) into an $\ell_1$-norm minimization:
$\arg\min_{x} \|x\|_1 \quad \text{s.t.} \quad y = \Phi x$ (3)
This kind of reconstruction is called a convex optimization algorithm; it is more stable, but its run time grows quickly with the signal dimension. Many improved greedy methods, such as the SAMP and SASP algorithms, perform well in both stability and run time. Moreover, the RIP criterion has been proposed for $\Phi$ to guarantee that the measurement vector corresponds to a unique sparse solution. The sensing matrix $\Phi$ satisfies the RIP criterion of order P if there exists $\delta_P \in (0, 1)$ such that
$(1-\delta_P)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta_P)\|x\|_2^2$ (4)
for any P-sparse vector x ($\|x\|_0 \le P$).
As mentioned above, the greedy algorithm and the convex optimization algorithm are the two main methods of signal reconstruction. Unlike the convex optimization method, the greedy method does not impose the sparsity constraint explicitly, but directly seeks the sparse solution by matching. It is simple to implement, has low computational complexity and converges quickly, which meets the requirements of underwater acoustic signal processing for the sonar system.
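As a concrete illustration of Equations (1)–(3) and of the greedy approach named in the Introduction, the following minimal NumPy sketch builds a P-sparse signal, takes Gaussian measurements, and recovers it with a plain textbook OMP loop. The sizes mirror the simulation setup of Section 5; this is a generic reference implementation, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes loosely following the paper's simulation setup (N = 256, M = 128, P = 44).
N, M, P = 256, 128, 44

# P-sparse signal and a Gaussian sensing matrix (columns scaled for convenience).
x = np.zeros(N)
support_true = rng.choice(N, size=P, replace=False)
x[support_true] = rng.standard_normal(P)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x                                     # low-dimensional measurements, Equation (1)

def omp(y, Phi, sparsity):
    """Orthogonal matching pursuit: pick the best-correlated atom, re-fit by least squares."""
    r = y.copy()
    support = []
    for _ in range(sparsity):
        n = int(np.argmax(np.abs(Phi.T @ r)))   # atom most correlated with the residual
        if n not in support:
            support.append(n)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        r = y - Phi[:, support] @ coef          # orthogonalized residual
    x_hat = np.zeros(Phi.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(y, Phi, P)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```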

2.2. Ship Radiated Noise

In this paper, the mathematical model of the underwater acoustic signal received by the sonar system is established according to the characteristics of its frequency spectrum. When a ship is sailing, the propeller, the motors and other propulsion devices, as well as the motion of the hull through the water, all radiate noise into the water. According to the noise source, ship-radiated noise is of three types: propeller noise, hydrodynamic noise and mechanical noise; its frequency spectrum consists of a line spectrum and a continuous spectrum.
The line spectrum is mainly formed by mechanical vibration, propeller resonance and the blades cutting the flow. The frequency $f_m$ of the line spectrum is related to the propeller shaft rotation frequency v and the number of propeller blades l, satisfying the following relationship [17]
$f_m = m\, v\, l$ (5)
where m denotes the harmonic number.
The continuous spectrum is mainly caused by the wide-band noise generated by propeller cavitation, i.e., by the random collapse of a large number of bubbles near the propeller. The mathematical model of the ship-radiated noise x(t) can be expressed as
$x(t) = x_c(t) + x_l(t)$ (6)
where $x_c(t)$ and $x_l(t)$ are the time-domain expressions of the continuous spectrum and the line spectrum, respectively.

3. SAVSMP Algorithm

In order to overcome the shortcomings of the SAMP and SASP algorithms in underwater acoustic signal reconstruction, a sparsity adaptive variable step-size matching pursuit (SAVSMP) algorithm is proposed for the sonar system. The flow chart of the main procedure is shown in Figure 1. The algorithm consists of initial sparsity estimation, first selecting, candidate set making, second selecting, support set updating, residual updating and step-size setting parts. The novelty of SAVSMP lies in the new methods employed in the initial sparsity estimation, first selecting and step-size setting parts.

3.1. Initial Sparsity Estimation

The flow chart of the initial sparsity estimation is shown in Figure 2.
There are several stages in the recovery process, and each stage includes several iterations. Here, i denotes the iteration index and j the stage index, while $r_0$ and $K_0$ denote the initial residual and the initial sparsity. The initial sparsity estimation is similar to that of the SASP and AStMP algorithms, which select atoms according to the RIP criterion. The criterion is formally stated as Theorem 1, which is proved in [14].
Theorem 1.
If the sensing matrix $\Phi$ satisfies the RIP criterion with the parameters $(P, \delta_P)$, then when $K_0 \ge P$, one has
$\|\Phi_{F_{K_0}}^{\mathrm{T}} y\|_2 \ge \frac{1-\delta_P}{\sqrt{1+\delta_P}} \|y\|_2$ (7)
where $F_{K_0}$ is the support set consisting of the $K_0$ selected atoms. Furthermore, we have Theorem 2 as follows.
Theorem 2.
If the sensing matrix $\Phi$ satisfies the deformation of the RIP criterion with the parameters $(P, \delta_P)$, then when $K_0 < P$, one has
$\|\Phi_{F_{K_0}}^{\mathrm{T}} y\|_2 < \frac{1-\delta_P}{\sqrt{1+\delta_P}} \|y\|_2$ (8)
If Equation (8) is satisfied, the initial sparsity is increased, $K_0 = K_0 + 1$, and the support set is updated as
$F_{K_0} = \max(|\Phi^{\mathrm{T}} y|, K_0)$ (9)
where $\max(\cdot, K_0)$ returns the indices of the $K_0$ largest entries. After updating, the algorithm returns to the judgment step; the initial sparsity estimation is complete once the criterion is no longer met.
It can be seen from Equation (8) that the larger $\delta_P$ is, the smaller the estimated $K_0$ becomes; conversely, the smaller $\delta_P$ is, the larger the $K_0$ that is obtained. Therefore, $K_0$ should be adjusted to $K_0'$ in order to prevent over-estimation and under-estimation. Taking $\delta_P$ as the independent variable and the ratio $K_0'/K_0$ as the dependent variable, the following function is obtained by MATLAB curve fitting: $K_0' = (a_1 \delta_P + b_1) K_0$ with $a_1 = 10$ and $b_1 = -0.5$. The Sum of Squares due to Error (SSE) is $4.437 \times 10^{-31}$, which indicates that the goodness of fit of the obtained formula is high.
In this way the over-estimated value obtained with a small $\delta_P$ is reduced, and the under-estimated value obtained with a large $\delta_P$ is increased. In contrast, the AStMP algorithm halves $K_0$ whatever value the RIP constant $\delta_P$ takes, which results in under-estimation. After updating the support set and the residual, the subroutine returns.
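The estimation-plus-adjustment procedure above can be sketched in a few lines of NumPy. The loop implements the RIP-based check of Theorems 1 and 2 (written here with the $(1-\delta_P)/\sqrt{1+\delta_P}$ threshold) together with the linear correction $K_0' = (a_1 \delta_P + b_1) K_0$, $a_1 = 10$, $b_1 = -0.5$; the starting value $\ln M$ comes from Algorithm 1, and the test data at the end are illustrative, not taken from the paper.

```python
import numpy as np

def estimate_initial_sparsity(Phi, y, delta_p):
    """RIP-based initial sparsity estimate (Section 3.1), then the curve-fitting correction."""
    M, N = Phi.shape
    corr = np.abs(Phi.T @ y)                          # projections |Phi^T y|, as in Equation (9)
    threshold = (1.0 - delta_p) / np.sqrt(1.0 + delta_p)
    K0 = max(int(np.log(M)), 1)                       # initialization K0 = ln M (Algorithm 1)
    while K0 < N:
        F = np.argsort(corr)[-K0:]                    # K0 atoms with the largest projections
        if np.linalg.norm(Phi[:, F].T @ y) >= threshold * np.linalg.norm(y):
            break                                     # Theorem 1 satisfied: stop increasing K0
        K0 += 1                                       # Equation (8) holds: sparsity under-estimated
    K0_adj = int(round((10.0 * delta_p - 0.5) * K0))  # K0' = (a1 * delta_P + b1) * K0
    return max(K0_adj, 1)

# Hypothetical usage with a Gaussian sensing matrix and a 44-sparse signal.
rng = np.random.default_rng(1)
Phi = rng.standard_normal((128, 256)) / np.sqrt(128)
x = np.zeros(256)
x[rng.choice(256, 44, replace=False)] = rng.standard_normal(44)
print(estimate_initial_sparsity(Phi, Phi @ x, delta_p=0.2))
```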

3.2. First Selecting

Greedy algorithms usually use the inner product criterion to measure the similarity between vectors. Suppose $x = (x_1, x_2, \ldots, x_n)$ and $y = (y_1, y_2, \ldots, y_n)$ are two n-dimensional vectors; the inner product criterion between them is defined as follows [18]:
$\cos(x, y) = \dfrac{\langle x, y\rangle}{\|x\| \cdot \|y\|} = \dfrac{\sum_{t=1}^{n} x_t y_t}{\left(\sum_{t=1}^{n} x_t^2 \sum_{t=1}^{n} y_t^2\right)^{1/2}}$ (10)
It can be seen that the inner product criterion selects atoms by computing the cosine of the angle between the residual and the columns of the sensing matrix; the greater the absolute value, the more relevant the selected atom is to the residual. However, this matching method cannot reflect the influence of the important components of the data when measuring the similarity of signals [19]. Therefore, a better method is needed to select the atom that best matches the residual.
The Dice coefficient criterion is introduced here, and its definition is as follows [20,21,22]:
$\mathrm{Dice}(x, y) = \dfrac{\sum_{t=1}^{n} x_t y_t}{\left(\sum_{t=1}^{n} x_t^2 + \sum_{t=1}^{n} y_t^2\right)/2}$ (11)
It can be seen from Equations (10) and (11) that cos(x, y) and Dice(x, y) have the same range of values [0, 1] and both equal 1 when the two vectors are equal. The denominator of the inner product criterion, $\left(\sum_{t=1}^{n} x_t^2 \sum_{t=1}^{n} y_t^2\right)^{1/2}$, is the geometric mean of the two vectors' sums of squares, whereas the denominator of the Dice coefficient criterion, $\left(\sum_{t=1}^{n} x_t^2 + \sum_{t=1}^{n} y_t^2\right)/2$, is their arithmetic mean. According to mean-value theory, the geometric mean reflects the average trend of the overall sample, while the arithmetic mean is an unbiased estimate of the individual expectations; the arithmetic mean is therefore better at avoiding partial information loss and preserving the integrity of the signal [23]. Consequently, the Dice coefficient criterion is more conducive to selecting the appropriate atom than the inner product criterion, since it reflects the contribution of each element of the signal [24,25]. The proposed SAVSMP algorithm therefore uses the following rule to select the best-matching atoms [26,27]:
$\Omega_i = \max(\mathrm{Dice}(\Phi, r_{i-1}), K_j)$ (12)
i.e., the $K_j$ atoms of $\Phi$ with the largest Dice coefficients with the residual $r_{i-1}$ are selected.
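To make the selection rule of Equation (12) concrete, the sketch below computes both the inner-product (cosine) and Dice scores between each column of $\Phi$ and the current residual and returns the indices of the $K_j$ best-matching atoms. The function names and the random test data are illustrative, not part of the paper; absolute values are taken so that strongly anti-correlated atoms are also selected.

```python
import numpy as np

def cosine_scores(Phi, r):
    """|cos| similarity between each column of Phi and the residual r (Equation (10))."""
    num = np.abs(Phi.T @ r)
    den = np.linalg.norm(Phi, axis=0) * np.linalg.norm(r)
    return num / den

def dice_scores(Phi, r):
    """|Dice| coefficient between each column of Phi and the residual r (Equation (11))."""
    num = np.abs(Phi.T @ r)
    den = 0.5 * (np.sum(Phi ** 2, axis=0) + np.dot(r, r))
    return num / den

def select_atoms(Phi, r, K_j):
    """First selecting step, Equation (12): indices of the K_j largest Dice scores."""
    return np.argsort(dice_scores(Phi, r))[-K_j:]

# Illustrative comparison on random data.
rng = np.random.default_rng(2)
Phi = rng.standard_normal((128, 256))
r = rng.standard_normal(128)
print("cosine picks:", np.argsort(cosine_scores(Phi, r))[-5:])
print("dice picks:  ", select_atoms(Phi, r, 5))
```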

3.3. Step-Size Setting

An adaptive step size is used in the SAVSMP algorithm, which effectively reduces the running time. The residual r is first compared with the threshold $\sigma_1$: if $\|r_i\|_2 \le \sigma_1$, the program terminates; otherwise, the residuals of two adjacent iterations are compared. If $\|r_i\|_2 \ge \|r_{i-1}\|_2$, the value of $\chi$ is calculated, where $\chi = \|y\|_2 / \|r\|_2$. At the first iteration of a greedy algorithm, $\chi$ equals 1 since the residual r is initialized as y. The residual r decreases gradually as the number of iterations increases, and the iteration ends when r reaches the threshold, so $\chi \in [1, +\infty)$. Because most of the effective atoms have been put into the support set by the time $\chi$ is about 10 [28,29], the threshold $\xi = 9 + 1/\chi$, with $\xi \in (9, 10]$, is set to choose the method for updating the step size. SAVSMP uses the exponential curve-fitting method if $\chi$ is smaller than $\xi$; otherwise, it employs the weak matching method to update the step size. The flow chart of the step-size setting is shown in Figure 3.
In the exponential curve-fitting method, the measurement-to-residual energy ratio $\chi$ and the step-size scaling ratio $\beta$ are taken as the independent and dependent variables. According to the expected relation that $\beta$ decreases exponentially as $\chi$ increases, MATLAB simulation yields the update $k_j = \beta k_{j-1}$ with $\beta = a_2 \exp(b_2 \chi)$, where $a_2 \in (4.80, 5.42)$ and $b_2 \in (-0.26, -0.22)$. The SSE of the fitted function is about 0.011, which indicates a high goodness of fit. The relation between $\chi$ and $\beta$ is shown in Figure 4; it can be seen that the step size changes in a nonlinear manner rather than growing linearly.
The weak matching method is employed to approach the sparsity level closely. The algorithm uses the maximum value $M_a$ and the median value $M_e$ of the inner products V between the sensing matrix $\Phi$ and the residual $r_i$ to set the threshold. The definitions of V, $M_a$ and $M_e$ are
$V = \Phi^{\mathrm{T}} r_i = (v_1, v_2, \ldots, v_N)$ (13)
$M_a = \max_{1 \le n \le N} |v_n|$ (14)
$M_e = \mathrm{median}_{1 \le n \le N} |v_n|$ (15)
The threshold $\gamma$ is set as $\gamma = \tau (a_3 M_a + b_3 M_e)/c_3$, where $\tau \in (0, 1]$ and $a_3$, $b_3$, $c_3$ are set as 2, 3 and 3, respectively; a suitable threshold $\gamma$ helps to select atoms more efficiently. Atoms with $|v_n| > \gamma$ are selected and put into the support set. The step size of each stage can be adjusted through $\tau$, and the experimental results show that $\tau \in [0.8, 0.9]$ balances reconstruction accuracy and computation speed [28].
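A compact sketch of the step-size rule described in this subsection is given below: the exponential branch rescales the previous step by $\beta = a_2 \exp(b_2 \chi)$, and the weak matching branch counts the atoms whose correlations exceed $\gamma = \tau(a_3 M_a + b_3 M_e)/c_3$. The mid-range constants $a_2 = 5.1$ and $b_2 = -0.24$ are picked from the fitted intervals quoted above, and the grouping of the threshold formula follows the reading adopted in the text; both are assumptions made for illustration.

```python
import numpy as np

def next_step_size(Phi, r_i, y, k_prev, tau=0.9,
                   a2=5.1, b2=-0.24, a3=2.0, b3=3.0, c3=3.0):
    """Step-size setting of Section 3.3: exponential scaling below xi, weak matching above."""
    chi = np.linalg.norm(y) / np.linalg.norm(r_i)     # measurement-to-residual energy ratio
    xi = 9.0 + 1.0 / chi                              # switching threshold, xi in (9, 10]
    if chi < xi:
        beta = a2 * np.exp(b2 * chi)                  # exponential curve-fitting branch
        return max(int(round(beta * k_prev)), 1), None
    v = Phi.T @ r_i                                   # Equation (13)
    Ma, Me = np.max(np.abs(v)), np.median(np.abs(v))  # Equations (14) and (15)
    gamma = tau * (a3 * Ma + b3 * Me) / c3            # weak matching threshold
    L = np.flatnonzero(np.abs(v) > gamma)             # atoms passing the threshold
    return max(len(L), 1), L

# Illustrative call on random data.
rng = np.random.default_rng(3)
Phi, y = rng.standard_normal((128, 256)), rng.standard_normal(128)
print(next_step_size(Phi, 0.2 * y, y, k_prev=2)[0])
```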

4. Performance Analysis of SAVSMP Algorithm

The SAVSMP algorithm comprises two parts: initial sparsity estimation and signal reconstruction. The use of the Dice coefficient criterion facilitates the selection of better-matched atoms. In addition, with the adaptively changing step size, the reconstruction accuracy of SAVSMP is comparable to that of SAMP with a fixed step size. Using the initial sparsity estimation is equivalent to indirectly reducing the sparsity that remains to be recovered. The complexity of the initial sparsity estimation is mostly concentrated in the calculation of M-dimensional projections, so it is relatively small, whereas the least-squares reconstruction of the target signal accounts for a significant portion of the remaining computation. As a result, the overall complexity of SAVSMP is determined by the number of stages, and each stage can be considered as a single SP algorithm with a small sparsity.
The proposed SAVSMP algorithm is summarized in Algorithm 1. The computational complexity of the initial sparsity estimation part is dominated by the projection operation of Equation (9), and that of the signal reconstruction part by the projection operation of Equation (12), while the least-squares step itself accounts for only a small amount of computation. Therefore, the computational complexity of SAVSMP depends on how many times Equations (9) and (12) are evaluated. According to the analysis, Equation (9) requires $K_0 - \ln M$ iterations and Equation (12) requires $(P - K_0')/k_{SAVSMP}$ stages, where $k_{SAVSMP}$ denotes the average step size. Since the complexity of one projection operation is $O(MN)$, the computational complexity of the SAVSMP algorithm is $O\big((K_0 - \ln M + (P - K_0')/k_{SAVSMP})\,MN\big)$. Because the step size grows steeply before the threshold and only slowly after it, $K_0 - \ln M + (P - K_0')/k_{SAVSMP} \le P/k_{SAMP}$ is obtained. Compared with the computational complexity of the OMP algorithm, $O(PMN)$ [9], the SAMP algorithm, $O\big((P/k_{SAMP})\,MN\big)$ [13], and the SASP algorithm, $O(P^2 MN)$ [14], the proposed SAVSMP algorithm has lower computational complexity.
Each stage of the proposed algorithm can be considered as an SP algorithm, which can reconstruct the signal in a finite number of iterations [10]. Therefore, SAVSMP can also reconstruct x from y in a finite number of stages if $\Phi$ satisfies the RIP criterion with $\delta_P < 0.165$. Since $\|r\|_2$ is monotonically decreasing, the algorithm converges to a local minimum.
Algorithm 1 Process of the SAVSMP Algorithm.
Input: measurement vector y, sensing matrix $\Phi$, RIP constant $\delta_P$, parameters $\sigma_1$, $\sigma_2$, $\tau$.
Initialization: $r_0 = y$, $K_0 = \ln M$, $i = 0$, $j = 0$
      while $\|\Phi_{F_{K_0}}^{\mathrm{T}} y\|_2 < \frac{1-\delta_P}{\sqrt{1+\delta_P}} \|y\|_2$
            $K_0 = K_0 + 1$, $F_{K_0} = \max(|\Phi^{\mathrm{T}} y|, K_0)$
      end while
      $K_0' = (a_1 \delta_P + b_1) K_0$, $F_0 = \max(|\Phi_{F_{K_0}}^{\mathrm{T}} y|, K_0')$,
      $r_0 = y - \Phi_{F_0} \Phi_{F_0}^{+} y$
repeat: $i = i + 1$, $\Omega_i = \arg\max_{K_j} \mathrm{Dice}(\Phi, r_{i-1})$, $H_i = F_{i-1} \cup \Omega_i$,
      $F_i = \arg\max_{K_j} |\Phi_{H_i}^{+} y|$, $r_i = y - \Phi_{F_i} \Phi_{F_i}^{+} y$
      if $\|r_i\|_2 \le \sigma_1$
            quit the iteration
      else if $\|r_i\|_2 \ge \|r_{i-1}\|_2$
            $\chi = \|y\|_2 / \|r_i\|_2$, $\xi = 9 + 1/\chi$
            if $j \le \sigma_2$
                  if $\chi < \xi$
                        $\beta = a_2 \exp(b_2 \chi)$, $k_j = \beta k_{j-1}$
                  else  $V = \Phi^{\mathrm{T}} r_i$,
                        $M_a = \max_{1 \le n \le N} |v_n|$, $M_e = \mathrm{median}_{1 \le n \le N} |v_n|$,
                        $\gamma = \tau (a_3 M_a + b_3 M_e)/c_3$,
                        $L_i = \{n : |v_n| > \gamma\}$,
                        $F_i = F_{i-1} \cup L_i$, $k_j = |L_i|$
                  end if
            else
                  $k_j = |L_i|$
            end if
            $j = j + 1$, $K_j = K_{j-1} + k_j$
      else
            $F = F_i$
      end if
Output: $\hat{x}_F = \Phi_F^{+} y$
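For readers who want to experiment, the following NumPy sketch implements the overall loop of Algorithm 1 in simplified form: RIP-based initial sparsity estimation, Dice-based first selecting, least-squares second selecting, and a stage switch when the residual stops decreasing. For brevity the stage switch uses a fixed step increment instead of the full Section 3.3 rule, and all parameter defaults are assumptions for illustration rather than the authors' reference implementation.

```python
import numpy as np

def savsmp_sketch(y, Phi, delta_p=0.2, sigma1=1e-6, step=2, max_iter=300):
    """Simplified SAVSMP-style reconstruction (Algorithm 1, with a fixed stage step)."""
    M, N = Phi.shape
    corr = np.abs(Phi.T @ y)

    # --- initial sparsity estimation (Section 3.1) ---
    K0 = max(int(np.log(M)), 1)
    thr = (1.0 - delta_p) / np.sqrt(1.0 + delta_p)
    while K0 < M:
        F = np.argsort(corr)[-K0:]
        if np.linalg.norm(Phi[:, F].T @ y) >= thr * np.linalg.norm(y):
            break
        K0 += 1
    K = max(int(round((10.0 * delta_p - 0.5) * K0)), 1)    # adjusted K0'

    # --- stage-wise reconstruction ---
    F = np.argsort(corr)[-K:]
    coef, *_ = np.linalg.lstsq(Phi[:, F], y, rcond=None)
    r = y - Phi[:, F] @ coef
    for _ in range(max_iter):
        if np.linalg.norm(r) <= sigma1 or K >= M:
            break
        # first selecting: K atoms with the largest Dice coefficients (Equation (12))
        dice = np.abs(Phi.T @ r) / (0.5 * (np.sum(Phi ** 2, axis=0) + r @ r))
        H = np.union1d(F, np.argsort(dice)[-K:])           # candidate set
        # second selecting: keep the K atoms with the largest least-squares coefficients
        c_H, *_ = np.linalg.lstsq(Phi[:, H], y, rcond=None)
        F_new = H[np.argsort(np.abs(c_H))[-K:]]
        c_F, *_ = np.linalg.lstsq(Phi[:, F_new], y, rcond=None)
        r_new = y - Phi[:, F_new] @ c_F
        if np.linalg.norm(r_new) >= np.linalg.norm(r):
            K += step                                      # stage switch: enlarge sparsity level
        else:
            F, coef, r = F_new, c_F, r_new                 # keep the improved support
    x_hat = np.zeros(N)
    x_hat[F] = coef
    return x_hat

# Quick self-test on a synthetic 44-sparse signal (sizes as in Section 5).
rng = np.random.default_rng(4)
N, M, P = 256, 128, 44
x = np.zeros(N)
x[rng.choice(N, P, replace=False)] = rng.standard_normal(P)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_hat = savsmp_sketch(Phi @ x, Phi)
print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```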

5. Simulation Results

The performance of the SAVSMP algorithm is evaluated by comparison with the SAMP, SASP, OMP and AStMP algorithms. A Gaussian sparse signal is used with length N = 256, sparsity P = 44 and number of measurements M = 128. The sensing matrix $\Phi_{M \times N}$ is a Gaussian random matrix with $\mu = 0$ and $\sigma = 1$. The parameters $\sigma_1$, $\sigma_2$ and $\tau$ take the values $1 \times 10^{-6}$, 10 and 0.9, respectively. The background noise is Gaussian white noise, the array is a uniform linear array composed of nine elements, and the element spacing is half a wavelength. All simulations are implemented in MATLAB 2011a on a PC with a 3.50 GHz Intel Core i3 processor and 4.0 GB of memory running Windows 7.
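The experimental setup described above can be reproduced with a few lines of NumPy; the snippet below builds the Gaussian sparse test signal and sensing matrix with the stated parameters (N = 256, P = 44, M = 128, entries drawn from N(0, 1)) and adds white Gaussian noise at a chosen SNR. The noise level and the call to a reconstruction routine are placeholders for whichever algorithm is being compared.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, P = 256, 128, 44           # signal length, measurements, sparsity (Section 5)
snr_db = 20                      # illustrative SNR; the paper sweeps this in Figure 10

# Gaussian sparse test signal and N(0, 1) sensing matrix.
x = np.zeros(N)
x[rng.choice(N, P, replace=False)] = rng.standard_normal(P)
Phi = rng.standard_normal((M, N))
y_clean = Phi @ x

# Additive white Gaussian noise scaled to the chosen SNR.
noise = rng.standard_normal(M)
noise *= np.linalg.norm(y_clean) / (np.linalg.norm(noise) * 10 ** (snr_db / 20))
y = y_clean + noise

# x_hat = some_reconstruction(y, Phi)   # e.g., the SAVSMP sketch from Section 4
# mse = np.mean((x - x_hat) ** 2)
```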

5.1. Sparsity Estimation

The initial sparsity estimate $K_0$ of the SAVSMP algorithm is shown in Figure 5. Taking a smaller value of $\delta_P$ leads to over-estimation, whereas taking a larger value of $\delta_P$ leads to under-estimation. The SAVSMP algorithm therefore adjusts $K_0$ to $K_0' = (a_1 \delta_P + b_1) K_0$, which reduces the over-estimated value and increases the under-estimated value. For example, $K_0$ is halved when $\delta_P$ takes the value 0.1 and tripled when $\delta_P$ takes the value 0.35. In contrast, in the AStMP algorithm $K_0$ is halved whatever value the RIP constant $\delta_P$ takes.
The estimated sparsity $K_0'$ after adjustment for the SAVSMP and AStMP algorithms is shown in Figure 6. Compared with the AStMP algorithm, the SAVSMP algorithm overcomes the under- and over-estimation of the initial sparsity, which can easily lead to more iterations and lower reconstruction accuracy. In addition, the SAVSMP algorithm removes the strong influence of the RIP parameter $\delta_P$ on the estimate, which concentrates between 20 and 25 whatever $\delta_P$ is.

5.2. Iteration Times

From the analysis of the proposed algorithm, the fewer the iterations, the lower the computational complexity. The iteration counts of the different algorithms are compared in Figure 7. The SAVSMP algorithm needs 6 and 8 iterations when $\delta_P$ takes the smaller value 0.1 and the larger value 0.35, respectively. In contrast, the choice of 0.1 or 0.35 for $\delta_P$ matters considerably in the AStMP algorithm, which needs 8 or 12 iterations, respectively. The SASP algorithm needs 18 iterations, nearly three times as many as SAVSMP, and the SAMP algorithm requires 23 iterations, the largest number among these algorithms. Therefore, the SAVSMP algorithm needs the fewest iterations, and $\delta_P$ has little influence on that number.

5.3. Reconstruction Accuracy

To ensure the validity of the experimental data, 500 independent simulations are employed to calculate the reconstruction probability of the OMP, ROMP, StOMP, SP, CoSaMP, SAMP, SASP, AStMP, and SAVSMP algorithms.
The probability of exact reconstruction versus the sparsity level P is shown in Figure 8 for the different algorithms. The reconstruction probability decreases as P increases. Compared with the other algorithms, the OMP algorithm performs the worst. Although the ROMP algorithm is more accurate than OMP, it is much less accurate than the StOMP, SP and CoSaMP algorithms, which achieve 100% reconstruction accuracy when P is small but drop dramatically when $P \ge 35$. The reconstruction probabilities of the SAMP, SASP, AStMP and SAVSMP algorithms are close when P is small; however, the SAVSMP algorithm outperforms the others when P is greater than 60.
The reconstruction probability versus the number of measurements M is shown in Figure 9 for the different algorithms. The reconstruction probability increases as M increases. Compared with the other algorithms, the ROMP algorithm performs the worst. Although the OMP algorithm is more accurate than ROMP, it is much less accurate than the StOMP, SP and CoSaMP algorithms, which achieve 100% reconstruction accuracy when M is larger than 95. The precision of the SAMP, SASP, AStMP and SAVSMP algorithms is similar when M is larger than 80; however, the reconstruction probability of the SAVSMP algorithm is higher than that of the other algorithms when M is smaller than 75. Therefore, the SAVSMP algorithm shows a clear advantage over the other algorithms in terms of reconstruction probability.
The mean square error (MSE) versus the signal-to-noise ratio (SNR) of the SAMP, SASP, AStMP and SAVSMP algorithms is shown in Figure 10. The reconstruction error of each algorithm decreases gradually as the SNR increases. The MSE of the SAVSMP algorithm is lower than that of the other algorithms, and its advantage becomes more obvious as the SNR increases, indicating that the proposed algorithm has a certain robustness.

5.4. Underwater Acoustic Signal Reconstruction Experiment

To verify the applicability of the algorithm to non-Gaussian sparse signals, this section uses the underwater acoustic signal radiated by a ship to test the practical value of the SAVSMP algorithm. The ship-radiated noise, which follows a non-Gaussian distribution, consists of a line spectrum and a continuous spectrum.
Suppose that the number of propeller blades is 3, the rotation speed is 420 r/min, and the shaft frequency is $f_z = 420/60 = 7$ Hz. The fundamental frequency is therefore set as $3 \times 7 = 21$ Hz, and the second, third and fourth harmonics are 42 Hz, 63 Hz and 84 Hz, respectively [17]. In addition, on the premise of satisfying the acoustic characteristics of the line spectrum, 150 Hz, 450 Hz and 800 Hz are selected from the range 100–1000 Hz [30]. The amplitudes of the seven selected frequency points are set to 3.5, 3.1, 2.2, 1.7, 1.5, 2.2 and 2, and the initial phases in radians are set to 0.1, 2, −1, 3, 1, 0 and −1, respectively. The continuous spectrum peak frequency is set to 900 Hz; 11 discrete frequency points are then taken from 0–900 Hz and 5 from 1000–5000 Hz. Given this desired response, the corresponding frequency response of a finite impulse response (FIR) filter can be obtained, and the continuous spectrum is produced by passing white Gaussian noise through the filter. Adding the line spectrum and the continuous spectrum gives the spectrum of the ship-radiated noise.
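A sketch of this synthesis is given below using NumPy/SciPy: the line-spectrum component sums the seven sinusoids with the amplitudes and phases listed above, and the continuous component passes white Gaussian noise through an FIR filter whose desired magnitude response peaks near 900 Hz. The sampling rate (10 kHz, so that the 1000–5000 Hz band reaches Nyquist), the filter length and the exact desired-response values are assumptions, since the paper does not list them.

```python
import numpy as np
from scipy.signal import firwin2, lfilter

fs = 10_000                     # assumed sampling rate (Hz); Nyquist = 5000 Hz
t = np.arange(0, 1.0, 1 / fs)   # one second of signal

# Line spectrum: blade-rate harmonics plus three extra line components (Section 5.4).
freqs = [21, 42, 63, 84, 150, 450, 800]
amps  = [3.5, 3.1, 2.2, 1.7, 1.5, 2.2, 2.0]
phis  = [0.1, 2.0, -1.0, 3.0, 1.0, 0.0, -1.0]
x_line = sum(a * np.cos(2 * np.pi * f * t + p) for f, a, p in zip(freqs, amps, phis))

# Continuous spectrum: white Gaussian noise shaped by an FIR filter whose desired
# magnitude response rises to a peak at 900 Hz and rolls off towards 5000 Hz.
freq_pts = np.concatenate([np.linspace(0, 900, 11), np.linspace(1000, 5000, 5)])
gain_pts = np.concatenate([np.linspace(0.2, 1.0, 11), np.linspace(0.8, 0.0, 5)])
fir = firwin2(numtaps=257, freq=freq_pts, gain=gain_pts, fs=fs)
rng = np.random.default_rng(6)
x_cont = lfilter(fir, 1.0, rng.standard_normal(t.size))

x_ship = x_line + x_cont        # ship-radiated noise model, Equation (6)
```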
The reconstructions of the ship-radiated noise by the SAMP, SASP, AStMP and SAVSMP algorithms are shown in Figure 11. Obviously, the SAVSMP algorithm achieves underwater acoustic signal reconstruction with higher precision.
In addition to the visual comparison of the reconstructed signals, the peak signal-to-noise ratio (PSNR) is used to evaluate the reconstruction quality. The PSNR is expressed as [31]
$PSNR = 10 \log_{10} \dfrac{255^2 \cdot Framesize}{\sum_{n=1}^{Framesize} (O_n - R_n)^2}$ (16)
where Framesize represents the total number of samples of the target signal, and $O_n$ and $R_n$ represent the nth values of the original and reconstructed signals, respectively. Equation (16) shows that a larger PSNR means higher reconstruction quality. The compression ratio versus the PSNR for the four algorithms is shown in Figure 12, which reveals that the PSNR increases as the compression ratio increases. Compared with the other algorithms in the figure, the SAVSMP algorithm attains the highest PSNR for underwater acoustic signal reconstruction.
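A direct transcription of Equation (16) is given below; the function name and the random test vectors are illustrative only.

```python
import numpy as np

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio of Equation (16), with a 255 peak value."""
    original = np.asarray(original, dtype=float)
    reconstructed = np.asarray(reconstructed, dtype=float)
    frame_size = original.size
    err = np.sum((original - reconstructed) ** 2)   # sum of squared reconstruction errors
    return 10.0 * np.log10(255.0 ** 2 * frame_size / err)

# Example: a reconstruction with small random error gives a high PSNR.
rng = np.random.default_rng(7)
o = rng.uniform(0, 255, size=1024)
print(psnr(o, o + rng.normal(0, 1.0, size=1024)))
```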

6. Conclusions

In this work, a novel sparsity adaptive algorithm, SAVSMP, for underwater acoustic signal reconstruction has been presented. Since SAVSMP estimates the sparsity with the RIP criterion, it overcomes the disadvantage of previous algorithms that require prior information on the signal sparsity. SAVSMP selects atoms into the support set with an adaptive step size that converges to the real sparsity quickly. To avoid over- or under-estimation, SAVSMP adjusts the initial sparsity estimate adaptively with a curve fitting method. Both the theoretical analysis and the systematic simulations demonstrate that the algorithm can accurately reconstruct ordinary random signals with low complexity. Given its significant advantage in PSNR, SAVSMP can be used for the reconstruction of underwater acoustic signals received by the sonar system.

Author Contributions

Conceptualization, N.L.; Data curation, N.L.; Formal analysis, N.L.; Investigation, X.Y.; Methodology, X.Y.; Project administration, H.L.; Resources, H.L.; Software, H.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Natural Science Foundation of China Nos. 11173010 and U1531101.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

We would like to thank Wenyun Gao, Xiaofei Niu and Wei Fu for their support with data collection, figure preparation and English proofreading.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gurley, K.; Kijewski, T.; Kareem, A. First- and Higher-Order Correlation Detection Using Wavelet Transforms. J. Eng. Mech. 2003, 129, 188–201.
2. Duda, T.F.; Collis, J.M.; Lin, Y.T.; DeFerrari, H.A. Horizontal Coherence of Low-Frequency Fixed-Path Sound in a Continental Shelf Region with Internal-Wave Activity. J. Acoust. Soc. Am. 2012, 131, 1782–1797.
3. Yang, T.C. Temporal Coherence of Sound Transmissions in Deep Water Revisited. J. Acoust. Soc. Am. 2008, 124, 113–127.
4. Su, X.X.; Zhang, R.H.; Li, F.H. Improvement of the longitudinal correlations of acoustical field by using the waveguide invariance. Acta Acust. 2006, 4, 305–309.
5. Li, Y.; Lee, K.; Bresler, Y. Identifiability in Blind Deconvolution with Subspace or Sparsity Constraints. IEEE Trans. Inf. Theory 2016, 62, 4266–4275.
6. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
7. Candes, E.J.; Wakin, M.B. An Introduction to Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
8. Candes, E.J.; Romberg, J.K.; Tao, T. Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Fourier Information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
9. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
10. Dai, W.; Milenkovic, O. Subspace Pursuit for Compressive Sensing: Closing the Gap between Performance and Complexity; Illinois University: Champaign, IL, USA, 2008.
11. Needell, D.; Vershynin, R. Signal recovery from random measurements via regularized orthogonal matching pursuit. IEEE J. Sel. Top. Signal Process. 2010, 4, 310–316.
12. Donoho, D.L.; Tsaig, Y.; Drori, I. Sparse solution of underdetermined systems of linear equations by stagewise orthogonal matching pursuit. IEEE Trans. Inf. Theory 2012, 58, 1094–1121.
13. Do, T.T.; Gan, L.; Nguyen, N. Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In Proceedings of the 42nd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 26–29 October 2008; pp. 581–587.
14. Yang, C.; Feng, W.; Feng, H. A sparsity adaptive subspace pursuit algorithm for compressive sampling. Acta Electron. Sin. 2010, 38, 1914–1917. (In Chinese)
15. Sun, T. Compressive Sensing and Imaging Applications. Ph.D. Thesis, Rice University, Houston, TX, USA, 2011.
16. Candes, E.J.; Romberg, J.K.; Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 2006, 59, 1207–1223.
17. Liu, B. Principles of Underwater Acoustics; Harbin Engineering University: Harbin, China, 2009; pp. 216–231. (In Chinese)
18. Salton, G.; McGill, M.J. Introduction to Modern Information Retrieval; McGraw-Hill: New York, NY, USA, 1987.
19. Lei, Y. Atom Selection in Sparse Dictionary with Compressive Sensing OMP Reconstruction Algorithm; South China University of Technology: Guangzhou, China, 2011. (In Chinese)
20. Dice, L.R. Measures of the amount of ecologic association between species. Ecology 1945, 26, 297–302.
21. Mulekar, M.S.; Brown, C.S. Distance and Similarity Measures. In Encyclopedia of Social Network Analysis and Mining; Springer: New York, NY, USA, 2017.
22. Yong, R.; Ye, J.; Liang, Q.F. Estimation of the joint roughness coefficient (JRC) of rock joints by vector similarity measures. Bull. Eng. Geol. Environ. 2018, 77, 735.
23. Bullen, P.S.; Mitrinovic, D.S.; Vasic, P.M. Means and Their Inequalities; Springer: New York, NY, USA, 1988.
24. Wang, D.; Wu, Y.; Cao, W. Improved Reconstruction Algorithm for Compressed Sensing. J. Northwestern Polytech. Univ. 2017, 35, 774–779. (In Chinese)
25. Wang, F.; Zhao, Z.; Liu, X. Improved Sparsity Adaptive Matching Pursuit Algorithm. Comput. Sci. 2018, 45, 234–238. (In Chinese)
26. Zhang, L.; Liang, D. Study of spectral reflectance reconstruction based on an algorithm for improved orthogonal matching pursuit. J. Opt. Soc. Korea 2016, 20, 515–523.
27. Yu, H.; Ma, C.; Wang, H. Transmission Line Fault Location Method Based on Compressed Sensing Estimation of Traveling Wave Natural Frequencies. Trans. China Electrotech. Soc. 2017, 32, 140–148. (In Chinese)
28. Fu, Y.; Liu, S.; Ren, C. Adaptive Step-size Matching Pursuit Algorithm for Practical Sparse Reconstruction. Circuits Syst. Signal Process. 2017, 36, 2276–2284.
29. Li, N.; Yin, X.; Guo, H. Variable step-size matching pursuit based on oblique projection for compressed sensing. IET Image Process. 2020, 14, 766–773.
30. Zhang, Q. Study on Blind Source Separation of Underwater Acoustic Signals; Harbin Institute of Technology: Harbin, China, 2013.
31. Yao, S.; Sangaiah, A.K.; Zheng, Z. Sparsity estimation matching pursuit algorithm based on restricted isometry property for signal reconstruction. Future Gener. Comput. Syst. 2018, 88, 747–754.
Figure 1. Flow chart of the main procedure.
Figure 2. Flow chart of the initial sparsity estimation.
Figure 3. Flow chart of the step-size setting.
Figure 4. Measurement-to-residual energy ratio $\chi$ versus step-size scaling ratio $\beta$.
Figure 5. The number of measurements versus the initial sparsity estimation (M = 128, N = 256, P = 44 (the red dotted line in the figure)).
Figure 6. The estimated sparsity $K_0'$ after adjustment for the SAVSMP and AStMP algorithms (M = 128, N = 256, P = 44).
Figure 7. Iteration count comparison of the different algorithms (M = 128, N = 256, P = 44).
Figure 8. The probability of exact reconstruction versus the sparsity level P for the different algorithms (M = 128, N = 256).
Figure 9. The reconstruction probability versus the number of measurements (M = 128, N = 256, P = 44).
Figure 10. The reconstruction error versus SNR for the different algorithms (M = 128, N = 256, P = 44, $\sigma_1 = 1 \times 10^{-6}$, $\epsilon = 0.3$, $\beta = 0.1$).
Figure 11. The reconstruction of ship-radiated noise.
Figure 12. The compression ratio versus the PSNR (underwater acoustic signal).
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

