Method to Solve Underwater Laser Weak Waves and Superimposed Waves

With the rapid development of Lidar technology, the use of Lidar for underwater terrain detection has become feasible. A challenge remains in the signal-resolution process: the underwater laser echo signal differs from a signal propagating in air, and it easily produces weak waves and superimposed waves. Existing waveform decomposition methods are not effective in processing these waveform signals, and the underwater waveform signal cannot be correctly decomposed, resulting in subsequent data-processing errors. To address these issues, this study used a drone equipped with a 532 nm laser to detect a pond as the study background. This paper proposes an improved inflection point selection decomposition method to estimate the parameters. By comparing it with other decomposition methods, we found that the RMSE is 2.544 and the R2 is 0.995975, which is more stable and accurate. After estimating the parameters, this study used oscillating particle swarm optimization (OPSO) and the Levenberg–Marquardt (LM) algorithm to optimize the estimated parameters; the final results show that the method in this paper is closer to the original waveform. To verify the processing effect of the method on complex waveforms, this paper decomposes and optimizes simulated complex waveforms; the final RMSE is 0.0016, the R2 is 1, and the Gaussian components after decomposition can fully represent the original waveform. This method outperforms other decomposition methods in complex waveform decomposition, especially regarding weak waves and superimposed waves.


Introduction
In recent years, deep-sea exploration has attracted considerable attention because of its usefulness in resource availability, defense, and transportation [1]. With the concept of Digital Earth proposed by the academician Guo [2], modern technologies have been widely used in the surveying and mapping industry, and the technology of underwater detection tends to be rich and diversified.
Several scholars have conducted research on the detection of underwater resources using different sensors. Huo et al. proposed a binocular vision-based underwater target detection and 3D reconstruction system [3]. Chen et al. used sonar techniques to detect damage in the underwater foundations of a bridge [4]. Shen et al. proposed a chaotic-pulse laser with high-frequency intensity modulation for underwater target detection [5]. Nadimi applied synthetic aperture sonar to an underwater gas pipeline, and proposed a new method to improve detection efficiency [6]. Depending on the technologies and devices, the commonly used underwater detection methods are laser radar, binocular vision cameras, and sound radar. The underwater environment usually has uneven illumination, low visibility, and high complexity. Sound radar has the widest application range, but it is vulnerable to interference in the complex and changeable underwater environment. This study used OPSO and the damped LM algorithm to fit the estimated parameters, as some scholars have applied the combination of PSO and LM to other fields and achieved good results [24]. To verify the effectiveness of this method, this study used a drone equipped with a 532 nm laser to detect a pond as the study background, collected relevant data, tested multiple sets of experimental data using the process in this paper, and compared them with traditional algorithms, to verify the effectiveness of the underwater echo calculation process.

Principle of the Gaussian Decomposition Method
The Gaussian decomposition method is based on the assumption that the laser-echo signal is composed of multiple Gaussian functions; this method has been confirmed by many scholars and applied to waveform calculation [25,26]. The purpose of Gaussian decomposition is to extract as many Gaussian components as possible, and obtain the information of each component ( f (x), Equation (1)).
f (x) = Σ i A i exp( −(x − u i ) 2 / (2δ i 2 ) ) + noise (1)
f (x) is the amplitude value of the waveform data; x is the sampling time of the waveform data; A i , u i , δ i are the peak value, center position, and half width of the Gaussian component, respectively; noise is the background noise of the signal.
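The mixture model of Equation (1) can be sketched as follows (illustrative Python; the function name and parameter layout are ours, not from the paper):

```python
import numpy as np

def gaussian_sum(x, components, noise=0.0):
    """Sum of Gaussian components plus a constant noise floor (Equation (1)).

    components: iterable of (A_i, u_i, delta_i) triples -- peak amplitude,
    centre position, and half width of each Gaussian component."""
    x = np.asarray(x, dtype=float)
    y = np.full_like(x, noise)
    for A, u, d in components:
        y += A * np.exp(-(x - u) ** 2 / (2.0 * d ** 2))
    return y
```

Evaluating the model at a component's centre returns that component's amplitude plus the noise floor, which is a quick sanity check when fitting.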

Data Preprocessing
Preprocessing includes noise estimation and the Savitzky-Golay smoothing filter; because the waveform decomposition process is easily affected by noise, it is necessary to preprocess the noise before waveform processing. Firstly, the noise is estimated; the Savitzky-Golay method is then used to remove noise, and the five-point three-time smoothing method is used to smooth the waveform, in order to avoid generating false inflection points that would result in errors in the Gaussian decomposition [16,27]. The fluctuation of the waveform at the beginning and end stages is mainly due to the influence of noise; therefore, this study used the 5% of waveform intensity at the beginning and end stages as the estimated noise, then used the Savitzky-Golay and five-point three-time smoothing methods to filter out the noise, and smoothed the waveform curve after filtering. It is necessary to set a threshold to avoid a large number of pseudo-waveform components ( Figure 1); if the waveform intensity is below the threshold, there is no need to add new Gaussian components to fit the waveform. In this process, the threshold (σ th , Equation (2)) is determined by the estimated noise and the root-mean-square error between the signals before and after filtering (σ n , Equation (3)).
σ th = 3σ n + noise (2)
σ n = sqrt( Σ (y(i) − f x (i)) 2 / n ) (3)
σ n is the root-mean-square error between the signals before and after filtering; y(i), f x (i) are the signal strengths before and after filtering; n is the number of sampling points.
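The threshold estimation of Equations (2) and (3) can be sketched as below. This is an assumption-laden illustration, not the paper's code: the helper name is ours, and a five-point quadratic Savitzky-Golay kernel stands in for the paper's two-stage smoothing.

```python
import numpy as np

# Five-point quadratic Savitzky-Golay smoothing kernel: (-3, 12, 17, 12, -3)/35.
SG5 = np.array([-3.0, 12.0, 17.0, 12.0, -3.0]) / 35.0

def estimate_threshold(y, edge_frac=0.05):
    """Estimate sigma_th = 3*sigma_n + noise (Equations (2)-(3)).

    noise is taken as the mean intensity of the first and last 5% of samples;
    sigma_n is the RMSE between the raw and smoothed waveforms."""
    y = np.asarray(y, dtype=float)
    k = max(1, int(edge_frac * y.size))
    noise = np.mean(np.concatenate([y[:k], y[-k:]]))  # head/tail noise estimate
    f = np.convolve(y, SG5, mode="same")              # smoothed waveform
    f[:2], f[-2:] = y[:2], y[-2:]                     # leave borders unfiltered
    sigma_n = np.sqrt(np.mean((y - f) ** 2))          # RMSE before vs after filtering
    return 3.0 * sigma_n + noise, f
```

For a noiseless constant signal the smoother is exact, so the threshold reduces to the estimated noise floor alone.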

Improved Gaussian Decomposition Method
This study used the iterative decomposition method to find the maximum value of the waveform ( f m , Equation (4)); if this value satisfies the threshold condition, the waveform decomposition starts. Before each waveform decomposition, it is necessary to determine the number of inflection points in the waveform and mark them, in order to determine the standard deviation of the waveform. This study used the 'Quick Locating Algorithm for Turning Points in Discrete Point Set of Curve' to detect waveform inflection points (P, Equations (5) and (6)) [28]. When all inflection points in this cycle have been found, these points are classified and counted (Rinf r , Linf l , Equation (7)), and a calculation method for the different standard deviations is established, according to the classification of the left and right inflection points. If Rinf r and Linf l are both equal to 1, there is no overlapping wave at the peak, but this does not mean that the standard deviation is unaffected by other waveforms, so the standard deviation needs to be judged by Rinf and Linf (δ, Equation (8)). In other cases, it can be considered that the waveform has a superposition, or that an inflection point was not detected owing to the large amplitude (δ, Equation (9)). After completing the above steps, the iterative solution is repeated until all peaks are lower than the threshold value, or the upper limit of iterations is reached.
f m = max( f x (i) ), f m > a (4)
f x (i) is the waveform intensity at each sampling time; a is the threshold value.
S i−1,i is an evaluation index used to judge whether a point is an inflection point; if S i−1,i · S i,i+1 < 0, the inflection point is P.
Rinf r , Linf l are the number of inflection points on the right and left sides of the peak; Rinf, Linf are the number of all inflection points on the right and left side of the peak; Rinf j is the number of inflection points at the right junction of the peak; Linf j is the number of inflection points at the left junction of the peak.
µ is the time of the peak value; P xi is the time of the inflection points on the left and right sides of the peak.
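The sign-change test above can be approximated with a discrete second derivative, as in the following sketch (a simplified stand-in for the cited quick-locating algorithm; the function name is ours):

```python
import numpy as np

def inflection_points(y):
    """Locate inflection points of a sampled waveform as sign changes of the
    discrete second derivative -- the index S of Equations (5)-(6) is
    approximated here by the second difference of the samples."""
    y = np.asarray(y, dtype=float)
    s = np.diff(y, 2)  # curvature proxy at samples 1 .. n-2
    return [i + 1 for i in range(len(s) - 1) if s[i] * s[i + 1] < 0]
```

For a single Gaussian this returns the two points one standard deviation either side of the peak, which is exactly the property the decomposition exploits.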
(1) When Linf = 1 and Rinf > 1 (Figure 2a), the left-hand side of the peak is closer to the real situation. Therefore, the half position of the left peak is taken as the full width at half maxima (tgl), the point closest to the half peak is found to calculate the standard deviation, and these values are finally compared to obtain the standard deviation of the peak (δ).
(2) When Rinf = 1 and Linf > 1 (Figure 2b), the right-hand side of the peak is closer to the real situation. Therefore, the half position of the right peak is taken as the full width at half maxima (tgr), the point closest to the half peak is found to calculate the standard deviation, and these values are finally compared to obtain the standard deviation of the peak (δ).
(3) When Rinf = 1 and Linf = 1 (Figure 2c), or Rinf > 1 and Linf > 1 (Figure 2d), the peak is a single waveform, or is wrapped by multiple waveforms without superposition. Therefore, it is necessary to calculate the distances from the points closest to the half-width peak on both sides to the peak point. The point with the smaller distance is taken as the full width at half maxima (dis), and these values are compared to obtain the standard deviation of the peak (δ).
Others:
(1) When Linf = 1 and Rinf > 1 (Figure 3a), the left side of the peak is closer to the real situation; the right side of the peak exhibits a waveform superposition. Therefore, we need to find the corresponding inflection point, and the point nearest the half-peak position on the left side (tgl), and calculate their respective standard deviations (δ). This paper selected the smaller value as the standard deviation, to avoid error due to unrecognized partial inflection points.

(2) When Rinf = 1 and Linf > 1 (Figure 3b), the right side of the peak is closer to the real situation; the left side of the peak exhibits a waveform superposition. Therefore, we need to find the corresponding inflection point, and the point nearest the half-peak position on the right side (tgr), and calculate their respective standard deviations (δ). This paper selected the smaller value as the standard deviation, to avoid error due to unrecognized partial inflection points.
(3) There are three different situations when Rinf > 1 and Linf > 1 (Figure 3c-e); here, the distance from the center to the inflection points cannot accurately express the standard deviation, because the inflection points are affected by the superimposed waveform. In this study, the average positions of the inflection points on the left and right sides of the peak are denoted as S1 and S2, respectively. The distance from the point closest to the half position of the peak to the peak, on both sides of the peak, is calculated, and the shortest distance is denoted as S3. We sorted S1, S2, and S3, and took the middle distance as the half-peak width (dis) to calculate the standard deviation (δ).
(4) When an inflection point cannot be identified (Figure 3f), this study found the points closest to the half width on both sides of the peak, calculated the distance between each of them and the peak, and used the shortest distance as dis to calculate the standard deviation (δ).
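The conversion from a chosen half-width to δ, and the median selection of case (3), can be sketched as follows (hypothetical helper names; this assumes dis is the peak-to-half-maximum distance of a Gaussian):

```python
import math

# For a Gaussian, the distance from the peak to the half-maximum point
# equals sqrt(2 ln 2) * delta.
SQRT_2LN2 = math.sqrt(2.0 * math.log(2.0))

def delta_from_halfwidth(dis):
    """Convert a measured peak-to-half-maximum distance into the Gaussian
    standard deviation delta."""
    return dis / SQRT_2LN2

def pick_dis(s1, s2, s3):
    """Case (3) above: sort the three candidate distances S1, S2, S3 and
    take the median as the half-peak width."""
    return sorted([s1, s2, s3])[1]
```

Taking the median rather than the extreme distances is what prevents an overly large δ (superimposed wave read as one component) or an overly small δ (spurious components).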

Two-Order Oscillating Particle Swarm Optimization
Two-order OPSO is a method based on PSO [29,30]; the basic idea is as follows: (1) a group of particles is established according to the estimated parameters, and these particles are randomly distributed within the estimated parameter range; (2) the fitness of each particle is calculated (fitness, Equation (10)); if the fitness is better than the individual extremum, the individual extremum is updated, and if it is better than the global extremum, the global extremum is updated; (3) the particles' positions and velocities are optimized (V i , X i , Equation (11)); (4) the above steps are repeated until the error requirement, or the maximum number of iterations, is reached.
y i is the objective function; f x i is a function composed of multiple Gaussian components after Gaussian decomposition; n is the number of samples.
X i (t − 1) is the position at the previous time step; X i (t), X i (t + 1) are the particle positions before and after updating; V i (t + 1) and V i (t) are the particle velocities before and after updating; ω is the inertia weight factor; α 1 and α 2 are parameters that adjust the global and local search abilities of the algorithm; c 1 and c 2 are learning factors; r 1 , r 2 are random numbers between 0 and 1; P i and P g represent the individual extremum and global extremum, respectively.
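One common form of the two-order oscillating update can be sketched as below. This is an illustration under assumptions: the exact coefficient arrangement of Equation (11) may differ, and all parameter defaults are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def opso_step(x, x_prev, v, p_i, p_g, w=0.7, c1=1.5, c2=1.5, a1=0.3, a2=0.3):
    """One two-order oscillating PSO update: the previous position x_prev
    adds an oscillating term weighted by alpha_1 / alpha_2 to the usual
    cognitive (p_i) and social (p_g) attractions."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = (w * v
             + c1 * r1 * (p_i - (1.0 + a1) * x + a1 * x_prev)
             + c2 * r2 * (p_g - (1.0 + a2) * x + a2 * x_prev))
    return x + v_new, v_new
```

A useful sanity check: a stationary particle already sitting on both extrema receives zero velocity and does not move.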

Levenberg-Marquardt Algorithm
The LM algorithm is a commonly used nonlinear least-squares method [31], which can optimize the estimated parameters, to make the estimated curve closer to the original curve (∆x lm , Equation (12)). This paper used the gain ratio (ρ, Equation (13)) to adjust the damping coefficient (α, Equation (14)), which not only ensures that each iteration decreases in the direction of the objective function value, but also converges rapidly near the optimal value.
∆x lm = (J(x) T J(x) + αI) −1 J(x) T (y − f (x)) (12)
J(x), J(x) T are the Jacobian matrix and transposed Jacobian matrix of f (x i ; p (0) ); y is the objective function; α is the damping coefficient.
v is a parameter with an initial value of 2; if ρ > 0, the value of α is reduced and all parameters are updated; otherwise, α is expanded and the parameters are left unchanged. The purpose is to adjust the step size α through the change coefficient v: when the descent is too slow, a larger α is used to bring the method close to the gradient method; when the descent is too fast, a smaller α is used to bring the method close to the Gauss-Newton method.
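A minimal one-parameter LM loop with gain-ratio-driven damping can be sketched as follows. This is an illustrative reduction, not the paper's implementation: it fits only the amplitude A of a single Gaussian with known centre u and width d, and uses a standard damping-update rule of the kind described above.

```python
import numpy as np

def lm_fit_amplitude(x, y, u, d, a0=1.0, alpha=1e-3, iters=50):
    """Fit A in A*exp(-(x-u)^2/(2 d^2)) by damped LM steps.

    The gain ratio rho compares actual vs predicted cost reduction; a good
    step shrinks alpha (toward Gauss-Newton), a bad step grows it (toward
    gradient descent), as in Equations (13)-(14)."""
    g = np.exp(-(x - u) ** 2 / (2.0 * d ** 2))  # Jacobian wrt A is g itself
    a, v = a0, 2.0
    for _ in range(iters):
        r = y - a * g
        jtj, jtr = g @ g, g @ r
        step = jtr / (jtj + alpha)                # (J^T J + alpha I)^-1 J^T r
        pred = step * (alpha * step + jtr) / 2.0  # predicted cost reduction
        new_r = y - (a + step) * g
        rho = (r @ r - new_r @ new_r) / (2.0 * pred + 1e-30)
        if rho > 0:                               # accept: shrink damping
            a += step
            alpha *= max(1.0 / 3.0, 1.0 - (2.0 * rho - 1.0) ** 3)
            v = 2.0
        else:                                     # reject: grow damping
            alpha *= v
            v *= 2.0
    return a
```

Because the residual is linear in A, the loop converges in a handful of accepted steps, which makes the accept/reject bookkeeping easy to observe.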

Parameter Estimation
This study used a drone equipped with a 532 nm laser to detect a pond as the study background; the pond is located in a park in Guilin, Guangxi, China. The water is clear and shallow, and the interference with the lidar signal is not strong. Through a preliminary analysis of the echo signal, we obtained the corresponding waveform intensity of each experiment as the experimental data. Firstly, the original data were preprocessed to remove the noise in the waveform, the data were smoothed by the Savitzky-Golay and five-point three-time smoothing methods, and the root-mean-square error between the signals before and after filtering, and the noise, were calculated. The waveform threshold was estimated according to this root-mean-square error, in order to provide a threshold for deciding whether to continue decomposing the waveform later ( Figure 1). After we obtained the preprocessed waveform signal, the Gaussian decomposition method was used to decompose the waveform signal into multiple Gaussian function components. In the first decomposition process (Figure 4a), Rinf r = 1 and Linf l = 1, so we needed to confirm the number of inflection points on both sides of the peak; Linf = 1 and Rinf > 1, therefore the left side of the peak is closer to the real situation. We extracted the inflection points on both sides, and the point on the left side closest to the half peak, for calculation to get δ. In the second decomposition process (Figure 4b), the peak is surrounded by multiple waveforms, Rinf > 1 and Linf > 1, so we extracted the inflection points on both sides, and the points closest to the half peak on both sides, for calculation to get δ. In the third decomposition process (Figure 4c), Rinf r = 1 and Linf l > 1; there was waveform superposition on the left side of the peak, and the waveform on the right side was closer to the real situation. We extracted the inflection point on the right side, and the point closest to the half peak on the right side, for calculation to get δ.
In the fourth decomposition process (Figure 4d), there was no inflection point on one side of the waveform, so the inflection points could not be used to decompose the waveform. It was necessary to extract the points on both sides of the peak closest to the half peak, and take the point with the shortest distance to the peak for calculation to get δ. In the fifth decomposition process (Figure 4e), the situation was similar to the fourth decomposition, so we used the same method of calculation to get δ.

Parameter Optimizing and Fitting
After the initial decomposition of the parameters, some meaningless Gaussian components are generated. These meaningless Gaussian components produce pseudo waveforms and affect the waveform fitting ( Figure 5a). Therefore, we need to eliminate these Gaussian components during the iteration (Figure 5b); if any Gaussian component is less than one emission-pulse in width after decomposition, we can eliminate the component with the smaller wave peak [32]. The parameters estimated in the initial decomposition process cannot represent the real values; negative oscillation occurs when they are compared with the original data ( Figure 6), and it is necessary to optimize and fit the estimated parameters. Optimizing the characteristic parameters before parameter fitting can achieve better results [33]. The underwater waveform strength of the research data is very low, because water absorbs most of the laser. Although PSO (Figure 7a) can achieve good results in the error calculation, its global performance is worse than that of OPSO (Figure 7b), and it easily falls into a local optimum, so it cannot deal well with weak waves. This paper therefore used OPSO to optimize the characteristic parameters. We found that PSO had a good fitting effect on Gaussian components with large peak values, but its fitting accuracy for weak waves was far lower than that of OPSO; further parameter optimization fitting is needed after completing the optimization of the characteristic parameters. The LM algorithm is easily affected by the initial values, and the peak values in the previous Gaussian decomposition process adopt estimated values with a large deviation from the original waveform, which would affect the optimization accuracy. This is why OPSO was used to optimize the parameters in the previous step.
A large number of scholars have proposed that a fixed damping coefficient can help in the calculation, but this approach was not applicable in this study (Figure 8a), because the fixed damping coefficient easily falls into a local optimal solution, and cannot express the original waveform data well. Compared with the fixed damping coefficient, the LM algorithm using a variable damping coefficient has a better fitting effect, and better globality (Figure 8b). ((a,b), respectively, represent whether a fixed damping coefficient is used to solve the waveform; the red box highlights the area with the large difference between the different methods.)

Accuracy Comparison
To verify the accuracy of the decomposition method, four groups of representative underwater echo data were selected, and these waveforms were decomposed using different Gaussian decomposition methods. The distances from the left and right inflection points to the peak were different in test 1 and test 2; the boundary between the strong and weak waveform signals of test 1 was not obvious and exhibited a superposition phenomenon, whereas the boundary between the strong and weak waveform signals of test 2 was obvious. The distances from the left and right inflection points to the peak were the same in test 3 and test 4; the boundary between the strong and weak waveform signals of test 3 was not obvious, and the boundary between the strong and weak waveform signals of test 4 was obvious.
Through comparison, it was found that, although the number of waveform components was three after removing the pseudo-Gaussian components, the method proposed in this paper had a better waveform-fitting effect and was closer to the original waveform. After comparing a variety of data, it was found that the decomposition effect of the inflection point method was poor; when the distances between the inflection points and the peak were not equal (Figure 9), peak dislocation occurred. Moreover, the Gaussian components with smaller peak values did not fit the original data well, meaning that the inflection point method alone could not correctly express the original waveform. When using the iterative method to process the waveform (Figure 10), it was also found that the method performed poorly on the decomposition of waveforms with smaller peaks, and the final decomposition result could not fully express the original waveform. When using the inflection point selection decomposition method (Figure 11), the fitting of smaller peaks was improved compared with the first two methods, but the fitting at the junction of two Gaussian components was not good. The main reason is that it is unreasonable to take the minimum value of the half-peak width as the standard deviation δ when there is one inflection point on each side of the peak. This easily causes the situation of test 3: because δ was too small, the waveform was not completely decomposed and appeared as multiple pseudo components, which produced large errors after the subsequent removal of the pseudo components and greatly hindered the subsequent parameter-optimization fitting. When using the method presented in this paper (Figure 12), good results were obtained for each group of data, and the fitting was clearly improved compared with the previous methods, apart from the poor fitting of test 4 at the peak junction. A similar problem also occurs with the other methods, except for the iterative method; although the iterative method fits well at the junction, it causes a large error in the smaller peak in order to fit the junction. The poor fitting of the junction is mainly due to the use of the local maximum value to represent the peak value of the Gaussian component; it is obviously unreasonable to use the maximum value in superimposed waveforms, and this is the focus of our planned follow-up research.
To see the decomposition accuracy of each method for the experimental data more clearly, this study made an accuracy-verification comparison (Table 1), and calculated the root-mean-square error (RMSE, Equation (15)) and the coefficient of determination (R 2 , Equation (16)). The results showed that the method in this paper is more accurate than the other methods; although its accuracy on some data is not higher than that of the iterative method, the iterative method's advantage there is small, and the overall fitting effect of this paper's method is better than that of the iterative method. The results showed that the RMSE is 2.544 and the R 2 is 0.995975; the average precision parameters are better than those of the other methods, and the decomposition effect is more stable and accurate. In summary, this method is more suitable for the decomposition of multiple waveforms, and it can also express waveform data more accurately than the other methods.
RMSE = sqrt( Σ (y i − f (x i )) 2 / N ) (15)
N is the number of sampling points; y i , f (x i ) are the signal echo intensities before and after optimization, respectively.
R 2 = 1 − Σ (y i − f (x i )) 2 / Σ (y i − y̅) 2 (16)
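The two accuracy metrics can be computed directly from their definitions (illustrative sketch; the function name is ours):

```python
import numpy as np

def rmse_r2(y, f):
    """Root-mean-square error and coefficient of determination (Equations
    (15)-(16)) between the original waveform y and the fitted waveform f."""
    y, f = np.asarray(y, dtype=float), np.asarray(f, dtype=float)
    rmse = np.sqrt(np.mean((y - f) ** 2))
    r2 = 1.0 - np.sum((y - f) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return rmse, r2
```

A perfect fit gives RMSE = 0 and R 2 = 1, the limiting values the comparison tables are judged against.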
(Test 1) (Test 2) (Test 3) (Test 4)
Figure 9. Data processed by the inflection point method (the images show the inflection point method applied to the different data; the red box highlights the poorly fitted area).
Figure 10. Data processed by iterative decomposition (the images show iterative decomposition applied to the different data; the red box highlights the poorly fitted area).
Figure 12. Data processed by the method of this paper (the images show the method of this paper applied to the different data; the red box highlights the poorly fitted area).
y̅ is the mean value of each signal, and an R 2 value closer to 1 indicates a better fit. In order to verify the effectiveness of this method for complex waveforms, and the limitations of other algorithms in decomposition, a complex waveform superimposed from multiple Gaussian functions was constructed for simulation experiments (Figure 13), and solved using the inflection point method (Figure 14a), the iterative method (Figure 14b), the inflection point selection decomposition method (Figure 14c), and the proposed method (Figure 14d). Its parameters are set from left to right to generate a complex waveform ( Table 2). After the preliminary decomposition of the waveform, it can be found that the inflection point method produces a large number of waveform components, because the superimposed wave produces pseudo inflection points, which affect the judgment of the waveform components during decomposition. Therefore, the inflection point method does not satisfy the conditions for complex waveform decomposition. When multiple superpositions prevent the waveform from being judged by the half-peak width, iterative decomposition yields a large δ; therefore, the iterative method cannot be applied to complex waveform decomposition either. The inflection point selection method improves on the iterative method, but it is unreasonable to select the shortest half-peak width to calculate δ when there is one inflection point on each side; moreover, as with the iterative method, when multiple waveforms are superimposed but the numbers of left and right inflection points are both 1, an excessive δ causes the incorrect decomposition of waveforms. Although this method can roughly judge the center position of the original waveform after eliminating the spurious components, peak values and δ that deviate too much from the original waveform increase the workload of subsequent parameter optimization.
When using the method in this paper, all the waveform components were roughly identified, and the parameters did not differ much from the original data; this method can therefore provide better estimated parameters for parameter optimization. After obtaining the estimated parameters, the OPSO and LM algorithms were used to optimize them (Figure 15); comparing the optimized waveform with the original waveform (Table 3), the RMSE was only 0.0016, the R 2 was 1, and the optimized waveform was very close to the original waveform.
of subsequent parameter optimization. When using the method in this paper, all the waveform components were roughly judged, and the parameters were not much different from the original data. This method can provide a better estimation parameter for parameter optimization. After obtaining the estimated parameters, using the OPSO and LM algorithms to optimize the parameters (Figure 15), and comparing the optimized waveform with the original waveform (Table 3), the RMSE was only 0.0016,the R 2 was 1, and the optimized waveform was very close to the original waveform.      In order to further verify the correctness of the algorithm flow in this study, this study selected the worst accuracy among the four tests (test 4) and used this test's data as study data, using a variety of optimization methods to optimize the estimated parameters of these data ( Table 4). The results show that using PSO and updating the LM algorithm had the highest accuracy, but from the fitting effect of the final results (Figure 16), this method did not achieve a good fitting effect, it fell into the local optimal solution in order to pursue accuracy, and it can be seen that the fitting effect of weak waves is not good except when using OPSO or combining OPSO and update damping LM. It can also be found that the optimization method combining the two methods can improve the waveform fitting accuracy, and is closer to the original waveform curve.   In order to further verify the correctness of the algorithm flow in this study, this study selected the worst accuracy among the four tests (test 4) and used this test's data as study data, using a variety of optimization methods to optimize the estimated parameters of these data ( Table 4). 
The results show that using PSO and updating the LM algorithm had the highest accuracy, but from the fitting effect of the final results ( Figure 16), this method did not achieve a good fitting effect, it fell into the local optimal solution in order to pursue accuracy, and it can be seen that the fitting effect of weak waves is not good except when using OPSO or combining OPSO and update damping LM. It can also be found that the optimization method combining the two methods can improve the waveform fitting accuracy, and is closer to the original waveform curve. Original data 3.7071 0.9934 PSO (Figure 16a) 1.9853 0.9976 OPSO (Figure 16b
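As a sketch of how the accuracy metrics reported above (RMSE and R2) are computed, the following Python snippet evaluates a hypothetical two-component waveform against a fit; the waveform parameters are illustrative and are not the paper's data:

```python
import numpy as np

def gaussian(t, amp, mu, sigma):
    """One Gaussian waveform component."""
    return amp * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

def rmse(y, y_fit):
    """Root-mean-square error between the original and fitted waveforms."""
    return float(np.sqrt(np.mean((y - y_fit) ** 2)))

def r_squared(y, y_fit):
    """Coefficient of determination (R2) of the fit."""
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Hypothetical waveform: a strong wave with a weak wave superimposed on it.
t = np.linspace(0.0, 100.0, 500)
y = gaussian(t, 80.0, 40.0, 5.0) + gaussian(t, 10.0, 58.0, 6.0)
y_fit = gaussian(t, 79.6, 40.1, 5.05) + gaussian(t, 9.8, 58.3, 6.1)
print(rmse(y, y_fit), r_squared(y, y_fit))
```

An RMSE near 0 and an R2 near 1 indicate that the decomposed components reproduce the original waveform almost exactly.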

Discussion
The Lidar waveform solution is the first step in obtaining point cloud data, and the accuracy of the solution directly affects the quality of the subsequent point cloud. This paper proposes an improved inflection point selection decomposition method and establishes a corresponding Lidar waveform-solution workflow, which greatly improves accuracy and stability compared with other decomposition methods.
The decomposition accuracy of this study is higher than that of other methods because a dedicated parameter-solution rule is established for each distinct waveform feature, so the most suitable calculation model can be used to decompose each waveform. Compared with other decomposition methods, this study combines the advantages of the inflection point method and the iterative method to analyze the waveform data layer by layer, selecting the best decomposition rule according to the positions of the inflection points and the half-peak width:
When there is exactly one inflection point on each side of the waveform to be decomposed, the waveform is probably not superimposed. The distances from the inflection points on both sides to the peak center, and from the half-peak-width points on both sides to the peak center, are calculated; the maximum and minimum of these values are removed, and the remaining values are used to obtain δ. If two of the remaining values are equal, the repeated value is retained.
When there is more than one inflection point on each side, the waveform may be superimposed. δ is then calculated from the distance between the peak and the midpoint of the two inflection points closest to the peak on each side, together with the distance from the half-peak-width point closest to the peak. The midpoint of the two inflection points is used because it is not distorted by the overlap, so a superimposed wave is neither decomposed as a single wave because δ is too large, nor split into multiple pseudo-components because δ is too small.
When only one side of the waveform has exactly one inflection point, that side is closer to the waveform to be decomposed, and it is more reasonable to calculate δ from that side.
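The inflection point selection logic above can be sketched for its simplest case (one inflection point on each side of the peak, i.e. a probable single wave). This is an illustrative reconstruction, not the authors' code: for a Gaussian component the inflection points lie at μ ± σ and the half width at half maximum equals σ·√(2 ln 2), which is how the four distance candidates are brought to a common σ scale before the maximum and minimum are dropped:

```python
import numpy as np

def estimate_sigma(t, y, peak_idx):
    """Estimate the width delta of the component around y[peak_idx]
    (single-wave case: one inflection point on each side of the peak)."""
    # Inflection points: sign changes of the numerical second derivative.
    d2 = np.gradient(np.gradient(y, t), t)
    sign_change = np.where(np.diff(np.sign(d2)) != 0)[0]
    left = sign_change[sign_change < peak_idx]
    right = sign_change[sign_change > peak_idx]
    # Distance from the nearest inflection point on each side to the peak
    # center; for a Gaussian this distance equals sigma directly.
    d_infl = [t[peak_idx] - t[left[-1]], t[right[0]] - t[peak_idx]]
    # Distance from the half-peak-width point on each side to the peak
    # center, converted to a sigma equivalent (HWHM = sigma * sqrt(2 ln 2)).
    half = y[peak_idx] / 2.0
    hwhm_factor = np.sqrt(2.0 * np.log(2.0))
    li = np.where(y[:peak_idx] < half)[0][-1]
    ri = peak_idx + np.where(y[peak_idx:] < half)[0][0]
    d_half = [(t[peak_idx] - t[li]) / hwhm_factor,
              (t[ri] - t[peak_idx]) / hwhm_factor]
    # Drop the maximum and minimum of the four candidates, average the rest.
    cand = sorted(d_infl + d_half)
    return float(np.mean(cand[1:-1]))

# Demo on a clean Gaussian with sigma = 5 (illustrative values).
t = np.linspace(0.0, 100.0, 2001)
y = 60.0 * np.exp(-(t - 50.0) ** 2 / (2 * 5.0 ** 2))
sigma_est = estimate_sigma(t, y, int(np.argmax(y)))
print(sigma_est)  # close to the true sigma of 5
```

The superimposed-wave case differs only in which inflection points enter `d_infl` (the midpoint of the two closest to the peak on each side), per the rule described above.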
When there is no usable inflection point on either side of the waveform, the inflection points cannot be calculated because the waveform components are too close together for the given amplitudes, and δ must instead be calculated from the half-peak width. In the parameter optimization stage, although PSO with fixed-damping LM may be more accurate and efficient than the method in this paper when the component peaks do not differ greatly, it easily falls into a local optimal solution when facing an underwater waveform containing weak waves. To verify the efficiency of the method, we measured the waveform-solution speed on a computer with an Intel Core i3-8350K quad-core processor @ 4.00 GHz, 32 GB of RAM (Kingston DDR4-3200), an NVIDIA Quadro P1000 graphics card, and an ASUS PRIME Z370-A motherboard (Z370 chipset); the final solution time was 0.000156 s. Therefore, this paper adopts a more global parameter optimization method that better meets the solution requirements of underwater waveforms.
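OPSO is the authors' own variant and is not reproduced here; as a sketch of the LM refinement stage alone, the snippet below uses SciPy's Levenberg–Marquardt implementation to refine rough estimates of a simulated two-component waveform (all parameter values are illustrative, not the paper's data):

```python
import numpy as np
from scipy.optimize import least_squares

def mixture(t, params):
    """Sum of Gaussian components; params = [A1, mu1, s1, A2, mu2, s2, ...]."""
    y = np.zeros_like(t)
    for amp, mu, sigma in np.asarray(params).reshape(-1, 3):
        y += amp * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))
    return y

# Simulated superimposed waveform: one strong and one weak component.
t = np.linspace(0.0, 100.0, 500)
true_params = [80.0, 40.0, 5.0, 10.0, 58.0, 6.0]
y_obs = mixture(t, true_params)

# Rough estimates (as the decomposition stage would produce), refined by
# Levenberg-Marquardt least squares.
initial_guess = [70.0, 42.0, 6.0, 8.0, 60.0, 5.0]
fit = least_squares(lambda p: mixture(t, p) - y_obs, initial_guess, method="lm")
print(fit.x)  # should be close to true_params
```

With poorer initial estimates, or with a weak wave sitting on the flank of a strong one, plain LM can stall in a local optimum; this is the situation the global OPSO stage is meant to handle before LM refinement.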
The method in this paper also has certain limitations: (1) When solving a superimposed waveform, using the highest peak value as the amplitude of the Gaussian component is not strictly reasonable; although this can be corrected during subsequent parameter optimization, it undoubtedly increases the workload of that stage. (2) The laser echo signal is affected by noise in the water. This paper does not further explore water data under different noise conditions, but this problem will be studied in follow-up work. (3) Although this method was tested on multiple sets of research data, the experimental environment is still limited to ponds with clear water. Because of data-source constraints, other conditions could only be tested with simulated data; no real data from different types of water bodies were explored, so the universality of this method has not been fully verified.

Conclusions
This paper proposes an improved waveform decomposition method based on inflection point selection, and we drew the following conclusions: (1) By comparing multiple sets of study data, we found that underwater laser echoes are mostly complex waveforms in which strong and weak waves are superimposed. The improved inflection point selection method proposed in this paper is well suited to underwater waveform solution: it decomposes underwater waveform data well and performs well in extracting the parameters of superimposed waves and weak waves. (2) To further verify the universality of this method, this study used a variety of methods to decompose a simulated waveform. The results showed that the waveform decomposed by our method is closer to the actual waveform than those of the other methods, and its accuracy is much higher. (3) To verify the subsequent parameter optimization process, this study compared a variety of parameter optimization combinations; the results showed that although the numerical accuracy of our optimization method was not the highest, its results were closer to the original waveform than those of the other methods.
Our method highlighted the following aspects as worthy of future research and improvement: (1) The underwater laser echo signal is susceptible to various noise sources. In follow-up work, multiple experiments will be carried out under the influence of various noise sources to further verify the accuracy and reliability of the proposed method. (2) In follow-up work, a variety of different sensors will be used to test the algorithm in this paper, to further understand whether sensor limitations and deviations affect the results. (3) The data background selected in this paper is a pond in a park in Guilin, Guangxi, China, where the water quality has little effect on the laser echo signal. Planned follow-up research will study different types of water bodies, such as oceans and rivers, to further verify and evaluate the universality of this method in different environments.

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.

Acknowledgments:
The authors would like to thank the editors and the reviewers for their valuable suggestions.

Conflicts of Interest:
The authors declare no conflict of interest.

Abbreviations
The following abbreviations are used in this manuscript:

PSO    Particle swarm optimization
OPSO   Oscillating particle swarm optimization
LM     Levenberg–Marquardt

Interpretation

Savitzky–Golay filter
A digital smoothing filter that can be applied to a set of data points to smooth the data, improving its accuracy without changing the signal trend and width. It is realized via convolution: successive subsets of adjacent data points are fitted with a low-order polynomial by the linear least-squares method.

Five-point cubic smoothing method
A curve-smoothing method for a set of equidistant points in which each data point is approximated by a cubic polynomial fitted to the point itself and the two points before and after it (five points in total).

Jacobian matrix
In vector analysis, a matrix in which the first-order partial derivatives of a function are arranged in a certain way; its determinant is called the Jacobian determinant.
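As an illustration of the Savitzky–Golay smoothing described above, the snippet below uses SciPy's implementation to denoise a hypothetical noisy Gaussian echo (the waveform and noise level are illustrative):

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 500)
clean = 60.0 * np.exp(-(t - 50.0) ** 2 / (2 * 5.0 ** 2))
noisy = clean + rng.normal(0.0, 1.0, t.size)

# 11-sample window, cubic polynomial: each point is replaced by the value of
# a degree-3 least-squares fit over its local neighbourhood.
smoothed = savgol_filter(noisy, window_length=11, polyorder=3)
```

Because the local polynomial fit preserves peak position and width far better than a plain moving average, this kind of filter is a common preprocessing step before waveform decomposition.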