Two-Dimensional Space-Variant Motion Compensation Algorithm for Multi-Hydrophone Synthetic Aperture Sonar Based on Sub-Beam Compensation

Abstract: For a multi-hydrophone synthetic aperture sonar (SAS), the instability of the platform and underwater turbulence easily lead to two-dimensional (2-D) space-variant (SV) motion errors. Such errors can cause serious imaging problems and are very difficult to compensate for. In this study, we propose a 2-D SV motion compensation algorithm for a multi-hydrophone SAS based on sub-beam compensation. The proposed algorithm is implemented using the following four-step process: (1) The motion error of each sub-beam is obtained by substituting the sonar's measured motion parameters into the exact motion error model established in this study. (2) Among all the targets illuminated by the sonar, the targets within each sub-beam are compensated for the motion error by implementing two phase multiplications on the raw data of the multi-hydrophone SAS, hydrophone by hydrophone. (3) The motion-compensated data of each sub-beam's targets are extracted from the raw data by utilizing the mapping relationship between the azimuth angle and the Doppler frequency. (4) The imaging result of each sub-beam is obtained by applying a monostatic imaging algorithm to the sub-beam's data, and the results are coherently added to obtain a high-resolution image. Finally, the validity of the proposed algorithm was tested using simulation and real data.


Introduction
Synthetic aperture sonar (SAS) [1][2][3] is a high-resolution acoustic imaging system mounted on autonomous underwater vehicle (AUV), remotely operated vehicle (ROV), and unmanned undersea vehicle (UUV) platforms. It is widely applied in underwater topography mapping, small-target detection, and buried object detection [4][5][6][7]. However, the instability of the platform, underwater turbulence, and other factors in the changeable underwater environment easily cause 2-D SV motion errors in the SAS, which vary with the range and azimuth dimensions. Such errors can result in several imaging problems, such as the loss of geometric resolution, reduction in image contrast, increase in sidelobes, and strong phase distortion.
To account for such errors, the SAS positions require either accurate measurement with inertial navigation units (INUs) or accurate estimation, and the motion error requires accurate compensation with a motion compensation (MOCO) algorithm. Generally, MOCO algorithms can be divided into two types: time-domain and line-by-line algorithms. The time-domain MOCO algorithm is the most accurate but requires very high computational effort [8]. The line-by-line MOCO algorithm has a much higher computational efficiency; however, the existing line-by-line MOCO algorithms for SAS only consider the range dependence of the 2-D SV motion error. Moreover, the azimuth dependence of the 2-D SV motion error can cause a dramatic phase error and an unfocused imaging result in high-resolution and complicated motion error cases. Many 2-D SV MOCO algorithms for airborne synthetic aperture radar (SAR) have been applied widely, such as the precise topography- and aperture-dependent motion compensation algorithm (PTA) [9], the subaperture topography- and aperture-dependent motion compensation algorithm (SATA) [10], and frequency-division (FD) algorithms [11][12][13]. Although SAS technology is derived from the SAR community, these algorithms cannot be directly applied to SAS. This is mainly due to the differences between SAS and SAR. Because of the low speed of sound in water, an SAS requires a hydrophone array to obtain a useful mapping rate [1]; such a system is referred to as a multi-hydrophone SAS. Owing to the multi-hydrophone configuration, the displacement of each sample from the ideal trajectory should take into account the offset caused by the rotation of the hydrophone array, and this additional offset varies with the hydrophone position in the array. For example, the additional offset is up to 1.75 cm when the yaw angle is 1° and the length of the hydrophone array is 1 m. This offset is not negligible because the wavelength of a high-resolution SAS is usually less than 2 cm. Thus, the 2-D SV motion error for a multi-hydrophone SAS is also hydrophone-dependent and cannot be compensated by the existing MOCO algorithms.
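The 1.75 cm figure quoted above can be checked with a one-line calculation; the helper below is our own sketch (names are ours, not the paper's) of the offset that the hydrophone farthest from the rotation centre experiences under yaw.

```python
import math

def yaw_offset(array_length_m: float, yaw_deg: float) -> float:
    """Displacement of the hydrophone farthest from the rotation centre
    when an array of the given length rotates by `yaw_deg` of yaw."""
    return array_length_m * math.sin(math.radians(yaw_deg))

# 1 m array, 1 degree of yaw -> about 1.75 cm,
# comparable to a wavelength of less than 2 cm
offset_cm = 100.0 * yaw_offset(1.0, 1.0)
```

Since the offset is a sizeable fraction of the wavelength, the resulting phase error is far from negligible, which is why the error must be tracked per hydrophone.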
To compensate for the 2-D SV error for the multi-hydrophone SAS and overcome the shortcomings of the above algorithms, we propose a 2-D space-variant motion compensation algorithm for the multi-hydrophone SAS based on sub-beam compensation. The proposed algorithm is designed using the following four steps: (1) The exact motion error model for the multi-hydrophone SAS, which includes five degrees of freedom (roll, pitch, yaw, sway, and heave), is established for the first time. In addition, the motion error of each target observed through a short observation aperture, viewed as a narrow beam or sub-beam, is obtained by substituting the sonar's measured or estimated motion parameters into the exact motion error model. (2) Owing to the short observation aperture of the sub-beam, the motion error of all targets observed by the sub-beam can be replaced by the motion error of the targets at the beam center of the sub-beam. Moreover, because the motion error in the sub-beam is weakly dependent on the range, its delay can be considered the same as the delay along the range. Thus, by implementing two phase multiplications on the raw data of the multi-hydrophone SAS, hydrophone by hydrophone, the targets illuminated by the sub-beam, among all targets illuminated by the sonar, are compensated for the phase error and the delay error caused by the motion error, respectively. (3) To extract the sub-beam data from the raw data, the mapping relationship between the azimuth angle and the Doppler frequency is utilized to split the equivalent single-hydrophone data, which result from applying Doppler spectrum extension and coherent superposition to the multi-hydrophone raw data. (4) To obtain a high-resolution imaging result, the imaging results of the sub-beams, each obtained by performing a monostatic imaging algorithm on the corresponding sub-beam data, are coherently added. Finally, the validity of the proposed algorithm is tested using simulation and real data.
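The four steps can be arranged as a loop over sub-beams. The skeleton below is only a structural sketch under stated assumptions: every callable argument is a placeholder for the paper's model-specific operation (corrections from the motion error model, Doppler-domain sub-beam extraction, and a monostatic imaging algorithm), not the actual implementation.

```python
import numpy as np

def subbeam_moco_image(raw_hydrophones, subbeam_corrections,
                       extract_subbeam, monostatic_image):
    """Structural sketch of the four-step process (all helpers hypothetical):
    - subbeam_corrections[k][i]: phase/delay correction factor of sub-beam k
      for hydrophone i, derived from the motion error model (steps 1-2),
    - extract_subbeam(data, k): pulls sub-beam k out of the merged
      Doppler spectrum (step 3),
    - monostatic_image(data): images one sub-beam; results are then
      summed coherently (step 4)."""
    image = None
    for k, corrections in enumerate(subbeam_corrections):
        # step 2: correct each hydrophone's raw data for this sub-beam
        corrected = [h * c for h, c in zip(raw_hydrophones, corrections)]
        merged = np.sum(corrected, axis=0)       # coherent hydrophone sum
        sub = extract_subbeam(merged, k)         # step 3
        img = monostatic_image(sub)              # step 4 (per sub-beam)
        image = img if image is None else image + img
    return image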
The remainder of this paper is organized as follows: In Section 2, the range history model is presented. In Section 3, the signal model is introduced. In Section 4, the proposed algorithm is discussed in detail. Section 5 presents the experimental results based on the simulated and real measured data to verify the effectiveness of the proposed algorithm. The conclusions are presented in Section 6.

Exact Range History
As shown in Figure 1a, an imaging coordinate system oxyz is established according to the right-hand rule, where the z-axis points upward, and the x-axis and y-axis lie in a horizontal plane perpendicular to each other. An acoustic array, which contains M + N + 1 hydrophones and one transponder, moves in the oxyz coordinate system. The velocity of the acoustic array along the x-axis is v, and the offsets along the y-axis and z-axis are the sway y_s and heave z_h, respectively. To describe the attitude of the acoustic array, a moving coordinate system ox_a y_a z_a is built as shown in Figure 1b, where the origin and the x_a-axis are the phase center of the transponder and the phase center line of the hydrophones, respectively. The roll, pitch, and yaw of the acoustic array are denoted by θ_r, θ_p, and θ_y, respectively.
In Figure 1a, when the transponder transmits the signal at position (vt, y_s, −h − z_h), the distance from the transponder to P(x_0, r sin θ_d, 0) is as follows:

R_T^*(t; x_0, r) = sqrt[(vt − x_0)² + (r sin θ_d − y_s)² + (h + z_h)²],

where t is the slow time, h is the height of the array above the seafloor, r is the closest distance between the target and the trajectory, and θ_d is the depression angle between r and the z-axis.
Owing to the low speed of underwater sound, the moving distance of the SAS between the transmission and reception cannot be ignored. Assuming that τ_mi^* is the delay between the moment the ith hydrophone receives the echo and the moment the transponder transmits the signal, the position of the ith hydrophone at the receiving moment is given by the following: where d_i is the distance from the ith hydrophone to the transponder and M_r is a rotation matrix given in [14]. According to the location of P and the ith hydrophone at the receiving moment, the distance from P to the ith hydrophone is as follows: The exact propagation distance of the sound from transmission to reception is the sum of R_T^*(t; x_0, r) and R_Ri^*(t; x_0, r), which is expressed as follows: This is referred to as the exact range history. However, because the echo signal received by the ith hydrophone has propagated for τ_mi^*, the exact range history R_mi^*(t; x_0, r) can be expressed as follows: where c is the speed of underwater sound. By combining (4) and (5), the solution for τ_mi^* is given as follows: where A, B_mi, and C_mi are represented as follows:
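The paper solves for τ_mi^* in closed form via the coefficients A, B_mi, and C_mi. As an illustration of the same implicit relation (the receiver keeps moving while the echo is in flight), the delay can also be found numerically by fixed-point iteration; the sketch below uses our own hypothetical names and a generic receiver-trajectory callback, not the paper's coefficients.

```python
import math

def two_way_delay(tx_pos, rx_pos_at, target, c=1500.0, iters=10):
    """Solve the implicit two-way delay  tau = (|tx - P| + |rx(tau) - P|) / c
    by fixed-point iteration.  `rx_pos_at(tau)` returns the receiving
    hydrophone's position after the platform has moved for `tau` seconds;
    the stop-and-hop value 2|tx - P|/c serves as the initial guess."""
    r_t = math.dist(tx_pos, target)
    tau = 2.0 * r_t / c
    for _ in range(iters):
        tau = (r_t + math.dist(rx_pos_at(tau), target)) / c
    return tau
```

Because the platform moves only centimetres during the round trip, the iteration converges in a handful of steps, and the result differs from the stop-and-hop guess by exactly the small correction the exact model captures.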

Exact Motion Error
Because yaw and pitch, particularly yaw, can change the direction of the SAS beam and cause the squint phenomenon, we selected the squint model as the ideal model for the multi-hydrophone SAS without loss of generality. Assuming that the means of the yaw and pitch are θ_y0 and θ_p0, respectively, the ideal range history is given in [15] and is expressed as follows: where The exact motion error is the difference between the exact range history and the ideal range history and is obtained by subtracting (10) from (5) as follows: It can be noted from (13) that this motion error is a 2-D SV and hydrophone-dependent term because its size is related to the target position in the beam and the hydrophone position in the acoustic array.

Sub-Beam Range History
The SAS echo can be viewed as the sum of all sub-beam echoes, as shown in Figure 2. Assuming that the number of sub-beams is K and the squint angle of the kth sub-beam is θ_k, the beam center of the kth sub-beam crosses target P at the moment t_k = x_0/v + r tan θ_k/v. Because the sub-beam beamwidth is very narrow, the motion error of all targets illuminated by the sub-beam can be replaced by the motion error of the targets at the beam center of the sub-beam. By substituting t_k into (13), the approximate motion error of the kth sub-beam can be obtained as follows:
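The sub-beam extraction later in the paper relies on the mapping between the azimuth angle and the Doppler frequency; for the monostatic-equivalent geometry this is the standard relation f_a = 2v sin θ / λ. A small sketch (function names and the example parameters are ours):

```python
import math

def doppler_centroid(v: float, wavelength: float, theta_deg: float) -> float:
    """Doppler frequency corresponding to azimuth (squint) angle theta,
    using the standard monostatic relation f_a = 2 v sin(theta) / lambda."""
    return 2.0 * v * math.sin(math.radians(theta_deg)) / wavelength

def subbeam_band(v, wavelength, theta_deg, beamwidth_deg):
    """Doppler centre frequency and bandwidth of a narrow sub-beam steered
    to theta; the band edges come from the two beam-edge angles."""
    lo = doppler_centroid(v, wavelength, theta_deg - beamwidth_deg / 2.0)
    hi = doppler_centroid(v, wavelength, theta_deg + beamwidth_deg / 2.0)
    return (lo + hi) / 2.0, hi - lo
```

With this mapping, choosing K sub-beam steering angles directly fixes the K Doppler sub-bands from which the sub-beam data are later cut.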
The range history of the sub-beam can be viewed as the sum of the ideal range history and the motion error; thus, the range history of the kth sub-beam can be expressed as follows: Considering that the expression of R_i^*(t; r) is very complex, it is necessary to simplify it. According to the description of the displaced phase center antenna (DPCA) technology [16], the multi-hydrophone SAS can be viewed as a monostatic hydrophone that transmits the signal and receives the echo at the phase center of the bistatic transponder/hydrophone pair. Thus, the range history R_i^*(t; r) can be considered as the sum of a single root term and a range-dependent term and is rewritten as follows: where R_i(t; r) is the single root term, which is equivalent to the range history of the monostatic hydrophone, and ∆R_i(r) is the range-dependent offset term, which is the offset distance attained by transforming a bistatic transponder/hydrophone pair into a monostatic hydrophone. R_i(t; r) and ∆R_i(r) are given as follows: where θ_sq0 is the squint angle.
It can be noted from (22) and (14) that ∆R_i(r) and ε_i,k(r) are both range-dependent terms; thus, they can be integrated into one term for convenience, as expressed below,
referred to as the motion compensation term. Then, the range history of the kth sub-beam can be rewritten as follows: To evaluate the size of the path error caused by approximating R_i^*(t; r) into R_mi,k(t; r), a simulation was performed. We assume the acoustic array shown in Figure 1b, where N and M are equal to 25 and 0, respectively. The system parameters of the SAS are shown in Table 1, and the motion parameters are shown in Figure 3, where θ_y, θ_p, y_s, and z_h vary randomly with the pulses within given intervals, with z_h in [−0.1, 0.1] m. Considering that the beamwidth is a very important factor in determining the size of the 2-D SV motion error, this simulation was carried out for different beamwidths. To compare the results of different beamwidths under the same acoustic array and resolution, the SAS beamwidth was changed by adjusting the center frequency of the transmitted signal. Thus, there are three types of center frequencies, as listed in Table 1, whose wavelengths are λ1 (1.87 cm), λ2 (3.75 cm), and λ3 (7.5 cm) with beamwidths of 6°, 15°, and 24°, respectively. Without loss of generality, the hydrophone at the farthest distance from the transponder was selected, and the maxima of θ_y, θ_p, y_s, and z_h were used to calculate the motion error. In the proposed method, the total numbers of sub-beams for beamwidths of 6°, 15°, and 24° are 20, 40, and 55, respectively. In the conventional algorithm, the motion errors of all targets illuminated by the SAS beam are replaced by the motion errors of the targets on the center line of the SAS beam. The results of the two algorithms were measured in terms of the wavelength (λ), as shown in Figure 4.
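The acceptance criterion used in the comparison below (residual path error under 1/16 of a wavelength) can be expressed compactly; a sketch with our own helper names, using the paper's λ1 = 1.87 cm:

```python
import numpy as np

def max_error_in_wavelengths(path_error_m, wavelength_m):
    """Maximum absolute path error expressed in wavelengths."""
    return float(np.max(np.abs(path_error_m)) / wavelength_m)

def acceptable(path_error_m, wavelength_m, threshold=1.0 / 16.0):
    """The common focusing criterion: the residual path error should stay
    below 1/16 of a wavelength (0.0625 lambda)."""
    return max_error_in_wavelengths(path_error_m, wavelength_m) <= threshold
```

Applied to the paper's measured maxima, the conventional algorithm's residuals (e.g., 0.3058λ1) fail this test, while the proposed algorithm's residuals (e.g., 0.0104λ1) pass it.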
First, as shown in Figure 4a-c, the beamwidth has a significant influence on the size of the path error in the conventional algorithm. Using a path-error threshold of 0.0625 (1/16) wavelengths, we measured the maximum path error in Figure 4a-c. The results were 0.3058λ1, 0.5878λ2, and 0.8919λ3, respectively, which are significantly larger than 0.0625 wavelengths. Therefore, it can be concluded that the size of the path error in the conventional algorithm has a significant effect on the imaging results after MOCO.
Next, it can be observed from Figure 4d-f that the beamwidth has little influence on the size of the path error. The main reason for this beneficial result is that the beamwidth of the central sub-beam is sufficiently narrow that the path error is small when the motion errors of all targets observed by the sub-beam are replaced by the motion errors of the targets at the beam center of the sub-beam. Theoretically, the farther the sub-beam is from the beam center of the SAS, the greater the path error of the sub-beam. Thus, without loss of generality, we selected the sub-beam at the SAS beam edge for analysis. In Figure 4, the images in the second and third rows are the path error results of the central and edge sub-beams at different beamwidths, respectively. Comparing the two rows, it is clear that the edge sub-beams have a greater path error than the central sub-beams. To evaluate the size of the path error of these sub-beams quantitatively, we measured the maximum path error in Figure 4d-i. The results were 0.0104λ1, 0.0052λ2, 0.0059λ3, 0.0461λ1, 0.0527λ2, and 0.0538λ3, respectively. Although the results show that the path errors of the edge sub-beams are evidently larger than those of the central sub-beams, they are all less than the threshold of 0.0625 wavelengths. Therefore, it can be concluded that the size of the path error in the proposed algorithm has no effect on the imaging result after MOCO.

Signal Model
The echo signal of the ith hydrophone can be considered the sum of the echo signals collected by each sub-beam and is expressed as follows: s_i,k(τ, t; r) is the echo signal collected by the kth sub-beam of the ith hydrophone after demodulation and is expressed as follows: where ω_r(·) represents the envelope of the transmitted signal, ω_az(·) represents the beam pattern of the transponder and a hydrophone, τ is the fast time, K_r is the FM rate of the transmitted signal, and f_0 is the center frequency of the transmitted signal.
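A minimal numeric sketch of this demodulated-echo model follows; the envelope and beam-pattern weights ω_r and ω_az are set to 1 for brevity, and the default parameter values are illustrative (not those of Table 1).

```python
import numpy as np

def subbeam_echo(tau, R, c=1500.0, Kr=2.0e10, f0=1.6e5):
    """Demodulated baseband echo of one point target seen through one
    sub-beam: a chirp of FM rate Kr delayed by the two-way path R
    (sum of transmit and receive distances), times the carrier phase
    exp(-j 2 pi f0 R / c).  Envelope weights omitted for brevity."""
    delay = R / c                 # two-way propagation delay
    dt = tau - delay              # fast time relative to the echo arrival
    return np.exp(1j * np.pi * Kr * dt**2 - 2j * np.pi * f0 * delay)
```

The azimuth-dependent phase term exp(-j 2π f_0 R/c) is what the motion error perturbs, which is why even centimetre-level path errors matter at these wavelengths.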

Development of the Motion Compensation
The proposed algorithm contains the range Fourier transformation, phase and delay correction for the sub-beam, azimuth Fourier transformation, azimuth spectrum replication, coherent addition for each hydrophone, Doppler spectrum division, monostatic imaging, azimuth inverse Fourier transformation, and coherent addition of the imaging result of each sub-beam, as shown in Figure 5. The first step is to transform s_i(τ, t; r) into the range frequency domain. Here, the principle of stationary phase (POSP) [17] is used to perform the range Fourier transformation on s_i(τ, t; r). The result of the range Fourier transformation is as follows:

where W_r(·) represents the spectral envelope of the transmitted signal, and f_r is the range frequency. The second step is to compensate for the sub-beam motion error by phase correction and delay correction, hydrophone by hydrophone. The phase correction is performed on S_i(f_r, t; r) with a phase multiplication, where the factor for the phase multiplication is given by the following: where the subscript l represents the lth sub-beam. Because ξ_i,l(t; r) is weakly dependent on the range, the delay caused by ξ_i,l(t; r) can be considered the same as the delay along the range, which can be replaced by the delay at the reference range. According to the Fourier transform shifting property, this constant delay can be corrected by performing a phase multiplication on S_i(f_r, t; r), where the factor for this phase multiplication is given as follows: After multiplying (27) with (28) and (29) separately, the signal of the ith hydrophone is expressed as follows: The third step is the azimuth Fourier transformation. Here, we again utilize the POSP to perform the azimuth Fourier transform on S_i,l(f_r, t; r). The result of the azimuth Fourier transform is as follows: where D is the range migration factor and is represented as follows: The fourth step is the azimuth spectrum replication. The 2-D spectrum of each hydrophone in (31) suffers from serious azimuth undersampling owing to the working principle of the multi-hydrophone SAS. Thus, it is difficult for the multi-hydrophone SAS to obtain a 2-D spectrum without azimuth aliasing before extracting the 2-D spectrum of the sub-beam from (31). The method of azimuth spectrum replication is proposed to solve this problem. This method includes two main steps. The first is to replicate the undersampled 2-D spectrum of each hydrophone N + M + 1 times. The second is to arrange the replicated 2-D spectra end-to-end in azimuth. Although azimuth spectrum replication cannot suppress azimuth aliasing, it provides an effective way to deal with the undersampled 2-D spectrum in azimuth.
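The delay correction in the second step is a direct application of the Fourier shift property: a multiplication by a linear phase ramp in the range-frequency domain shifts the signal in fast time. A minimal demonstration with a synthetic pulse (helper name and parameters are ours):

```python
import numpy as np

def remove_delay(signal, fs, delay_s):
    """Remove a known time delay by linear phase multiplication in the
    frequency domain (Fourier shift theorem) -- the mechanism behind the
    sub-beam delay correction at the reference range."""
    n = signal.size
    fr = np.fft.fftfreq(n, d=1.0 / fs)                # range frequencies
    spectrum = np.fft.fft(signal)
    # exp(+j 2 pi f tau) advances the signal by tau seconds
    return np.fft.ifft(spectrum * np.exp(2j * np.pi * fr * delay_s))

# shift a pulse by 3 samples, then undo the delay exactly
fs = 1000.0
x = np.zeros(64)
x[10] = 1.0
delayed = np.roll(x, 3)                                # delay of 3/fs seconds
restored = remove_delay(delayed, fs, 3.0 / fs)
```

Because the correction is applied as a phase factor, it composes naturally with the phase-error correction into the two multiplications described in the text.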
The fifth step is to coherently add the signal of each hydrophone. To remove the azimuth aliasing caused by the azimuth undersampling of each hydrophone, a method that coherently adds the corrected signal of each hydrophone is utilized as follows: where The result is given as follows: where B_a,l and f_a,l denote the Doppler bandwidth and Doppler center frequency of the lth sub-beam, respectively. From (35), we can see that only the motion error corresponding to the lth sub-beam is effectively compensated. The sixth step is to extract the sub-beam's signal from the 2-D spectrum of the SAS. B_a,l and f_a,l of the lth sub-beam are calculated by utilizing the mapping relationship between the azimuth view angle and the Doppler frequency. According to the values of B_a,l and f_a,l, the signal of the lth sub-beam can be obtained from (35) and is extracted as shown in (36). The seventh step is to perform the imaging algorithm on the sub-beam signal. The fifth step is essentially a process in which a multi-hydrophone signal is transformed into a monostatic signal; thus, any monostatic imaging algorithm, such as the range-Doppler algorithm [18], the chirp scaling algorithm [19], the ωK algorithm (ωKA) [20], or their modifications, can be performed on the sub-beam signal to obtain the imaging result of each sub-beam. The imaging algorithm mainly includes range cell migration correction, range compression, secondary range compression, azimuth matched filtering, and azimuth inverse Fourier transformation. However, to match the pixel grid of the scene, the data in (36) must be zero-padded in azimuth before the azimuth inverse Fourier transformation. In addition, to obtain the imaging results of the sub-beams other than the lth sub-beam, steps 2 to 7 need to be executed repeatedly.
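The azimuth spectrum replication described in the fourth step above amounts to a tile operation in the azimuth dimension; a sketch (our own names, with `spectrum` laid out as range bins × azimuth bins):

```python
import numpy as np

def replicate_azimuth_spectrum(spectrum, n_copies):
    """Replicate an azimuth-undersampled 2-D spectrum end-to-end in
    azimuth (the N + M + 1 copies of the fourth step), extending the
    Doppler axis before sub-beam extraction."""
    return np.tile(spectrum, (1, n_copies))

S = np.arange(6, dtype=complex).reshape(2, 3)     # toy 2-D spectrum
S_ext = replicate_azimuth_spectrum(S, 4)          # four copies in azimuth
```

As the text notes, replication does not remove aliasing by itself; the subsequent coherent addition over hydrophones is what cancels the ambiguous components.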
The eighth step is to coherently add the imaging results of all the sub-beams. Because each sub-beam's Doppler bandwidth is narrow, its imaging result has low resolution. To obtain a high-resolution imaging result, it is necessary to coherently add every sub-beam's imaging result.
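Why coherent addition restores full resolution follows from the linearity of the inverse Fourier transform: splitting the azimuth spectrum into contiguous sub-bands, inverting each separately, and summing the complex results reproduces the full-band inversion. A toy demonstration (our own construction, one-dimensional for brevity):

```python
import numpy as np

def coherent_subband_sum(spectrum, n_sub):
    """Split a spectrum into n_sub contiguous sub-bands (zero elsewhere),
    invert each band separately (a low-resolution 'sub-image'), and
    coherently add the results."""
    n = spectrum.size
    total = np.zeros(n, dtype=complex)
    for k in range(n_sub):
        band = np.zeros(n, dtype=complex)
        sl = slice(k * n // n_sub, (k + 1) * n // n_sub)
        band[sl] = spectrum[sl]            # keep only this sub-band
        total += np.fft.ifft(band)         # low-resolution contribution
    return total

x_full = np.random.default_rng(0).standard_normal(32) + 0j
restored_sum = coherent_subband_sum(np.fft.fft(x_full), 4)
```

By linearity, `restored_sum` equals the full-band inverse transform of the spectrum, i.e., the original signal; the same argument underlies recombining the sub-beam images.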

Imaging Results of Simulation Data
To evaluate the performance of the proposed MOCO algorithm, a simulation was performed. The system parameters are shown in Table 1, the SAS motion error is shown in Figure 3, and the scene illuminated by the sonar contains five ideal point targets, which are assumed to be located at positions P1 (−3 m, 152 m, 30 m), P2 (−3 m, 158 m, 30 m), P3 (3 m, 158 m, 30 m), P4 (3 m, 152 m, 30 m), and P5 (0 m, 155 m, 30 m), respectively. In the simulation, the exact range history of each target is given by (6), and to avoid new path errors caused by the imaging algorithm, the ωKA is utilized as the monostatic imaging algorithm in the seventh step. The results of the proposed algorithm and the conventional algorithm are shown in Figure 6.
From Figure 6a-c, it can be observed that the sidelobe energy and the azimuth mainlobe width of the imaging results of the conventional algorithm become stronger and wider, respectively, as the beamwidth increases. Although the imaging results of the proposed algorithm exhibit similar problems, the increases in sidelobe level and mainlobe width are smaller than those of the conventional algorithm, as shown in Figure 6d-f. Next, to compare the imaging results in more detail, point P5 was extracted from the subfigures of Figure 6. Theoretically, the range image is obtained using matched filter technology, and the motion has a smaller effect on the range image than on the azimuth image. Thus, for convenience, we only examined the azimuth slice of point P5, as shown in Figure 7. Then, the impulse response width was measured and compared with the theoretical resolution of 8 cm. From Table 2, it can be observed that the mainlobe broadening of the conventional algorithm is 8.9%, 24.9%, and 87.3% at beamwidths of 6°, 15°, and 24°, respectively, while that of the proposed algorithm is only 1.2%, 3.6%, and 11.8%. These results show that the imaging result of the proposed algorithm is better than that of the conventional algorithm and is close to the theoretical result. Therefore, the validity of the proposed algorithm was verified through simulation.
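The impulse-response-width comparison above can be reproduced mechanically with a simple −3 dB width estimator; the helpers below are our own sketch, demonstrated on an ideal sinc-shaped azimuth response rather than the paper's data.

```python
import numpy as np

def mainlobe_width(profile, dx):
    """-3 dB impulse-response width of an azimuth slice `profile`
    (linear magnitude) with sample spacing dx in metres, estimated by
    counting samples at or above half power."""
    p = np.abs(profile) ** 2
    above = p >= 0.5 * p.max()
    return np.count_nonzero(above) * dx

def broadening_percent(measured, theoretical):
    """Mainlobe broadening relative to the theoretical resolution."""
    return 100.0 * (measured - theoretical) / theoretical

# ideal sinc response whose -3 dB width is roughly 0.886 * 0.08 m
x_axis = np.linspace(-1.0, 1.0, 2001)
w = mainlobe_width(np.sinc(x_axis / 0.08), x_axis[1] - x_axis[0])
```

For example, a measured width of 8.71 cm against the 8 cm theoretical resolution corresponds to a broadening of about 8.9%, matching the way Table 2's percentages are computed.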


Imaging Results of Real Data
In this section, the performance of the proposed algorithm is evaluated by comparing the imaging results of different algorithms. The real data were sourced from the South China Sea and were collected using a multi-hydrophone SAS in 2017. The imaging results with no motion compensation, the conventional algorithm, and the proposed algorithm are shown in Figure 8a-c, respectively, where the number of sub-beams in the proposed algorithm is seven.
It can be observed that the image in Figure 8b is clearer and shows more detailed information than the image in Figure 8a; thus, the existing algorithm is effective. However, the imaging result of the far scene in Figure 8b is worse than that of the near scene. This result indicates that there is still a large path error that is not compensated for by the conventional algorithm. By comparing Figure 8b with Figure 8c, it can be seen that the imaging result in Figure 8c is better than that in Figure 8b, in both the far scene and the near scene. Therefore, the validity and superiority of the proposed algorithm are verified.


Conclusions
A 2-D SV motion compensation algorithm for a multi-hydrophone SAS based on sub-beam compensation was proposed in this study. To the best of our knowledge, such an algorithm had not yet been reported. The main contributions of this study are as follows: (1) An exact motion error model for the multi-hydrophone SAS, which includes five degrees of freedom, namely roll, pitch, yaw, sway, and heave, is newly established.
(2) The ability of the proposed algorithm to compensate for the 2-D SV motion error with hydrophone dependency for the multi-hydrophone SAS is demonstrated. (3) The ability of the proposed algorithm to compensate for low-frequency and high-frequency motion errors simultaneously is also shown. Moreover, the algorithm can be applied to multi-channel SARs, such as the spaceborne SAR. However, the proposed algorithm presupposes that the motion parameters of the platform are accurately measured or estimated.


Figure 1 .
Figure 1. Geometric model of the multi-hydrophone SAS. (a) Model of range history; (b) array of hydrophones.

Figure 2 .
Figure 2. Relationship between the Doppler frequency and sub-beam.


Figure 4 .
Figure 4. (a-c) are the path error results caused by the conventional algorithm for the beamwidths of 6 • , 15 • , and 24 • , respectively; (d-f) are the path error results of the central sub-beam caused by the proposed algorithm for the beamwidths of 6 • , 15 • , and 24 • , respectively; (g-i) are the path error results of the edge sub-beam caused by the proposed algorithm for the beamwidths of 6 • , 15 • , and 24 • , respectively.

Figure 5 .
Figure 5. Flow chart of the proposed algorithm.


Figure 7 .
Figure 7. Azimuth slice of the imaging results of the conventional algorithm and the proposed algorithm at different beamwidths. (a) 6°, (b) 15°, and (c) 24°.

Figure 8 .
Figure 8. The imaging results. (a) No motion compensation; (b) the conventional algorithm; (c) the proposed algorithm.


Table 2 .
The imaging quality of the results of the conventional algorithm and the proposed algorithm.