Trajectory Deviation Estimation Method for UAV-Borne Through-Wall Radar

Abstract: Mini-unmanned aerial vehicles (mini-UAVs) are emerging as a promising platform for through-wall radar to sense enclosed spaces in cities, especially high-rise buildings, owing to their excellent maneuverability. However, due to unavoidable environmental interference such as airflow, mini-UAVs are prone to trajectory deviation, which degrades their sensing accuracy. Most existing approaches model the impact of trajectory deviation as a polynomial phase error on the received signal, which cannot fit the space-variant motion error well. Moreover, the large trajectory deviations of UAVs introduce an unavoidable envelope error. This article proposes an autofocusing algorithm based on the back projection (BP) image, which directly estimates the trajectory deviations between the actual and measured track. Thus, the problem of the 2D space variability of the motion error can be circumvented. The proposed method consists of two main steps. First, we estimate the trajectory deviation in the line-of-sight (LOS) direction by exploiting the underlying linear property of the wall embedded in the BP imaging result. Then, the estimated trajectory deviation in the LOS direction is compensated for to obtain an updated BP image, followed by a Particle Swarm Optimization (PSO) approach that estimates the trajectory deviation along the track by focusing the targets behind the wall. Simulations and practical experiments show that the proposed algorithm can accurately estimate severe trajectory deviations larger than the range resolution, greatly improving the sensing robustness of UAV-borne through-wall radar.


Introduction
Mini-UAVs equipped with lightweight frequency-modulated continuous wave (FMCW) radar are becoming promising candidates for near-range sensing [1,2] and high-rise building [3,4] detection thanks to their maneuverability and flexibility. However, many factors such as ambient airflow disturb the trajectory of mini-UAVs severely, preventing them from sampling the data uniformly along a straight line. Therefore, the disturbance of the airflow on the trajectory has to be estimated and compensated for prior to imaging, tracking, etc. Otherwise, the trajectory deviation will induce extra errors in the envelope and phase of the received signal, defocusing the imaging results to different extents [5,6].
Many efforts have been made to estimate and compensate for the motion errors of synthetic aperture radar (SAR). Technically, the existing work can be classified into two categories, i.e., parametric and non-parametric methods. A parametric algorithm mainly converts the estimation of the phase error caused by trajectory deviations into solving the parameters embedded in different phase error models, such as limited polynomial coefficients [7-11] and discrete cosine transform (DCT) coefficients [12,13]. For example, the map drift (MD) algorithm [7] first constructs a second-order polynomial phase error model.
Then, it divides the aperture into two subapertures and transforms each subaperture into the image domain via the Fourier transform (FT). Finally, the quadratic phase error (QPE) coefficients are estimated from the drift between the subimages. Similarly, the phase difference autofocus (PDA) algorithm [8] obtains the QPE coefficients from the FT of the phase difference function built from the subaperture data. Since MD [7] and PDA [8] consider the second-order polynomial error only, their performance degrades in practice when more complex, higher-order errors are present. To improve the accuracy, N-subaperture-based methods have been studied to estimate the coefficients of an Nth-order polynomial error model. For example, the multiple aperture mapdrift (MAM) algorithm [7,9,10] divides the range-compressed data into N subapertures and estimates the Nth-order polynomial error coefficients through the drifts between the N(N − 1)/2 pairs of subapertures. A pseudoinverse matrix of N(N − 1)/2 × (N − 1) dimensions is calculated to obtain the coefficients. In this way, the complexity increases with the number of subapertures. In addition, for these algorithms utilizing the drift between subapertures, a small subaperture causes poor azimuth resolution and a low signal-to-clutter-noise ratio (SCNR). They also require range bins containing strong scatterers with a high SNR to enable high correlation between subapertures. The contrast optimization autofocus (COA) algorithm [11] employs the image contrast as the optimization function to estimate the quadratic phase error coefficients, which can handle scenarios lacking strong scatterers. As introduced, parameter-based algorithms need to specify the highest order of the phase error model; otherwise, the estimation will be wrong. When the motion errors take a complex form, parameter-based algorithms cannot fit them well. In addition, the calculation complexity increases with the error order as well.
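To make the map-drift principle concrete, the following toy Python sketch (our own illustration, not code from the cited works) recovers a purely quadratic phase error from the spectral drift between two half-aperture sub-signals of a single dominant scatterer; all parameter values and names are illustrative:

```python
import numpy as np

# Map-drift sketch: recover a quadratic phase error phi(t) = b*t^2 from the
# spectral drift between two half-aperture sub-signals.
fs, T = 1024.0, 1.0                       # sampling rate (Hz), aperture time (s)
t = np.arange(int(fs * T)) / fs - T / 2   # slow time centred on zero
b_true = 2 * np.pi * 100.0                # quadratic coefficient (rad/s^2)
sig = np.exp(1j * b_true * t**2)          # azimuth signal carrying the error

half = len(t) // 2
pad = 8 * half                            # zero-padding refines the frequency grid
S1 = np.fft.fftshift(np.abs(np.fft.fft(sig[:half], pad)))  # first sub-aperture
S2 = np.fft.fftshift(np.abs(np.fft.fft(sig[half:], pad)))  # second sub-aperture

# The sub-aperture centres sit at -T/4 and +T/4, so their Doppler centroids
# differ by drift = b*T/(2*pi) Hz; read the drift off the correlation peak.
corr = np.correlate(S2, S1, mode="full")
lag = np.argmax(corr) - (len(S1) - 1)
drift_hz = lag * fs / pad
b_est = 2 * np.pi * drift_hz / T          # estimated quadratic coefficient
```

In a real MD implementation the correlation is taken between sub-images of scene content rather than raw spectra, but the drift-to-coefficient relation is the same.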
Instead of modeling the error and solving for the related coefficients, non-parametric algorithms do not rely on a phase error model. They are applicable for estimating and eliminating arbitrary-order phase errors higher than one. In 1994, a robust phase gradient autofocus (PGA) algorithm [14] was proposed for spotlight SAR. It reduces the phase error over multiple iterations by exploiting its statistical properties. However, the maximum likelihood (ML) estimator in PGA is derived under the assumption that the clutter in each range bin follows a white Gaussian distribution, which does not always hold in many practical scenarios. The weighted least squares (WLS) algorithm [15] requires no assumption on the clutter model. It estimates the phase error from multiple range bins through a weighted least squares criterion. Both the PGA and WLS autofocus algorithms rely on isolated strong scatterers with a high signal-to-clutter-noise ratio (SCNR). More autofocus algorithms [16-18] have also been studied that maximize the image sharpness without special requirements on strong scatterers.
Although a great deal of research has been devoted to estimating the space-invariant phase error, the estimation of the 2D space-variant motion error must be studied for near-range detection using wide-beam radar. Towards this end, many approaches have been proposed to estimate the motion errors under the wide-beam assumption [19-24]. In [19], the geometric error is reconstructed using six parameters and solved by the maximum intensity correlation criterion, which results in a huge computational burden. There are also algorithms with lower computational complexity. By subdividing an image or aperture, the MD algorithm is applied to realize local autofocusing and estimate the space-invariant local phase errors [20]. Then, the local phase errors are concatenated into a full space-variant phase error vector. In [21], an improved weighted-phase gradient autofocus (WPGA) is first applied for local autofocusing to obtain an estimate of the phase errors in the azimuth-time domain. Subsequently, the estimated phase errors are converted into a trajectory deviation of a subaperture through the WLS method. Finally, the trajectory deviations of multiple subapertures are fused into trajectory deviations corresponding to the full aperture. Different from [21], autofocusing of local BP images is explored to estimate the 2D space-variant motion error in [22,23]. However, BP-based autofocusing algorithms suffer from the phase wrapping problem. Reference [24] presents an algorithm for jointly estimating the 2D space-variant motion error in three steps: estimating the space-invariant motion error, estimating the azimuth-variant motion error in one range block, and estimating the range-variant motion error. Due to the variation in the Doppler centroid frequency of mini-UAV-borne radar with respect to the azimuth time, the imaging scenarios for different subapertures in the azimuthal frequency domain are different. As a result, the errors corresponding to different subapertures do not ideally match the azimuthal space-variant errors. To conclude, for the algorithms utilizing local autofocus, the process of fusing multiple local phase errors into the ultimate phase error may induce additional errors in practice. Moreover, the block size of the images or subapertures must be well designed. A small block or aperture results in poor azimuth resolution, a low SNR, heavy calculation complexity, etc. On the contrary, a large block or subaperture can increase the residual space-variant phase errors, since the motion error no longer meets the assumption of local 2D space invariability.
The 2D space variability of the motion error becomes more obvious in through-wall detection by UAV-borne radar. The main reason is that through-wall radar usually works at a low center frequency (L/S band), with wavelengths ranging from 7.5 cm to 30 cm, to maintain its penetrating ability. Moreover, subject to the load capacity of UAVs, the antenna size of UAV-borne radar is limited. The small size and low center frequency of the antenna result in a wide beam width [25-28]. In this case, multiple targets may be distributed at different locations within one beam width. Moreover, the trajectory deviation of UAVs can be larger than the range resolution, rendering a severe envelope error. To the best of our knowledge, most conventional motion estimation and compensation algorithms are not applicable in this case.
This article proposes an autofocus algorithm for estimating the trajectory deviation of mini-UAV-borne through-wall radar, which can estimate both the envelope and the phase error. The optimization algorithm is based on BP imaging results containing both the wall and the targets behind it. Given the linear characteristic of the wall in the BP image, it behaves like a peak after the Radon transformation (i.e., in the R_f domain). In this sense, by optimizing the contrast of the image in the R_f domain, the trajectory deviation in the line-of-sight direction is estimated through the Particle Swarm Optimization (PSO) method. Following the compensation in the line-of-sight direction, the trajectory deviation along the track is estimated based on the contrast of the new BP image of the targets, now free of line-of-sight deviation. The proposed algorithm can deal with both the envelope error and the phase error, making it suitable for estimation and compensation when the trajectory deviation is much larger than the range resolution. Consequently, the algorithm can achieve estimation accuracy at the level of the range resolution. The performance of the algorithm is validated by simulation and experimental results.
This article is organized as follows. Section 2 illustrates the theoretical foundations and mathematical models of motion error estimation. Section 3 elaborates on the trajectory deviation estimation algorithm. In Section 4, the simulation and experimental results are analyzed to validate the proposed algorithm. Section 5 offers additional insight into the proposed algorithm. Finally, Section 6 concludes the paper.

UAV-Based Signal Model for Through-Wall Radar

2.1. Propagation Model of Electromagnetic Waves in a Through-Wall Scenario
After being transmitted by the radar, electromagnetic waves refract as they propagate through the wall. The approximate path after refraction can be obtained by calculating the approximate refraction point on the wall surface. As shown in Figure 1a, the radar T_x is located at coordinates (X_1, Y_1), while the target T_r behind the wall is positioned at (X_2, Y_2). The solid rectangle denotes the actual position of the wall, while the dashed rectangle represents the equivalent wall position, closely adjacent to the radar. The refraction points for electromagnetic waves incident into the wall from free space and transmitted from the rear surface of the wall are denoted as points A and B, respectively. For the equivalent wall position, only the equivalent refraction point C for electromagnetic waves transmitted from the rear surface of the wall needs to be considered. θ_i and θ_t are the incidence angle and the refraction angle, respectively. The slant range of the target remains constant regardless of the distance between the radar and the wall [29], i.e., ∥AT_x∥ + ∥AB∥ + ∥BT_r∥ = ∥CT_x∥ + ∥CT_r∥. Therefore, the slant range can be calculated under the assumption that the antenna is in proximity to the dashed wall. From Snell's law, we have sin θ_i = √ε sin θ_t, where ε is the relative dielectric constant of the wall. The one-way propagation path then follows from the geometric approximation [29] as a function of ε and the wall thickness d. The calculation of the refraction point C on the internal surface of the wall is omitted by replacing the angle θ_i with the angle θ_q between the straight line T_xT_r and the normal vector.
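As an illustration, the equivalent-wall slant-range computation can be sketched as follows. The extra-path term d(√(ε − sin²θ_q) − cos θ_q) is the standard slab approximation and is assumed here to match the geometric approximation of [29]; the function name and geometry conventions are ours:

```python
import numpy as np

def through_wall_range(tx, tr, eps, d):
    """One-way slant range with a wall of relative permittivity eps and
    thickness d between radar tx and target tr (2D points, wall normal
    along the y-axis). The incidence angle is replaced by theta_q, the
    angle of the straight line tx->tr to the wall normal."""
    tx, tr = np.asarray(tx, float), np.asarray(tr, float)
    los = np.linalg.norm(tr - tx)               # straight-line distance
    sin_q = abs(tr[0] - tx[0]) / los            # sin(theta_q)
    cos_q = abs(tr[1] - tx[1]) / los            # cos(theta_q)
    # Extra electrical path introduced by the wall slab (slab approximation):
    delta = d * (np.sqrt(eps - sin_q**2) - cos_q)
    return los + delta

# Free-space sanity check: eps = 1 must add no extra path.
r0 = through_wall_range((0.0, 0.0), (1.0, 3.0), 1.0, 0.12)
```

For ε = 1 the extra term vanishes and the slant range reduces to the free-space distance, which is a quick consistency check on the approximation.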

Signal Model of Trajectory Deviation
Figure 1b illustrates the trajectory deviation of a mini-UAV. The radar moves along the X-axis with the irradiation direction of the beam perpendicular to the wall. We denote the Y-axis as the line-of-sight (LOS) direction. The red and blue lines represent the actual and measured trajectories, respectively. The Dryden model is employed to generate a turbulent wind field, in which the trajectory deviation of the UAV turns out to follow a sinusoid-like pattern in the presence of wind disturbance [30]. As a result, the trajectory deviation of the UAV is mimicked by a sinusoidal curve.
At a given azimuth sampling time, the measured radar position is P_t,m(x_m, y_m, z_m), while the actual position is P_t,a(x_a, y_a, z_a) = (x_m + ∆x, y_m + ∆y, z_m + ∆z), where ∆x, ∆y, and ∆z are the deviations between the actual and measured trajectories in the X, Y, and Z directions, respectively. The measured and actual slant ranges of target T_2(x_t, y_t, z_t) are r_m and r_a, respectively. The slant range error is ∆R = r_a − r_m, as shown in Equation (5). The slant range error varies for targets at different azimuth and range bins within the beam illumination area, which is called the 2D space-variant error.
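The space variance of the slant range error can be demonstrated numerically: one and the same deviation vector produces different errors for targets in different range and azimuth bins. The coordinates below are illustrative:

```python
import numpy as np

def slant_range_err(p_meas, dev, target):
    """Slant-range error Delta_R = r_a - r_m for one target, given the
    measured radar position p_meas and the trajectory deviation dev."""
    p_meas, dev, target = map(np.asarray, (p_meas, dev, target))
    r_m = np.linalg.norm(target - p_meas)           # measured slant range
    r_a = np.linalg.norm(target - (p_meas + dev))   # actual slant range
    return r_a - r_m

# The same deviation yields different errors for targets in different
# range/azimuth bins -- the 2D space-variant error.
dev = (0.1, 0.3, 0.0)                                # (dx, dy, dz) in metres
e1 = slant_range_err((0, 0, 0), dev, (2.0, 5.0, 0.0))
e2 = slant_range_err((0, 0, 0), dev, (-3.0, 8.0, 0.0))
```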
Assuming that the radar transmits an FMCW signal, the actual echo of target T_2 is given in Equation (6). The range-compressed signal is obtained by applying dechirp processing and the fast Fourier transform (FFT) to the echo in Equation (6), with the instantaneous range frequency f_r = −2kr_a/c, as shown in Equation (7). Considering that the trajectory is a curve with large deviations, the BP algorithm is applied for imaging in this situation because it is effective under an arbitrary array configuration. In the presence of the slant range error caused by the trajectory deviation, the pixel value I(T_2, P_t,m) corresponding to target T_2 at sampling point P_t,m is given in Equation (8), where H represents the phase compensation function. According to Equation (8), the envelope and phase errors arising from the slant range error ∆R bias the projection of the range-compressed signal into the image domain. The imaging result is defocused correspondingly.
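The dechirp, range-compression, and back-projection steps can be sketched end to end for a single point target. All radar parameters below are illustrative, and the nearest-bin projection is a simplification of the interpolation used in practice:

```python
import numpy as np

c = 3e8
f0, B, Tp = 2e9, 1e9, 1e-3            # carrier, sweep bandwidth, sweep time
k = B / Tp                             # chirp rate (Hz/s)
fs = 2e5
Ns = int(fs * Tp)                      # beat samples per sweep
t = np.arange(Ns) / fs

apert = np.linspace(-1.0, 1.0, 64)     # radar x-positions along the track
target = np.array([0.3, 6.0])          # point target (x, y) in metres

# Dechirped (beat) echoes: beat frequency 2*k*r/c, carrier phase 4*pi*f0*r/c.
echoes = np.empty((len(apert), Ns), complex)
for n, xa in enumerate(apert):
    r = np.hypot(target[0] - xa, target[1])
    echoes[n] = np.exp(1j * (2 * np.pi * (2 * k * r / c) * t
                             + 4 * np.pi * f0 * r / c))

pad = 4096                                        # zero-padded range FFT
rc = np.fft.fft(echoes, pad, axis=1)              # range compression
dr = (fs / pad) * c / (2 * k)                     # metres per FFT bin

# Back projection: for each pixel, sum the range-compressed samples after
# compensating the carrier phase with H = exp(-1j*4*pi*f0*r/c).
xs = np.linspace(-0.5, 0.5, 21)
ys = np.linspace(5.5, 6.5, 21)
img = np.zeros((len(ys), len(xs)), complex)
for i, y in enumerate(ys):
    for j, x in enumerate(xs):
        r = np.hypot(x - apert, y)                # ranges to all positions
        bins = np.round(r / dr).astype(int) % pad  # nearest range bins
        img[i, j] = np.sum(rc[np.arange(len(apert)), bins]
                           * np.exp(-1j * 4 * np.pi * f0 * r / c))

peak = np.unravel_index(np.argmax(np.abs(img)), img.shape)
```

With an accurate trajectory the coherent sum peaks at the true target pixel; injecting a deviation into `apert` before back projection reproduces the envelope and phase errors described above.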

Estimation of Trajectory Deviation in the Line-of-Sight Direction
As depicted in Figure 2, the imaging result of the wall relies on the closest slant range, perpendicular to the wall. At sampling point P_t,a, the slant range error is also the trajectory deviation in the LOS direction, i.e., ∆R = r_a − r_m = ∆y. The Radon transformation integrates a straight line into a peak; it is therefore employed to evaluate the linear characteristics of the wall. For a function f(x, y), the Radon transform R_f(ρ, θ) is defined as the line integral along the line determined by ρ and θ, i.e., R_f(ρ, θ) = ∬ f(x, y) δ(x cos θ + y sin θ − ρ) dx dy. The value of each pixel in the R_f domain corresponds to a specific linear integration of the original BP imaging result. If there is a deviation in the trajectory, the wall in the BP image becomes defocused and twisted. The corresponding image in the R_f domain degrades from a desirable peak to a defocused spot, as depicted in Figure 3. The image contrast in the R_f domain is designed as the objective function for autofocus optimization. A higher contrast in the R_f domain indicates that the wall in the BP image is closer to a straight line, which means the trajectory position in the LOS direction is more accurate.
Equation (11) is equivalent to Equation (12) [31]. I(ρ, θ) represents the value of the pixel at point (ρ, θ) in the R_f domain. The image contrast is defined as the ratio of the standard deviation of the image intensity to its mean value. Assuming a low frequency of deviation in the UAV trajectory curve, the number of sampling points L only needs to satisfy Nyquist's theorem. The estimation dimension and the computation increase with the number of sampling points.
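A minimal realization of the R_f-domain contrast objective, approximating the Radon transform by rotation plus column summation, illustrates why a straight wall scores higher than a twisted one. The image sizes and the synthetic "wall" images are our own:

```python
import numpy as np
from scipy import ndimage

def radon(img, angles):
    """Discrete Radon transform: rotate and integrate along columns."""
    return np.stack([ndimage.rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles], axis=1)

def contrast(img):
    """Image contrast: std of the intensity over its mean."""
    inten = np.abs(img) ** 2
    return inten.std() / inten.mean()

n = 64
straight = np.zeros((n, n)); straight[:, n // 2] = 1.0    # ideal wall
twisted = np.zeros((n, n))
cols = (n // 2 + np.round(3 * np.sin(np.linspace(0, 4 * np.pi, n)))).astype(int)
twisted[np.arange(n), cols] = 1.0                          # deviated wall

angles = np.arange(0, 180, 1.0)
c_straight = contrast(radon(straight, angles))
c_twisted = contrast(radon(twisted, angles))
```

The straight wall concentrates all of its energy into a single R_f-domain peak at one projection angle, yielding a much higher contrast than the sinusoidally twisted wall.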
The Particle Swarm Optimization (PSO) method is utilized to search for a set of optimal solutions to ∆y.
Each feasible solution ∆y to the optimization problem is considered a location (i.e., a particle) in the search space. The dimension of ∆y is L, and each particle denotes a combination of ∆y corresponding to these L sampling points. The process of the PSO algorithm is shown as Algorithm 1. Initially, a group of randomly distributed particles is initialized. In each iteration, the particles update both their velocity direction and magnitude by searching for and pursuing the global and individual extreme values of the current iteration, i.e., Gbest_k and Pbest_k. Each particle moves towards the potentially optimal value. Concurrently, both the individual and the global extreme values are updated in each iteration. The optimal solution of the objective function is considered found when the difference between the global extremes found by the population remains below a set threshold over multiple rounds of iterations.
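Algorithm 1 can be sketched as a generic PSO minimizer; in the paper's setting the objective would be the negated R_f-domain contrast evaluated at a candidate deviation vector ∆y. The hyperparameter values below are conventional choices, not the paper's:

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, bounds=(-1.0, 1.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser (minimisation)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # particle positions
    v = np.zeros((n_particles, dim))                     # particle velocities
    pbest = x.copy()                                     # individual bests
    pval = np.array([objective(p) for p in x])
    g = pbest[np.argmin(pval)].copy()                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([objective(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[np.argmin(pval)].copy()
    return g, pval.min()

# Toy check on a smooth bowl; in the paper the objective would instead be
# the negated image contrast as a function of the deviation vector dy.
best, best_val = pso(lambda p: np.sum(p**2), dim=4)
```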
After obtaining the optimal solution through iterations, the deviation curve can be obtained by interpolating ∆y to the N_a azimuthal sampling points using linear interpolation, and the actual trajectory can be obtained as y_a = y_m + ∆y.
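The interpolation step amounts to a single call to a linear interpolator; L, N_a, and the sinusoidal test curve below are illustrative:

```python
import numpy as np

L, Na = 8, 256                      # coarse PSO dimension vs azimuth samples
dy_coarse = 0.2 * np.sin(np.linspace(0, 2 * np.pi, L))   # estimated deviations
t_coarse = np.linspace(0.0, 1.0, L)                      # coarse sample times
t_full = np.linspace(0.0, 1.0, Na)                       # azimuth sample times
dy_full = np.interp(t_full, t_coarse, dy_coarse)         # linear interpolation
y_meas = np.zeros(Na)                                    # measured LOS positions
y_act = y_meas + dy_full                                 # corrected trajectory
```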

Algorithm 1 PSO
for each particle i do
    for each dimension d do
        Initialize velocity V_id and position X_id
    end for
end for
Initialize Pbest and Gbest
k = 1
while not stop do
    for each particle i do
        Update the velocity and position of particle i:

Estimation of the Trajectory Deviation along the Track
After estimating and correcting the deviation in the LOS direction, the deviation along the track is estimated by autofocusing on the targets behind the wall, as shown in Figure 4. Similar to Figure 2, the red line depicts the actual radar trajectory curve, while the blue line represents the measured trajectory curve, to which the corrections for the trajectory deviation in the line-of-sight direction have been applied. P_1 and P_2 denote the synthetic aperture edges corresponding to the target T_1. In this situation, ideally, there exists only a deviation ∆x in the along-the-track direction between the actual radar position P_t,a and the measured position P_t,m. It should be noted that the primary objective of this figure is to elucidate the second stage of the algorithm, specifically the estimation of the deviation along the track through the autofocusing of targets. As a result, Figure 4 excludes the refraction of the electromagnetic wave. Before estimating the trajectory deviation along the track, it is worth noting that the energy of the target echoes is inversely proportional to the distance. To prevent targets at long distances from contributing too little to the optimization process, thereby losing important information regarding the space-variant error, we multiply each range-compressed signal by a compensation factor β, which balances the energy of targets at different distances.
N represents the number of range FFT points. The value of the hyperparameter α is chosen to make the energies of the targets at different distances as similar as possible after compensation. Similar to the estimation of the trajectory deviation in the line-of-sight direction introduced in Section 3.1, the trajectory deviation vector along the track, ∆x = [∆x_1, ∆x_2, . . . , ∆x_l], is optimized with the PSO algorithm, with the image contrast as the objective function. Precise knowledge of the target locations is unnecessary. However, to precisely calculate the slant range of each grid point behind the wall using Equation (3), both the dielectric constant and the thickness of the wall must be identified a priori. The flowchart of the algorithm is displayed in Figure 5.
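The text does not give the explicit form of β. One simple choice consistent with echo energy decaying as 1/r is β_n = (n/N)^α with α = 1, which exactly equalizes two equal scatterers under that decay model; this form and the values below are our assumption, not the paper's:

```python
import numpy as np

N, alpha = 512, 1.0                          # range FFT points, hyperparameter
n = np.arange(1, N + 1)
beta = (n / N) ** alpha                      # assumed form: grows with range bin

# Toy range profile: two equal scatterers at bins 50 and 400, whose echo
# amplitudes decay inversely with range (range proportional to bin index).
profile = np.zeros(N)
profile[49], profile[399] = 1.0 / 50, 1.0 / 400
balanced = beta * profile                    # compensated range profile
```

After compensation the two scatterers contribute equally to the contrast objective, so distant targets are no longer underweighted in the PSO search.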

Simulation Results
The thickness of walls in typical concrete frame structures ranges from 120 mm to 300 mm [32]. The external walls of brick-and-concrete buildings are usually 240 mm or more, while the internal walls are approximately 120 mm. Overall, the thickness of a wall depends on various factors such as the material, bearing capacity, and insulation requirements. Table 1 lists the electromagnetic parameters of typical building materials. To maintain the penetration capacity, electromagnetic waves in the L or S band should be selected. Note that the attenuation of electromagnetic waves increases with the thickness and dielectric constant of the wall [33-35]. As shown in Figure 6, the trajectory is designed as a straight line, similar to the one in conventional SAR imaging algorithms. The trajectory deviation takes the form of a sinusoid with a maximum magnitude of 0.4 m, larger than twice the range resolution. The wall thickness is 120 mm, and the relative dielectric constant is 4.5. Four point targets are distributed behind the wall. Table 2 lists the key parameters of the FMCW radar. In this section, the minimum-entropy autofocus (MEA) algorithm [36] with range segmentation is adopted as the comparison algorithm. Figure 7 shows the autofocus result of the wall. The BP imaging results are shown in Figure 7a, Figure 7c, and Figure 7e for the cases of no LOS direction trajectory deviation, sinusoidal trajectory deviation, and trajectory deviation estimated and compensated for by the proposed algorithm, respectively. The corresponding Radon transform results are shown in Figure 7b,d,f.
According to Figure 7a,b, the wall appears straight when there is no deviation in the LOS direction trajectory, which corresponds to the ideal peak in the R_f domain. When a trajectory deviation exists, Figure 7c shows that the linear properties of the wall are destroyed. Correspondingly, the peak in the R_f domain diffuses as well, as shown in Figure 7d. After the estimation and compensation of the LOS deviation, the BP image of the wall is restored to a straight line, and the peak in the R_f domain is refocused correspondingly, as shown in Figure 7e and Figure 7f, respectively. As illustrated, the imaging results of the wall are well recovered after estimating and compensating for the LOS deviation. Figure 8 shows the autofocus result of the targets behind the wall. The RD images obtained with the comparison algorithm in the absence of along-the-track deviation, in the presence of deviation, and after autofocus processing are presented in Figure 8a, Figure 8b, and Figure 8c, respectively. The BP images of the proposed algorithm in the absence of trajectory deviation, in the presence of deviation, and after autofocus processing are presented in Figure 8d, Figure 8e, and Figure 8f, respectively. As demonstrated in Figure 8a, the targets are well focused and located accurately when there is no deviation along the track. Figure 8b shows the diffusion when there is a trajectory deviation. Figure 8c demonstrates the focusing effect of the comparison algorithm [36]. In Figure 8c, the true positions of the targets are marked by red × symbols. As shown, there exist spurious targets and position deviations in the imaging result.
Remark 1. Notably, as the 2D space variability of the error becomes non-negligible in the through-wall scenario with wide-beam radar, the performance of the algorithm in [36] degrades. As a result, we divide the raw data into more range segments to maintain the effectiveness of the algorithm.
As shown in Figure 8d, when there is no deviation along the track, the target positions are accurate and well focused. Figure 8e shows that trajectory deviations result in severe defocusing, making it nearly impossible to identify the positions of the targets. According to Figure 8f, the point targets are refocused correctly after being compensated for with the proposed algorithm.
The estimations of the trajectory deviation are shown in Figure 9. The estimated results and residuals of the LOS direction deviation are presented in Figure 9a,b, while those of the along-the-track deviation are presented in Figure 9c,d.

Experimental Results
Figure 10 shows the experimental environment. Four corner reflectors with side lengths of 20 cm are placed inside a room of size 2.9 × 5 m as detection targets. The radar moves along the wall outside the room, following a straight-line trajectory. The wall is composed of brick and concrete, with a thickness of 120 mm and a relative dielectric constant of 4.5. A Keysight Technologies N9923A vector network analyzer is used to transmit the stepped-frequency signal during the experiment. The actual position of the radar is unknowable in practice due to unavoidable measurement errors, regardless of the experimental method. To determine the actual trajectory of the radar in the experiment, the radar is projected onto the ground, and we calibrate this projection to minimize the measurement errors. A total of 29 sampling points are calibrated. The experimental parameters are listed in Table 3, and the experimental results for the wall are presented in Figure 11. Overall, the results are consistent with the simulation results. From Figure 11a,b, it can be seen that the BP image of the wall, in the absence of trajectory deviation, exhibits a desirable straight line that yields a focused peak in the R_f domain. When there are trajectory deviations in the LOS direction, as shown in Figure 11c,d, the BP imaging result is severely corrupted and the wall becomes unrecognizable. As a result, the peak energy corresponding to the wall in the R_f domain diffuses severely. After compensation with the proposed algorithm, Figure 11e,f show that the wall in the BP image is restored to an ideal straight line, with the peak in the R_f domain refocused as well. The restored results are comparable to those in Figure 11a,b without trajectory deviation in the LOS direction, which indicates that the trajectory deviations in the line-of-sight direction are corrected well. The experimental results for the targets are presented in Figure 12. Notably, the arc-like shadow around the point
targets is the result of the coherent summation of the range-compressed data in the BP algorithm. The compensation of the along-the-track deviation is carried out after the compensation of the LOS direction deviation. Due to the residuals of the LOS direction deviation estimation, the point targets are defocused to some extent even when there is no along-the-track deviation, as demonstrated by the farthest target in Figure 12a. After the along-the-track deviation is introduced, as shown in Figure 12b, the energy of the point targets is scattered severely, and the target positions cannot be recognized. Figure 12c depicts the result when the along-the-track deviation is compensated for with the proposed algorithm. As seen, the point targets are refocused and comparable to the case without along-the-track deviation in Figure 12a.
The experimental results of the trajectory deviation estimation are illustrated in Figure 13. The estimated LOS direction trajectory deviation curve in Figure 13a matches the true curve. Figure 13b shows that the estimation errors of the LOS direction trajectory deviation fall in a range from −0.01 m to 0.07 m, with an RMSE of 0.0285 m.

Discussion
The algorithm presented in this article is appropriate for situations where the trajectory deviation is significantly greater than the range resolution. Specifically, in the simulation, the magnitude of the trajectory deviation is approximately three times the range resolution, while in the experiment it is approximately 20 times the range resolution. In such cases, the envelope error of the echo has a notable impact on BP imaging. In contrast, the phase error dominates the imaging quality once the compensated residuals are within the range resolution level. Therefore, other phase error estimation and compensation algorithms based on range-Doppler-domain data can be applied after the proposed algorithm to further improve the estimation accuracy and the focusing of the image.

Conclusions
In this article, we proposed an autofocus algorithm for estimating the trajectory deviation of UAV-borne through-wall radar. By optimizing an image contrast objective function, the algorithm estimates the trajectory deviation of the radar in the LOS direction by utilizing the linear characteristic of the wall, and the deviation along the track by focusing the targets behind the wall. Thus, it circumvents the problem of the severe 2D space-variant error and refocuses the imaging result. The simulation and experimental results show that the proposed algorithm works well in cases where the trajectory deviation exceeds the range resolution, which can greatly benefit UAV-borne through-wall radar detection applications. The algorithm is capable of achieving a maximum estimation accuracy at the level of the range resolution. In the future, we will strive to improve the estimation accuracy and to estimate the UAV posture and vibration errors.

Figure 1. (a) Through-wall signal propagation path model; (b) through-wall detection model in the presence of trajectory deviation.

Figure 3. The R_f domain result under the actual and measured trajectories. (a) Actual trajectory; (b) measured trajectory.

We sample the trajectory deviation values of the UAV at L azimuth time points during a pass along the wall and obtain the deviation curve of the entire trajectory by estimating ∆y = [∆y_1, ∆y_2, . . . , ∆y_l].

Figure 5. Flowchart of the proposed algorithm.

Figure 7. (a) BP image of the wall without LOS direction deviation, (b) Radon transform result of (a), (c) BP image of the wall with LOS direction deviation, (d) Radon transform result of (c), (e) BP image of the wall after estimation and compensation for LOS direction deviation, and (f) Radon transform result of (e).

Figure 8. (a) RD image of targets without along-the-track deviation, (b) RD image of targets with along-the-track deviation, (c) RD image of targets after autofocusing by [36], (d) BP image of targets without along-the-track deviation, (e) BP image of targets with along-the-track deviation, and (f) BP image of targets after estimating and compensating for the deviations with the proposed algorithm.

Figure 9. (a) Estimation of the trajectory deviation in the line-of-sight direction, (b) estimation residual of the trajectory deviation in the line-of-sight direction, (c) estimation of the trajectory deviation along the track, and (d) estimation residual of the trajectory deviation along the track.
Figure 13c,d represent the estimated curve and residuals of the along-the-track deviation, respectively. The residuals fall in a range from −0.1 m to 0.04 m, with an RMSE of 0.0396 m. The RMSE of the along-the-track deviation is slightly larger than that of the line-of-sight deviation because the accuracy of the along-the-track estimation is affected by the residual errors of the LOS deviation estimation.

Figure 13.
Figure 12. Experimental results after the compensation of the deviation in the LOS direction. (a) BP image of targets without along-the-track deviation, (b) BP image of targets with along-the-track deviation, (c) BP image of targets after compensation for both LOS and along-the-track deviations.

Table 1. Electromagnetic parameters of typical building materials.