Parameter Estimation and Error Calibration for Multi-Channel Beam-Steering SAR Systems

Multi-channel beam-steering synthetic aperture radar (multi-channel BS-SAR) can achieve high-resolution, wide-swath observations by combining beam-steering technology with azimuth multi-channel technology. Various imaging algorithms have been proposed for multi-channel BS-SAR, but the associated parameter estimation and error calibration have received little attention. This paper focuses on errors in the main parameters of multi-channel BS-SAR (the derotation rate and the constant Doppler centroid) and on phase inconsistency errors. These errors can significantly reduce image quality by causing coarser resolution, radiometric degradation, and the appearance of ghost targets. Accurate derotation rate estimation is important for removing the spectrum aliasing caused by beam steering, and spectrum reconstruction for multi-channel sampling requires accurate estimates of the constant Doppler centroid and the phase inconsistency errors. The time shift and scaling effects of the derotation rate error on the azimuth spectrum are analyzed in this paper. A method to estimate the derotation rate based on time shifting is presented and integrated with estimation of the constant Doppler centroid. Since the Doppler histories of azimuth targets are space-variant in multi-channel BS-SAR, conventional methods for estimating phase inconsistency errors do not work, and we present a novel method based on minimum entropy to estimate and correct these errors. Simulations validate the proposed error estimation methods.


Introduction
By using azimuth antenna beam-steering technology, spaceborne synthetic aperture radar (SAR) systems can operate in beam-steering (BS) modes, such as sliding spotlight mode [1,2] and Terrain Observation by Progressive Scans (TOPS) mode [3,4], which achieve higher resolution or wider swaths than the traditional strip-map mode. However, spaceborne SAR suffers from a conflict between azimuth resolution and range swath. By increasing the number of azimuth-receiving channels, multi-channel technology [5][6][7] can overcome this dichotomy and significantly widen the swath without resolution loss. As a result, multi-channel technology has been adopted in advanced spaceborne SAR systems [8][9][10][11], and processing algorithms for multi-channel strip-map imaging have been proposed, including the spectrum reconstruction algorithm [12] and the digital beamforming algorithm [13].
Increasing requirements for high-resolution wide-swath data are currently driving the development of multi-channel BS-SAR [14]. With the combination of beam-steering technology and multi-channel technology, multi-channel sliding spotlight SAR can provide very high resolution over a

Multi-Channel Beam-Steering SAR System
Azimuth multi-channel technology increases the number of azimuth-receiving channels and so significantly improves the spatial sampling rate; this is equivalent to increasing the PRF by using digital beamforming technology. In addition, to achieve higher resolution or a wider swath, azimuth multi-channel technology and beam-steering technology can be combined to provide new imaging modes, namely the multi-channel sliding spotlight mode and the multi-channel TOPS mode.
As shown in Figure 1, the beam in multi-channel sliding spotlight SAR is steered from forward-looking to backward-looking squint during an illumination period, as in single-channel sliding spotlight SAR. A target can be observed for longer than by a SAR sensor working in strip-map mode, which leads to finer resolution. The N (N = 3 in Figure 1) receivers acquire the reflected pulse almost simultaneously, so the SAR sensor can store N times more pulse data than single-channel spotlight SAR. This means that by reducing the PRF by the factor N, the range swath can be widened without degrading the azimuth resolution. Similarly, multi-channel TOPS employs beam steering from backward-looking to forward-looking squint during illumination and can achieve a wider swath than single-channel TOPS with the help of N azimuth channels.

Signal Mode
A multi-channel BS-SAR system transmits a linear frequency modulated signal and each sub-channel receives the backscattered echoes. Since parameter errors in azimuth are the focus of this paper, for simplicity only the azimuth signal is considered.
In multi-channel BS-SAR, antenna steering is combined with multi-channel technology. In Figure 2, T_w is the overall illumination time, X_s represents the fully illuminated swath in azimuth, and X_f(r) represents the azimuth antenna footprint length. In addition, there are two parameters specific to multi-channel BS-SAR. One is ω_θ, the beam-steering rate, which is usually constant. Since the effective velocity [30] of the satellite is also approximately invariant, the beam is assumed to focus on a virtual rotation point (VRP). The beam rotates clockwise around the VRP in multi-channel sliding spotlight SAR and counterclockwise in multi-channel TOPSAR. The other new parameter, r_s, is the distance from the SAR sensor to the VRP at the central illumination time, while r is the distance from the SAR sensor to the target at zero-Doppler time. ω_θ and r_s are positive in multi-channel sliding spotlight mode and negative in multi-channel TOPS mode. We define a factor α(r), by which the steering beam multiplies the azimuth resolution in multi-channel BS-SAR, as follows. It can be seen that α(r) < 1 in multi-channel sliding spotlight SAR and α(r) > 1 in multi-channel TOPSAR. So the azimuth resolution in multi-channel sliding spotlight SAR is finer than in multi-channel strip-map SAR, while in multi-channel TOPSAR it is coarser. Reference [15] provides another, essentially equivalent, definition of α(r).
Assuming that there are N channels and the azimuth antenna pattern is rectangular with 3 dB beamwidth, the azimuth signal from a point target located at (r, vt_0) acquired by the i-th receiver can be expressed as in Equation (2). In Equation (2), t is the slow time in azimuth, rect[·] represents the rectangular function, d_i is the distance between the i-th channel and the reference channel (usually at the center of the antenna), v is the effective velocity of the SAR sensor, and λ is the wavelength. R^{(i)}(t; r, t_0) is the instantaneous distance between the i-th equivalent phase center of the sensor and an arbitrary target, given by [17], where $R(t; r) = \sqrt{r^2 + (vt)^2 - 2rvt\cos\theta}$, in which θ is the aspect angle depending on r, and f_r is the Doppler rate. It can be seen that the signal of the i-th channel is a replica of the reference single-channel BS-SAR impulse response with a delay.

Spectrum Characteristics
The azimuth spectrum of multi-channel BS-SAR has some particular characteristics due to the steering beam and multi-channel sampling. The raw data support in the azimuth time-frequency domain (TFD) is outlined in Figure 3 for both multi-channel sliding spotlight SAR and multi-channel TOPSAR.

A(t) and A(f) represent signal amplitudes in the time and Doppler domains, respectively. The gradient color bars (blue, grey, and gold) in the background represent the total PRF that multi-channel BS-SAR can achieve with N channels (here N = 3). The purple area is the whole TFD of the multi-channel BS-SAR system and the shaded parts on both sides are the TFDs of edge points without full illumination.
In Figure 3, three points (P1, P2, and P3) located at the left, center, and right of the azimuth swath are highlighted. The colored thin lines in this figure represent Doppler histories with negative slope f_r. The Doppler history of P2 is shifted upwards because of the constant Doppler centroid. The Doppler centroid of P1 is large in multi-channel sliding spotlight SAR because it experiences forward-looking squint illumination by the beam. By contrast, P3 is illuminated with backward-looking squint and has a smaller Doppler centroid. Consequently, the beam steering results in a parallelogram TFD support which slants towards the lower right in multi-channel sliding spotlight SAR and towards the upper right in multi-channel TOPSAR. The steering bandwidth, B_s, which occurs in multi-channel BS-SAR, is then given by the following expression, where k_ω is the slope of the azimuth-varying Doppler centroid (also known as the derotation rate) [3] and θ_dc(t) is the instantaneous aspect angle. Due to the beam steering, the illumination time of any point target is expanded or shrunk by the factor 1/α(r), so its instantaneous bandwidth, B_a, is weighted by the same factor. Figure 3 shows that the instantaneous bandwidth is larger than the PRF but less than N · PRF. The total bandwidth, comprising the instantaneous bandwidth B_a and the steering bandwidth B_s, is then given by the following expression, where B_3dB is the bandwidth corresponding to the 3 dB antenna beamwidth. It can be seen that the total bandwidth, B_t, is very large in the case of beam steering, while N · PRF is usually only a little larger than B_3dB. This means the bandwidth spans several PRFs, resulting in spectrum aliasing. Moreover, the operating PRF in multi-channel SAR systems is several (∼N) times lower than in single-channel SAR, so the spectrum aliasing is severe and needs to be removed.
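As a concrete check of this aliasing condition, the bandwidth bookkeeping above can be sketched numerically. The values below are illustrative (not taken from the paper's Table 1), and the steering bandwidth is approximated as B_s = |k_ω| · T_w, a simplification consistent with a linear Doppler-centroid drift over the illumination time.

```python
import numpy as np

# Illustrative values for a hypothetical multi-channel TOPS configuration
# (not the paper's Table 1).
N = 3              # number of azimuth channels
prf = 1300.0       # per-channel PRF [Hz]
B_3dB = 3500.0     # bandwidth of the 3 dB antenna beamwidth [Hz]
alpha = 3.0        # resolution scaling factor alpha(r) > 1 in TOPS
k_omega = -5200.0  # derotation rate [Hz/s]
T_w = 4.0          # overall illumination time [s]

# Instantaneous bandwidth is weighted by 1/alpha(r) due to beam steering.
B_a = B_3dB / alpha
# Steering bandwidth, approximated as |k_omega| * T_w (an assumption,
# standing in for the paper's exact expression).
B_s = abs(k_omega) * T_w
B_t = B_a + B_s

print(f"B_a = {B_a:.0f} Hz, B_s = {B_s:.0f} Hz, B_t = {B_t:.0f} Hz")
print(f"N*PRF = {N * prf:.0f} Hz -> aliased: {B_t > N * prf}")
```

With beam steering, B_s dominates B_t by an order of magnitude, so B_t spans several multiples of N · PRF, which is the aliasing the pre-processing must remove.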

Pre-Processing Approach
The combination of filter reconstruction and derotation is commonly adopted to tackle spectrum aliasing in multi-channel BS-SAR because of its efficiency and effectiveness. Derotation is essentially a convolution operation, which can be implemented by complex function multiplication and an FFT, but the aliasing caused by under-sampling remains. In the pre-processing algorithm, the FFT can be replaced by filter reconstruction, which overcomes the remaining aliasing and simultaneously produces a uniformly sampled signal.
The first phase function of pre-processing in a multi-channel BS-SAR system is given by the following expression, where f_d is the CDC. After multiplying by the first phase function h_1^{(i)}(t), the steering bandwidth B_s is eliminated and the CDC is compensated, as shown in Figure 4a. Expanding the range R^{(i)}(t; r, t_0) by a second-order Taylor expansion and ignoring the constant term, the signal expression of the i-th channel after multiplying by the first phase function is obtained, where u_a^{(i)}(t; r, t_0) is the reference single-channel BS-SAR signal after the first step of derotation. As shown in Figure 4a, the bandwidth of s_a^{(i)}(t; r, t_0) is still larger than the PRF because of the under-sampling in each channel. Transforming (8) into the Doppler domain by an FFT gives (10), where U_a(f + (k − 1) f_prf; r) is the k-th sub-band of the spectrum of u_a(t; r, t_0). The transfer matrix of multi-channel BS-SAR after derotation can be described by [7,12], and Equation (10) can then be rewritten in matrix form. The reconstruction filter is then given by (15). Note that P(f) may fail to reconstruct the spectrum because the transfer matrix H(f) is singular when the receivers' spatial samplings coincide. References [31][32][33] provide modified reconstruction filters for this case.
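The filter-bank reconstruction step can be sketched as follows. The transfer model used here is a simplified displaced-phase-center form, H[k, i] = exp(−jπ d_i (f + k · PRF)/v), which ignores the channel-independent quadratic phase term; the channel offsets, velocity, and PRF are illustrative, not taken from the paper.

```python
import numpy as np

def transfer_matrix(f, d, prf, v):
    """Simplified N x N transfer matrix H(f) after derotation.

    Assumes the displaced-phase-center model
    H[k, i] = exp(-1j * pi * d[i] * (f + k * prf) / v),
    with k the ambiguity-band index (rows) and i the channel (columns).
    """
    d = np.asarray(d, dtype=float)
    k = np.arange(d.size)[:, None]
    return np.exp(-1j * np.pi * d[None, :] * (f + k * prf) / v)

def reconstruction_filter(f, d, prf, v):
    """P(f) = H(f)^-1; ill-conditioned when channel samplings nearly
    coincide spatially, the singular case noted in the text."""
    return np.linalg.inv(transfer_matrix(f, d, prf, v))

# Hypothetical 3-channel geometry (illustrative values only).
d = [-2.4, 0.0, 2.4]   # channel offsets from the reference channel [m]
v, prf = 7100.0, 1300.0

H = transfer_matrix(100.0, d, prf, v)
P = reconstruction_filter(100.0, d, prf, v)
print(np.allclose(P @ H, np.eye(3)))   # exact inversion at this frequency
```

In practice P(f) is evaluated per Doppler bin and applied to the stacked channel spectra; the modified filters of [31][32][33] replace the plain inverse when H(f) is near-singular.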
Using filter reconstruction in the standard derotation operation is superior to using the FFT. The signals of the N channels after the first phase multiplication are transformed into the Doppler domain and combined into a single unambiguous spectrum by the reconstruction filtering, which plays a role similar to the FFT in the implementation of the convolution. On the other hand, the signal produced by the reconstruction filter can also be regarded as being in the time domain [14], because the FFT in the pre-processing is one step of the derotation implementation, which is essentially a convolution operation in the time domain. The last phase multiplication in the pre-processing can then be performed as in single-channel BS-SAR, with a phase function given by the following expression. The signal after the pre-processing can then be expressed as in (18). Please note that there are three envelope terms in (18), and vt_0 in the third term represents the azimuth position of an arbitrary target. Whatever the value of t_0, the first term is contained in the second term, as demonstrated in [34]. Therefore, the time range of the signal s_b(t; r, t_d) is determined by the first term, and (18) can be simplified to (19). This means the time ranges of the pre-processed signals from targets at different azimuth positions are independent of those positions and overlap in time. For simplicity, we use the symbol {t} to represent the time range of the pre-processed signal.
Because only one FFT is employed in the pre-processing, the overlapped envelope is bell-shaped, and it is therefore possible to perform azimuth antenna pattern correction and signal weighting at this stage. After pre-processing, the equivalent PRF is given by (20) [34], where N_a is the total number of azimuth processing samples after zero-padding [34]. It should be selected to ensure $f_{prf2} > B_t$.

Estimation Methods for DRR, CDC Error, and Phase Inconsistency Errors
Precise DRR and CDC values and well-calibrated phase inconsistency errors are important for focusing multi-channel BS-SAR data. Compared with single-channel SAR, multi-channel SAR systems can achieve higher resolution and/or wider swath. As a result, the effects of DRR error on imaging are more serious and the required precision of the CDC is higher. At the same time, the spectrum aliasing caused by non-uniform sampling must be resolved or circumvented before the DRR and CDC errors can be estimated, a problem that does not exist in single-channel SAR systems. Furthermore, the signal-to-noise ratio loss resulting from non-uniform sampling and the phase inconsistency errors between channels increase the difficulty of estimating the DRR and CDC errors.
There have been numerous studies of the effects of the CDC error [30] and phase inconsistency errors [21][22][23][24][25][26][27][28][29] on imaging, but little analysis of the derotation rate error. In this section, the impact of the DRR error on the pre-processing is first presented, and estimation methods for the DRR error, CDC error, and phase inconsistency errors are then introduced. A novel weighting strategy is then described to improve the effectiveness and robustness of these estimation methods. Once estimated, these errors can be corrected in the imaging procedure.

Analysis of DRR Error
The derotation rate k_ω is important in the pre-processing because the accuracy of the correction of the beam-steering bandwidth B_s depends on it. Measurement of the azimuth-steered antenna pattern also requires a highly precise k_ω [18]. However, k_ω is not always known precisely, owing to the approximation in (5). k_ω is defined as the partial derivative of the azimuth-varying Doppler centroid with respect to time, which is approximately the Doppler rate of a point located at the VRP. Furthermore, the effective velocity in (5) may introduce an error into k_ω, which will affect the pre-processing. Because of the curvature of the orbit and of the Earth, a linear geometric model [30] is commonly used in spaceborne SAR instead of a curved geometric model, for convenience. In the linear geometric model, the satellite velocity and beam velocity are replaced by the effective velocity in the imaging processing. Practical experience shows, however, that neither velocity is exact for calculating the derotation rate: the suitable velocity value for the DRR lies between the satellite velocity and the effective velocity. Though it is not accurate, the effective velocity is still used to calculate the DRR in practice, which inevitably introduces an error. Since the satellite velocity is typically about 10% larger than the effective velocity for a spaceborne SAR at a height of 600 km, the maximum calculation error for the DRR is about 5%. If there is a DRR error, the time ranges of different azimuth targets after pre-processing will not overlap, which will lead to erroneous azimuth antenna pattern correction and signal weighting. More seriously, the signal after pre-processing will be aliased when this error is large enough.
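The quoted 5% bound follows from simple arithmetic if one assumes, as a rough simplification, that the computed DRR scales linearly with the velocity value used and that the "suitable" velocity lies midway between the effective and satellite velocities:

```python
# Back-of-envelope check of the DRR error bound quoted above, under the
# stated simplifying assumptions. The velocity value is illustrative.
v_eff = 7100.0            # effective velocity [m/s]
v_sat = 1.10 * v_eff      # satellite velocity ~10% larger at ~600 km
v_mid = 0.5 * (v_eff + v_sat)   # assumed "suitable" velocity for the DRR

# Relative DRR error when the effective velocity is used instead.
rel_err = (v_mid - v_eff) / v_mid
print(f"max DRR error ~ {100 * rel_err:.1f}%")   # ~4.8%, roughly the 5% quoted
```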
Define k_ω with DRR error as in (21). Correspondingly, the slant range from the sensor to the VRP becomes (22). Substituting k_ω^e into (7), the signal after pre-processing with an inaccurate DRR is given by (23). Unlike in (18), the first term in (23) depends on t_0. Substituting $-\frac{X_s}{2v} < t_0 < \frac{X_s}{2v}$ into the first and second terms of (23), the time ranges {t} constrained by the first and second terms are given by (24) and (25), respectively. The time range {t} given by (24) lies inside that given by (25), so the second term in (23) can be ignored (see Appendix A). The third term cannot be removed because the first term contains t_0. So (23) can be simplified to (26).

Symmetrical Shift
Compared with Equation (19), it can be seen from (26) and Figure 4b that the time range {t} of a point target located at (r, vt_0) is shifted by (27). This means that when a DRR error exists, the overlapped pre-processed signals from different azimuth targets separate according to their azimuth positions. The shift of the signal from an arbitrary point target depends linearly on its azimuth position. The signals from two targets located symmetrically about the scene center separate symmetrically, so we refer to this effect as a 'symmetrical shift'.

Scaling Effect
The duration of {t} is the same for all point targets but is increased or decreased by the factor given in (28). Since the equivalent PRF (see (20)) is a function of k_ω, the number of samples of {t} for an arbitrary point target is also increased or decreased, as in (29), where N_t^e and N_t are the numbers of samples of a target with and without DRR error, respectively, and f_prf2^e is the equivalent PRF after pre-processing with DRR error. When δ > 0, N_t is reduced for multi-channel sliding spotlight SAR and increased for multi-channel TOPSAR, while the opposite occurs when δ < 0. Hence we refer to this as a 'scaling effect'. Although N_t is reduced in multi-channel sliding spotlight SAR with δ > 0 and in multi-channel TOPSAR with δ < 0, the number of samples spanned by all targets together increases, because the 'symmetrical shift' separates their energy along azimuth (see Appendix B).
Taking the multi-channel TOPS mode as an example, a simple simulation was developed using the parameters listed in Table 1. Figure 5 shows the results after pre-processing. Three point targets are located at different range and azimuth positions in the scenario of Figure 5a. With a DRR error (δ = 0.03), a signal shift occurs after pre-processing (Figure 5b), whereas the time ranges {t} of the three point targets are aligned after pre-processing without DRR error (Figure 5d). The number of samples clearly increases with DRR error (Figure 5c).

Estimation of the DRR Error
Based on the analysis of the effects of DRR error, a novel estimation method is proposed in this section, which contributes to the imaging processing of multi-channel BS-SAR. Without DRR error, the central positions of the pre-processed signals from different azimuth point targets overlap; with DRR error they shift apart (see (27)). Assume there are two point targets located at (r, vt_1) and (r, vt_2), with the time interval between them given by (30). The time shift between the targets after pre-processing is then given by (31), and the DRR error can therefore be obtained as (32). Since the raw data are a collection of echoes reflected by all scatterers on the ground, we cannot separate the echoes of two point targets from the raw data. If we instead split the raw data into two equal parts and extend each to the size of the whole data set by zero-padding, we obtain two images after pre-processing. Although the envelopes of different targets in each image do not perfectly overlap, because of their different locations, the whole envelope still shifts by some pixels from the center of the image after pre-processing. So formula (32), derived for point-target echoes, also applies to the raw data.
The raw data are split into two equal parts in azimuth. To preserve the timing positions of the sub-data, zero-padding is performed as shown in Figure 6. The zero-padded sub-data are then pre-processed, and the time shift between the two pre-processed products is obtained by image registration. Finally, the DRR error is determined by (32). Since the time shift is estimated from the image amplitude, the last phase multiplication in the pre-processing can be ignored.
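The registration step at the heart of this procedure can be sketched as a cross-correlation between the two sub-aperture envelopes. The bell-shaped envelopes below are synthetic stand-ins for the pre-processed amplitudes; sub-pixel refinement and the conversion from the measured shift to the DRR error via (32) are omitted.

```python
import numpy as np

def estimate_shift(env1, env2):
    """Estimate the azimuth sample shift between two pre-processed
    envelopes via cross-correlation (a stand-in for the image
    registration step; no sub-pixel refinement)."""
    c = np.correlate(env2, env1, mode="full")
    return int(np.argmax(c)) - (len(env1) - 1)

# Synthetic bell-shaped envelopes standing in for the two sub-aperture
# results: with a DRR error the second is shifted by `true_shift` samples.
t = np.arange(1024)
true_shift = 37
env1 = np.exp(-0.5 * ((t - 512) / 60.0) ** 2)
env2 = np.exp(-0.5 * ((t - 512 - true_shift) / 60.0) ** 2)

print(estimate_shift(env1, env2))  # 37
```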

Estimation of the CDC Error
Since the Doppler spectrum of multi-channel SAR data is ambiguous, CDC estimation methods for single-channel SAR data cannot be used directly. Reference [20] proposes the SCCC method for multi-channel SAR systems, but this fails when the spatial sampling of the channels is severely non-uniform, which restricts its application. In fact, the CDC can be determined from the pre-processed signal. As noted in Section 2.3, the signal envelope after pre-processing is bell-shaped, similar to the azimuth antenna pattern. So the methods in [30], such as power balancing and peak detection, can be applied to the pre-processed signal to estimate the CDC. In this section, these methods are extended to the estimation of the CDC error. If there is a CDC error ∆f_d, the signal after the first phase multiplication with h_1^{(i)}(t) will include ∆f_d. Because of the transformation performed by the pre-processing, the Doppler shift introduced by a CDC error appears as a time delay in the pre-processed signal, as in (33). From (33), it can be seen that the time range {t} of a point target is shifted by both the DRR error and the CDC error, so the CDC error can be integrated into the estimation of the DRR error. The time delay of an arbitrary point target after pre-processing with DRR and CDC errors is given by (34). For two point targets located at (r, vt_1) and (r, vt_2), the time delays after processing are given by (35). It is then easy to obtain the DRR error and CDC error by solving Equation (35). The DRR error can also be found using (32), with ∆f_d then obtained from (35). If we split the raw data equally as in Section 3.2, then t_1 = −t_2, and the CDC error can also be obtained directly. The flowchart of the estimation of the DRR error and CDC error is shown in Figure 6.
To obtain sufficient precision, the estimation procedure can be iterated until the termination condition is met, where $\Delta k_\omega^{(q)}$ and $\Delta f_d^{(q)}$ are the estimated DRR and CDC errors after the q-th iteration, respectively, and $\Delta k_{\omega,th}$ and $\Delta f_{d,th}$ denote their precision thresholds.
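The iteration loop can be sketched generically as below. The `estimate_step` callable is a placeholder for one pass of the split-data estimation; the toy version here simply recovers 90% of the remaining error per pass, purely to illustrate the termination logic.

```python
def iterate_estimation(estimate_step, k_w_th, f_d_th, max_iter=20):
    """Accumulate DRR/CDC corrections until both residual estimates
    fall below their thresholds (the termination condition above).

    `estimate_step(k_acc, f_acc)` must return the residual errors
    (d_k_w, d_f_d) estimated from data corrected by the accumulated
    values so far; here it is a user-supplied placeholder."""
    total_k, total_f = 0.0, 0.0
    for _ in range(max_iter):
        d_k, d_f = estimate_step(total_k, total_f)
        total_k += d_k
        total_f += d_f
        if abs(d_k) < k_w_th and abs(d_f) < f_d_th:
            break
    return total_k, total_f

# Toy stand-in: each pass recovers 90% of the remaining error
# (true DRR error 3%, true CDC error 100 Hz, as in the simulations).
true_k, true_f = 0.03, 100.0
step = lambda k, f: (0.9 * (true_k - k), 0.9 * (true_f - f))
print(iterate_estimation(step, 1e-4, 0.1))
```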

Estimation for Phase Inconsistency Errors
Phase inconsistency error estimation is important for accurate spectrum reconstruction and imaging in multi-channel SAR systems. The existing estimation methods for multi-channel strip-map SAR are mainly based on the MUSIC method [21]. These methods all rely on the frequency consistency of the signals from different azimuth positions. In multi-channel strip-map SAR, the signals from different targets overlap in the Doppler domain, as in Figure 7a. It can be seen that there are two ambiguous spectra in the grey parts, which can be used to construct the covariance matrix of the Doppler spectrum. The signal subspace and noise subspace can then be extracted and used to calibrate the phase inconsistency errors. In multi-channel BS-SAR (Figure 7b), however, the Doppler spectra of targets at different azimuth positions do not overlap, because of the azimuth-varying Doppler centroid. It is therefore impossible to determine the ambiguity number in a given Doppler gate, and there is no appropriate spectrum for the MUSIC method to employ. References [27,28] propose estimation methods for multi-channel strip-map SAR based on the spectrum energy distribution, namely AWLS and DSO. The principle of these methods is that spectrum energy leaks out of the effective bandwidth because of the phase inconsistency errors, so merit indicators of this leakage can be used to calibrate the errors. However, only part of the spectrum is considered in both methods, which restricts their robustness to CDC error, and their feasibility for multi-channel BS-SAR has not been investigated. In this section, the optimization principle is extended to multi-channel BS-SAR based on the expression for the signal after pre-processing. The smaller the phase inconsistency errors, the smoother and more concentrated the spectrum. Image entropy is therefore adopted to assess how well the spectrum is concentrated after pre-processing.
The advantage of using image entropy is that it simultaneously evaluates the spectrum inside and outside the effective bandwidth, which achieves better performance than AWLS and DSO. Another benefit of using the whole spectrum is that the CDC error can be ignored, whereas the AWLS and DSO methods require knowledge of the CDC.
Using m and l to represent the azimuth and range pixel indices, respectively, Equation (12) can be rewritten in discrete form with phase inconsistency errors, where $\tilde{S}_a(m, l)$ is the discrete form of S_a(f, r) with phase inconsistency errors and Γ is the phase inconsistency error matrix. The unambiguous spectrum U_a(m, l) can then be estimated using P(m), the discrete form of P(f) in (15), and U_a(m + n · f_prf; l) can be re-expressed accordingly. The image entropy involving the phase inconsistency errors is then given by the following expression, where [·]^H represents the conjugate transpose. The optimization can be expressed as a minimization of this entropy. To solve the optimization problem, cyclic coordinate iteration is needed, in which the objective function is minimized over one channel error at a time while all other channels are held fixed. In each iteration, standard optimization algorithms, such as gradient descent or Newton-like methods, can be used. References [29,35] give a fast-convergence algorithm that can also be applied here: a closed-form solution in each iteration is obtained by constructing a surrogate function, which gives a high convergence rate. The termination condition for the iterations is set as follows, where Φ_q is the image entropy after the q-th iteration and ∆Φ_th denotes the precision threshold. The smaller ∆Φ_th, the higher the precision and the greater the computational load.
Since the estimation method is based on minimum entropy, we call it the minimum entropy (ME) method. The flowchart of the method is given in Figure 8. Because the errors usually interact with each other in practice, the estimations in Figures 6 and 8 are performed alternately to achieve high estimation precision. Note that the proposed ME method differs from conventional minimum-entropy-based autofocusing (MEBA): the proposed method is performed in the range-Doppler domain before imaging and targets the phase inconsistency errors between channels, whereas MEBA targets azimuth phase errors in the image domain.

Weighting for Estimation
In the previous discussion, it was seen that the shape of the pre-processed spectrum is important for estimating the DRR error, CDC error, and phase inconsistency errors. In practice, however, thermal noise affects the pre-processed spectrum. Reference [12] demonstrates that the output noise power after reconstruction can be amplified in the case of highly non-uniform sampling. The uniformity factor Fu can be defined as follows, where d is the effective distance between adjacent channels. In [7], the signal-to-noise ratio (SNR) scaling factor, which quantifies the amplification of the output noise power due to filter reconstruction, is defined as in (49), where SNR_in and SNR_out are the SNRs before and after filter reconstruction, respectively, PRF_uni represents the PRF corresponding to uniform spatial sampling, and σ_1, σ_2, ..., σ_N are the singular values of the matrix H (see (11)). For the uniform case (Fu = 1), the SNR scaling factor is 1. When the system works with a non-uniform PRF, the correlation between adjacent row vectors of H increases, which results in small singular values, so the SNR scaling factor increases. More information on the SNR scaling factor can be found in [7]. An example of an amplified noise spectrum is shown in Figure 9a. It can be seen that the noise spectrum components at the edge of the bandwidth dominate the degradation of the SNR. When the SNR is low, the noise power may exceed the signal power at the edge of the bandwidth, which degrades the performance of the proposed estimation methods. According to [31], the normalized noise spectrum can be obtained from the reconstruction matrix.
Figure 9. Weighting of the spectrum for estimation. (a) The spectrum of the signal, noise, and noisy signal after pre-processing without weighting; (b) the same spectra weighted by F_1(f); (c) the same spectra weighted by F_2(f).
A natural weighting function can then be formulated to suppress the noise spectrum components, as in (51). Evidently, the noise spectrum can be flattened to a uniform floor by using the weighting filter in Equation (51). Since the estimation of the DRR and CDC errors is based on the maximum of the spectrum, the weighted spectrum becomes more concentrated (see Figure 9b) and the effect of noise is significantly reduced. For the phase inconsistency errors, however, the noise spectrum at the edge of the bandwidth still reduces the estimation performance and can even cause the iteration to converge to a wrong value. To tackle this problem, we modify Equation (51) as in (52), where win(f) is the Hanning window, Blackman window, or another commonly used window function. A criterion for choosing the window is that the edge sections of the spectrum must not be weighted to 0, because all spectrum components should be included when calculating the signal entropy. In this paper, the Hamming window is chosen. The weighted spectrum becomes smoother after weighting by F_2(f), as shown in Figure 9c, and the noise is suppressed further, especially at the edges of the spectrum. With the weighting function in (52), the estimation method for the phase inconsistency errors achieves better performance.
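The two weighting functions can be sketched as follows, with a synthetic noise power profile rising toward the band edges standing in for the reconstructed noise spectrum; F_1 inverts the noise amplitude to flatten the floor, and F_2 adds a Hamming taper whose endpoints are 0.08 rather than 0, satisfying the criterion above.

```python
import numpy as np

M = 512
f = np.linspace(-0.5, 0.5, M)
# Synthetic normalized noise power spectrum after reconstruction:
# amplified toward the band edges (illustrative shape only).
noise_pow = 1.0 + 8.0 * f ** 2

# F1: inverse of the noise amplitude spectrum, flattening the noise floor.
F1 = 1.0 / np.sqrt(noise_pow)

# F2: additional taper with a window that stays non-zero at the edges,
# so the whole spectrum still contributes to the entropy.
F2 = np.hamming(M) * F1

flattened = noise_pow * F1 ** 2
print(flattened.min(), flattened.max())   # flat unit noise floor
print(np.hamming(M)[0])                   # 0.08 > 0 at the band edge
```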

Experiments and Discussion
In this section, simulated multi-channel BS-SAR data are used to assess the performance of the proposed methods for estimating the DRR error, CDC error, and phase inconsistency errors. For convenience, the phase inconsistency errors are called PI errors for short in Tables 2 and 3.
Table 3. Image quality indicators of the point targets located at the scene edge.

Point Targets Simulation Experiments
The parameters in Table 1 are used in the simulation experiment. The DRR error, CDC error, and phase inconsistency errors are listed in Table 2. The effects of these errors are simulated first, with the SNR set to 20 dB. The Doppler spectra of multi-channel TOPSAR after pre-processing with the different kinds of errors are shown in Figure 10; the Doppler spectra of multi-channel sliding spotlight SAR are similar and omitted for brevity. The estimation results for both modes are shown in Table 2, and the imaging results for both modes are presented in Table 3.
From Figure 10, it can be seen that the azimuth spectrum with DRR error is slightly widened, since the energies from targets located at different azimuth positions are slightly separated by the 'symmetrical shift' effect. As a result, some energy leaks out of the azimuth bandwidth and the amplitude near zero frequency is lower than without the error. The azimuth spectrum with CDC error is shifted because the constant Doppler error leads to a residual shift in the Doppler domain. The effect of phase inconsistency errors is more serious: the amplitudes of the different sub-bands U_a(m + n · f_prf; l) are increased or decreased and the spectrum is discontinuous at the boundaries of adjacent sub-bands. The spectrum power outside the 3 dB bandwidth also clearly increases. These impacts are more obvious after antenna correction (Figure 10b). The Doppler spectrum is deformed by the DRR error and becomes tilted due to the CDC error. Please note that the shape of the overall spectrum with DRR error is not the same as for individual targets, because the spectra of point targets located at different positions with DRR error do not lie in the same spectral interval. Since the spectrum is reconstructed segment by segment (see U_a(f + k · f_prf; r) in (14)), the phase inconsistency errors disturb the weights of S_a^{(k)}(f, r) in each segment. Therefore, the spectrum with phase inconsistency errors becomes discontinuous and unsteady. The effects of these errors on the Doppler spectrum in multi-channel sliding spotlight SAR are similar and are omitted here. The imaging results, azimuth profiles, and quality indicators of the point targets located at the scene edge are shown in Figure 11 and Table 3. It can be seen that the azimuth resolution with DRR error degrades seriously because part of the spectrum lies outside the effective bandwidth. The amplitude of the target also decreases because of imperfect azimuth antenna pattern correction and spectrum loss.
From Figure 11f, the first null points of the azimuth profile are at about −25 dB, which cannot meet the practical application requirement (−40 dB). The CDC error seriously affects the image (Figure 11g) because the range cell migration is not completely corrected. In addition, both the Peak Sidelobe Ratio (PSLR) and Integrated Sidelobe Ratio (ISLR) degrade with phase inconsistency errors, and the amplitude decreases as well. Overall, the effects of these errors on multi-channel sliding spotlight SAR are smaller than on multi-channel TOPSAR: the azimuth swath in multi-channel sliding spotlight is smaller, which reduces the effect of the DRR error, and the CDC error also has much less impact. However, the intensity loss is more serious in multi-channel sliding spotlight than in multi-channel TOPSAR.
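The two spectral effects discussed above can be reproduced in a deliberately simplified one-dimensional sketch: after ideal derotation an azimuth chirp collapses to a single tone, a DRR error leaves a residual chirp that broadens the spectrum, and a CDC error shifts its peak. All numerical values here (PRF, chirp rate) are hypothetical, chosen only for illustration.

```python
import numpy as np

prf = 2000.0                      # sampling rate (PRF), Hz -- hypothetical
n = 4096
t = (np.arange(n) - n // 2) / prf
kr = 1500.0                       # azimuth chirp / derotation rate, Hz/s -- hypothetical

def spectrum(drr_err=0.0, cdc_err=0.0):
    """Magnitude spectrum of a centre-scene chirp after (possibly wrong) derotation."""
    sig = np.exp(1j * np.pi * kr * t**2)                  # azimuth chirp
    sig *= np.exp(2j * np.pi * cdc_err * t)               # constant Doppler offset
    derot = sig * np.exp(-1j * np.pi * kr * (1 + drr_err) * t**2)
    return np.abs(np.fft.fftshift(np.fft.fft(derot)))

f = np.fft.fftshift(np.fft.fftfreq(n, 1.0 / prf))

def width_3db(spec):
    """Number of frequency bins whose power exceeds half the peak power."""
    return int(np.sum(spec**2 > 0.5 * np.max(spec**2)))

# A 3% DRR error broadens the spectrum; a 100 Hz CDC error shifts its peak.
assert width_3db(spectrum(drr_err=0.03)) > width_3db(spectrum())
assert abs(f[np.argmax(spectrum(cdc_err=100.0))] - 100.0) < 1.0
```

With the exact rate the derotated signal is a pure tone (one dominant bin); the 3% rate error leaves a residual chirp sweeping tens of hertz, which is the broadening seen in Figure 10.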

Performance Analysis of Estimation Methods
To test the performance of the proposed estimation methods, 100 Monte Carlo experiments are carried out. In each experiment, the DRR error is set to 3%, the CDC error is set to 100 Hz, and the phase inconsistency errors are set randomly between ±30°. As illustrated in Section 3.5, the SNR after pre-processing in a multi-channel BS-SAR system depends on the uniformity of the spatial sampling. Therefore, we analyze here the performance of the estimation methods for the DRR error, CDC error, and phase inconsistency errors against SNR and Fu. Since the results for multi-channel sliding spotlight SAR are similar to those for multi-channel TOPSAR, the performance is analyzed for multi-channel TOPSAR for convenience. For the phase inconsistency errors, the root-mean-square error (RMSE) is defined as RMSE = sqrt((1/N) Σ_{i=1}^{N} |ζ̂_i − ζ_i|²), where ζ̂_i is the estimate of ζ_i and N is the number of azimuth channels. The performance of the DRR error estimation method against SNR and Fu is shown in Figure 12. The estimation errors for the DRR decrease as SNR increases for both values of Fu used, but the estimation error for Fu = 0.9 is much larger than that for Fu = 1.05. This can be explained by the azimuth ambiguity signal rate: when Fu = 0.9, the operating PRF of the system is lower than in the uniform case, so more ambiguous power is aliased into the processing bandwidth. The fluctuation at Fu = 1.1 when SNR = 0 dB in Figure 12b can be explained in the same way. The estimation precision of the DRR error is thus affected not only by the SNR but also by the azimuth ambiguity signal rate. When Fu increases from 1 to 1.2, the SNR scaling factor (see (49)) increases, which degrades the output SNR and results in worse estimation precision. On the other hand, a larger Fu means a larger PRF and, according to sampling theory, a lower ambiguity signal rate; the reduced ambiguity alleviates the degradation in estimation precision brought by a worse Fu.
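The Monte Carlo protocol and the RMSE metric above can be sketched with a toy estimator; the stand-in estimation problem (recovering a 25° phase offset from noisy samples) and all parameter values are hypothetical, but the trial loop and the RMSE computation follow the definition given in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def rmse(estimates, truth):
    """Root-mean-square error over Monte Carlo trials."""
    e = np.asarray(estimates) - truth
    return float(np.sqrt(np.mean(np.abs(e) ** 2)))

def run_trials(snr_db, trials=100, nsamp=256):
    """Toy stand-in for one estimator: the phase of the mean of noisy
    unit-modulus samples carrying a 25-degree offset."""
    truth = np.deg2rad(25.0)
    sigma = 10.0 ** (-snr_db / 20.0)       # noise std for the given SNR
    ests = []
    for _ in range(trials):
        noise = sigma * (rng.standard_normal(nsamp)
                         + 1j * rng.standard_normal(nsamp)) / np.sqrt(2)
        x = np.exp(1j * truth) + noise
        ests.append(np.angle(np.mean(x)))
    return rmse(ests, truth)

# As in Figures 12-14, the RMSE shrinks as the SNR improves.
assert run_trials(snr_db=20.0) < run_trials(snr_db=0.0)
```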
In the case here, the scaling factor dominates the upward trend in estimation error from Fu = 1 to Fu = 1.1, and the ambiguity signal rate becomes the dominant factor beyond Fu = 1.1. It can also be seen that this trade-off between the scaling factor and the ambiguity signal rate depends on the SNR: when the SNR is large enough (e.g., 20 dB), the estimation error shows no obvious correlation with the uniformity factor. The performance of the CDC error estimation methods against SNR and Fu is shown in Figure 13. In this experiment, we compare the SCCC method with the symmetric shift (SS) method proposed in this paper. The SS method achieves slightly better precision than SCCC at all SNRs and is robust against the uniformity factor Fu, while the SCCC method is sensitive to Fu. This is because the SCCC method depends on the correlation between adjacent pulses, so its estimation error tends to increase as Fu increases; the drop at Fu = 1.1 is an outlier and should be disregarded. The results for the proposed SS method at SNR = 0 dB show no obvious correlation with Fu, which indicates that the SS method is insensitive to Fu: it is based on the shift between two images after pre-processing, which partly neutralizes the error introduced by increasing Fu. Therefore, the SS method is slightly more accurate in most cases.
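The core operation of the SS method, detecting the relative shift between two signals derived from the two halves of the data, can be sketched with a generic FFT-based cross-correlation shift estimator (a standard technique, not the paper's exact implementation; the test signal is hypothetical):

```python
import numpy as np

def estimate_shift(a, b):
    """Integer circular shift s such that a ≈ np.roll(b, s), found at the
    peak of the FFT-based circular cross-correlation."""
    xc = np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b)))
    k = int(np.argmax(np.abs(xc)))
    n = len(a)
    return k if k <= n // 2 else k - n     # wrap to a signed shift

# A residual DRR/CDC error shows up as such a shift between the two halves.
x = np.cos(2 * np.pi * np.arange(512) / 64) * np.hanning(512)
assert estimate_shift(np.roll(x, 7), x) == 7
assert estimate_shift(np.roll(x, -5), x) == -5
```

The windowing makes the correlation peak unambiguous; in practice sub-sample shifts would be recovered by interpolating around the peak.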
The performance of the phase estimation method (ME method) against SNR and Fu, and its comparison with the AWLS method described in [27], is shown in Figure 14. It can be seen that the proposed ME method performs better than AWLS against both SNR and Fu. Generally, the precision of both estimators improves as Fu increases. This is because, when the PRF is less than PRF_uni, azimuth ambiguous energies outside the main bandwidth are folded into the effective bandwidth, so the azimuth ambiguity signal rate dominates in that regime. The sharp rise in the estimation error of AWLS as Fu changes from 1.15 to 1.2 is an artefact caused by the covariance matrix becoming singular when coincident sampling [31][32][33] occurs, i.e., when the sampling of the first channel for one pulse coincides with the sampling of the last channel for the previous pulse; the proposed method is unaffected by coincident sampling.
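The principle behind the ME method, namely that image entropy is minimized when the phase errors are correctly compensated, can be illustrated with a deliberately simplified one-dimensional toy: a flat spectrum (an ideal point target) whose upper half carries a phase error, corrected by a grid search over candidate phases. The sub-band model and the 25° error are hypothetical; the paper's full method jointly estimates all channels with a weighting strategy.

```python
import numpy as np

def image_entropy(img):
    """Shannon entropy of the normalized image intensity."""
    p = np.abs(img) ** 2
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

n = 256
sub = np.arange(n) >= n // 2          # one spectral sub-band (toy stand-in
                                      # for the samples of one channel)
true_err_deg = 25.0
# flat spectrum = ideal point target; the sub-band carries the phase error
meas = np.ones(n, complex) * np.exp(1j * np.deg2rad(true_err_deg) * sub)

cand_deg = np.arange(-30, 31)         # search grid in degrees
ents = [image_entropy(np.fft.ifft(meas * np.exp(-1j * np.deg2rad(c) * sub)))
        for c in cand_deg]
best = cand_deg[int(np.argmin(ents))]
assert best == 25                     # entropy is minimal at the true error
```

When the candidate matches the true error the image collapses to a single impulse (entropy near zero); any residual phase step spreads the energy and raises the entropy, which is why the minimum-entropy criterion needs only the full spectrum and not a CDC-sensitive in-band/out-of-band split.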

Distributed Targets Simulation Experiments
In this subsection, distributed-target simulation experiments are conducted. The image used for the simulation was acquired by Sentinel-1A on 24 March 2019. The scene is located in the coastal waters of the Great Barrier Reef, Australia, with 20 m resolution. The parameters are the same as for multi-channel TOPSAR in Table 1, except that the resolution is now 20 m. The scene is 10 km in azimuth and 50 km in range. The original scene and the imaging results with errors and after error compensation are shown in Figure 15. With the errors in Table 2 added, the image (Figure 15b) is smeared and ghost targets appear. The ghost targets mainly result from the phase inconsistency errors, as they are a characteristic signature of such errors. Besides the overlaid ghost targets, the CDC error is another factor contributing to the smearing. The effect of the DRR error is not easy to see in the image directly, because it mainly causes azimuth resolution degradation and intensity loss. After error correction, the image (Figure 15c) is well focused, matching the image simulated without errors (Figure 15a).

Conclusions
Multi-channel BS-SAR is a promising approach for high resolution and wide-swath observation, but is subject to errors that can lead to degradation of image quality. To overcome spectrum aliasing caused by beam steering and non-uniform sampling of the azimuth channels, pre-processing, combined with derotation and spectrum reconstruction, is used by most multi-channel BS-SAR imaging algorithms; this requires accurate system parameters and error calibration. However, the derotation rate error is not always accurately calculated, and the constant Doppler centroid and phase inconsistency errors cannot be calibrated by conventional methods. The interaction between different errors also makes the error estimation more difficult.
The derotation rate error leads to a time shift and a scaling of the pre-processed signal. The shift varies with the azimuth positions of targets, which breaks the overlap of the pre-processed signals, so azimuth antenna pattern correction and signal weighting cannot be performed correctly. Furthermore, the spectra of targets located at the edge of the illuminated scene may shift out of the processing bandwidth, causing resolution degradation and radiometric loss. The CDC error also leads to a constant spectral shift, which allows the DRR and CDC errors to be determined by detecting the time shifts between the two halves of the data after pre-processing. The method for estimating the derotation rate and CDC errors is validated by simulation, and the results show it to be robust against SNR and uniformity factor. Phase inconsistency errors between azimuth channels are inherent in multi-channel SAR systems, but most estimation methods for multi-channel strip-map SAR cannot be used in multi-channel BS-SAR because the spectra of different azimuth targets are separated in the Doppler domain, which prevents the use of traditional MUSIC methods. Methods based on the spectrum energy distribution are applicable to multi-channel BS-SAR but are sensitive to the CDC error because they depend only on the spectrum inside or outside the processing bandwidth. To overcome this, we use a minimum entropy estimation method based on the full spectrum, which is robust against the CDC error. With the help of a novel weighting strategy, the ME method achieves excellent performance even at low SNR and with an unfavorable uniformity factor, as demonstrated by the estimation and correction simulations.