A High-Resolution SAR Focusing Experiment Based on GF-3 Staring Data

Spotlight synthetic aperture radar (SAR) is a proven technique that can provide higher-resolution images than traditional stripmap SAR. This paper describes a high-resolution SAR focusing experiment based on Gaofen-3 (GF-3) staring data with about 55 cm azimuth resolution and 240 MHz range bandwidth. In staring spotlight (ST) mode, the antenna always illuminates the same scene on the ground, which extends the synthetic aperture. Within a two-step processing algorithm, several special aspects, such as curved-orbit model error correction, stop-and-go correction, and antenna pattern demodulation, must be considered during image focusing. We describe all of these aspects in detail and put forward corresponding solutions. Because the suggested methods can be used directly in an imaging module without modifying other data processing software, they make the most of the existing ground data processor. Finally, actual data acquired in GF-3 ST mode is used to validate these methodologies, and a well-focused, high-resolution image is obtained as a result of this focusing experiment.


Introduction
Synthetic aperture radar (SAR) has developed rapidly over the past few decades as an effective means of Earth observation. The resolution of spaceborne SAR has evolved from tens of meters to meters and even decimeters. Sliding spotlight (SL) mode, in which the antenna always aims at a point called the virtual rotation point (VRP), allows for a longer accumulating time [1,2] and results in high-resolution images. Compared with the traditional stripmap mode, in which the VRP moves toward infinity, staring spotlight (ST) mode is another version of SL mode in which the VRP lies within the imaged scene on the ground [3][4][5][6]. Considering the multiple observation angles of SL/ST mode, Munson et al. treated SAR image focusing as a tomographic reconstruction problem of narrow-band computer-aided tomography (CAT) and proposed a tomography formulation [7]. Eichel et al. used spotlight SAR interferometry for terrain elevation mapping and interferometric change detection [8]. Based on the COSMO-SkyMed Spotlight-2 data, Filippo recovered partially imaging algorithms would be a bad influence on the quality of the image. This problem can be solved by introducing two terms, one in the range-Doppler domain and one in the two-dimensional frequency domain [23,24]. On the other hand, GF-3 uses an active phased array antenna, but the steering angle range and power gain are modulated by the array elements. This problem is very common in SL/ST and terrain observation with progressive scan (TOPS) [25] modes and can be solved by compensating the gain-loss ratio at the raw data level [26]. This paper introduces a GF-3 ST focusing experiment that aims to address problems in producing higher-resolution SAR images and to prepare for China's next generation of spaceborne SAR.
In Section 2, the data acquisition and preprocessing are illustrated briefly, and then, data processing including model error compensation, the imaging algorithm, stop-and-go approximation, and antenna pattern modulation is explained in detail. Experimental results are presented in Section 3, followed by discussion and conclusions in Section 4.

Focusing Experiment Based on GF-3 Staring Data
In practice, to acquire operational SL-mode products, GF-3 works in sliding spotlight mode, during which the antenna steers toward a VRP below the ground to increase the integration time. The significant distinction between SL mode and ST mode in this experiment is that in ST mode the VRP is located within the imaged scene on the ground (i.e., the antenna always points at the same scene during the accumulating time). Table 1 summarizes some parameters of GF-3 in SL and ST modes. The resolution in slant range can be derived from the bandwidth of the pulse signal, but its projection on the ground (the ground resolution) changes with the incidence angle, as shown in Figure 1a. As we can see, GF-3 can achieve a ground resolution within [0.815, 1.826] m. The resolution in azimuth changes with the antenna steering angle range, as shown in Figure 1b. GF-3 can reach the highest theoretical azimuth resolution of about 42 cm, with a rectangularly weighted Doppler spectrum, when the steering angle range is 3.8°.
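The quoted resolution figures follow from standard relations; a minimal sketch in Python, assuming a C-band carrier wavelength of about 5.55 cm for GF-3 (the wavelength is not stated in this section, so it is an assumption here):

```python
import math

C = 299_792_458.0    # speed of light, m/s
WAVELENGTH = 0.0555  # assumed GF-3 C-band carrier wavelength, m

def slant_range_resolution(bandwidth_hz: float) -> float:
    """Slant-range resolution from pulse bandwidth (rectangular spectrum)."""
    return C / (2.0 * bandwidth_hz)

def ground_range_resolution(bandwidth_hz: float, incidence_deg: float) -> float:
    """Project the slant-range resolution onto the ground."""
    return slant_range_resolution(bandwidth_hz) / math.sin(math.radians(incidence_deg))

def azimuth_resolution(steering_range_deg: float) -> float:
    """Spotlight azimuth resolution from the total aspect-angle variation."""
    return WAVELENGTH / (2.0 * math.radians(steering_range_deg))

print(slant_range_resolution(240e6))          # ~0.62 m
print(ground_range_resolution(240e6, 20.0))   # ~1.83 m, the far end of the quoted range
print(azimuth_resolution(3.8))                # ~0.42 m
```

With the 240 MHz bandwidth and a 3.8° steering range, these reproduce the ~0.625 m slant resolution and ~42 cm azimuth resolution quoted in the text.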
However, higher resolution means more difficult processing, and the performance of traditional approaches usually is not satisfactory. In order to get qualified images, this experiment made some improvements to the range history model, the imaging algorithm, the stop-and-go approximation, and the antenna modulation. The details of this experiment can be found in the below subsections.

Data Acquisition and Preprocessing
GF-3 uses an active phased array antenna to achieve azimuth beam steering, which can steer faster than a traditional mechanical antenna. In [21], Sun et al. introduced GF-3's active phased array antenna, which contains 12 columns (azimuth) and 64 rows (elevation) of transmit-receive channels, measures 7.5 m (azimuth) by 1.232 m (elevation), and is distributed in two panels in SL/ST mode. In consideration of its grating lobes (see Section 3.4), this antenna can achieve a steering angle within ±1.9°. At the same time, the steering angle varies in small discrete steps, and the step size is 0.01°. In other words, the steering angle changes by 0.01° after a fixed number of pulses, which we call the stationary number of pulses (SNP). In effect, the SNP controls the VRP's position when the PRF is constant; the larger the SNP, the farther the VRP is from the satellite. In this focusing experiment, the data was acquired on 11 March 2017 over Nanjing, China. The SNP is 90, while the SNP of an SL acquisition of the same scene is 120.
After receiving echoes reflected from ground, the processor on GF-3 compresses the data with a block adaptive quantization (BAQ) (8:4) algorithm. Every frame of the compressed data contains not only an echo, but also some essential auxiliary parameters, such as the radar system parameters, the satellite attitude and position, the antenna steering angle, and the beam position.
Because the VRP position depends on the SNP, some care is needed to determine it. We combine two frames of raw data into one equation and calculate the VRP. Assuming N frames of raw data are acquired during the integration time, there are N/2 equations (i.e., N/2 estimates of the VRP). Then, the final VRP is estimated by least squares (LS). Moreover, some essential parameters, such as the scene size, the Doppler bandwidth (of the whole scene and of a single point), and the nominal resolution, can be calculated.
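The LS estimate can be pictured geometrically as finding the point closest to all beam-pointing lines. The exact GF-3 preprocessing equations are not given here, so the following is an illustrative sketch under that interpretation, not the operational formulation:

```python
import numpy as np

def estimate_vrp(positions, directions):
    """Least-squares point closest to a set of lines, each line given by a
    sensor position p_i and a beam-pointing unit vector u_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(positions, directions):
        u = np.asarray(u, float) / np.linalg.norm(u)
        M = np.eye(3) - np.outer(u, u)  # projector onto the plane normal to the line
        A += M
        b += M @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# Two pointing lines that intersect at (1, 2, 3):
vrp = estimate_vrp([(0, 2, 3), (1, 0, 3)], [(1, 0, 0), (0, 1, 0)])
```

With noisy pointing directions the same normal equations average all N/2 line constraints, which is the spirit of the LS step described above.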
At this point, both echo data and some essential auxiliary parameters have been acquired.

Curved Orbit and Cubic Phase Error
Initially, traditional imaging algorithms were based on an airborne SAR acquisition geometry that assumed the target on the ground was still and the platform flew at a constant velocity. In spaceborne cases, however, the platform's flight path as well as the Earth's surface are curved, and the Earth rotates continuously. In order to make the most of existing algorithms, the HREM, which uses an equivalent velocity and squint angle to describe the range history, was proposed. This model (see Figure 2a) assumes that the sensor flies at a constant velocity v_e; the sensor is located at point O when the antenna points at the target T on the ground and the azimuth time t = 0; the sensor is located at point P at an arbitrary azimuth time t; A is the nadir point; the distance between O and T is r; the distance between P and T is R_e(t); θ_L is the looking angle; θ_sq is the squint angle; and φ_e is the complement of θ_sq. Then, the range history can be expressed as

R_e(t) = \sqrt{r^2 + v_e^2 t^2 - 2 r v_e t \cos φ_e}    (1)

f_D = \frac{2 v_e \cos φ_e}{λ},  f_R = -\frac{2 v_e^2 \sin^2 φ_e}{λ r}    (2)

where λ is the carrier wavelength, f_D is the Doppler centroid frequency, and f_R is the Doppler frequency modulation rate; the latter two can be obtained by calculating the first- and second-order derivatives of the actual range history between the target and the sensor. In this experiment, some of these parameters are listed in Table 2.
In most cases, the integration time is short (no more than 2 s), and the trajectory can be described by the HREM accurately. In SL or ST mode, however, targets on the ground illuminated by a complete beam have a longer integration time than in stripmap mode. As shown in Figure 2b, the integration time of the target in the middle of the imaged scene is about 8.58 s. Because of the long integration time and the spaceborne acquisition geometry, the actual range history is not a pure hyperbola, and the phase error caused by the range deviation grows as the integration time extends. In addition, the third- and higher-order terms of Equation (1) cannot be ignored at long integration times or high resolutions. In operational cases, with the help of the satellite attitude and orbit parameters provided by the Global Positioning System (GPS), f_D and f_R can be calculated. Then, the equivalent range history can be obtained according to Equations (1) and (2), and we compensate the model error in two aspects: range deviation and cubic phase error. In order to use the HREM to deal with raw data, the deviation between the actual and the equivalent range history must be compensated. The range history vector R_st(t) of a target can be obtained with the help of the attitude and orbit parameters, and it can be expanded by the Taylor formula:

\mathbf{R}_{st}(t) = \mathbf{R}_{st} + \mathbf{V}_{st} t + \frac{1}{2}\mathbf{A}_{st} t^2 + \frac{1}{6}\mathbf{B}_{st} t^3 + \frac{1}{24}\mathbf{C}_{st} t^4    (3)

where the subscript s means satellite and the subscript t means target; R_st, V_st, A_st, B_st, and C_st denote the distance, velocity, acceleration, rate of acceleration, and rate of rate of acceleration vectors between the satellite and the target at t = 0. Then, the distance between the satellite and the target can be expressed as

R_{st}(t) = |\mathbf{R}_{st}(t)| ≈ c_0 + c_1 t + c_2 t^2 + c_3 t^3 + c_4 t^4    (4)

where the coefficients are combinations of x_1, x_2, x_3, x_4, which can be found in [18] (Equations (10)-(13)).
f_D and f_R of the HREM can be expressed as

f_D = -\frac{2}{λ}\dot{R}_{st}(0),  f_R = -\frac{2}{λ}\ddot{R}_{st}(0)

Then, the deviation can be expressed as

ΔR(t) = R_{st}(t) - R_e(t)

The actual SAR echo signal can be expressed as

s(τ, t) = A_0 w_r(τ - 2R_{st}(t)/c) w_a(t) \exp\{jπ K_r (τ - 2R_{st}(t)/c)^2\} \exp\{-j4π R_{st}(t)/λ\}

where τ is the range time, A_0 is a complex constant, c is the speed of light, K_r is the chirp rate, w_r(·) is the window in range, and w_a(·) is the window in azimuth. Ignoring A_0 and the windows, the above equation can be expressed as

s(τ, t) = \exp\{jπ K_r (τ - 2R_{st}(t)/c)^2 - j4π R_{st}(t)/λ\}

By application of the principle of stationary phase (POSP) [27], the range Fourier transform of the above equation can be written as

S(f_τ, t) = \exp\{-jπ f_τ^2/K_r\} \exp\{-j4π (f_c + f_τ) R_{st}(t)/c\}

where F_τ{·} represents the range Fourier transform, f_τ is the range frequency, and f_c = c/λ is the carrier frequency. Then, the actual range history becomes the ideal hyperbola described by Equation (1) after compensating in the azimuth-time domain with

H_1(f_τ, t) = \exp\{j4π (f_c + f_τ) ΔR(t)/c\}    (15)

This operation is just like the first-order motion compensation mentioned in [28], which corrects both phase and position. Even though Equation (15) is only valid for the reference target located in the center of the imaged scene, the beam width is very small (about 0.287° in ST mode), so the correction is applied to the whole imaged scene in this experiment. After compensating the above deviation, a pure hyperbolic range history is enforced, so the classical imaging algorithms can be used without modification. But in the high-resolution case, the higher-order terms of Equation (4) play an important role in focusing. In this experiment, only the third-order term was considered, and c_3 in Equation (4) can be obtained by fitting R_{st}(t) against t. In general, c_3 ≪ c_2, so the quadratic stationary point calculated by the POSP can be approximated as the cubic stationary point. Under this approximation, the azimuth stationary point can be written as t(f_a) = (f_a - f_D)/f_R. Then, the cubic phase error can be compensated by

H_3(f_a) = \exp\left\{j\frac{4π}{λ} c_3 \left(\frac{f_a - f_D}{f_R}\right)^3\right\}    (16)

in the range-Doppler domain, where f_a is the Doppler frequency.
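The first-order (bulk) compensation described above amounts to multiplying the azimuth-time signal by the conjugate of the phase that the deviation ΔR(t) introduces. A minimal phase-only sketch follows; the position (envelope) correction applied in the range-frequency domain is omitted, the wavelength is an assumed C-band value, and the sign convention matches the exp(-j4πR/λ) echo phase used above:

```python
import numpy as np

WAVELENGTH = 0.0555  # assumed C-band carrier wavelength, m

def compensate_range_deviation(raw, delta_r):
    """raw: 2-D array (azimuth x range); delta_r: per-azimuth-sample
    deviation between the actual and equivalent (HREM) range history, m.
    Removes the azimuth phase error exp(-j*4*pi*delta_r/wavelength)."""
    phase = np.exp(1j * 4.0 * np.pi * delta_r / WAVELENGTH)
    return raw * phase[:, None]
```

Applied to data whose only azimuth phase is the deviation term, this leaves a constant-phase (ideal hyperbolic) history behind.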

Two-Step Processing Algorithms
In ST mode, the Doppler bandwidth of a single target located in the imaged scene can be expressed as 2vX_I/(D_az X) [2], where v is the sensor velocity, X_I is the synthetic aperture length, D_az is the antenna aperture in azimuth, and X is the length of the antenna footprint. Under the assumption that the scene width in azimuth is X, the Doppler bandwidth of the whole scene is

B_a = \frac{X_I + X}{X} B_{as}    (17)

where B_as = 2v/D_az is the Doppler bandwidth of stripmap mode. The B_a in Equation (17) is usually much larger than the PRF. If we dealt with the ST data in the same way as traditional stripmap SAR, there would be Doppler aliasing in azimuth, which would introduce false targets into the focused image. GF-3 uses the two-step processing approach, the de-ramped chirp scaling algorithm (DCSA), to process the raw echo data. This algorithm involves two steps: the first is prefiltering in azimuth, and the second is processing the filtered data with the imaging approach of standard stripmap SAR.
The first step implements an azimuth convolution between the raw data and a reference chirp signal s_ref(t), a chirp with modulation rate f_Rref and centroid f_Dref:

ŝ_a(t) = s_a(t) *_t s_ref(t)    (18)

where t is the azimuth time, s_a(t) is the azimuth raw data, *_t denotes convolution over t, ŝ_a(t) is the azimuth data after convolution, f_Rref is the Doppler frequency modulation rate of the reference point (in the center of the imaged scene), and f_Dref is its Doppler centroid frequency; the values of f_Rref and f_Dref can be found in Table 2.
In the discrete domain, the above convolution can be converted to a fast Fourier transform (FFT) and a complex multiplication, as shown in the equation below [4]:

ŝ_a(m·Δt') = \exp\{jπ f_{Rref}(m·Δt')^2\} · FFT\{ s_a(i·Δt) · \exp\{j2π f_{Dref}\, i·Δt + jπ f_{Rref}(i·Δt)^2\} \}    (19)

where i = -N_a/2, ..., N_a/2 and N_a is the number of azimuth samples of the raw data; m = -P/2, ..., P/2 and P is the number of azimuth samples of the prefiltered data; Δt is the azimuth sampling interval of the raw data; and Δt' is the azimuth sampling interval of the prefiltered data.
In fact, the azimuth prefiltering is equivalent to reducing the signal duration while keeping the Doppler bandwidth unchanged. As a result, the azimuth sampling rate becomes higher than the Doppler bandwidth, which resolves the Doppler aliasing problem in azimuth.
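A compact sketch of the discrete prefiltering in Equation (19). For simplicity the output length is kept equal to the input length (glossing over the N_a-to-P resampling), and the output quadratic phase is evaluated on the input time grid, which is an assumption of this sketch:

```python
import numpy as np

def azimuth_prefilter(s_a, dt, f_r_ref, f_d_ref):
    """De-ramp with the reference chirp, FFT, then restore the quadratic
    phase: the discrete form of the azimuth convolution (cf. Eq. (19)),
    with output length kept equal to the input length."""
    n = len(s_a)
    t = (np.arange(n) - n // 2) * dt
    # multiply by the reference chirp (de-ramp), centered on f_d_ref
    deramped = s_a * np.exp(1j * 2 * np.pi * f_d_ref * t + 1j * np.pi * f_r_ref * t**2)
    # FFT maps azimuth time to a compressed (SPECAN-like) axis
    spec = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(deramped)))
    # restore the residual quadratic phase on the output grid
    return np.exp(1j * np.pi * f_r_ref * t**2) * spec

# a signal matching the conjugate reference compresses to the center bin
n, dt, f_r, f_d = 256, 1e-3, 2000.0, 500.0
t = (np.arange(n) - n // 2) * dt
s = np.exp(-1j * 2 * np.pi * f_d * t - 1j * np.pi * f_r * t**2)
out = azimuth_prefilter(s, dt, f_r, f_d)
```

The usage lines illustrate the de-ramp idea: a target whose azimuth chirp matches the reference collapses to a single output sample.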
The second step is to use the classical CSA on the prefiltered data. However, in the azimuth filtering and residual phase correction, the second-order term shown in Equation (20) should be added to eliminate the effects of the prefiltering.
Then, the azimuth filtering and residual phase correction can be realized by multiplying the azimuth signal with Equation (21); the parameters above can be found in [10].

Stop-And-Go Approximation
In traditional airborne or low-orbit spaceborne SAR imaging algorithms, it is usually assumed that the radar does not move between the transmission of a pulse and the reception of the corresponding echo reflected from scatterers (i.e., the stop-and-go approximation). Under this approximation, both the signal model and the complexity of the imaging algorithm are greatly simplified. However, the approximation introduces two adverse effects on image quality, especially in a high-resolution SAR system.
The first effect is a range-dependent azimuth shift in the focused image. In this staring experiment, the satellite moves about 45 m between a pulse's transmission and its reception. This shift can be compensated using a linear azimuth phase ramp Φ_sg1 in the range-Doppler domain after range cell migration correction [22,24], where f_a is the azimuth frequency vector, τ is the range time vector, and τ_s is the time delay of receiving the first range signal. The second effect is a range-frequency-dependent azimuth shift in the echo signal. In the GF-3 case, the pulse duration is 45 µs, during which the satellite moves about 30 cm. The range time and the chirp frequency have a one-to-one relationship, so the azimuth phase has a different slope at each range frequency. Fortunately, this kind of shift is space invariant in azimuth, and it can be compensated using Φ_sg2 in the two-dimensional frequency domain, where f_τ is the range frequency vector, K_r is the chirp rate, and f_s is the range sampling frequency.
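The two displacement figures quoted above can be checked directly. The ~7.6 km/s orbital velocity and ~900 km slant range used here are nominal assumed values, not taken from the paper's tables:

```python
def stop_and_go_shifts(v_sat, slant_range, pulse_len, c=299_792_458.0):
    """Platform displacement during (a) the pulse round-trip delay and
    (b) the pulse duration: the two motions the stop-and-go
    approximation ignores."""
    round_trip_delay = 2.0 * slant_range / c
    return v_sat * round_trip_delay, v_sat * pulse_len

shift_round_trip, shift_intra_pulse = stop_and_go_shifts(7600.0, 900e3, 45e-6)
# shift_round_trip ~ 45 m, shift_intra_pulse ~ 0.34 m
```

These reproduce the ~45 m inter-pulse shift and the ~30 cm intra-pulse shift cited in the text.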

Antenna Pattern Modulation
GF-3 uses an active phased array antenna to achieve azimuth beam steering. However, the active phased array antenna has three drawbacks in practical application.
First, the scanning range is limited by the spacing between two adjacent elements, D, which equals the ratio of the antenna aperture D_az to the number of elements N_az in azimuth. If D is not small enough, grating lobes appear and constrain the steering range. The positions of the grating lobes θ_g can be described by Equation (24):

\sin θ_g = \sin θ_s - \frac{nλ}{D}    (24)

where θ_s is the steering angle, λ is the carrier wavelength, and n is an arbitrary non-zero integer. Figure 3b shows the main lobe and the grating lobes within [-15°, 15°] when θ_s = 1.9°. There are six peaks in Figure 3b; obviously, the highest peak is the main lobe and the others are grating lobes. The interval between two adjacent peaks is about 5.1°, which means that the maximum steering range is 5.1°. In operational SL or ST mode, GF-3 can steer from -1.9° to 1.9° in azimuth.
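The ~5.1° lobe spacing can be reproduced from the element spacing, using the standard uniform-array grating-lobe relation and the same assumed ~5.55 cm wavelength (the wavelength is not stated in this section):

```python
import math

WAVELENGTH = 0.0555   # assumed C-band carrier wavelength, m
D_AZ, N_AZ = 7.5, 12  # azimuth aperture (m) and column count from the text

def first_grating_lobe_deg(steer_deg):
    """First grating-lobe direction of a uniform linear array:
    sin(theta_g) = sin(theta_s) - n * wavelength / d, with n = 1."""
    d = D_AZ / N_AZ  # element spacing, 0.625 m
    return math.degrees(math.asin(math.sin(math.radians(steer_deg)) - WAVELENGTH / d))

# main-lobe-to-grating-lobe separation at the maximum steering angle
sep = 1.9 - first_grating_lobe_deg(1.9)
```

The separation comes out near 5.1°, matching the peak interval quoted for Figure 3b.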
In addition, the antenna's maximum power gain changes with the steering angle. Equation (25) shows the active phased array antenna pattern:

G(θ) = G_0(θ) · N_{el} · \left| \frac{\sin(N_{az} π D (\sin θ - \sin θ_s)/λ)}{N_{az} \sin(π D (\sin θ - \sin θ_s)/λ)} \right|^2    (25)

where G is the power gain of the array antenna, G_0 is the power gain of a single element, N_el = 64 is the number of elements in elevation, θ is the look direction (azimuth antenna pattern angle), and θ_s is the steering angle. The envelope of the array gain is modulated by a sinc(·) function, and Figure 3a,b shows the relationship between the steering angle and the antenna pattern. The influence of the element pattern is more obvious in TOPSAR [26]. In this experiment, 32,130 frames of data were acquired in azimuth, and the normalized modulation factor of every frame can be found in Figure 3c. Finally, the antenna always illuminates the same scene on the ground in ST mode; therefore, the magnitude of the image is modulated by the antenna pattern. An intuitive manifestation is that the middle part of an image is brighter than its azimuth edges. Taking the data acquired in this experiment as an example, we can obtain the antenna pattern in both transmit and receive mode with the help of the auxiliary data, as shown in Figure 4a,b. According to the width of the imaged scene and the azimuth spacing of each pixel, the beam width of the whole scene can be calculated. In this experiment, the beam width of the imaged scene is about 0.287° and the number of azimuth samples in the final image is 38,588. In order to demodulate the antenna pattern, an interpolation operation is carried out on the round-trip antenna pattern, and the interpolated result can be found in Figure 4c. Then, the inverse of the curve shown in Figure 4c can demodulate the brightness of the focused image pixel by pixel at the image level.
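The image-level demodulation described above (interpolate the round-trip pattern to the image's azimuth grid, then divide) can be sketched as follows; `pattern_az` stands in for the curve of Figure 4c, and the linear interpolation is an assumption of this sketch:

```python
import numpy as np

def demodulate_antenna_pattern(image, pattern_az):
    """Divide each azimuth column of a focused image by the two-way
    antenna gain, interpolated to the image's azimuth sample count."""
    x_img = np.linspace(0.0, 1.0, image.shape[1])
    x_pat = np.linspace(0.0, 1.0, len(pattern_az))
    gain = np.interp(x_img, x_pat, pattern_az)
    return image / gain[None, :]
```

An image whose azimuth brightness follows the pattern envelope comes out flat after this step, removing the bright-center effect noted in the text.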

Processing Flow
In this subsection, the processing flow of GF-3 in ST mode is proposed; it can be found in Figure 5. After data acquisition and preprocessing, the echo data and auxiliary parameters are obtained. Then, the antenna element correction mentioned in Figure 3 is performed on the echo data, and some parameters, such as the Doppler centroid frequency and the Doppler frequency modulation rate, are calculated from the auxiliary parameters. In the azimuth-time domain, the first-order motion compensation forces the range history of each target into a perfect hyperbola, which is the foundation of the imaging algorithms. The azimuth prefiltering operation suppresses Doppler aliasing in azimuth. The next step applies the classical chirp scaling algorithm to the prefiltered data. At the same time, the stop-and-go correction is carried out in the range-Doppler and two-dimensional frequency domains. Before the azimuth FFT, the cubic model error is corrected in the range-Doppler domain. In order to suppress the sidelobes, Taylor windows are applied in range and azimuth: the azimuth weighting window is applied in the azimuth-time domain after prefiltering, and the range weighting window is applied in the range-frequency domain. Finally, the antenna modulation is compensated at the image level and a focused image is produced.

Experimental Results
In this section, an acquisition taken in GF-3 ST mode over Nanjing, China was used to verify the methodology described in the last section. Some parameters of the data are listed in Table 3. It should be noted that the steering angle range is [-1.78°, 1.78°] in this ST experiment. According to Figure 1b, the theoretical azimuth resolution is about 44.7 cm. However, the Doppler spectrum is weighted down by the antenna pattern and a Taylor window, so the actual azimuth resolution is about 54.5 cm.

Two-Step Algorithm Processing
The raw data is directly processed by the two-step processing algorithm, and the primary image is shown in Figure 6a. We calculated some parameters of this image; the results can be found in Table 4, where Na is the number of azimuth samples in the raw echo data, P is the number of azimuth samples after prefiltering, Ba_s is the Doppler bandwidth of the whole imaged scene, Ba_p is the Doppler bandwidth of one target within the imaged scene, and PRF_new is the azimuth sampling rate after prefiltering. Obviously, PRF_new is larger than the Doppler bandwidth (both Ba_s and Ba_p), so there is no Doppler aliasing in azimuth. To analyze the quality of the focused image produced by our approach, a boat moored at the bank of the Yangtze River, shown in Figure 6b, is treated as a corner reflector; the results can be found in Table 5 and Figure 7. The overall improvement of the performance relates mainly to the peak sidelobe ratio (PSLR) and the integrated sidelobe ratio (ISLR), rather than to the resolution, so Taylor weighting windows are applied in both range (-24 dB) and azimuth (-19 dB). This operation suppresses the sidelobes effectively, but the main lobe widens (i.e., resolution is lost). The theoretical resolution is 0.625 m in range and 0.447 m in azimuth; after the weighting operation, the actual resolution is 0.731 m in range and 0.545 m in azimuth. It should be noted that the azimuth signal is weighted not only by the Taylor window but also by the antenna pattern. Because calculating a precise antenna pattern window is very difficult due to the SNP, the actual azimuth resolution is the result of a simulation. The contour in Figure 7c is messy.
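The PSLR and ISLR figures in Table 5 are measured from impulse-response cuts. A generic measurement sketch follows, delimiting the main lobe by the first nulls around the peak, which is one common convention (the paper does not state which convention its quality tool uses):

```python
import numpy as np

def pslr_islr_db(cut):
    """Peak and integrated sidelobe ratios (dB) of a 1-D impulse-response
    cut; the main lobe is taken between the first nulls around the peak."""
    p = np.abs(np.asarray(cut)) ** 2
    k = int(np.argmax(p))
    # walk outwards from the peak to the first local minima (nulls)
    left = k
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = k
    while right < len(p) - 1 and p[right + 1] < p[right]:
        right += 1
    side = np.concatenate([p[:left], p[right + 1:]])
    pslr = 10.0 * np.log10(side.max() / p[k])
    islr = 10.0 * np.log10(side.sum() / p[left:right + 1].sum())
    return pslr, islr
```

For an unweighted sinc response this returns a PSLR near the textbook -13.3 dB, which is why the Taylor weighting above is needed to push the sidelobes down further.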

Model Error Correction
As we can see from Figure 7b, asymmetric sidelobes, which are caused by a cubic phase error [5] (pp. 210-212), can be found in the compression results. At the same time, the PSLR and ISLR in azimuth are not satisfactory because of the model error. After model error correction, the results can be found in Table 6 and Figure 8. Obviously, the performance in azimuth is greatly improved: the azimuth resolution becomes higher, and the PSLR and ISLR become lower. Unfortunately, a small amount of residual cubic phase remains in both range and azimuth. Meanwhile, the contour is still messy.

Stop-And-Go Correction
The contour in Figure 8c is not tidy enough, and this might be caused by the stop-and-go approximation in the DCSA. After stop-and-go correction, the results can be found in Table 7 and Figure 9. Obviously, the compression results in both slant range and azimuth are better than those in the last subsection. The sidelobes can be seen in the contour shown in Figure 9c. However, the performance is still not good enough, and the sinc pulse seems a little noisy. The reasons can be summarized as follows. First, in order to remove the deviation between the actual range history and the HREM, the compensation shown in Equation (15) is applied in the azimuth-time domain. But Equation (15) can only remove the deviation of the reference target located in the middle of the imaged scene. In reality, different targets have different range histories, and Equation (15) cannot remove the deviation of all targets in the whole imaged scene. At the same time, we obtain the sensor's real-time coordinates via the GPS receiver mounted on the satellite and then estimate the orbit parameters, but the accuracy of the coordinates and the orbit parameters influences the model error correction. If we used the coordinates provided by a dual-frequency GPS, a more accurate result might be obtained.
In addition, the boat is an integration of a set of scatters instead of an ideal point target, and other targets around the boat shown in Figure 6b will also influence the compression.
Finally, as the pulse signal passes through the atmosphere, refraction occurs. The propagation path then becomes longer than the theoretical range history, and this might affect the parameter calculations and the phase error compensation.

Antenna Pattern Demodulation
In ST mode, GF-3 always illuminates the same scene on the ground, so the brightness of the image is modulated by the antenna pattern, as Figure 6a shows. The inverse of the curve in Figure 4c can compensate for this modulation at the image level, and the result can be found in Figure 10. In order to verify the accuracy of the phase information, we conducted a repeat-pass SAR interferometry experiment based on the ST image above and a normal SL image of the same scene. The interferogram of the spot marked with a white box in Figure 10 can be found in Figure 11a. Only if the data processing presented in this paper preserves the phase faithfully can the conjugate product of the ST focused image and the image obtained in regular SL mode effectively cancel the common backscatter to form an interferogram. More details of this interferometric experiment will be published in the future.
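The interferogram-forming step mentioned above is, at its core, a conjugate product of the two co-registered single-look complex images; a minimal sketch (co-registration, flat-earth removal, and filtering are not shown):

```python
import numpy as np

def form_interferogram(st_image, sl_image):
    """Conjugate product of two co-registered complex SAR images; the
    common backscatter phase cancels, leaving the fringe pattern."""
    return np.angle(st_image * np.conj(sl_image))
```

If the ST processing distorted the phase, the common term would not cancel cleanly and the fringes of Figure 11a would decorrelate.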

Discussion and Conclusions
This paper introduces an experiment regarding GF-3 ST mode. After data acquisition and preprocessing, several special steps, such as model error correction, stop-and-go correction, and antenna pattern demodulation, are executed within a two-step processing algorithm, and the focused image shown in Figure 10 is produced. The interferogram shown in Figure 11 demonstrates that the data processing presented in this paper preserves the phase fidelity well. This ST focusing experiment not only provides the highest-resolution image of GF-3 to date, but also lays the foundation for the development of new, higher-resolution Chinese SAR systems in the future.
Because the ST mode introduced in this paper is not a regular working mode of GF-3, no ST experiment was performed in the calibration field during the commissioning phase, and we cannot obtain an image with ideal corner reflectors. The scene of the only attainable ST image, shown in Figure 10, is very complicated, and it is difficult to find an ideal corner reflector in it. In order to obtain more precise results, another ST experiment needs to be carried out at the calibration field.
ST is different from SL. The existing methods for removing the SL cubic phase error in the range history are not suitable for ST. In this experiment, we calculated the cubic phase error for every pixel in azimuth, but this is time consuming. If the resolution becomes higher, the higher-order phase errors will also influence focusing. Therefore, it is necessary to develop a new technique that can compensate cubic and higher-order phase errors efficiently.
In summary, there are still a lot of difficulties to be overcome in the future, and we will focus our efforts on solving these problems, especially the atmosphere's influence on high-resolution images, at a later stage.