A Fast Factorized Back-Projection Algorithm Based on Range Block Division for Stripmap SAR

Abstract: Fast factorized back-projection (FFBP) is a classical fast time-domain technique that has achieved significant success in spotlight synthetic aperture radar (SAR) signal processing. The algorithm has been extended to stripmap SAR through integral aperture determination and full-aperture data block processing while retaining its computational efficiency. However, the above method operates only in the azimuth direction, and its computational efficiency still needs improvement in practical processing. This paper proposes a fast factorized back-projection algorithm for stripmap SAR imaging based on range block division. The echo data are divided into multiple subblocks in the range direction, and FFBP processing is applied separately to each full-aperture subblock, further enhancing computational efficiency. The paper analyzes the algorithm’s principles, underscores the necessity of integral aperture determination and full-aperture data block processing, provides specific implementation steps, and applies the algorithm to point target simulation and experimental data from a vehicle-mounted ice radar. The experiments validate the algorithm’s efficiency in stripmap SAR imaging.


Introduction
Synthetic aperture radar (SAR) utilizes the motion of a payload platform, such as a satellite, aircraft, or vehicle, to synthesize a virtual long aperture, thereby achieving high resolution in the motion direction, while high resolution in the beam direction is obtained by transmitting linear frequency modulated signals and applying pulse compression techniques [1]. SAR possesses unique advantages in all-weather, all-day imaging of scenes and finds widespread applications in various fields. With technological advancements, SAR is evolving towards more flexible beam steering, higher resolution, and larger scene coverage, emphasizing the significance of accurately acquiring high-resolution information.
According to different imaging principles, SAR imaging methods can be classified into two categories: frequency-domain algorithms [2][3][4] and time-domain algorithms. In frequency-domain algorithms, the coupling of range and azimuth leads to a limited mapping range and may even prevent image focusing. Additionally, the requirements for ideal array geometries (straight trajectories and uniform sampling along trajectories) in frequency-domain algorithms are challenging to meet in practical applications. Time-domain algorithms, on the other hand, do not rely on as many assumptions and approximations as frequency-domain algorithms. They can accurately eliminate the coupling of range and azimuth, enabling precise image focusing and wide-area mapping [5,6].
The back-projection (BP) algorithm [7] is a precise time-domain imaging method; however, its high computational complexity limits its application in large-scale scenarios. Yegulalp introduced a fast back-projection (FBP) algorithm [8] with a two-level algorithmic structure to achieve lower computational complexity. The FBP algorithm reduces the computational complexity from O(N^3) to O(N^2.5). Ulander proposed a fast factorized back-projection (FFBP) algorithm [9] and a block-FFBP algorithm, simplifying the coordinate transformation between polar and Cartesian coordinates. FFBP and its various derivatives enhance imaging resolution and computational efficiency, finding wide application in spotlight SAR imaging [10][11][12][13][14]. However, when directly applied to stripmap SAR processing, FFBP faces challenges of unclear integration ranges and computational redundancy [15,16].
Li presented a novel method for stripmap SAR imaging based on fast factorized back-projection [17]. Approached from the perspectives of integral aperture and angular domain wavenumber bandwidth, this method describes an overlapping image approach suitable for stripmap SAR processing, achieving the extension of FFBP from spotlight to stripmap mode. Xu developed a new continuous imaging framework based on the FFBP algorithm [18]. This framework divides the echo data into multiple full-aperture data blocks, independently processing each block with FFBP, thereby reducing redundant BP integration operations and improving computational efficiency. However, the aforementioned FFBP-based stripmap SAR improvement algorithms operate only in the azimuth direction, neglecting operations in the range direction. Dungan proposed the digital spotlight (DS) algorithm [19], which is capable of extracting the SAR phase history for specific regions. The obtained phase history can be utilized for image reconstruction [20][21][22].
This paper introduces an FFBP algorithm for stripmap SAR imaging based on range block division. Combining digital spotlight technology, the full-aperture echo data are partitioned into several sub-blocks in the range direction, each containing fewer samples compared to the original data. Subsequently, each full-aperture sub-block undergoes independent FFBP processing. Through parallel operations, the number of back-projection computations is significantly reduced, further enhancing computational efficiency.
In order to achieve the efficient operation of the FFBP algorithm in stripmap mode, Section 2.1 reviews the basic principles of the FFBP algorithm and describes the problems encountered when FFBP is applied directly to stripmap SAR processing, namely an unclear integration aperture and computational redundancy. Sections 2.2 and 2.3 propose corresponding solutions to these problems, i.e., integral aperture determination and full-aperture data block processing; their principles are analyzed and the computational loads are calculated, showing that integral aperture determination ensures that imaging quality is unaffected and that processing a single complete aperture minimizes the computational load. The digital spotlight technique described in Section 2.4 further improves imaging efficiency through range block operation. Section 3.1 compares the point target imaging performance and imaging time of four different algorithms through point target simulation experiments, verifying the efficiency of the proposed algorithm in stripmap SAR point target simulation imaging. Section 3.2 processes the measured data of a vehicle-mounted multi-channel ice radar provided by the University of Kansas Center for Remote Sensing of Ice Sheets and further verifies the feasibility and efficiency of the proposed fast factorized back-projection algorithm based on range block division by comparing the imaging times of the different algorithms.

FFBP Algorithm Overview
With the aim of achieving efficient computation of the FFBP algorithm in stripmap mode, this paper first reviews the fundamental principles of the FFBP algorithm and describes the challenges encountered when directly applying FFBP to stripmap SAR imaging processing. Subsequently, corresponding solutions are proposed for these challenges, and the principles are analyzed while computational requirements are calculated. The introduction of integral aperture determination ensures imaging quality remains unaffected, and processing a single full aperture minimizes computational demands. Further enhancement in computational efficiency is achieved through range block division processing. Finally, the feasibility and effectiveness of the algorithm are validated through simulation experiments and measured data.
The processing stages of the FFBP algorithm include several phases. Firstly, the entire synthetic aperture is divided into multiple shorter sub-apertures using a designated decomposition factor (in this paper, a factor of 2 is chosen). The range-compressed data corresponding to each sub-aperture are then back-projected onto a local polar coordinate grid with its aperture center as the origin, resulting in a sub-image, thereby obtaining "coarse" angular domain resolution sub-images. Subsequently, the sub-images from the previous stage are continuously fused into the sub-images of the current stage. As the recursive fusion progresses, the length of sub-apertures increases, the number of sub-apertures decreases, and the angular resolution of sub-images continuously improves. Finally, the polar coordinate images are transformed into the Cartesian coordinate system, yielding a full-space resolution SAR image.
During the imaging process, the radar system operates in squint-looking stripmap SAR mode with a squint angle of θ_sq. The radar platform moves along the positive x-axis with a velocity v. For a point target in the imaging scene with a spatial position vector a at time t_m, the instantaneous slant range R from the Antenna Phase Center (APC) to the pixel is considered. The signal after range compression is given by

s(τ, x) = ω_az(a; x)·sinc[B(τ − 2R/c)]·exp(−j·k_rc·R)    (1)

where τ represents the fast time, ω_az(a; x) is the antenna pattern, a function of the radar and target positions, B denotes the bandwidth of the transmitted signal, c is the speed of light, and k_rc is the range wavenumber center, with a value of 4π/λ_c, where λ_c is the wavelength corresponding to the center frequency.
Applying the FFBP algorithm for processing, assuming the synthetic aperture length is L and the aperture decomposition is divided into K stages, the FFBP process in a single synthetic aperture includes K recursive operations, with a decomposition factor of 2 for each stage. Therefore, L = 2^K. Establishing a butterfly computation structure, the number of sub-apertures partitioned in the initial stage is N^(1) = 2^(K−1), and in the k-th stage, the number of sub-apertures is N^(k) = 2^(K−k), each with a length of l^(k) = L/N^(k). The corresponding sub-aperture center azimuth coordinates are denoted as x_s^(k), where the subscript "s" represents the s-th sub-aperture. Each sub-image is reconstructed in a local polar coordinate system with its sub-aperture center as the origin, and the sub-images are located in different polar coordinate systems. Let θ_s^(k) represent the angular domain partition of the k-th stage and the s-th polar coordinate grid.
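As a concrete illustration of this butterfly structure, the short Python sketch below (illustrative only, not part of the paper) enumerates the number of sub-apertures N^(k) = 2^(K−k) and their lengths l^(k) = L/N^(k) per stage for a decomposition factor of 2.

```python
def ffbp_stages(K):
    """Sub-aperture bookkeeping for FFBP with decomposition factor 2.

    Stage k of K has N(k) = 2**(K - k) sub-apertures, each of length
    l(k) = L / N(k) = 2**k samples, where L = 2**K.
    """
    L = 2 ** K
    stages = []
    for k in range(1, K + 1):
        n_sub = 2 ** (K - k)   # number of sub-apertures halves per stage
        l_sub = L // n_sub     # sub-aperture length doubles per stage
        stages.append((k, n_sub, l_sub))
    return stages

# L = 16 samples: 8 sub-apertures of 2 samples, ..., 1 aperture of 16.
for k, n_sub, l_sub in ffbp_stages(K=4):
    print(f"stage {k}: {n_sub} sub-apertures of length {l_sub}")
```

At every stage the product N^(k)·l^(k) equals the full aperture length L, which is what makes the per-stage cost of the butterfly constant.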
In the initial stage, shorter sub-apertures result in a lower angular domain sampling rate for the polar coordinate grid. This allows obtaining low-resolution sub-images in the angular domain with fewer sampling points, eliminating the need to back-project data corresponding to each aperture position to every pixel in the imaging grid. Consequently, this enhances the computational efficiency of the algorithm. As illustrated in Figure 1, for the s-th sub-aperture, a local polar coordinate system (r, θ_s^(1)) is established with the sub-aperture center as the origin. The sub-image can then be expressed as

I_s^(1)(r, θ_s^(1)) = ∫ s(2R/c, x_s^(1) + δ) exp(j·k_rc·R) dδ    (2)

where R = R(r, θ_s^(1); δ) represents the instantaneous slant range from the aperture position x_s^(1) + δ to the pixel (r, θ_s^(1)).
Subsequently, the current stage's sub-images are obtained by coherent summation of the sub-images from the previous stage, that is

I_s^(k)(r, θ_s^(k)) = I_(2s−1)^(k−1)(r, θ_s^(k)) ⨿ I_(2s)^(k−1)(r, θ_s^(k))    (3)

where the symbol "⨿" denotes coherent accumulation. Coherent accumulation uses the phase relationship between the received pulses to add multiple coherent signals as complex data, so that the signal amplitudes accumulate while only the noise power accumulates, thereby improving the signal-to-noise ratio. Coherent accumulation allows the energy of all radar echoes to be added directly.
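The signal-to-noise benefit of coherent accumulation can be illustrated numerically; the sketch below (an illustrative Python example, not from the paper) shows that accumulating n unit-amplitude pulses with a common phase multiplies the SNR by n, because the signal power grows as n^2 while the noise power grows only as n.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256                      # number of coherently accumulated pulses
sigma = 1.0                  # per-pulse complex noise standard deviation

s = np.exp(1j * 0.7)         # unit-amplitude echo with a common phase
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * sigma / np.sqrt(2)

acc = np.sum(s + noise)      # coherent (complex) accumulation of all pulses

snr_single = abs(s) ** 2 / sigma ** 2          # per-pulse SNR
snr_acc = abs(n * s) ** 2 / (n * sigma ** 2)   # SNR after accumulation
print(snr_acc / snr_single)  # coherent gain equals the pulse count n
```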
The differences between the local polar coordinate systems mean that the sub-images I_(2s−1)^(k−1) and I_(2s)^(k−1) must first be projected onto the new coordinate system (r, θ_s^(k)) according to the geometric relationship between the local polar coordinate systems. The specific implementation is a two-dimensional interpolation in range and angle. As the length of the sub-apertures increases, the number of sub-apertures decreases, and the angular domain spacing of the sub-aperture images is continuously refined. After completing the processing of the K-th stage, where the length of the sub-apertures equals the size of the entire aperture, the local polar coordinate systems are equivalent to the global polar coordinate system.
Finally, the polar coordinate system image I(r, θ) is transformed back to the Cartesian coordinate system, yielding a full-space resolution image I(U, V).
The aperture decomposition establishes an algorithmic structure similar to a butterfly operation, providing the foundation for the fast implementation of the FFBP algorithm.
Beamforming is achieved at the signal processing level through the coherent accumulation of signals between sub-images, while recursive fusion, at the image fusion level, accomplishes the formation of angular domain resolution from low to high. Next, the FFBP algorithm will be applied to stripmap SAR processing.

Integral Aperture Determination
In spotlight SAR, the imaging scene is continuously covered by the beam throughout the entire synthetic aperture time. All the pixels to be reconstructed correspond to the same integration aperture (i.e., the entire aperture), and each aperture position contributes to the formation of pixels. In stripmap SAR, targets in the imaging scene are not continuously illuminated by the antenna beam during the synthetic aperture time. During the back-projection integration, different pixels correspond to different integration apertures. If range-compressed data are directly back-projected onto the entire imaging grid for integration, it not only causes incorrect accumulation of energy for pixels beyond the beam width, affecting imaging quality, but also increases computational complexity.
For the FFBP algorithm, the quality of sub-aperture images in the aperture segmentation stage directly determines the accuracy of subsequent stages in image recursive fusion.Therefore, it is crucial to introduce integral aperture determination during the sub-aperture back-projection integration process in the aperture segmentation stage.
Assuming the polar coordinates of a point P in the imaging scene are (r, θ_s^(1)) and the aperture position is x_s^(1) + δ, the direction from the antenna to point P is denoted as ϕ. The antenna beam can illuminate the point when the absolute value of the angle between ϕ and the line-of-sight direction θ_sq is within θ_BW/2, i.e.,

|ϕ − θ_sq| ≤ θ_BW/2    (4)

Defining the δ that satisfies Equation (4) as the integral aperture for point P, denoted δ_IA, and once the integral apertures for each pixel on the imaging grid are known, the s-th sub-aperture image can be represented as

I_s^(1)(r, θ_s^(1)) = ∫_(δ∈δ_IA) s(2R/c, x_s^(1) + δ) exp(j·k_rc·R) dδ    (5)

where R = R(r, θ_s^(1); δ) is the instantaneous slant range from the aperture position x_s^(1) + δ to the pixel point (r, θ_s^(1)). After introducing integral aperture determination, the influence of the antenna pattern can be neglected. Through a sub-aperture back-projection integration similar to that in spotlight SAR, the integral apertures for all pixel points can be accurately determined. This enables the extension of the FFBP algorithm from spotlight SAR to stripmap SAR.
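The beam-coverage test of Equation (4) can be sketched as follows; the geometry (a broadside flight path along x, with the look direction measured toward a pixel at cross-track distance y) and all variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def integral_aperture(x_positions, pixel_xy, theta_sq, theta_bw):
    """Return the aperture positions whose beam covers the pixel.

    Implements the test of Equation (4): the pixel is illuminated when
    |phi - theta_sq| <= theta_bw / 2, with phi the direction from the
    antenna to the pixel (angles in radians).
    """
    px, py = pixel_xy
    x = np.asarray(x_positions)
    phi = np.arctan2(px - x, py)           # angle measured from broadside
    mask = np.abs(phi - theta_sq) <= theta_bw / 2.0
    return x[mask]

# Broadside stripmap example: 10-degree beam, pixel 100 m off-track.
aperture = np.linspace(-50.0, 50.0, 101)   # 1 m spacing along x
seen = integral_aperture(aperture, pixel_xy=(0.0, 100.0),
                         theta_sq=0.0, theta_bw=np.deg2rad(10.0))
print(seen.min(), seen.max())   # only a short stretch of track contributes
```

Restricting the back-projection integral to this subset of positions is what prevents the incorrect energy accumulation described above.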

The Processing of Full-Aperture Data Blocks
Assuming the number of samples in the range direction of the imaging scene is M, the number in the azimuth direction is N, and a single synthetic aperture contains L samples. The length of a single aperture is L_a, and the maximum observable range in the azimuth direction is 2L_a, as illustrated in Figure 2. It follows that the full resolution in the azimuth direction is 2L_a/N. The total number of BP imaging operations within a complete aperture is given by

C_BP = M·N·L    (6)

The FFBP process within a single full aperture involves K recursive operations, with a decomposition factor of 2 for each stage, so that L = 2^K, assuming the range resolution of the image scene remains unchanged for each stage. In the first stage, the full aperture is divided into L/2 sub-apertures, each containing 2 azimuth samples, and the echo data from each sub-aperture are used to form an image with coarse azimuth resolution. Each of the L/2 sub-images has M range samples and 2N/L angular samples, and each pixel accumulates 2 echo contributions, so the number of BP operations in the first stage is given by

C^(1) = (L/2)·M·(2N/L)·2 = 2MN    (7)

In the second stage, every two coarse images are merged together, resulting in a total of L/4 images with coarse azimuth resolution. The number of BP operations in the second stage is given by

C^(2) = (L/4)·M·(4N/L)·2 = 2MN    (8)

After the second-stage processing, the azimuthal sampling interval becomes half of the original, and the azimuthal resolution increases to twice the original. Following this pattern, the number of BP operations in the K-th stage is given by

C^(K) = 1·M·N·2 = 2MN    (9)

After K rounds of merging, the full-resolution image for a single full aperture can be obtained. The total number of BP operations for the K stages is

C_FFBP = 2MNK = 2MN·log2(L)    (10)

Comparing Equation (6) with Equation (10), we have

C_BP/C_FFBP = L/(2·log2(L))    (11)

Compared to direct BP, the computational speed of the FFBP algorithm is improved by a factor of L/(2log2(L)).
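The operation counts of Equations (6) and (10) can be checked numerically; the sketch below uses an illustrative scene size (the numbers are assumptions for demonstration only).

```python
import math

def bp_ops(M, N, L):
    """Direct back-projection count of Equation (6): every one of the
    M*N pixels integrates over all L aperture positions."""
    return M * N * L

def ffbp_ops(M, N, L):
    """FFBP count of Equation (10): K = log2(L) stages of 2MN each."""
    K = int(math.log2(L))
    return 2 * M * N * K

# Illustrative scene: 1024 x 1024 pixels, 1024-sample aperture.
M, N, L = 1024, 1024, 1024
speedup = bp_ops(M, N, L) / ffbp_ops(M, N, L)
print(speedup)   # L / (2 log2 L) = 1024 / 20 = 51.2
```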
Next, the FFBP algorithm is applied to the continuous imaging of two full apertures. As shown in Figure 3, the two full apertures contain 2L samples with a length of 2L_a and a maximum observable distance in the azimuth direction of 3L_a.
For 2L = 2^(K+1), this process includes K + 1 stages. Since the observable scene now spans 3L_a, i.e., M·(3N/2) pixels at full resolution, the number of operations for each stage is as follows. First stage:

C^(1) = (2L/2)·M·(3N/L)·2 = 6MN    (12)

Second stage:

C^(2) = (2L/4)·M·(6N/L)·2 = 6MN    (13)

Proceeding in this manner, the number of operations for the K-th stage is

C^(K) = 2·M·(3N/2)·2 = 6MN    (14)

At the K-th stage, the azimuth resolution of the image has already reached the optimum, and in the (K + 1)-th stage, the resolution remains unchanged. Therefore, the number of BP operations for the (K + 1)-th stage is

C^(K+1) = 1·M·(3N/2)·2 = 3MN    (15)

After K + 1 merging operations, the full-resolution images of the two full apertures can be obtained. The total number of FFBP operations is given by

C_2L = 6MNK + 3MN = 3MN(2K + 1)    (16)

The comparison between Equation (16) and Equation (10) reveals that

C_2L/C_FFBP = 3(2K + 1)/(2K) ≈ 3    (17)

The aperture length has doubled, but the computational load has increased to approximately three times that of processing a single aperture. The analysis reveals that the left full aperture has no effect on the rightmost imaging area, and similarly, the right full aperture has no effect on the leftmost imaging area. However, both have undergone integration operations, leading to redundancy.
Continue using the FFBP algorithm for continuous imaging of 3, 4, ..., n full apertures. As shown in Figure 4, n full apertures contain nL samples with a length of nL_a, and the maximum observable distance in the azimuthal direction becomes (n + 1)L_a.
Assuming nL = n·2^K ≤ 2^(K+x), such that the recursive iterations meet the imaging requirements, this process includes K + x stages. The observable scene now spans (n + 1)L_a, i.e., M·(n + 1)N/2 pixels at full resolution, and the number of operations in each stage is as follows. First stage:

C^(1) = (nL/2)·M·((n + 1)N/L)·2 = n(n + 1)MN    (18)

Second stage:

C^(2) = (nL/4)·M·(2(n + 1)N/L)·2 = n(n + 1)MN    (19)

Continuing in this manner, the number of computations in the K-th stage is

C^(K) = n·M·((n + 1)N/2)·2 = n(n + 1)MN    (20)

In the K-th stage, the azimuth resolution of the image has reached its optimum, and in the (K + 1)-th, (K + 2)-th, ..., (K + x)-th stages, the azimuth resolution will not increase further. The number of BP operations in the (K + 1)-th stage is

C^(K+1) = n(n + 1)MN/2    (21)

The number of BP operations in the (K + 2)-th stage is

C^(K+2) = n(n + 1)MN/4    (22)

Continuing in this manner, the number of BP operations in the (K + x)-th stage is

C^(K+x) = n(n + 1)MN/2^x    (23)

After K + x rounds of merging processing, n full-aperture full-resolution images can be obtained. The total number of FFBP operations is

C_nL = n(n + 1)MN·K + n(n + 1)MN·(1 − 2^(−x)) ≈ n(n + 1)MN(K + 1)    (24)

Comparing Equation (24) with Equation (10), we can observe that

C_nL/C_FFBP ≈ n(n + 1)(K + 1)/(2K) > n    (25)

The aperture length has increased by a factor of n, but the computational workload has increased by a factor of more than n compared to processing a single aperture. The increase in computational workload surpasses the increase in aperture length. As the aperture length grows, the redundant calculations in the integration process become more significant, leading to a sharp increase in computational workload.
To address this issue, we can restrict the FFBP algorithm to process only one complete synthetic aperture at a time, i.e., a full-aperture data block, and then accumulate the results of processing all apertures. Pixels in the current synthetic aperture image whose integration aperture is shorter than the full aperture can be compensated for by adjacent images. This ensures that the recovered integration aperture equals the true integration aperture, avoiding the underutilization of information. Processing full-aperture data blocks also minimizes redundant calculations, improving computational efficiency and allowing efficient image reconstruction by fully utilizing all available information.
The total number of computations for processing n full apertures as full-aperture data blocks is

C_block = n·C_FFBP = 2nMN·log2(L)    (26)

Comparing Equation (26) with Equation (24), we can observe that

C_nL/C_block ≈ n(n + 1)MN(K + 1)/(2nMNK) ≈ (n + 1)/2    (27)

From Equation (27), it can be inferred that the processing efficiency for the full-aperture data block is approximately (n + 1)/2 times higher compared to directly using the FFBP algorithm for the continuous imaging of n full apertures.
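The efficiency gain of block processing can be illustrated with the approximate per-stage counts used above; the sketch below (an illustrative model, not the paper's exact expressions) shows the ratio of the two costs approaching (n + 1)/2.

```python
import math

def continuous_ffbp_ops(M, N, L, n):
    """Approximate cost of FFBP over n contiguous full apertures:
    about n(n+1)MN(K+1) operations (illustrative model)."""
    K = int(math.log2(L))
    return n * (n + 1) * M * N * (K + 1)

def block_ffbp_ops(M, N, L, n):
    """Cost of processing the n apertures as independent full-aperture
    data blocks: n times the single-aperture cost 2MNK."""
    K = int(math.log2(L))
    return n * 2 * M * N * K

M, N, L, n = 512, 512, 256, 8
ratio = continuous_ffbp_ops(M, N, L, n) / block_ffbp_ops(M, N, L, n)
print(ratio, (n + 1) / 2)   # the ratio is close to (n + 1) / 2
```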

Range Block Division
The digital spotlight algorithm can extract the SAR phase history data corresponding to a region of interest (ROI), and the extracted phase history data can be used for target recognition or other subsequent processing. Based on the Nyquist sampling theorem, the sampling rate of the full phase history data is usually chosen so that the whole image is sampled without distortion. If the desired ROI is much smaller than the overall image, the Nyquist sampling requirement is greatly relaxed, i.e., the phase history of the ROI can be unambiguously sampled with far fewer samples than are required to generate the original SAR image, thus improving imaging efficiency.
A schematic diagram of the digital spotlight is shown in Figure 5. It is assumed that the existing phase history data can meet the needs of alias-free imaging in the green dotted circle area in Figure 5a. The center position of the scene is O, the position of the radar antenna is Q_C, and the frequency step and pulse repetition rate of the radar phase history meet the alias-free imaging requirements in the area with a radius of R_0. The goal of the digital spotlight algorithm is to reduce the size of the SAR phase history dataset, including the number of frequency samples and the number of pulses in azimuth, while ensuring alias-free imaging within the ROI around the new center C.
The first step of the digital spotlight algorithm is to determine the location and size of the ROI. The second step is to re-determine the phase history of the new center of the ROI, which is the SAR reference point of the new sub-image, and the third step is to "extract" the phase history data in the fast-time dimension. After the first three steps, a small distortion-free region is obtained, approximated by the red bold solid contour in Figure 5b; the alias-free range extent is approximately W_r and the alias-free azimuth extent is W_x. The fourth step is to achieve uniform azimuth or angle spacing by phase history interpolation. The final step is to "extract" the phase history data in the azimuth dimension; eventually, the new phase history contains only the samples needed to generate an alias-free image of the ROI, as shown in the bold red solid contour in Figure 5c.
Figure 6 provides a visualization of the phase history extraction in the fast- and slow-time dimensions. From Figure 6a, part of the phase history of frequencies and pulses occupies part of the annulus. Figure 6b shows the extraction performed with a factor of 2 in the fast-time dimension, and Figure 6c shows the extraction with a factor of 2 in the slow-time dimension.

Assuming that the center of the original image is [0, 0, 0], Figure 7a shows a flat image with an east-west side length of V_x and a north-south side length of V_y. A circle with diameter ||V|| on the ground (imaging plane) delineates the imaging area. As shown in Figure 7b, it is assumed that the alias-free distance W_r is equal to ||V||. The imaging area is constrained by the maximum and minimum distances R_max and R_min, which lie within the alias-free distance range R'_max and R'_min.

The frequency sampling vector {f_k} has a minimum value of f_1, a maximum value of f_K, and a step size of ∆f [23]. In the full phase history {P_k}, each complex phase history sample P_k is associated with f_k, and P_k is the phase history sample within the range of the newly generated digital spotlight with center C. The difference between the distance from the radar antenna to the origin and the distance from the radar antenna to the center of the digital spotlight is given by

∆R = R_0 − R_c    (28)

Equation (28) is used to correct the K frequencies and N_p pulses, and the result is a phase history that produces a C-centric ROI SAR image.
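A minimal sketch of the recentering step, using the per-pulse range difference of Equation (28), might look as follows; the sign convention of the phase correction and all array shapes and values are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

c = 299_792_458.0                       # speed of light, m/s

def recenter_phase_history(P, f, R0, Rc):
    """Shift the phase-history reference point from the scene origin O
    to the new ROI center C (a sketch; sign convention assumed).

    P  : (num_pulses, K) complex phase history samples
    f  : (K,) frequency samples f_k
    R0 : (num_pulses,) antenna-to-origin distances
    Rc : (num_pulses,) antenna-to-C distances
    """
    dR = (R0 - Rc)[:, None]             # Equation (28), per pulse
    return P * np.exp(1j * 4 * np.pi * f[None, :] * dR / c)

# Tiny example: 3 pulses, 4 frequency samples.
f = np.linspace(9.0e9, 9.1e9, 4)
P = np.ones((3, 4), dtype=complex)
R0 = np.array([1000.0, 1001.0, 1002.0])
Rc = np.array([1005.0, 1005.5, 1006.0])
P2 = recenter_phase_history(P, f, R0, Rc)
print(np.allclose(np.abs(P2), 1.0))     # only the phase is changed
```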
The ROI has a smaller alias-free extent than the original whole scene, which can be met with a new frequency sampling interval ∆f′:

∆f′ = c/(2W_r)    (29)

where the alias-free range extent W_r is designed to be the same as the spotlight diameter ||V||.
Each pulse in the phase history can be decimated by a factor N_DS to reduce the number of fast-time or frequency samples. Each pulse is decimated independently, making the decimation operation suitable for parallel processing.
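The per-pulse fast-time decimation can be sketched with simple array slicing, as below (illustrative; the anti-alias filtering applied before decimation in practice is omitted for brevity, and the array sizes are assumptions).

```python
import numpy as np

def decimate_fast_time(P, n_ds):
    """Keep every n_ds-th frequency sample of each pulse.

    Each row (pulse) is decimated independently, so the operation is
    trivially parallel across pulses.
    """
    return P[:, ::n_ds]

# 8 pulses x 1024 frequency samples, decimated by N_DS = 4.
P = np.arange(8 * 1024).reshape(8, 1024).astype(complex)
P_small = decimate_fast_time(P, 4)
print(P_small.shape)    # (8, 256)
```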
The phase history data obtained after the above processing contain only the samples required to image the new scene in the range dimension; the samples corresponding to other range extents have been removed. All original samples remain unchanged in azimuth.
The radar pulse rate in SAR is usually specified by the pulse repetition frequency (PRF). The PRF, radar speed, and path together determine a pulses-per-radian (PPR) rate relative to the azimuth of the spotlight center. The PPR is related to the maximum alias-free cross-track extent through the azimuth spacing requirement, where k = ωmax/c = 2πfmax/c is the maximum wavenumber, Ø is the elevation angle (in radians) from the spotlight center to the radar antenna, and R0 is the radius of the spotlight area. For a digital spotlight of radius R0, the desired alias-free cross-track length is Wx = 2R0, which fixes the PPR requirement; Ø varies with the elevation angle between the digital spotlight center and the radar antenna position QC along the radar path. To simplify, the PPR of a digital spotlight can be designed as a constant tied to the minimum elevation angle Ømin in the data block.
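The spacing bound above can be sketched numerically. This is a hedged illustration assuming the usual cross-range sampling bound Δθ ≤ π/(kmax · Wx · cos Ø), evaluated at the minimum elevation angle as the text suggests; the function name is ours.

```python
import math

C = 299_792_458.0  # speed of light, m/s

# Assumed bound: delta_theta <= pi / (k_max * W_x * cos(phi)),
# with k_max = 2*pi*f_max / c and phi the minimum elevation angle.
def max_azimuth_spacing(f_max_hz, w_x_m, phi_min_rad):
    k_max = 2.0 * math.pi * f_max_hz / C
    return math.pi / (k_max * w_x_m * math.cos(phi_min_rad))
```

Note that with cos Ø = 1 this reduces to c/(2 fmax Wx), i.e., λmin/(2 Wx).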
The radar antenna position of each pulse in the dataset is stored in meters in (east, north, up), i.e., (x, y, z), Cartesian coordinates. After converting these coordinates to spherical coordinates (azimuth, elevation, range) relative to the focus center, the azimuth samples may have a variable PPR as the relative motion path and velocity change, so the azimuth spacing Δθ is not constant. Although the azimuth spacing can meet the requirement of Equation (31), the filtering algorithm requires a constant sample spacing, so the phase history is interpolated to a uniform sample interval based on the smallest Δθ of the data blocks being processed. The purpose of "extracting" the phase history data in the slow-time dimension is to reduce the dataset size to match the smaller azimuth extent Wx < 2R0, so that Δθ′ is greater than Δθ; this determines the required decimation factor. After the decimation factor is selected, the phase history data block is decimated in the slow-time dimension. Finally, the SAR phase history corresponding to the ROI is extracted, and the extracted phase history is used for target identification or other subsequent processing.
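The slow-time handling described above can be sketched in two steps: resampling the pulse angles to the block's smallest spacing, then choosing an integer decimation factor for the coarser spacing Δθ′. The function names and the linear grid model are illustrative assumptions, not the paper's implementation.

```python
# Sketch of slow-time resampling and decimation-factor selection.
def uniform_angles(theta):
    """Uniform azimuth grid at the block's smallest pulse-to-pulse spacing."""
    d_min = min(b - a for a, b in zip(theta, theta[1:]))
    n = round((theta[-1] - theta[0]) / d_min) + 1
    return [theta[0] + i * d_min for i in range(n)]

def slow_time_decimation_factor(d_theta, d_theta_required):
    """Largest integer step that still meets the coarser spacing requirement."""
    return max(1, int(d_theta_required / d_theta))
```

In practice the phase history itself would be interpolated onto the uniform grid (e.g., by band-limited interpolation) before decimation; only the grid bookkeeping is shown here.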
The initial full-aperture phase history can be used to reconstruct alias-free SAR images in the area around the origin. The proposed fast factorized back-projection algorithm for stripmap SAR imaging based on range block division takes the first three steps of the digital spotlight algorithm as the first stage of a "two-stage back-projection" image reconstruction process. First, the location and size of the ROI are determined. The phase history center of the ROI is then determined, which serves as the SAR reference point for the new image, and the third step is to "extract" the phase history data in the fast-time dimension. The main goal of digital spotlight (DS) technology is to decompose the phase history data into multiple parts along the range direction, forming smaller alias-free imaging areas around the new centers. Finally, the original full-aperture phase history is filtered into multiple "pseudo" phase histories [24], the imaging scene is segmented into multiple full-aperture sub-blocks along the range direction, the number of samples contained in each sub-block is greatly reduced, and the imaging efficiency is improved through parallel operation.
The aperture and image schematic are shown in Figure 2. Assume the imaging scene size is M·YG × N·YG, where M and N are the numbers of samples in the range and azimuth directions (M = N) and YG is the actual sampling interval.
Here, B is the bandwidth of the transmitted signal, Δf is the frequency step size, and c is the speed of light.
Figure 8 gives a schematic diagram of range block division imaging. There are N × N pixels in the original imaging area, and the center of the whole large scene is denoted as O(X0, Y0, Z0), where X0 is the range coordinate, Y0 the azimuth coordinate, and Z0 the height. The entire imaging area is divided into D subregions in the range direction by the digital spotlight method, and each subregion contains N × (N/D) pixels. The center O of the original large scene is taken as the reference point, and the center point of sub-scene (i, j) is Cj(hcj, vcj), where hcj is the range coordinate and vcj the azimuth coordinate; i and j are the column and row indexes of the matrix, respectively, and i = 1, j ∈ [1, D] indicates that the azimuth dimension is not divided while the range dimension is divided into D sub-blocks. In Figure 8, D = 4: the whole imaging scene is divided into four sub-scenes in the range direction, denoted Sub-scene1, Sub-scene2, Sub-scene3, and Sub-scene4, with corresponding sub-scene center points C1, C2, C3, and C4. Assuming that the imaging scene is on the ground, Z0 = 0 and the center of the large scene is O(X0, Y0, 0).
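The placement of the D sub-scene centers along range can be sketched as follows. This is a hedged illustration assuming evenly sized blocks centered about O; the function name and the sign convention are ours.

```python
# Sketch: range coordinates of the D sub-scene centers C_j relative to the
# large-scene center O, for an N x N pixel scene with sample spacing y_g (m).
# Azimuth (i = 1) is not divided; only the range dimension is split.
def subscene_range_centers(n_pixels, d_blocks, y_g):
    extent = n_pixels * y_g          # full range extent of the scene
    block = extent / d_blocks        # range extent of one sub-block
    return [-extent / 2.0 + (j + 0.5) * block for j in range(d_blocks)]
```

With D = 4 as in Figure 8, the four centers sit at the midpoints of four equal range strips.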
The distance of the sub-scene center Cj(hcj, vcj) relative to the center O of the original large imaging scene is computed first. Taking O as the reference point, the position of the radar antenna at a given time is denoted QC(X, Y, Z), where X is the range coordinate, Y the azimuth coordinate, and Z the height; since the imaging scene is on the ground, Z0 = 0 and the antenna position is QC(X, Y, 0). As the radar platform moves, the antenna position advances continuously in the azimuth direction, marked as a continuously moving red dot in Figure 8. For each pulse return of the phase history, the antenna position must be updated to reflect the position of the new sub-scene center: the new antenna position relative to the new scene center Cj is (X − hcj, Y − vcj, Z). In the algorithm flow chart, ① represents range block division, ② represents sub-aperture judgment and full-aperture data block processing, ③ represents FFBP algorithm imaging, and ④ represents sub-image overlay.
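The per-pulse recentering described above amounts to a simple translation. A minimal sketch follows; the (range, azimuth, height) tuple layout is an assumption for illustration.

```python
# Sketch: antenna position relative to the new sub-scene center
# C_j = (h_cj, v_cj, 0), obtained by translating its position relative to O.
def recenter_antenna(q_c, c_j):
    return (q_c[0] - c_j[0], q_c[1] - c_j[1], q_c[2] - c_j[2])
```

This update is applied once per pulse, for every sub-scene, before the phase history is filtered for that block.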
After the above steps, the integral aperture determination extends the FFBP algorithm from spotlight mode to stripmap mode, full-aperture data block processing reduces redundant calculation, and range block division further reduces the number of back-projection operations in each sub-scene; finally, each sub-scene is imaged in parallel, improving the computational efficiency of the whole imaging process.
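The four-step flow (① range block division, ② full-aperture data block extraction, ③ per-block FFBP imaging, ④ sub-image overlay) can be sketched as a driver loop. This is a hedged skeleton: `extract_block` and `ffbp_image` are caller-supplied placeholders, not the paper's implementation, and the sub-images are assumed to be lists of range rows stitched in order.

```python
# Sketch of the overall pipeline; steps 1-3 are marked inline.
def image_scene(phase_history, centers, extract_block, ffbp_image):
    blocks = [extract_block(phase_history, c) for c in centers]   # steps 1-2
    sub_images = [ffbp_image(b) for b in blocks]                  # step 3: independent, parallelizable
    return [row for img in sub_images for row in img]             # step 4: stitch along range
```

Because the sub-blocks are independent, the step-3 loop is where the parallel speedup reported later in the paper comes from.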

Results
In this section, the results of the point target simulation and the experimental results on the raw data are presented and analyzed to evaluate the performance of the proposed algorithm. The algorithms for both the simulation and the raw-data experiments are implemented in MATLAB on a Windows 10 system.

Simulation Experiments
A set of vehicle-mounted stripmap SAR simulation parameters was designed for point target performance analysis, with the specific parameters shown in Table 1. To analyze the imaging in more detail, direct BP, FBP, FFBP, and the fast factorized back-projection algorithm based on range block division are used to simulate the nadir target in Figure 10. The coordinates of this point are (0, −rc). The results are shown in Figure 12, and the parameters measured from the results of each algorithm are shown in Table 2.
The azimuth resolution is computed as ρA = λR/(2L) (Equation (45)), where λ = c/fc is the radar wavelength, fc is the radio-frequency carrier frequency, R is the range to the target, and L is the synthetic aperture length. Assume that the bedrock interface inside the glacier lies at a depth of d = 2000 m and the synthetic aperture length is 200 m. With the parameters in Table 1 and Equation (45), the theoretical azimuth resolution is ρA = 5.27 m. From Figure 12b, the azimuth resolution obtained by measuring the 3 dB width is 5.95 m, with an azimuth peak sidelobe ratio of −13.8482 dB; from Figure 12d, the values are 6.03 m and −14.0577 dB; from Figure 12f, 5.99 m and −13.9722 dB; and from Figure 12h, 6.54 m and −13.9359 dB. The proposed algorithm therefore achieves essentially the same azimuth resolution as direct back-projection.
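As a quick numerical check of the resolution relation above: the wavelength used below (about 1.054 m) is back-solved from the stated ρA = 5.27 m at R = 2000 m and L = 200 m purely for illustration; the carrier frequency itself is not quoted in this excerpt.

```python
# Sketch of the classical azimuth-resolution relation rho_A = lambda*R/(2L).
# wavelength_m ~ 1.054 is a back-solved, illustrative value (assumption).
def azimuth_resolution(wavelength_m, range_m, aperture_m):
    return wavelength_m * range_m / (2.0 * aperture_m)
```

Plugging in the stated geometry reproduces the quoted 5.27 m figure.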

Conclusions
In this paper, we propose a fast factorized back-projection algorithm based on range block division for stripmap SAR imaging, combined with digital spotlight technology, which divides the full-aperture echo data into several sub-blocks in the range direction. Each sub-block contains far fewer samples than the original data; FFBP processing is then carried out on each full-aperture sub-block separately, which significantly reduces the number of back-projection operations and improves computational efficiency through parallel operation.
This paper reviews the basic principles of the FFBP algorithm, describes the problems of undefined integral apertures and computational redundancy encountered when the FFBP algorithm is applied directly to stripmap SAR processing, proposes corresponding solutions to these problems (integral aperture determination and full-aperture data block processing), and analyzes their principles. The computational load is derived, showing that integral aperture determination leaves the imaging quality unaffected while minimizing the computational cost of processing a single complete aperture.
Digital spotlight technology is then used to further improve imaging efficiency through the range block operation. A point target simulation compares the imaging performance and imaging time of four different algorithms, verifying the efficiency of the proposed algorithm for stripmap SAR point target imaging. Measured data from the vehicle-mounted multi-channel ice radar provided by the University of Kansas Ice Sheet Remote Sensing Center are processed, and comparing the imaging times of the different algorithms further verifies the feasibility and efficiency of the proposed fast factorized back-projection algorithm based on range block division. The algorithm offers a degree of flexibility in practical applications: it can be applied to data collected by vehicle-mounted ultra-wideband ice radar and, with further reasonable range division, can carry out imaging operations more efficiently.

Figure 1. The local polar coordinate grid of the k-th sub-aperture in the first stage.

Image recursive fusion requires establishing the mapping from the original local polar coordinate systems (r, θ^(k−1)_(2s−1)) and (r, θ^(k−1)_(2s)) of adjacent sub-apertures to the fused coordinate system; the azimuth coordinate of the radar's initial position is x^(1)_s.

Figure 3. Concept of two full apertures.

Figure 4. Concept of n full apertures.

Figure 5. Schematic diagram of a digital spotlight: (a) upon recentering; fast-time, or frequency, decimation (b) reduces the alias-free range to the desired digital spotlight; slow-time, or pulse-rate, decimation (c) reduces the alias-free cross-range as well.

Figure 6. Visualization of phase history extraction in the fast- and slow-time dimensions: (a) frequency decimation (b) reduces the alias-free range, and slow-time decimation (c) across frequency bins reduces the alias-free cross-range. The digital spotlight region is specified using a center offset C = [Cx, Cy, Cz] and a flat image area with a normal in the up (z) direction and edge lengths V = [Vx, Vy].

Figure 7. Flat and side views of the digital spotlight processing area: (a) from the side view; (b) the alias-free range Wr is designed to exceed the image plane diameter ||V||.

Figure 8. Schematic diagram of range block division.

Figure 9. The flow chart of the fast factorized back-projection algorithm based on range block division for stripmap SAR imaging.

Five target points are set, as shown in Figure 10; the coordinates of the target points from left to right (azimuth, range) are defined in terms of rc = 2000 m, the horizontal distance between the imaging center and the radar, with offsets rd = 50 m and ad = 100 m. The point target imaging results are shown in Figure 11.

Figure 10. Position of targets used in the simulation.

Figure 11. The point target imaging results using the fast factorized back-projection algorithm based on range block division.

Figure 12. Point target range and azimuth profiles: (a,b) the range and azimuth profiles processed by the direct back-projection algorithm; (c,d) by the FBP algorithm; (e,f) by the FFBP algorithm; (g,h) by the fast factorized back-projection algorithm based on range block division.

Figures 14 and 15 show SAR imaging results from two different survey lines. Both lines extend in an east-west direction, with line T1T1′ running from east to west and line T2T2′ running from west to east, separated by 500 m in the north-south direction. Figures 14a and 15a show the processing results of the FFBP algorithm; Figures 14b and 15b show those of the fast factorized back-projection algorithm based on range block division. The computation times for Figure 14a,b are 5742 s and 3588 s, respectively, while those for Figure 15a,b are 7509 s and 4446 s. Comparing these computation times shows that the proposed algorithm significantly reduces the amount of computation required for imaging ice radar data while achieving nearly the same imaging quality and azimuth resolution as the FFBP algorithm.
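The reported timings translate into the following speedups; this is a quick arithmetic check on the numbers quoted above, nothing more.

```python
# Speedup of the range-block algorithm over plain FFBP for the two lines.
timings_s = {"T1T1'": (5742, 3588), "T2T2'": (7509, 4446)}
speedups = {line: ffbp / blocked for line, (ffbp, blocked) in timings_s.items()}
# roughly 1.60x for T1T1' and 1.69x for T2T2'
```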

Figure 13. Location of tracks processed in this article.


Figure 14. Experimental results of two algorithms for survey line T1T1′: (a) FFBP; (b) the fast factorized back-projection algorithm based on range block division.


Figure 15. Experimental results of two algorithms for survey line T2T2′: (a) FFBP; (b) the fast factorized back-projection algorithm based on range block division.

Table 1. SAR parameters in simulation.

Table 2. Point target performance analysis.