A Framework for Distributed LEO SAR Air Moving Target 3D Imaging via Spectral Estimation

Abstract: This paper aims to perform imaging and detection of moving targets in a 3D scene for space-borne air moving target indication (AMTI). Specifically, we propose a feasible framework for distributed LEO space-borne SAR air moving target 3D imaging via spectral estimation. This framework contains four subsystems: distributed LEO satellite and radar modeling, moving target information processing, the baseline design framework, and spectrum estimation 3D imaging. First, we develop a relative motion model between the satellite platform and the 3D moving target for satellite and radar modeling. Over a very short time, the relative motion between the platform and the target is approximated as uniform motion. We then establish the space-borne distributed SAR moving target 3D imaging model based on the motion model. After that, we analyze the influencing factors, including the Doppler parameters, the three-dimensional velocity, acceleration, and baseline intervals, and further investigate the 3D imaging performance for the moving target. The moving target spectrum estimation 3D imaging finally obtains the 3D imaging results of the target, which preliminarily solves the imaging and resolution problems of slow air moving targets. Simulations are conducted to verify the effectiveness of the proposed distributed LEO space-borne SAR moving target 3D imaging framework.


Introduction
Synthetic Aperture Radar (SAR) provides two-dimensional images with high resolution. In addition, SAR has advantages rarely offered by optical sensors, such as working in all weather conditions and having high penetrability [1]. The increasing demand for 3D information in various application fields, such as terrain mapping and target recognition, motivates research beyond 2D high-resolution SAR. However, a moving target introduces azimuth offset, range migration, and defocusing issues. Additionally, a single-channel system cannot obtain 3D information on a moving target and thus cannot perform 3D high-resolution imaging of the moving target [2]. On the other hand, distributed SAR offers various viewing angles and multiple baselines to obtain multi-dimensional scattering information of the target. A typical distributed SAR system with multiple vertical baselines acquires 3D information and resolution capability in the height direction. Thus, we can provide a new solution for AMTI if the three-dimensional imaging of air-moving targets can be achieved. Moreover, moving target 3D imaging is of great significance for detecting and identifying airborne military targets and creates significant social and economic impacts on air traffic control. Overall, 3D imaging of moving targets lays a good foundation for subsequent object detection and recognition.
Currently, most research activities focus on imaging and detecting two-dimensional ground-moving targets [3]. For instance, Zhang et al. [4] proposed an effective clutter suppression and 2D moving target imaging approach for the geosynchronous-low earth orbit (GEO-LEO) bistatic multichannel SAR system. The authors also performed experiments on fast-moving targets to verify the SAR ground moving target indication (GMTI) capabilities. By combining the geometric modeling of the turning motion and the imaging geometry of space-borne SAR, Wen et al. [5] also proved the accuracy of the analysis of the turning motion imaging signatures. Furthermore, the authors demonstrated the accuracy and validity of their velocity estimation method. Zhang et al. [6] developed an azimuth spectrum reconstruction and imaging method for 2D moving targets in Geosynchronous space-borne-airborne bistatic multichannel SAR, and confirmed a significant performance gain for SAR-GMTI. Moreover, Duan et al. [7] developed a CNN STAP method to improve clutter suppression performance and computation efficiency. Their approach employs a deep learning scheme to predict the high-resolution angle-Doppler profile of the clutter in the GMTI task. Zhan et al. [8] analyzed spaceborne early warning radar performance for AMTI.
The traditional GMTI or AMTI along-track baseline is inappropriate for imaging and detecting weak or slow targets [9]. Therefore, researchers have proposed a distributed vertical baseline architecture, which can distinguish ground and air targets in the height direction. Our method's distributed vertical multi-baseline space-borne SAR takes full advantage of the discriminative ability along the height direction and overcomes the problem of poor detection of weak or slow targets. Therefore, this paper provides a new solution using the vertical multi-baseline distributed SAR to solve the weak and slow air-moving targets' 3D imaging and resolution problems.
This paper focuses on 3D imaging of air moving targets for distributed LEO space-borne SAR. At present, SAR 3D imaging is mainly applied to stationary targets [10], and very limited research has been conducted on the 3D imaging of moving targets. For example, TomoSAR [11] has been applied to reconstruct urban buildings, and Fabrizio [12] proposed a differential tomography framework incorporating the deformation velocity, which introduced the differential interferometric tomography concepts and allows a joint elevation-velocity resolution capability. Budillon et al. [13] provided reliable estimates of the temporal and thermal deformations of the detected scatterers (5D tomography). Although current methods exploit observations at different times for static targets, for moving targets such multi-temporal observations would introduce a large offset of the same pixel across the image sequence, making it impossible to register the sequential images. In practice, however, detecting moving targets is required in many applications, e.g., future air traffic control systems. Nevertheless, few studies exist on the 3D imaging of moving targets. For instance, considering geometric invariance, Ferrara et al. [14] proposed two moving target imaging algorithms, a greedy algorithm and a version of basis pursuit denoising, to solve the problem of reconstructing a 3D target image. However, in practice, the real target location is unknown. To reduce the system's complexity and cost, Sakamoto et al. [15] suggested a UWB radar imaging algorithm that estimates unknown two-dimensional target shapes and motions using only three antennas; its performance depends on the target's shape characteristics, and it has not been applied to distributed SAR systems. Wang et al. [16] proposed a Fractional Fourier Transform (FrFT) algorithm to achieve 3D velocity estimation for moving targets via geosynchronous bistatic SAR. Gui et al. [17] introduced a two-dimensional imaging response algorithm for a three-dimensional moving target for a single SAR using a back-projection imaging algorithm; however, this method is not suitable for distributed SAR. Liu et al. [18] proposed a distributed SAR moving target 3D imaging method to solve the nonuniform 3D configuration clutter suppression problem. Nevertheless, in this method, the baselines are not distributed along the vertical direction, and the factors affecting imaging performance are not discussed.
As explained above, traditional SAR moving target imaging focuses on two-dimensional ground scenes, and thus current research on 3D moving targets is unable to detect the weak and slow air-moving targets for AMTI. Meanwhile, difficulties remain, such as the azimuth offset caused by motion and the extra signal phase received by each array element, which result in image defocus. Hence, a framework for the 3D imaging of moving targets is urgently required. Therefore, we propose a feasible framework for distributed LEO space-borne SAR moving target 3D imaging via spectral estimation, which preliminarily solves the imaging and resolution problems of weak and slow air-moving targets. Specifically, this paper proposes a three-dimensional imaging model for moving targets, and further proposes a spatial spectrum estimation method for joint motion in distributed LEO SAR imaging. We then analyze the effects of various factors, such as the azimuth offset, the residual video phase (RVP), and different baseline intervals under various velocities. Experimental verification highlights the effectiveness of the proposed distributed LEO space-borne SAR moving target 3D imaging framework.
The main contributions of this paper are as follows: (1) We propose a feasible framework for air-moving target 3D imaging in distributed LEO space-borne SAR. This framework comprises a distributed LEO satellite and radar modeling, target information processing structure, moving target imaging baseline design, and moving target spectrum estimation for 3D imaging.
(2) We develop a 3D imaging model of moving targets for the space-borne distributed LEO SAR and investigate the influencing factors, including the Doppler parameters, threedimensional velocity and acceleration, RVP phase, and baseline interval on the performance of 3D imaging of moving targets.
(3) We obtain the 3D imaging result of the moving target using spatial spectrum estimation and compare them against the results of static scenes, different velocities and baseline intervals. The problem of imaging and resolution for slow air-moving targets is preliminarily solved.
The remainder of this paper is organized as follows. Part 2 presents the preliminaries and methods, establishes the SAR moving target 3D imaging model, and proposes the framework for distributed LEO space-borne SAR moving target 3D imaging. Additionally, this section provides the method design: based on the SAR moving target 3D imaging model, we design the spatial spectrum estimation for the distributed SAR scene. Part 3 presents the simulation results, and Part 4 discusses the findings. Finally, Part 5 summarizes and concludes this work.

Coordinate System
Earth-centered inertial frame (S i : O e − X i Y i Z i ): O e is the Earth's center, the axis O e X i points to the vernal equinox of the J2000.0, the axis O e Z i points to the Earth's north pole along the Earth's rotation axis, and the axis O e Y i is obtained according to the right-hand rule.
Earth-centered Earth-Fixed frame (S e : O e − X e Y e Z e ): O e is the center of the Earth, the axis O e X e points to the intersection of the prime meridian and the equator, the axis O e Z e points to the Earth's north pole along the Earth's rotation axis, and the axis O e Y e is obtained according to the right-hand rule.
Earth-centered orbit frame (S es : O e − X es Y es Z es ): O e is the center of the Earth, the axis O e X es coincides with the geocentric vector of the reference spacecraft, pointing from the geocentric to the spacecraft, the axis O e Y es is perpendicular to the axis O e X es in the orbital plane of the reference spacecraft pointing to the motion direction, and axis O e Z es is obtained according to the right-hand rule.
Local vertical local horizontal frame (S s : O si − X s Y s Z s ) (i = 1, 2 · · · n): O si is the center of mass of the spacecraft, the axis O si Z s points to the center of the Earth, the axis O si X s is along the velocity direction of spacecraft in the orbital plane, and the right-hand rule determines the axis O si Y s .
The body-centered frame is fixed to the spacecraft and is the reference coordinate system for defining the attitude angles, including the yaw angle, pitch angle, and roll angle. O si is the center of mass of the spacecraft, and the axes O si X f , O si Y f , and O si Z f align with the orthogonal inertial principal axes of the spacecraft. When the yaw angle, pitch angle, and roll angle are all zero, the axis O si X f points along the velocity direction, the axis O si Y f is the negative normal of the orbital plane, and the right-hand rule determines the axis O si Z f .
The Scene frame (S t : O t − X t Y t Z t ): O t is the scene center, the axis O t Z t points outward from the center of the Earth, the axis O t Y t is in the plane defined by the beam footprint velocity direction and is perpendicular to O t Z t , and the axis O t X t is obtained according to the right-hand rule. This coordinate system is attached to the surface of the Earth and rotates with the Earth's rotation.
The conversion relationship between the coordinate systems is as follows: The point target and the antenna position vector are converted to the scene coordinate system, where A s f , A es s , A i es , A e i , and A t e are the rotation matrices between the coordinate systems.
Suppose that the position vector of the antenna is given in the body-centered frame and converted, through the rotation matrices above, to the scene coordinate system, whose origin is expressed in the Earth-fixed coordinate system. Here, R e is the Earth's radius, h is the orbit height, and r = R e + h is the distance from the satellite's center of mass to the Earth's center. Let the scene coordinates of a point target be $\vec{R}_T^{\,t} = (x_0, y_0, z_0)$. The distance between the antenna phase center and the target is then the norm of the difference of the two position vectors in the scene frame [19]. In a very short synthetic aperture time t a , the motion of the satellite platform can be decomposed into a uniform acceleration linear motion along each coordinate axis [20]. For further details, see Appendix A.
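The chained coordinate conversions above can be sketched numerically. The sketch below uses hypothetical attitude and orbit angles (not values from the paper) and assumes elementary right-handed rotations for each step; the exact Euler sequences in the real system may differ.

```python
import numpy as np

def rot(axis, a):
    """Elementary right-handed rotation matrix about one coordinate axis."""
    c, s = np.cos(a), np.sin(a)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Hypothetical angles (radians); each matrix stands in for one conversion step.
A_s_f  = rot("z", 0.01) @ rot("y", -0.005) @ rot("x", 0.002)  # body -> LVLH (yaw/pitch/roll)
A_es_s = rot("z", 0.7)                                        # LVLH -> Earth-centered orbit
A_i_es = rot("z", 0.5) @ rot("x", np.deg2rad(97.4))           # orbit -> inertial
A_e_i  = rot("z", -1.3)                                       # inertial -> ECEF (sidereal angle)
A_t_e  = rot("y", 0.4) @ rot("z", 0.2)                        # ECEF -> scene

A_total = A_t_e @ A_e_i @ A_i_es @ A_es_s @ A_s_f             # full chain, body -> scene

r_body = np.array([1.0, 0.5, -0.2])    # antenna phase center in the body frame (m)
r_scene = A_total @ r_body             # same vector expressed in the scene frame

# A product of rotations is itself a rotation, so vector lengths are preserved.
assert np.isclose(np.linalg.norm(r_scene), np.linalg.norm(r_body))
```

Composing the individual matrices once and reusing the product keeps the per-pulse conversion a single matrix-vector multiply.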

Model of 3D SAR Moving Target Imaging
The SAR transmits a linear frequency modulation (LFM) signal and demodulates the received echo signal. The received signal is

$$s(\tau, t) = A_0\, w_r\!\left(\tau - \frac{2R(t)}{c}\right) w_a(t - t_c)\, \exp\!\left(-j\,\frac{4\pi R(t)}{\lambda}\right) \exp\!\left(j\pi K_r\!\left(\tau - \frac{2R(t)}{c}\right)^{\!2}\right),$$

where τ represents the range time, t denotes the azimuth time, A 0 is a complex constant, and w r represents the range envelope. Furthermore, w a represents the azimuth envelope, t c is the beam center deviation time, λ is the wavelength corresponding to the radar center frequency f 0 , c is the speed of light, K r represents the chirp rate in the range direction, and R(t) is the instantaneous slant range.
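As a sanity check, the demodulated echo of a single point target can be simulated directly from this expression; all numerical values below (carrier frequency, chirp rate, pulse width, slant range) are hypothetical, not the paper's system parameters.

```python
import numpy as np

c, f0 = 3e8, 9.6e9               # speed of light; hypothetical X-band carrier (Hz)
lam = c / f0                     # wavelength corresponding to f0
Kr, Tp, fs = 1e12, 10e-6, 60e6   # chirp rate (Hz/s), pulse width (s), sampling rate (Hz)
R = 700e3                        # instantaneous slant range R(t) at one azimuth time (m)

delay = 2 * R / c                          # two-way propagation delay
tau = delay + np.arange(-Tp, Tp, 1 / fs)   # fast-time axis centered on the echo

wr = (np.abs(tau - delay) <= Tp / 2).astype(float)      # rectangular range envelope
s = wr * np.exp(-1j * 4 * np.pi * R / lam) \
       * np.exp(1j * np.pi * Kr * (tau - delay) ** 2)   # demodulated point-target echo
```

Range compression then amounts to a matched filter with the conjugate chirp, which is the starting point of the two-dimensional imaging below.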
Using the Born approximation, the complex value of the pixel indexed (x, y) in the azimuth-range plane after the target's two-dimensional imaging is [21]

$$g(x, y) = \int_{s_{\min}}^{s_{\max}} \gamma(x, y, s)\, \exp(-j2\pi\xi s)\, ds,$$

where [s min , s max ] is the span of the target's height in the normal slant range (NSR) direction, γ(x, y, s) is the three-dimensional distribution function of the complex scattering coefficient of the scene, and ξ is the corresponding spatial frequency in the NSR direction.
To align the two-dimensional image pixel sequence corresponding to features of the same name, complex image registration is first required. The first image is selected as the main image, and the other images are registered with respect to it; the complex value of a registered pixel can then be expressed as in [22]. Assuming that the same target is observed from N different spatial positions, N single-look complex SAR images of the target area are obtained. From the moving target's initial position P(x 0 , y 0 , z 0 ) and the imaging geometry, expressions follow for the three-dimensional velocity (v x , v y , v z ) and the three-dimensional acceleration (a x , a y , a z ), where $\sin\theta = x_0/R_{0n}$, $\cos\theta = (h_n - z_0)/R_{0n}$, and n = 1, 2, . . . , N. We use the third-order Taylor expansion to investigate the impact of the moving target on the instantaneous slant range R n (t). For details, please refer to Appendix B.
Since there is no requirement to preserve the image's phase, the quadratic phase term can be ignored. Hence, the above formula can be rewritten in terms of $\xi_n = 2d/(\lambda R_n)$, the spatial frequency corresponding to height s in the NSR direction, where d denotes the baseline interval. Note that the complex value of the resolution unit with the same name in the image sequence, g(n), is a discrete sampling of the spectrum of the target's scattering characteristic function γ(s) in the NSR direction within one resolution unit. The scattering characteristic function is then recovered by the inverse transform [23]

$$\hat{\gamma}(s) = \sum_{n=1}^{N} g(n)\, \exp(j2\pi\xi_n s).$$
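The sampling relation above can be illustrated numerically: with hypothetical wavelength, range, and baseline interval, the image-sequence values g(n) of one resolution cell are spectral samples of γ(s), and the matched summation over a height grid recovers the scatterer heights.

```python
import numpy as np

lam, R0, d, N = 0.03, 700e3, 80.0, 20   # hypothetical wavelength (m), range (m), interval (m), tracks
xi = 2 * d / (lam * R0)                 # spatial-frequency step in the NSR direction
n = np.arange(N)

s_true = np.array([0.0, 40.0])          # two point scatterers in one azimuth-range cell (m)
g = np.exp(-1j * 2 * np.pi * xi * np.outer(n, s_true)).sum(axis=1)   # sampled spectrum g(n)

# Matched summation (inverse transform) over a height grid inside the unambiguous span.
s_grid = np.arange(-60.0, 60.5, 0.5)
profile = np.abs(np.exp(1j * 2 * np.pi * xi * np.outer(n, s_grid)).T @ g)
est = s_grid[int(np.argmax(profile))]   # height of the strongest response
```

With these values the Rayleigh height resolution is roughly λR0/(2·(N−1)d) ≈ 7 m, so the two scatterers 40 m apart appear as two clearly separated peaks.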

SAR Moving Target 3D Imaging Performance Analysis
In this paper, the performance analysis of SAR 3D imaging covers the Doppler parameters, the azimuth offset, the RVP phase, azimuth defocus and range contamination, and the baseline interval.

Doppler Performance with Velocity and Acceleration
By analyzing the Doppler performance of 3D moving targets, the relationship between the Doppler parameters and the 3D velocity and acceleration can be established. Acquiring this relationship is also significant for predicting the target position and for 3D focusing and matching, and it lays the foundation for distributed LEO SAR 3D imaging of moving targets. For the three-dimensional velocity, the Doppler centroid frequency f dcn and the Doppler frequency rate f rn are written as functions of (v x , v y , v z ); for the three-dimensional acceleration, f dcn and f rn likewise depend on (a x , a y , a z ).

Azimuth Offset
The azimuth offset, ∆x, is given in [3], where ∆R max is the range cell migration.

RVP Phase
The residual video phase (RVP) is given in [24]. If the baseline direction is perpendicular to the line of sight, R 1 ≈ R k = R 0 and K a1 ≈ K ak . Hence, the difference between the two Doppler centers is mainly due to the slight difference in the direction of sight.
Assuming that the target's speed relative to satellite 1 is V los along the line of sight, the velocity projected onto satellite k is V los cos δ, where δ = L/R 0 is the opening angle from the target to satellites 1 and k. We then calculate the difference between the two RVP terms based on the design parameters in Tables 1-3. In the moving target 3D imaging, this phase difference is very small and thus can be ignored.
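A quick numeric check shows why this difference is negligible; the slant range and target speed below are hypothetical, while the baseline length follows the later baseline design.

```python
import numpy as np

L, R0 = 1566.0, 700e3   # total baseline (m); hypothetical slant range (m)
V_los = 50.0            # hypothetical line-of-sight target speed toward satellite 1 (m/s)

delta = L / R0                      # opening angle from the target to satellites 1 and k (rad)
dV = V_los * (1 - np.cos(delta))    # loss in projected speed at satellite k
print(f"delta = {delta:.3e} rad, projected-speed difference = {dV:.3e} m/s")
```

Because δ is on the order of milliradians, the projected-speed difference is sub-millimeter-per-second, so the resulting RVP phase difference is far below any phase the imaging is sensitive to.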

Azimuth Defocus and Range Contamination
Using the parameters in Tables 1-3, the constraint for the target not to be defocused is given in [19]. The corresponding constraint for preventing range contamination follows, where B r is the chirp bandwidth.

Baseline Interval
Suppose L is the total baseline length of the distributed SAR; for a uniform distribution of N baselines with interval d, L = (N − 1)d, and d min denotes the shortest interval. The resolution in the height direction is [25,26]

$$\rho_s = \frac{\lambda R}{2L},$$

and the maximum allowable height of the target scene is

$$s_{\max} = \frac{\lambda R}{2d}.$$
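Under these relations, the height resolution and unambiguous height can be evaluated directly; the wavelength and slant range below are hypothetical, while L follows the baseline-design case used later in the simulations.

```python
lam = 0.03      # wavelength (m), hypothetical
R = 700e3       # slant range to the scene center (m), hypothetical
N = 20          # number of baselines (tracks)
L = 1566.0      # total baseline length (m), as in the later baseline-design case
d = L / (N - 1) # uniform baseline interval

rho_s = lam * R / (2 * L)   # Rayleigh height resolution
s_max = lam * R / (2 * d)   # maximum unambiguous height of the scene

print(f"d = {d:.1f} m, rho_s = {rho_s:.2f} m, s_max = {s_max:.1f} m")
```

Note the trade-off this exposes: for a fixed total baseline L, enlarging d sharpens nothing (ρ_s depends only on L) but shrinks the unambiguous height s_max = (N − 1)ρ_s.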

Distributed LEO Spaceborne SAR Moving Target 3D Imaging Framework
The processing flow of the distributed LEO space-borne SAR moving target 3D imaging framework is illustrated in Figure 3. The processing flow includes four main parts: distributed LEO satellite and radar modeling, moving target information processing, moving target imaging baseline design, and moving target spectrum estimation 3D imaging. The distributed LEO satellite and radar modeling framework includes the satellite, radar, and scene parameters. It also includes the distributed LEO SAR simulation system unit, where the output is the complex image sequence. Furthermore, the proposed framework establishes the relative motion model and the distributed SAR moving target 3D imaging model. The moving target information processing framework contains the ground control point (GCP), 3D velocity, and acceleration setting, and the moving target setting comprises the Doppler parameters, RVP phase, azimuth offset, azimuth defocus and range contamination. The moving target imaging baseline design framework performs the configuration and optimization of different baselines. Finally, all results are input into the information processing unit. The moving target spectrum estimation 3D imaging framework includes the moving target complex image registration unit, where deramping and phase error compensation are carried out [27,28]. Furthermore, the spectrum estimation algorithm for height imaging and 3D scene moving target imaging are performed, followed by data display [29].


Method of the Distributed LEO Spaceborne SAR Moving Target 3D Imaging
For the general spatial spectrum estimation, we have the following:

$$y(n) = \mathbf{w}^H \mathbf{x}(n), \qquad P(\mathbf{w}) = E\big[|y(n)|^2\big] = \mathbf{w}^H \mathbf{R}_x \mathbf{w},$$

where $\mathbf{w} = [w_1, w_2, \cdots, w_M]^T$ denotes the weight vector, y(n) is the measurement (output) signal, x(n) is the input signal, P(w) is the signal power, and $\mathbf{R}_x = E[\mathbf{x}(n)\mathbf{x}^H(n)]$ is the covariance matrix of the input.
For the 3D moving target signal, we have $\mathbf{g} = \mathbf{A}\boldsymbol{\gamma}_s$, where A is the observation matrix and γ s is the scattering coefficient vector. If the velocity is zero, the matrix A for the elevation direction consists of the elevation steering vectors. For a velocity of V r , the dictionary is extended with velocity atoms [30], where v r (p) is the radial velocity of the scattering point at position p in the scattering point dictionary model. Additionally, P is the number of positions in the scatter point dictionary model and Q is the number of speed samples in the speed dictionary.
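One plausible construction of the joint height-velocity dictionary A is sketched below; the exact phase model of [30] may differ, and all system values and the track-to-track time offset here are hypothetical.

```python
import numpy as np

lam, R0, d, N = 0.03, 700e3, 80.0, 20   # hypothetical system values
xi = 2 * d / (lam * R0)                 # spatial-frequency step in the NSR direction
n = np.arange(N)
Ta = 1.0                                # hypothetical track-to-track time offset (s)

s_grid = np.linspace(-60, 60, 121)      # P height positions in the scatter-point dictionary
v_grid = np.linspace(-2, 2, 41)         # Q radial-velocity samples in the speed dictionary

# One column per (height, velocity) pair: the elevation steering vector multiplied by a
# track-dependent velocity phase ramp (differential-tomography-style model).
cols = [np.exp(-1j * 2 * np.pi * (xi * s * n + 2 * vr * Ta * n / lam))
        for s in s_grid for vr in v_grid]
A = np.stack(cols, axis=1)              # shape (N, P * Q)
```

Every atom is a unit-modulus phase vector, so the estimators below reduce to evaluating a spectrum over the (height, velocity) grid.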

Beamforming for 3D Moving Target
Beamforming, also referred to as spatial filtering, is one of the hallmarks of array signal processing. Its essence is to perform spatial filtering by weighting each array element to enhance the desired signal and suppress interference. The weighting factor of each array element can also be adaptively changed according to the change in the signal environment. For w = a(V r , β), we have

$$P_{CBF}\big(a(V_r, \beta)\big) = a^H(V_r, \beta)\, R_{CBF}\, a(V_r, \beta),$$

where P CBF (a(V r , β)) is the signal power and R CBF is the covariance matrix of the moving target echo data.

Capon for 3D Moving Target
The weight vector is adaptively formed according to the input and output signals of the array. Different weight vectors direct the formed beam in different directions, and the direction that maximizes the output power of the desired signal is the signal incident direction. Thus:

$$P_{Capon}\big(a(V_r, \beta)\big) = \frac{1}{a^H(V_r, \beta)\, R_{Capon}^{-1}\, a(V_r, \beta)},$$

where P Capon (a(V r , β)) is the signal power and R Capon is the covariance matrix of the moving target echo data.

MUSIC for 3D Moving Target
The specific steps include (i) performing eigenvalue decomposition on the covariance matrix R MUSIC of the received data array to obtain the mutually orthogonal signal subspace U S and noise subspace U N , and then (ii) using the orthogonality to estimate the signal parameters [31,32]:

$$P_{MUSIC}\big(a(V_r, \beta)\big) = \frac{1}{a^H(V_r, \beta)\, U_N U_N^H\, a(V_r, \beta)},$$

where P MUSIC (a(V r , β)) is the signal power and R MUSIC is the covariance matrix of the moving target echo data.
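The three estimators can be compared on a synthetic stack; the array size, spatial-frequency step, and source heights below are hypothetical, and the velocity dimension is dropped to keep the sketch one-dimensional.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 20, 200                  # tracks (array elements) and snapshots, hypothetical
xi = 0.01                       # spatial-frequency step 2d/(lam*R), hypothetical
n = np.arange(N)

def a(s):
    """Elevation steering vector for height s."""
    return np.exp(-1j * 2 * np.pi * xi * s * n)

s_true = [0.0, 25.0]            # two scatterer heights (m)
X = sum(a(s)[:, None] * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
        for s in s_true)
X = X + 0.1 * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
R = X @ X.conj().T / K          # sample covariance of the echo data

evals, V = np.linalg.eigh(R)    # ascending eigenvalues
Un = V[:, : N - len(s_true)]    # noise subspace (smallest eigenvalues)
Ri = np.linalg.inv(R)

s_grid = np.linspace(-50.0, 75.0, 501)
P_cbf   = np.array([np.real(a(s).conj() @ R @ a(s)) for s in s_grid])
P_capon = np.array([1.0 / np.real(a(s).conj() @ Ri @ a(s)) for s in s_grid])
P_music = np.array([1.0 / np.sum(np.abs(Un.conj().T @ a(s)) ** 2) for s in s_grid])
```

All three spectra peak near the true heights; Capon and MUSIC trade the robustness of the conventional beamformer for sharper peaks when the covariance is well estimated.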

Simulation Results
This section generates the simulated SAR data using the Spaceborne Radar Advanced Simulator (SBRAS) system [33,34].

Satellite Parameters
For SC1, SC2, SC3 . . . SC20, the simulation conditions are as follows. Table 1 reports the spacecraft's orbit parameters, where a is the semi-major axis, e is the eccentricity, i is the inclination, Ω is the right ascension of ascending node, ω is the argument of perigee, and f is the true anomaly.


Radar Parameters
The radar and signal propagation parameters are presented in Tables 2 and 3, respectively.


Case 1: Doppler Performance with Velocity and Acceleration
For v y = 0 : 1 : 200 m/s, v x = 100 m/s, and v z = 0 : 1 : 200 m/s, the radial velocity is v r = v y sin θ. The corresponding results are illustrated in the following figures.
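For intuition, the extra Doppler centroid contributed by the target's radial motion in a monostatic geometry is approximately 2 v r /λ; the wavelength and geometry angle below are hypothetical.

```python
import numpy as np

lam = 0.03                  # wavelength (m), hypothetical
theta = np.deg2rad(30.0)    # angle with sin(theta) = x0 / R0n, hypothetical

v_y = np.arange(0, 201)             # range-velocity sweep, as in the case setup (m/s)
v_r = v_y * np.sin(theta)           # radial velocity
f_dc_extra = 2 * v_r / lam          # extra Doppler centroid from radial motion (Hz)
```

This linear growth of the Doppler centroid with v r is what the figures below show for the 1st and 20th images.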
Based on the given velocity, the Doppler centroid of the 1st and 20th images of the moving target continuously increases as the radial velocity v r and vertical velocity v z increase in Figure 5. The deviation of the two images is small in Figure 6, indicating that the Doppler centroid error under different heights is rather small. By increasing the radial and vertical velocities, the Doppler frequency rate of the 1st and 20th images becomes a quadratic function. Figure 5 highlights that for v y = 100 m/s, there is a maximum value, and the impact of v z is small. Figure 6 also reveals that the Doppler frequency rate deviation of the two images is very small.
For v y = 0 : 1 : 200 m/s, v x = 100 m/s, and v z = 0 : 1 : 200 m/s, the radial velocity is v r = v y sin θ. We set a x = a y = a z = 2 m/s², and the corresponding results are presented in the following figures.

Based on the given acceleration, the Doppler centroid of the 1st and 20th images of the moving target increases with increasing radial and vertical velocities, regardless of the acceleration. Figure 7 highlights that the Doppler frequency rate of the 1st and 20th images also increases with the increasing radial velocity v r and the normal velocity v z . Additionally, the impact of v z is very small, which is directly related to a y and a z . We can also see that acceleration mainly affects the Doppler frequency rate instead of the Doppler centroid. Figure 8b also suggests that the Doppler frequency rate deviation of the two images is very small, implying that acceleration estimation requires at least three Doppler frequency rate equations.
The azimuth offset of all 20 satellites is linearly related to the radial velocity: the greater the velocity, the greater the offset. For a range velocity of 200 m/s, the difference between the azimuth offsets of the moving point in the 1st and 20th satellite images is 0.035-0.04 m. The offset difference is about 0.04 m, and the resolution is at the meter level.
After registering the complete image, the moving target does not need additional registration.

Case 3: Baseline Interval
For different baseline intervals, the results are presented in Figure 11. We set the baseline interval and obtained the maximum allowable height of the target scene. For L = 1566 m and 20 satellites, the height resolution is 6.35 m.

Static Target by Spectral Estimation
Through the distributed SAR simulation imaging system, 20 sequence images are obtained. The first trajectory image is used as the reference image, and the other images are registered with the first image. The interference phase is then calculated to find the image's ground control point (GCP) (this paper sets 100 GCP points). We then find the reference slant distance, divide the grid using the spectral estimation method, focus the entire image, and finally achieve height-directional imaging. Figure 12 shows the 20 SAR images of the cone scene by GCP points.
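The per-pixel height imaging step of this pipeline can be sketched as follows; the registration and GCP phase compensation steps are represented only by comments, and all system values are hypothetical.

```python
import numpy as np

def height_profile(g, xi, s_grid):
    """Matched-summation (beamforming) height profile for one registered pixel stack g(n)."""
    n = np.arange(len(g))
    A = np.exp(-1j * 2 * np.pi * xi * np.outer(n, s_grid))  # forward steering matrix
    return np.abs(A.conj().T @ g)

# Pipeline outline (helper steps shown as comments only):
#   1. register the 20 complex images to the first trajectory image
#   2. compensate phase errors using the ~100 GCP points
#   3. deramp, then run the height estimator pixel by pixel

lam, R0, d, N = 0.03, 700e3, 80.0, 20   # hypothetical system values
xi = 2 * d / (lam * R0)

s_scatterer = 8.5                        # hypothetical true height of one cone scatterer (m)
g = np.exp(-1j * 2 * np.pi * xi * s_scatterer * np.arange(N))  # ideal registered pixel stack

s_grid = np.arange(-60.0, 60.25, 0.25)
prof = height_profile(g, xi, s_grid)
s_hat = s_grid[int(np.argmax(prof))]     # estimated height for this pixel
```

Running this over every registered pixel and stacking the per-pixel height estimates yields the 3D scene reconstruction shown in the figures below.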

Static Target by Spectral Estimation
Through the distributed SAR simulation imaging system, 20 sequence images are obtained. The first trajectory image is used as the reference image, and the other images are registered with the first image. The interference phase is then calculated to find the image's ground control point (GCP) (this paper sets 100 GCP points). We then find the reference slant distance, divide the grid using the spectral estimation method, focus the entire image, and finally achieve height-directional imaging. Figure 12 shows the 20 SAR images of the cone scene by GCP points. Remote Sens. 2022, 14, x FOR PEER REVIEW 17 of 25 The results presented in the following figures are based on the spectral estimation method. Figure 13 illustrates the ground truth of the 3D scatter model, Figure 14a,b is the 3D imaging result under 20 and 10 tracks, respectively. Compared with the points corresponding to the ground truth, we observe that the 3D imaging result based on 100 GCP points presents a certain error relationship between them. Figure 15 presents the imaging results of the 1st and 55th points for 20 tracks, and Figure 16 shows the imaging results of the 1st and 55th points for 10 tracks. Both figures highlight that the NSR direction is offset by a certain distance, corresponding to a height of 8.4961 m for Target S1 and 8.2695 m for Target S2. The imaging error of this point is also smaller. Overall, the results reveal that when the total baseline length remains unchanged, the larger the baseline interval, the higher the height resolution.  The results presented in the following figures are based on the spectral estimation method. Figure 13 illustrates the ground truth of the 3D scatter model, Figure 14a,b is the 3D imaging result under 20 and 10 tracks, respectively. Compared with the points corresponding to the ground truth, we observe that the 3D imaging result based on 100 GCP points presents a certain error relationship between them. 
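The height-direction focusing step described above can be sketched, for a single range-azimuth pixel and classical beamforming over the registered tracks; the baseline interval, wavelength, and slant range below are assumed values, not the simulation's parameters:

```python
import numpy as np

# Minimal single-pixel sketch of spectral-estimation height focusing (CBF).
# All system parameters below are illustrative assumptions.
N = 20                           # number of tracks / SAR images
wavelength = 0.031               # m (assumed)
r0 = 641e3                       # m, reference slant range (assumed)
baselines = np.arange(N) * 82.4  # m, uniform baseline interval (assumed)

s_true = 35.0                            # m, true scatterer height (assumed)
xi = 2.0 * baselines / (wavelength * r0) # height-direction spatial frequencies
g = np.exp(1j * 2 * np.pi * xi * s_true) # complex pixel values across tracks

s_grid = np.linspace(0.0, 100.0, 2001)   # height search grid
# CBF: match the data against steering vectors for every candidate height
spectrum = np.abs(np.exp(-1j * 2 * np.pi * np.outer(s_grid, xi)) @ g) / N
s_hat = s_grid[np.argmax(spectrum)]
print(f"estimated height: {s_hat:.2f} m")
```

The peak of the beamforming spectrum recovers the scatterer height; repeating this per pixel yields the height-direction image.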

A Moving Target by Spectral Estimation
Case 1: We set the point in the 5th row and the 6th column (the 55th point) as the moving target point (the red arrow); the moving target's range velocity is vx = 0.1 m/s (preventing range contamination). The corresponding results are presented in Figures 17-20. Specifically, Figure 17 shows the 1st, 2nd, and 3rd of the 20 SAR images. Figure 18a is the 3D imaging result for 20 tracks, and Figure 18b is for 10 tracks. Figure 19 illustrates the imaging results of the 1st and 55th points for 20 tracks, and Figure 20 shows the imaging results of the 1st and 55th points for 10 tracks.
The above figures highlight that the 3D imaging result based on 100 GCP points has a certain offset relative to the corresponding points in the static case. In the NSR direction, the offset corresponds to a height of 8.9961 m for Target M1 and 8.7695 m for Target M2.
Case 2: Similarly, we set the point in row 5 and column 6 (the 55th point) as the moving target point (the red arrow), with a range velocity of vx = 2 m/s. The corresponding results are illustrated in Figures 21-24.
Specifically, Figure 21 presents the 1st, 2nd, and 3rd of the 20 SAR images. Figure 22a is the 3D imaging result for 20 tracks, and Figure 22b is for 10 tracks. Figure 23 shows the imaging results of the 1st and 55th points for 20 tracks, and Figure 24 shows those for 10 tracks. The above figures indicate that the 3D imaging result based on 100 GCP points has a certain offset relative to the corresponding points in the static case. In the NSR direction, the offset corresponds to a height of 16.2695 m for Target M3 and 16.4961 m for Target M4.
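A back-of-envelope sketch of why the larger range velocity produces the larger displacement: in classical SAR-GMTI the along-track shift of a mover scales roughly as v_r·R/v_p. The platform speed and slant range below are assumed values for a LEO geometry, not the paper's parameters:

```python
# Classical azimuth-displacement rule of thumb, delta ~ v_r * R / v_p.
# v_platform and slant_range are illustrative assumptions.
v_platform = 7600.0   # m/s, assumed LEO platform speed
slant_range = 641e3   # m, assumed slant range

shifts = {v_r: v_r * slant_range / v_platform for v_r in (0.1, 2.0)}
for v_r, d in shifts.items():
    print(f"v_r = {v_r:4.1f} m/s -> displacement ~ {d:6.1f} m")
```

The displacement grows linearly with the range velocity, in line with the larger offsets observed for Case 2 relative to Case 1.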

Simulation Evaluation
We measured the 3D imaging time over the 100 GCP points and employed the root mean square error (RMSE) as the evaluation criterion (performance shown in Table 4).
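The RMSE criterion over the GCP set can be sketched as follows; the 3D coordinates below are synthetic placeholders, since the paper's point data are not reproduced here:

```python
import numpy as np

# Sketch of the RMSE evaluation over 100 GCP points with synthetic data.
rng = np.random.default_rng(0)
truth = rng.uniform(0.0, 100.0, size=(100, 3))             # ground-truth 3D positions
estimate = truth + rng.normal(0.0, 0.5, size=truth.shape)  # imaged positions with errors

# RMSE of the 3D position error over all GCP points
rmse = np.sqrt(np.mean(np.sum((estimate - truth) ** 2, axis=1)))
print(f"RMSE = {rmse:.3f} m")
```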


Discussion
In this paper, we propose a framework for distributed LEO SAR 3D imaging of slow air moving targets via spectral estimation. Specifically, we design adequate simulations to verify the effectiveness of the intermediate links and of the method's final results, and we highlight the influencing factors. The simulations demonstrate that we achieve 3D imaging of slow air moving targets at speeds of 0.1 m/s and 2 m/s. Unlike traditional AMTI methods, our method can distinguish slow-moving targets in the height direction.
Compared with the static scene, we found that moving targets at different speeds produce different effects: the greater the speed, the greater the offset of the moving target. Furthermore, the results suggest that the larger the number of baselines, the better the imaging quality. Comparing the simulation evaluation results, we observe that the higher the speed of the moving target, the shorter the time consumption and the larger the root mean square error, indicating that the 3D image of a faster-moving target is of lower quality than that of a slower one. The running times of the different spectral estimation methods are similar; among them, CBF requires the longest time, Capon the second longest, and MUSIC the shortest. Moreover, regardless of the spectral estimation method, the larger the number of baselines, the longer the time and the smaller the root mean square error.
These results are consistent with the performance analysis of 3D imaging of moving targets. Overall, the simulation results confirm that the proposed distributed LEO SAR moving target 3D imaging framework meets the requirements and is suitable for AMTI.
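The relative behavior of the three estimators compared above can be illustrated with a minimal single-pixel sketch; the baselines, wavelength, range, number of looks, and noise levels below are assumed for illustration and are not the paper's simulation parameters:

```python
import numpy as np

# Toy comparison of CBF, Capon, and MUSIC height spectra for one pixel.
# All parameters are illustrative assumptions.
N = 10                        # tracks (assumed)
wavelength, r0 = 0.031, 641e3 # m, assumed wavelength and slant range
b = np.arange(N) * 160.0      # m, assumed baselines
xi = 2.0 * b / (wavelength * r0)

def steer(s):
    """Steering vector for candidate height s."""
    return np.exp(1j * 2.0 * np.pi * xi * s)

rng = np.random.default_rng(1)
s_true, looks = 30.0, 64
snapshots = np.stack([
    rng.normal(1.0, 0.1) * steer(s_true)                      # fluctuating amplitude
    + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    for _ in range(looks)
], axis=1)                                                     # N x looks
R = snapshots @ snapshots.conj().T / looks                     # sample covariance
R_inv = np.linalg.inv(R)
eigvals, eigvecs = np.linalg.eigh(R)                           # ascending order
E_n = eigvecs[:, :-1]                                          # noise subspace (1 source)

s_grid = np.linspace(0.0, 60.0, 1201)
peaks = {}
for name, spec_fn in (
    ("CBF",   lambda a: np.real(a.conj() @ R @ a)),
    ("Capon", lambda a: 1.0 / np.real(a.conj() @ R_inv @ a)),
    ("MUSIC", lambda a: 1.0 / (np.linalg.norm(E_n.conj().T @ a) ** 2)),
):
    spectrum = np.array([spec_fn(steer(s)) for s in s_grid])
    peaks[name] = s_grid[int(np.argmax(spectrum))]
    print(f"{name:5s} height estimate: {peaks[name]:.2f} m")
```

In this toy setting all three estimators locate the scatterer; the practical differences lie in sidelobe level, super-resolution capability, and computational cost, which the timing comparison above reflects.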

Conclusions
This paper designs a feasible framework for distributed LEO space-borne SAR moving target 3D imaging, aimed at AMTI of slow air moving targets in the height direction. Our framework contains four subsystems: distributed LEO satellite and radar modeling, moving target information processing, baseline design framework, and spectrum estimation 3D imaging. Specifically, we first establish the relative motion model of the satellite platform and the 3D imaging model of the distributed LEO SAR moving target. Based on the proposed model, a cone scene is designed, and the echo data of the moving target obtained by the distributed SAR simulation system SBRAS are used to generate two-dimensional sequence images. Considering the key influencing factors, such as the three-dimensional velocity and acceleration, we then discuss the effects of velocities and different baseline intervals on the moving targets' imaging performance. Moreover, spatial spectrum estimation is used to perform 3D moving target imaging in the NSR direction. The simulations and analysis for different speeds of the moving target are also presented, confirming the proposed method's efficiency. Future work will improve the entire framework for distributed LEO SAR moving target 3D imaging to handle cases where the image is defocused due to the target's high speed.

Appendix A
The coordinates of the satellite platform can be expressed as $(r\cos\varphi(t_a), r\sin\varphi(t_a), 0)$ in the local vertical, local horizontal frame, and $(x_{sc}, y_{sc}, z_{sc})$ is the satellite platform position. Applying the second-order Taylor expansion to the position $(x_{sc}, y_{sc}, z_{sc})$, we have:
$$
\begin{aligned}
x_{sc}(t_a) ={}& r(\cos i \sin\varphi_0 \sin\sigma_0 + \cos\varphi_0 \cos\sigma_0) \\
&+ r[(\cos i \sin\varphi_0 \cos\sigma_0 - \cos\varphi_0 \sin\sigma_0)\omega_e + (\cos i \cos\varphi_0 \sin\sigma_0 - \sin\varphi_0 \cos\sigma_0)\omega_{sc}]\,t_a \\
&+ r[2(\cos i \cos\varphi_0 \cos\sigma_0 + \sin\varphi_0 \sin\sigma_0)\omega_e \omega_{sc} - (\cos i \sin\varphi_0 \sin\sigma_0 + \cos\varphi_0 \cos\sigma_0)(\omega_e^2 + \omega_{sc}^2)]\,t_a^2/2 \\
y_{sc}(t_a) ={}& r(\cos i \sin\varphi_0 \cos\sigma_0 - \cos\varphi_0 \sin\sigma_0) \\
&+ r[(\cos i \cos\varphi_0 \cos\sigma_0 + \sin\varphi_0 \sin\sigma_0)\omega_{sc} - (\cos i \sin\varphi_0 \sin\sigma_0 + \cos\varphi_0 \cos\sigma_0)\omega_e]\,t_a \\
&- r[2(\cos i \cos\varphi_0 \sin\sigma_0 - \sin\varphi_0 \cos\sigma_0)\omega_e \omega_{sc} + (\cos i \sin\varphi_0 \cos\sigma_0 - \cos\varphi_0 \sin\sigma_0)(\omega_e^2 + \omega_{sc}^2)]\,t_a^2/2 \\
z_{sc}(t_a) ={}& r\sin i \sin\varphi_0 + r\sin i \cos\varphi_0\,\omega_{sc} t_a - r\sin i \sin\varphi_0\,\omega_{sc}^2 t_a^2/2
\end{aligned} \tag{A3}
$$

Hence, we have the following:

$$
\begin{bmatrix} x_{sc} \\ y_{sc} \\ z_{sc} \end{bmatrix} =
\begin{bmatrix}
x_{sc0} + v_{scx} t_a + a_{scx} t_a^2/2 \\
y_{sc0} + v_{scy} t_a + a_{scy} t_a^2/2 \\
z_{sc0} + v_{scz} t_a + a_{scz} t_a^2/2
\end{bmatrix}
$$

where $(x_{sc0}, y_{sc0}, z_{sc0})$ is the initial satellite platform position, $(v_{scx}, v_{scy}, v_{scz})$ is the satellite platform velocity, and $(a_{scx}, a_{scy}, a_{scz})$ is the satellite platform acceleration. Therefore, over a very short observation time $t_a$, the motion of the satellite platform can be decomposed into uniformly accelerated linear motion along each coordinate axis.
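As a quick numerical sanity check of this expansion (with assumed orbital parameters, not the paper's), one can compare the exact position history of the x-coordinate with its second-order Taylor approximation over a short aperture time:

```python
import numpy as np

# Verify numerically that the exact platform x-coordinate is well approximated
# by its second-order Taylor expansion over a short time. Orbital radius,
# inclination, initial angles, and rates below are assumed values.
r = 7.0e6                                          # m, orbital radius (assumed)
i = np.deg2rad(45.0)                               # inclination (assumed)
phi0, sigma0 = np.deg2rad(30.0), np.deg2rad(10.0)  # initial angles (assumed)
w_sc = 2.0 * np.pi / 5900.0                        # rad/s, orbital rate (assumed)
w_e = 7.292e-5                                     # rad/s, Earth rotation rate

def x_exact(t):
    phi, sig = phi0 + w_sc * t, sigma0 + w_e * t
    return r * (np.cos(i) * np.sin(phi) * np.sin(sig) + np.cos(phi) * np.cos(sig))

# Zeroth-, first-, and second-order coefficients from the expansion above
x0 = x_exact(0.0)
vx = r * ((np.cos(i) * np.sin(phi0) * np.cos(sigma0) - np.cos(phi0) * np.sin(sigma0)) * w_e
          + (np.cos(i) * np.cos(phi0) * np.sin(sigma0) - np.sin(phi0) * np.cos(sigma0)) * w_sc)
ax = r * (2 * (np.cos(i) * np.cos(phi0) * np.cos(sigma0) + np.sin(phi0) * np.sin(sigma0)) * w_e * w_sc
          - (np.cos(i) * np.sin(phi0) * np.sin(sigma0) + np.cos(phi0) * np.cos(sigma0)) * (w_e**2 + w_sc**2))

ta = 1.0  # s, short aperture time
err = abs(x_exact(ta) - (x0 + vx * ta + ax * ta**2 / 2))
print(f"Taylor-expansion error over {ta} s: {err:.2e} m")
```

The residual is at the millimeter level for a one-second interval, supporting the uniform-acceleration approximation of the platform motion over a short observation time.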