Article

A Trajectory Estimation Method Based on Microwave Three-Point Ranging for Sparse 3D Radar Imaging

School of Electronics and Information Engineering, Beihang University, Beijing 100191, China
*
Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(20), 3397; https://doi.org/10.3390/rs17203397
Submission received: 13 August 2025 / Revised: 17 September 2025 / Accepted: 3 October 2025 / Published: 10 October 2025
(This article belongs to the Section Engineering Remote Sensing)

Highlights

What are the main findings?
  • A microwave three-point ranging scheme with three external reflective spheres uniquely determines the radar’s 3D position at each sample—even under random jitter—without any communications link.
  • The resulting trajectory accuracy enables high-fidelity 3D radar imaging under sparse sampling.
What is the implication of the main finding?
  • A simple, retrofit-friendly external calibration (three spheres) increases the robustness and flexibility of sparse near-field radar imaging.
  • The method scales to UAV-borne 3D imaging, supporting accurate in-flight radar localization without relying on communications.

Abstract

Precise estimation of the antenna location is essential for high-quality three-dimensional (3D) radar imaging, especially under sparse sampling schemes. In scenarios involving synchronized scanning and rotational motion, small deviations in the radar's transmitting position can lead to significant phase errors, thereby degrading image fidelity or even causing imaging failure. To address this challenge, we propose a novel trajectory estimation method based on microwave three-point ranging. The method uses three fixed microwave-reflective calibration spheres positioned outside the imaging scene. By measuring the one-dimensional radial distances between the radar and each of the three spheres, and geometrically constructing three intersecting spheres in space, the radar's spatial position can be uniquely determined at each sampling moment. This external reference-based localization scheme significantly reduces positioning errors without requiring precise synchronization control between scanning and rotation. Furthermore, the proposed approach enhances the robustness and flexibility of sparse sampling strategies in near-field radar imaging. Beyond ground-based setups, the method also holds promise for drone-borne 3D imaging applications, enabling accurate localization of onboard radar systems during flight. Simulation results and error analysis demonstrate that the proposed method improves trajectory accuracy and supports high-fidelity 3D reconstruction under non-ideal sampling conditions.

1. Introduction

Three-dimensional (3-D) radar imaging has evolved into a cornerstone sensing modality for autonomous drones, ground-based surveillance, and compact antenna-test facilities. By synthesizing a large synthetic aperture in both elevation and azimuth, 3-D radar can recover volumetric scattering distributions that reveal a target’s geometry, material composition, and even internal cavities [1,2]. In recent years, sparse 3-D imaging—that is, reconstructing high-resolution scenes from far fewer spatial samples—has received considerable attention because it dramatically shortens acquisition time and reduces data-handling requirements [3,4]. Yet, sparse sampling also tightens the tolerance on antenna-trajectory accuracy: any position error directly maps into phase error that cannot be averaged out by dense measurements. Even millimeter-level deviations translate into conspicuous artifacts or complete imaging failure at Ku- and millimeter-wave bands.
One pragmatic sparse-sampling architecture is synchronized vertical scanning and azimuthal rotation (SVSAR) [5]: the radar antenna (or the target on a turntable) rotates horizontally while the antenna simultaneously scans linearly along the vertical rail. The helical or V-shaped trajectory generated thereby is easy to implement on existing cylindrical radar cross section (RCS) ranges and is compatible with the limited payload of small drones. Nevertheless, synchronizing two mechanical axes plus the timing of transmit/receive (T/R) events is non-trivial; unmodelled delay, servo jitter, or wind-induced vibration may offset the antenna from its nominal coordinates at each pulse repetition interval (PRI). When compounded over hundreds of pulses, these sub-millimeter offsets cause severe phase-wrapping errors and blur the 3-D reconstruction.
Numerous studies have attempted to correct trajectory errors after the fact [6]. Multi-channel interferometric calibration [7] and phase-gradient autofocus (PGA) variants [8] can estimate inter-pulse phase drifts, but they require either extra hardware channels or strongly scattering calibration poles. Sub-aperture-based residual compensation, as proposed by Zhang et al. [9], effectively suppresses slow drift yet is less effective against high-frequency servo vibration. Manzoni et al. [10] leveraged an on-board Inertial Measurement Unit (IMU) in automotive multiple-input multiple-output Synthetic Aperture Radar (MIMO-SAR) to remove constant-velocity errors, but the method deteriorates as IMU bias accumulates. Yang et al. [11] exploited multi-channel interferometric phase to infer array deformation, but the multi-receiver configuration increases cost and weight. Sun et al. [12] analyzed frequency-synchronization error in distributed SAR; their framework, however, presumes precisely surveyed baselines and is not directly transferable to single-platform near-field setups.
The last five years have witnessed a surge of Unmanned Aerial Vehicle (UAV)-oriented trajectory-localization studies that complement sparse 3-D radar imaging. Nacar et al. [13] propose a Gated Recurrent Unit (GRU)-based network that forecasts future velocities rather than positions, yielding sub-centimeter mean-square error; however, the method is purely data-driven and cannot compensate for mechanical jitter, so it falls short of the millimeter-level phase accuracy required for radar imaging. Jing et al. [14] formulate a weighted optimization that balances down-link capacity and radar-based target localization, solved via iterative refinement; it assumes the UAV's attitude is known and does not model trajectory errors, making it difficult to meet the high-precision requirements of sparse near-field imaging. Gupta et al. [15] examine visual-inertial fusion and Simultaneous Localization and Mapping (SLAM) pipelines for airborne platforms [16], but the performance of visual/IMU methods is limited in indoor anechoic chambers or metallic environments, and they do not address millimeter-wave phase alignment. Pan et al. [17] optimize multi-drone paths jointly with sensing-communication power budgets, assuming accurate positioning and without modeling mechanical errors or timing jitter. Hou et al. [18] design a backstepping sliding-mode controller to reject wind disturbances in quadrotors; however, it relies on precise external position feedback and cannot independently provide the absolute coordinates required for radar positioning.
Existing research has made progress in trajectory prediction, cooperative planning, and SLAM localization, but no solution has yet emerged that balances sparse-sampling efficiency with millimeter-level trajectory accuracy without requiring expensive hardware. Existing compensation techniques either rely on costly auxiliary sensors [19,20,21,22] (laser trackers, differential Global Positioning System (GPS)) or assume motion errors to be slowly varying, thereby ignoring high-frequency jitter. For UAV radar [23,24] or indoor near-field turntable testing, adding sensors is often limited by payload, power consumption, or line of sight. Moreover, many algorithms iterate in the image domain and struggle to converge once the phase error becomes too large. There is therefore an urgent need for a positioning solution that requires no mechanical modifications, is compatible with both ground rails and drone platforms, coexists with the test scene, and provides centimeter- to millimeter-level accuracy.
This paper introduces a microwave three-point ranging (M3PR) strategy tailored for sparse 3-D radar imaging under synchronized scanning–rotation sampling. Three low-cost metallic calibration spheres are placed outside the region of interest. At every pulse, the radar measures the round-trip distances to these spheres; the intersection of the resulting distance spheres yields the instantaneous radar position. The key contributions of this work are highlighted below:
  • A novel trajectory estimation method is proposed for 3D radar imaging, focusing on precise radar localization via microwave three-point ranging. The method requires no communication infrastructure and relies solely on three fixed microwave-reflective calibration spheres placed outside the imaging region. By extracting one-dimensional range profiles and using the known coordinates of the sphere centers, the radar’s spatial position can be uniquely determined at each sampling instance. An analytical solution for position estimation is derived, and the sensitivity of microwave imaging systems to trajectory jitter is analyzed.
  • Based on the proposed localization method, its applicability is further investigated in both ISAR and SAR system configurations. For each system, a rational layout strategy for the three reference spheres is developed to ensure peak identifiability and accurate phase compensation under different geometric constraints. These layout designs provide theoretical support for future engineering implementation.
The subsequent sections of this paper are structured as follows. Section 2 introduces the proposed trajectory estimation method and its analytical solution for 3D radar imaging, along with an analysis of the sensitivity of phase errors to trajectory deviations. Section 3 presents the system configuration and parameter settings used for validation. Section 4 provides trajectory localization and 3D imaging results based on the proposed method. Section 5 discusses the positioning and phase compensation accuracy, the applicability of the proposed method to both ISAR and SAR configurations, and the layout strategies for the three reference spheres under each scenario. Finally, Section 6 concludes the study.

2. Methods

2.1. System Model

Modern 3-D radar imaging platforms employ a variety of sampling geometries—including planar, spherical, and cylindrical trajectories—to acquire volumetric scattering data. Among these choices, cylindrical scanning strikes a favorable balance between mechanical simplicity and angular diversity, making it the de facto standard in compact antenna-test ranges and drone-borne payloads. To lay a clear theoretical foundation, we first review the conventional full-coverage cylindrical model, in which the radar (or target) rotates through a complete 360° at each elevation step. We then extend this formulation to a synchronized linear-scan and rotation scheme that enables sparse acquisition while preserving 3-D resolution. The mathematical relationships derived in the following subsections serve as the basis for the subsequent phase-error sensitivity analysis and for the proposed microwave three-point positioning method.

2.1.1. Traditional Cylindrical Sampling Model

Figure 1 illustrates the model of the cylindrical sampling 3D imaging system. The imaging Cartesian coordinate system is defined with its origin at the center of the turntable. The coordinates of any imaging position are denoted by $P_n(x_n, y_n, z_n)$, and the corresponding scattering intensity is $\sigma(x_n, y_n, z_n)$. The distance from the cylindrical sampling array of length $L_z$ to the z-axis of the imaging coordinate system is $R$. In a classical cylindrical acquisition, the measurement proceeds as follows: the turntable first rotates by a prescribed azimuth increment and is then held stationary while the Tx/Rx antenna performs a vertical mechanical scan, transmitting and receiving echoes throughout the scan. After the vertical sweep is completed, the turntable advances by the next azimuth increment and the process is repeated. The sequence continues until the turntable has covered the full azimuthal sector of interest, thereby synthesizing a virtual cylindrical aperture for 3-D image reconstruction. The coordinates of the radar transmit-receive positions are $(R\cos\theta, R\sin\theta, z)$.
The cylindrical sampling approach has been widely adopted in indoor imaging measurements. However, in outdoor environments, its application becomes more challenging due to the typically large dimensions of the measured targets, which require long and heavy guide rails. Moreover, environmental factors—particularly wind—can induce mechanical vibrations during antenna scanning, and the limited precision of the rail system further exacerbates positioning errors. As a result, the actual position of the transmitting/receiving antenna may deviate significantly from its nominal trajectory, leading to phase compensation errors and potentially causing imaging failure. To address the positioning errors caused by rail-system limitations and external disturbances, there is a pressing need for a trajectory localization method that can accurately determine the antenna’s actual position and remain compatible with existing measurement systems.

2.1.2. Sampling Model for Linear Scanning and Rotationally Synchronized Motion

The synchronized scanning–rotation sampling scheme [5] is a novel sparse acquisition method that offers simple operation and acceptable imaging quality, making it well-suited for integration with existing near-field imaging systems. In practical measurements, the procedure is as follows: after configuring an appropriate speed ratio between the turntable and the vertically scanning antenna, both begin moving simultaneously while the Tx/Rx antenna transmits and receives signals in parallel. The vertical scan operates in a back-and-forth manner and continues in synchrony with the turntable until the full azimuthal sector is covered. This process results in a virtual cylindrical “V”-shaped sampling trajectory, as illustrated in Figure 2.
This sparse sampling method has been validated through principle-level experiments in an anechoic chamber. However, its practical deployment may still be affected by position errors caused by timing jitter and rail-induced vibration, which can significantly degrade phase compensation during imaging. Therefore, it is necessary to develop a jitter-error mitigation approach that enables accurate localization of the antenna’s actual motion trajectory, thereby enhancing the engineering applicability and practical value of this sparse sampling scheme.

2.2. Phase Error Sensitivity Analysis of Trajectory Deviation

To quantify the impact of scanning antenna trajectory errors on 3D imaging quality, this section derives the phase mismatch threshold based on the monostatic near-field scattering model and establishes a theoretical criterion for complete image defocusing.
For a monostatic system, the echo corresponding to the $m$-th pulse at frequency $f_k$ (with wave number $k = 2\pi/\lambda_k$) can be expressed as:

$$s_m(f_k) = \sum_{n=1}^{N} \sigma_n \exp\left[ j 2 k R_{m,n} \right],$$
where $R_{m,n} = \left\| \mathbf{r}_m - \mathbf{r}_n \right\|$ denotes the distance between the antenna position $\mathbf{r}_m$ and the scatterer position $\mathbf{r}_n$, and $\sigma_n$ is the complex scattering amplitude of the $n$-th scatterer. In ideal imaging, reconstruction is performed using exact geometric compensation:

$$\hat{s}_m(f_k) = s_m(f_k)\, \exp\left[ -j 2 k R_{m,n}^{\mathrm{ref}} \right],$$
where $R_{m,n}^{\mathrm{ref}}$ is computed from the nominal trajectory. If a trajectory deviation $\Delta \mathbf{r}_m$ exists, the actual distance becomes:

$$R_{m,n} = R_{m,n}^{\mathrm{ref}} + \Delta R_{m,n}, \qquad \Delta R_{m,n} \approx \frac{ \mathbf{r}_m^{\mathrm{ref}} - \mathbf{r}_n }{ R_{m,n}^{\mathrm{ref}} } \cdot \Delta \mathbf{r}_m .$$

This results in a residual phase error given by:

$$\Delta \phi_{m,n} = 2 k\, \Delta R_{m,n} .$$
A commonly used metric for evaluating image focusing quality, the coherence factor ($CF$), can be expressed as:

$$CF = \frac{ \left| \sum_{m,k} \sigma_n\, e^{ j \Delta \phi_{m,n} } \right| }{ \sum_{m,k} \left| \sigma_n \right| } = \left| \left\langle e^{ j \Delta \phi } \right\rangle \right| \approx \exp\left( -\frac{1}{2} \sigma_\phi^2 \right),$$

where $\sigma_\phi$ is the root-mean-square (rms) of the residual phase error. Combining with Equation (4), the threshold for the trajectory rms position error is given by:

$$\sigma_{R,\max} = \frac{ \sigma_{\phi,\max} }{ 2 k } = \frac{ \lambda }{ 4 \pi } .$$
In other words, when the rms of the antenna jitter exceeds approximately $\lambda/(4\pi) \approx 0.08\lambda$ (roughly $\lambda/12$), the focusing quality drops below 0.8 and the image begins to degrade noticeably. If the jitter approaches $\lambda/4$, so that $\sigma_\phi \approx \pi$, more than half of the main-lobe energy is lost, leading to severe defocusing and imaging failure.
In addition, the impact of antenna jitter on imaging accuracy varies with its direction in space. Axial (radial) errors, typically caused by rail straightness deviations or target eccentricity, directly affect the range term $\Delta R$ and therefore have the most significant influence on phase accuracy. In contrast, lateral errors (vertical or horizontal) introduce second-order phase terms under near-field conditions, with a magnitude on the order of $k \Delta x^2 / (2R)$. While their impact is relatively minor for the current imaging setup, they may still be non-negligible in short-range systems.
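To make the threshold in Equation (6) concrete, the following short numerical check (an illustrative sketch, not part of the measurement chain; the 10 GHz center frequency is taken from Table 1) draws Gaussian radial jitter and compares the empirical coherence factor with the Gaussian prediction $\exp(-\sigma_\phi^2/2)$:

```python
import numpy as np

# Illustrative check of CF ~ exp(-sigma_phi^2 / 2) under Gaussian radial jitter.
c = 3e8
f0 = 10e9                                   # center frequency (Table 1)
lam = c / f0
k = 2 * np.pi / lam

for sigma_R in (lam / (4 * np.pi), lam / 4):        # thresholds from Eq. (6)
    dR = np.random.normal(0.0, sigma_R, 100_000)    # radial position jitter
    dphi = 2 * k * dR                               # residual phase, Eq. (4)
    cf_empirical = np.abs(np.mean(np.exp(1j * dphi)))
    cf_theory = np.exp(-0.5 * (2 * k * sigma_R) ** 2)
    print(f"sigma_R = {sigma_R * 1e3:.2f} mm -> CF = {cf_empirical:.3f} "
          f"(theory {cf_theory:.3f})")
```

At $\sigma_R = \lambda/(4\pi)$ (about 2.4 mm at 10 GHz) the coherence factor already falls below the 0.8 quality level, and at $\sigma_R = \lambda/4$ it collapses to below 0.01, consistent with the defocusing behavior described above.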

2.3. The Proposed Three-Point Positioning Method

A laser tracker (LT) combines laser ranging and angular encoding to rapidly measure 3-D coordinates. Using fixed target spheres to define a reference frame and a corner-cube retro-reflector (CCR) on the object, the beam returns along the incident path for interferometric/phase processing; the measured range, azimuth, and elevation then yield the 3-D position via a spherical-to-Cartesian transformation (Equation (7)). Leveraging spherical trilateration, LTs offer fast, sub-millimeter accuracy but require costly opto-mechanics and a stable line of sight (LOS), limiting outdoor use. For portable radar localization that needs only millimeter-level accuracy, such sub-millimeter precision is unnecessary, motivating our low-cost three-sphere ranging scheme.
$$x = R \cos\phi \cos\theta, \qquad y = R \cos\phi \sin\theta, \qquad z = R \sin\phi ,$$

where $R$ is the distance obtained by combining measurements from the absolute distance meter and the interferometer, while $\theta$ and $\phi$ represent the azimuth and elevation angles, respectively.

2.3.1. Three-Point Localization Algorithm

To address this, we propose using three fixed-position metallic spheres with known spatial coordinates. During the frequency sweep of the imaging process, the one-dimensional range profiles to these spheres are extracted. Taking each sphere’s center as the origin and its corresponding range as the radius, three spheres are constructed in space. The unique intersection point of these three spheres represents the actual position of the antenna. A schematic illustration of this principle is shown in Figure 3.
The three spherical surfaces in Figure 3 can be represented by the following system of equations:
$$\begin{cases} (x - x_A)^2 + (y - y_A)^2 + (z - z_A)^2 = (d_A + R_A)^2 \\ (x - x_B)^2 + (y - y_B)^2 + (z - z_B)^2 = (d_B + R_B)^2 \\ (x - x_C)^2 + (y - y_C)^2 + (z - z_C)^2 = (d_C + R_C)^2 \end{cases}$$
This system of equations describes the geometric constraints between the antenna position $(x, y, z)$ and the surfaces of the three spheres. Here, $(x_A, y_A, z_A)$ denotes the center of sphere A with radius $R_A$; $(x_B, y_B, z_B)$ is the center of sphere B with radius $R_B$; and $(x_C, y_C, z_C)$ is the center of sphere C with radius $R_C$. To solve the system, any two equations (e.g., those for spheres A and B) can be subtracted and expanded to eliminate the quadratic terms, resulting in a linear equation in $(x, y, z)$:
$$2(x_B - x_A)x + 2(y_B - y_A)y + 2(z_B - z_A)z = x_B^2 - x_A^2 + y_B^2 - y_A^2 + z_B^2 - z_A^2 + (d_A + R_A)^2 - (d_B + R_B)^2 .$$
Applying the same operation to spheres A and C yields a second linear equation. Stacking the two linear equations gives the matrix form:

$$M P = d,$$

where the product $MP$ is an ordinary matrix-vector multiplication and

$$P = [x, y, z]^T, \quad P_A = [x_A, y_A, z_A]^T, \quad P_B = [x_B, y_B, z_B]^T, \quad P_C = [x_C, y_C, z_C]^T,$$

$$M = \begin{bmatrix} (P_B - P_A)^T \\ (P_C - P_A)^T \end{bmatrix} \in \mathbb{R}^{2 \times 3},$$

$$d = \frac{1}{2} \begin{bmatrix} \| P_B \|^2 - \| P_A \|^2 + (d_A + R_A)^2 - (d_B + R_B)^2 \\ \| P_C \|^2 - \| P_A \|^2 + (d_A + R_A)^2 - (d_C + R_C)^2 \end{bmatrix} .$$
The solution set of this underdetermined linear system is a line, where $P_0$ is a particular solution and $v$ is the null-space direction vector of $M$:

$$P = P_0 + \lambda v .$$
Substituting the line equation into any of the spherical equations (e.g., sphere A) yields Equation (15), which can be expanded into a quadratic equation in the parameter $\lambda$, as shown in Equation (16):

$$\| P_0 + \lambda v - P_A \|^2 = (d_A + R_A)^2 ,$$

$$a \lambda^2 + b \lambda + c = 0 .$$

The coefficients are given in Equation (17), and solving the quadratic with the root formula in Equation (18) provides two intersection points, of which the physically meaningful one is selected as the final antenna position:

$$a = v \cdot v, \quad b = 2 v \cdot (P_0 - P_A), \quad c = \| P_0 - P_A \|^2 - (d_A + R_A)^2 ,$$

$$\lambda_{1,2} = \frac{ -b \pm \sqrt{ b^2 - 4ac } }{ 2a } .$$
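A compact numerical sketch of this analytic solution (Equations (10)–(18)) is given below; the helper name is hypothetical, and $\rho_i = d_i + R_i$ denotes the measured range plus the known sphere radius, as in Equation (8):

```python
import numpy as np

# Sketch of the analytic three-sphere intersection, Equations (10)-(18).
def intersect_three_spheres(PA, PB, PC, rho_A, rho_B, rho_C):
    PA, PB, PC = (np.asarray(p, dtype=float) for p in (PA, PB, PC))
    M = np.stack([PB - PA, PC - PA])                 # 2x3 matrix of Eq. (12)
    d = 0.5 * np.array([
        PB @ PB - PA @ PA + rho_A**2 - rho_B**2,     # right-hand side, Eq. (13)
        PC @ PC - PA @ PA + rho_A**2 - rho_C**2,
    ])
    P0 = np.linalg.lstsq(M, d, rcond=None)[0]        # particular solution
    v = np.cross(M[0], M[1])                         # null-space direction of M
    # Substitute P = P0 + lam * v into sphere A, Eqs. (15)-(17).
    a = v @ v
    b = 2 * v @ (P0 - PA)
    c = (P0 - PA) @ (P0 - PA) - rho_A**2
    disc = b**2 - 4 * a * c
    if disc < 0:
        raise ValueError("no real intersection; use the least-squares fallback")
    lams = ((-b + np.sqrt(disc)) / (2 * a), (-b - np.sqrt(disc)) / (2 * a))
    return [P0 + lam * v for lam in lams]            # two candidates, Eq. (18)
```

Of the two returned candidates, the physically meaningful one (for example, the point on the radar side of the scene) is retained as the antenna position.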
Although the intersection of three spheres should theoretically yield a unique solution for the antenna's current position, in practice the position must be solved at every time step, and measurement noise in the range estimates can prevent the three spheres from intersecting exactly. In such cases, the quadratic equation may have no real solution, causing the analytical method to fail and making it highly sensitive to noise. To improve robustness, this study adopts a nonlinear least-squares strategy that estimates the antenna position by minimizing the residuals between the estimated and measured distances to the three reference spheres, as formulated in Equation (19):

$$\hat{P} = \arg\min_{P} \sum_{i=1}^{3} \left( \| P - P_i \| - (d_i + R_i) \right)^2 ,$$

where $P_i$ and $R_i$ denote the center and radius of the $i$-th reference sphere and $d_i$ is the measured radar-to-surface range.
The algorithm avoids the need for analytical intersection and guarantees a stable solution, providing enhanced fault tolerance and engineering applicability. It is well-suited for high-precision estimation of the antenna trajectory under range measurement errors and noisy conditions.
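A minimal sketch of this least-squares variant using SciPy follows; the function and variable names are illustrative, and the nominal (preset) trajectory point serves as a natural initial guess:

```python
import numpy as np
from scipy.optimize import least_squares

# Sketch of the robust estimate in Equation (19): minimize the mismatch between
# candidate-to-center distances and the measured values rho_i = d_i + R_i.
def locate_least_squares(centers, rhos, P_init):
    centers = np.asarray(centers, dtype=float)   # 3x3 array, one center per row
    rhos = np.asarray(rhos, dtype=float)

    def residuals(P):
        return np.linalg.norm(P - centers, axis=1) - rhos

    return least_squares(residuals, np.asarray(P_init, dtype=float)).x
```

Even when noise prevents the three spheres from intersecting exactly, the minimizer remains well defined, which is what gives the method its fault tolerance.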

2.3.2. Three-Point Localization Measurement Model

Based on the three-point localization principle, a measurement model for sparse 3D imaging can be established. Taking the synchronized scanning–rotation sampling scheme as an example, three low-scattering support rods of different heights are placed outside the target region, each mounting a metallic positioning sphere of potentially varying size. The corresponding scattering measurement model is illustrated in Figure 4. In this setup, the target is mounted on a turntable and rotates about the vertical axis, while the Tx/Rx antenna performs a vertical scanning motion along a linear rail. This allows the system to collect backscattered data from multiple azimuths and heights, forming a composite dataset for 3D imaging.
In Figure 4, the positioning spheres are placed outside the target region and must not obstruct the line of sight to the target. The monostatic radar (Tx/Rx antenna) is mounted on a vertical linear stage to scan the target mounted on a rotary turntable. This localization method can be extended to millimeter-level, portable radar applications. Compared to solutions such as laser trackers, the microwave three-point ranging approach significantly reduces cost and system complexity without sacrificing accuracy.

2.4. Three-Dimensional Imaging Parameters

In the imaging model, the rotation of the target on the turntable is equivalently transformed into the motion of the antenna rotating around a stationary target, as shown in Figure 1 and Figure 2. During this process, the antenna simultaneously performs vertical scanning. Under this equivalent configuration, the distance $d$ between the antenna and the target can be expressed as:

$$d = \sqrt{ (x - R \cos\theta)^2 + (y - R \sin\theta)^2 + (z - z')^2 } .$$
The transceiver antenna performs stepped vertical scanning along the vertical rail (its height denoted by $z'$). At each scanning position, the antenna transmits stepped-frequency electromagnetic waves at frequency $f$. Assuming the target is located at position $(x, y, z)$, the reflected signal undergoes a phase delay $2kd$, where $d$ is the antenna-target distance, $k = 2\pi f / c$, and $c$ is the speed of light. If the scattering coefficient of the target is $s(x, y, z)$, the received signal at each scanning position on the vertical axis can be expressed as:

$$E_s(\theta, f, z') = \iiint s(x, y, z)\, e^{ j 2 k d }\, dx\, dy\, dz .$$
Based on the received electromagnetic echo signal $E_s(\theta, f, z')$ and the theory of electromagnetic wave propagation, the target's scattering coefficient $s(x, y, z)$ can be determined accordingly:

$$s(x, y, z) = \iiint E_s(\theta, f, z')\, e^{ -j 2 k d }\, d\theta\, df\, dz' .$$
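Equation (22) can be read as a direct backprojection: each echo sample is multiplied by the conjugate of its propagation phase and coherently summed over all angles, frequencies, and rail positions. The sketch below (illustrative names; the triple loop favors clarity over speed) implements this sum on a voxel grid:

```python
import numpy as np

# Sketch of the phase-compensated reconstruction in Equation (22).
def backproject(Es, thetas, freqs, zs, voxels, R):
    """Es: complex echoes, shape (n_theta, n_f, n_z); voxels: (N, 3) points."""
    c = 3e8
    voxels = np.asarray(voxels, dtype=float)
    image = np.zeros(len(voxels), dtype=complex)
    for it, theta in enumerate(thetas):
        for iz, zp in enumerate(zs):
            ant = np.array([R * np.cos(theta), R * np.sin(theta), zp])
            dist = np.linalg.norm(voxels - ant, axis=1)    # Equation (20)
            for i_f, f in enumerate(freqs):
                k = 2 * np.pi * f / c
                image += Es[it, i_f, iz] * np.exp(-1j * 2 * k * dist)
    return np.abs(image)
```

To apply the proposed compensation, the nominal antenna coordinate `ant` is simply replaced by the position estimated with the three-point method at that sample.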
In the measurement process, the echo signal $E_s(\theta, f, z')$ is directly acquired by the receiving antenna. According to Equation (20), once the distance $d$ between the antenna and the target is known, phase compensation can be applied using Equation (22) to retrieve the scattering coefficient of the radar target. This imaging algorithm serves to validate the effectiveness of the proposed localization-based measurement model for 3D reconstruction. To ensure consistent spatial resolution in the reconstructed image, namely:

$$\delta_x = \delta_y = \delta_z ,$$

where $\delta_x$, $\delta_y$, and $\delta_z$ denote the spatial resolutions along the x-, y-, and z-axes, respectively, the following condition must be satisfied [5]:

$$\frac{ \lambda_0 }{ 2 \Theta } = \frac{ c }{ 2 B } = \frac{ \lambda_0 }{ 2 L_z } \sqrt{ y_0^2 + L_z^2 / 4 } ,$$
where $\lambda_0$ is the center wavelength of the measurement frequency, $\Theta$ is the angular range of the measurement, $B$ is the system's frequency bandwidth, $L_z$ is the length of the scanning rail, and $y_0$ is the vertical distance from the rail to the target origin. In addition, the sampling intervals determine the extent of the imaging region $(D_x, D_y, D_z)$:

$$D_y = \frac{ c }{ 2 \Delta f }, \quad D_x = \frac{ \lambda_0 }{ 2 \Delta\theta }, \quad \Delta z = \frac{ \lambda_{\min} }{ 4 } \sqrt{ 1 + (D_y + 2 y_0)^2 / (D_z + L_z)^2 } ,$$

where $\Delta f$ is the frequency-domain sampling interval, $\Delta\theta$ denotes the angular sampling interval, and $\Delta z$ represents the vertical scanning interval. At this point, the parameter settings for 3D imaging can be determined using Equations (24) and (25).
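As a worked check of Equations (23)–(25), the short sketch below reproduces the Table 1 choices from the rail length and standoff distance (values taken from Table 1; the computation itself is an illustration):

```python
import numpy as np

# Sketch reproducing the Table 1 parameter choices from Equations (23)-(25).
c = 3e8
lam0 = c / 10e9             # center wavelength at f_c = 10 GHz
Lz = 0.828                  # rail length (m)
y0 = 2.0                    # rail-to-target distance (m)

# Equation (24): equal resolution along all three axes.
delta = lam0 / (2 * Lz) * np.sqrt(y0**2 + Lz**2 / 4)   # vertical resolution
B = c / (2 * delta)         # bandwidth giving delta_y = delta
Theta = lam0 / (2 * delta)  # angular span (rad) giving delta_x = delta

print(f"resolution = {delta:.4f} m, B = {B / 1e9:.2f} GHz, "
      f"Theta = {np.degrees(Theta):.2f} deg")
# -> about 0.037 m, 4 GHz, and 23 deg, consistent with Table 1 and with the
#    8-12 GHz imaging sub-band used in Section 3
```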

3. Validation Setup

Figure 5 illustrates the schematic diagram of the system used for validation. The system consists of a UAV equipped with an onboard radar system, a turntable, three calibration spheres, and a computer processor. The UAV-borne radar is integrated within the airframe and performs electromagnetic transmission and reception (Tx/Rx). The program was executed in FEKO 2021 on a computer equipped with an Intel Core i7-13790F (2.10 GHz) processor and 32 GB RAM. CPU-based parallel acceleration was employed. In the FEKO electromagnetic simulation, an ideal point source is used as the transmitting source, and near-field reception is employed. The test target is positioned at the center of the turntable, while three reference spheres are placed away from the target area on the turntable to avoid blocking or interfering with the target’s scattering. To achieve cylindrical synthetic aperture measurement, the positions of the antennas and the turntable are continuously adjusted. In conventional cylindrical synthetic aperture configurations, the transmit–receive antenna pair performs vertical scanning at each angular step until the full azimuthal rotation is completed. In contrast, the “V”-shaped synthetic aperture configuration synchronizes the scanning of the transmit–receive antenna pair with the rotation of the target, both proceeding at constant speeds until the entire azimuthal coverage is achieved.
During the FEKO simulation for validating the proposed three-point localization theory, the first step involves positioning the target at the center of the coordinate origin (0, 0, 0). Based on the target’s size, three metallic reference spheres are placed outside the target region. In the example used in this study, three spheres with a radius of 0.1 m are located at (0.3, −0.7, 0), (0.54, 0.54, 0), and (1.2, 0.8, 0), respectively. In the second step, an electric dipole source with random position jitter is defined, along with near-field reception at jittered positions. A monostatic reception configuration is employed. The third step configures the physical optics (PO) method to compute the scattered fields. In the fourth step, the collected electric field data are processed using the three-point localization algorithm to reconstruct the 3D image of the target.
The choice of parameters in the 3D imaging process significantly affects the final image quality. This study adopts cylindrical-scanning-based measurement parameters (as listed in Table 1) to ensure image integrity and achieve approximately uniform spatial resolution in lateral, longitudinal, and vertical directions. The spatial resolution is 0.0375 m, and the imaging volume is defined as 1 m × 1 m × 1 m.
The proposed three-point localization method requires a sufficiently precise one-dimensional (1D) range profile to identify the ranges of the auxiliary calibration spheres. Therefore, a wide measurement bandwidth is necessary (16 GHz in Table 1). After extracting the sphere ranges and solving the three-point localization, only the central 4-GHz sub-band (8–12 GHz) is used for three-dimensional (3D) imaging. This strategy enables both accurate localization and high-fidelity 3D reconstruction. Additionally, the proposed three-point-based localization method is applicable to both conventional cylindrical sampling and advanced sparse sampling schemes. The sampling strategy does not affect the underlying localization theory presented in this study. Therefore, to enable a quantitative comparison of localization performance, cylindrical sampling is used as an example, and random positional jitter introduced during the sampling process is evaluated.
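The band-splitting strategy can be sketched as follows; the 2–18 GHz grid is an assumption consistent with the 16 GHz bandwidth and 10 GHz center frequency in Table 1:

```python
import numpy as np

# Sketch of the dual-band strategy: a wide sweep for ranging, a sub-band for imaging.
c = 3e8
freqs = np.arange(2e9, 18e9 + 1, 10e6)         # full 16 GHz sweep for ranging
print(f"ranging resolution: {c / (2 * (freqs[-1] - freqs[0])) * 100:.2f} cm")

sub = freqs[(freqs >= 8e9) & (freqs <= 12e9)]  # central 4 GHz band for imaging
print(f"imaging samples: {sub.size}")          # 401, matching N in Table 1
```

The roughly 1 cm range resolution of the full sweep is what makes the three sphere returns separable in the 1D profile, while the 401-point sub-band keeps the imaging data volume moderate.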
To further validate the effectiveness of the proposed trajectory-based localization method, four test targets with distinct scattering mechanisms are selected. First, a metallic sphere with a simple curved surface is used to verify the method under basic scattering conditions. Next, a cylindrical model is employed to represent a curved surface structure with linear scattering characteristics. Then, a dihedral model is adopted to simulate a planar structure exhibiting both specular and multiple scattering. Finally, a complex UAV model is selected to assess the method’s applicability under intricate scattering scenarios. Detailed information on the four targets is provided in Table 2.
Subsequently, simulations are performed for the four selected targets, followed by phase compensation to obtain their corresponding 3D images. Specifically, three types of phase compensation strategies are applied to the jittered cylindrical sampling data: (1) imaging with phase correction using the actual jitter values, (2) imaging with direct phase correction without any localization operation, and (3) imaging with phase correction based on the UAV trajectory estimated via the proposed three-point localization method.

4. Validation Results

The 3D radar imaging results presented in this study are based on amplitude-normalized images, with the amplitude values quantified in decibels (dB). This normalization technique ensures that the dynamic range of the imaging data is suitable for comparison and viewing, thereby enhancing the clarity of the reconstructed images. The use of dB as a unit for amplitude enhances the ability to differentiate signal intensities.

4.1. Trajectory Localization Results

In practical applications, both SAR imaging using UAV-based scanning of stationary targets and ISAR imaging using fixed radars for moving targets are subject to positional jitter. To evaluate the effectiveness of the proposed three-point-based localization method in compensating for trajectory jitter during cylindrical scanning, electromagnetic simulations are conducted using FEKO. In the simulation, random perturbations are introduced into the radar measurement positions. According to Equation (8), the first step involves extracting one-dimensional range profiles of the three reference spheres, as illustrated in Figure 6, yielding distance measurements $d_A$, $d_B$, and $d_C$. In Equation (8), the parameters $x_A, y_A, z_A$, $x_B, y_B, z_B$, $x_C, y_C, z_C$ and the radii $R_A$, $R_B$, and $R_C$ are known quantities, representing the centers and radii of the three reference spheres.
For each radar sampling position, a one-dimensional range profile is obtained, as illustrated in Figure 6. The first three peaks in the profile correspond to the distances from the radar to the surfaces of the three reference spheres, and the associated x-axis values represent $d_A$, $d_B$, and $d_C$ in Equation (8). Once these distances are extracted, they are substituted into Equations (9)–(18) to analytically compute the radar's spatial coordinates at each sampling point within the imaging system. Figure 7 illustrates the spatial positions of the radar during the sampling process: the "true position" represents the actual radar positions under jitter; the "preset position" corresponds to the ideal radar scanning path without jitter; and the "estimated position" refers to the radar positions estimated using the proposed three-point localization method under jitter conditions.
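The peak-extraction step can be sketched as follows (a simplified illustration with hypothetical names; it assumes the sphere returns are the first three resolvable peaks, as in Figure 6):

```python
import numpy as np
from scipy.signal import find_peaks

# Sketch of the "first-three-peaks" extraction from a 1D range profile.
def sphere_ranges(S, df, num_spheres=3, rel_height=0.1):
    """S: complex stepped-frequency samples on a uniform grid with spacing df."""
    profile = np.abs(np.fft.ifft(S, n=8 * len(S)))   # zero-padded range profile
    ranges = np.arange(profile.size) * 3e8 / (2 * df * profile.size)
    peaks, _ = find_peaks(profile, height=rel_height * profile.max())
    return ranges[peaks[:num_spheres]]               # d_A, d_B, d_C (near to far)
```

In a real profile, clutter ahead of sphere A would corrupt this simple rule, which is why the layout constraints of Section 5.3 keep the three sphere returns ahead of all target returns.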
As shown in Figure 7, the positions estimated by the proposed three-point localization method are significantly closer to the true positions than the preset positions without jitter compensation. Figure 8 presents the corresponding comparisons of the radar's x-, y-, and z-coordinate components at each sampling position.
Figure 8 confirms the same trend for each coordinate component. Once the spatial coordinates of the radar at all sampling points are determined, phase compensation can be applied to the imaging domain based on the estimated radar positions, enabling the reconstruction of the target's 3D image.
The discrepancy between the estimated and true values in Figure 8 arises from the ill-conditioning of the linearized three-sphere system and the limited observability along certain directions. When the three reflectors are nearly collinear from the radar’s viewpoint, subtend a small solid angle, or lie at markedly different range scales, the condition number of the matrix M in Equation (10) increases. As a result, small range-extraction errors are amplified, leading to reduced sensitivity in one coordinate component (predominantly the X component in Figure 8). In future work, we will enlarge the triangle area and the solid angle subtended by the three reflectors and avoid near-collinearity; where feasible, we will also use N > 3 reflectors and solve a weighted/robust least-squares problem to improve conditioning and outlier resistance.
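This geometric sensitivity can be screened before a measurement campaign; the sketch below (hypothetical helper, using the sphere layout of Section 3) compares the conditioning of a well-spread layout with a nearly collinear one:

```python
import numpy as np

# Sketch: condition number of M in Equation (10) as a layout sanity check.
def layout_condition(PA, PB, PC):
    M = np.stack([np.subtract(PB, PA), np.subtract(PC, PA)]).astype(float)
    return np.linalg.cond(M)      # large values flag ill-conditioned layouts

print(layout_condition((0.3, -0.7, 0), (0.54, 0.54, 0), (1.2, 0.8, 0)))  # spread
print(layout_condition((0.3, 0.0, 0), (0.6, 0.01, 0), (1.2, 0.02, 0)))   # near-collinear
```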

4.2. Three-Dimensional Imaging Results

First, the proposed three-point localization method is applied to estimate the radar positions for 3D imaging of the metallic sphere target. As shown in Figure 9, the imaging results are compared using phase compensation based on the radar’s true positions, preset positions, and the estimated positions obtained via the three-point localization method, according to Equations (20) and (22).
As illustrated in Figure 9, the proposed three-point positioning method enables accurate image reconstruction by effectively compensating for phase errors induced by platform jitter. The resulting image quality closely matches that obtained using the ground truth antenna positions. In contrast, using the pre-set nominal positions without accounting for position deviations leads to severe image degradation, with clutter and defocused features obscuring the actual target. Furthermore, to validate the robustness of this approach, Figure 10 presents imaging results for a representative linear scattering target (a cylindrical model), demonstrating the effectiveness of the three-point-based compensation under more complex scattering scenarios.
As shown in Figure 10, the imaging result of the cylindrical target is comparable to that of the metallic sphere. For the linear scattering structure, minor linear artifacts are observed in Figure 10c, but they have negligible impact on identifying the cylindrical target. The imaging performance under the proposed three-point positioning approach is significantly superior to that using the pre-defined position shown in Figure 10b. Subsequently, the effectiveness of this method is further evaluated on a planar coupled scattering target (dihedral model), with the results illustrated in Figure 11.
The imaging results of the dihedral target in Figure 11a and Figure 11c are highly consistent, confirming the validity of the proposed three-point positioning method. The observed scattering centers correspond to the three vertical edges of the dihedral structure and the coupled scattering generated within the interior angle. To further evaluate the method’s applicability to more complex geometries, a UAV target is imaged using the same approach, with results shown in Figure 12.
The UAV imaging results in Figure 12a and Figure 12c are highly consistent, further confirming the effectiveness and general applicability of the proposed three-point positioning method.
The above results demonstrate that the proposed three-point positioning method enables effective and generalized phase compensation for imaging a wide range of scattering targets. This approach is applicable across various types of scatterers, confirming its robustness and adaptability in diverse imaging scenarios.

5. Discussion

Following the validation of the proposed positioning method, further analysis is required to evaluate its localization accuracy, investigate its applicability in both SAR and ISAR imaging frameworks, examine the constraints on the placement of the three reference spheres, and address potential challenges in practical engineering implementations.

5.1. Discussion of Positioning Result

Accurate phase compensation is essential for successful 3D imaging, which requires knowledge of the radar's sampling position. In this paper, a three-point positioning method is proposed to determine the spatial location of the radar. This method relies on one-dimensional range profiles obtained via microwave measurements, from which the distances $d_A$, $d_B$, and $d_C$ in Equation (8) are extracted. Consequently, the accuracy of the 1D range profiles directly affects the subsequent position estimation using Equation (9). The measured distances and associated errors for the three reference spheres are presented in Figure 13; the distances and errors are expressed in meters (m).
The results in Figure 13 show that the distance estimation errors for all three reference spheres are approximately 2 mm. These distances are then used in the three-point positioning method to determine the radar’s spatial position, as shown in Figure 7, and subsequently derive the phase compensation. A comparison example of the obtained compensation phases is illustrated in Figure 14.
As shown in Figure 14a, the estimated phase closely follows the trend of the true phase. Substituting the estimated phase in Figure 14a into Equation (22) yields the 3-D image of the target under test. The error comparison in Figure 14b further indicates that the phase error of the estimated phase is significantly lower than that of the pre-set phase, which does not account for platform jitter. A detailed comparison of the distance and phase data obtained during the three-point positioning process is provided in Table 3.
Table 3. Performance evaluation of the proposed positioning method.

Metric | Pre-Set Trajectory Imaging | Proposed Positioning-Based Imaging
Mean trajectory localization error ¹ | 0.23 m | 0.06 m
Mean compensated phase error ² | 2.17 rad | 0.6 rad
Imaging quality | Defocused, resulting in imaging failure | Maintains a dynamic range of 15 dB

¹ Mean error of the trajectory shown in Figure 7. ² Mean error of the phase shown in Figure 14.
The results presented in Table 3, together with the analyses above, verify the effectiveness of the proposed positioning method. This method enables accurate 3D imaging and provides a solid theoretical foundation for subsequent imaging applications.
The FEKO-based simulations verify the feasibility and effectiveness of the proposed method in principle. However, real measurement environments introduce technical and practical limitations. The localization is sensitive to errors in the radial (range-axis) coordinate x of the three auxiliary spheres. Their coordinates therefore must be surveyed with high precision. Mutual scattering between the target and the environment and between the spheres and the environment can obscure peak selection when extracting the spheres’ slant ranges from one-dimensional range profiles. The coordinate-solving procedure should be refined to enable more reliable peak picking. Internal RF and system noise may bias the solution; denoising and robustness enhancements are therefore warranted. Future work should include real-world experiments and prioritize resolving these issues to improve engineering readiness and to enable accurate localization for sparse 3-D imaging during outdoor radar scans.

5.2. Application of the Positioning Method in SAR and ISAR Imaging

The positioning results and analysis presented above are based on a quasi-ISAR measurement setup, as illustrated in Figure 5. In this configuration, the three reference spheres remain fixed, while the target rotates independently on a turntable, and the radar (or UAV-mounted radar) performs vertical scanning. The trajectory obtained from Equation (8) under this setup requires a coordinate transformation to yield the trajectory shown in Figure 7. The coordinate transformation is expressed as follows.
$$x' = x \cos\theta, \qquad y' = x \sin\theta, \qquad z' = z ,$$

where $x$ and $z$ are obtained from the solution of Equation (8), and $\theta$ denotes the pre-defined azimuth angle. By applying the transformation in Equation (26), the actual spatial position of the radar can be derived, assuming the target remains stationary.
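As a small illustration (hypothetical helper name), the transform in Equation (26) maps each solved radar coordinate into the stationary-target frame:

```python
import numpy as np

# Sketch of Equation (26): rotate the solved (x, z) by the turntable azimuth theta.
def to_target_frame(x, z, theta):
    return np.array([x * np.cos(theta), x * np.sin(theta), z])
```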
When the proposed three-point positioning method is applied to SAR imaging, the corresponding validation setup is illustrated in Figure 15. During measurement, the target and the three reference spheres are mounted on a rotating platform, such that the reference spheres rotate synchronously with the target, while the radar (or UAV-mounted radar) performs vertical scanning. In this configuration, coordinate transformation is not required, and the radar trajectory shown in Figure 7c can be directly obtained using Equation (8). To achieve positioning results equivalent to those of the ISAR-based system, it is essential to acquire accurate one-dimensional range profiles of the three reference spheres. However, to obtain clearly distinguishable peaks in the 1D range domain—as demonstrated in Figure 6—the placement of the reference spheres in this system must satisfy specific spatial constraints.

5.3. Analysis of the Positioning Spheres Layout

In summary, the proposed positioning method is applicable to both ISAR and SAR imaging systems. However, the placement requirements for the three (or more) reference spheres differ between the two configurations. Regardless of the application system, the identification of auxiliary-sphere ranges from the one-dimensional (1D) range profile must follow the "first-three-peaks" principle illustrated in Figure 6. Specifically, the peaks associated with spheres A, B, and C shall correspond to the first three peaks in Figure 6 (ordered from near to far), and this ordering must remain unchanged across measurements. The correspondence between each auxiliary sphere and its peak position is therefore fixed and invariant. In the ISAR system, the reference spheres remain fixed throughout the measurement. Therefore, it is only necessary to ensure that, when the radar is at an azimuth angle of 0°, the one-dimensional range profiles of the three spheres are distinguishable at different elevation positions, and their order remains consistent. A schematic of the corresponding reference sphere arrangement for the ISAR scenario is shown in Figure 16.
In the design of the reference sphere placement shown in Figure 16, it is necessary to ensure that the spheres do not obstruct the target and that the order of their one-dimensional range peaks remains unchanged. To satisfy these conditions, the three reference spheres are placed on the plane $z = 0$, and their distances to the radar position must meet the following constraint:

$$R > d_A > d_B > d_C .$$
By taking the radar position as the center and the four distances as radii to form circles, it can be ensured that the one-dimensional range profile requirements are satisfied at all radar elevation positions.
In the SAR system, since both the target and the three reference spheres are mounted on the rotating platform, it is essential to ensure that, within the radar’s azimuthal rotation range, the one-dimensional range profiles of the three spheres remain distinguishable at different elevation positions and maintain a consistent order. A schematic of the corresponding reference sphere arrangement for this configuration is shown in Figure 17.
In the reference sphere placement design shown in Figure 17, it is necessary to ensure that the spheres do not obstruct the target and that the order of their one-dimensional range peaks remains consistent. Similarly, the three reference spheres are placed on the plane $z = 0$. Furthermore, their distances to the radar position must satisfy the following constraint:

$$d_M > d_A > d_B > d_C ,$$

where $d_M$ denotes the distance from the radar position at the edge of the observation line to the lower corner of the imaging region. To ensure that the scattering peak position of reference sphere A remains unaffected by the target, the condition $d_M > d_A$ must be satisfied. After determining the position of reference sphere A, a circle is drawn with the radar position at the left edge of the observation line as the center and $d_A$ as the radius. Reference sphere B is then placed within this circle to determine its position. Subsequently, another circle is drawn using the same radar position as the center and $d_B$ as the radius to ensure that reference sphere C is located within it. Once the positions of all three reference spheres are determined, the radar-to-sphere distance relationships along the entire observation line will satisfy Equation (28). Failure to meet this condition may result in incorrect identification of the 1D range peaks, leading to inaccurate radar localization and, consequently, failed 3D imaging of the target.

6. Conclusions

This study proposes a novel trajectory estimation method for 3D radar imaging, focusing on precise radar localization based on microwave three-point ranging. The method operates without any communication infrastructure, relying solely on three fixed microwave-reflective calibration spheres deployed outside the imaging region. By extracting one-dimensional range profiles during the radar scanning process and utilizing the known coordinates of the calibration sphere centers, the radar’s spatial position can be uniquely determined at each sampling moment. Building on this method, its applicability is further explored in both ISAR and SAR system configurations. For each scenario, a practical and geometry-constrained layout strategy of the reference spheres is developed to ensure clear peak discrimination and accurate phase compensation. These layout guidelines provide theoretical support for future engineering implementation. Validation results demonstrate that the one-dimensional ranging error remains within approximately 2 mm, and both the trajectory estimation accuracy and phase compensation precision are sufficient to meet the requirements of high-quality 3D imaging. The proposed method exhibits strong adaptability across different imaging scenarios and offers a robust, communication-independent localization solution for synchronized scanning systems.
It is worth noting that the current validation is conducted under ideal, clutter-free conditions with canonical targets. Future work should involve systematic quantitative evaluation on large-scale, distributed, or realistic targets, especially in outdoor environments affected by clutter, multipath effects, and other environmental factors. Such field-level experiments will serve as the next stage of practical validation and are essential for demonstrating the generalizability and real-world applicability of the proposed approach.
This work contributes significantly to the advancement of 3D radar imaging by improving the robustness of sparse sampling strategies and enabling high-precision localization on both ground-based and UAV-mounted platforms. Future research will focus on extending the method’s applicability to cluttered or multipath conditions, optimizing geometric configurations under engineering constraints, and integrating the approach with AI-driven reconstruction algorithms to support real-time, high-resolution radar imaging in complex environments.

Author Contributions

Conceptualization, C.L. and J.Z.; Data curation, C.L. and X.W.; Formal analysis, C.L. and Z.Y.; Investigation, C.L. and X.W.; Methodology, C.L. and J.Z.; Software, C.L.; Supervision, J.M., T.H. and J.Z.; Validation, C.L. and X.W.; Writing—original draft, C.L.; Writing—review and editing, J.Z., Z.Y. and X.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

Thanks to the editors and reviewers for their careful review and constructive suggestions, which helped improve the quality of the paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Hong, W.; Lin, Y.; Wei, L.; Zhang, H.; Feng, S.; Teng, F.; Wang, Y. Multi-aspect aperture synthesis for 3D radar imaging: A review. Sens. Imaging 2023, 25, 1. [Google Scholar] [CrossRef]
  2. Liang, J.; Zhang, Q.; Luo, Y.; Yuan, H.; Chen, Y. Three-dimensional imaging with bistatic vortex electromagnetic wave radar. Remote Sens. 2022, 14, 2972. [Google Scholar] [CrossRef]
  3. Wang, Y.; Zhang, X.; Zhan, X.; Zhang, T.; Zhou, L.; Shi, J.; Wei, S. An RCS measurement method using sparse imaging based 3-D SAR complex image. IEEE Antennas Wirel. Propag. Lett. 2021, 21, 24–28. [Google Scholar] [CrossRef]
  4. Wang, M.; Wei, S.; Liang, J.; Zeng, X.; Wang, C.; Shi, J.; Zhang, X. RMIST-Net: Joint range migration and sparse reconstruction network for 3-D mmW imaging. IEEE Trans. Geosci. Remote Sens. 2021, 60, 1–17. [Google Scholar] [CrossRef]
  5. Lou, C.; Zhao, J.; Wu, X.; Zhang, Y.; Yang, Z.; Li, J.; Miao, J. Efficient Sampling Schemes for 3D Imaging of Radar Target Scattering Based on Synchronized Linear Scanning and Rotational Motion. Remote Sens. 2025, 17, 2636. [Google Scholar] [CrossRef]
  6. Zhang, G.; Xu, Y.; Liu, C.; Xie, P.; Ma, W.; Lu, Y.; Kong, X. Study of the image motion compensation method for a vertical orbit dynamic scanning TDICCD space camera. Opt. Express 2023, 31, 41740–41755. [Google Scholar] [CrossRef] [PubMed]
  7. Zhou, L.; Deng, M.; He, J.; Wang, B.; Zhang, S.; Liu, X.; Wei, S. A HRWS SAR Motion Compensation Method with Multichannel Phase Correction. Remote Sens. 2024, 16, 3544. [Google Scholar] [CrossRef]
  8. Smith, J.W.; Torlak, M. Efficient 3-D near-field MIMO-SAR imaging for irregular scanning geometries. IEEE Access 2022, 10, 10283–10294. [Google Scholar] [CrossRef]
  9. Zhang, Y.; Huang, L.; Xu, Z.; Wang, Z.; Chen, B. Combined Motion Compensation Method for Long Synthetic Aperture Radar Based on Subaperture Processing. J. Mar. Sci. Eng. 2025, 13, 355. [Google Scholar] [CrossRef]
  10. Manzoni, M.; Rizzi, M.; Tebaldini, S.; Monti–Guarnieri, A.V.; Prati, C.M.; Tagliaferri, D.; Spagnolini, U. Residual motion compensation in automotive MIMO SAR imaging. In Proceedings of the 2022 IEEE Radar Conference (RadarConf22), New York, NY, USA, 21–25 March 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 1–7. [Google Scholar]
  11. Yang, L.; Zhang, F.; Sun, Y.; Chen, L.; Li, Z.; Wang, D. Motion Error Estimation and Compensation of Airborne Array Flexible SAR Based on Multi-Channel Interferometric Phase. Remote Sens. 2023, 15, 680. [Google Scholar] [CrossRef]
  12. Sun, X.; Chen, L.; Zhou, Z.; Du, H.; Huang, X. Impact analysis and compensation methods of frequency synchronization errors in distributed geosynchronous synthetic aperture radar. Remote Sens. 2024, 16, 1470. [Google Scholar] [CrossRef]
  13. Nacar, O.; Abdelkader, M.; Ghouti, L.; Gabr, K.; Al-Batati, A.; Koubaa, A. VECTOR: Velocity-enhanced GRU neural network for real-time 3D UAV trajectory prediction. Drones 2024, 9, 8. [Google Scholar] [CrossRef]
  14. Jing, X.; Liu, F.; Masouros, C.; Zeng, Y. ISAC from the sky: UAV trajectory design for joint communication and target localization. IEEE Trans. Wirel. Commun. 2024, 23, 12857–12872. [Google Scholar] [CrossRef]
  15. Gupta, A.; Fernando, X. Simultaneous localization and mapping (slam) and data fusion in unmanned aerial vehicles: Recent advances and challenges. Drones 2022, 6, 85. [Google Scholar] [CrossRef]
  16. Norbelt, M.; Luo, X.; Sun, J.; Claude, U. UAV Localization in Urban Area Mobility Environment Based on Monocular VSLAM with Deep Learning. Drones 2025, 9, 171. [Google Scholar] [CrossRef]
  17. Pan, Y.; Li, R.; Da, X.; Hu, H.; Zhang, M.; Zhai, D.; Dobre, O.A. Cooperative trajectory planning and resource allocation for UAV-enabled integrated sensing and communication systems. IEEE Trans. Veh. Technol. 2023, 73, 6502–6516. [Google Scholar] [CrossRef]
  18. Hou, Y.; Chen, D.; Yang, S. Adaptive robust trajectory tracking controller for a quadrotor UAV with uncertain environment parameters based on backstepping sliding mode method. IEEE Trans. Autom. Sci. Eng. 2023, 22, 4446–4456. [Google Scholar] [CrossRef]
  19. Muralikrishnan, B.; Phillips, S.; Sawyer, D. Laser trackers for large-scale dimensional metrology: A review. Precis. Eng. 2016, 44, 13–28. [Google Scholar] [CrossRef]
  20. Gai, Y.; Zhang, J.; Guo, J.; Shi, X.; Wu, D.; Chen, K. Construction and uncertainty evaluation of large-scale measurement system of laser trackers in aircraft assembly. Measurement 2020, 165, 108144. [Google Scholar] [CrossRef]
  21. Morgan-Owen, G.J.; Johnston, G.T. Differential GPS positioning. Electron. Commun. Eng. J. 1995, 7, 11–21. [Google Scholar] [CrossRef]
  22. Zhu, X.; Xu, Q.; Zhou, J.; Deng, M. Remote landslide observation system with differential GPS. Procedia Earth Planet. Sci. 2012, 5, 70–75. [Google Scholar] [CrossRef]
  23. Yanchun, Z.; Wei, L.; Rui, W.; Peipeng, Z.; Ranran, S. A test quality estimation method for UAV outdoor composite scattering measurement. IEEE Antennas Wirel. Propag. Lett. 2023, 23, 69–73. [Google Scholar] [CrossRef]
  24. Zhang, R.; Zhang, S.; Wang, Z.; Song, Z.; Bai, C. A Novel Analytical Method for Eliminating the Multi-Probe Array and Environment Interference in Spherical Near-Field Measurement. IEEE Trans. Instrum. Meas. 2025, 74, 6002714. [Google Scholar]
Figure 1. The traditional cylindrical sampling imaging system.
Figure 2. The synchronized scanning–rotation sampling imaging system.
Figure 3. Schematic diagram of three-point localization.
Figure 4. Imaging sampling model based on three-point ranging localization.
Figure 5. The diagram of the validation system.
Figure 6. One-dimensional range profiles of reference spheres.
Figure 7. Spatial positions of the radar during the sampling process: (a) true position (with jitter), (b) preset position (without jitter), and (c) estimated position (using the proposed three-point localization method).
Figure 8. Comparison of radar position coordinates along the (a) x-, (b) y-, and (c) z-axes.
Figure 9. Three-dimensional imaging results of one sphere using phase compensation based on the (a) true radar positions, (b) preset positions, and (c) estimated positions.
Figure 10. Three-dimensional imaging results of one cylinder using phase compensation based on the (a) true radar positions, (b) preset positions, and (c) estimated positions.
Figure 11. Three-dimensional imaging results of the dihedral model using phase compensation based on the (a) true radar positions, (b) preset positions, and (c) estimated positions.
Figure 12. Three-dimensional imaging results of the UAV model using phase compensation based on the (a) true radar positions, (b) preset positions, and (c) estimated positions.
Figure 13. Distances and estimation errors between the radar and the three reference spheres. For reference sphere 1, (a) the actual distance, (b) estimated distance, and (c) the corresponding error are reported. For reference sphere 2, (d) the actual distance, (e) estimated distance, and (f) the corresponding error are reported. For reference sphere 3, (g) the actual distance, (h) estimated distance, and (i) the corresponding error are reported.
Figure 14. Comparison of phase compensation results. (a) The true phase, pre-set phase, and estimated phase are presented for evaluation. (b) The phase errors of the pre-set and estimated phases relative to the true phase are also analyzed.
Figure 15. Schematic diagram of the SAR validation system.
Figure 16. Reference sphere layout in the ISAR system.
Figure 17. Reference sphere layout in the SAR system.
Table 1. Parameter settings for three-point-based localization imaging.

Symbol | Parameter | Description
P | VV | Polarization mode of the antenna
R | 2 m | Distance from the antenna to the target center
B | 16 GHz | Sweep bandwidth
f_c | 10 GHz | Center frequency
Θ | 23.52° | Angular sweep range
L_z | 0.828 m | Rail length
Δf | 10 MHz | Frequency interval
Δθ | 0.84° | Angular interval
Δz | 0.018 m | Rail scanning interval
N | 401 | Number of frequency sampling points
M | 29 | Number of angular sampling points
K | 47 | Number of rail sampling points
T | 1363 | Total number of samples (M × K)

Note: Parameter selection and calculations are based on Equations (23)–(25).
Table 2. Target information.

Target | Dimensions | Geometric Structure
One sphere | Radius: 10 cm | (image)
Cylinder model | Radius: 5 cm; Height (z): 20 cm | (image)
Dihedral model | Length (x): 20 cm; Width (y): 20 cm; Height (z): 20 cm | (image)
UAV model | Length (x): 30 cm; Width (y): 30 cm; Height (z): 8 cm | (image)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
