Article

Angular Super-Resolution of Forward-Looking Scanning Radar via Grid-Updating Split SPICE-TV

1 School of Information and Communication Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu 611731, China
2 Yangtze Delta Region Institute, University of Electronic Science and Technology of China (UESTC), Quzhou 324003, China
* Author to whom correspondence should be addressed.
Remote Sens. 2025, 17(14), 2533; https://doi.org/10.3390/rs17142533
Submission received: 28 May 2025 / Revised: 15 July 2025 / Accepted: 18 July 2025 / Published: 21 July 2025

Abstract

The sparse iterative covariance-based estimation (SPICE) method has recently gained significant traction in the field of scanning radar super-resolution imaging because of its angular resolution enhancement capability. However, it is unable to preserve the target profile, and the estimator is constrained by high computational complexity and memory consumption. In this paper, a grid-updating split SPICE-TV algorithm is presented. The method allows for the efficient updating of reconstruction results with both contour preservation and resolution enhancement, and a recursive grid-updating implementation framework of the split SPICE-TV reduces the computational complexity. First, the scanning radar angular super-resolution problem is transformed into a constrained optimization problem by simultaneously employing sparse covariance fitting criteria and TV regularization constraints. Then, the split Bregman method is employed to derive an efficient closed-form solution to the problem. Ultimately, the matrix inversion problem is transformed into an online iterative equation to reduce the computational complexity and memory consumption. The superiority of the proposed method is verified by simulation and experimental data.

1. Introduction

The scanning radar is widely utilized in various fields, including aircraft autonomous landing, material airdrop, and ground attack [1,2,3]. However, its angular resolution is limited by the size of the radar aperture in the forward-looking direction. Especially for airborne platforms, it is challenging to provide the necessary space resources [4,5], leading to low angular resolution. The conventional synthetic aperture radar (SAR) methodology encounters challenges in forward-looking imaging as a result of the concurrence of the equal-distance and equal-Doppler lines within the forward-looking zone [6,7,8]. Doppler beam sharpening (DBS) encounters difficulties in forward-looking imaging due to symmetry issues and the reduction in the Doppler centroid gradient [9,10]. Based on the scanning mode of the radar beam, angular super-resolution methods have been studied to improve the angular resolution, relying on the convolution signal model between the antenna pattern and the target scattering.
Super-resolution imaging methods have been proposed from different perspectives, such as spectral estimation-based methods [11,12,13], Bayesian methods [4,14,15], and regularization methods [16,17,18,19,20]. In [11], a fast conjugate gradient iterative adaptive algorithm-based spectral estimation approach was developed for forward-looking super-resolution imaging. This approach can accurately model velocity migration across different directions, decouple range and azimuth-elevation dependencies, and efficiently enhance target resolution. However, it only constructs the echo covariance matrix from a specific range bin, which is difficult to apply under low signal-to-noise ratio (SNR) and limited-snapshot conditions. Alternatively, an efficient Bayesian forward-looking super-resolution imaging algorithm [4] was proposed by reformulating the imaging task as a convex optimization problem. However, the method suffers from high computational complexity and sensitivity to the prior distribution. To avoid the reliance on idealized models in spectral estimation methods and the high computational costs of Bayesian approaches, regularization methods were proposed based on sparse $L_1$ norm constraints [20]. However, the penalty parameters of the regularization methods must be manually adjusted according to the actual environment.
To leverage sparsity, reference [21] introduced the sparse iterative covariance-based estimation (SPICE) method. Experimental results demonstrate that this method offers a higher angular resolution compared to the Iterative Adaptive Approach (IAA). However, when the target exhibits distinct edge characteristics, the SPICE method fails to reconstruct the detailed contours of the target. To reconstruct the contour information of the target, a total variation (TV) regularization method [22] has been introduced for scanning radar super-resolution imaging. This method uses the TV norm as a penalty term to preserve edge details and achieve high-quality imaging. It preserves the edge contours of the target, but it provides no significant enhancement in angular resolution.
To address the aforementioned challenge of simultaneously enhancing angular resolution and preserving target contours, researchers have developed multimodal fusion techniques [23,24,25]. For instance, reference [23] achieved significant improvement in emotion recognition accuracy by integrating local convolutional features with global transformer attention through cross-modal transformers. Similarly, in the radar domain, such multimodal fusion approaches [26,27,28,29] have demonstrated potential for enhancing detection capabilities.
To enhance angular resolution and preserve target contours simultaneously, a super-resolution imaging method for scanning radar called SPICE-TV was proposed in [30]. However, this method employs MATLAB’s CVX toolbox to solve the joint optimization problem of SPICE-TV and does not offer a closed-form solution, which leads to significant computational costs. Furthermore, a split SPICE-TV method [31] was proposed by deriving a closed-form solution of the SPICE-TV method using the split Bregman method. Nevertheless, this method necessitates the inversion of a high-dimensional matrix on numerous occasions, and it is only capable of batch processing the entirety of the scanned data.
To reduce computational complexity and enable real-time processing, several online angular super-resolution methods have been put forth, primarily based on beam recursion approaches. In [32], an online Tikhonov method is proposed to achieve real-time super-resolution capability, but it delivers limited resolution improvement. The subsequent online q-SPICE method [33] can significantly enhance angular resolution, but the improvement comes at the expense of losing target contour reconstruction. The total variation method [34] can address this limitation by accurately recovering target contours, though its angular resolution performance remains constrained.
In order to address the aforementioned issues, this paper introduces a grid-updating split SPICE-TV super-resolution method. The method allows for the efficient updating of reconstruction results with both contour preservation and resolution enhancement, and a recursive grid-updating implementation framework of the split SPICE-TV reduces the computational complexity. First, the scanning radar angular super-resolution problem is transformed into a constrained optimization problem by simultaneously employing sparse covariance fitting criteria and TV regularization constraints. Then, the split Bregman method is employed to derive an efficient closed-form solution to the problem. Ultimately, the matrix inversion problem is transformed into an online iterative equation to reduce the computational complexity and memory consumption. The proposed method offers two advantages. On the one hand, the spatial complexity of the proposed method remains constant throughout the beam recursion process and does not increase with time. On the other hand, the online grid-updating procedure reduces the number of iterations, improving the processing efficiency and imaging quality without additional computational burden.
The arrangement of this paper is as follows. Section 2 introduces the echo model of the real aperture scanning radar. Section 3 presents the proposed method. Section 4 and Section 5 provide the simulation and measured data results. Finally, Section 6 concludes the paper.

2. Problem Analysis

This section details the modeling of scanning radar echoes and provides a comparative retrospective analysis between sparse iterative covariance fitting estimation and traditional TV methods.

2.1. Echo Model

The geometric motion model of the forward-looking scanning radar, as illustrated in Figure 1, involves an aircraft flying at a height $H$ and moving with a velocity $v$ along the Y-axis. The radar beam sweeps counterclockwise at an angular velocity $\omega$, with a pitch angle $\alpha$ at time $t = 0$ when the aircraft is positioned at point A. At time $t = 0$, assume there is a target P in space located at a distance $R_0$ from point A, with a horizontal azimuth of $\varphi_0$ and a spatial azimuth of $\theta_0$. The geometric relationships yield the equation $\cos\theta_0 = \cos\varphi_0\cos\alpha$.
At time t, as the aircraft transitions from point A to point B, the target P is characterized by a horizontal azimuth φ relative to the aircraft and a spatial azimuth θ . The distance between the aircraft and the target P can be represented as
$$R(t) = \sqrt{R_0^2 + (vt)^2 - 2R_0vt\cos\theta_0}$$
Expanding R ( t ) in a Taylor series around t = 0 yields
$$R(t) \approx R_0 - v\cos\theta_0\, t + \frac{v^2\sin^2\theta_0}{2R_0}t^2$$
Given that $v^2t^2$ is significantly smaller than the target distance and $\sin\theta_0 \approx 0$ in the forward-looking region, the second-order Taylor expansion term becomes negligible [35]. Thus, the approximation for $R(t)$ simplifies to
$$R(t) \approx R_0 - v\cos\theta_0\, t$$
The forward-looking scanning radar emits wideband linear frequency modulation (FM) signals, resulting in
$$S(\tau) = \mathrm{rect}\left(\frac{\tau}{T_r}\right)\exp\left(j2\pi f_0\tau + j\pi K_r\tau^2\right)$$
where $\tau$ is the fast time variable in the range direction, $T_r$ is the pulse width of the linear FM signal, $f_0$ is the carrier frequency, $K_r$ is the frequency modulation slope, and $\mathrm{rect}(\cdot)$ is the window function:
$$\mathrm{rect}\left(\frac{\tau}{T_r}\right) = \begin{cases} 1, & |\tau| < \dfrac{T_r}{2}, \\ 0, & \text{otherwise}. \end{cases}$$
After down-conversion, the echo signal of target P can be expressed as
$$S(\tau, t) = \sigma_0 f(t)\,\mathrm{rect}\left(\frac{\tau - \tau_d}{T_r}\right)\exp\left(j\pi K_r(\tau - \tau_d)^2\right)\exp\left(-j2\pi f_0\tau_d\right)$$
where $\sigma_0$ is the target-scattering coefficient of P, $f(t)$ is the antenna pattern modulation function, $t$ is the slow time variable in the azimuth direction, $\tau_d = 2R(t)/c$ is the echo delay, and $c$ is the speed of the electromagnetic wave.
By applying pulse compression to the echo signal, we obtain
$$S_{rc}(\tau, t) = \sigma_0 f(t)\,\mathrm{sinc}\big(B(\tau - \tau_d)\big)\exp\left(-j2\pi f_0\tau_d\right)$$
where $\mathrm{sinc}(\cdot)$ is the range pulse compression response function, and $B$ is the transmitted signal bandwidth.
After pulse compression, in scenarios involving low-speed platforms, the Doppler term $\exp\left(-j2\pi f_0\tau_d\right)$ is disregarded. For all detectable targets, the azimuth echo signal is equivalent to the convolution between the antenna’s azimuth pattern function and the target-scattering characteristics, expressed as
$$\bar{S}_{rc}(\tau, t) = \iint \sigma_0 f(t)\,\mathrm{sinc}\big(B(\tau - \tau_d)\big)\,d\tau_d\,dt$$
Considering an arbitrary target in the scene with range $R$ and azimuth angle $\theta$, we can derive
$$\tau = \frac{2R}{c}, \qquad t = \frac{\theta - \theta_0}{\omega}$$
Substituting Equation (9) into Equation (8) yields
$$\bar{S}_{rc}(R, \theta) = \iint \sigma_0\,\mathrm{sinc}\left(\frac{2B}{c}\left(R - R_d\right)\right) f\left(\theta - \theta_0\right)\,dR\,d\theta$$
Equation (10) can be reformulated in convolution form
$$\bar{S}_{rc}(R, \theta) = \sigma(R, \theta) \otimes h(R, \theta)$$
where $h(R, \theta)$ is the antenna pattern function and $\sigma(R, \theta)$ is the effective scattering function. The discretization of (11) yields [36,37]
$$\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{e}$$
where $\mathbf{y} \in \mathbb{C}^N$ represents the echo, $N$ is the number of azimuth samples, $\mathbf{x} \in \mathbb{C}^N$ is the vector of target-scattering coefficients, $\mathbf{e} \in \mathbb{C}^N$ is the additive noise, and $\mathbf{H} \in \mathbb{C}^{N \times N}$ is the antenna pattern matrix.
In this paper, the echoes received while the beam scans into and out of the scene are neglected, prompting the adoption of a truncated antenna pattern matrix, accurately expressed as
$$\mathbf{H} = \begin{bmatrix} h(\theta_0) & \cdots & h(\theta_l) & & \\ \vdots & \ddots & & \ddots & \\ h(\theta_{-l}) & & \ddots & & h(\theta_l) \\ & \ddots & & \ddots & \vdots \\ & & h(\theta_{-l}) & \cdots & h(\theta_0) \end{bmatrix}_{N \times N}$$
where $h(\theta_{-l}), \ldots, h(\theta_0), \ldots, h(\theta_l)$ represent samples of the antenna pattern. The number of samples is determined by the pulse repetition frequency, beamwidth, and antenna scanning velocity.
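For concreteness, the following sketch (a minimal, illustrative construction in Python; the Gaussian-shaped main lobe is our own assumption, since in practice the samples would come from the measured antenna pattern) builds the truncated banded convolution matrix $\mathbf{H}$ defined above.

```python
import numpy as np

def antenna_pattern_matrix(N, pattern):
    """Build the truncated N x N antenna-pattern convolution matrix.

    `pattern` holds the 2l+1 samples h(theta_-l), ..., h(theta_0), ..., h(theta_l),
    whose number is set by the PRF, beamwidth, and scanning velocity.
    """
    L = len(pattern)
    l = L // 2
    H = np.zeros((N, N))
    for i in range(N):
        for k in range(-l, l + 1):        # h(theta_k) sits on the k-th diagonal of row i
            j = i + k
            if 0 <= j < N:
                H[i, j] = pattern[k + l]
    return H

# Notional Gaussian main lobe sampled at 21 azimuth positions (assumed values).
theta = np.linspace(-1.5, 1.5, 21)        # degrees, roughly one 3-degree beamwidth
pattern = np.exp(-4 * np.log(2) * (theta / 3.0) ** 2)
H = antenna_pattern_matrix(200, pattern)  # H.shape == (200, 200)
```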

2.2. Sparse Iterative Covariance Fitting Estimation

The deconvolution operation in the above model poses an ill-posed problem, typically addressed using regularization methods. To estimate sparse distributed targets in practical scenarios such as airports and seaports, the sparse covariance fitting criterion is defined as follows:
$$f = \left\| \mathbf{R}^{-1/2}\left( \mathbf{y}\mathbf{y}^{H} - \mathbf{R} \right) \right\|_F^2$$
where
$$\mathbf{R} = \mathbf{A}\mathbf{B}\mathbf{A}^{*}$$
$$\mathbf{A} = \begin{bmatrix} \mathbf{H} & \mathbf{I} \end{bmatrix} = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_{K+N} \end{bmatrix}$$
$$\mathbf{B} = \mathrm{diag}\left( |x_1|^2, |x_2|^2, \ldots, |x_K|^2, \sigma_1, \sigma_2, \ldots, \sigma_N \right) = \mathrm{diag}\left( b_1, b_2, \ldots, b_{K+N} \right)$$
where $\|\cdot\|_F$ denotes the Frobenius norm, $\mathbf{I}$ is the $N \times N$ identity matrix, $\mathbf{a}_i$ represents the $i$-th column of matrix $\mathbf{A}$, $\mathrm{diag}(\cdot)$ indicates a diagonal matrix, and $\sigma_n$ represents the noise variance. Referring to [38], minimizing Equation (14) leads to
$$\min_{p_n > 0} \; \mathbf{y}^{H}\mathbf{R}^{-1}\mathbf{y} + \left\| \mathbf{W}\mathbf{p} \right\|_1$$
where $w_j = \|\mathbf{a}_j\|_2^2 / \|\mathbf{y}\|_2^2$, $j = 1, 2, \ldots, K+N$, and $\mathbf{W} = \mathrm{diag}\left( w_1, w_2, \ldots, w_{K+N} \right)$.
Since the constraint term of SPICE involves a weighted $L_1$ norm, this method demonstrates significant advantages in enhancing target resolution; however, it fails to preserve the edge information of the targets. Consequently, it is not suitable for reconstructing targets with sharp outlines. While iterative reweighting and the MM method can be employed for batch processing, they tend to have high complexity. To mitigate this, online grid-updating can be performed using the loop minimization criterion [33], which helps reduce the overall complexity.
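As an aside, a minimal sketch (our own Python illustration) of the SPICE weighting matrix $\mathbf{W}$ defined above under the $\mathbf{A} = [\mathbf{H}\;\;\mathbf{I}]$ construction:

```python
import numpy as np

def spice_weight_matrix(H, y):
    """W = diag(w_1, ..., w_{K+N}) with w_j = ||a_j||^2 / ||y||^2 for A = [H  I]."""
    N = H.shape[0]
    A = np.hstack([H, np.eye(N)])                     # A = [H  I]
    w = np.sum(np.abs(A) ** 2, axis=0) / np.linalg.norm(y) ** 2
    return np.diag(w)
```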

2.3. Traditional TV Method

Although the aforementioned method significantly enhances the resolution of point targets, it is ineffective in reconstructing the edge contours and scale information of extended targets. As an effective image processing technique, the total variation (TV) can preserve edge information and suppress noise by incorporating total variation regularization. This approach is more effective in reconstructing the edge contours and scale information of extended targets, thereby improving the overall quality and usability of the image. The total variation norm is conventionally expressed using a 2D gradient operator:
$$\|\mathbf{x}\|_{TV} = \sum_{i,j}\left( \left| \nabla_{\mathrm{azimuth}}\, x_{i,j} \right| + \left| \nabla_{\mathrm{range}}\, x_{i,j} \right| \right)$$
where $\nabla_{\mathrm{azimuth}}$ and $\nabla_{\mathrm{range}}$ represent the difference operators in the azimuthal and range directions, respectively. To address the low azimuthal resolution issue in scanning radar, this paper employs the anisotropic total variation norm to obtain gradient information in the azimuthal direction of targets:
$$\|\mathbf{x}\|_{TV} = \left\| \nabla\mathbf{x} \right\|_1$$
where $\nabla$ denotes the azimuthal difference operator; for conciseness of the optimization expression, the subscript “azimuth” is omitted. The azimuthal difference operator is defined as $(\nabla\mathbf{x})_i = x_{i+1} - x_i$, $i = 1, 2, \ldots, N-1$, and thus $\nabla$ can be represented in matrix form as
$$\nabla = \begin{bmatrix} -1 & 1 & & & \\ & -1 & 1 & & \\ & & \ddots & \ddots & \\ & & & -1 & 1 \\ & & & & -1 \end{bmatrix}$$
The total variation norm fundamentally constitutes a gradient-based operator. In radar-imaging applications, target boundaries exhibit substantial amplitude fluctuations, resulting in enhanced gradient magnitudes. Leveraging these properties by imposing the total variation norm as a regularization constraint enables accurate edge feature recovery. Consequently, the corresponding objective function takes the form
$$J = \min_{\mathbf{x}} \left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 + \mu\left\| \nabla\mathbf{x} \right\|_1$$
where $\|\mathbf{y} - \mathbf{H}\mathbf{x}\|_2^2$ serves as the data fidelity term, approximating the original target. $\mu\|\nabla\mathbf{x}\|_1$ is utilized to maintain edge constraints on the azimuthal angle’s edge details. $\mu$ denotes the regularization parameter for the TV term, balancing the level of resolution and edge constraint.
While Equation (22) can be addressed through batch processing using either the Bregman or MM methods, its computational complexity remains prohibitively high for online implementation. As demonstrated in [34], leveraging inter-pulse correlation allows the conversion of matrix inversion into iterative matrix multiplication operations. This approach enables sequential echo processing for real-time reconstruction updates, thereby facilitating dynamic target tracking in forward-looking scenarios.
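To make the operators concrete, a minimal sketch (our own Python illustration) of the azimuthal difference matrix $\nabla$ and the anisotropic TV objective of Equation (22):

```python
import numpy as np

def difference_operator(N):
    """Azimuthal first-difference matrix: -1 on the main diagonal, +1 on the superdiagonal."""
    return -np.eye(N) + np.eye(N, k=1)

def tv_objective(x, y, H, mu):
    """Data fidelity plus anisotropic TV penalty, as in Equation (22)."""
    G = difference_operator(len(x))
    return np.linalg.norm(y - H @ x) ** 2 + mu * np.linalg.norm(G @ x, 1)
```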

3. Proposed Method

In this section, the cost function of the SPICE-TV method is first derived and solved using the split Bregman approach. Subsequently, based on the derived solution, the matrix inversion problem is transformed into an iterative processing task, significantly reducing the computational complexity. Finally, a comprehensive complexity analysis is conducted.

3.1. Split SPICE-TV Method

As demonstrated in the previous section, existing online methods incorporate only a single constraint term [33,34]. This limitation results in suboptimal imaging performance when both sparse and extended targets coexist in the scene, significantly hindering their practical applicability.
Our idea is to combine the high-resolution advantage brought by SPICE with the contour-preserving capability brought by the TV regularization term, and to use the above online processing ideas to derive a real-time super-resolution processing technique with sparse and extended target imaging capability. The detailed derivation of the technique is as follows.
Assuming a uniform noise variance, i.e., $\sigma_i = \sigma$ for all $i$, the SPICE and LASSO problems are equivalent [31]:
$$\min_{\mathbf{x}} \left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2 + \left\| \mathbf{D}\mathbf{x} \right\|_1$$
where the weighting matrix satisfies $\mathbf{D} = \mathrm{diag}\left( \dfrac{\|\mathbf{h}_1\|_2^2}{N\|\mathbf{y}\|_2^2}, \dfrac{\|\mathbf{h}_2\|_2^2}{N\|\mathbf{y}\|_2^2}, \ldots, \dfrac{\|\mathbf{h}_K\|_2^2}{N\|\mathbf{y}\|_2^2} \right)$.
Equation (23) presents an optimization problem in the form of a Lagrangian, structured as
$$\min_{\mathbf{x}} \left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 \quad \mathrm{s.t.} \quad \left\| \mathbf{D}\mathbf{x} \right\|_1 \leq \varepsilon$$
where the constraint bound $\varepsilon$ and the weighting matrix $\mathbf{D}$ are related by a bidirectional mapping. In this paper, the following weighted LASSO problem is iteratively addressed:
$$J = \min_{\mathbf{x}} \left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 + \left\| \mathbf{D}\mathbf{x} \right\|_1$$
Substituting the total variation norm into Equation (25), we obtain a new cost function:
$$J = \min_{\mathbf{x}} \left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 + \left\| \mathbf{D}\mathbf{x} \right\|_1 + \mu\left\| \nabla\mathbf{x} \right\|_1$$
where $\|\mathbf{y} - \mathbf{H}\mathbf{x}\|_2^2$ is the data fidelity term, approximating the original target. $\|\mathbf{D}\mathbf{x}\|_1$ represents the sparse weighted constraint, ensuring sparsity. $\mu\|\nabla\mathbf{x}\|_1$ is employed to maintain edge constraints on the azimuthal angle’s edge details. $\mu$ denotes the regularization parameter for the TV term, balancing the degree of resolution and edge constraints. To solve Equation (26), we introduce intermediate variables, transforming the above equation into a constrained problem:
$$\hat{\mathbf{x}} = \min_{\mathbf{x}} \frac{1}{2}\left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 + \left\| \mathbf{d}_1 \right\|_1 + \mu\left\| \mathbf{d}_2 \right\|_1 \quad \mathrm{s.t.} \quad \mathbf{d}_1 = \mathbf{D}\mathbf{x}, \; \mathbf{d}_2 = \nabla\mathbf{x}$$
By eliminating constraints, the above equation can be reformulated into an unconstrained problem:
$$\hat{\mathbf{x}} = \min_{\mathbf{x}, \mathbf{d}_1, \mathbf{d}_2} \frac{1}{2}\left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 + \left\| \mathbf{d}_1 \right\|_1 + \mu\left\| \mathbf{d}_2 \right\|_1 + \frac{\gamma_1}{2}\left\| \mathbf{d}_1 - \mathbf{D}\mathbf{x} \right\|_2^2 + \frac{\gamma_2}{2}\left\| \mathbf{d}_2 - \nabla\mathbf{x} \right\|_2^2$$
The parameters μ , γ 1 , and  γ 2 play distinct yet complementary roles in the regularization framework: μ controls the strength of TV regularization, with higher values producing smoother solutions; γ 1 regulates the weight of the covariance fitting term in SPICE, where increased values enhance sparsity in the dictionary domain and lead to sharper peaks in the imaging results; while γ 2 balances the relative contribution of the TV regularization term, with larger values promoting greater spatial smoothness through synergistic interaction with μ . It should be noted that while γ 2 globally modulates the influence of the TV term, μ directly governs the shrinkage threshold in the regularization process. Solving Equation (28) using the Bregman method yields
$$\left( \mathbf{x}^{k+1}, \mathbf{d}_1^{k+1}, \mathbf{d}_2^{k+1} \right) = \min_{\mathbf{x}, \mathbf{d}_1, \mathbf{d}_2} \frac{1}{2}\left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 + \left\| \mathbf{d}_1 \right\|_1 + \mu\left\| \mathbf{d}_2 \right\|_1 + \frac{\gamma_1}{2}\left\| \mathbf{d}_1 - \mathbf{D}\mathbf{x} - \mathbf{b}_1^k \right\|_2^2 + \frac{\gamma_2}{2}\left\| \mathbf{d}_2 - \nabla\mathbf{x} - \mathbf{b}_2^k \right\|_2^2$$
where
$$\mathbf{b}_1^{k+1} = \mathbf{b}_1^k + \mathbf{D}\mathbf{x}^{k+1} - \mathbf{d}_1^{k+1}$$
$$\mathbf{b}_2^{k+1} = \mathbf{b}_2^k + \nabla\mathbf{x}^{k+1} - \mathbf{d}_2^{k+1}$$
By separating variables, Equation (29) can be split into the following three subproblems.

3.1.1. Subproblem 1

The equation for solving x is as follows:
$$\mathbf{x}^{k+1} = \min_{\mathbf{x}} \frac{1}{2}\left\| \mathbf{y} - \mathbf{H}\mathbf{x} \right\|_2^2 + \frac{\gamma_1}{2}\left\| \mathbf{d}_1^k - \mathbf{D}\mathbf{x} - \mathbf{b}_1^k \right\|_2^2 + \frac{\gamma_2}{2}\left\| \mathbf{d}_2^k - \nabla\mathbf{x} - \mathbf{b}_2^k \right\|_2^2$$
The iterative equation for the solution of Equation (32) is as follows:
$$\mathbf{x}^{k+1} = \left( \mathbf{H}^T\mathbf{H} + \gamma_1\mathbf{D}^T\mathbf{D} + \gamma_2\nabla^T\nabla \right)^{-1}\left( \mathbf{H}^T\mathbf{y} + \gamma_1\mathbf{D}^T\left( \mathbf{d}_1^k - \mathbf{b}_1^k \right) + \gamma_2\nabla^T\left( \mathbf{d}_2^k - \mathbf{b}_2^k \right) \right)$$
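A minimal sketch (our own Python illustration; the online variant in Section 3.2 avoids the explicit inverse) of this closed-form x-update in Equation (33):

```python
import numpy as np

def x_update(H, y, D, G, d1, b1, d2, b2, gamma1, gamma2):
    """Closed-form solution of subproblem 1 (Equation (33)); G plays the role of the operator nabla."""
    A = H.T @ H + gamma1 * D.T @ D + gamma2 * G.T @ G
    rhs = H.T @ y + gamma1 * D.T @ (d1 - b1) + gamma2 * G.T @ (d2 - b2)
    return np.linalg.solve(A, rhs)            # equivalent to A^{-1} @ rhs, but numerically safer
```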

3.1.2. Subproblem 2

The equation for solving d is as follows:
$$\mathbf{d}_1^{k+1} = \min_{\mathbf{d}_1} \left\| \mathbf{d}_1 \right\|_1 + \frac{\gamma_1}{2}\left\| \mathbf{d}_1 - \mathbf{D}\mathbf{x}^{k+1} - \mathbf{b}_1^k \right\|_2^2$$
$$\mathbf{d}_2^{k+1} = \min_{\mathbf{d}_2} \mu\left\| \mathbf{d}_2 \right\|_1 + \frac{\gamma_2}{2}\left\| \mathbf{d}_2 - \nabla\mathbf{x}^{k+1} - \mathbf{b}_2^k \right\|_2^2$$
The iterative equations for the solutions of Equations (34) and (35) are as follows:
$$\mathbf{d}_1^{k+1} = \mathrm{shrink}\left( \mathbf{D}\mathbf{x}^{k+1} + \mathbf{b}_1^k, \frac{1}{\gamma_1} \right)$$
$$\mathbf{d}_2^{k+1} = \mathrm{shrink}\left( \nabla\mathbf{x}^{k+1} + \mathbf{b}_2^k, \frac{\mu}{\gamma_2} \right)$$
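Here $\mathrm{shrink}(\cdot, \tau)$ is the standard element-wise soft-thresholding operator; a one-function sketch (our own illustration):

```python
import numpy as np

def shrink(v, tau):
    """Soft thresholding: shrink(v, tau)_i = sign(v_i) * max(|v_i| - tau, 0)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# d1-update of Equation (36):  d1 = shrink(D @ x + b1, 1.0 / gamma1)
# d2-update of Equation (37):  d2 = shrink(G @ x + b2, mu / gamma2)
```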

3.1.3. Subproblem 3

b can be solved by Equations (30) and (31).

3.2. Proposed Grid-Updating Split SPICE-TV Method

The echo matrix $\mathbf{Y}$ is composed of multiple pulse echoes, and thus it can be expressed as $\mathbf{Y}_n = \left[ y_0, y_1, \ldots, y_n \right]^T \in \mathbb{C}^{N \times N}$. Similarly, the antenna convolution matrix $\mathbf{H}$ can be written as $\mathbf{H}_n = \left[ \mathbf{h}_1, \mathbf{h}_2, \ldots, \mathbf{h}_N \right]^T \in \mathbb{C}^{N \times N}$. The echo matrix $\mathbf{Y}$ and the antenna convolution matrix $\mathbf{H}$ can be rewritten as
$$\mathbf{H}_n = \begin{bmatrix} \mathbf{H}_{n-1} \\ \mathbf{h}_n \end{bmatrix}, \qquad \mathbf{Y}_n = \begin{bmatrix} \mathbf{Y}_{n-1} \\ y_n \end{bmatrix}$$
Then the solution of the above three subproblems can be rewritten as follows.

3.2.1. Subproblem 1

Substituting Equation (38) into Equation (33), the equation for solving x can be rewritten as
$$\mathbf{x}^{n+1} = \mathbf{p}^{n+1}\left( \mathbf{H}_{n+1}^T\mathbf{Y}_{n+1} + \gamma_1\mathbf{D}^T\left( \mathbf{d}_1^n - \mathbf{b}_1^n \right) + \gamma_2\nabla^T\left( \mathbf{d}_2^n - \mathbf{b}_2^n \right) \right)$$
where
$$\mathbf{p}^{n+1} = \left( \mathbf{H}_{n+1}^T\mathbf{H}_{n+1} + \gamma_1\mathbf{D}^T\mathbf{D} + \gamma_2\nabla^T\nabla \right)^{-1} = \left( \mathbf{H}_n^T\mathbf{H}_n + \mathbf{h}_{n+1}^T\mathbf{h}_{n+1} + \gamma_1\mathbf{D}^T\mathbf{D} + \gamma_2\nabla^T\nabla \right)^{-1}$$
The relationship between p n + 1 and p n is as follows:
$$\left( \mathbf{p}^{n+1} \right)^{-1} = \left( \mathbf{p}^{n} \right)^{-1} + \mathbf{h}_{n+1}^T\mathbf{h}_{n+1}$$
The recursive iterative formula for Equation (41) can be derived from the matrix inversion formula:
$$\mathbf{p}^{n+1} = \mathbf{p}^{n} - \frac{\mathbf{p}^{n}\mathbf{h}_{n+1}^T\mathbf{h}_{n+1}\mathbf{p}^{n}}{1 + \mathbf{h}_{n+1}\mathbf{p}^{n}\mathbf{h}_{n+1}^T}$$
Equation (39) can be rewritten as
$$\mathbf{x}^{n+1} = \left( \mathbf{p}^{n} - \frac{\mathbf{p}^{n}\mathbf{h}_{n+1}^T\mathbf{h}_{n+1}\mathbf{p}^{n}}{1 + \mathbf{h}_{n+1}\mathbf{p}^{n}\mathbf{h}_{n+1}^T} \right)\left( \mathbf{H}_{n+1}^T\mathbf{Y}_{n+1} + \gamma_1\mathbf{D}^T\left( \mathbf{d}_1^n - \mathbf{b}_1^n \right) + \gamma_2\nabla^T\left( \mathbf{d}_2^n - \mathbf{b}_2^n \right) \right)$$
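Equation (42) is a rank-one (Sherman–Morrison) update; a minimal sketch (our own Python illustration, treating the new pulse row $\mathbf{h}_{n+1}$ as a $1 \times N$ real-valued row):

```python
import numpy as np

def rank_one_update(P, h_new):
    """Update P_{n+1} from P_n and the new pulse row h_{n+1}, as in Equation (42)."""
    h = np.asarray(h_new).reshape(1, -1)      # 1 x N row of the antenna pattern matrix
    Ph = P @ h.T                               # N x 1
    denom = 1.0 + float(h @ Ph)                # scalar: 1 + h P h^T
    return P - (Ph @ (h @ P)) / denom
```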

3.2.2. Subproblem 2

The equation for solving d can be rewritten as
$$\mathbf{d}_1^{n+1} = \mathrm{shrink}\left( \mathbf{D}\mathbf{x}^{n+1} + \mathbf{b}_1^n, \frac{1}{\gamma_1} \right)$$
$$\mathbf{d}_2^{n+1} = \mathrm{shrink}\left( \nabla\mathbf{x}^{n+1} + \mathbf{b}_2^n, \frac{\mu}{\gamma_2} \right)$$

3.2.3. Subproblem 3

The equation for solving b can be rewritten as
$$\mathbf{b}_1^{n+1} = \mathbf{b}_1^n + \mathbf{D}\mathbf{x}^{n+1} - \mathbf{d}_1^{n+1}$$
$$\mathbf{b}_2^{n+1} = \mathbf{b}_2^n + \nabla\mathbf{x}^{n+1} - \mathbf{d}_2^{n+1}$$
Through the above derivation, we have transformed the matrix inversion problem into an iterative processing problem. The following pseudocode can be used for the online updating of echo reconstruction (Algorithm 1).
The online method significantly reduces the number of iterations via real-time updating. Unlike traditional batch methods that require recomputing global data (matrix inversion) in each iteration, the online method incrementally updates the solution using only the current sample. This strategy avoids redundant calculations: each step processes only the newly arrived data ($\mathbf{h}_n$ and $y_n$ in Equation (38)), drastically lowering the computational cost per iteration. Moreover, incremental updates enable faster adaptation to the data dynamics, particularly for streaming or large-scale scenarios. The real-time nature accelerates progress toward the convergence criterion $\|\mathbf{x}^{n+1} - \mathbf{x}^n\| < \varepsilon$, ultimately reducing the total number of iterations required.
Algorithm 1 The pseudocode of the proposed method.
Initialize: $n = 0$, $\mathbf{x}^n = \mathbf{y}$, $\mathbf{d}_1^n = \mathbf{0}$, $\mathbf{d}_2^n = \mathbf{0}$, $\mathbf{b}_1^n = \mathbf{0}$, $\mathbf{b}_2^n = \mathbf{0}$, $\mathbf{p}^n = \left( \gamma_1\mathbf{D}^T\mathbf{D} + \gamma_2\nabla^T\nabla \right)^{-1}$
Grid Update Procedure:
While $n < N$ and $\left\| \mathbf{x}^{n+1} - \mathbf{x}^{n} \right\|_2 > \varepsilon$
1. Covariance matrix update: $\mathbf{p}^{n+1} = \mathbf{p}^{n} - \dfrac{\mathbf{p}^{n}\mathbf{h}_{n+1}^T\mathbf{h}_{n+1}\mathbf{p}^{n}}{1 + \mathbf{h}_{n+1}\mathbf{p}^{n}\mathbf{h}_{n+1}^T}$
2. Weight vector update: $\mathbf{w}^{n+1} = \mathbf{H}_{n+1}^T\mathbf{Y}_{n+1} + \gamma_1\mathbf{D}^T\left( \mathbf{d}_1^n - \mathbf{b}_1^n \right) + \gamma_2\nabla^T\left( \mathbf{d}_2^n - \mathbf{b}_2^n \right)$
3. Image estimate update: $\mathbf{x}^{n+1} = \mathbf{p}^{n+1}\mathbf{w}^{n+1}$
4. Sparse component update: $\mathbf{d}_1^{n+1} = \mathrm{shrink}\left( \mathbf{D}\mathbf{x}^{n+1} + \mathbf{b}_1^n, \frac{1}{\gamma_1} \right)$
5. TV component update: $\mathbf{d}_2^{n+1} = \mathrm{shrink}\left( \nabla\mathbf{x}^{n+1} + \mathbf{b}_2^n, \frac{\mu}{\gamma_2} \right)$
6. Multiplier updates: $\mathbf{b}_1^{n+1} = \mathbf{b}_1^n + \mathbf{D}\mathbf{x}^{n+1} - \mathbf{d}_1^{n+1}$, $\quad \mathbf{b}_2^{n+1} = \mathbf{b}_2^n + \nabla\mathbf{x}^{n+1} - \mathbf{d}_2^{n+1}$
End
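For readers who prefer running code, the following is a compact sketch of Algorithm 1 (our own illustrative Python implementation; it assumes real-valued data and handles convergence only in the simplest way, so it is a sketch under those assumptions rather than a reference implementation):

```python
import numpy as np

def shrink(v, tau):
    """Element-wise soft thresholding used in steps 4 and 5."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def grid_updating_split_spice_tv(H, y, D, G, mu, gamma1, gamma2, eps=1e-6):
    """Online grid-updating split SPICE-TV reconstruction (sketch of Algorithm 1).

    H: N x N antenna pattern matrix, y: length-N echo, D: SPICE weighting matrix,
    G: azimuthal difference matrix (the operator written as nabla in the text).
    """
    N = H.shape[1]
    x = y.copy()
    d1 = np.zeros(N); d2 = np.zeros(N)
    b1 = np.zeros(N); b2 = np.zeros(N)
    P = np.linalg.inv(gamma1 * D.T @ D + gamma2 * G.T @ G)   # initial inverse
    Hn = np.zeros((0, N)); yn = np.zeros(0)                  # pulses processed so far

    for n in range(H.shape[0]):
        h = H[n:n + 1, :]                                    # new pulse row h_{n+1}
        Hn = np.vstack([Hn, h]); yn = np.append(yn, y[n])
        Ph = P @ h.T
        P = P - (Ph @ (h @ P)) / (1.0 + float(h @ Ph))       # step 1, Eq. (42)
        w = Hn.T @ yn + gamma1 * D.T @ (d1 - b1) + gamma2 * G.T @ (d2 - b2)  # step 2
        x_new = P @ w                                        # step 3, Eq. (43)
        d1 = shrink(D @ x_new + b1, 1.0 / gamma1)            # step 4, Eq. (44)
        d2 = shrink(G @ x_new + b2, mu / gamma2)             # step 5, Eq. (45)
        b1 = b1 + D @ x_new - d1                             # step 6, Eq. (46)
        b2 = b2 + G @ x_new - d2                             # step 6, Eq. (47)
        converged = np.linalg.norm(x_new - x) < eps
        x = x_new
        if converged:
            break
    return x
```

In this sketch the product $\mathbf{H}_{n+1}^T\mathbf{Y}_{n+1}$ is recomputed for clarity; it could equally be accumulated as $\mathbf{H}_n^T\mathbf{Y}_n + \mathbf{h}_{n+1}^T y_{n+1}$, in keeping with the per-iteration cost discussed in Section 3.3.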

3.3. Complexity Analysis

3.3.1. Time Complexity Analysis

The time computational complexity of the algorithm is contingent on the size of the matrix and the operations of multiplication, division, and inversion.
Equation (42) performs the covariance matrix update with a time complexity of $O(3N^2 + N)$. Equation (43) executes the image estimate update at $O(N^2 + 4N)$ complexity. For Equations (44) and (45), which handle the SPICE and TV component updates, respectively, the element-wise operations result in linear $O(N)$ complexity. Similarly, Equations (46) and (47) involve only vector additions and subtractions for the multiplier updates, maintaining $O(N)$ complexity. In summary, the proposed method achieves a per-iteration computational complexity of $O(N^2)$.
Similarly, we derive that the time complexity is $O(N^3)$ for the conventional TV, split SPICE, and split SPICE-TV algorithms, while the online TV algorithm and the online q-SPICE algorithm achieve a reduced complexity of $O(N^2)$.

3.3.2. Space Computational Complexity

The space computational complexity is a measure of the amount of memory required by an algorithm during execution. Spatial complexity includes not only the fixed space required by the algorithm itself but also the space required by the input data.
The inverse covariance matrix $\mathbf{p}^n$ and the measurement matrix $\mathbf{H}_{n+1}$ demand $O(N^2)$ storage each, dominating the memory footprint. Primary variables ($\mathbf{x}^n$, $\mathbf{y}_{n+1}$, $\mathbf{w}^{n+1}$) and ADMM auxiliary variables ($\mathbf{d}_1^n$, $\mathbf{d}_2^n$, $\mathbf{b}_1^n$, $\mathbf{b}_2^n$) collectively require only $O(N)$ space due to their vector forms. The regularization operators ($\mathbf{D}^T$, $\nabla$) further contribute $O(N^2)$ storage as sparse matrices. In summary, the algorithm’s space complexity scales quadratically as $O(N^2)$.
Similarly, it can be demonstrated that the space computational complexity remains O ( N 2 ) for all considered methods: conventional TV, online TV, split SPICE, online q-SPICE, and split SPICE-TV approaches.
Comparative analysis demonstrates that the proposed methodology achieves superior computational efficiency without increasing space complexity, exhibiting lower time complexity than conventional approaches while maintaining comparable performance to online methods.

4. Simulation

This section presents point target and area target simulations using the IAA method, the OMP method, the traditional TV method, the online TV method, the split SPICE method, the online q-SPICE method, the split SPICE-TV method, and the grid-updating split SPICE-TV method. By comparing the simulation results of these methods, the effectiveness of the grid-updating split SPICE-TV method is demonstrated. All simulations were conducted on a Windows operating system equipped with 64 GB of RAM and a 12th Gen Intel(R) Core(TM) i9-12900H processor, utilizing MATLAB 2019 (the equipment was purchased from Hubei Hanlian Technology Co., Ltd., Wuhan, China).

4.1. Point Target Simulation

This section presents point target simulations to evaluate the proposed method’s effectiveness. The simulation parameters include an azimuth scanning range of $-10°$ to $10°$, a $3°$ azimuth beamwidth, a $30°/\mathrm{s}$ scanning velocity, and a 1000 Hz pulse repetition frequency.

4.1.1. Profile Results

The initial point target configuration is shown in Figure 2, featuring two targets of equal amplitude and edge properties located at 0 ° and 2 ° azimuth positions. Figure 3a displays the received echoes at 25 dB SNR. Beamwidth-induced aliasing occurs due to the target angular separation being smaller than the antenna beamwidth. We conducted 100 Monte Carlo simulation trials, with representative super-resolution processing outcomes demonstrated in Figure 3b–i.
Figure 3b displays the OMP approach ( k = 70 , ν = 100 ). Although this method can effectively delineate target contours, its resolution remains limited, and it produces a noticeable false target. Figure 3c presents the reconstruction using conventional TV regularization ( k = 70 , μ = 0.04 , γ = 5 ), revealing significant sidelobe artifacts and limited resolution capability. Figure 3d displays the online TV reconstruction ( μ = 0.1 , γ = 50 ), which, similarly to conventional TV regularization, exhibits pronounced sidelobe artifacts and degraded resolution performance. In the simulation environment, while the OMP method demonstrates satisfactory noise suppression performance, it generates distinct reconstruction artifacts. Both TV-based methods exhibit inadequate noise suppression capabilities along with noticeable artifacts in their imaging results.
Figure 3e presents the reconstruction using IAA approach ( k = 70 , = 0.1 ). It can be seen that this method can distinguish the target well and has a relatively high resolution, but it cannot display the edge information of the target. Figure 3f presents the split SPICE reconstruction ( k = 70 , γ = 50 ), demonstrating superior performance relative to both traditional and online TV methods with markedly suppressed sidelobes and enhanced resolution. However, the technique shows limited capability in preserving target edge delineation. Figure 3g demonstrates the online q-SPICE reconstruction ( q = 2 ), which achieves comparable performance to the split SPICE approach with substantial sidelobe suppression and resolution enhancement. However, similar limitations persist in accurately representing target edge geometries. In the simulation environment, both SPICE methods and the IAA method demonstrate competent noise suppression performance while maintaining artifact-free reconstruction quality. As can be observed, both SPICE methods demonstrate superior resolution compared to the IAA approach.
The split SPICE-TV method’s output ( k = 70 , μ = 0.04 , γ 1 = 70 , γ 2 = 4 ) in Figure 3h reveals significant advancements over previous techniques: while maintaining the sidelobe and resolution advantages of SPICE, it uniquely preserves edge information that was previously obscured. Finally, the imaging result of the grid-updating split SPICE-TV method ( μ = 0.04 , γ 1 = 70 , γ 2 = 4 ) is presented in Figure 3i. The sidelobe suppression effect of this proposed method is significantly superior to that of the two TV methods, and it also enhances resolution. In comparison to the split SPICE method and online q-SPICE method, the edge contours of the targets are clearly displayed. Furthermore, the sidelobe suppression and resolution improvement effects are essentially comparable to those of the split SPICE-TV method. In the simulation environment, the imaging results of the two split SPICE-TV methods demonstrate satisfactory noise suppression performance without observable artifacts.

4.1.2. Quantitative Analysis

For comprehensive super-resolution performance assessment, the Relative Error (ReErr) metric is employed to quantify the discrepancy between the reconstructed images and the original scene. The ReErr is mathematically defined as
$$\mathrm{ReErr} = \frac{\left\| \hat{\mathbf{x}} - \mathbf{x} \right\|_2^2}{\left\| \mathbf{x} \right\|_2^2}$$
where x ^ corresponds to the super-resolved reconstruction and x represents the ground truth. The metric exhibits an inverse relationship with reconstruction fidelity—lower ReErr values indicate superior imaging quality through closer approximation to the original scene.
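A one-function sketch (our own illustration) of this metric:

```python
import numpy as np

def relative_error(x_hat, x_true):
    """ReErr = ||x_hat - x_true||_2^2 / ||x_true||_2^2."""
    return np.linalg.norm(x_hat - x_true) ** 2 / np.linalg.norm(x_true) ** 2
```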
We conducted 100 Monte Carlo trials at 25 dB SNR, recording the maximum, minimum, and mean values of ReErr along with the average runtime for each method. These performance metrics are summarized in Table 1. Notably, the ReErr value of the grid-updating split SPICE-TV method is significantly lower than those of the IAA method, the OMP method, the traditional TV method, the online TV method, the split SPICE method, and the online q-SPICE method, while remaining comparable to that of the split SPICE-TV method.
To further analyze the noise suppression performance of each method, Figure 4 illustrates the ReErr variation across SNR levels for all eight approaches, using the parameter configurations specified in Table 1. The proposed approach maintains superior performance (lower ReErr) compared to IAA, OMP, traditional TV, online TV, split SPICE, and online q-SPICE methods across all tested SNR conditions, while achieving comparable accuracy to the split SPICE-TV baseline.
The runtime of the eight methods is detailed in Table 1. The proposed method, which does not require matrix inversions during its iterative process, exhibits significantly lower computational complexity compared to batch processing methods. It is clear that the runtime of the proposed method is considerably less than that of the IAA method, the OMP method, the traditional TV method, the split SPICE method, and the split SPICE-TV method. Additionally, its runtime is nearly identical to that of the online TV method and the online q-SPICE method, consistent with the earlier theoretical analysis.

4.2. Surface Target Simulation

This section presents a surface target simulation to rigorously evaluate the grid-updating split SPICE-TV method’s efficacy. The complete simulation configuration is specified in Table 2.

4.2.1. Surface Target Results

Figure 5 displays the reference surface target scenario, consisting of three rectangular landmasses. Two connected islands contain five sparse targets in total—two positioned above and three below. The 20 dB SNR echo response in Figure 6a shows beamwidth-induced overlap among the lower three targets due to insufficient angular separation. Across 100 Monte Carlo simulation trials, representative super-resolution processing outcomes are demonstrated in Figure 6b–i, with all parameters consistent with the point target simulation protocol.
The simulation result of the OMP method is presented in Figure 6b. It can be seen that the resolution of this method is relatively low and it cannot restore the outline of the extended island target very well. The simulation result of the traditional TV method is shown in Figure 6c. Similar to the point target simulation results, this method exhibits numerous sidelobes and demonstrates lower resolution for the five sparse targets above and below. The simulation result of the online TV method is presented in Figure 6d. Analysis reveals that the five sparse targets exhibit very low resolution, with noticeable sidelobes present in the center of the island target. While the OMP method demonstrates satisfactory noise suppression performance, it generates distinct reconstruction artifacts. Both TV-based methods exhibit inadequate noise suppression capabilities along with noticeable artifacts in their imaging results.
The simulation result of the IAA method is shown in Figure 6e. Although the imaging result of this method has a very high resolution, the peaks and troughs can be clearly seen from the imaging result of the intermediate extended island targets. The simulation result of the split SPICE method is shown in Figure 6f. While this method achieves high resolution for the five sparse targets, it fails to capture the edge characteristics of the targets and does not identify the intermediate island targets. The simulation result of the online q-SPICE method is displayed in Figure 6g. Although the resolution of the five sparse targets is high, it does not accurately represent the outline of the middle island target. Both SPICE methods and the IAA method demonstrate competent noise suppression performance while maintaining artifact-free reconstruction quality. As can be observed, both SPICE methods demonstrate superior resolution compared to the IAA approach.
The simulation result of the split SPICE-TV method is shown in Figure 6h. Compared to the traditional TV method and the online TV method, the split SPICE-TV method reduces sidelobes of the island group targets while enhancing the resolution of the five sparse targets. In contrast to the split SPICE method and the online q-SPICE method, this approach effectively captures the edge contours of the targets. The imaging results of the grid-updating split SPICE-TV method are illustrated in Figure 6i. It is evident that the proposed method outperforms the traditional TV method and the online TV method in terms of resolution and sidelobe suppression. Furthermore, in comparison to the split SPICE method and the online q-SPICE method, the proposed approach accurately identifies island targets and reflects target contours, with imaging results largely consistent with those of the split SPICE-TV method. The imaging results of the two split SPICE-TV methods demonstrate satisfactory noise suppression performance without observable artifacts.

4.2.2. Quantitative Analysis

For quantitative super-resolution assessment of the Figure 6 results, we utilize the Structural Similarity Index (SSIM) to measure the reconstruction fidelity. SSIM comprehensively evaluates the perceptual image quality through luminance, contrast, and structural comparisons, with values approaching one indicating optimal reconstruction. The metric is mathematically defined as
$$\mathrm{SSIM}(\mathbf{X}, \mathbf{Y}) = \frac{\left( 2\mu_X\mu_Y + c_1 \right)\left( 2\delta_{XY} + c_2 \right)}{\left( \mu_X^2 + \mu_Y^2 + c_1 \right)\left( \delta_X^2 + \delta_Y^2 + c_2 \right)}$$
where $\mathbf{X}$ is the matrix of the results of each super-resolution method, $\mathbf{Y}$ is the echo matrix, $\mu_X$ is the mean value of $\mathbf{X}$, $\mu_Y$ is the mean value of $\mathbf{Y}$, $\delta_X^2$ is the variance of $\mathbf{X}$, $\delta_Y^2$ is the variance of $\mathbf{Y}$, $\delta_{XY}$ is the covariance of $\mathbf{X}$ and $\mathbf{Y}$, and $c_1$ and $c_2$ are small constants that stabilize the division.
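A minimal sketch (our own illustration; the values of $c_1$ and $c_2$ are placeholders, not the ones used in the paper) of a single-window SSIM computation:

```python
import numpy as np

def ssim_global(X, Y, c1=1e-4, c2=9e-4):
    """Single-window SSIM between two images, following the definition above."""
    mu_x, mu_y = X.mean(), Y.mean()
    var_x, var_y = X.var(), Y.var()
    cov_xy = ((X - mu_x) * (Y - mu_y)).mean()
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```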
One hundred Monte Carlo experiments were performed to evaluate each method’s SSIM (maximum, minimum, and mean) and average runtime; the results for all eight super-resolution methods are compiled in Table 3.
Although the IAA method achieves the highest SSIM value, it exhibits inferior edge recovery performance for extended targets compared to the proposed method, along with significantly higher computational time. The SSIM value of the proposed method is higher than those of the OMP method, traditional TV method, the online TV method, the split SPICE method, and the online q-SPICE method, while remaining nearly identical to that of the split SPICE-TV method.
The runtime of the eight methods is presented in Table 3. The computational complexity of the proposed method is significantly lower than that of the batch processing methods, as its iterative process does not require matrix inversions. The runtime of the grid-updating split SPICE-TV method is considerably less than that of the IAA method, the OMP method, the traditional TV method, the split SPICE method, and the split SPICE-TV method. Furthermore, it is nearly equivalent to the runtimes of the online TV method and the online q-SPICE method, supporting the earlier theoretical analysis.
The data presented above demonstrate that the proposed method effectively reduces computational complexity while maintaining the high image quality characteristic of the split SPICE-TV method.

4.3. Statistical Significance Analysis

To statistically validate whether the observed differences between the ReErr and SSIM metrics in the quantitative analysis are significant rather than coincidental, we conducted a rigorous significance test. Since ReErr and SSIM are derived from two distinct datasets, we employed an independent samples t-test for statistical validation, which can be formally expressed as
$$t_{\mathrm{test}} = \frac{\bar{x}_{\mathrm{ReErr}} - \bar{x}_{\mathrm{SSIM}}}{\sqrt{\dfrac{s_{\mathrm{ReErr}}^2}{n_{\mathrm{ReErr}}} + \dfrac{s_{\mathrm{SSIM}}^2}{n_{\mathrm{SSIM}}}}}$$
where $\bar{x}_{\mathrm{ReErr}}$ is the mean of ReErr, $\bar{x}_{\mathrm{SSIM}}$ is the mean of SSIM, $s_{\mathrm{ReErr}}^2$ is the variance of ReErr, $s_{\mathrm{SSIM}}^2$ is the variance of SSIM, $n_{\mathrm{ReErr}}$ is the number of Monte Carlo experiments conducted to collect ReErr, and $n_{\mathrm{SSIM}}$ is the number of Monte Carlo experiments conducted to collect SSIM. We conducted 100 Monte Carlo simulations to evaluate the proposed method’s ReErr and SSIM performance metrics, followed by a t-test statistical analysis. The statistical analysis yielded $t_{\mathrm{test}} = 1.005$ ($p = 0.328$), indicating no significant difference at the $\alpha = 0.05$ level.
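For reference, a minimal sketch (our own illustration) of this Welch-type statistic; `scipy.stats.ttest_ind` with `equal_var=False` computes the same statistic together with its p-value:

```python
import numpy as np
from scipy.stats import ttest_ind

def welch_t(sample_a, sample_b):
    """t = (mean_a - mean_b) / sqrt(s_a^2/n_a + s_b^2/n_b)."""
    a, b = np.asarray(sample_a, dtype=float), np.asarray(sample_b, dtype=float)
    return (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)

# Equivalent check (also returns the p-value):
# t_stat, p_value = ttest_ind(reerr_samples, ssim_samples, equal_var=False)
```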

5. Measured Data Results

In the previous section, the feasibility of the proposed method was demonstrated through point target and surface target simulations. To further evaluate the imaging performance of the grid-updating split SPICE-TV method, this section processes two sets of measured data using the IAA method, the OMP method, the traditional TV method, the online TV method, the split SPICE method, the online q-SPICE method, the split SPICE-TV method, and the grid-updating split SPICE-TV method to validate the restoration capabilities of the grid-updating split SPICE-TV method for sparse targets and target contours. Additionally, a prominent point was selected in the scene, and profiles of the echoes, along with results from the IAA method, the OMP method, the traditional TV method, the online TV method, the split SPICE method, the online q-SPICE method, the split SPICE-TV method, and the grid-updating split SPICE-TV method, were compared.

5.1. Imaging for Urban Monitoring Applications

In this section, the effectiveness of the proposed method is validated using measured data collected from an urban monitoring application. The experimental parameters are presented in Table 4.

5.1.1. Imaging Results

The real scene is depicted in Figure 7, where the building outlined in red and the parking lot marked in yellow represent the key areas of interest for this set of measured data. The echo pattern of the original scene, shown in Figure 8a, demonstrates clear contours in both the red and yellow regions. The super-resolution imaging results are presented in Figure 8b–i. All methodological parameters maintain identical configurations to those implemented in the previous section.
The imaging result of the OMP method is shown in Figure 8b. The results indicate no significant resolution improvement compared to the original echo data. The imaging result of the traditional TV method is displayed in Figure 8c. Compared to the echo of the original scene, the contours of the imaging results in the red and yellow regions are sharper, but the improvement in resolution is limited. The imaging result of the online TV method is shown in Figure 8d. Similar to the traditional TV method, the imaging results of the online TV method exhibit sharper contours in the red and yellow regions. However, the enhancement in the azimuth resolution remains minimal. Consistent with the favorable simulation results obtained previously, the OMP method and two TV methods demonstrate comparably effective noise suppression performance with artifacts under high-SNR experimental conditions.
The imaging result of the IAA method is displayed in Figure 8e. As observed, the IAA method suffers from severe energy dispersion and sidelobe interference when dealing with complex scenes, significantly degrading its resolution performance. The imaging result of the split SPICE method is presented in Figure 8f. It can be observed that, although the split SPICE method demonstrates a significant improvement in azimuth resolution compared to both the traditional TV method and the online TV method, this approach fails to reconstruct the contours of the building in the red region and the parking lot in the yellow region. The imaging result of the q-SPICE method is shown in Figure 8g. Similar to the split SPICE method, despite its high resolution, this method fails to reveal the contours of the targets in the red and yellow regions. Diverging from the favorable simulation outcomes, high-SNR experimental measurements reveal marked performance differences: the IAA method demonstrates substantially degraded noise suppression with conspicuous artifacts, whereas both SPICE variants maintain excellent noise rejection capabilities while producing artifact-free reconstructions.
The imaging result of the split SPICE-TV method is displayed in Figure 8h. Compared to the traditional TV method and the online TV method, the split SPICE-TV method significantly enhances the resolution of targets in both the red and yellow regions. Furthermore, in contrast to the split SPICE method and the online q-SPICE method, the split SPICE-TV method successfully reconstructs the contours of the targets in the red and yellow regions. The imaging result of the grid-updating split SPICE-TV method is shown in Figure 8i. It is also evident that the proposed method significantly improves the resolution of targets in the red and yellow regions compared to the traditional TV method and the online TV method. Additionally, it successfully reconstructs the contours of the targets in these regions, outperforming both the split SPICE method and the online q-SPICE method. Moreover, the imaging result is nearly identical to that of the split SPICE-TV method. Under high-SNR measured data conditions, the imaging results of the two split SPICE-TV methods exhibit superior noise suppression performance while remaining entirely free of discernible artifacts.

5.1.2. Quantitative Analysis

The profiles at the 50 m range bin for the eight methods are extracted from Figure 8 and depicted in Figure 9. As demonstrated, when processing complex real-world scenarios, both the IAA and OMP methods exhibit significantly lower resolution performance compared to the proposed approach. In comparison to the traditional TV method and the online TV method, the proposed method exhibits reduced sidelobes and enhanced resolution. Additionally, when compared to the online q-SPICE method, it successfully recovers the target contours, yielding imaging results similar to those of the split SPICE-TV method.
Table 5 compares the $-3$ dB bandwidth performance across the methods. The proposed method demonstrates superior resolution (narrower bandwidth) compared to the IAA method, the OMP method, the traditional TV method, and the online TV method, while maintaining bandwidth characteristics comparable to those of the split SPICE-TV method.
The runtime of the eight methods is presented in Table 5. The runtime of the grid-updating split SPICE-TV method is significantly shorter than that of the IAA method, the OMP method, the traditional TV method, the split SPICE method, and the split SPICE-TV method, while remaining nearly the same as that of the online TV method and the online q-SPICE method, which aligns with the earlier theoretical analysis.

5.2. Imaging for Coastal Monitoring Applications

In the previous section, the effectiveness of the proposed method was demonstrated using measured data recorded from an urban monitoring application. In this section, measured data captured from a coastal monitoring application is utilized to further validate the efficacy of the proposed method. The experimental parameters are detailed in Table 6.
The real scene is depicted in Figure 10, where two outlined targets are visible in the bottom-left and top-right corners. The echo of the original scene is shown in Figure 11a. The super-resolution imaging results are illustrated in Figure 11b–i. Due to the high level of noise present in the measured data, the parameters of the methods discussed in the preceding sections are not directly applicable here. Consequently, the parameters for each method have been re-adjusted in this section to accommodate the specific characteristics of the measured data.
The imaging result of the OMP method ( k = 70 , ν = 10,000) is shown in Figure 11b. Similar to the result obtained with the OMP method for the previous set of measured data, the imaging resolution shows no significant improvement. The imaging result of the traditional TV method ( k = 70 , μ = 20,000, γ = 50 ) is displayed in Figure 11c, which contains numerous sidelobes. The imaging result of the online TV method ( μ = 20,000, γ = 10 ) is shown in Figure 11d. Similar to the traditional TV method, the online TV method also exhibits sidelobes. Under low-SNR experimental conditions, the imaging results of the above three methods demonstrate adequate noise suppression performance while exhibiting moderate artifact levels.
The imaging result of the IAA method ( k = 70 , = 1000 ) is displayed in Figure 11e. As evidenced by the results, the IAA method demonstrates significantly degraded imaging performance when processing complex scenes with severe background noise. The imaging result of the split SPICE method ( k = 70 , γ = 10,000) is presented in Figure 11f. Due to significant noise in the echoes, the split SPICE method is unable to identify the edges of the targets. The imaging result of the q-SPICE method ( q = 2 ) is shown in Figure 11g. Despite its high resolution, this method fails to reveal the contours of the targets. Under low-SNR experimental conditions, both IAA and split SPICE methods exhibit substantially compromised noise suppression capabilities along with clearly visible artifacts. In contrast, the online q-SPICE method maintains robust noise rejection performance while producing artifact-free reconstructions.
The imaging result of the split SPICE-TV method ( k = 70 , μ = 10,000, γ 1 = 70 , γ 2 = 4 ) is displayed in Figure 11h, where a decrease in the number of sidelobes and an increase in resolution can be observed. The imaging result of the grid-updating split SPICE-TV method ( μ = 10,000, γ 1 = 100 , γ 2 = 40 ) is shown in Figure 11i. Similarly, it can be observed that its sidelobes are smaller than those of the traditional TV method and the online TV method, and it exhibits higher resolution compared to both. The imaging result is nearly identical to that of the split SPICE-TV method. Under low-SNR measured data conditions, the imaging results of the two split SPICE-TV methods maintain competent noise suppression performance while exhibiting no discernible artifacts.
Table 7 presents the runtime of all evaluated methods. The runtime of the grid-updating split SPICE-TV method is significantly shorter than that of the IAA method, the OMP method, the traditional TV method, the split SPICE method, and the split SPICE-TV method, while remaining nearly identical to that of the online TV method and the online q-SPICE method, consistent with the earlier theoretical analysis.
The above analysis demonstrates that the proposed method effectively reduces computational complexity while preserving the high image quality of the split SPICE-TV method.

6. Conclusions

In this paper, we introduce a grid-updating split SPICE-TV method tailored for scanning radars. This method boasts several advantages. Firstly, compared to the traditional SPICE approach, it integrates TV norm regularization, better preserving azimuthal edge contours. In contrast to conventional TV regularization methods, our approach yields higher imaging resolution and lower sidelobes. Secondly, unlike traditional online methods with a single constraint term, our proposed method incorporates two constraint terms, facilitating the recovery of both sparse targets and edge contours simultaneously. Lastly, in comparison to the existing methods based on Bregman iteration, our proposed approach offers an online closed-form solution, significantly reducing computational costs without compromising imaging quality. Simulation and experimental validations substantiate the effectiveness of the proposed method.
However, the practical deployment of the proposed method is constrained by its requirement for the manual adjustment of three critical parameters. The performance degrades when these parameters are mismatched to dynamic operational environments (e.g., varying clutter or SNR conditions), and the manual tuning process reduces reproducibility across different radar systems. To address these limitations, our future work will focus on developing a neural network architecture capable of predicting optimal parameters directly from raw radar data. This adaptive parameter selection approach would preserve the method’s theoretical advantages while eliminating its manual tuning bottleneck, thereby facilitating real-world field applications.

Author Contributions

Conceptualization, R.L. and J.L.; methodology, R.L.; software, J.L. and L.J.; validation, R.L. and J.L.; formal analysis, R.L.; investigation, Y.Z. (Yongchao Zhang) and J.Y.; resources, Y.Z. (Yin Zhang) and D.M.; data curation, R.L.; writing—original draft preparation, R.L.; writing—review and editing, J.Y., L.J. and D.M.; visualization, D.M. and Y.Z. (Yongchao Zhang); supervision, D.M.; project administration, Y.Z. (Yongchao Zhang); funding acquisition, Y.Z. (Yongchao Zhang), Y.Z. (Yin Zhang), Y.H. and J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China under Grant Numbers 62471103 and 62301131, and the Natural Science Foundation for Distinguished Young Scholars of Sichuan under Grant Number 2023NSFSC1970.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Young, S.; Harrah, S.; de Haag, M. Real-time integrity monitoring of stored geo-spatial data using forward-looking remote sensing technology [aircraft navigation/displays]. In Proceedings of the 21st Digital Avionics Systems Conference, Irvine, CA, USA, 27–31 October 2002; Volume 2, p. 11D1. [Google Scholar]
  2. Zhu, R.; Lu, G.; Liang, L.; Zhang, Z. Forward-looking imaging method of airborne array radar based on uniform circular array. In Proceedings of the 2022 IEEE 22nd International Conference on Communication Technology (ICCT), Nanjing, China, 11–14 November 2022; pp. 1852–1858. [Google Scholar]
  3. Ressler, M.; Nguyen, L.; Koenig, F.; Wong, D.; Smith, G. The army research laboratory (arl) synchronous impulse reconstruction (sire) forward-looking radar. In Proceedings of the Unmanned Systems Technology IX, Orlando, FL, USA, 9–13 April 2007; SPIE: Bellingham, WA, USA, 2007; Volume 6561, pp. 35–46. [Google Scholar]
  4. Chen, H.; Li, Y.; Gao, W.; Zhang, W.; Sun, H.; Guo, L.; Yu, J. Bayesian forward-looking superresolution imaging using doppler deconvolution in expanded beam space for high-speed platform. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5105113. [Google Scholar] [CrossRef]
  5. Guo, Y.; Liang, Y.; Chen, D.; Suo, Z.; Li, S.; Xing, M. Superresolution forward-looking imaging with greedy pursuit for high-speed dynamic platform under optimized doppler convolution model. IEEE Trans. Geosci. Remote Sens. 2023, 61, 5108514. [Google Scholar] [CrossRef]
  6. Zhang, G.; Liang, Y.; Chen, S.; Xing, M.; Li, Z. Super-resolution forward-looking imaging method for manoeuvering platform with optimised dictionary and extended sparsity adaptive matching pursuit. IET Radar Sonar Navig. 2022, 16, 912–923. [Google Scholar] [CrossRef]
  7. Lu, J.; Zhang, L.; Huang, Y.; Cao, Y. High-resolution forward-looking multichannel SAR imagery with array deviation angle calibration. IEEE Trans. Geosci. Remote Sens. 2020, 58, 6914–6928. [Google Scholar] [CrossRef]
  8. Pang, B.; Dai, D.; Xing, S.; Wang, X.S. Development and perspective of forward-looking SAR imaging technique. Syst. Eng. Electron. 2013, 35, 2283–2290. [Google Scholar]
  9. Nguyen, L.H.; Ton, T.T.; Wong, D.C.; Ressler, M.A. Signal processing techniques for forward imaging using ultrawideband synthetic aperture radar. In Proceedings of the Unmanned Ground Vehicle Technology V, Orlando, FL, USA, 21–25 April 2003; SPIE: Bellingham, WA, USA, 2003; Volume 5083, pp. 505–518. [Google Scholar]
  10. Nguyen, L.H.; Le, C. 3D imaging for millimeter-wave forward-looking synthetic aperture radar (sar). In Proceedings of the Passive and Active Millimeter-Wave Imaging XXIII, Online, 27 April–9 May 2020; SPIE: Bellingham, WA, USA, 2020; Volume 11411, pp. 83–94. [Google Scholar]
  11. Luo, J.; Huang, Y.; Zhang, Y.; Tuo, X.; Zhang, D.; Mao, D.; Yi, Q.; Zhang, Y.; Yang, J. Two-dimensional angular super-resolution for airborne real aperture radar by fast conjugate gradient iterative adaptive approach. IEEE Trans. Aerosp. Electron. Syst. 2023, 59, 9480–9500. [Google Scholar] [CrossRef]
  12. Dropkin, H.; Ly, C. Superresolution for scanning antenna. In Proceedings of the 1997 IEEE National Radar Conference, Syracuse, NY, USA, 13–15 May 1997; IEEE: Piscataway, NJ, USA, 1997; pp. 306–308. [Google Scholar]
  13. Lu, J.; Zhang, L.; Quan, Y.; Meng, Z.; Cao, Y. Parametric azimuth-variant motion compensation for forward-looking multichannel sar imagery. IEEE Trans. Geosci. Remote Sens. 2021, 59, 8521–8537. [Google Scholar] [CrossRef]
  14. Tan, K.; Lu, X.; Yang, J.; Su, W.; Gu, H. A novel bayesian super-resolution method for radar forward-looking imaging based on markov random field model. Remote Sens. 2021, 13, 4115. [Google Scholar] [CrossRef]
  15. Li, W.; Li, M.; Zuo, L.; Sun, H.; Chen, H.; Li, Y. Forward-looking super-resolution imaging for sea-surface target with multi-prior bayesian method. Remote Sens. 2021, 14, 26. [Google Scholar] [CrossRef]
  16. Çetin, M.; Karl, W.C. Feature-enhanced synthetic aperture radar image formation based on nonquadratic regularization. IEEE Trans. Image Process. 2001, 10, 623–631. [Google Scholar] [CrossRef] [PubMed]
  17. Shu, Z.; Zong, Z.; Huang, L.; Huang, L. Forward-looking radar super-resolution imaging combined TSVD with L1 norm constraint. In Proceedings of the IGARSS 2019—2019 IEEE International Geoscience and Remote Sensing Symposium, Yokohama, Japan, 28 July–2 August 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2559–2562. [Google Scholar]
  18. Huang, A.; Kou, L.; Liang, Y.; Mao, Y.; Gao, H.; Chu, Z. Fusion of ground-based and spaceborne radar precipitation based on spatial domain regularization. J. Meteorol. Res. 2024, 38, 285–302. [Google Scholar] [CrossRef]
  19. Yi, J.; Yang, M.; Liu, N.; Liu, M.; Chen, Y. Radar forward-looking imaging for complex targets based on sparse representation with regularized AK-SVD and GGAMP-VSBL algorithm. IEEE Trans. Geosci. Remote Sens. 2024, 62, 5105714. [Google Scholar] [CrossRef]
  20. Han, J.; Zhang, S.; Zheng, S.; Wang, M.; Ding, H.; Yan, Q. Bias analysis and correction for ill-posed inversion problem with sparsity regularization based on L1 norm for azimuth super-resolution of radar forward-looking imaging. Remote Sens. 2022, 14, 5792. [Google Scholar] [CrossRef]
  21. Stoica, P.; Babu, P.; Li, J. New method of sparse parameter estimation in separable models and its use for spectral analysis of irregularly sampled data. IEEE Trans. Signal Process. 2010, 59, 35–47. [Google Scholar] [CrossRef]
  22. Cao, K.; Zhou, X.; Cheng, Y.; Fan, B.; Qin, Y. Total variation-based method for radar coincidence imaging with model mismatch for extended target. J. Electron. Imaging 2017, 26, 063007. [Google Scholar] [CrossRef]
  23. Khan, M.; Tran, P.N.; Pham, N.T.; El Saddik, A.; Othmani, A. MemoCMT: Multimodal Emotion Recognition Using Cross-Modal Transformer-Based Feature Fusion. Sci. Rep. 2025, 15, 5473. [Google Scholar]
  24. Nguyen, L.H.; Pham, N.T.; Khan, M.; Othmani, A.; El Saddik, A. Hubert-clap: Contrastive learning-based multimodal emotion recognition using self-alignment approach. In Proceedings of the 6th ACM International Conference on Multimedia in Asia, Ser. MMAsia ′24, Auckland, New Zealand, 3–6 December 2024; Association for Computing Machinery: New York, NY, USA, 2024. [Google Scholar]
  25. Khan, M.; Ahmad, J.; Gueaieb, W.; Masi, G.D.; Karray, F.; Saddik, A.E. Joint multi-scale multimodal transformer for emotion using consumer devices. IEEE Trans. Consum. Electron. 2025, 71, 1092–1101. [Google Scholar] [CrossRef]
  26. Wang, S.; Mei, L.; Liu, R.; Jiang, W.; Yin, Z.; Deng, X.; He, T. Multi-modal fusion sensing: A comprehensive review of millimeter-wave radar and its integration with other modalities. IEEE Commun. Surv. Tutor. 2024, 27, 322–352. [Google Scholar] [CrossRef]
  27. Guo, J.; Wei, J.; Xiang, Y.; Han, C. Millimeter-Wave Radar-Based Identity Recognition Algorithm Built on Multimodal Fusion. Sensors 2024, 24, 4051. [Google Scholar] [CrossRef] [PubMed]
  28. Liu, H.; Liu, Z. A multimodal dynamic hand gesture recognition based on radar–vision fusion. IEEE Trans. Instrum. Meas. 2023, 72, 8001715. [Google Scholar] [CrossRef]
  29. Liu, Q.; Xiao, Y.; Gui, Y.; Dai, G.; Li, H.; Zhou, X.; Ren, A.; Zhou, G.; Shen, J. MMF-RNN: A Multimodal Fusion Model for Precipitation Nowcasting Using Radar and Ground Station Data. IEEE Trans. Geosci. Remote Sens. 2025, 63, 4101416. [Google Scholar] [CrossRef]
  30. Luo, J.; Zhang, Y.; Zhang, Y.; Yang, S.; Huang, Y.; Yang, J. A SPICE-TV super-resolution method for scanning radar. In Proceedings of the 2023 IEEE Radar Conference (RadarConf23), San Antonio, TX, USA, 1–5 May 2023; IEEE: Piscataway, NJ, USA, 2023; pp. 1–5. [Google Scholar]
  31. Luo, J.; Zhang, Y.; Sun, T.; Zhang, Y.; Huo, W.; Huang, Y.; Yang, J. A split SPICE-TV super-resolution method for scanning radar. In Proceedings of the 2024 IEEE Radar Conference (RadarConf24), Denver, CO, USA, 6–10 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–5. [Google Scholar]
  32. Kang, Y.; Zhang, Y.; Mao, D.; Tuo, X.; Zhang, Y.; Huang, Y. Super-resolution Doppler beam sharpening based on online Tikhonov regularization. In Proceedings of the 2019 6th Asia-Pacific Conference on Synthetic Aperture Radar (APSAR), Xiamen, China, 26–29 November 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar]
  33. Zhang, Y.; Li, J.; Li, M.; Zhang, Y.; Luo, J.; Huang, Y.; Yang, J.; Jakobsson, A. Online sparse reconstruction for scanning radar using beam-updating q-SPICE. IEEE Geosci. Remote Sens. Lett. 2021, 19, 3503905. [Google Scholar] [CrossRef]
  34. Zhao, X.; Mao, D.; Wang, W.; Zhang, Y.; Zhang, Y.; Huang, Y.; Yang, J. Online angular super-resolution with total variation regularization for airborne scanning radar. In Proceedings of the 2024 IEEE Radar Conference (RadarConf24), Denver, CO, USA, 6–10 May 2024; IEEE: Piscataway, NJ, USA, 2024; pp. 1–5. [Google Scholar]
  35. Dai, J.; Sun, W.; Jiang, X.; Wu, D. Array Radar Three-Dimensional Forward-Looking Imaging Algorithm Based on Two-Dimensional Super-Resolution. Sensors 2024, 24, 7356. [Google Scholar] [CrossRef] [PubMed]
  36. Li, W.; Li, M.; Zuo, L.; Sun, H.; Chen, H.; Lu, X. Azimuth super-resolution of forward-looking imaging based on Bayesian learning in complex scene. Signal Process. 2021, 187, 108141. [Google Scholar] [CrossRef]
  37. Zhao, Z.; Hou, Y. Block sparse bayesian angular super-resolution imaging for scanning radar in forward-looking area. In Proceedings of the 2021 IEEE 5th Information Technology, Networking, Electronic and Automation Control Conference (ITNEC), Xi’an, China, 15–17 October 2021; Volume 5, pp. 1025–1028. [Google Scholar]
  38. Zhang, Y.; Jakobsson, A.; Zhang, Y.; Huang, Y.; Yang, J. Wideband sparse reconstruction for scanning radar. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6055–6068. [Google Scholar] [CrossRef]
Figure 1. Geometric motion representation for forward-scanning radar systems.
Figure 2. Original scene of the point target.
Figure 3. Point target profile result. (a) Real beam echo; (b) Result of OMP method; (c) Result of traditional TV method; (d) Result of online TV method; (e) Result of IAA method; (f) Result of split SPICE method; (g) Result of online q-SPICE method; (h) Result of split SPICE-TV method; (i) Result of grid-updating split SPICE-TV method.
Figure 4. ReErr vs. SNR.
Figure 5. Original scene of the surface target.
Figure 6. Surface target profile result. (a) Real beam echo; (b) Result of OMP method; (c) Result of traditional TV method; (d) Result of online TV method; (e) Result of IAA method; (f) Result of split SPICE method; (g) Result of online q-SPICE method; (h) Result of split SPICE-TV method; (i) Result of grid-updating split SPICE-TV method.
Figure 7. Original scene of the measured data.
Figure 8. Measured data results. (a) Real beam echo; (b) Result of OMP method; (c) Result of traditional TV method; (d) Result of online TV method; (e) Result of IAA method; (f) Result of split SPICE method; (g) Result of online q-SPICE method; (h) Result of split SPICE-TV method; (i) Result of grid-updating split SPICE-TV method.
Figure 9. Profiles of the echo and the imaging results from the 50 m range bin in Figure 8.
Figure 10. Original scene of the measured data.
Figure 11. Measured data results. (a) Real beam echo; (b) Result of OMP method; (c) Result of traditional TV method; (d) Result of online TV method; (e) Result of IAA method; (f) Result of split SPICE method; (g) Result of online q-SPICE method; (h) Result of split SPICE-TV method; (i) Result of grid-updating split SPICE-TV method.
Table 1. Analysis of imaging quality for point target simulation.
Method | ReErr (MIN) | ReErr (MAX) | ReErr (MEAN) | Runtime
OMP Method | 0.617 | 0.628 | 0.622 | 0.099 s
Traditional TV Method | 0.877 | 0.889 | 0.883 | 1.148 s
Online TV Method | 0.016 | 0.426 | 0.939 | 0.021 s
IAA Method | 0.419 | 0.432 | 0.426 | 2.846 s
Split SPICE Method | 0.789 | 0.801 | 0.795 | 1.514 s
Online q-SPICE Method | 0.632 | 0.643 | 0.638 | 0.022 s
Split SPICE-TV Method | 0.061 | 0.076 | 0.070 | 1.293 s
Proposed Method | 0.062 | 0.077 | 0.071 | 0.029 s
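For readers who want to reproduce the ReErr figures in Table 1, the metric is a relative reconstruction error between the recovered scene and the reference scene. A minimal Python sketch, assuming ReErr is the usual L2-normalized relative error; the function name and test signal below are illustrative, not taken from the paper:

```python
import numpy as np

def relative_error(x_hat, x_true):
    # Relative L2 reconstruction error: ||x_hat - x_true|| / ||x_true||.
    return np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)

# Illustrative one-dimensional azimuth profile with two point targets.
rng = np.random.default_rng(0)
x_true = np.zeros(256)
x_true[[100, 140]] = 1.0
x_hat = x_true + 0.05 * rng.standard_normal(256)   # stand-in for a reconstruction
print(f"ReErr = {relative_error(x_hat, x_true):.3f}")
```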
Table 2. Simulation parameters of the surface target.
Parameter | Value
Beam width | 4°
Scanning region | −20°~20°
Scanning speed | 100°/s
PRF | 1000 Hz
Carrier frequency | 10 GHz
Bandwidth | 45 MHz
Time width | 1 μs
Sampling frequency | 90 MHz
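A few quantities that follow directly from these simulation parameters, such as the number of pulses falling inside one beam dwell and the slant-range resolution, help put the azimuth super-resolution task in context. A minimal sketch under standard scanning-radar relations, with the Table 2 values hard-coded purely for illustration:

```python
# Standard scanning-radar bookkeeping from the Table 2 parameters.
beamwidth_deg  = 4.0      # beam width
scan_speed_dps = 100.0    # scanning speed (deg/s)
prf_hz         = 1000.0   # pulse repetition frequency
bandwidth_hz   = 45e6     # transmitted bandwidth
c              = 3e8      # speed of light (m/s)

dwell_time_s     = beamwidth_deg / scan_speed_dps   # time the beam dwells on one direction
pulses_per_dwell = dwell_time_s * prf_hz            # echoes contributing to each azimuth cell
range_res_m      = c / (2 * bandwidth_hz)           # slant-range resolution

print(f"dwell time       : {dwell_time_s * 1e3:.0f} ms")   # 40 ms
print(f"pulses per dwell : {pulses_per_dwell:.0f}")         # 40
print(f"range resolution : {range_res_m:.2f} m")            # 3.33 m
```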
Table 3. Analysis of imaging quality for surface target simulation.
Method | SSIM (MIN) | SSIM (MAX) | SSIM (MEAN) | Runtime
OMP Method | 0.516 | 0.557 | 0.534 | 7.235 s
Traditional TV Method | 0.675 | 0.714 | 0.693 | 87.725 s
Online TV Method | 0.672 | 0.711 | 0.695 | 1.997 s
IAA Method | 0.754 | 0.796 | 0.773 | 237.124 s
Split SPICE Method | 0.487 | 0.521 | 0.501 | 95.655 s
Online q-SPICE Method | 0.474 | 0.519 | 0.496 | 2.004 s
Split SPICE-TV Method | 0.691 | 0.731 | 0.712 | 90.832 s
Proposed Method | 0.694 | 0.728 | 0.719 | 2.059 s
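The SSIM scores in Table 3 compare each reconstruction against the original surface-target scene. Assuming the standard structural similarity index, the score can be reproduced with scikit-image; the random arrays below are placeholders for the scene and a reconstruction, not the paper's data:

```python
import numpy as np
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
reference      = rng.random((128, 128))                              # stand-in for the true scene
reconstruction = reference + 0.1 * rng.standard_normal((128, 128))   # stand-in for an imaging result

score = structural_similarity(
    reference,
    reconstruction,
    data_range=reconstruction.max() - reconstruction.min(),
)
print(f"SSIM = {score:.3f}")
```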
Table 4. Experimental parameters.
Parameter | Value
Carrier frequency | X band
Scanning range | −180°~180°
Scanning speed | 24°/s
Beamwidth | 5.1°
PRF | 200 Hz
Platform velocity | 0 m/s
Platform height | 3 m
Pitch angle | 10°
Table 5. Analysis of imaging quality for measured data.
Method | −3 dB Bandwidth | Runtime
OMP Method | 3.571 | 2.086 s
Traditional TV Method | 2.858 | 52.311 s
Online TV Method | 2.859 | 0.007 s
IAA Method | 2.143 | 81.281 s
Split SPICE Method | 0.928 | 56.363 s
Online q-SPICE Method | 0.931 | 0.008 s
Split SPICE-TV Method | 1.428 | 54.314 s
Proposed Method | 1.430 | 0.008 s
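The −3 dB bandwidths in Table 5 are main-lobe widths read from azimuth profiles such as those in Figure 9, measured 3 dB below the peak. A simple sketch of that measurement on a sampled profile follows; the Gaussian test profile and 0.1° grid are illustrative assumptions, and the helper presumes a single main lobe above the −3 dB level:

```python
import numpy as np

def minus_3db_width(profile_db, angles_deg):
    # Width (deg) of the region within 3 dB of the peak; assumes one main lobe.
    peak = profile_db.max()
    above = np.where(profile_db >= peak - 3.0)[0]
    return angles_deg[above[-1]] - angles_deg[above[0]]

# Illustrative Gaussian-shaped azimuth response on a 0.1 deg grid.
angles  = np.arange(-10.0, 10.0, 0.1)
profile = np.exp(-(angles / 2.0) ** 2)            # linear amplitude
width   = minus_3db_width(20 * np.log10(profile), angles)
print(f"-3 dB width = {width:.2f} deg")
```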
Table 6. Experimental parameters.
Parameter | Value
Carrier frequency | X band
Scanning range | −30°~30°
Scanning speed | 72°/s
Beamwidth | 5.1°
PRF | 200 Hz
Platform velocity | 47 m/s
Platform height | 300 m
Pitch angle | 30°
Table 7. Runtime for measured data.
Method | Runtime
OMP Method | 2.881 s
Traditional TV Method | 76.556 s
Online TV Method | 0.597 s
IAA Method | 126.611 s
Split SPICE Method | 82.812 s
Online q-SPICE Method | 0.604 s
Split SPICE-TV Method | 79.641 s
Proposed Method | 0.657 s
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
