Article

Weighted STAP Algorithm Based on the Greedy Block Coordinate Descent Method

1 College of Information Engineering, Inner Mongolia University of Technology, Hohhot 010080, China
2 Inner Mongolia Key Laboratory of Radar Technology and Application, Hohhot 010051, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(17), 3432; https://doi.org/10.3390/electronics14173432
Submission received: 6 July 2025 / Revised: 15 August 2025 / Accepted: 25 August 2025 / Published: 28 August 2025

Abstract

Space–time adaptive processing (STAP) based on sparse recovery (SR-STAP) has demonstrated remarkable clutter suppression performance under insufficient sample conditions. However, sparse recovery ultimately amounts to solving a norm minimization problem. To this end, this study proposes a weighted STAP algorithm based on a greedy block coordinate descent method to address the slow convergence and insufficient estimation accuracy of existing l2,1-norm minimization methods. First, the weights are estimated using the multiple signal classification (MUSIC) algorithm. Then, a greedy block selection rule that favors sparsity is used, prioritizing the update of the weighted block that has the greatest impact on sparsity. Although the proposed algorithm is greedy in nature, it is globally convergent. Finally, the accuracy of clutter covariance matrix estimation and the convergence speed of the SR-STAP algorithm are enhanced by reasonably estimating the noise power and selecting appropriate regularization parameters. Simulation results indicate that the proposed algorithm can effectively suppress clutter ridge expansion, achieving excellent clutter suppression and target detection performance compared with existing methods, as well as satisfactory convergence properties.

1. Introduction

Airborne pulse–Doppler radar systems have been broadly used in wide-area ground observation and continuous surveillance tasks, and accurate detection of moving targets is crucial for ensuring their performance. Effective detection of moving targets typically requires suppressing ground and sea clutter signals. Space–time adaptive processing (STAP), as an efficient clutter suppression technique, has been widely used in airborne radar systems. Since the pioneering work of Brennan and Reed, the STAP technique has become a mainstream method of suppressing ground and sea clutter in airborne radar echo signals [1,2,3,4]. This method uses the ideal clutter-plus-noise covariance matrix (CNCM) of the cell under test (CUT) to construct the optimal STAP filter. However, this matrix is generally unknown in practice and must be estimated from independent and identically distributed (IID) training snapshots. According to the Reed–Mallett–Brennan (RMB) criterion, the number of IID snapshots must be at least twice the system degrees of freedom (DOF) to keep the STAP performance loss within 3 dB [5]. However, in heterogeneous and non-stationary clutter environments, it is challenging to obtain a sufficient number of IID snapshots because of the high DOF of modern radar systems, complex terrain, and antenna array configurations, which can significantly degrade the clutter suppression performance of the STAP method.
The rapid development of compressive sensing in recent years has significantly propelled the research on sparse recovery STAP (SR-STAP) algorithms. The SR-STAP algorithms, which exploit the inherent sparsity of clutter spectra, have enabled efficient signal processing. The theory of sparse recovery was first applied by Maria for high-resolution clutter spectrum estimation, revealing its great potential for alleviating sample shortage [6]. In recent years, notable progress has been made in the field of sparse recovery methods, which can primarily be categorized into the following three types.
  • Bayesian algorithms: These algorithms transform the l1-norm optimization process into a maximum a posteriori probability estimation task. By using prior information on a signal and the Bayesian parameter estimation model, these algorithms determine the posterior probability of unknown parameters, which enables high-precision reconstruction of the original signal. Recently, a STAP method based on the framework of sparse Bayesian learning, which enhances sparsity using a generalized double Pareto prior distribution and estimates the hyperparameters using the expectation maximization (EM) method, was proposed [7]. Nevertheless, this method has high computational complexity and a slow convergence rate, which limits its practical application. Recent work introduced a sparse Bayesian learning-based multi-measurement vector STAP algorithm [8], which can estimate a clutter covariance matrix (CCM) with high accuracy using very few training snapshots under unknown parameter conditions, thus improving clutter suppression and slow-moving target detection capabilities. Nevertheless, this algorithm suffers from high computational complexity, and its clutter suppression performance degrades under non-ideal conditions. In addition, a STAP algorithm based on sparse Bayesian learning with hierarchical composite priors was proposed in reference [9]. This algorithm can significantly enhance sparsity by constructing an innovative three-level hierarchical composite prior model and further improve sparsity using a generalized double Pareto prior distribution. However, its computational complexity is high, and its clutter suppression performance deteriorates significantly at high gain-phase error levels. In short, Bayesian algorithms achieve enhanced sparse reconstruction performance because they can automatically estimate the unknown parameters and hyperparameters of a signal; nevertheless, they rely on the choice of prior distribution and have high computational complexity.
  • Greedy algorithms: The main principle of these algorithms is to iteratively approximate the global optimal solution through local optimization. Representative greedy algorithms include matching pursuit (MP) [10], orthogonal matching pursuit (OMP) [11,12], and subspace pursuit [13]. A STAP method based on OMP was proposed to address the problem of target detection in heterogeneous and non-stationary environments by using the Multiple-Input Multiple-Output (MIMO) radar geometry model and the OMP algorithm to estimate the CCM [12]. However, the performance of this method is influenced by the MIMO radar geometric configuration and the choice of environmental parameters. Moreover, its effectiveness has not been fully validated in practical applications. Recently, an extended orthogonal matching pursuit (OMPα) algorithm has been introduced [11]. This algorithm reduces the number of measurements required by OMP to recover sparse signals by increasing the number of iterations, which makes it comparable to the convex relaxation-based Basis Pursuit (BP) algorithm [14]. However, OMPα still requires the sparsity level of the signal, which can be difficult to obtain in practical applications, and the increased number of iterations yields higher computational complexity. Overall, although greedy algorithms are generally fast, they tend to have low reconstruction accuracy and may experience degraded performance in noisy environments.
  • Convex optimization algorithms: In these algorithms, the l0-norm problem, which is NP-hard, is relaxed into a convex optimization problem, and the sparse recovery property of the l1-norm is used to approximate the l0-norm. Consequently, the l0-norm problem is transformed into an l1-norm minimization, that is, the LASSO problem. In recent years, convex optimization algorithms have garnered extensive research interest, and numerous methods have been proposed, including the spectral projected gradient method for l1 minimization (SPGL1) [15], the accelerated gradient method known as sparse learning with efficient projections (SLEP) [16], a block-sparse variant of the matching pursuit algorithm [17], the SpaRSA algorithm [18], the YALL1-Group algorithm [19], and the block coordinate descent (BCD) algorithm [20]. Qin et al. proposed a hybrid block coordinate descent algorithm (BCD-HYB) to solve the group lasso problem efficiently [20]. The algorithm combines exact block optimization (BCD-GL) with a variable-step-size iterative shrinkage-thresholding method (ISTA-BC) and adaptively selects the better strategy for variable groups of different sizes, which significantly improves computational efficiency and surpasses existing methods on various datasets. However, the algorithm still faces a computational complexity bottleneck caused by eigendecomposition when dealing with large variable groups, and its performance drops noticeably on certain challenging datasets, such as the Nemirovski set. Moreover, none of these methods is specifically designed for the CCM estimation problem in STAP.
Despite significant achievements of the aforementioned algorithms in the field of sparse signal recovery, several challenges still need to be addressed. First, the Bayesian algorithms rely on complex prior distributions and have low computational efficiency. Second, the greedy algorithms have insufficient reconstruction accuracy in noisy environments. Furthermore, the convex optimization algorithms cannot meet the practical constraints of non-uniform clutter, insufficient snapshots, and grid mismatch. Therefore, it is challenging for the existing methods to meet the requirements for high-precision and high-efficiency estimation of the CCM in airborne radar systems. To address this problem, this study proposes a weighted STAP algorithm based on a greedy block coordinate descent method. First, the weights are determined using the prior estimation of the MUSIC spectrum. Then, the snapshots are processed using the greedy block coordinate descent method. Under the weighted condition, the proposed algorithm preferentially updates the block with the largest variation following a greedy block selection rule, then estimates the noise power and selects appropriate regularization parameters. It is further proven that the proposed GBCD-STAP algorithm is globally convergent. The simulation results demonstrate that, compared with the existing algorithms, the proposed GBCD-STAP algorithm improves both the convergence speed and the CCM estimation accuracy.
The remainder of this paper is organized as follows. Section 2 establishes the radar signal model for airborne systems and reviews the traditional MUSIC power spectrum estimation method. Section 3 presents the weighted greedy block coordinate descent algorithm and proves its global convergence. Section 4 evaluates the performance of the proposed method using simulated radar data, and Section 5 concludes the paper and outlines future research.

2. Signal Model and Related Theory

2.1. STAP Signal Model

This study assumes that an airborne pulse–Doppler early warning radar system contains a uniform linear array (ULA) with a side-looking configuration, as depicted in Figure 1. The ULA holds N array elements, with an element spacing d, which equals half the wavelength of the transmitted radar signal. During a coherent processing interval (CPI), the radar emits M pulses at a constant pulse repetition frequency (PRF). The radar platform is located at altitude H, with azimuth and elevation angles of θ and φ, respectively. The carrier platform moves in the positive direction of the x-axis at a velocity Va.
According to the Ward clutter model [4], the observed distance ring on the ground can be uniformly divided into Nc clutter blocks. Disregarding the influence of range ambiguity, the radar echo signal model can be formulated as follows:
$$\mathbf{x} = \mathbf{c} + \mathbf{n} = \sum_{k=1}^{N_c} \alpha_k \mathbf{b}(f_{d,k}, f_{s,k}) + \mathbf{n} = \sum_{k=1}^{N_c} \alpha_k \left( \mathbf{b}(f_{d,k}) \otimes \mathbf{b}(f_{s,k}) \right) + \mathbf{n},$$
where $\mathbf{c}$ and $\mathbf{n}$ denote the clutter and Gaussian white noise, respectively; the complex amplitude and the space–time steering vector of the kth clutter block are represented by $\alpha_k$ and $\mathbf{b}(f_{d,k}, f_{s,k})$, respectively. Additionally, $\mathbf{b}(f_{d,k})$ and $\mathbf{b}(f_{s,k})$ correspond to the temporal and spatial steering vectors, respectively, and their expressions are given as follows:
$$\mathbf{b}(f_{d,k}) = \left[ 1, e^{j2\pi f_{d,k}}, \ldots, e^{j2\pi (M-1) f_{d,k}} \right]^T, \quad \mathbf{b}(f_{s,k}) = \left[ 1, e^{j2\pi f_{s,k}}, \ldots, e^{j2\pi (N-1) f_{s,k}} \right]^T,$$
where fd,k and fs,k represent the normalized Doppler frequency and normalized spatial frequency of the kth clutter block, respectively, and they can be formulated as
$$f_{d,k} = \frac{2 V_a \cos\varphi_k \cos\theta_k}{\lambda f_r}, \quad f_{s,k} = \frac{d \cos\varphi_k \cos\theta_k}{\lambda},$$
where $\lambda$ is the wavelength of the transmitted signal and $f_r$ is the pulse repetition frequency.
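To make the model concrete, the following minimal NumPy sketch (not taken from the paper; the radar parameters are those of Table 1, and the 30° azimuth is an arbitrary example) builds the temporal and spatial steering vectors and combines them into the space–time steering vector via the Kronecker product.

```python
# Minimal sketch (not from the paper): space-time steering vector of one clutter
# patch for a side-looking ULA, using the parameter values of Table 1.
import numpy as np

def steering_vector(f_d, f_s, M, N):
    """Space-time steering vector b(f_d, f_s) = b(f_d) (Kronecker) b(f_s)."""
    b_t = np.exp(1j * 2 * np.pi * f_d * np.arange(M))  # temporal steering vector
    b_s = np.exp(1j * 2 * np.pi * f_s * np.arange(N))  # spatial steering vector
    return np.kron(b_t, b_s)                            # length M*N

M, N = 10, 10                                   # pulses, array elements (Table 1)
lam, d = 0.3, 0.15                              # wavelength, element spacing (m)
Va, fr = 260.0, 4000.0                          # platform speed (m/s), PRF (Hz)
theta, phi = np.deg2rad(30.0), np.deg2rad(0.0)  # example azimuth / elevation
f_d = 2 * Va * np.cos(phi) * np.cos(theta) / (lam * fr)  # normalized Doppler frequency
f_s = d * np.cos(phi) * np.cos(theta) / lam              # normalized spatial frequency
b = steering_vector(f_d, f_s, M, N)                      # shape (100,)
```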
Under the assumption that clutter and noise are uncorrelated, the definition of the CCM can be given as follows:
$$\mathbf{R} = E\left[ \mathbf{x}\mathbf{x}^H \right] = \mathbf{R}_c + \mathbf{R}_n,$$
where E[·] denotes the expectation operation; (·)H represents the conjugate transpose; and Rc and Rn, which are the clutter covariance matrix and noise covariance matrix, respectively, are defined as follows:
$$\mathbf{R}_c = E\left[ \mathbf{c}\mathbf{c}^H \right] = \sum_{k=1}^{N_c} \left| \alpha_k \right|^2 \mathbf{b}(f_{d,k}, f_{s,k}) \mathbf{b}^H(f_{d,k}, f_{s,k}), \quad \mathbf{R}_n = E\left[ \mathbf{n}\mathbf{n}^H \right] = \sigma^2 \mathbf{I}_{MN},$$
where σ2 represents the noise power, and IMN signifies the identity matrix, whose dimensions are MN × MN.
In practical applications, it is challenging to directly acquire the ideal CCM. Hence, snapshots from neighboring range rings are typically employed as training snapshots, and the CCM is estimated through the maximum likelihood method, which can be formulated as
$$\hat{\mathbf{R}} = \frac{1}{L} \sum_{l=1}^{L} \mathbf{x}_l \mathbf{x}_l^H,$$
where L denotes the number of training snapshots.
Adhering to the maximum output signal-to-clutter-plus-noise ratio (SCNR) criterion, the optimal STAP weight vector can be derived by addressing the subsequent optimization problem:
$$\min_{\mathbf{w}} \; \mathbf{w}^H \hat{\mathbf{R}} \mathbf{w} \quad \text{subject to} \quad \mathbf{w}^H \mathbf{b}(f_{d,t}, f_{s,t}) = 1,$$
where $\mathbf{b}(f_{d,t}, f_{s,t})$ denotes the space–time steering vector of the target.
To address the problem of clutter suppression performance in STAP methods being constrained by the quantity and quality of snapshots, this study uses sparse recovery methods for snapshot reconstruction, developing the SR-STAP method. The SR-STAP method makes full use of the sparse characteristics of clutter, and the received signal model of radar is expressed by
$$\mathbf{X} = \mathbf{B}\mathbf{A} + \mathbf{N},$$
where $\mathbf{B} = \left[ \mathbf{b}(f_{d,1}, f_{s,1}), \ldots, \mathbf{b}(f_{d,N_d}, f_{s,N_s}) \right]$ is the $MN \times N_sN_d$ dictionary matrix; the number of grid points in the spatial domain is $N_s = \rho_s N$, and the number of grid points in the Doppler domain is $N_d = \rho_d M$, where $\rho_s$ and $\rho_d$ denote the spatial and Doppler grid resolutions, respectively; the space–time steering vector associated with each grid point is termed an atom; $\mathbf{A} \in \mathbb{C}^{K \times L}$ is the sparse coefficient matrix, where $K = N_sN_d$ and most of the elements of $\mathbf{A}$ are zero; and $\mathbf{N} \in \mathbb{C}^{MN \times L}$ is the zero-mean Gaussian noise matrix.
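As an illustration of how the dictionary could be assembled, the sketch below discretizes both normalized frequency axes uniformly over [−0.5, 0.5) with ρs = ρd = 4, as in Section 4; the Doppler-major column ordering is our own arbitrary choice, not specified by the paper.

```python
# Sketch (assumptions: uniform grids over [-0.5, 0.5) and rho_s = rho_d = 4).
import numpy as np

M, N = 10, 10                      # pulses, array elements
rho_s, rho_d = 4, 4                # grid over-resolution factors
Ns, Nd = rho_s * N, rho_d * M      # spatial / Doppler grid sizes

fs_grid = np.linspace(-0.5, 0.5, Ns, endpoint=False)
fd_grid = np.linspace(-0.5, 0.5, Nd, endpoint=False)

def steering_vector(f_d, f_s, M, N):
    return np.kron(np.exp(1j * 2 * np.pi * f_d * np.arange(M)),
                   np.exp(1j * 2 * np.pi * f_s * np.arange(N)))

# Dictionary B: one column (atom) per angle-Doppler grid point, shape MN x NsNd.
B = np.empty((M * N, Nd * Ns), dtype=complex)
for q, fd in enumerate(fd_grid):
    for p, fs in enumerate(fs_grid):
        B[:, q * Ns + p] = steering_vector(fd, fs, M, N)
```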
According to the theory of sparse recovery, the estimation of A can be achieved by addressing the subsequent optimization problem:
$$\min_{\mathbf{A}} \left\| \mathbf{A} \right\|_0 \quad \text{s.t.} \quad \left\| \mathbf{X} - \mathbf{B}\mathbf{A} \right\|_2^2 \leq \varepsilon,$$
where $\|\cdot\|_0$ denotes the l0-norm, $\|\cdot\|_2$ denotes the l2-norm, $\varepsilon$ is the fitting-error threshold, and "s.t." abbreviates "subject to".
Nevertheless, l0-norm minimization is an NP-hard problem and cannot be solved efficiently. As a result, the l0-norm is replaced by the l1-norm, which yields the following convex optimization problem:
$$\min_{\mathbf{A}} \left\| \mathbf{A} \right\|_1 \quad \text{s.t.} \quad \left\| \mathbf{X} - \mathbf{B}\mathbf{A} \right\|_2^2 \leq \varepsilon.$$
To obtain the sparse coefficient matrix $\mathbf{A}$, the constrained problem in Equation (10) is solved in its equivalent regularized (Lagrangian) form:
$$\mathbf{A} = \arg\min_{\mathbf{A}} \left\| \mathbf{X} - \mathbf{B}\mathbf{A} \right\|_2^2 + \lambda \left\| \mathbf{A} \right\|_1,$$
where λ denotes the regularization parameter.
For the sparse optimization in Equation (11), existing methods typically rely on optimization software packages such as CVX [21]. However, this approach faces limitations when handling large-scale problems: the CVX toolbox requires substantial computational resources, including memory and processor time, which restricts its applicability in resource-constrained environments. Moreover, although there are numerous norm minimization methods, most of them exploit neither prior knowledge of the clutter covariance matrix nor the specific problem structure, and they converge slowly. To address these limitations, this paper proposes an efficient minimum l2,1-norm STAP method.

2.2. MUSIC Power Spectrum Estimation Method

The MUSIC algorithm is a classical spectral estimation method that has been widely used in array signal processing. This algorithm estimates signal direction by exploiting the orthogonality between the signal subspace and the noise subspace. In the sparse reconstruction process of the SR-STAP method, the MUSIC algorithm is often selected to enhance the estimation of the signal subspace, aiming to improve the accuracy of sparse recovery.
First, the entire eigenvector space can be divided into the clutter subspace and the noise subspace through the eigenvalue decomposition of the covariance matrix of the snapshots. The covariance matrix is denoted by $\hat{\mathbf{R}}$, and its eigenvalue decomposition is $\hat{\mathbf{R}} = \sum_{i=1}^{MN} \lambda_i \mathbf{u}_i \mathbf{u}_i^H$, where $\lambda_i$ is the ith eigenvalue and $\mathbf{u}_i$ is the corresponding eigenvector. According to the magnitudes of the eigenvalues, the eigenvectors can be divided into two parts: the first T eigenvectors (associated with the largest eigenvalues) span the clutter subspace $\mathbf{U}_S$, and the remaining (MN − T) eigenvectors span the noise subspace $\mathbf{U}_N$.
A dictionary matrix $\mathbf{B}$ can be divided into two parts: a dictionary matrix of the real clutter, denoted by $\mathbf{B}_{\zeta_0}$, and a dictionary matrix corresponding to the noise, denoted by $\mathbf{B}_{\bar{\zeta}_0}$, where $\zeta_0$ is the index set of the corresponding clutter atoms and $\bar{\zeta}_0$ represents its complement. According to the MUSIC algorithm [22], the orthogonality condition between the signal and noise subspaces is defined as follows:
$$\mathbf{U}_N^H \mathbf{B}_{\zeta_0} = \mathbf{0},$$
$$\mathbf{U}_N^H \mathbf{B}_{\bar{\zeta}_0} = \mathbf{Y},$$
where $\mathbf{Y}$ is a matrix of dimension $(MN - T) \times (N_sN_d - T)$.
Based on the MUSIC prior, the weighting parameters $\{w_i\}_{i=1}^{N_sN_d}$ are defined by
$$w_i = \frac{1}{\left\| \mathbf{U}_N^H \mathbf{b}_i \right\|_2}, \quad i = 1, \ldots, N_sN_d.$$
When the number of snapshots is sufficient, the weighting parameters satisfy $w_i \gg w_j$, where $i \in \zeta_0$ and $j \in \bar{\zeta}_0$. By selecting the weighting parameters this way, indices in the non-zero support set of a sparse signal are subjected to greater penalties, whereas indices outside this set experience smaller penalties. The weighted parameters are introduced to enhance the penalization of non-zero signal components while reducing the weights assigned to noise regions. This approach can significantly reduce the interference from noise in sparse solution estimation.
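A minimal sketch of how these weights might be computed from the L training snapshots is given below; the clutter subspace dimension T is assumed to be known (in practice it can be chosen from the eigenvalue magnitudes), and the function name is illustrative.

```python
# Sketch (assumption: T, the clutter subspace dimension, is given).
import numpy as np

def music_weights(X, B, T):
    """MUSIC-based weights w_i = 1 / ||U_N^H b_i||_2 for every dictionary atom."""
    L = X.shape[1]
    R_hat = X @ X.conj().T / L                        # sample covariance of the snapshots
    eigval, eigvec = np.linalg.eigh(R_hat)            # eigenvalues in ascending order
    U_N = eigvec[:, :-T]                              # noise subspace: MN - T smallest
    proj = np.linalg.norm(U_N.conj().T @ B, axis=0)   # ||U_N^H b_i||_2 per atom
    return 1.0 / np.maximum(proj, 1e-12)              # large on the clutter ridge
```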

3. Greedy Block Coordinate Descent Algorithm

The greedy block coordinate descent (GBCD) algorithm is an optimization method that integrates a greedy strategy with block coordinate descent. Its fundamental principle is to dynamically select the block that contributes the most to the objective function reduction for updating. This approach enhances algorithm efficiency while maintaining sparsity. In the proposed weighted STAP algorithm, the GBCD method is used to efficiently solve the weighted sparse recovery problem, optimizing the CCM estimation. Compared with the traditional block coordinate descent (BCD) algorithm, the GBCD algorithm introduces two key improvements. First, it updates only the block that has the greatest impact on sparsity in each iteration, thus avoiding complex computation of irrelevant blocks, which can significantly reduce both the number of iterations and computational complexity. Second, it leverages the prior information obtained from the MUSIC algorithm to weight the objective function, enhancing the penalty weights for non-zero signal components and thus further promoting sparse solution generation. Using a greedy selection strategy and rapid updates of critical blocks, this method reduces convergence time and can accurately recover the sparse structure of the clutter spectrum. It has been demonstrated that, despite using the greedy strategy, the proposed algorithm can still guarantee global convergence.

3.1. GBCD Operational Principle

Initially, the objective function delineated in Equation (11) is expanded to accommodate the multi-measurement vector (MMV) context, and it can be articulated as follows:
$$\min_{\mathbf{A}} \; \frac{1}{2} \left\| \mathbf{X} - \mathbf{B}\mathbf{A} \right\|_F^2 + \lambda \left\| \mathbf{A} \right\|_{2,1},$$
where $\|\cdot\|_F$ denotes the matrix Frobenius norm, and $\|\mathbf{A}\|_{2,1} = \sum_{i=1}^{N_sN_d} \|\boldsymbol{\alpha}_i\|_2$ is the l2,1-norm of matrix $\mathbf{A}$, with $\boldsymbol{\alpha}_i$ denoting the ith row of $\mathbf{A}$.
Next, this objective function can be decomposed as follows:
$$F(\mathbf{A}) = G(\mathbf{A}) + H(\mathbf{A}),$$
where $G(\mathbf{A}) = \frac{1}{2}\|\mathbf{X} - \mathbf{B}\mathbf{A}\|_F^2$ and $H(\mathbf{A}) = \lambda \|\mathbf{A}\|_{2,1} = \lambda \sum_{i=1}^{N_sN_d} \|\boldsymbol{\alpha}_i\|_2$.
In each iteration, a quadratic approximation is constructed around the matrix $\mathbf{A}^{(k)}$ obtained in the previous iteration, yielding an approximation of $F(\mathbf{A})$ that can be formulated as follows:
$$F(\mathbf{A}) \approx G(\mathbf{A}^{(k)}) + \left( \nabla G(\mathbf{A}^{(k)}) \right)^H \left( \mathbf{A} - \mathbf{A}^{(k)} \right) + \frac{1}{2\beta} \left\| \mathbf{A} - \mathbf{A}^{(k)} \right\|_F^2 + H(\mathbf{A}) = \sum_{i=1}^{N_sN_d} \left( \frac{1}{2\beta} \left\| \boldsymbol{\alpha}_i - \mathbf{p}_i^{(k)} \right\|_2^2 + \lambda \left\| \boldsymbol{\alpha}_i \right\|_2 \right) + c^{(k)},$$
where $\mathbf{P}^{(k)} = \mathbf{A}^{(k)} - \beta \nabla G(\mathbf{A}^{(k)}) = \mathbf{A}^{(k)} - \beta \mathbf{B}^H \left( \mathbf{B}\mathbf{A}^{(k)} - \mathbf{X} \right)$; $\boldsymbol{\alpha}_i$ and $\mathbf{p}_i^{(k)}$ denote the ith rows of $\mathbf{A}$ and $\mathbf{P}^{(k)}$, respectively; and $c^{(k)} = G(\mathbf{A}^{(k)}) - \frac{\beta}{2} \left\| \nabla G(\mathbf{A}^{(k)}) \right\|_F^2$, where $\beta = 1 / \|\mathbf{b}_i\|_2^2$ and $\mathbf{b}_i$ is a column of $\mathbf{B}$.
This choice is possible because, in the CCM estimation problem of STAP, all columns of the dictionary matrix $\mathbf{B}$ have the same norm, so a single step size $\beta = 1 / \|\mathbf{b}_i\|_2^2$, $i = 1, \ldots, N_sN_d$, applies to every block.
The standard BCD algorithm updates $\mathbf{A}^{(k+1)}$ by minimizing this quadratic approximation, whose separable structure allows the blocks to be minimized in parallel. According to the soft-thresholding algorithm [23], the solution to the ith subproblem can be expressed as follows:
$$\boldsymbol{\alpha}_i^{(k)} = \frac{\mathbf{p}_i^{(k)}}{\left\| \mathbf{p}_i^{(k)} \right\|_2} \max\left( 0, \left\| \mathbf{p}_i^{(k)} \right\|_2 - \lambda\beta \right).$$
In the CCM estimation process, a highly sparse signal needs to be recovered. However, in each iterative optimization step, the conventional BCD algorithm also updates blocks (i.e., rows) that should remain zero during its sweep, without explicitly prioritizing sparsity.
This study proposes a weighted GBCD algorithm to assign higher iterative priority to potential non-zero signal components and suppress those corresponding to noise while maintaining sparsity. The proposed algorithm updates only the blocks that yield the maximum descent after weighting rather than directly updating all blocks, which can be expressed as follows:
$$i_0 = \arg\max_i \; \Delta \tilde{u}_i = \arg\max_i \; w_i \left\| \boldsymbol{\alpha}_i^{(k+1)} - \boldsymbol{\alpha}_i^{(k)} \right\|_2, \quad i = 1, \ldots, N_sN_d,$$
where $\Delta \tilde{u}_i = w_i \| \boldsymbol{\alpha}_i^{(k+1)} - \boldsymbol{\alpha}_i^{(k)} \|_2$ is the weighted distance descended by the ith block, and $w_i = 1 / \| \mathbf{U}_N^H \mathbf{b}_i \|_2$, $i = 1, 2, \ldots, N_sN_d$, is the weight parameter obtained from the MUSIC prior.
Therefore, the proposed algorithm avoids updating unnecessary blocks in each iteration, which enhances its efficiency. Meanwhile, the weighted GBCD algorithm ensures accurate identification and updating of the blocks that significantly affect sparsity. As a result, superior clutter suppression and target detection performance are achieved in STAP.
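To make the update step concrete, the following sketch (function and variable names are ours, not the paper's) computes the proximal candidates of all blocks by row-wise soft thresholding and then writes back only the block with the largest weighted change.

```python
# Sketch of one weighted GBCD iteration (names are illustrative, not from the paper).
import numpy as np

def gbcd_step(A, B, X, w, lam, beta):
    """Update only the block (row of A) with the largest weighted change."""
    P = A - beta * B.conj().T @ (B @ A - X)           # gradient step, P = A - beta*grad G
    row_norms = np.linalg.norm(P, axis=1)
    shrink = np.maximum(0.0, row_norms - lam * beta)  # row-wise soft threshold
    scale = np.divide(shrink, row_norms, out=np.zeros_like(row_norms),
                      where=row_norms > 0)
    A_cand = scale[:, None] * P                       # candidate update of every block
    comp = w * np.linalg.norm(A_cand - A, axis=1)     # weighted descent distance per block
    i0 = int(np.argmax(comp))                         # greedy block selection
    A_new = A.copy()
    A_new[i0, :] = A_cand[i0, :]                      # update only the selected block
    return A_new, i0
```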
Finally, the noise power $\sigma^2$ is estimated, and an appropriate regularization parameter $\lambda$ is selected. According to the conclusions of existing research [24], $\lambda$ should be set to $\sigma^2$; this choice is also verified experimentally in Section 4.7. Rather than using the individual estimates directly, the noise power is taken as the mean of the estimated per-channel noise power values [25]. In particular, for different channels and pulses, the noise is independent and identically distributed. According to the existing findings [26], the noise power can be estimated as follows:
$$\sigma_h^2 = \left\| \frac{\mathbf{b}_h^H \mathbf{R}^{-1} \mathbf{X}}{\mathbf{b}_h^H \mathbf{R}^{-1} \mathbf{b}_h} \right\|_2^2, \quad h = 1, \ldots, NM,$$
where $\mathbf{b}_h$ denotes the hth column of the $MN \times MN$ identity matrix, and $\mathbf{R}$ is expressed as follows:
$$\mathbf{R} = \frac{1}{L} \sum_{l=1}^{L} \sum_{k=1}^{K} \left| \alpha_{k,l} \right|^2 \mathbf{b}_k \mathbf{b}_k^H + \sigma^2 \mathbf{I}.$$
The mean of the calculated noise power values is regarded as the final estimate of the noise power. This approach can mitigate the impact of the stochastic nature of noise estimation, enhancing the estimation accuracy. Because the noise power calculation requires estimating A, the proposed method must operate in an iterative manner.
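A sketch of this noise-power update is shown below, under the assumption that the current sparse estimate A, the dictionary B, and the snapshots X are available; b_h is the hth column of the identity matrix, as in the text.

```python
# Sketch of the noise power update (assumes the current estimate A is available).
import numpy as np

def estimate_noise_power(A, B, X, sigma2):
    """Reconstruct R from the current A, then average the per-channel estimates."""
    MN = B.shape[0]
    power = np.mean(np.abs(A) ** 2, axis=1)            # (1/L) * sum_l |alpha_{k,l}|^2
    R = (B * power) @ B.conj().T + sigma2 * np.eye(MN)  # sum_k p_k b_k b_k^H + sigma2*I
    Rinv = np.linalg.inv(R)
    RinvX = Rinv @ X
    sigma_h = np.empty(MN)
    for h in range(MN):
        num = RinvX[h, :]                # b_h^H R^{-1} X, with b_h = e_h
        den = Rinv[h, h].real            # b_h^H R^{-1} b_h
        sigma_h[h] = np.linalg.norm(num / den) ** 2
    return float(np.mean(sigma_h))       # final noise power estimate
```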

3.2. GBCD Global Convergence

This section elucidates the global convergence of the weighted GBCD algorithm by leveraging the convergence theory of the block coordinate gradient descent framework and integrating it with the characteristics of the proposed greedy block selection rule.
Based on the following two known lemmas [27], it can be proven that the algorithm converges when the objective function is coercive and the update process satisfies the modified Armijo criterion and the Gauss–Southwell-r rule.
Lemma 1.
If each iteration satisfies the modified Armijo criterion and the Gauss–Southwell-r rule, then every cluster point of the sequence $\tilde{\boldsymbol{\alpha}}^{(k)}$ is a stationary point of $f(\tilde{\boldsymbol{\alpha}})$ [28].
Lemma 2.
If $f(\tilde{\boldsymbol{\alpha}})$ is coercive (i.e., $\lim_{\|\tilde{\boldsymbol{\alpha}}\| \to \infty} f(\tilde{\boldsymbol{\alpha}}) = \infty$) and each iteration satisfies the modified Armijo criterion, then the sequence $\tilde{\boldsymbol{\alpha}}^{(k)}$ converges to a limit point.
The proof of global convergence for the proposed weighted GBCD algorithm is outlined below. Initially, it is established that this algorithm adheres to the modified Armijo condition and the Gauss–Southwell-r criterion. By reformulating the objective function F ( A ) = G ( A ) + H ( A ) in Equation (16) into a vectorized variable format, the objective function is then defined as
$$F(\tilde{\boldsymbol{\alpha}}) = G(\tilde{\boldsymbol{\alpha}}) + H(\tilde{\boldsymbol{\alpha}}) = \frac{1}{2} \left\| \tilde{\mathbf{X}} - \tilde{\mathbf{B}} \tilde{\boldsymbol{\alpha}} \right\|_2^2 + \lambda \sum_{i=1}^{N_sN_d} \left\| \tilde{\boldsymbol{\alpha}}_{\xi_i} \right\|_2,$$
where $\xi_i = \{ i, i + N_sN_d, \ldots, i + (L-1)N_sN_d \}$ denotes the index set of the ith block and $\mathcal{M} = \{ 1, 2, \ldots, LN_sN_d \}$ denotes the full index set.
In the kth iteration, the weighted GBCD algorithm finds a descent direction $\tilde{\mathbf{u}}^{(k)} = \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_{i_0})$, which is defined by
$$\tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_{i_0}) \triangleq \arg\min_{\tilde{\mathbf{u}}} \left\{ G(\tilde{\boldsymbol{\alpha}}^{(k)}) + \left( \nabla G(\tilde{\boldsymbol{\alpha}}^{(k)}) \right)^H \tilde{\mathbf{u}} + \frac{1}{2\beta} \tilde{\mathbf{u}}^H \tilde{\mathbf{u}} + H(\tilde{\boldsymbol{\alpha}}^{(k)} + \tilde{\mathbf{u}}) \;\middle|\; \tilde{u}_j = 0, \; j \notin \xi_{i_0} \right\},$$
where $i_0 = \arg\max_i \; w_i \left\| \tilde{\boldsymbol{\alpha}}_{\xi_i}^{(k+1)} - \tilde{\boldsymbol{\alpha}}_{\xi_i}^{(k)} \right\|_2 = \arg\max_i \; w_i \left\| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_i) \right\|_2$.
For any $i$, $\beta = 1 / \|\mathbf{b}_i\|_2^2$. Let $\mathbf{H}_0 = \nabla^2 G(\tilde{\boldsymbol{\alpha}}) = \tilde{\mathbf{B}}^H \tilde{\mathbf{B}}$ denote the actual Hessian matrix; following reference [27], its principal submatrix associated with the index set $\xi_i$ is given by
$$\left[ \mathbf{H}_0 \right]_{\xi_i \xi_i} = \left\| \mathbf{b}_i \right\|_2^2 \mathbf{I}_L = \frac{1}{\beta} \mathbf{I}_L.$$
In the kth iteration, only the variables associated with the index set $\xi_{i_0}$ are updated. Therefore, the Hessian matrix can be directly substituted into Equation (23), yielding the following expression:
$$\tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_{i_0}) \triangleq \arg\min_{\tilde{\mathbf{u}}} \left\{ G(\tilde{\boldsymbol{\alpha}}^{(k)}) + \left( \nabla G(\tilde{\boldsymbol{\alpha}}^{(k)}) \right)^H \tilde{\mathbf{u}} + \frac{1}{2} \tilde{\mathbf{u}}^H \mathbf{H}_0 \tilde{\mathbf{u}} + H(\tilde{\boldsymbol{\alpha}}^{(k)} + \tilde{\mathbf{u}}) \;\middle|\; \tilde{u}_j = 0, \; j \notin \xi_{i_0} \right\}.$$
The criterion mentioned previously is employed to choose the descent direction for the kth iteration. Since $\mathbf{H}_0$ is the actual Hessian matrix, leveraging the convexity of $G(\tilde{\boldsymbol{\alpha}})$ yields the following inequality:
$$G(\tilde{\boldsymbol{\alpha}}^{(k)} + \tilde{\mathbf{u}}^{(k)}) \leq G(\tilde{\boldsymbol{\alpha}}^{(k)}) + \left( \nabla G(\tilde{\boldsymbol{\alpha}}^{(k)}) \right)^H \tilde{\mathbf{u}}^{(k)} + \frac{1}{2} \left( \tilde{\mathbf{u}}^{(k)} \right)^H \mathbf{H}_0 \tilde{\mathbf{u}}^{(k)}.$$
According to $F(\tilde{\boldsymbol{\alpha}}) = G(\tilde{\boldsymbol{\alpha}}) + H(\tilde{\boldsymbol{\alpha}})$, it holds that
$$\begin{aligned} F(\tilde{\boldsymbol{\alpha}}^{(k)} + \tilde{\mathbf{u}}^{(k)}) &\leq F(\tilde{\boldsymbol{\alpha}}^{(k)}) + \left( \nabla G(\tilde{\boldsymbol{\alpha}}^{(k)}) \right)^H \tilde{\mathbf{u}}^{(k)} + \frac{1}{2} \left( \tilde{\mathbf{u}}^{(k)} \right)^H \mathbf{H}_0 \tilde{\mathbf{u}}^{(k)} + H(\tilde{\boldsymbol{\alpha}}^{(k)} + \tilde{\mathbf{u}}^{(k)}) - H(\tilde{\boldsymbol{\alpha}}^{(k)}) \\ &\leq F(\tilde{\boldsymbol{\alpha}}^{(k)}) + \eta \left[ \left( \nabla G(\tilde{\boldsymbol{\alpha}}^{(k)}) \right)^H \tilde{\mathbf{u}}^{(k)} + \frac{1}{2} \left( \tilde{\mathbf{u}}^{(k)} \right)^H \mathbf{H}_0 \tilde{\mathbf{u}}^{(k)} + H(\tilde{\boldsymbol{\alpha}}^{(k)} + \tilde{\mathbf{u}}^{(k)}) - H(\tilde{\boldsymbol{\alpha}}^{(k)}) \right], \end{aligned}$$
where $0 < \eta < 1$.
Furthermore, according to Lemma 1, for any $0 \leq \gamma < 1$, the following inequality is satisfied:
$$\left( \nabla G(\tilde{\boldsymbol{\alpha}}^{(k)}) \right)^H \tilde{\mathbf{u}}^{(k)} + \gamma \left( \tilde{\mathbf{u}}^{(k)} \right)^H \mathbf{H}_0 \tilde{\mathbf{u}}^{(k)} + H(\tilde{\boldsymbol{\alpha}}^{(k)} + \tilde{\mathbf{u}}^{(k)}) - H(\tilde{\boldsymbol{\alpha}}^{(k)}) \leq 0.$$
It can be shown that Equation (27) represents a modified Armijo criterion with $\delta^{(k)} = 1$ and $\gamma = \frac{1}{2}$.
Next, it can be proven that the update rule $i_0 = \arg\max_i w_i \| \tilde{\boldsymbol{\alpha}}_{\xi_i}^{(k+1)} - \tilde{\boldsymbol{\alpha}}_{\xi_i}^{(k)} \|_2 = \arg\max_i w_i \| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_i) \|_2$ fulfills the modified Gauss–Southwell-r criterion.
In the kth iteration, the objective function is approximated in the following manner:
$$F(\tilde{\boldsymbol{\alpha}}) \approx G(\tilde{\boldsymbol{\alpha}}^{(k)}) + \left( \nabla G(\tilde{\boldsymbol{\alpha}}^{(k)}) \right)^H \left( \tilde{\boldsymbol{\alpha}} - \tilde{\boldsymbol{\alpha}}^{(k)} \right) + \frac{1}{2} \left( \tilde{\boldsymbol{\alpha}} - \tilde{\boldsymbol{\alpha}}^{(k)} \right)^H \mathbf{H}_0 \left( \tilde{\boldsymbol{\alpha}} - \tilde{\boldsymbol{\alpha}}^{(k)} \right) + H(\tilde{\boldsymbol{\alpha}}),$$
which is completely separable across the blocks $\xi_i = \{ i, i + N_sN_d, \ldots, i + (L-1)N_sN_d \}$, $i = 1, \ldots, N_sN_d$; hence, it holds that
$$\left\| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_{i_0}) \right\|_2 \geq \left\| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_i) \right\|_2, \quad \forall i \neq i_0.$$
Next, combining Equation (30) with the following equation:
$$\left\| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \mathcal{M}) \right\|_2^2 = \sum_{i=1}^{N_sN_d} \left\| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_i) \right\|_2^2,$$
yields
$$\left\| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_{i_0}) \right\|_2 \geq \frac{1}{\sqrt{N_sN_d}} \left\| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \mathcal{M}) \right\|_2,$$
where the corresponding constant is $V = \frac{1}{\sqrt{N_sN_d}} > 0$.
Additionally, based on $i_0 = \arg\max_i w_i \| \boldsymbol{\alpha}_i^{(k+1)} - \boldsymbol{\alpha}_i^{(k)} \|_2$, it can be obtained that $w_{i_0} \| \boldsymbol{\alpha}_{i_0}^{(k+1)} - \boldsymbol{\alpha}_{i_0}^{(k)} \|_2 \geq \min_i w_i \cdot \max_i \| \boldsymbol{\alpha}_i^{(k+1)} - \boldsymbol{\alpha}_i^{(k)} \|_2$, which in turn yields $\| \boldsymbol{\alpha}_{i_0}^{(k+1)} - \boldsymbol{\alpha}_{i_0}^{(k)} \|_2 \geq \frac{\min_i w_i}{\max_i w_i} \max_i \| \boldsymbol{\alpha}_i^{(k+1)} - \boldsymbol{\alpha}_i^{(k)} \|_2$. Combining this result with the inequality above yields
$$\left\| \boldsymbol{\alpha}_{i_0}^{(k+1)} - \boldsymbol{\alpha}_{i_0}^{(k)} \right\|_2 = \left\| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \xi_{i_0}) \right\|_2 \geq V \left\| \tilde{\mathbf{u}}_H(\tilde{\boldsymbol{\alpha}}^{(k)}, \mathcal{M}) \right\|_2,$$
where $V = \frac{1}{\sqrt{N_sN_d}} \frac{\min_i w_i}{\max_i w_i} > 0$.
Consequently, the update approach outlined in Equation (19) adheres to the modified Gauss–Southwell-r criterion.
In summary, since the objective function given in Equation (15) is coercive and the proposed weighted block coordinate gradient descent algorithm has been proven to satisfy both the modified Armijo condition and the modified Gauss–Southwell-r condition, Lemmas 1 and 2 both apply. According to Lemma 1, every cluster point of $\tilde{\boldsymbol{\alpha}}^{(k)}$ is a stationary point of $f(\tilde{\boldsymbol{\alpha}})$; additionally, according to Lemma 2, there exists only one cluster point, which is the limit point. Furthermore, as Equation (15) represents a convex optimization problem, its stationary point corresponds to the global optimum.
Consequently, the aforementioned conclusions demonstrate the global convergence of the weighted GBCD algorithm.

3.3. STAP Weight Calculation

After the CCM has been estimated, the adaptive filter weights can be computed as
$$\mathbf{w} = \frac{\mathbf{R}^{-1} \mathbf{b}(f_{d,t}, f_{s,t})}{\mathbf{b}^H(f_{d,t}, f_{s,t}) \mathbf{R}^{-1} \mathbf{b}(f_{d,t}, f_{s,t})}.$$
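A short sketch of this final filtering step is given below; the target space–time steering vector b_t and the test snapshot are assumed to be available, and the names are illustrative.

```python
# Sketch: adaptive weight vector and filtering of a test snapshot.
import numpy as np

def stap_weights(R, b_t):
    """w = R^{-1} b_t / (b_t^H R^{-1} b_t), the MVDR-type STAP weight vector."""
    Rinv_b = np.linalg.solve(R, b_t)        # R^{-1} b_t without forming the inverse
    return Rinv_b / (b_t.conj() @ Rinv_b)

# Example usage (x_test assumed available):
#   w = stap_weights(R, b_t)
#   output_power = np.abs(w.conj() @ x_test) ** 2
```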
The pseudo-code of the proposed GBCD-STAP algorithm is presented in Algorithm 1.
Algorithm 1. Workflow of the GBCD-STAP algorithm
Initialization: $\mathbf{A} = \mathbf{0}_{N_sN_d \times L}$, $\sigma^2 = 1$, $\beta = 1 / \| \mathbf{b}(f_{d,1}, f_{s,1}) \|_2^2$
1. Estimate the weights $w_i$ from $\hat{\mathbf{R}} = \frac{1}{L} \mathbf{X}\mathbf{X}^H$, $\hat{\mathbf{R}} = \mathbf{U}_S \boldsymbol{\Sigma}_S \mathbf{U}_S^H + \mathbf{U}_N \boldsymbol{\Sigma}_N \mathbf{U}_N^H$, and $w_i = 1 / \| \mathbf{U}_N^H \mathbf{b}_i \|_2$, $i = 1, \ldots, N_sN_d$;
2. Compute the block update priority metric comp using $\mathbf{p}_i = \boldsymbol{\alpha}_i - \beta \mathbf{b}_i^H (\mathbf{B}\mathbf{A} - \mathbf{X})$, $\boldsymbol{\alpha}_i^{(k)} = \frac{\mathbf{p}_i}{\|\mathbf{p}_i\|_2} \max(0, \|\mathbf{p}_i\|_2 - \lambda\beta)$, and $\mathrm{comp}(i) = w_i \| \boldsymbol{\alpha}_i^{(k)} - \boldsymbol{\alpha}_i \|_2$;
3. Select the block to be updated according to $\mathrm{comp}(i_0) = \max_i \mathrm{comp}(i)$;
4. Update the block selected in Step 3, i.e., $\mathbf{A}^{(k)} \leftarrow [\boldsymbol{\alpha}_1; \ldots; \boldsymbol{\alpha}_{i_0-1}; \boldsymbol{\alpha}_{i_0}^{(k)}; \boldsymbol{\alpha}_{i_0+1}; \ldots; \boldsymbol{\alpha}_{N_sN_d}]$, to obtain $\mathbf{A}$;
5. Reconstruct $\mathbf{R} = \frac{1}{L} \sum_{l=1}^{L} \sum_{k=1}^{K} |\alpha_{k,l}|^2 \mathbf{b}_k \mathbf{b}_k^H + \sigma^2 \mathbf{I}$;
6. Update the noise power using $\sigma_h^2 = \| \mathbf{b}_h^H \mathbf{R}^{-1} \mathbf{X} / (\mathbf{b}_h^H \mathbf{R}^{-1} \mathbf{b}_h) \|_2^2$, $h = 1, \ldots, NM$, and $\sigma^2 = \mathrm{mean}(\sigma_h^2)$;
Steps 2–6 are repeated until the stopping condition $\left| (\sigma^2)^{(k+1)} - (\sigma^2)^{(k)} \right| / (\sigma^2)^{(k)} \leq \psi$ is satisfied.
Output: $\mathbf{R}$
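For reference, a compact driver corresponding to Algorithm 1 might look as follows. This is a sketch that assumes the helper functions from the earlier sketches (music_weights, gbcd_step, and estimate_noise_power) are in scope; the clutter subspace dimension T, the tolerance ψ, and the iteration cap are illustrative choices.

```python
# Sketch of the overall GBCD-STAP loop of Algorithm 1 (assumes music_weights,
# gbcd_step, and estimate_noise_power from the earlier sketches are in scope).
import numpy as np

def gbcd_stap(X, B, T, psi=1e-3, max_iter=500):
    MN, K = B.shape
    L = X.shape[1]
    A = np.zeros((K, L), dtype=complex)         # initialization: A = 0
    sigma2 = 1.0                                # initialization: sigma^2 = 1
    beta = 1.0 / np.linalg.norm(B[:, 0]) ** 2   # step size, identical for all atoms
    w = music_weights(X, B, T)                  # Step 1: MUSIC-based weights
    for _ in range(max_iter):
        A, _ = gbcd_step(A, B, X, w, lam=sigma2, beta=beta)   # Steps 2-4 (lambda = sigma^2)
        sigma2_new = estimate_noise_power(A, B, X, sigma2)    # Steps 5-6
        converged = abs(sigma2_new - sigma2) / sigma2 <= psi  # stopping criterion
        sigma2 = sigma2_new
        if converged:
            break
    power = np.mean(np.abs(A) ** 2, axis=1)
    R = (B * power) @ B.conj().T + sigma2 * np.eye(MN)        # reconstructed CCM
    return R
```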

4. Proposed Algorithm Simulation Analysis

The efficacy of the proposed algorithm was evaluated via simulation experiments. A comparative analysis was conducted between the proposed algorithm and several established algorithms, including the SR-STAP algorithm [29], the SBL-STAP algorithm [30], and the IAA-based SR-STAP algorithm [31]. The number of training snapshots used for each algorithm was set to 20, and the grid resolutions were set to $\rho_d = 4$ and $\rho_s = 4$. The relevant simulation parameters are shown in Table 1. All experimental results were obtained as an average of 100 Monte Carlo trials.

4.1. Clutter Power Spectrum Analysis

First, the Capon spectra of each algorithm were obtained. The Capon spectrum was derived based on the estimated clutter covariance matrix, providing an accurate representation of the true clutter power. The Capon spectra of the algorithms are presented in Figure 2. Figure 2a illustrates that the clutter spectrum generated by the SR-STAP algorithm is predominantly focused on the clutter ridge. Nevertheless, the estimated clutter energy is relatively low, which is not favorable for subsequent clutter suppression. Furthermore, as shown in Figure 2b, the SBL-STAP algorithm used the prior information of the signal and a Bayesian parameter estimation model to determine the posterior probability of unknown parameters, demonstrating superior reconstruction performance. However, the clutter spectrum exhibited significant broadening, which might potentially submerge low-velocity targets within the widened clutter. As presented in Figure 2c, the IAA-based SR-STAP algorithm, which used the IAA spectrum as weighting values, also faced significant broadening of the clutter ridge. However, as shown in Figure 2d, the clutter power spectrum obtained by the proposed GBCD-STAP algorithm was concentrated on the clutter ridges. The distribution of the clutter ridges generated by the GBCD-STAP algorithm was more centralized than that of the other algorithms, with a notable reduction in broadening and higher clutter power spectrum values. These characteristics were conducive to subsequent clutter suppression.

4.2. Output Signal-to-Clutter-Plus-Noise Ratio Loss Analysis

The second experiment was conducted to investigate the output signal-to-clutter-plus-noise-ratio loss (SCNRloss) performance of the algorithms. The SCNRloss was defined as a ratio of the output SCNR value of an algorithm to the output signal-to-noise ratio (SNR) of the matched filter in a noise-only environment as follows:
$$\mathrm{SCNR}_{\mathrm{loss}} = \frac{\sigma_n^2 \left| \mathbf{w}^H \mathbf{b}(f_{d,t}, f_{s,t}) \right|^2}{\left\| \mathbf{b}(f_{d,t}, f_{s,t}) \right\|_2^2 \, \mathbf{w}^H \mathbf{R} \mathbf{w}},$$
where w denotes the estimated filter weight vector, and R denotes the true CCM.
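A sketch of how this metric could be evaluated in simulation is given below; unit noise power and the availability of the true covariance matrix R are assumptions for illustration.

```python
# Sketch: SCNR loss of a weight vector w against the true covariance matrix R
# (sigma_n^2 = 1 is an assumption for illustration).
import numpy as np

def scnr_loss_db(w, b_t, R, sigma_n2=1.0):
    num = sigma_n2 * np.abs(w.conj() @ b_t) ** 2
    den = (np.linalg.norm(b_t) ** 2) * (w.conj() @ R @ w).real
    return 10.0 * np.log10(num / den)        # SCNR loss in dB
```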
The SCNRloss curve was selected for the analysis in this study because it reflects the clutter suppression performance of an algorithm. The variations in the output SCNRloss of each algorithm with the Doppler frequency are presented in Figure 3, where it can be seen that the SR-STAP algorithm, despite having a narrow notch, exhibited a relatively shallow depth of suppression, which indicated good performance in the sidelobe region but insufficient clutter suppression capability in the main lobe. The SCNRloss curves of the SBL-STAP and IAA-based SR-STAP algorithms showed varying degrees of notch broadening, which resulted in poorer clutter suppression effects. The SCNRloss curve of the proposed GBCD-STAP algorithm had a narrower and deeper notch than those of the other algorithms, demonstrating its superior clutter suppression performance.

4.3. Target Detection Performance Analysis

In the third experiment, the moving target was detected by filtering the test snapshots, and the output power was subsequently calculated. Each algorithm was used to filter the snapshots containing 100 range cells, with the target positioned in the 151st range cell. The output power results obtained by different algorithms are presented in Figure 4. It is evident from the figure that all the algorithms were capable of successfully detecting the target situated in the 151st range cell. However, the proposed GBCD-STAP algorithm exhibited a significantly lower residual clutter output power than the other algorithms. Consequently, the GBCD-STAP algorithm exhibited superior performance in detecting moving targets.

4.4. Convergence Speed Analysis

This experiment assessed the convergence speed of the proposed GBCD-STAP algorithm. The variations in the computation time with the number of array elements for different SR-STAP algorithms are presented in Figure 5. In this experiment, the number of pulses was set equal to the number of array elements, and the simulations were performed on a computer equipped with an Intel(R) Core(TM) i5-8300H CPU @ 2.30 GHz processor. As shown in Figure 5, the computation times of the IAA-based SR-STAP and SBL-STAP algorithms were greater than that of the proposed algorithm. Because the IAA-based SR-STAP algorithm relies on the computationally intensive IAA spectrum for its weighting values, its computational complexity was high and its convergence speed was slow. The SBL-STAP algorithm required calculating the covariance matrix and its inverse for the sparse coefficients, whereas the proposed algorithm only calculated the covariance matrix and its inverse for the received data; because the dimension of the sparse coefficients is much larger than the DOF of the received data, the SBL-STAP algorithm had higher complexity than the proposed GBCD-STAP algorithm. Although the computation time of the SR-STAP method grew relatively slowly with the number of array elements, its shorter convergence time came at the cost of degraded target detection performance. Hence, from the perspective of overall performance, the proposed algorithm offered the better convergence behavior.

4.5. Computational Complexity Analysis

This section analyses the computational complexity of the SBL-STAP, IAA-based SR-STAP, and GBCD-STAP algorithms. Table 2 lists the number of complex multiplications in one iteration for L snapshots, where J = MN and P = NsNd. As can be seen from the table, the computational complexity of the GBCD-STAP algorithm is significantly lower than that of the other algorithms. This is because the proposed GBCD-STAP algorithm updates only the block with the greatest impact on sparsity in a single iteration, avoiding the unnecessary block updates incurred by the traditional block coordinate descent algorithm and thus improving both efficiency and accuracy. In conclusion, the GBCD-STAP algorithm effectively reduces the computational complexity.

4.6. Effect of Off-Grid on GBCD-STAP Performance

Figure 6a illustrates the Capon spectrum of the GBCD-STAP algorithm in the off-grid case, and Figure 6b illustrates the Capon spectrum under ideal (on-grid) conditions. Although the proposed GBCD-STAP algorithm achieves significant improvements in overall performance and computational complexity, the experimental results show that its performance is still affected to some extent when the clutter is off-grid. Therefore, addressing the off-grid problem is crucial for further improving the algorithm's efficiency and robustness.

4.7. Analysis of the Selection of Regularization Parameters

The SCNRloss curves of the proposed algorithm with λ set to 0.05σ2, 0.1σ2, 0.5σ2, σ2, and 1.5σ2 are compared, and the results are given in Figure 7. It can be seen that varying the regularization parameter λ relative to the estimated noise power σ2 only mildly affects the final outcome, and the best performance is obtained when λ is set equal to σ2. This indicates that, while the selection of λ and σ2 matters for the algorithm's performance, the algorithm is robust to variations in these parameters. Furthermore, the best SCNRloss is achieved when λ equals σ2, offering a practical reference point for parameter tuning in applications.

5. Conclusions

Aiming at the clutter suppression problem encountered by airborne radar SR-STAP methods, this study has proposed a weighted STAP algorithm based on the GBCD method. First, the prior estimates obtained by the MUSIC algorithm are used as weighting factors and combined with the GBCD method. This approach prioritizes updating the blocks that have the most significant impact on the sparse representation instead of updating all the blocks in each iteration, thereby avoiding the unnecessary block updates of the traditional block coordinate descent algorithm. As a result, both the efficiency and accuracy of the algorithm are enhanced. Second, the estimation accuracy and convergence speed of the clutter covariance matrix are improved by reasonably estimating the noise power and selecting appropriate regularization parameters. The proposed method was validated through simulation experiments and compared with several existing methods. The simulation results demonstrate that, in contrast to the existing algorithms, the proposed algorithm exhibits superior performance in detecting moving targets and possesses better real-time capability. However, this paper does not analyze the mismatch caused by array amplitude-phase errors or the off-grid phenomenon. In future work, error correction can be carried out by integrating a disturbance model into the sparse reconstruction stage, which can compensate for the steering-vector deviation caused by grid mismatch; another option is to use a neural network to learn the distribution characteristics of off-grid errors and update the dictionary matrix in real time.

Author Contributions

Conceptualization, Z.G. and N.Y.; methodology, N.Y.; software, N.Y.; validation, Z.G. and N.Y.; formal analysis, Z.G.; investigation, Z.G., N.Y., and Z.W.; resources, Z.G. and N.Y.; data curation, Z.G. and N.Y.; writing—original draft preparation, N.Y.; writing—review and editing, Z.G. and Z.W.; visualization, N.Y.; supervision, W.X., W.T., and Z.W.; project administration, Z.G. and Z.W.; funding acquisition, Z.G. and N.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China under grant number 61761037, the Inner Mongolia Science and Technology Plan Project under grant number 2025SKYPT0044, and the Basic Scientific Research Business Cost Project of Colleges directly under the Inner Mongolia under grant number ZTY2025034.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Brennan, L.E.; Reed, L. Theory of adaptive radar. IEEE Trans. Aerosp. Electron. Syst. 1973, AES-9, 237–252. [Google Scholar] [CrossRef]
  2. Guerci, J.R. Space-Time Adaptive Processing for Radar; Artech House: London, UK, 2014. [Google Scholar]
  3. Klemm, R. Principles of Space-Time Adaptive Processing; IET: Birmingham, UK, 2002. [Google Scholar]
  4. Ward, J. Space-time adaptive processing for airborne radar. In Proceedings of the IEE Colloquium on Space-Time Adaptive Processing, London, UK, 6 April 1998; pp. 2/1–2/6. [Google Scholar]
  5. Reed, I.S.; Mallett, J.D.; Brennan, L.E. Rapid convergence rate in adaptive arrays. IEEE Trans. Aerosp. Electron. Syst. 1974, AES-10, 853–863. [Google Scholar] [CrossRef]
  6. Maria, S.; Fuchs, J.-J. Application of the global matched filter to STAP data: An efficient algorithmic approach. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, 14–19 May 2006; p. IV. [Google Scholar]
  7. Zhang, S.; Wang, T.; Liu, C.; Wang, D. A space-time adaptive processing method based on sparse Bayesian learning for maneuvering airborne radar. Sensors 2022, 22, 5479. [Google Scholar] [CrossRef]
  8. Duan, K.; Wang, Z.; Xie, W.; Chen, H.; Wang, Y. Sparsity-based STAP algorithm with multiple measurement vectors via sparse Bayesian learning strategy for airborne radar. IET Signal Process. 2017, 11, 544–553. [Google Scholar] [CrossRef]
  9. Cao, J.; Wang, T.; Cui, W. Sparse Bayesian learning using hierarchical synthesis prior for STAP. IET Radar Sonar Navig. 2025, 19, e70001. [Google Scholar] [CrossRef]
  10. Kim, H.H.; Govoni, M.A.; Haimovich, A.M. Cost analysis of compressive sensing for MIMO STAP random arrays. In Proceedings of the 2015 IEEE Radar Conference (RadarCon), Arlington, VA, USA, 10–15 May 2015; pp. 0980–0985. [Google Scholar]
  11. Sahoo, S.K.; Makur, A. Signal recovery from random measurements via extended orthogonal matching pursuit. IEEE Trans. Signal Process. 2015, 63, 2572–2581. [Google Scholar] [CrossRef]
  12. Ślesicka, A.; Kawalec, A. An application of the orthogonal matching pursuit algorithm in space-time adaptive processing. Sensors 2020, 20, 3468. [Google Scholar] [CrossRef] [PubMed]
  13. Dai, W.; Milenkovic, O. Subspace pursuit for compressive sensing signal reconstruction. IEEE Trans. Inf. Theory 2009, 55, 2230–2249. [Google Scholar] [CrossRef]
  14. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  15. van den Berg, E.; Schmidt, M.; Friedlander, M.P.; Murphy, K. Group Sparsity via Linear-Time Projection; Technical Report TR-2008-09; University of British Columbia: Vancouver, BC, Canada, 2008. [Google Scholar]
  16. Liu, J.; Ji, S.; Ye, J. SLEP: Sparse learning with efficient projections. Ariz. State Univ. 2009, 6, 7. [Google Scholar]
  17. Eldar, Y.C.; Kuppinger, P.; Bolcskei, H. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 2010, 58, 3042–3054. [Google Scholar] [CrossRef]
  18. Wright, S.J.; Nowak, R.D.; Figueiredo, M.A. Sparse reconstruction by separable approximation. IEEE Trans. Signal Process. 2009, 57, 2479–2493. [Google Scholar] [CrossRef]
  19. Deng, W.; Yin, W.; Zhang, Y. Group sparse optimization by alternating direction method. In Proceedings of the Wavelets and Sparsity XV, San Diego, CA, USA, 26–29 August 2013; pp. 242–256. [Google Scholar]
  20. Qin, Z.; Scheinberg, K.; Goldfarb, D. Efficient block-coordinate descent algorithms for the group lasso. Math. Program. Comput. 2013, 5, 143–169. [Google Scholar] [CrossRef]
  21. Wang, Y.; Wang, J.; Li, J.; He, Q.; He, Z. Direct data domain STAP method via joint-sparse characteristic of clutter and noise. Syst. Eng. Electron. 2024, 46, 2980–2987. [Google Scholar]
  22. Schmidt, R. Multiple emitter location and signal parameter estimation. IEEE Trans. Antennas Propag. 1986, 34, 276–280. [Google Scholar] [CrossRef]
  23. Beck, A.; Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  24. Fang, J.; Wang, F.; Shen, Y.; Li, H.; Blum, R.S. Super-resolution compressed sensing for line spectral estimation: An iterative reweighted approach. IEEE Trans. Signal Process. 2016, 64, 4649–4662. [Google Scholar] [CrossRef]
  25. Gudmundson, E.; Jakobsson, A.; Jensen, J.A.; Stoica, P. Blood velocity estimation using ultrasound and spectral iterative adaptive approaches. Signal Process. 2011, 91, 1275–1283. [Google Scholar] [CrossRef]
  26. Roberts, W.; Stoica, P.; Li, J.; Yardibi, T.; Sadjadi, F.A. Iterative adaptive approaches to MIMO radar imaging. IEEE J. Sel. Top. Signal Process. 2010, 4, 5–20. [Google Scholar] [CrossRef]
  27. Wei, X.; Yuan, Y.; Ling, Q. DOA estimation using a greedy block coordinate descent algorithm. IEEE Trans. Signal Process. 2012, 60, 6382–6394. [Google Scholar] [CrossRef]
  28. Tseng, P.; Yun, S. A coordinate gradient descent method for nonsmooth separable minimization. Math. Program. 2009, 117, 387–423. [Google Scholar] [CrossRef]
  29. Yang, Z.; Li, X.; Wang, H.; Jiang, W. On clutter sparsity analysis in space–time adaptive processing airborne radar. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1214–1218. [Google Scholar] [CrossRef]
  30. Sun, Y.; Yang, X.; Long, T.; Sarkar, T.K. Robust sparse Bayesian learning STAP method for discrete interference suppression in nonhomogeneous clutter. In Proceedings of the 2017 IEEE Radar Conference (RadarConf), Seattle, WA, USA, 8–12 May 2017; pp. 1003–1008. [Google Scholar]
  31. Zhang, S.; Wang, T.; Liu, C.; Ren, B. A Fast IAA–Based SR–STAP Method for Airborne Radar. Remote Sens. 2024, 16, 1388. [Google Scholar] [CrossRef]
Figure 1. Geometric model of an airborne radar.
Figure 2. Clutter power spectra estimation results: (a) clutter power spectrum obtained using SR-STAP algorithm; (b) clutter power spectrum obtained using SBL-STAP algorithm; (c) clutter power spectrum obtained using IAA-based SR-STAP algorithm; and (d) clutter power spectrum obtained using GBCD-STAP algorithm.
Figure 3. Signal-to-clutter-plus-noise ratio loss results of different algorithms.
Figure 4. Output power results of the four algorithms.
Figure 5. Computing times of different algorithms.
Figure 6. Clutter power spectra estimation results using the GBCD-STAP algorithm under off-grid and on-grid conditions: (a) clutter power spectrum obtained in the off-grid case; (b) clutter power spectrum obtained in the on-grid case.
Figure 7. Local magnification of signal-to-clutter-plus-noise ratio loss results with different regularization parameters.
Table 1. Simulation parameters.

Parameter | Value
Antenna array type | Uniform linear array, side-looking
Number of array elements | 10
Number of pulses sent in one CPI | 10
Array element spacing (m) | 0.15
Wavelength (m) | 0.3
Aircraft speed (m/s) | 260
Aircraft altitude (m) | 3000
Pulse repetition frequency (Hz) | 4000
Clutter-to-noise ratio (dB) | 60
Signal-to-clutter-plus-noise ratio (dB) | 0
Table 2. Comparison results of average complexity of single samples.

Algorithm | Complex Multiplications
SR-STAP | PJ² + PJ + L(N + K − 1)J
SBL-STAP | J³ + 4J² + 3LJ + 2LP
IAA-based SR-STAP | J³ + PJ² + 2(L + K + 1)J + 2LN
GBCD-STAP | 5J + LP(6N + 4)
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Gao, Z.; Yang, N.; Wu, Z.; Xu, W.; Tan, W. Weighted STAP Algorithm Based on the Greedy Block Coordinate Descent Method. Electronics 2025, 14, 3432. https://doi.org/10.3390/electronics14173432