Efficient 2D DOA Estimation via Decoupled Projected Atomic Norm Minimization

Abstract: This paper presents an efficient two-dimensional (2D) direction of arrival (DOA) estimation method, termed decoupled projected atomic norm minimization (D-PANM), to solve the angle-ambiguity problem. It first introduces a novel atomic metric by projecting the original atom set onto a smoothing space, based on which we formulate an equivalent semi-definite programming (SDP) problem. Two relatively low-complexity decoupled Toeplitz matrices can then be obtained to estimate the DOAs. We further exploit the structural information hidden in the newly constructed data to avoid pair matching of the azimuth and elevation angles when the number of sensors is odd, and then propose a fast and feasible decoupled alternating projections (D-AP) algorithm, reducing the computational complexity to a great extent. Numerical simulations demonstrate that the proposed algorithm is no longer restricted by angle-ambiguity scenarios, but instead provides more stable estimation performance, even when multiple signals share the same angles in both the azimuth and elevation dimensions. Additionally, it greatly improves the resolution while keeping the computational load under control compared with the existing atomic norm minimization (ANM) algorithm.


Introduction
Two-dimensional (2D) direction of arrival (DOA) estimation is an important branch of array signal processing encountered in various applications: radar, wireless communication, sonar, seismology, etc. [1–5]. The core issue in this field is the nonlinear 2D spatial parameter estimation problem. Although numerous algorithms have been devised for 2D DOA estimation to date [6–8], the investigation of fast and effective algorithms with high resolution and precision, utilizing highly limited snapshots in increasingly complex signal scenarios, remains a hot topic.
The current state-of-the-art high-resolution 2D DOA estimation algorithms primarily comprise the subspace-based methods and the sparsity-based ones. The classic subspace algorithms are 2D MUSIC and 2D ESPRIT [9–11]: MUSIC is implemented by employing the orthogonality between the steering vectors and the noise subspace, at the huge computational cost of a 2D spectral peak search, whereas ESPRIT constructs two subspaces with rotation-invariant properties corresponding to two diagonal angular matrices, respectively, thus avoiding a spectral search. Although the subspace methods mentioned above achieve considerable resolution performance, theoretically reaching the Cramér-Rao bound (CRB), they heavily rely on a relatively large number of snapshots, an environment with a high signal-to-noise ratio (SNR), and a known source number. Conversely, the sparse reconstruction algorithms build mathematical models between the array observation data and the 2D DOAs, followed by a series of optimization steps based on different matching criteria; they no longer require the number of sources as a prior and are robust to noise. Nevertheless, the original sparsity-based algorithms divide the entire spatial domain into discrete grids [12–14], forming a redundant dictionary to formulate the array data, where the grid mismatch problem may occur to a large extent. In view of this, a variety of off-grid algorithms have been proposed, one after another, to overcome this strict grid limit by introducing quantization errors between the divided grids and the real values [15–17], with many types of strategies applied to approximate these errors instead. Unfortunately, these strategies can still hardly express the data precisely when the quantization errors are large, greatly reducing the accuracy of the algorithms. In addition, they all face huge computational challenges.
The sparse reconstruction algorithms opened a new chapter with the concept of continuous compressed sensing (CCS), first introduced by Candès et al. [18], in which the estimated parameters no longer depend on a grid but are allowed to take any values. Thus, a theory of super-resolution was born based on the total variation (TV) norm, along with a theoretical minimum-separation condition. Since CCS was presented in a continuous domain, Chandrasekaran et al. [19] and Tang et al. [20] then generalized the theory to the discrete domain and developed an atomic norm metric, which served as the foundation for a range of later algorithms, especially for parameter estimation, where the atomic norm minimization (ANM) theory made it possible to handle data with few snapshots, even a single one, while retaining the advantages of sparse algorithms. Meanwhile, Tang et al. [20] showed that the atomic norm could be minimized through an equivalent semi-definite program (SDP), which was further extended to one-dimensional (1D) parameter estimation from complete and incomplete data by Yang et al. [21], who also completed an intensive study of a reweighting strategy to greatly enhance the resolution [22]. Similar theories have been advanced for high-dimensional data by Chi et al. [23] and others [24,25]. However, the computational complexity begins to increase dramatically as the number of dimensions grows. To handle this, Tian et al. [26] proposed a decoupled atomic norm minimization (DeANM) algorithm expressing the 2D Toeplitz problem via two decoupled 1D matrices. Despite the algorithm's low computational complexity, its results are ill-posed when the sources share the same angle in either the azimuth or elevation dimension, i.e., it exhibits the problem of angular ambiguity.
In this paper, we consider 2D DOA estimation for a uniform rectangular array (URA) in the case of angle ambiguity and propose an efficient optimization method based on the framework of atomic norm minimization (ANM). Motivated by the idea of spatial smoothing processing [27], we introduce a novel atomic metric via a projection operator, which fully exploits the phase-elimination property of the SDP problem converted from ANM. The SDP is formulated mainly through two relatively low-complexity decoupled Toeplitz matrices, which are similar to de-noising covariance matrices in the traditional sense; hence, the azimuth and elevation estimates of interest can each be achieved efficiently. The proposed algorithm is, therefore, named decoupled projected atomic norm minimization (D-PANM). In addition, this paper presents a more stable recovery method without pair matching, utilizing the newly constructed atom when the array has an odd number of sensors. This method always provides correct angle pairs, even in complex scenarios where multiple signals share the same angles. Subsequently, a fast implementation of D-PANM, named decoupled alternating projections (D-AP), is presented, generalized from the 1D alternating projections (AP) algorithm [28]; it greatly reduces the computational complexity compared with the most commonly used SDP solver, namely SDPT3, although its application is conditional. The simulation results show that the proposed D-PANM is no longer limited by the application scenarios and exhibits better anti-noise performance than the DeANM algorithm proposed in [26]. Furthermore, compared with the vectorized ANM algorithm [23,24], it provides more effective DOA estimation without pair matching, even when multiple signals share the same azimuth and elevation angles, and it additionally has remarkable advantages in terms of both resolution and computational load.
Our main contributions are summarized as follows:
• We formulate a novel atomic norm metric under the framework of ANM via a defined projection operator, which not only follows the decoupled strategy of DeANM to reduce the complexity, but also has the ability to handle the angle-ambiguity problem compared with existing methods.
• We utilize the structural characteristics of the newly constructed data to provide a more stable recovery criterion without pair matching.
• We present a fast implementation of our algorithm to reduce the computational complexity, and employ a joint low-rank projection to improve the convergence rate.
• We further show that our proposed algorithm with a decoupled reweighted strategy has a higher resolution than the existing vectorized ANM.
The rest of the paper is organized as follows. Section 2 formulates the 2D DOA estimation model and introduces the problem setup. Section 3 introduces the proposed approach. Section 4 performs numerical simulations to validate the proposed method. Section 5 concludes the paper. For ease of presentation, the main abbreviations used in this paper are provided in Table 1, and a brief overview of the proposed method and the related ANM algorithms is also given in Table 2.
Notations: Boldface letters stand for vectors and matrices. (•)*, (•)^T, (•)^H, (•)^{−1}, (•)^+, and E{•} denote the conjugate, transpose, conjugate transpose, inverse, pseudo-inverse, and statistical expectation, respectively. trace(•), T_1(•), T(•), rank(•), and conv(•) represent the trace, the 1D Toeplitz matrix, the block Toeplitz matrix, the rank, and the convex hull, respectively. C denotes the set of complex numbers. diag(A) retains the diagonal elements of A as a vector, while DIAG(a) constructs a matrix with vector a on the diagonal and zeros elsewhere. ⌈•⌉ and ⌊•⌋ denote rounding up and down to integers, respectively. vec(A) indicates the vectorization of the matrix A. ∥•∥ and ∥•∥_F stand for the ℓ2 norm and the Frobenius norm, respectively. ∥•∥_A represents the atomic norm induced by A. |•| is the amplitude of a complex scalar or the absolute value of a real one. ⊗ denotes the Kronecker product. A ≽ 0 implies that A is positive semidefinite (PSD).

Signal Model and Problem Statement
Consider a uniform rectangular array (URA) consisting of N × M sensors with intersensor spacing d along the x-direction and z-direction, respectively. There are I far-field narrowband uncorrelated signals {c_i(t)}_{i=1}^{I} impinging from distinct directions {(θ_i, φ_i)}_{i=1}^{I}, where θ_i represents the azimuth angle and φ_i the elevation angle. Note that θ_i is redefined as the angle between the signal and the yoz plane rather than the traditional one (Figure 1).
Therefore, the single-snapshot array output matrix without noise can be expressed as

X = ∑_{i=1}^{I} c_i a_N(θ_i) a_M^*(φ_i)^H ∈ C^{N×M},

where the steering vectors are a_N(θ) = [1, e^{jπ sin θ}, …, e^{jπ(N−1) sin θ}]^T and a_M(φ) = [1, e^{jπ sin φ}, …, e^{jπ(M−1) sin φ}]^T, and c_i = |c_i|ξ_i, with |c_i| being the amplitude of the received signal c_i and ξ_i being its phase. Let the spacing d be half the wavelength of the signals as usual; the elements of X can then be given by

x(n, m) = ∑_{i=1}^{I} c_i e^{jπ(n sin θ_i + m sin φ_i)}, n = 0, …, N − 1, m = 0, …, M − 1.

In addition, assume that the phases {ξ_i}_{i=1}^{I} are i.i.d. samples drawn uniformly from either a distribution with mean E{ξ_i} = 0 or the complex unit circle, which is a necessary condition to guarantee the solutions of the subsequent algorithms in this paper [20,31].
The goal of our paper is to recover {θ_i}_{i=1}^{I} and {φ_i}_{i=1}^{I} from the receiving data X. Note that X is a linear combination of a few steering matrices a_N(θ_i) a_M^*(φ_i)^H. Similar to the decoupled atomic norm minimization (DeANM) algorithm in [26], we utilize the atomic norm to seek the sparsest expression over some defined atoms by treating a_N(θ) a_M^*(φ)^H as the atom. Specifically, the atom set A_M and the atomic norm induced by A_M are defined as

A_M = { A(Θ) = a_N(θ) a_M^*(φ)^H : Θ = (θ, φ) },
∥X∥_{A_M} = inf { ∑_i |c_i| : X = ∑_i c_i A(Θ_i), A(Θ_i) ∈ A_M },

where Θ = (θ, φ), and ∥•∥_{A_M} denotes the atomic norm. Next, the atomic norm ∥X∥_{A_M} is optimized via an equivalent semi-definite program (SDP) given by

min_{u_θ, u_φ} (1/(2√(NM))) [trace(T_1(u_θ)) + trace(T_1(u_φ))]  s.t.  [T_1(u_θ), X; X^H, T_1(u_φ)] ≽ 0,  (5)

where T_1(u_θ) and T_1(u_φ) denote the 1D Toeplitz matrices with u_θ and u_φ as their respective first columns, and the {θ_i}_{i=1}^{I} and {φ_i}_{i=1}^{I} of interest are coded in these two Toeplitz matrices, respectively. Once the optimal solutions T_1(û_θ) and T_1(û_φ) are determined, the estimates of {θ_i}_{i=1}^{I} and {φ_i}_{i=1}^{I} can be obtained accordingly. Unfortunately, when two or more signals impinge from the same direction in either the azimuth or elevation dimension, the optimization result of (5) will be ill-posed, because the Toeplitz matrix of the corresponding dimension becomes rank-deficient.
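Once a Toeplitz estimate such as T_1(û_θ) is available, the angles coded in it can be retrieved by exploiting the shift invariance of its signal subspace (one of the Vandermonde-decomposition-style methods referred to throughout this paper). Below is a minimal numpy sketch of that retrieval step; the noise-free T_1 built from two hypothetical sources at −20° and 35° is an illustrative assumption, not data from the paper:

```python
import numpy as np

def angles_from_toeplitz(T1, num_sources):
    """Recover DOAs coded in a PSD Toeplitz matrix T1 = A diag(p) A^H
    via the shift invariance (ESPRIT-style) of its signal subspace."""
    vals, vecs = np.linalg.eigh(T1)
    Us = vecs[:, np.argsort(vals)[::-1][:num_sources]]   # principal eigenvectors
    # Solve Us[1:] ~= Us[:-1] @ Phi; eigenvalues of Phi are e^{j*pi*sin(theta_i)}
    Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.sort(np.degrees(np.arcsin(phases / np.pi)))

# Hypothetical example: two sources at -20 and 35 degrees, N = 9 sensors
N, thetas = 9, np.deg2rad([-20.0, 35.0])
A = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(thetas)))
T1 = A @ A.conj().T            # noise-free surrogate for T_1(u_theta)
print(np.round(angles_from_toeplitz(T1, 2), 3))   # ≈ [-20. 35.]
```

The same routine applies verbatim to T_1(û_φ) for the elevation dimension.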

The Proposed D-PANM
In order to handle the angle-ambiguity problem mentioned above, [27] adopted a linear projection operator to map the array output matrix to a block Hankel matrix, and then exploited the newly constructed matrix to estimate the DOAs in combination with a traditional matrix pencil approach.
The constructed matrix is a K × K̄ block Hankel matrix, written as

X_e = [X_0, X_1, …, X_{K̄−1}; X_1, X_2, …, X_{K̄}; …; X_{K−1}, X_K, …, X_{N−1}],

where each X_n is also an L × L̄ Hankel matrix, with L̄ = M − L + 1 and K̄ = N − K + 1. According to [27], the estimation of the DOAs achieves optimal performance when K = ⌈N/2⌉ and L = ⌈M/2⌉. In fact, X_e is a matrix enhanced by applying smoothing processing along each dimension. Here, each column of X_n is a window segment of the sequence {x(n, 0), x(n, 1), …, x(n, M − 1)}, the parameter L being the corresponding sliding-window length; X_e is formed by window segments of the matrix sequence {X_0, X_1, …, X_{N−1}}, with the parameter K denoting the sliding-window length accordingly.
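The smoothing construction described above can be sketched in a few lines of numpy. The sizes and the two-source test matrix below are hypothetical; the rank check at the end illustrates why the enhancement matters for angle ambiguity, since the raw X collapses to rank one when the two sources share an elevation angle, while X_e keeps one rank per source:

```python
import numpy as np

def hankel_block(x, L):
    """L x (len(x)-L+1) Hankel matrix whose columns are sliding windows of x."""
    Lbar = len(x) - L + 1
    return np.array([[x[i + j] for j in range(Lbar)] for i in range(L)])

def enhance(X, K, L):
    """Block-Hankel enhanced matrix X_e (size K*L x Kbar*Lbar) from N x M data X."""
    N, _ = X.shape
    Kbar = N - K + 1
    blocks = [hankel_block(X[n], L) for n in range(N)]      # X_0, ..., X_{N-1}
    return np.block([[blocks[i + j] for j in range(Kbar)] for i in range(K)])

# Two hypothetical sources sharing the same elevation angle (ambiguity case)
N = M = 9
th, ph = np.deg2rad([10.0, 40.0]), np.deg2rad([20.0, 20.0])
X = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(th))) \
    @ np.exp(1j * np.pi * np.outer(np.arange(M), np.sin(ph))).T
Xe = enhance(X, K=5, L=5)
print(Xe.shape, np.linalg.matrix_rank(X), np.linalg.matrix_rank(Xe))  # (25, 25) 1 2
```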
Then, X_e can be given in the form of steering vectors, where a_K(θ_i) and a_K̄(θ_i) are obtained from parts of a_N(θ_i), respectively, while a_L(φ_i) and a_L̄(φ_i) are acquired in the same way from a_M(φ_i); thus, (8) can be concisely expressed in the factorized form X_e = EΛF^H. Note that F^H F is a diagonally dominant matrix with real values on the diagonal, based on which a new covariance matrix R_1 can be defined. Obviously, the newly defined R_1 displays the same features as a traditional covariance matrix of Toeplitz structure, and therefore, as long as R_1 is given, the azimuth angles {θ_i}_{i=1}^{I} can be estimated via extensive existing approaches such as the Vandermonde decomposition [24], the matrix pencil method [32], etc.
On the other hand, we construct another matrix of size LK × LK as follows: it is also a block Hankel matrix, applied to estimate the elevation angles {φ_i}_{i=1}^{I} [33], whose analysis procedure is similar to that of the azimuth angles; (11) can likewise be rewritten in a factorized form. We redefine another covariance matrix R_2 based on the diagonal matrix Λ_2, and then {φ_i}_{i=1}^{I} can be estimated through the assumed R_2 in a similar fashion to the approaches outlined earlier.
Inspired by the analysis above, we introduce a permutation matrix H, where H^{L×K}_{j,i} denotes a matrix of size L × K with a one at position (j, i) and zeros elsewhere, and its counterpart is defined similarly, except for the size, which is K × L. Then, we construct more efficient array observation data X_e H^H, rather than utilizing X or X_e directly; the resulting factor has the same structure as the conjugate of D, except in terms of size. We denote by P the projection operator mapping the original array receiving matrix X onto the new observation data space, and the range of P by P = {E ∈ C^{KL×LK} : E = P(X), X ∈ C^{N×M}}, where P is essentially a smoothing operator that handles the data through multiple sliding windows. Next, let us define a new projected atom set A_P = {P(A(Θ)) : A(Θ) ∈ A_M}, and hence the projected atomic norm induced by the convex hull of A_P is given by (17). Note that (17) is an optimization problem seeking the smallest possible combination of the projected atoms from the infinite set A_P, which is not easy to solve directly. As such, we turn to the optimization of an equivalent semi-definite program (SDP) instead, and propose the decoupled projected atomic norm minimization (D-PANM) problem described in the proposition below.

Proposition 1. For any N × M array receiving matrix
utilizing the projection operator of X = X_0 onto P, if the minimum angle distances among {Θ_i}_{i=1}^{I} satisfy

min_{i≠j} |sin θ_i − sin θ_j| ≥ 1.19/⌊(N − 1)/4⌋,  min_{i≠j} |sin φ_i − sin φ_j| ≥ 1.19/⌊(M − 1)/4⌋,  (19)

where θ_i − θ_j and φ_i − φ_j are considered as wrapped distances on the unit circle, then the solution to (17) is guaranteed by (18) with probability at least 1 − δ, and two Toeplitz matrices with {Θ_i}_{i=1}^{I} coded in can be efficiently achieved via an SDP given by

min_{u_b, u_d} (1/2) [trace(T(u_b)) + trace(T(u_d))]  s.t.  [T(u_b), P(X); P(X)^H, T(u_d)] ≽ 0.  (20)

Since P is a linear operator satisfying homogeneity and additivity, the performance guarantees of this proposition, together with the equivalent SDP problem, can be readily derived from the atomic norm theory in [20,26,31]. We omit the details and provide a brief proof of the equivalence (20) in Appendix A.
Remark 1. The positive semidefinite (PSD) feasible cone of (20) can be expressed in block form, from which one obtains two Toeplitz matrices, T(u_b) and T(u_d). This implies that T(u_b) in the PSD cone has the same column space as the covariance matrix R_1 defined previously, while T(u_d^*) also has the same Toeplitz structure as R_2, except for the size and constant differences in the diagonal elements. That is to say, we can efficiently estimate {θ_i}_{i=1}^{I} and {φ_i}_{i=1}^{I} by exploiting the optimum solutions T(û_b) and T(û_d^*), respectively. It is important to note that the PSD constraint above eliminates the phase information of the multiple measurement data, which here results in the phase eliminations of P(X) and P(X)^H, respectively, making it possible to process the two spaces of P(X) separately. Moreover, for single-snapshot data, this constraint can handle cases where the rank of the covariance is one, which explains how the framework of ANM can deal with even a single snapshot. In addition, the estimators T(û_b) and T(û_d^*) are obtained with a de-noising process; thus, knowledge of the number of sources is not needed, even if traditional methods are then applied to obtain the estimates of {θ_i}_{i=1}^{I} and {φ_i}_{i=1}^{I}.
Remark 2. In particular, the conditions for the minimum angle distances still rely on the size of X (i.e., the lengths N and M) rather than on the projection parameters K and L. Furthermore, [23] has shown that the minimum distances can be relaxed to min{1.19/(N − 1), 1.19/(M − 1)}, which satisfies most practical applications. We conducted experiments to verify these claims, described in Section 4 of the paper.
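As a quick numerical illustration of this relaxed bound (an aside, not part of the original derivation), for the 9 × 9 array used in Section 4 the separation 1.19/(N − 1) in the sin domain corresponds to roughly 8.5 degrees, which matches the 8.5° resolution limit observed there:

```python
import numpy as np

N = M = 9
delta = min(1.19 / (N - 1), 1.19 / (M - 1))     # relaxed minimum separation (sin domain)
print(round(np.degrees(np.arcsin(delta)), 2))   # → 8.55
```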
In fact, the array receiving data are always corrupted by additive white Gaussian noise (AWGN), i.e., Y_0 = X_0 + N, and hence the projection of Y_0 onto P is given by P(Y_0) = P(X) + P(N). Nevertheless, P(N) can hardly be employed directly, because projecting N onto P not only increases the computational complexity, but also makes the statistically independent noise variables become correlated. Note that P is a linear mapping, and thus P(N) is uniquely determined by the noise data N. Thereby, N can be considered as the kernel of P(N), and ∥N∥_F ≤ η is an acceptable noise constraint. Combining (17), (20), and Remark 1, the two Toeplitz optimization matrices, in which the {θ_i}_{i=1}^{I} and {φ_i}_{i=1}^{I} of interest are coded, are obtained in the presence of noise via (27) or, equivalently, (28), where η² is the known noise level. The problems in (20) or (28) are usually solved based on the interior-point method using an SDP solver of the CVX tool, namely SDPT3 [29,30].
Then, {θ i } I i=1 and {φ i } I i=1 can be estimated from the optimum solutions T( ûb ) and T( ûd ) via traditional estimation methods, as mentioned before.

The Odd-Number Array and Fast Algorithm
This section focuses mainly on the case where both N and M are odd, and develops a new estimation criterion for {θ_i}_{i=1}^{I} and {φ_i}_{i=1}^{I} without pair matching [33] after the SDP (20) or (28) is solved, which brings computational convenience and performance improvements. We select the optimal parameters K = (N + 1)/2 and L = (M + 1)/2, for which K̄ = N − K + 1 = K and L̄ = L. Assume that the two estimators T(û_b) and T(û_d) have been obtained; then the relationships in (29) hold according to (15), (23), and (24). We denote the eigenvalue decompositions of T(û_b) and T(û_d^*), where U_bs contains the principal eigenvectors of T(û_b), whose eigenvalues satisfy σ_{b,i} ≥ γσ_{b,max}. Here, σ_{b,max} denotes the maximum eigenvalue of T(û_b), and γ is a constant that can be fixed at 0.1. Meanwhile, U_ds is obtained in the same way from T(û_d^*), which is reasonable because the minor components of the de-noising estimators T(û_b) and T(û_d^*) have almost near-zero eigenvalues. Then, assuming there exist two nonsingular matrices relating these subspaces to the steering structure, we obtain the reduced matrices, where Ū_ds is constructed from U_ds in the same way, with the corresponding K rows deleted. We then obtain two eigenvalue decompositions in which {θ_i}_{i=1}^{I} and {φ_i}_{i=1}^{I} are coded in the diagonal eigenvalue matrices Γ and Ψ according to the rotational-invariance subspace method [27]. Using (32) and (33), and since O is also a nonsingular matrix, a combined decomposition follows, where the parameter α is introduced to prevent rank deficiency when θ_i = φ_i. Finally, the eigenvalue matrices can be obtained by adopting the common eigenvector matrix U, and the estimation of the DOAs can be achieved directly using Φ = (diag(Γ), diag(Ψ)) without a pairing step. This new estimation criterion not only quickens the solving process to a certain extent, but also provides DOA estimation without pair matching, which proves to be more effective even when multiple signals share the same angles in both the azimuth and elevation dimensions (Section 4). In addition, due to the higher temporal complexity of the SDPT3 solver based on the interior-point method, we further propose a fast implementation of the SDP (20) or (28), termed the decoupled alternating projections (D-AP) algorithm, to adequately exploit the structural information of P(X), motivated by [28,34]. Note that the optimization problem (20) displays the following features:

1. The feasible set is PSD.

2. We obtain T(u_b) ≽ 0 and T(u_d) ≽ 0 using the Schur complement lemma [35]; hence, λ_{b,i} ≥ 0 and λ_{d,i} ≥ 0 for all i = 1, 2, …, I, where λ_{b,i} and λ_{d,i} denote the eigenvalues of T(u_b) and T(u_d), respectively. Thus, the essence of the objective function in (20) is the minimization of two ℓ1 norms of the eigenvalues. Also, because of the Hermitian structure of T(u_b) and T(u_d), (20) is equivalent to optimizing two low-rank Toeplitz matrices.

3. T(u_b) and T(u_d) on the PSD cone satisfy the mathematical relationship (29).

In view of the analysis above, some important projection operators are defined accordingly:

1. Let S be the projection of a J × J Hermitian matrix A onto the PSD subspace, the range of S being the PSD set, where A = ∑_{j=1}^{J} λ_j υ_j υ_j^H is the eigenvalue decomposition with eigenvalues {λ_j}_{j=1}^{J} and eigenvectors {υ_j}_{j=1}^{J}.

2. Let L be the projection of a J × J Hermitian matrix A onto the low-rank set with rank no greater than Q, by introducing a threshold parameter τ. This aims mainly to make the eigenvalues of A sparse; τ is chosen to balance the accuracy of the solution against the convergence rate of the algorithm [36].

3. Let T be the projection of a KL × KL matrix A onto the two-level Toeplitz subspace, such that T_B(U) is an L × L block Toeplitz projection with each block averaged as in [23], where U_{K:2K−1,|h|+1} and U_{1:K,|h|+1} denote the first column and the first row of T_h, respectively.
Then, the D-AP algorithm can be carried out via iteratively projecting the optimization variables S, T(u b ), and T(u d ) onto the corresponding spaces, which are presented in detail in Table 3, where T(u b ) and T(u d ) share a joint low-rank projection utilizing the mathematical relationship (29), improving both the accuracy and the convergence rate of the algorithm to a certain extent.
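The three projections above can be sketched in numpy. The following is a deliberately simplified single-matrix illustration (a toy denoising loop for one PSD low-rank Toeplitz matrix, not the full coupled iteration of Table 3; the sizes, noise level, and rank-2 test matrix are hypothetical):

```python
import numpy as np

def proj_psd(A):
    """Projection S: clip negative eigenvalues of a Hermitian matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.clip(w, 0, None)) @ V.conj().T

def proj_lowrank(A, tau):
    """Projection L: zero out eigenvalues below tau * (largest eigenvalue)."""
    w, V = np.linalg.eigh(A)
    w = np.where(w >= tau * w.max(), w, 0.0)
    return (V * w) @ V.conj().T

def proj_toeplitz(A):
    """Projection T (1-level version): average each diagonal of A."""
    n = A.shape[0]
    u = np.array([np.mean(np.diagonal(A, -k)) for k in range(n)])  # first column
    idx = np.subtract.outer(np.arange(n), np.arange(n))
    T = u[np.abs(idx)]
    return np.where(idx >= 0, T, T.conj())

# Hypothetical demo: denoise a rank-2 PSD Toeplitz matrix by alternating projections
rng = np.random.default_rng(1)
n = 16
A0 = np.exp(2j * np.pi * np.outer(np.arange(n), [0.12, 0.37]))
T_true = A0 @ A0.conj().T
Noise = rng.standard_normal((n, n)); Noise = (Noise + Noise.T) / 2
T = T_true + 0.05 * Noise
for _ in range(200):
    T = proj_toeplitz(proj_lowrank(proj_psd(T), tau=0.1))
rel = np.linalg.norm(T - T_true) / np.linalg.norm(T_true)
print(rel < 0.05)   # the iterate stays close to the noise-free matrix
```

The actual D-AP iteration additionally alternates over the block variables T(u_b) and T(u_d) with the joint low-rank step described above.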

Remark 3. The initial Y_0 should be normalized to unit norm, which helps in selecting the parameter τ in the low-rank projection steps; an empirical value of 0.1 is used for the signals in this paper. Note that the low-rank projection in the D-AP algorithm is also a de-noising process in the presence of noise, so this algorithm can also deal with the SDP (28) [37,38].

Remark 4. The D-AP algorithm is an iterative process that involves alternating projections onto the PSD and two low-rank spaces. Specifically, at each iteration, the PSD projection consists of two steps, the eigenvalue decomposition and a matrix multiplication, resulting in a computational complexity of O((NM)³/2²). The low-rank projections follow a similar process, but with a reduced complexity of O(3(NM)³/2⁶) due to the joint projection of T(u_b) and T(u_d). Assuming that the maximum number of iterations is k_p, the overall complexity is O(k_p · 19N⁶/2⁶) for N = M. Recall that the D-PANM algorithm implemented by the SDPT3 solver relies on calculating the Newton direction by solving a group of linear equations, and its complexity, determined by the size of the PSD constraint, is O(N⁷) for N = M [26]. Consequently, the D-AP algorithm offers a more computationally efficient solution than the SDPT3-based D-PANM.

Higher Resolution and Discussions
In the previous section, we discussed a fast implementation algorithm utilizing two low-rank projections under the framework of the proposed projected atomic norm. Actually, since the atomic norm theory is limited by the conditions on the minimum angle distances (stated in Proposition 1), it is usually substituted by an atomic ℓ0 norm [22,39], which is further transformed into a low-rank problem for optimization.
We define a projected atomic ℓ0 norm via the projected atom set A_P proposed in Section 3.1, and an approximation of ∥P(X)∥_{A_P,0} is allowed accordingly. Unfortunately, the resulting discontinuous problem (43) is NP-hard. Although the D-AP algorithm proposed earlier can be exploited to solve the low-rank problem, it lacks stability in some cases with higher resolution requirements. In order to obtain a more accurate solution, smooth surrogate functions that mimic the characteristics of the rank are often adopted for the objective function of (43) instead [40]. Here, we consider two concave trace functions as the approximations of rank(T(u_b)) and rank(T(u_d)), where ε_b and ε_d are parameters introduced to avoid the appearance of zero matrices. We then find a local optimum of the new programming problem via the majorization-minimization (MM) algorithm [41], an iterative method that solves a decoupled weighted optimization problem (45) at each iteration with correspondingly defined weighting functions.
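One common concave trace surrogate consistent with this description is f(T) = trace((T + εI)^{-1} T), whose value approaches rank(T) as ε → 0; the specific surrogate and the rank-2 test matrix below are illustrative assumptions, not taken verbatim from the paper:

```python
import numpy as np

def smooth_rank(T, eps):
    """Concave trace surrogate trace((T + eps*I)^(-1) T) ~ rank(T) for small eps."""
    n = T.shape[0]
    return np.trace(np.linalg.solve(T + eps * np.eye(n), T)).real

# Hypothetical rank-2 PSD matrix built from two sinusoidal modes
A = np.exp(2j * np.pi * np.outer(np.arange(8), [0.1, 0.3]))
T = A @ A.conj().T
for eps in (1.0, 1e-2, 1e-4):
    print(eps, round(smooth_rank(T, eps), 4))   # values increase toward rank(T) = 2
```

Shrinking ε across MM iterations (as done for ε_b and ε_d) therefore tightens the surrogate toward the true rank.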
Then, problem (43) can be solved via the iterative optimization (45) until an accuracy condition is met; this algorithm is termed decoupled projected reweighted atomic norm minimization (D-PRAM), following the naming convention of [22]. The reason for discussing this problem here is that, to our pleasant surprise, the proposed projected atom set can not only handle the angle-ambiguity problem under the framework of ANM, but also achieves better resolution performance under the framework of atomic ℓ0 norm minimization than the existing 2D reweighted atomic norm algorithm induced by a vectorized atom set [22–24]. The vectorized 2D atomic norm optimization problem adopts an atom set A_V obtained via vectorization of the matrix set A_M, and its atomic norm, defined similarly to the one stated earlier except for the induced atom, can be computed via an SDP in which T(u) is an NM × NM block Toeplitz matrix. For ease of comparison, this 2D atomic norm algorithm is termed vectorized atomic norm minimization (vecANM), and its corresponding reweighted algorithm is referred to as vectorized reweighted atomic norm minimization (vecRAM) according to [22,24]; the latter optimizes the atomic ℓ0 norm induced by the vectorized atom set A_V and similarly formulates an SDP (48) for each iteration, where W_j denotes the corresponding weighting function. Note that (48) applies the reweighting to a single term of the objective function, while the proposed D-PRAM adopts a decoupled reweighting strategy; we will show in the subsequent simulations that the decoupled strategy achieves enhanced resolution. Additionally, the computational load of our proposed algorithms is also relatively reduced compared with these vectorized 2D atomic norm optimization algorithms. According to the analysis in the previous subsection, the complexity of the vecANM problem (47) is predominantly influenced by the
size of the PSD cone, which can reach O((NM)^3.5). In contrast, the proposed D-PANM algorithm has a PSD constraint of smaller size KL × LK, thereby reducing the complexity to O((NM)^3.5/2^3.5). For the reweighted versions, i.e., vecRAM and D-PRAM, the computational complexities are O(k_{R,1}(NM)^3.5) and O(k_{R,2}(NM)^3.5/2^3.5), respectively, given the iteration counts k_{R,1} and k_{R,2}. Notably, this implies that D-PRAM holds more noticeable advantages in computational efficiency, particularly in scenarios necessitating multiple iterations.

Numerical Simulations
We present a series of numerical simulations to illustrate the performance of the proposed D-PANM algorithm and its derived variants, compared with the existing ANM algorithms. All of the methods involved in this section are summarized briefly in Table 4, along with their computational complexity. Each experiment was based on single-snapshot array data; the signals were generated independently with the same constant magnitude σ_s, and the phases followed a uniform random distribution from −π to π such that E{CC^H} = σ_s²I. The array signal-to-noise ratio (SNR) was set to 10 log10(σ_s²/σ_n²), where σ_n² is the variance of the Gaussian noise. Specifically, the number of signals I was not known a priori, and only the information about the magnitude of the noise, NMσ_n², was given. First, we provide an intuitive example to verify the accuracy of the proposed algorithms when the signals share common angles in both the azimuth and elevation dimensions. In particular, consider a URA with N = M = 9 and I = 7 narrowband signals randomly generated with σ_s² = 1 and directions (−37.1°, 81.2°), (−19.6°, 34.9°), (5.2°, 19.2°), (5.2°, 70.6°), (20.1°, 55.3°), (38.5°, 19.2°), and (57.3°, 55.3°). Assume single-snapshot array data polluted by noise with σ_n² = 1/10^{SNR/10}, where SNR = 30 dB, and let K = L = (N + 1)/2 = 5. Then, 500 Monte Carlo experiments were carried out to estimate the DOAs for each of the algorithms DeANM, vecANM, and D-PANM. Figure 2a-c show the results, respectively, all of which were implemented with the SDPT3 solver. It is apparent that DeANM could hardly obtain correct angles, and vecANM provided incorrect DOA pairs in some runs in such a multi-angle-ambiguity scenario. However, the proposed D-PANM with the automatic pairing criterion demonstrated a strong performance. Furthermore, the estimated results of the fast algorithm D-AP proposed in Section 3.2 are described in Figure 2d, which had almost the same recovery
performance as the D-PANM did using the SDPT3 solver, but with a fairly small amount of computation. We will give a detailed computational analysis in the following simulation. Second, we investigate the performance of the algorithms in terms of resolution, adding the reweighted versions of the corresponding algorithms. Let N = M = 9 and the mapping parameters K = L = 5. Suppose there were I = 2 sources impinging on the array, one of which was set as the reference with DOAs (30°, 30°), while the other gradually moved away with a directional step of 0.5° on a scale of 1° to 10° in both the azimuth and elevation dimensions, a random fluctuation within a range of ±0.025° being allowed for each trial. We
empirically set the initial reweighting parameter ε = λ_max for each algorithm uniformly and reduced it by half in each iteration, where λ_max denotes the maximum eigenvalue of the corresponding Toeplitz matrix in the first iteration. One hundred Monte Carlo experiments were carried out, and their results are shown in Figure 3, which compares D-PANM and vecANM with their reweighted versions, respectively, where RMSE is the root mean squared error. The results show that D-PANM and vecANM had the same resolution performance, being able to exactly recover two sources mutually separated by 8.5°, which verified the condition for the minimum angle distance argued in Proposition 1: sin(8.5°) ≥ min(1.19/(N − 1), 1.19/(M − 1)). Meanwhile, D-PRAM (the reweighted version of D-PANM) could efficiently distinguish sources at a 3° distance, while vecRAM (the reweighted version of vecANM) could only distinguish a 6° one, which is to say that the decoupled reweighting strategy adopted in D-PRAM is superior to the one utilized in vecRAM. In addition, the resolution performance of D-AP is also presented in Figure 3 with a deletion of failed runs, denoted by D-APDeleted. Note that, although the curve shows that D-AP is almost unlimited in resolution, it becomes increasingly unstable as the angle distance grows closer. To verify this, the success rates of the algorithms D-PRAM and D-AP are shown in Figure 4, where the success rate is set to be the ratio of runs with RMSE < 10⁻². The results indicate that the performance of D-AP is not stable enough when the angle distances are relatively close, even though it is generally considered that the implementation method based on AP is capable of handling the low-rank problem. However, its stability becomes guaranteed starting from a larger angle interval of 13°, and the computational complexity benefit remains attractive. By contrast, D-PRAM has a 100% success rate as long as the sources are separated within the minimum allowable range.
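The ε schedule and the success criterion used above can be sketched as follows. This is a minimal illustration under our reading of the text: a small toy Hermitian Toeplitz matrix stands in for the one produced by the first iteration of the actual reweighted algorithm.

```python
import numpy as np

def reweight_schedule(T0, num_iters=5):
    """Sketch of the reweighting schedule: epsilon starts at the maximum
    eigenvalue of the (Hermitian) Toeplitz matrix from the first
    iteration and is halved in every subsequent iteration."""
    eps = float(np.max(np.linalg.eigvalsh(T0)))  # lambda_max
    out = []
    for _ in range(num_iters):
        out.append(eps)
        eps /= 2.0
    return out

def success_rate(rmses, tol=1e-2):
    """Ratio of Monte Carlo runs counted as successful (RMSE < 10^-2)."""
    rmses = np.asarray(rmses, dtype=float)
    return float(np.mean(rmses < tol))

# Toy stand-in for the first-iteration Toeplitz estimate.
T0 = np.array([[2.0, 1.0, 0.0],
               [1.0, 2.0, 1.0],
               [0.0, 1.0, 2.0]])
eps_values = reweight_schedule(T0, num_iters=3)
```

The halving schedule means ε decays geometrically, so later iterations weight small eigenvalues of the Toeplitz estimate much more aggressively.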
Furthermore, the mean computational times of the algorithms involved in this section are provided in Figure 5, obtained on a computer with an Intel i7-7700K 4.20 GHz CPU. As shown in the figure, D-PANM showed improvement in terms of reducing the amount of computation compared with vecANM; meanwhile, D-PRAM, based on the decoupled reweighted strategy, displayed the same computational complexity as vecANM even after multiple iterations, but with a greatly enhanced resolution. Of course, D-AP, as a fast implementation of D-PANM, exhibited reductions of orders of magnitude in computational complexity. Additionally, it is interesting to find that the curve of DeANM was higher than that of D-AP even though the theoretical computational complexity of DeANM was lower, which was most likely due to the differences in implementation methods between the algorithms and the time-consuming nature of scheduling the SDPT3 solver. However, there was a downward trend for DeANM as the number of sensors increased.
Finally, we carried out 100 Monte Carlo experiments to validate the performance of the proposed algorithms in the presence of noise. In this simulation, we considered the scenario without angle ambiguity for the convenience of comparing our algorithms with the DeANM algorithm, which can hardly deal with the data when the signals share the same angles in either the azimuth or elevation dimension. In particular, we took a group of signals with distinct directions (18°, 32°), (38°, 70°), and (50°, 55°), and let N = M = 11 and K = L = 6. The array data were assumed to be polluted by Gaussian noise, with the SNR varying from 0 to 30 dB. Figure 6 compares the RMSEs of the different algorithms with respect to SNR, where CRB denotes the Cramér–Rao bound. The results show that the RMSE curve of the proposed D-PANM kept the same level as that of vecANM, but was lower than that of DeANM; i.e., although the noise constraint in D-PANM relying on the kernel matrix was the same as that in DeANM, the performance of D-PANM was still better, implying that the smoothing projection process helped to improve the anti-noise performance. In addition, the proposed fast implementation algorithm D-AP exhibited good performance at higher SNRs, but began to degenerate as the SNR decreased, mainly due to the influence of the parameters.
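The RMSE metric used throughout these experiments can be illustrated with a short helper. The pairing of estimated and true DOAs is assumed to have been resolved already, and the example estimates below are made up for illustration only.

```python
import numpy as np

def rmse(est_deg, true_deg):
    """Root mean squared error, in degrees, over already-paired
    (azimuth, elevation) estimates."""
    est = np.asarray(est_deg, dtype=float)
    true = np.asarray(true_deg, dtype=float)
    return float(np.sqrt(np.mean((est - true) ** 2)))

# The three directions from this noise experiment, with hypothetical
# estimates (not actual algorithm output).
true_doas = [(18.0, 32.0), (38.0, 70.0), (50.0, 55.0)]
est_doas = [(18.1, 31.9), (37.9, 70.2), (50.0, 54.8)]
err = rmse(est_doas, true_doas)
```

A Monte Carlo curve such as Figure 6 is then the average of this quantity over independent noise realizations at each SNR point.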

Conclusions
In this study, we developed a 2D DOA estimation method for angle-ambiguity scenarios based on the framework of ANM, and a valid atom set, i.e., the projected atom set A P, was constructed by taking full advantage of the phase elimination property of the equivalent SDP problem, along with the smoothing idea. Indeed, the D-PANM algorithm induced by it fully retains the benefits of ANM. That is, it is capable of handling array data with limited snapshots, even a single one, without knowing the source number. Moreover, it not only accurately yields the estimation of the DOAs with automatic pairing when two or more signals impinge from the same directions, but is also more robust to noise compared with ANM of the decoupled type. In addition, the proposed algorithm has a lower computational load than the existing vecANM, and its resolution with a decoupled reweighted strategy is superior to that of the comparison algorithms. Furthermore, a D-AP algorithm was also utilized to accelerate the implementation of our problem, which proved effective under many conditions.
However, the proposed method is based on the ideal array manifold matrix and is relatively sensitive to signals with vastly different power levels, so more robust algorithms in a real test environment should be investigated in the future. Moreover, we will consider incorporating the processing method with multiple-snapshot data. Lastly, a characteristic of ANM is that it can deal with incomplete data, so a sparse array or an array with missing elements is also a direction for future research.

i.e., SDP(P (X)) ≤ ∥P (X)∥ A P .
On the other hand, we prove SDP(P (X)) ≥ ∥P (X)∥ A P from the results of Theorem A1. Given


being the phase. Let the spacing d be half the wavelength of the signals, as usual, and the elements of X can then be given by

U_ds, where U̲_bs and Ū_bs are selected from U_bs with the last and first L rows deleted, respectively, while U̲_ds and Ū_ds are selected from U_ds in the same way. U̲_ds⁺ Ū_ds O⁻¹ displays the same eigenvalues as U̲_ds⁺ Ū_ds, i.e., the eigenvalue matrix Ψ. Then, the following linear fitting and a joint eigenvalue decomposition are computed by αU̲_bs⁺ Ū_bs + (1 − α)U̲_ds⁺ Ū_ds.
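The joint eigenvalue decomposition of the α-weighted combination can be illustrated with a simplified stand-in: two matrices A and B sharing an eigenbasis play the roles of the two subspace products, showing why a single eigendecomposition of their combination yields automatically paired parameter sets. The function name and toy matrices are ours, not the paper's.

```python
import numpy as np

def paired_eigenvalues(A, B, alpha=0.5):
    """Eigendecompose the weighted combination alpha*A + (1 - alpha)*B
    once, then evaluate A and B in that single eigenbasis, so the two
    sets of eigenvalues come out already paired (no extra matching)."""
    _, V = np.linalg.eig(alpha * A + (1.0 - alpha) * B)
    Vinv = np.linalg.inv(V)
    lam_a = np.diag(Vinv @ A @ V)  # eigenvalues of A, in V's column order
    lam_b = np.diag(Vinv @ B @ V)  # eigenvalues of B, same order -> paired
    return lam_a, lam_b

# Toy example: A and B share the eigenbasis V but have different spectra,
# so (1, 3) and (2, 4) are the correct pairs.
V = np.array([[1.0, 1.0],
              [0.0, 1.0]])
A = V @ np.diag([1.0, 2.0]) @ np.linalg.inv(V)
B = V @ np.diag([3.0, 4.0]) @ np.linalg.inv(V)
lam_a, lam_b = paired_eigenvalues(A, B)
```

This works whenever the two matrices are simultaneously diagonalizable and the combination has distinct eigenvalues, which is the situation the decoupled construction is designed to produce.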

Figure 2. The estimated DOAs in the angle-ambiguity scenario with N = M = 9, K = L = 5, SNR = 30 dB, and the number of signals I = 7: (a) DeANM; (b) vecANM; (c) proposed D-PANM; (d) proposed D-AP.

Figure 4. Success rates of DOA estimation with respect to the angle interval: (a) azimuth distance Δθ_i; (b) elevation distance Δφ_i.


Electronics 2024, 25
Figure 5. Mean computational time vs. the number of sensors.

Figure 6. RMSEs of the different algorithms with respect to SNR.

) is positive semidefinite (PSD), since |c_i| ≥ 0 holds for all i. In addition, each diagonal element of T(u_b) is √(K̄L̄/(KL)) ∑_{i=1}^{I} |c_i|, and each diagonal element of T(u_d) is √(KL/(K̄L̄)) ∑_{i=1}^{I} |c_i|, so that SDP(P (X)), as the minimum point, has the following relationship: SDP(P (X)) ≤ (1/(2√(KLK̄L̄)))(trace(T(u_b)) + trace(T(u_d))) =

Table 1. The abbreviation index.

Table 2. An overview of the proposed method and related algorithms under the framework of ANM.

Table 3. The D-AP algorithm.

Table 4. A brief illustration of the proposed algorithms and the comparison ones (k_{R,1}, k_{R,2}, and k_P denote the maximum numbers of iterations in the related algorithms).
which can be considered as the atom of A MM . Thus, ∥P (X)∥ A MM =