The Impact of the Covariance Matrix Sampling on the Angle of Arrival Estimation Accuracy

Several works show that linear Angle of Arrival (AoA) methods such as the Projection Matrix (PM) have low computational complexity compared to the subspace methods. Although the PM method is classified as a subspace method, it does not need decomposition of the measured matrix. This work investigates the effect of the sampled columns within the covariance matrix on the projection matrix construction. To the authors' knowledge, this investigation has not been addressed in the literature. Unlike the subspace methods such as Multiple Signal Classification (MUSIC), Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT), Minimum Norm, Propagator, etc., which have to use a specific number of columns, we demonstrate that this restriction does not apply to the PM method. To this end, the projection matrix is formed from varying numbers of sampled columns to estimate the arrival angles. A theoretical analysis illustrates the relationship between the number of sampled columns and the degrees of freedom (DOFs). The analysis shows that, for the same aperture size, the DOFs can be increased by increasing only the number of sampled columns in the projection matrix calculation step. An intensive Monte Carlo simulation for different scenarios is presented to validate the theoretical claims. The estimation accuracy of the PM method, based on the proposed selective sampling methodology, outperforms all the other techniques, with less complexity compared to the Capon and MUSIC methods. The estimation accuracy is evaluated in terms of Root Mean Square Error (RMSE) and the Probability of Successful Detection (PSD). The results are presented and discussed.


Introduction
The major application of a sensor array is to estimate parameters of a signal or signals impinging on the array [1][2][3]. Examples include radar systems [4], public security [1], and emergency call locating [5]. The Fifth Generation (5G) utilizes Massive Multiple Input Multiple Output (Massive MIMO) and beamforming technologies to compensate for penetration loss, to produce multiple beams simultaneously, and to provide huge bandwidth and data rates [6,7]. Direction-finding systems encounter many challenges caused by mutual coupling [8,9] and propagation environments such as multipath, limited data (few snapshots), and the time-varying nature of the signals [10]. Arriving signals typically suffer from fading channels, which may cause a low Signal-to-Noise Ratio (SNR). The new applications mentioned, in addition to the challenges presented by the channel, have made direction estimation an active research area over the last few decades.
Bartlett's method is one of the earliest Angle of Arrival (AoA) estimation techniques. It applies an equal weighting to each antenna element to construct the steering vector in the specific direction [11].
The method reinforces the radiation of an antenna array towards the Direction of Arrival (DoA) by aligning the propagation delays of incident signals at the elements, while sources that are propagating from other directions, and the noise, are not enhanced. This method gives poor estimation resolution, which depends mainly on the size of the antenna array. However, increasing the aperture size leads to rising hardware and computational complexity.
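As an illustration of the delay-and-sum principle just described, the following is a minimal sketch (not taken from the paper) of the Bartlett spectrum P(θ) = aᴴRa, assuming a half-wavelength uniform linear array; all function and variable names here are ours:

```python
import numpy as np

def steering_vector(theta_deg, M):
    """ULA steering vector, d = lambda/2; theta measured from broadside."""
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

def bartlett_spectrum(R, grid_deg):
    """Equal element weighting: P(theta) = a^H R a / M."""
    M = R.shape[0]
    p = []
    for th in grid_deg:
        a = steering_vector(th, M)
        p.append(np.real(a.conj() @ R @ a) / M)
    return np.array(p)

# Two unit-power sources at -20 and 30 degrees, near-noiseless covariance.
M = 8
A = np.column_stack([steering_vector(-20, M), steering_vector(30, M)])
R = A @ A.conj().T + 0.01 * np.eye(M)
grid = np.arange(-90.0, 90.5, 0.5)
P = bartlett_spectrum(R, grid)
print(grid[np.argmax(P)])
```

The broad mainlobes of this spectrum for small M illustrate the resolution limitation noted above: resolution improves only with aperture size.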
Capon proposed an algorithm to estimate the power of incoming signals from one direction, and it considers all other signals as interference [12]. The Capon method has much better resolution than the Bartlett method, but it is more complicated to implement, as it requires the computation of the inverse of the covariance matrix. Moreover, its performance falls below that of the Bartlett algorithm when the received signals are highly correlated [13].
A Maximum Entropy (ME) algorithm proposed by Burg [14] depends on the extrapolation of the covariance matrix, where the extrapolation has to be selected to maximize signal entropy. This is accomplished by searching for the autoregressive coefficients that minimize the expected prediction error subject to the constraint that the first auto-regressive (AR) coefficient is unity. The ME method uses the Lagrange Multiplier technique to solve this condition and find the optimum weights.
Pisarenko Harmonic Decomposition (PHD) was the first approach that exploited the eigenvectors of the covariance matrix to estimate the direction of incoming signals [15]. It assumes the dimension of the noise is equal to one column, regardless of the number of arriving signals [16], and uses the eigenvector associated with the least eigenvalue. This method is only applicable to Uniform Linear Arrays (ULAs). Producing false peaks is its main drawback [17,18].
The Multiple Signal Classification (MUSIC) method is an eigenstructure technique, which provides fairly accurate estimates of the angles of arrival, the number of signals, and their strengths [19,20]. This algorithm conceptually depends on the orthogonality between the noise subspace and the steering vector of the antenna array to generate the spatial spectrum of the received array signals. The maximum points of the spectrum indicate the directions of the impinging sources.
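The noise-subspace orthogonality underlying MUSIC can be sketched as follows, assuming a half-wavelength ULA (geometry and names are our illustrative choices, not the paper's):

```python
import numpy as np

def steering_vector(theta_deg, M):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

def music_spectrum(R, L, grid_deg):
    """Project grid steering vectors onto the noise subspace (the M - L
    eigenvectors with the smallest eigenvalues); orthogonality at the true
    DoAs produces sharp peaks in 1 / (a^H En En^H a)."""
    M = R.shape[0]
    eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
    En = eigvecs[:, :M - L]                # noise subspace
    p = []
    for th in grid_deg:
        a = steering_vector(th, M)
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return np.array(p)

M, L = 8, 2
A = np.column_stack([steering_vector(-10, M), steering_vector(25, M)])
R = A @ A.conj().T + 0.01 * np.eye(M)      # idealized covariance
grid = np.arange(-90.0, 90.25, 0.25)
P = music_spectrum(R, L, grid)
print(grid[np.argmax(P)])
```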
The Root-MUSIC technique was proposed to reduce the computational complexity of the MUSIC method [21]. Its main idea is searching for the roots, which are associated with the directions of the arriving signals, instead of an extensive search over the antenna array vector. Although it is faster than MUSIC, it is again only applicable to the ULA, and not all of the roots give the correct locations of the arrival angles.
An Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) was also suggested to estimate the direction of arrival signals [22]. This method assumes the type of source is a narrowband signal, and that the number of received signals is less than the number of antenna elements. The main idea of this technique depends on exploiting the rotational invariance of the signal subspace, which is produced by two arrays with a translationally invariant structure. It is vital to separate these subarrays translationally and not rotationally.
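The rotational invariance exploited by ESPRIT can be illustrated with the simplest translationally invariant pair of subarrays, elements 0..M−2 and 1..M−1 of a half-wavelength ULA. This is a minimal sketch under those assumptions, not the general algorithm of [22]:

```python
import numpy as np

def steering_vector(theta_deg, M):
    theta = np.deg2rad(theta_deg)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

def esprit(R, L):
    """Least-squares ESPRIT for a ULA with overlapping shifted subarrays."""
    eigvals, eigvecs = np.linalg.eigh(R)
    Es = eigvecs[:, -L:]             # signal subspace: L largest eigenvectors
    Es1, Es2 = Es[:-1], Es[1:]       # the two translationally shifted copies
    Phi = np.linalg.pinv(Es1) @ Es2  # rotation operator between subarrays
    # Eigenvalues of Phi are exp(j * pi * sin(theta_k)) for d = lambda/2.
    phases = np.angle(np.linalg.eigvals(Phi))
    return np.rad2deg(np.arcsin(phases / np.pi))

M = 8
A = np.column_stack([steering_vector(-15, M), steering_vector(40, M)])
R = A @ A.conj().T + 0.001 * np.eye(M)
print(np.sort(esprit(R, 2)))
```

Note that no spectral grid search is needed: the angles come directly from the eigenvalues of the rotation operator.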
Another subspace technique called propagator has been proposed to estimate the DoAs of the signals [23]. The advantage of this method is reduced complexity compared with other subspace techniques since it does not require decomposition of the covariance matrix. However, it uses the whole of the covariance matrix to obtain the propagation operator. Further, it is only suitable in the presence of Additive White Gaussian Noise (AWGN) channels, and its performance deteriorates significantly under non-uniform spatial noise. An Orthonormal Propagator Method (OPM) has been presented [24] and has been shown to improve the performance of the original propagator method, although it increases the computational burden of constructing the propagator operator.
In Reference [25], the authors proposed a maximum likelihood approach known as Space Alternating Generalized Expectation-maximization (SAGE) for channel modeling. This approach has been used to obtain path parameters from channel measurement data by performing an iterative joint Expectation-Maximization (EM) on many parameters, for instance, incidence angles, relative delay, and Doppler frequency, of the incident signals. SAGE can obtain multiple parameter estimations and separate the received signals via simultaneously searching in different domains [26]. However, EM searching iterations on many factors entail a substantial computational burden and increase the complexity.
A Unitary Matrix Pencil (U-MP) method was proposed in Reference [27] to reduce the complexity of the computation. The authors applied the unitary matrix transformation to the Matrix Pencil (MP) method to estimate the DoAs of the signals efficiently. The transformation maps the Centro-Hermitian matrices to real matrices. It has been demonstrated that this real matrix is symmetrical, and hence, both matrix decomposition and spectral search can be implemented with real-valued computation. However, most of the presented real-valued methods are only suitable for Centro-Symmetrical Arrays (CSAs) [28], which severely limits their applications [29,30].
Root-MUSIC has been developed to be applicable for Uniform Circular Arrays (UCAs) so that it can provide 360° azimuthal coverage [31,32]. The UCA Root-MUSIC transforms the array measurement data from element space to the Beamspace Sample Covariance Matrix (BSCM) using the Beamspace Transform (BT), which is based on phase mode excitation.
A modified Real-Value MUSIC was proposed in Reference [33] to reduce the computational complexity of the Real MUSIC method without losing its suitability for an arbitrary array. This method uses only the real values of the measured covariance matrix in both the Eigen decomposition and spectral search stages, leading to significant complexity reduction.
In Reference [34], the authors presented a Maximum Signal Subspace (MSS) AoA method, which depends on the orthogonality between the manifold vector and the eigenvector that is associated with the highest eigenvalue, regardless of the number of incoming signals. This method reduced the computational load in the grid-searching step; however, it is only suitable for ULAs. Later, the authors proposed an algorithm called Subtracting Signal Subspace (SSS), which is based on the orthogonality between the manifold vector and the signal subspace and can estimate signal directions of arrival in both Two-Dimensional (2D) and Three-Dimensional (3D) applications [35]. The SSS technique can be used with any array geometry and has a lower computational load in the grid-searching stage compared to the MUSIC method.
A Propagator Direct Data Acquisition (PDDA) method that can determine the DoAs directly from the measurement data, without the need to form the covariance matrix, has been presented in Reference [36]. This method partitions the received data matrix into two parts and then finds the cross-correlation between them. A Minimum Variance Norm (MVN) method [37] was presented, based on the Maximum Likelihood (ML) function, which assumes the wanted signals arrive from a specific direction, whereas signals from all other directions are treated as interferers. This algorithm uses a Lagrange optimization technique in which the obtained optimum weights are applied to the received array data in order to find the arrival angles in 2D and 3D planes.
In Reference [38], a sparse-based off-grid method using coprime arrays to deal with the DoA estimation problem was proposed. This method uses separate coarse and fine processing stages to reduce the estimation error in the case of a limited number of snapshots. The coprime arrays have been used to extend the virtual array aperture, hence increasing the degrees of freedom, and so to detect more sources than the number of sensors.
Besides the work referenced above, many methodologies, ideas, and theories have been presented to improve the estimation accuracy and/or reduce the computational complexity, while overcoming particular limitations. However, to the authors' knowledge, no effort has yet been made to address the influence of the sampled number of rows/columns on the performance of the Projection Matrix (PM) method for direction estimation applications. The present work now considers the impact of such sampled columns on the estimation efficiency of the PM method. It is worth mentioning that the final dimension of the projection matrix used in the grid-searching step is constant whether a few or many sampled columns are selected. This means the complexity of the computations at this step is constant, regardless of the number of selected columns. It is proved that any increase in the number of chosen sampled columns has a positive effect on the projection matrix construction in terms of increasing the number of generated nulls towards the arrival signals, and enhances robustness to noise. Hence, the estimation accuracy will be improved without needing to increase the size of the array aperture. It is worth noting that the number of exploited columns cannot be changed for the methods which need to decompose the measured matrix, such as MUSIC, ESPRIT, MP, etc., or the techniques that need to divide the Covariance Matrix (CM), for instance, Propagator, OPM, etc.
The paper is organized so that Section 2 presents AoA modeling with an arbitrary antenna array, while the working principles of the PM and several related algorithms are given in Section 3. Section 4 discusses the proposed matrix-sampling methodology for constructing the projection matrix, together with its effect on resolving the arrival angles and the complexity of computation. In Section 5, simulation results, discussions, and comparisons are carried out. Finally, Section 6 summarizes the results and sets out conclusions.

AoA Model and Problem Formulation
It is assumed that L narrowband signals, arriving from different elevation (θ_j) and azimuth (φ_j) angles, are incident on an arbitrary antenna array, as shown in Figure 1 [36]. The signal sources are assumed to lie in the far-field. At the receiving end, these signals are received by M sensors and processed digitally to determine their DoAs. The total received baseband signal X(t) under Gaussian noise channel conditions is given as follows:

X(t) = A(θ, φ)S(t) + N(t)

where S(t) = [s_1(t), s_2(t), . . . , s_L(t)]^T is the vector of Binary Phase Shift Keying (BPSK) modulated source signals, N(t) = [n_1(t), n_2(t), . . . , n_M(t)]^T is the AWGN of each channel, and A(θ, φ) is the M × L matrix of steering vectors, defined as follows:

A(θ, φ) = [a(θ_1, φ_1), a(θ_2, φ_2), . . . , a(θ_L, φ_L)]

For the k-th of the L signals impinging on the above array, the unit vector defining its incidence azimuth and elevation angles is described below:

u_k = sin θ_k cos φ_k â_x + sin θ_k sin φ_k â_y + cos θ_k â_z

where â_x, â_y, and â_z are the unit vectors of the Cartesian co-ordinates. The unit vector pointing from the original reference point towards the i-th element can be given as:

v_i = cos ϕ_i â_x + sin ϕ_i â_y

Here, ϕ_i represents the angular separation between the x-axis and the location of the i-th element. Next, the angle α_ik between the two unit vectors is calculated from the dot product of v_i and u_k, as below:

α_ik = cos^{−1}(u_k · v_i)

The whole set of angles produced by L signals incident on M elements can be arranged as an M × L matrix [α_ik]. The corresponding time delay τ_ik of each α_ik, where r_i is the distance from the reference point to the i-th element and c is the speed of light, can be computed as follows:

τ_ik = (r_i / c) cos α_ik

and the matrix [τ_ik] collects the time delays of all signals incident on the M sensors. The angular phase difference ψ_ik corresponding to τ_ik is presented below:

ψ_ik = (2π/λ) r_i cos α_ik

where λ is the wavelength, and [ψ_ik] is the corresponding matrix of phase differences. The array steering vector of the above antenna array under these assumptions and conditions can be defined as follows:

a(θ_k, φ_k) = [e^{jψ_1k}, e^{jψ_2k}, . . . , e^{jψ_Mk}]^T
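The steering-matrix construction above reduces to a phase (2π/λ)(p_i · u_k) per element, with p_i the i-th element position. A small sketch under that geometry (our names, with a uniform circular array as the example placement):

```python
import numpy as np

def unit_vector(theta_deg, phi_deg):
    """u = sin(theta)cos(phi) x + sin(theta)sin(phi) y + cos(theta) z."""
    th, ph = np.deg2rad(theta_deg), np.deg2rad(phi_deg)
    return np.array([np.sin(th) * np.cos(ph),
                     np.sin(th) * np.sin(ph),
                     np.cos(th)])

def steering_matrix(positions, angles, wavelength):
    """positions: (M, 3) element coordinates; angles: list of (theta, phi).
    Element m contributes phase (2*pi/lambda) * (p_m . u_k) for source k."""
    U = np.column_stack([unit_vector(t, p) for t, p in angles])  # (3, L)
    psi = 2 * np.pi / wavelength * positions @ U                 # (M, L)
    return np.exp(1j * psi)

# Example: 6-element uniform circular array of radius lambda/2.
lam = 1.0
ang = 2 * np.pi * np.arange(6) / 6
pos = 0.5 * lam * np.column_stack([np.cos(ang), np.sin(ang), np.zeros(6)])
A = steering_matrix(pos, [(90, 30), (60, 120)], lam)
print(A.shape)  # (6, 2)
```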

The Projection Matrix Construction
It is well-known that subspace AoA techniques are very sensitive to under- or over-estimation of the number of sources L, where any mistake in this estimation can cause poor resolution. Although the PM method is classified as a subspace method, it does not need to separate the signal and noise subspaces. In this work, it is shown that the PM method can work without the need to know the number of arriving signals. Considering that M receivers are used to measure the data of the impinging signals at N time instants, the received data matrix can be given as follows:

X = [X(t_1), X(t_2), . . . , X(t_N)]

which is of dimension M × N.
Once the received data matrix is measured, one needs to compute the array covariance matrix (R_xx) as follows:

R_xx = E[X(t)X^H(t)] = A(θ, φ) R_ss A^H(θ, φ) + σ_n² I_M

where R_ss = E[S(t)S^H(t)] is the (L × L) source signal correlation matrix, σ_n² is the noise variance, I_M is the M × M identity matrix, and (·)^H denotes the Hermitian transpose.

When some or all of the signal sources are non-stationary, in other words when they move from one location to another during the estimation or tracking process time, the directions of their sent signals will change, and thus, the emitted signals will arrive at the receiving array from different angles. This means A(θ, φ) can change partially or completely, and consequently the spatial spectrum of R_xx will change. To this end, the covariance matrix needs to be measured frequently in practical applications to ensure reliable estimation. Complete knowledge of R_xx cannot be assumed; instead, we may use as necessary the sample-average estimated array input auto-covariance matrix given by:

R̂_xx ≈ (1/N) Σ_{n=1}^{N} X(t_n)X^H(t_n)

The Capon method [12] needs to calculate the inverse of the CM and, therefore, its spatial spectrum can be defined as follows:

P_Capon(θ, φ) = 1 / (a^H(θ, φ) R̂_xx^{−1} a(θ, φ))

Alternatively, the idea of the propagator method [24] is to divide the covariance matrix into two sub-matrices, as described below:

R̂_xx = [G  H]

where G contains the first L columns and H the remaining M − L columns. By considering G a non-singular (full-column-rank) matrix, the propagator P may be defined as a unique linear operator of C^{M−L} into C^L. In the noiseless system, the propagator satisfies:

H = G P

However, when the environment is noisy, the least mean square (LMS) method is used to estimate P, as expressed below:

P = (G^H G)^{−1} G^H H

Next, we need to construct a matrix Q, such that:

Q^H = [P^H  −I_{M−L}]

where I_{M−L} is the identity matrix of dimension (M − L) × (M − L), so that Q^H is of dimension (M − L) × M and Q^H a(θ, φ) vanishes at the true arrival angles. The spatial spectrum of the propagator method can be constructed as follows:

P_Prop(θ, φ) = 1 / (a^H(θ, φ) Q Q^H a(θ, φ))

For the subspace methods, one needs to apply the Eigenvalue Decomposition (EVD) approach to R̂_xx; sorting the eigenvalues in descending order yields:

R̂_xx = Σ_{i=1}^{M} λ_i e_i e_i^H,   λ_1 ≥ λ_2 ≥ . . . ≥ λ_M

where λ_i and e_i are the eigenvalues and their associated eigenvectors, respectively.

of 19
Once the received data matrix is measured, one needs to compute the array covariance matrix where, is the (L × L) source signal correlation matrix = ( ( ) ( )), is the noise variance, is the M × M identity matrix, and denotes the Hermitian transpose. When some or all of the signals' sources are non-stationary, in other words when they move from one location to another during the estimation or tracking process time, the directions of their sent signals will change, and thus, the directions of emitted signals will arrive at the receiving array end from different angles. This means ( , ) can be changed partially or completely and consequently the spatial spectrum of the will change. To this end, the covariance matrix needs to be measured frequently in the practical applications to ensure reliable estimation. A complete knowledge of cannot be assumed, instead, we may use as necessary the sample-average estimated array input auto-covariance matrix given by: 1 is a non-singular matrix, the propagator may be defined as a unique linear operator P of C M-L into C L . In the noiseless system, the propagator is given as follows:

of 19
Once the received data matrix is measured, one needs to compute the array covariance matrix where, is the (L × L) source signal correlation matrix = ( ( ) ( )), is the noise variance, is the M × M identity matrix, and denotes the Hermitian transpose. When some or all of the signals' sources are non-stationary, in other words when they move from one location to another during the estimation or tracking process time, the directions of their sent signals will change, and thus, the directions of emitted signals will arrive at the receiving array end from different angles. This means ( , ) can be changed partially or completely and consequently the spatial spectrum of the will change. To this end, the covariance matrix needs to be measured frequently in the practical applications to ensure reliable estimation. A complete knowledge of cannot be assumed, instead, we may use as necessary the sample-average estimated array input auto-covariance matrix given by: Once the received data matrix is measured, one needs to compute the array covariance matr where, is the (L × L) source signal correlation matrix = ( ( ) ( )), is the noi variance, is the M × M identity matrix, and denotes the Hermitian transpose. When some or all of the signals' sources are non-stationary, in other words when they mov from one location to another during the estimation or tracking process time, the directions of the sent signals will change, and thus, the directions of emitted signals will arrive at the receiving arra end from different angles. This means ( , ) can be changed partially or completely and cons quently the spatial spectrum of the will change. To this end, the covariance matrix needs to b measured frequently in the practical applications to ensure reliable estimation. 
A comple knowledge of cannot be assumed, instead, we may use as necessary the sample-average es mated array input auto-covariance matrix given by: 1 (17) However, when the environment is noisy, the least mean square (LMS) method is used to estimate P as expressed below, Once the received data matrix is measured, one needs to compute the array covariance matrix where, is the (L × L) source signal correlation matrix = ( ( ) ( )), is the noise variance, is the M × M identity matrix, and denotes the Hermitian transpose. When some or all of the signals' sources are non-stationary, in other words when they move from one location to another during the estimation or tracking process time, the directions of their sent signals will change, and thus, the directions of emitted signals will arrive at the receiving array where, is the (L × L) source signal correlation matrix = ( ( ) ( )), is th variance, is the M × M identity matrix, and denotes the Hermitian transpose. When some or all of the signals' sources are non-stationary, in other words when the from one location to another during the estimation or tracking process time, the directions sent signals will change, and thus, the directions of emitted signals will arrive at the receivin 2 (18) Next, we need to construct a V matrix, such that: I M−L is the identity with dimension (M -L) × M. The spatial spectrum of the propagator method can be constructed as follows: For the subspace methods, one needs to apply the Eigenvalue Decomposition (EVD) approach to 6 of 19 Once the received data matrix is measured, one needs to compute the array covariance matrix ( ) as follows: where, is the (L × L) source signal correlation matrix = ( ( ) ( )), is the noise variance, is the M × M identity matrix, and denotes the Hermitian transpose. 
When some or all of the signals' sources are non-stationary, in other words when they move from one location to another during the estimation or tracking process time, the directions of their sent signals will change, and thus, the directions of emitted signals will arrive at the receiving array end from different angles. This means ( , ) can be changed partially or completely and consequently the spatial spectrum of the will change. To this end, the covariance matrix needs to be measured frequently in the practical applications to ensure reliable estimation. A complete knowledge of cannot be assumed, instead, we may use as necessary the sample-average estimated array input auto-covariance matrix given by: The Capon method [12] needs to calculate the inverse of the CM and, therefore, its spatial spectrum can be defined as follows: Alternatively, the idea of the propagator method [24] is to divide the covariance matrix into two sub-matrices, as described below: By considering is a non-singular matrix, the propagator may be defined as a unique linear operator of C M-L into C L . In the noiseless system, the propagator is given as follows: However, when the environment is noisy, the least mean square (LMS) method is used to estimate as expressed below, Next, we need to construct a matrix, such that: is the identity with dimension (M -L) × M. 
The spatial spectrum of the propagator method can be constructed as follows: For the subspace methods, one needs to apply the Eigenvalue Decomposition (EVD) approach to and then sorting the eigenvalues in a descending way yields: xx and then sorting the eigenvalues in a descending way yields: Once the received data matrix is measured, one needs to compute the array covariance matrix When some or all of the signals' sources are non-stationary, in other words when they move from one location to another during the estimation or tracking process time, the directions of their sent signals will change, and thus, the directions of emitted signals will arrive at the receiving array end from different angles. This means ( , ) can be changed partially or completely and conse-quently the spatial spectrum of the will change. To this end, the covariance matrix needs to be measured frequently in the practical applications to ensure reliable estimation. A complete knowledge of cannot be assumed, instead, we may use as necessary the sample-average esti-mated array input auto-covariance matrix given by: The Capon method [12] needs to calculate the inverse of the CM and, therefore, its spatial spec-trum can be defined as follows: Alternatively, the idea of the propagator method [24] is to divide the covariance matrix into two sub-matrices, as described below: By considering is a non-singular matrix, the propagator may be defined as a unique linear operator of C M-L into C L . In the noiseless system, the propagator is given as follows: However, when the environment is noisy, the least mean square (LMS) method is used to esti-mate as expressed below, Next, we need to construct a matrix, such that: is the identity with dimension (M -L) × M. 
The spatial spectrum of the propagator method can be constructed as follows: For the subspace methods, one needs to apply the Eigenvalue Decomposition (EVD) approach to and then sorting the eigenvalues in a descending way yields: Q SS = q 1 , q 2 , . . . . . . , q L is the Signal Subspace (SS) matrix, which associates with L largest eigenvalues, Σ SS = (Σ 1 , Σ 2 , . . . . . . , Σ L ). whereas Q NS = q L+1 , q L+2 , . . . . . . , q M represents a Noise Subspace (NS) matrix, which is associated with the M-L lowest eigenvalues, Σ NS = (Σ L+1 , Σ L+2 , . . . . . . , Σ M ). MUSIC exploits the orthogonality between the manifold vector (i.e., steering vector) and the eigenvectors of the NS to determine the directions of arrival signals. The spatial spectrum of the MUSIC method can be constructed as follows [19]: Alternatively, one can exploit the orthogonality between the manifold vector and SS to estimate the AoAs. To achieve this task, projections of the L eigenvectors (i.e., Q SS ) should be calculated first as follows [39]: Then, the pseudo-spectrum of the PM method based on the SS matrix can be constructed as follows [37]: With this assumption, the construction of the projection matrix is based on the SS eigenvectors, and therefore, the decomposition of the CM is required. Instead, it can directly exploit L rows/columns of the CM to construct a projection matrix without the need to decompose the CM. Consider the observed CM defined as follows: Once the received data matrix is measured, one needs to compute the array covariance matrix When some or all of the signals' sources are non-stationary, in other words when they move from one location to another during the estimation or tracking process time, the directions of their sent signals will change, and thus, the directions of emitted signals will arrive at the receiving array end from different angles. This means ( , ) can be changed partially or completely and conse-quently the spatial spectrum of the will change. 
To this end, the covariance matrix needs to be measured frequently in the practical applications to ensure reliable estimation. A complete knowledge of cannot be assumed, instead, we may use as necessary the sample-average esti-mated array input auto-covariance matrix given by: The Capon method [12] needs to calculate the inverse of the CM and, therefore, its spatial spec-trum can be defined as follows: In this case, one needs to select L rows/columns of 6 of 19 Once the received data matrix is measured, one needs to compute the array covariance matrix When some or all of the signals' sources are non-stationary, in other words when they move from one location to another during the estimation or tracking process time, the directions of their sent signals will change, and thus, the directions of emitted signals will arrive at the receiving array end from different angles. This means ( , ) can be changed partially or completely and conse-quently the spatial spectrum of the will change. To this end, the covariance matrix needs to be measured frequently in the practical applications to ensure reliable estimation. A complete knowledge of cannot be assumed, instead, we may use as necessary the sample-average esti-mated array input auto-covariance matrix given by: xx to sample Q L as given below: Then, the projection matrix based on Q L can be computed as follows: The pseudo-spectrum of the PM method using U L can be constructed using the following formula.
The working principle of the above formula still depends on prior knowledge of the number of arrival signals, which is difficult to obtain in practical applications. Thus, a pre-processing step is required to find the actual number of arrival signals, which in turn increases the computational complexity. Therefore, in the next section, the projection matrix will be formed regardless of the number of arrival signals.
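To make the preceding quantities concrete, the sample-average covariance estimate and the Capon spectrum can be sketched in a few lines of NumPy. This is a minimal sketch rather than the paper's code: the ULA steering convention ψ = 2πd cos θ with d in wavelengths follows the analysis above, and the function names are our own.

```python
import numpy as np

def steering_vector(theta_deg, M, d=0.5):
    # ULA manifold vector a(theta); per-element phase 2*pi*d*cos(theta), d in wavelengths
    return np.exp(2j * np.pi * d * np.arange(M) * np.cos(np.deg2rad(theta_deg)))

def sample_covariance(X):
    # Sample-average estimate R = (1/N) * sum_n x(n) x(n)^H from the M x N data matrix
    return X @ X.conj().T / X.shape[1]

def capon_spectrum(R, grid_deg, d=0.5):
    # Capon spectrum P(theta) = 1 / (a^H R^{-1} a); requires the CM inverse
    M = R.shape[0]
    Rinv = np.linalg.inv(R)
    return np.array([
        1.0 / np.real(steering_vector(t, M, d).conj() @ Rinv @ steering_vector(t, M, d))
        for t in grid_deg
    ])
```

Scanning `capon_spectrum` over a 0°–180° grid and locating its peaks yields the AoA estimates; MUSIC and the propagator differ only in the matrix placed between a^H(θ) and a(θ).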

The Methodology of Matrix Sampling
The way the rows/columns of a matrix are sampled has a substantial influence on estimating the eigenvalues and eigenvectors of the covariance matrix [40,41]. Volume sampling is the picking of k-subsets of the rows/columns of a matrix with probabilities proportional to the squared volumes of the simplices defined by them. It was first introduced in Reference [42] in the context of low-rank approximation of matrices. The problem at hand is to find a method that can increase the estimation resolution of the PM method without increasing M. Thus, our approach is based on picking a P-subset of the rows/columns of the covariance matrix so that projecting onto their span provides sufficient information about the incident signals without significantly raising the computational burden of a complete spectral analysis. This problem is identical to the column-subset problem of the matrix transpose mentioned in Reference [40].
The PM method assumes the number of arrival signals is known and less than the number of array sensors [43]. However, it is considered a low-complexity method since it does not need to compute the inversion or decomposition of the CM, as is done in the Capon and MUSIC methods, respectively. Although the PM method defined in Equation (28) is categorized as a signal subspace method, where L should be known in advance, this assumption is not entirely accurate, since the SS and NS matrices are only obtained after applying the EVD. Therefore, there is no need to separate these matrices in order to calculate the projection matrix.
Hence, a new projection matrix will be constructed based on selecting P rows/columns instead of using only L, where L ≤ P < M. Under this assumption, the PM method can be considered a blind method, and its estimation accuracy will depend mainly upon the value of P. Q_P can be defined as described below:

Q_P = R̂_xx(:, 1 : P) (29)

The manner and size of the sampling in (29) can significantly affect the signal parameter estimates. To justify this point, one can investigate the relationship between the sampled matrices and the obtained eigen/singular values of the L arrival signals. It is known that the estimation accuracy is directly proportional to the power of the eigenvalues. To this end, a covariance matrix that contains data for seven signals is generated randomly with size (30 × 30). Then, Q_P is sampled for different values of P, namely P = (7, 10, 15, 20, 24, 27), and the Singular Value Decomposition (SVD) is applied to each sampled matrix as follows:

Q_P = U_s Σ_s V_s^H (30)

The summation of the L largest singular values of each sampled matrix is calculated as follows:

E_L = Σ_{i=1}^{L} σ_i (31)

The calculated results are plotted as a Cumulative Distribution Function (CDF), as shown in Figure 2. Apparently, the obtained power of the L dominant singular values increases significantly with the number of sampled columns. Thus, one can improve the performance of the PM method by increasing only the number of sampled columns at the projection matrix construction step. The projection matrix based on the proposed sampled matrix, Q_P, can be computed as follows:

U_P = Q_P (Q_P^H Q_P)^{-1} Q_P^H (32)

Finally, the spatial spectrum of the proposed method can be constructed as given below:

P(θ) = 1 / (a^H(θ) (I_M − U_P) a(θ)) (33)

The procedure of the proposed sampling methodology for AoA estimation is given in Table 1. Input: the measured data matrix X ∈ C^{M×N}, with M receivers, N measurements, and L incident signals. Output: the estimated AoAs.
Step 1: Calculate the CM using (25)
Step 2: Sample the Q_P matrix based on a specific P as follows: Q_P = R_xx(:, 1 : P)
Step 3: Construct the projection matrix (i.e., U_P) using (32)
Step 4: Construct the pseudo-spectrum using (33)
Step 5: Find the locations of the produced peaks to determine the AoAs
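The five steps above can be sketched in NumPy as follows. This is a minimal sketch under the same ULA assumptions (d = 0.5λ, ψ = π cos θ); the function name is our own, and peak picking (Step 5) is left to the caller, e.g., via a local-maxima search over the returned spectrum.

```python
import numpy as np

def pm_sampled_spectrum(X, P, grid_deg, d=0.5):
    """PM pseudo-spectrum built from the first P sampled columns of the
    covariance matrix; no EVD and no prior knowledge of L is needed."""
    M, N = X.shape
    R = X @ X.conj().T / N                    # Step 1: sample CM
    Q_P = R[:, :P]                            # Step 2: Q_P = R(:, 1:P)
    # Step 3: projection matrix U_P = Q_P (Q_P^H Q_P)^{-1} Q_P^H
    U_P = Q_P @ np.linalg.solve(Q_P.conj().T @ Q_P, Q_P.conj().T)
    B = np.eye(M) - U_P
    spec = []
    for t in grid_deg:                        # Step 4: pseudo-spectrum
        a = np.exp(2j * np.pi * d * np.arange(M) * np.cos(np.deg2rad(t)))
        spec.append(1.0 / np.abs(a.conj() @ B @ a))
    return np.asarray(spec)                   # Step 5: peak-pick outside
```

Note that `np.linalg.solve` on the small (P × P) Gram matrix replaces the (M × M) inversion/decomposition of Capon and MUSIC, which is where the complexity saving comes from.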

Estimation Accuracy Analysis
In this section, it will be proven that an increase in the number of sampled columns leads to an increase in the Degrees of Freedom (DOFs). For AoA estimation, the aim is to determine the number of resultant nulls of the array factor rather than the number of maxima. To justify the angular-resolution advantage of the proposed sampling approach over the classical one, we compute the possible nulls produced by both methods. Using the classical sampling approach, the Array Factor (AF) based on the L selected columns can be shaped as follows:

AF_L(ψ) = Σ_{m=0}^{L−1} e^{jmψ} (34)

where ψ = π cos θ. Multiplying both sides of the above equation by e^{−jψ} leads to:

e^{−jψ} AF_L(ψ) = Σ_{m=0}^{L−1} e^{j(m−1)ψ} (35)

Subtracting Equation (35) from (34) results in:

AF_L(ψ) (1 − e^{−jψ}) = e^{j(L−1)ψ} − e^{−jψ} (36)

which simplifies to a sin(x)/x-type function:

|AF_L(ψ)| = |sin(Lψ/2) / sin(ψ/2)| (37)

From the above equation, it can be concluded that there are L − 1 nulls available to point in the directions of the incident signals. For the proposed sampling methodology, P columns are selected and, therefore, the array factor is given by:

AF_P(ψ) = Σ_{m=0}^{P−1} e^{jmψ} (38)

Multiplying both sides of the above equation by e^{−jψ} yields:

e^{−jψ} AF_P(ψ) = Σ_{m=0}^{P−1} e^{j(m−1)ψ} (39)

Subtracting Equation (39) from (38) and simplifying results in:

|AF_P(ψ)| = |sin(Pψ/2) / sin(ψ/2)| (40)

The array nulls generated using the proposed sampling method occur when the numerator argument satisfies (P/2) π cos θ_n = ±nπ. Hence,

θ_n = cos^{−1}(±2n/P), n = 1, 2, …, (P − 1) (41)

Obviously, the total number of nulls that can be obtained from the above equation is P − 1, and this number can be increased by increasing the number P of sampled columns. To compare the expected numbers of nulls that can be pointed towards the angles of the received signals by the classical and proposed sampling methodologies, Equation (40) is divided by Equation (37), which gives the Nulls Ratio (NR):

NR = (P − 1) / (L − 1) (42)

Based on the above arguments, the NR can be increased by increasing the number of sampled columns, which, in turn, increases the DOFs. The NR can play a significant role in adjusting the AoA resolution to resolve very closely spaced signals incident on the array.
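A quick numeric check of the null count and the Nulls Ratio can be sketched as below, assuming nulls of the Dirichlet-type factor sin(Pψ/2)/sin(ψ/2) with ψ = π cos θ (d = 0.5λ); the function names are illustrative, and directions with |cos θ| > 1 are discarded as physically invisible.

```python
import numpy as np

def null_directions(P):
    # Null directions (degrees) of the P-column array factor:
    # (P/2)*pi*cos(theta) = +/- n*pi  ->  cos(theta) = +/- 2n/P
    n = np.arange(1, P)              # candidate null indices
    c = 2.0 * n / P
    c = c[c <= 1.0]                  # keep only physically valid directions
    thetas = np.degrees(np.arccos(np.concatenate([-c, c])))
    return np.unique(np.round(thetas, 6))

def nulls_ratio(P, L):
    # NR = (P-1)/(L-1): relative growth in usable nulls (i.e., DOFs)
    return (P - 1) / (L - 1)
```

For example, moving from L = 7 to P = 20 sampled columns gives NR = 19/6 ≈ 3.2, i.e., roughly three times as many nulls available to steer towards incident signals with the same aperture.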

The Computational Complexity Analysis
Several works in the DoA estimation area show that linear methods such as the PM algorithm have much lower computational complexity than subspace methods [39,44]. This is because the PM does not require inversion or decomposition of the CM, which costs approximately O(M^3), where M is the number of antenna elements. With massive MIMO, O(M^3) entails forbiddingly high computational complexity. Generally, achieving limited hardware cost with large array apertures poses serious challenges for real-time signal processing and parameter estimation.
The computational burden of the projection matrix construction is O(M^2 P) arithmetic operations. The PM method based on the new methodology replaces the inversion of an (M × M) matrix with a smaller problem involving a (P × P) matrix, where P is the number of selected sampled columns. The resulting calculation-reduction factor of this method, compared to methods that need covariance matrix decomposition (such as MUSIC, ESPRIT, etc.) or covariance matrix inversion (such as Capon), is O(P/M).
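The O(P/M) reduction factor follows directly from the leading-order operation counts; a trivial illustration (leading-order terms only, ignoring constants, so this is indicative rather than a measured cost):

```python
def reduction_factor(M, P):
    # Projection-matrix construction ~ O(M^2 * P) vs CM inversion/EVD ~ O(M^3);
    # the ratio collapses to P/M
    return (M * M * P) / float(M ** 3)

# e.g., M = 30 receivers, P = 20 sampled columns -> roughly 2/3 of the
# leading-order cost of a full inversion/decomposition
```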
Finally, the computational complexity of the PM method at the search-grid step is O(M^2 J_θ), where J_θ = 180/δ_θ and δ_θ is the scanning step. As can be seen from Equations (24), (28), and (33), the dimensions of U_SS, U_L, and U_P are equal, which means the computational burden at the grid-scanning stage is the same. The overall computational complexity of the proposed method is compared with some well-known AoA algorithms in Table 2. Table 2. Computational complexity comparison for several AoA methods.

Algorithm | The Needed Computational Operations

Numerical Simulations and Discussions
To verify the theoretical claims of this investigation, MATLAB software has been used to simulate the presented method and compare it with other AoA methods. The simulation is divided into three main scenarios: the first scenario investigates the effect of changing the number of used rows/columns (P) with SNR variations. The second scenario shows the impact of changing P with the available number of snapshots used to collect the matrix data. Finally, the proposed methodology is compared with several AoA methods in terms of estimation accuracy and the required computational burden. Each scenario has been implemented with intensive Monte Carlo simulation to ensure a fair and robust comparison of results. In each scenario, the Average Root Mean Square Error (ARMSE) is calculated as follows:

ARMSE = (1/K) Σ_{k=1}^{K} sqrt( (1/L) Σ_{l=1}^{L} (θ_{k,l} − θ̂_{k,l})^2 )

where K is the number of Monte Carlo trials, θ_{k,l} are the randomly generated AoAs, and θ̂_{k,l} are the estimated AoAs. The Probability of Successful Detection (PSD) of the AoAs was also computed according to the following formula:

PSD = (1/(K·L)) Σ_{k=1}^{K} Nr_k × 100%

where Nr_k is the number of successfully detected AoAs at trial k.
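The two metrics can be computed as follows; this is a direct transcription of the formulas above, with the (K × L) array shapes being our own convention.

```python
import numpy as np

def armse(true_angles, est_angles):
    """ARMSE over K Monte Carlo trials: the average over trials of the
    per-trial RMSE between the L true and L estimated AoAs (degrees).
    Both inputs have shape (K, L)."""
    t = np.asarray(true_angles, dtype=float)
    e = np.asarray(est_angles, dtype=float)
    per_trial_rmse = np.sqrt(np.mean((t - e) ** 2, axis=1))
    return float(per_trial_rmse.mean())

def psd(n_detected, L, K):
    # PSD = 100% * (total successfully detected AoAs) / (K * L);
    # n_detected holds Nr_k, the per-trial detection counts
    return 100.0 * float(np.sum(n_detected)) / (K * L)
```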

Intercomparison: Numerical Example
A numerical example is presented to show the advantage of increasing the number of sampled columns in the calculation of the projection matrix. In this example, seven plane waves (L = 7) are assumed to impinge on the ULA with M = 30 and d = 0.5λ. The number of snapshots used to collect the data matrix is N = 100, and the SNR at the array output is 10 dB. The scanning step angle is δ_θ = 0.5°. The directions of the plane waves have been generated randomly and are indicated by red circles. Two sets of angles with close directions are assumed incident on the antenna array: the first set consists of two angles and the second of three. Six different values of P = (7, 10, 15, 20, 24, 27) columns have been used to construct the projection matrix.
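The data generation for this kind of example can be sketched as follows. Unit-power uncorrelated sources and complex white Gaussian noise are assumed, and the function name and seed are our own; the default parameters mirror the example (M = 30, N = 100, SNR = 10 dB, d = 0.5λ).

```python
import numpy as np

def simulate_ula(angles_deg, M=30, N=100, snr_db=10.0, d=0.5, seed=0):
    """Generate an M x N received data matrix X = A S + noise for L plane
    waves impinging on a half-wavelength-spaced ULA."""
    rng = np.random.default_rng(seed)
    m = np.arange(M)[:, None]
    # Array manifold: one steering column per incident angle
    A = np.exp(2j * np.pi * d * m * np.cos(np.deg2rad(np.atleast_1d(angles_deg)))[None, :])
    L = A.shape[1]
    # Unit-power complex Gaussian source waveforms
    S = (rng.standard_normal((L, N)) + 1j * rng.standard_normal((L, N))) / np.sqrt(2)
    noise_var = 10.0 ** (-snr_db / 10.0)
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
    return A @ S + noise
```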
The PM method with P = L = 7 failed to detect three angles, as shown in Figure 3, whereas with P = 10 it failed to detect two targets. With P = 15, the PM method had difficulty resolving the set of two closely spaced AoAs on the right side. At P = 20, 24, and 27, the PM method detects all the arrival angles successfully, and the produced peaks become sharp and accurate. This demonstrates that the estimation resolution of the PM technique increases with the number of sampled columns used in the projection matrix construction stage.
However, as P increases, the computational burden increases slightly, as shown in Table 3. It can be concluded that the PM method still works even in the overestimation case (i.e., P > L), and its estimation accuracy increases with P. Table 3. The computational burden of the PM method with a different number of sampled columns.

No. of Sampled Columns | The Computational Load

The Performance Comparison Based on the Various Signal-to-Noise Ratios (SNRs)
The SNR observed at the input of the antenna array receiver plays a crucial role in the performance of direction-finding systems. This scenario investigates the variation in the estimation performance of the PM method with SNR. Five different values of P = (7, 10, 15, 20, 25) are considered, and for each P the SNR is varied from −10 dB to 20 dB in 2.5 dB increments. To investigate the effect of the P values intensively, two thousand independent Monte Carlo trials (K = 2000) are carried out at each step. The 2000 sets of seven AoAs are randomly generated within the angular range (−90°, 90°), and each set is applied for every P to ensure a fair comparison. The other simulation parameters are the same as those given in Section 5.1. The ARMSE at every SNR is calculated and then plotted for every selected P, as shown in Figure 4, whereas Figure 5 illustrates the PSD (AoAs) of the PM method at each P. Unquestionably, the ARMSE and PSD (AoAs) of the PM method improve considerably with increased P. This is because increasing P enlarges the effective aperture of the sampled matrix and the amount of data obtained from the covariance matrix. These advantages enable the PM method to resolve two or more closely spaced signals with a limited number of antenna array elements.
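The Monte Carlo procedure of this scenario can be sketched compactly. This is a reduced illustration rather than the full seven-source experiment: a single randomly drawn source per trial, a smaller array and trial count, and our own function name; the estimator is the P-column PM pseudo-spectrum with a 0.5° grid.

```python
import numpy as np

def mc_armse_vs_snr(P=10, M=16, N=100, K=50, snrs_db=(-10, 0, 10, 20), seed=1):
    """ARMSE of the P-column PM estimator at each SNR (single source per trial)."""
    rng = np.random.default_rng(seed)
    grid = np.arange(5.0, 175.5, 0.5)
    m = np.arange(M)
    # Precompute the steering matrix over the scanning grid
    A_grid = np.exp(2j * np.pi * 0.5 * np.outer(m, np.cos(np.deg2rad(grid))))
    results = []
    for snr in snrs_db:
        sig2 = 10.0 ** (-snr / 10.0)
        sq_errs = []
        for _ in range(K):
            theta = rng.uniform(20.0, 160.0)
            a = np.exp(2j * np.pi * 0.5 * m * np.cos(np.deg2rad(theta)))
            s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
            X = np.outer(a, s) + np.sqrt(sig2 / 2) * (
                rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))
            R = X @ X.conj().T / N
            Q = R[:, :P]
            U = Q @ np.linalg.solve(Q.conj().T @ Q, Q.conj().T)
            # a^H (I - U) a evaluated for every grid angle at once
            denom = np.einsum('im,ij,jm->m', A_grid.conj(), np.eye(M) - U, A_grid)
            spec = 1.0 / np.abs(denom)
            sq_errs.append((grid[np.argmax(spec)] - theta) ** 2)
        results.append(float(np.sqrt(np.mean(sq_errs))))
    return results
```

As expected, the returned ARMSE values shrink monotonically in the mid-to-high SNR range, mirroring the trend reported in Figure 4.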

Performance Comparison Based on Different Numbers of Snapshots
The second scenario investigates the number of data measurements (snapshots) required, for various values of P, to obtain a reliable estimate of the signal emitters. Six numbers of sampled columns were used to construct the projection matrix, namely P = (7, 10, 15, 20, 24, 27). For each number of selected columns, six numbers of snapshots, namely N = (30, 50, 75, 100, 200, 300) with SNR = 3 dB, were used to study the impact of the sampled columns on the effectiveness of the projection matrix method. The other simulation parameters were the same as in the previous section, and the Monte Carlo setup of Section 5.2 was used here to ensure a reliable investigation. The ARMSE and the percentage detection probability of the AoAs were computed for all P values and are plotted in Figures 6 and 7. Clearly, increasing P increases both the estimation accuracy and the detection probability of the PM method over the whole range of snapshot numbers. Any increase in the number of sampled covariance columns improves the capability of the PM method to distinguish two or more closely spaced signals with a limited number of antenna elements. This has a positive effect on the localization system's ability to detect the correct number of arrival signals without increasing the aperture size of the array. On the other hand, the computational burden of the projection matrix (U_P) rises slightly.

Comparison with Other AoA Methods
This section compares the performance of the PM method based on the proposed sampled matrix approach with well-known AoA methods such as Capon, MUSIC, Propagator, and the original PM method. The comparison includes the impact of the collected number of snapshots on the estimation precision. Six values of the number of data observations are assumed, namely N = (30, 50, 75, 100, 200, 300). The number of receivers is M = 30, and the distance between adjacent sensors is half a wavelength. The SNR at the array output is set to 10 dB, and the proposed projection matrix is constructed based on P = 20 sampled columns. Two thousand Monte Carlo trials were run, with the arrival angles generated randomly and applied to all the methods to ensure a fair comparison. The estimation resolution is evaluated in terms of the RMSE and the PSD. Figure 8 shows the ARMSE of each method against the number of snapshots used to measure the received data matrix, while Figure 9 shows the number of successfully detected angles. These graphs show that the proposed method outperforms the other methods over the whole tested range. Notably, the proposed method can be applied directly to the covariance matrix without needing to know the number of arriving signals, unlike the MUSIC, Propagator, and original PM methods. As can be seen from Table 4, the computational burden of the proposed method is less than that of the Capon and MUSIC algorithms and higher than that of the Propagator and original PM algorithms. However, the estimation accuracy of the presented method is significantly better than that of the Propagator and original PM techniques over the whole tested range.

Conclusions
The effect of matrix sampling on the covariance matrix has been investigated in this work. It was proven that the way the rows/columns of a matrix are sampled has a substantial influence on the performance of the PM algorithm. Various sampled matrices have been applied to construct the projection matrix in order to estimate the DoAs of the signals. With the proposed methodology, the PM method does not need prior knowledge of the number of arriving signals. Moreover, the estimation accuracy and PSD (AoAs) of the PM method were improved significantly without the requirement to increase the physical size of the array aperture. These improvements were tested over a wide range of SNRs and numbers of collected snapshots. The presented method outperformed the well-known methods and showed a lower computational burden compared to the MUSIC and Capon methods. The results obtained in the simulation also justified our theoretical claims.