Article

EM and SAGE Algorithms for DOA Estimation in the Presence of Unknown Uniform Noise

Ming-Yan Gong 1 and Bin Lyu 2,*
1 School of Information and Electronics, Beijing Institute of Technology, Beijing 100081, China
2 Key Laboratory of Ministry of Education in Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
* Author to whom correspondence should be addressed.
Sensors 2023, 23(10), 4811; https://doi.org/10.3390/s23104811
Submission received: 27 April 2023 / Revised: 9 May 2023 / Accepted: 15 May 2023 / Published: 16 May 2023
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine-Learning-Based Localization)

Abstract

The existing expectation maximization (EM) and space-alternating generalized EM (SAGE) algorithms have only been applied to direction of arrival (DOA) estimation in known noise. In this paper, the two algorithms are designed for DOA estimation in unknown uniform noise, and both the deterministic and random signal models are considered. In addition, a new modified EM (MEM) algorithm applicable to this noise assumption is proposed. These EM-type algorithms are then improved to ensure stability when the powers of the sources are unequal. After this improvement, simulation results illustrate that the EM algorithm converges similarly to the MEM algorithm, the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model, and the SAGE algorithm cannot always outperform the EM and MEM algorithms for the random signal model. Furthermore, simulation results show that, when processing the same snapshots from the random signal model, the SAGE algorithm for the deterministic signal model can require the fewest computations.

1. Introduction

Direction of arrival (DOA) estimation is an important part of array signal processing, and several high-resolution estimation techniques have been developed in the literature [1,2]. In particular, the maximum likelihood (ML) technique plays a critical role since it offers the best accuracy and spatial resolution. However, ML direction finding problems are non-convex, and their solutions are difficult to obtain in closed form.
One computationally efficient method for solving ML estimation problems is the classic expectation maximization (EM) algorithm [3,4], which has been employed for ML direction finding [5,6]. Each iteration of the EM algorithm is composed of an expectation step (E-step) and a maximization step (M-step). At the M-step, however, the EM algorithm updates all of the parameter estimates simultaneously, which causes slow convergence. To speed up the convergence of the EM algorithm, the space-alternating generalized EM (SAGE) algorithm was proposed in [7]. References [8,9] show that the SAGE algorithm does yield faster convergence in DOA estimation.
The existing EM and SAGE algorithms are usually derived under known noise [5,6,8,9], i.e., noise whose distribution contains no unknown parameters, which may be unrealistic in certain applications. In fact, many seminal works in ML direction finding consider the so-called unknown uniform noise model [1,2,10,11,12], in which the noise covariance matrix can be expressed as τI_K, where τ is the only unknown noise parameter and I_K is the K × K identity matrix. Under this noise assumption, a computationally attractive alternating projection algorithm for computing the deterministic ML based DOA estimator is presented in [10]. The authors in [11] investigate the statistical performance of this ML estimator and derive the Cramer–Rao lower bound. Moreover, some statistical properties of both the deterministic and random ML based DOA estimators under unknown uniform noise are compared in [12]. In addition to uniform noise, non-uniform noise has also attracted increasing attention [13,14,15,16]. Non-uniform noise has an arbitrary diagonal covariance matrix and, thus, complicates DOA estimation. For efficiently computing both the deterministic and random ML based DOA estimators in unknown non-uniform noise, some feasible algorithms have been proposed in [17,18,19,20].
In this paper, we apply and design the EM and SAGE algorithms for DOA estimation in unknown uniform noise. Theoretical analyses indicate that for the deterministic signal model, τ has little effect on the two algorithms. However, when τ is unknown, the problem in the M-step of the EM algorithm for the random signal model can no longer be easily decomposed into parallel subproblems. To proceed, we divide the M-step into two conditional M-steps (CM-steps) based on the expectation CM (ECM) algorithm [21]. In addition, we propose a new modified EM (MEM) algorithm applicable to the unknown uniform noise assumption. Note that although the EM algorithm in [22] is similar to the MEM algorithm, it is incorrectly derived. In brief, the proposed EM-type algorithms only need low-dimensional numerical searches at each iteration and are easy to perform. However, they require accurate initial points, whose generation is generally a computationally expensive task.
Existing simulations using EM-type algorithms always adopt sources of equal power [5,6,8,9,22]. We find, however, that when the powers of the sources are unequal, the DOA estimates of multiple sources obtained by these EM-type algorithms tend to collapse onto the true DOA of the source with the largest power. To this end, we improve the proposed EM-type algorithms. After this improvement, simulation results illustrate that (1) the EM algorithm converges similarly to the MEM algorithm, (2) the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model, i.e., it converges faster and avoids convergence to an undesirable stationary point of the log-likelihood function (LLF) more efficiently, and (3) the SAGE algorithm cannot always outperform the EM and MEM algorithms for the random signal model.
The proposed EM-type algorithms for the deterministic signal model can process snapshots from the random signal model, so we, via simulation, compare these algorithms for both signal models. Simulations show that, under the same snapshots, initial DOA estimates, and stopping criterion, the SAGE algorithm for the deterministic signal model can require the fewest iterations and computations.
The contributions of this paper can be enumerated as follows:
  • We apply and design the EM and SAGE algorithms for DOA estimation in unknown uniform noise. In particular, we derive the SAGE algorithm for random ML direction finding, which is not discussed in [7,8,22].
  • We propose a new MEM algorithm applicable to the unknown uniform noise assumption.
  • We improve these EM-type algorithms to ensure stability when the powers of the sources are unequal.
  • Via simulation, we show that the EM algorithm converges similarly to the MEM algorithm and that the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model. However, the SAGE algorithm cannot always outperform the EM and MEM algorithms for the random signal model.
  • Via simulation, we show that when processing the same snapshots from the random signal model, the SAGE algorithm for the deterministic signal model can require the fewest iterations and computations.
The rest of this paper is outlined as follows: in Section 2, we formulate both the deterministic and random ML direction finding problems in unknown uniform noise. In Section 3, Section 4 and Section 5, we design the EM, MEM, and SAGE algorithms, respectively. We analyze some convergence properties of these EM-type algorithms in Section 6 and provide simulation results to compare the convergence of these EM-type algorithms in Section 7. Finally, we conclude this paper in Section 8.

2. Signal Model and Problem Statement

An array of K sensors is assumed to receive the plane waves emitted from G narrowband sources, which share the same known center wavelength χ. We use the Cartesian coordinate ζ_k = [x_k, y_k, z_k]^T and the spherical coordinate (1, μ_g, η_g) to locate the kth sensor and the direction of the gth source, respectively. Here, [·]^T denotes transpose, and μ_g and η_g denote the elevation and azimuth angles of the gth source, respectively. For convenience, we transform (1, μ_g, η_g) into the corresponding Cartesian coordinate γ_g = [sin(μ_g)cos(η_g), sin(μ_g)sin(η_g), cos(μ_g)]^T. Let the origin be the reference point such that the signal received at the array is written as

$$\mathbf{w}(t)=\sum_{g=1}^{G}\mathbf{b}(\xi_g)m_g(t)+\mathbf{v}(t)=\mathbf{B}(\xi)\mathbf{m}(t)+\mathbf{v}(t),\tag{1}$$

where b(ξ_g) = [e^{𝚥φ_{1,g}}, …, e^{𝚥φ_{K,g}}]^T with DOA ξ_g = (μ_g, η_g) ∈ Ω, φ_{k,g} = (2π/χ)ζ_k^T γ_g, and 𝚥 = √(−1); m_g(t) is the signal of the gth source; and v(t) is complex Gaussian noise of zero mean and covariance τI_K, i.e., v(t) ~ CN(0, τI_K) with 0 = [0, …, 0]^T. In (1), B(ξ) = [b(ξ_1), …, b(ξ_G)], ξ = (ξ_1, …, ξ_G) ∈ Ω̄ with Ω̄ = Ω^G, and m(t) = [m_1(t), …, m_G(t)]^T.
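To make the signal model concrete, the following Python sketch draws L snapshots from (1) for a half-wavelength uniform linear array with μ_g = 90°, the geometry used later in Section 7. The helper names steering_vector and simulate_snapshots are ours, not from the paper.

```python
import numpy as np

def steering_vector(eta, K):
    """b(xi) for a half-wavelength ULA with mu = 90 deg: the k-th entry is
    exp(j * pi * (k - 1) * cos(eta)), from phi_{k,g} = (2*pi/chi) zeta_k^T gamma_g."""
    return np.exp(1j * np.pi * np.arange(K) * np.cos(eta))

def simulate_snapshots(etas, powers, tau, K, L, rng):
    """Draw L snapshots w(t) = B(xi) m(t) + v(t) with circular complex Gaussian
    signals m_g(t) ~ CN(0, rho_g) and noise v(t) ~ CN(0, tau * I_K)."""
    B = np.column_stack([steering_vector(e, K) for e in etas])
    G = len(etas)
    p = np.asarray(powers, dtype=float)[:, None]
    # circular complex Gaussian: real and imaginary parts each carry half the variance
    M = np.sqrt(p / 2) * (rng.standard_normal((G, L)) + 1j * rng.standard_normal((G, L)))
    V = np.sqrt(tau / 2) * (rng.standard_normal((K, L)) + 1j * rng.standard_normal((K, L)))
    return B @ M + V
```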
EM-type algorithms need to define the unavailable complete data. According to the classic EM paradigm for superimposed signals [5,6], we construct L independent snapshots, the incomplete data of the EM algorithm, by 
$$\mathbf{w}(t)=\sum_{g=1}^{G}\left[\mathbf{b}(\xi_g)m_g(t)+\mathbf{v}_g(t)\right]=\sum_{g=1}^{G}\mathbf{h}_g(t),\quad t=1,2,\ldots,L,\tag{2}$$
where the h_g(t)'s are the complete data. Moreover, the v_g(t)'s are mutually uncorrelated and v_g(t) ~ CN(0, β_g τI_K), where β = [β_1, …, β_G]^T > 0 and 1^T β = 1 with 1 = [1, …, 1]^T. Since the incomplete- and complete-data LLFs require the distributions of the m_g(t)'s, we adopt the following two statistical models separately.

2.1. Deterministic Signal Model

We let the m_g(t)'s be deterministic and unknown [5,6,10,11], which leads to h_g(t) ~ CN(b(ξ_g)m_g(t), β_g τI_K) and w(t) ~ CN(B(ξ)m(t), τI_K). Then, the incomplete- and complete-data LLFs are formulated as

$$J(\Psi,\tau)=\sum_{t=1}^{L}\log p\left(\mathbf{w}(t);\xi,\mathbf{m}(t),\tau\right)=-LK\log(\pi\tau)-\frac{1}{\tau}\sum_{t=1}^{L}\left\|\mathbf{w}(t)-\mathbf{B}(\xi)\mathbf{m}(t)\right\|^{2},\tag{3}$$

$$Z(\Psi,\tau)=\sum_{t=1}^{L}\sum_{g=1}^{G}\log p\left(\mathbf{h}_g(t);\xi_g,m_g(t),\tau\right)=-LKG\log(\pi\tau)-LK\sum_{g=1}^{G}\log(\beta_g)-\frac{1}{\tau}\sum_{t=1}^{L}\sum_{g=1}^{G}\frac{1}{\beta_g}\left\|\mathbf{h}_g(t)-\mathbf{b}(\xi_g)m_g(t)\right\|^{2},\tag{4}$$

where ‖·‖ denotes the Euclidean norm and M = [m(1), …, m(L)]. Note that Ψ = (ξ, M) collects the signal parameters while τ is the only noise parameter. Finally, the ML estimation problem is

$$\max_{\xi\in\bar{\Omega},\,\mathbf{M},\,\tau>0} J(\Psi,\tau).\tag{5}$$

2.2. Random Signal Model

We assume m_g(t) ~ CN(0, ρ_g), where ρ_g is the power of the gth source. For simplicity, all of the m_g(t)'s and v_g(t)'s are assumed to be mutually uncorrelated [5,6]. Next, we have h_g(t) ~ CN(0, N_g) with N_g = ρ_g b(ξ_g)b^H(ξ_g) + β_g τI_K, where [·]^H denotes conjugate transpose, and w(t) ~ CN(0, N_w) with N_w = Σ_{g=1}^{G} N_g. The incomplete- and complete-data LLFs are formulated as

$$J(\Psi,\tau)=\sum_{t=1}^{L}\log p\left(\mathbf{w}(t);\xi,\rho,\tau\right)=-L\left[K\log(\pi)+\log\mathrm{Det}(\mathbf{N}_w)+\mathrm{Tr}\left(\mathbf{N}_w^{-1}\hat{\mathbf{P}}_w\right)\right],\tag{6}$$

$$Z(\Psi,\tau)=\sum_{t=1}^{L}\sum_{g=1}^{G}\log p\left(\mathbf{h}_g(t);\xi_g,\rho_g,\tau\right)=-L\sum_{g=1}^{G}\left[K\log(\pi)+\log\mathrm{Det}(\mathbf{N}_g)+\mathrm{Tr}\left(\mathbf{N}_g^{-1}\hat{\mathbf{P}}_g\right)\right],\tag{7}$$

where [·]^{−1}, Det(·), and Tr(·) denote inversion, determinant, and trace, respectively. Moreover, Ψ = (ξ, ρ), P̂_w = (1/L)Σ_{t=1}^{L} w(t)w^H(t), P̂_g = (1/L)Σ_{t=1}^{L} h_g(t)h_g^H(t), and ρ = [ρ_1, …, ρ_G]^T. Finally, the ML estimation problem is

$$\max_{\xi\in\bar{\Omega},\,\rho\geq\mathbf{0},\,\tau>0} J(\Psi,\tau).\tag{8}$$

3. EM Algorithm

In this section, we design and derive the EM algorithm for solving problems (5) and (8). The E- and M-steps at the rth iteration are introduced below. Let [ · ] ( r ) , E { · } , and  D { · } denote an iterative value at the rth iteration, expectation, and covariance, respectively. [ · ] ( 0 ) is an initial estimate.

3.1. Deterministic Signal Model

3.1.1. E-Step

Calculate the conditional expectation of the complete-data LLF in (4)
$$\mathrm{E}\left\{Z(\Delta)\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}=C-L\left\{KG\log(\tau)+K\sum_{g=1}^{G}\log(\beta_g)+\frac{1}{\tau}\left[u^{(r)}+\frac{1}{L}\sum_{t=1}^{L}\sum_{g=1}^{G}\frac{1}{\beta_g}\left\|\mathbf{h}_g^{(r)}(t)-\mathbf{b}(\xi_g)m_g(t)\right\|^{2}\right]\right\},\tag{9}$$

where Δ = (Ψ, τ) = (ξ, M, τ), W = [w(1), …, w(L)], and C = −LKG log(π). In (9), the conditional distribution of h_g(t) can be derived from [23], and

$$\mathbf{h}_g^{(r)}(t)=\mathrm{E}\left\{\mathbf{h}_g(t)\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}=\mathbf{b}(\xi_g^{(r-1)})m_g^{(r-1)}(t)+\beta_g\left[\mathbf{w}(t)-\mathbf{B}(\xi^{(r-1)})\mathbf{m}^{(r-1)}(t)\right],\tag{10}$$

$$u^{(r)}=\frac{1}{L}\sum_{t=1}^{L}\sum_{g=1}^{G}\frac{1}{\beta_g}\mathrm{Tr}\left(\mathrm{D}\left\{\mathbf{h}_g(t)\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}\right)=\frac{1}{L}\sum_{t=1}^{L}\sum_{g=1}^{G}\frac{1}{\beta_g}\mathrm{Tr}\left(\beta_g(1-\beta_g)\tau^{(r-1)}\mathbf{I}_K\right)=K(G-1)\tau^{(r-1)}.\tag{11}$$

3.1.2. M-Step

Update the estimates of Ψ and τ by solving
$$\min_{\xi\in\bar{\Omega},\,\mathbf{M},\,\tau>0} KG\log(\tau)+\frac{1}{\tau}\left[u^{(r)}+\frac{1}{L}\sum_{t=1}^{L}\sum_{g=1}^{G}\frac{1}{\beta_g}\left\|\mathbf{h}_g^{(r)}(t)-\mathbf{b}(\xi_g)m_g(t)\right\|^{2}\right].\tag{12}$$

Ψ^{(r)} = (ξ^{(r)}, M^{(r)}) and τ^{(r)} are obtained by [6]

$$\xi_g^{(r)}=\arg\max_{\xi_g\in\Omega}\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g,\tag{13}$$

$$m_g^{(r)}(t)=\mathbf{b}^{H}(\xi_g^{(r)})\mathbf{h}_g^{(r)}(t)/K,\;\forall g,t,\tag{14}$$

$$\tau^{(r)}=\left(1-\frac{1}{G}\right)\tau^{(r-1)}+\frac{1}{KG}\sum_{g=1}^{G}d_g^{(r)}/\beta_g,\tag{15}$$

where Γ_g = b(ξ_g)b^H(ξ_g)/K, P̂_g^{(r)} = (1/L)Σ_{t=1}^{L} h_g^{(r)}(t)[h_g^{(r)}(t)]^H, and d_g^{(r)} = Tr((I_K − Γ_g^{(r)})P̂_g^{(r)}) ≥ 0. In (15), τ^{(r)} > 0 if τ^{(r−1)} > 0.
Remark 1.
Since the h_g^{(r)}(t)'s in (10), the ξ_g^{(r)}'s in (13), and the m_g^{(r)}(t)'s in (14) are unrelated to τ^{(r−1)}, we can omit (15) and treat τ as a nuisance parameter.
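For illustration, a minimal sketch of one EM iteration for the deterministic model is given below: the E-step (10) followed by the M-step (13) and (14), with the noise update (15) omitted per Remark 1. It reuses the hypothetical steering_vector helper from Section 2, assumes μ_g = 90° is known so that (13) reduces to a 1-D azimuth search, and replaces that search with a simple grid scan; the paper itself performs the search with Algorithm 1 of Section 6.

```python
import numpy as np

def em_step_deterministic(W, etas, M_est, beta, grid):
    """One EM iteration for the deterministic model: E-step (10), then
    M-step (13)-(14). W is K x L, etas holds the current azimuths (rad),
    M_est is the G x L signal estimate, beta the weights summing to 1, and
    grid the candidate azimuths for the 1-D search standing in for (13)."""
    K, L = W.shape
    G = len(etas)
    B = np.column_stack([steering_vector(e, K) for e in etas])
    resid = W - B @ M_est                         # w(t) - B(xi^(r-1)) m^(r-1)(t)
    A = np.column_stack([steering_vector(e, K) for e in grid])
    new_etas = np.empty(G)
    new_M = np.empty((G, L), dtype=complex)
    for g in range(G):
        H = np.outer(B[:, g], M_est[g]) + beta[g] * resid     # E-step (10)
        P = H @ H.conj().T / L                                # \hat P_g^{(r)}
        # K * Tr(Gamma_g P) = b^H P b, evaluated on the whole grid at once
        score = np.real(np.einsum('ki,kl,li->i', A.conj(), P, A))
        new_etas[g] = grid[np.argmax(score)]                  # (13)
        b = steering_vector(new_etas[g], K)
        new_M[g] = b.conj() @ H / K                           # (14)
    return new_etas, new_M
```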

3.2. Random Signal Model

3.2.1. E-Step

Calculate the conditional expectation of the complete-data LLF in (7)
$$\mathrm{E}\left\{Z(\Delta)\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}=C-L\sum_{g=1}^{G}\left[\log\mathrm{Det}(\mathbf{N}_g)+\mathrm{Tr}\left(\mathbf{N}_g^{-1}\hat{\mathbf{P}}_g^{(r)}\right)\right],\tag{16}$$

where Δ = (Ψ, τ) = (ξ, ρ, τ) and

$$\hat{\mathbf{P}}_g^{(r)}=\mathrm{E}\left\{\hat{\mathbf{P}}_g\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}=\mathbf{N}_g^{(r-1)}-\mathbf{N}_g^{(r-1)}[\mathbf{N}_w^{(r-1)}]^{-1}\mathbf{N}_g^{(r-1)}+\mathbf{N}_g^{(r-1)}[\mathbf{N}_w^{(r-1)}]^{-1}\hat{\mathbf{P}}_w[\mathbf{N}_w^{(r-1)}]^{-1}\mathbf{N}_g^{(r-1)}.\tag{17}$$

3.2.2. M-Step

Update the estimates of Ψ and τ by solving
$$\min_{\xi\in\bar{\Omega},\,\rho\geq\mathbf{0},\,\tau>0}\sum_{g=1}^{G}\left[\log\mathrm{Det}(\mathbf{N}_g)+\mathrm{Tr}\left(\mathbf{N}_g^{-1}\hat{\mathbf{P}}_g^{(r)}\right)\right],\tag{18}$$
which is difficult to decompose into parallel subproblems due to τ. To obtain Ψ^{(r)} and τ^{(r)} easily, we rewrite (18) as
$$\min_{\xi\in\bar{\Omega},\,\sigma\geq\mathbf{0},\,\tau>0} KG\log(\tau)+\sum_{g=1}^{G}\left[\log\mathrm{Det}(\mathbf{Q}_g)+\frac{1}{\tau}\mathrm{Tr}\left(\mathbf{Q}_g^{-1}\hat{\mathbf{P}}_g^{(r)}\right)\right],\tag{19}$$
where N_g = τQ_g with Q_g = σ_g b(ξ_g)b^H(ξ_g) + β_g I_K, and σ = [σ_1, …, σ_G]^T with σ_g = ρ_g/τ. We now divide the M-step into the following two CM-steps based on the ECM algorithm [21]; strictly speaking, the algorithm thereby becomes an ECM algorithm, but for convenience we still call it the EM algorithm.
  • First CM-step: Estimate Ψ but hold τ = τ ( r 1 ) fixed. Then, problem (19) can be decomposed into the G parallel subproblems
    $$\min_{\xi_g\in\Omega,\,\sigma_g\geq 0}\log\mathrm{Det}(\mathbf{Q}_g)+\frac{1}{\tau^{(r-1)}}\mathrm{Tr}\left(\mathbf{Q}_g^{-1}\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g.\tag{20}$$
    Ψ ( r ) = ( ξ ( r ) , ρ ( r ) ) is obtained by [6]
    $$\xi_g^{(r)}=\arg\max_{\xi_g\in\Omega}\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g,\tag{21}$$

    $$\rho_g^{(r)}=\sigma_g^{(r)}\tau^{(r-1)}=\max\left\{\left(e_g^{(r)}-\beta_g\tau^{(r-1)}\right)/K,\,0\right\},\;\forall g,\tag{22}$$

    where e_g^{(r)} = Tr(Γ_g^{(r)}P̂_g^{(r)}) and ξ_g^{(r)} is indeterminate if ρ_g^{(r)} = 0.
  • Second CM-step: Estimate τ but hold Ψ = Ψ ( r ) fixed. Then, problem (19) is simplified to
    $$\min_{\tau>0} KG\log(\tau)+\frac{1}{\tau}\sum_{g=1}^{G}\mathrm{Tr}\left([\mathbf{Q}_g^{(r)}]^{-1}\hat{\mathbf{P}}_g^{(r)}\right).\tag{23}$$
    τ ( r ) is obtained by
    $$\tau^{(r)}=\frac{1}{KG}\sum_{g=1}^{G}\mathrm{Tr}\left([\mathbf{Q}_g^{(r)}]^{-1}\hat{\mathbf{P}}_g^{(r)}\right)=\frac{1}{K}\tau^{(r-1)}+\frac{1}{KG}\sum_{g=1}^{G}d_g^{(r)}/\beta_g,\tag{24}$$

    where d_g^{(r)} = Tr((I_K − Γ_g^{(r)})P̂_g^{(r)}) ≥ 0 and τ^{(r)} > 0 if τ^{(r−1)} > 0. A sketch of one full iteration is given after this list.
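The sketch below assembles one full EM (ECM) iteration for the random model under the same assumptions as the earlier sketches: the E-step (17), the first CM-step (21) and (22), and the second CM-step (24).

```python
import numpy as np

def ecm_step_random(P_w, etas, rhos, tau, beta, grid):
    """One EM (ECM) iteration for the random model: E-step (17), first
    CM-step (21)-(22), second CM-step (24). P_w is the K x K sample
    covariance; steering_vector is the hypothetical helper from Section 2."""
    K = P_w.shape[0]
    G = len(etas)
    Bs = [steering_vector(e, K) for e in etas]
    Ns = [r * np.outer(b, b.conj()) + bg * tau * np.eye(K)
          for r, b, bg in zip(rhos, Bs, beta)]                # N_g^(r-1)
    Nw_inv = np.linalg.inv(sum(Ns))                           # [N_w^(r-1)]^{-1}
    A = np.column_stack([steering_vector(e, K) for e in grid])
    new_etas, new_rhos, d_sum = np.empty(G), np.empty(G), 0.0
    for g in range(G):
        T = Ns[g] @ Nw_inv
        P_g = Ns[g] - T @ Ns[g] + T @ P_w @ T.conj().T        # E-step (17)
        score = np.real(np.einsum('ki,kl,li->i', A.conj(), P_g, A))
        new_etas[g] = grid[np.argmax(score)]                  # (21)
        e_g = score.max() / K                                 # Tr(Gamma_g P_g)
        new_rhos[g] = max((e_g - beta[g] * tau) / K, 0.0)     # (22)
        d_sum += (np.real(np.trace(P_g)) - e_g) / beta[g]     # d_g / beta_g
    new_tau = tau / K + d_sum / (K * G)                       # (24)
    return new_etas, new_rhos, new_tau
```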

4. MEM Algorithm

In the previous section, β is fixed and known. In this section, we regard β as a parameter to be estimated and, thus, propose an MEM algorithm applicable to the unknown uniform noise assumption.
To estimate τ and β easily, we introduce τ_g = β_g τ as the noise variance associated with the gth source and have

$$\mathbf{v}_g(t)\sim\mathcal{CN}(\mathbf{0},\tau_g\mathbf{I}_K).\tag{25}$$

Clearly, τ = Σ_{g=1}^{G} τ_g and β_g = τ_g/τ. The E- and M-steps at the rth iteration are introduced below. Let τ = [τ_1, …, τ_G]^T.

4.1. Deterministic Signal Model

Based on (25), the complete-data LLF in (4) is rewritten as

$$Z(\Delta)=C-LK\sum_{g=1}^{G}\log(\tau_g)-\sum_{t=1}^{L}\sum_{g=1}^{G}\frac{1}{\tau_g}\left\|\mathbf{h}_g(t)-\mathbf{b}(\xi_g)m_g(t)\right\|^{2},\tag{26}$$

where Δ = (Ψ, τ) = (ξ, M, τ) and h_g(t) ~ CN(b(ξ_g)m_g(t), τ_g I_K).

4.1.1. E-Step

Calculate the conditional expectation of the complete-data LLF in (26)
$$\mathrm{E}\left\{Z(\Delta)\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}=C-L\sum_{g=1}^{G}\left\{K\log(\tau_g)+\frac{1}{\tau_g}\left[u_g^{(r)}+\frac{1}{L}\sum_{t=1}^{L}\left\|\mathbf{h}_g^{(r)}(t)-\mathbf{b}(\xi_g)m_g(t)\right\|^{2}\right]\right\},\tag{27}$$

where

$$\mathbf{h}_g^{(r)}(t)=\mathrm{E}\left\{\mathbf{h}_g(t)\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}=\mathbf{b}(\xi_g^{(r-1)})m_g^{(r-1)}(t)+\frac{\tau_g^{(r-1)}}{\tau^{(r-1)}}\left[\mathbf{w}(t)-\mathbf{B}(\xi^{(r-1)})\mathbf{m}^{(r-1)}(t)\right],\tag{28}$$

$$u_g^{(r)}=\frac{1}{L}\sum_{t=1}^{L}\mathrm{Tr}\left(\mathrm{D}\left\{\mathbf{h}_g(t)\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}\right)=K\tau_g^{(r-1)}\left(1-\tau_g^{(r-1)}/\tau^{(r-1)}\right).\tag{29}$$

4.1.2. M-Step

Update the estimates of Ψ and τ  by solving the G parallel subproblems
$$\min_{\xi_g\in\Omega,\,\mathbf{m}_g,\,\tau_g>0} K\log(\tau_g)+\frac{1}{\tau_g}\left[u_g^{(r)}+\frac{1}{L}\sum_{t=1}^{L}\left\|\mathbf{h}_g^{(r)}(t)-\mathbf{b}(\xi_g)m_g(t)\right\|^{2}\right],\;\forall g,\tag{30}$$

where m_g = [m_g(1), …, m_g(L)]. Ψ^{(r)} = (ξ^{(r)}, M^{(r)}) and τ^{(r)} are obtained by

$$\xi_g^{(r)}=\arg\max_{\xi_g\in\Omega}\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g,\tag{31}$$

$$m_g^{(r)}(t)=\mathbf{b}^{H}(\xi_g^{(r)})\mathbf{h}_g^{(r)}(t)/K,\;\forall g,t,\tag{32}$$

$$\tau_g^{(r)}=\tau_g^{(r-1)}\left(1-\tau_g^{(r-1)}/\tau^{(r-1)}\right)+d_g^{(r)}/K,\;\forall g,\tag{33}$$

where P̂_g^{(r)} = (1/L)Σ_{t=1}^{L} h_g^{(r)}(t)[h_g^{(r)}(t)]^H and d_g^{(r)} = Tr((I_K − Γ_g^{(r)})P̂_g^{(r)}) ≥ 0. In (33), if τ^{(r−1)} > 0, we have 1 − τ_g^{(r−1)}/τ^{(r−1)} > 0, ∀g, and then τ_g^{(r)} > 0, ∀g, i.e., τ^{(r)} > 0.
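A corresponding sketch of one MEM iteration for the deterministic model follows, implementing the E-step (28) and the M-step (31)-(33); the per-source variances τ_g are now updated alongside the signal parameters. It relies on the same hypothetical helpers and grid-scan simplification as the earlier sketches.

```python
import numpy as np

def mem_step_deterministic(W, etas, M_est, tau_g, grid):
    """One MEM iteration for the deterministic model: E-step (28), then
    M-step (31)-(33). tau_g is the length-G array of per-source noise
    variances tau_g = beta_g * tau."""
    K, L = W.shape
    G = len(etas)
    tau_g = np.asarray(tau_g, dtype=float)
    tau = tau_g.sum()                             # tau = sum_g tau_g
    B = np.column_stack([steering_vector(e, K) for e in etas])
    resid = W - B @ M_est
    A = np.column_stack([steering_vector(e, K) for e in grid])
    new_etas = np.empty(G)
    new_M = np.empty((G, L), dtype=complex)
    new_tau_g = np.empty(G)
    for g in range(G):
        w_g = tau_g[g] / tau                      # tau_g^(r-1) / tau^(r-1)
        H = np.outer(B[:, g], M_est[g]) + w_g * resid          # E-step (28)
        P = H @ H.conj().T / L                                 # \hat P_g^{(r)}
        score = np.real(np.einsum('ki,kl,li->i', A.conj(), P, A))
        new_etas[g] = grid[np.argmax(score)]                   # (31)
        b = steering_vector(new_etas[g], K)
        new_M[g] = b.conj() @ H / K                            # (32)
        d_g = np.real(np.trace(P)) - score.max() / K
        new_tau_g[g] = tau_g[g] * (1 - w_g) + d_g / K          # (33)
    return new_etas, new_M, new_tau_g
```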

4.2. Random Signal Model

Based on (25), the complete-data LLF in (7) is rewritten as

$$Z(\Delta)=C-L\sum_{g=1}^{G}\left[\log\mathrm{Det}(\mathbf{N}_g)+\mathrm{Tr}\left(\mathbf{N}_g^{-1}\hat{\mathbf{P}}_g\right)\right],\tag{34}$$

where Δ = (Ψ, τ) = (ξ, ρ, τ) and N_g = ρ_g b(ξ_g)b^H(ξ_g) + τ_g I_K.

4.2.1. E-Step

Calculate the conditional expectation of the complete-data LLF in (34)
$$\mathrm{E}\left\{Z(\Delta)\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}=C-L\sum_{g=1}^{G}\left[\log\mathrm{Det}(\mathbf{N}_g)+\mathrm{Tr}\left(\mathbf{N}_g^{-1}\hat{\mathbf{P}}_g^{(r)}\right)\right],\tag{35}$$

where

$$\hat{\mathbf{P}}_g^{(r)}=\mathrm{E}\left\{\hat{\mathbf{P}}_g\,\middle|\,\mathbf{W};\Delta^{(r-1)}\right\}=\mathbf{N}_g^{(r-1)}-\mathbf{N}_g^{(r-1)}[\mathbf{N}_w^{(r-1)}]^{-1}\mathbf{N}_g^{(r-1)}+\mathbf{N}_g^{(r-1)}[\mathbf{N}_w^{(r-1)}]^{-1}\hat{\mathbf{P}}_w[\mathbf{N}_w^{(r-1)}]^{-1}\mathbf{N}_g^{(r-1)}.\tag{36}$$

4.2.2. M-Step

Update the estimates of Ψ and τ by solving the G parallel subproblems
$$\min_{\xi_g\in\Omega,\,\sigma_g\geq 0,\,\tau_g>0} K\log(\tau_g)+\log\mathrm{Det}(\mathbf{Q}_g)+\frac{1}{\tau_g}\mathrm{Tr}\left(\mathbf{Q}_g^{-1}\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g,\tag{37}$$

where N_g = τ_g Q_g with Q_g = σ_g b(ξ_g)b^H(ξ_g) + I_K and σ_g = ρ_g/τ_g. Since Det(Q_g) = Kσ_g + 1 and Q_g^{-1} = I_K − [Kσ_g/(Kσ_g + 1)]Γ_g, subproblems (37) can be rewritten as

$$\min_{\xi_g\in\Omega,\,\sigma_g\geq 0,\,\tau_g>0} K\log(\tau_g)+\log(K\sigma_g+1)+\frac{1}{\tau_g}\mathrm{Tr}\left(\hat{\mathbf{P}}_g^{(r)}\right)-\frac{K\sigma_g}{\tau_g(K\sigma_g+1)}\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g.\tag{38}$$

To proceed, we first eliminate σ = [σ_1, …, σ_G]^T in (38) [24,25]. Thus, once ξ^{(r)} and τ^{(r)} are obtained, σ^{(r)} and ρ^{(r)} follow from [6]

$$\sigma_g^{(r)}=\max\left\{\left(e_g^{(r)}/\tau_g^{(r)}-1\right)/K,\,0\right\},\;\forall g,\tag{39}$$

$$\rho_g^{(r)}=\sigma_g^{(r)}\tau_g^{(r)}=\max\left\{\left(e_g^{(r)}-\tau_g^{(r)}\right)/K,\,0\right\},\;\forall g,\tag{40}$$

where e_g^{(r)} = Tr(Γ_g^{(r)}P̂_g^{(r)}). Note that if σ_g^{(r)} = 0, ξ_g^{(r)} is indeterminate and τ_g^{(r)} = Tr(P̂_g^{(r)})/K by (38). To obtain ξ^{(r)} and τ^{(r)}, we assume σ^{(r)} > 0. After eliminating σ, subproblems (38) simplify to

$$\min_{\xi_g\in\Omega,\,\tau_g>0} (K-1)\log(\tau_g)+\log\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right)+\frac{1}{\tau_g}\mathrm{Tr}\left((\mathbf{I}_K-\mathbf{\Gamma}_g)\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g.\tag{41}$$

Next, we eliminate τ in (41). Thus, once ξ^{(r)} is obtained, τ^{(r)} follows from τ_g^{(r)} = d_g^{(r)}/(K − 1), ∀g, where d_g^{(r)} = Tr((I_K − Γ_g^{(r)})P̂_g^{(r)}). After eliminating τ, subproblems (41) simplify to

$$\min_{\xi_g\in\delta_g^{(r)}} (K-1)\log\left[\mathrm{Tr}\left(\hat{\mathbf{P}}_g^{(r)}\right)-\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right)\right]+\log\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g,\tag{42}$$

where δ_g^{(r)} = {ξ_g ∈ Ω : Tr(Γ_g P̂_g^{(r)}) > Tr(P̂_g^{(r)})/K}, due to the fact that when ξ_g^{(r)} ∈ δ_g^{(r)}, e_g^{(r)} = Tr(Γ_g^{(r)}P̂_g^{(r)}) > Tr(P̂_g^{(r)})/K and

$$\sigma_g^{(r)}=\max\left\{\left(e_g^{(r)}/\tau_g^{(r)}-1\right)/K,\,0\right\}=\left(e_g^{(r)}-\mathrm{Tr}(\hat{\mathbf{P}}_g^{(r)})/K\right)\Big/d_g^{(r)}>0,\;\forall g.\tag{43}$$

Since (K − 1)log[Tr(P̂_g^{(r)}) − x] + log(x) is a monotonically decreasing function of x for x ≥ Tr(P̂_g^{(r)})/K, subproblems (42) are equivalent to

$$\max_{\xi_g\in\delta_g^{(r)}}\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g.\tag{44}$$

Based on the above analysis, Ψ^{(r)} = (ξ^{(r)}, ρ^{(r)}) and τ^{(r)} are obtained by

$$\xi_g^{(r)}=\arg\max_{\xi_g\in\Omega}\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g,\tag{45}$$

$$\tau_g^{(r)}=\begin{cases} d_g^{(r)}/(K-1), & e_g^{(r)}>\mathrm{Tr}(\hat{\mathbf{P}}_g^{(r)})/K,\\[2pt] \mathrm{Tr}(\hat{\mathbf{P}}_g^{(r)})/K, & e_g^{(r)}\leq\mathrm{Tr}(\hat{\mathbf{P}}_g^{(r)})/K,\end{cases}\;\forall g,\tag{46}$$

$$\rho_g^{(r)}=\max\left\{\left(e_g^{(r)}-\mathrm{Tr}(\hat{\mathbf{P}}_g^{(r)})/K\right)\Big/(K-1),\,0\right\},\;\forall g,\tag{47}$$

where a proof by contradiction verifies that τ^{(r)} > 0 if τ^{(r−1)} > 0.
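Given the conditional covariance P̂_g^{(r)} from (36), the per-source MEM updates (45)-(47) are closed-form. A sketch under the same naming assumptions as before:

```python
import numpy as np

def mem_updates_random(P_g, grid):
    """Closed-form MEM updates (45)-(47) for one source in the random model,
    given the conditional covariance \\hat P_g^{(r)} from (36)."""
    K = P_g.shape[0]
    A = np.column_stack([steering_vector(e, K) for e in grid])
    score = np.real(np.einsum('ki,kl,li->i', A.conj(), P_g, A))
    eta_g = grid[np.argmax(score)]                # (45)
    e_g = score.max() / K                         # e_g = Tr(Gamma_g P_g)
    tr_g = np.real(np.trace(P_g))
    if e_g > tr_g / K:
        tau_g = (tr_g - e_g) / (K - 1)            # (46), first case: d_g / (K - 1)
    else:
        tau_g = tr_g / K                          # (46), second case
    rho_g = max((e_g - tr_g / K) / (K - 1), 0.0)  # (47)
    return eta_g, rho_g, tau_g
```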

5. SAGE Algorithm

In the SAGE algorithm, each iteration consists of G cycles, and ξ_q^{(r)} is obtained at the qth cycle of the rth iteration. Let [·]^{(r,q)} denote an iterative value at the qth cycle of the rth iteration, with [·]^{(r−1)} = [·]^{(r−1,G)} = [·]^{(r,0)}.
At the qth cycle of the rth iteration, the SAGE algorithm first constructs the complete data by [7,8]
$$\mathbf{h}_g(t)=\begin{cases}\mathbf{b}(\xi_g)m_g(t)+\mathbf{v}(t), & g=q,\\ \mathbf{b}(\xi_g)m_g(t), & g\neq q.\end{cases}\tag{48}$$
Then, the E- and M-steps at the qth cycle of the rth iteration are introduced below.

5.1. Deterministic Signal Model

Based on (48), we have h_q(t) ~ CN(b(ξ_q)m_q(t), τI_K), while the h_g(t)'s with g ≠ q are deterministic. The complete-data LLF is expressed as

$$Z(\xi_q,\mathbf{m}_q,\tau)=\sum_{t=1}^{L}\log p\left(\mathbf{h}_q(t);\xi_q,m_q(t),\tau\right)=-LK\log(\pi\tau)-\frac{1}{\tau}\sum_{t=1}^{L}\left\|\mathbf{h}_q(t)-\mathbf{b}(\xi_q)m_q(t)\right\|^{2}.\tag{49}$$

5.1.1. E-Step

Calculate the conditional expectation of the complete-data LLF in (49)
$$\mathrm{E}\left\{Z(\xi_q,\mathbf{m}_q,\tau)\,\middle|\,\mathbf{W};\Delta^{(r,q-1)}\right\}=-LK\log(\pi\tau)-\frac{1}{\tau}\sum_{t=1}^{L}\left\|\mathbf{h}_q^{(r)}(t)-\mathbf{b}(\xi_q)m_q(t)\right\|^{2},\tag{50}$$

where Δ = (ξ, M, τ), 0_K is the K × K zero matrix, and

$$\mathbf{h}_q^{(r)}(t)=\mathbf{h}_q^{(r,q)}(t)=\mathrm{E}\left\{\mathbf{h}_q(t)\,\middle|\,\mathbf{W};\Delta^{(r,q-1)}\right\}=\mathbf{b}(\xi_q^{(r,q-1)})m_q^{(r,q-1)}(t)+\mathbf{w}(t)-\mathbf{B}(\xi^{(r,q-1)})\mathbf{m}^{(r,q-1)}(t),\tag{51}$$

$$\mathrm{D}\left\{\mathbf{h}_q(t)\,\middle|\,\mathbf{W};\Delta^{(r,q-1)}\right\}=\mathbf{0}_K.\tag{52}$$

5.1.2. M-Step

Update the estimates of ξ q , m q , and  τ by solving
$$\min_{\xi_q\in\Omega,\,\mathbf{m}_q,\,\tau>0} K\log(\tau)+\frac{1}{\tau L}\sum_{t=1}^{L}\left\|\mathbf{h}_q^{(r)}(t)-\mathbf{b}(\xi_q)m_q(t)\right\|^{2}.\tag{53}$$
ξ q ( r ) , m q ( r ) , and  τ ( r , q ) are obtained by
$$\xi_q^{(r)}=\xi_q^{(r,q)}=\arg\max_{\xi_q\in\Omega}\mathrm{Tr}\left(\mathbf{\Gamma}_q\hat{\mathbf{P}}_q^{(r)}\right),\tag{54}$$

$$m_q^{(r)}(t)=m_q^{(r,q)}(t)=\mathbf{b}^{H}(\xi_q^{(r)})\mathbf{h}_q^{(r)}(t)/K,\;\forall t,\tag{55}$$

$$\tau^{(r,q)}=d_q^{(r)}/K,\tag{56}$$

where P̂_q^{(r)} = (1/L)Σ_{t=1}^{L} h_q^{(r)}(t)[h_q^{(r)}(t)]^H and d_q^{(r)} = Tr((I_K − Γ_q^{(r)})P̂_q^{(r)}).
Moreover, the other parameter estimates are not updated at this cycle and their iterative values are
$$\xi_g^{(r,q)}=\xi_g^{(r,q-1)},\;\forall g\neq q,\tag{57}$$

$$m_g^{(r,q)}(t)=m_g^{(r,q-1)}(t),\;\forall g\neq q,\,\forall t.\tag{58}$$
Remark 2.
Since the h_q^{(r)}(t)'s in (51), ξ_q^{(r)} in (54), and the m_q^{(r)}(t)'s in (55) are unrelated to τ^{(r,q−1)}, we can omit (56) and treat τ as a nuisance parameter.
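A sketch of the qth SAGE cycle for the deterministic model follows, implementing the E-step (51) and the M-step (54) and (55); the noise update (56) is omitted per Remark 2, and the other sources are left untouched per (57) and (58). It uses the same hypothetical helpers and grid-scan simplification as before.

```python
import numpy as np

def sage_cycle_deterministic(W, q, etas, M_est, grid):
    """The q-th SAGE cycle for the deterministic model: E-step (51), then
    M-step (54)-(55). Only source q is updated; the rest are carried over."""
    K, L = W.shape
    B = np.column_stack([steering_vector(e, K) for e in etas])
    H = np.outer(B[:, q], M_est[q]) + (W - B @ M_est)     # E-step (51)
    P = H @ H.conj().T / L                                # \hat P_q^{(r)}
    A = np.column_stack([steering_vector(e, K) for e in grid])
    score = np.real(np.einsum('ki,kl,li->i', A.conj(), P, A))
    etas, M_est = etas.copy(), M_est.copy()
    etas[q] = grid[np.argmax(score)]                      # (54)
    b = steering_vector(etas[q], K)
    M_est[q] = b.conj() @ H / K                           # (55)
    return etas, M_est                                    # others unchanged, (57)-(58)
```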

5.2. Random Signal Model

Based on (48), we have h_q(t) ~ CN(0, N_q) with N_q = ρ_q b(ξ_q)b^H(ξ_q) + τI_K. However, the distribution of h_g(t) with g ≠ q is only associated with m_g(t). The complete-data LLF is written as

$$Z(\xi_q,\rho,\tau)=\sum_{t=1}^{L}\sum_{g\neq q}\log p\left(m_g(t);\rho_g\right)+\sum_{t=1}^{L}\log p\left(\mathbf{h}_q(t);\xi_q,\rho_q,\tau\right)=-L(G-1)\log(\pi)-L\sum_{g\neq q}\left[\log(\rho_g)+\hat{P}_g/\rho_g\right]-LK\log(\pi)-L\left[\log\mathrm{Det}(\mathbf{N}_q)+\mathrm{Tr}\left(\mathbf{N}_q^{-1}\hat{\mathbf{P}}_q\right)\right],\tag{59}$$

where P̂_g = (1/L)Σ_{t=1}^{L} |m_g(t)|² and |·| denotes the modulus.

5.2.1. E-Step

Calculate the conditional expectation of the complete-data LLF in (59)
$$\mathrm{E}\left\{Z(\xi_q,\rho,\tau)\,\middle|\,\mathbf{W};\Delta^{(r,q-1)}\right\}=-V-L\sum_{g\neq q}\left[\log(\rho_g)+\hat{P}_g^{(r,q)}/\rho_g\right]-L\left[\log\mathrm{Det}(\mathbf{N}_q)+\mathrm{Tr}\left(\mathbf{N}_q^{-1}\hat{\mathbf{P}}_q^{(r)}\right)\right],\tag{60}$$

where Δ = (ξ, ρ, τ) and V = L(K + G − 1)log(π). In (60),

$$\hat{P}_g^{(r,q)}=\mathrm{E}\left\{\hat{P}_g\,\middle|\,\mathbf{W};\Delta^{(r,q-1)}\right\}=\rho_g^{(r,q-1)}\left(1-\mathbf{b}^{H}(\xi_g^{(r,q-1)})\mathbf{d}_g^{(r,q-1)}\right)+[\mathbf{d}_g^{(r,q-1)}]^{H}\hat{\mathbf{P}}_w\mathbf{d}_g^{(r,q-1)}\geq 0\tag{61}$$

with d_g^{(r,q−1)} = [N_w^{(r,q−1)}]^{−1} b(ξ_g^{(r,q−1)}) ρ_g^{(r,q−1)}, and

$$\hat{\mathbf{P}}_q^{(r)}=\hat{\mathbf{P}}_q^{(r,q)}=\mathrm{E}\left\{\hat{\mathbf{P}}_q\,\middle|\,\mathbf{W};\Delta^{(r,q-1)}\right\}=\mathbf{N}_q^{(r,q-1)}-\mathbf{N}_q^{(r,q-1)}[\mathbf{N}_w^{(r,q-1)}]^{-1}\mathbf{N}_q^{(r,q-1)}+\mathbf{N}_q^{(r,q-1)}[\mathbf{N}_w^{(r,q-1)}]^{-1}\hat{\mathbf{P}}_w[\mathbf{N}_w^{(r,q-1)}]^{-1}\mathbf{N}_q^{(r,q-1)}.\tag{62}$$

5.2.2. M-Step

Update the estimates of ξ q , ρ , and  τ by solving
$$\min_{\xi_q\in\Omega,\,\rho\geq\mathbf{0},\,\tau>0}\sum_{g\neq q}\left[\log(\rho_g)+\hat{P}_g^{(r,q)}/\rho_g\right]+\log\mathrm{Det}(\mathbf{N}_q)+\mathrm{Tr}\left(\mathbf{N}_q^{-1}\hat{\mathbf{P}}_q^{(r)}\right).\tag{63}$$

We thus have ρ_g^{(r,q)} = P̂_g^{(r,q)}, ∀g ≠ q, while ξ_q^{(r)}, ρ_q^{(r,q)}, and τ^{(r,q)} are obtained by solving

$$\min_{\xi_q\in\Omega,\,\sigma_q\geq 0,\,\tau>0} K\log(\tau)+\log\mathrm{Det}(\mathbf{Q}_q)+\frac{1}{\tau}\mathrm{Tr}\left(\mathbf{Q}_q^{-1}\hat{\mathbf{P}}_q^{(r)}\right),\tag{64}$$

where N_q = τQ_q with Q_q = σ_q b(ξ_q)b^H(ξ_q) + I_K and σ_q = ρ_q/τ. Following (45)–(47), ξ_q^{(r)}, ρ_q^{(r,q)}, and τ^{(r,q)} are obtained by
$$\xi_q^{(r)}=\xi_q^{(r,q)}=\arg\max_{\xi_q\in\Omega}\mathrm{Tr}\left(\mathbf{\Gamma}_q\hat{\mathbf{P}}_q^{(r)}\right),\tag{65}$$

$$\tau^{(r,q)}=\begin{cases} d_q^{(r)}/(K-1), & e_q^{(r)}>\mathrm{Tr}(\hat{\mathbf{P}}_q^{(r)})/K,\\[2pt] \mathrm{Tr}(\hat{\mathbf{P}}_q^{(r)})/K, & e_q^{(r)}\leq\mathrm{Tr}(\hat{\mathbf{P}}_q^{(r)})/K,\end{cases}\tag{66}$$

$$\rho_q^{(r,q)}=\max\left\{\left(e_q^{(r)}-\mathrm{Tr}(\hat{\mathbf{P}}_q^{(r)})/K\right)\Big/(K-1),\,0\right\},\tag{67}$$
where τ^{(r,q)} = 0 is possible, although its probability is very low. For example, if G = 2, L = 1, and ρ_1^{(1,1)} = 0 at the 1st cycle of the 1st iteration, then at the 2nd cycle of the 1st iteration we have

$$\mathbf{N}_w^{(1,1)}=\rho_2^{(1,1)}\mathbf{b}(\xi_2^{(1,1)})\mathbf{b}^{H}(\xi_2^{(1,1)})+\tau^{(1,1)}\mathbf{I}_K=\mathbf{N}_2^{(1,1)},\tag{68}$$

and then P̂_2^{(1)} = w(1)w^H(1) by (62). Furthermore, if w(1) = h b(ξ̄) (h ≠ 0, ξ̄ ∈ Ω), we obtain ξ_2^{(1)} = ξ̄ by (65) and τ^{(1,2)} = d_2^{(1)}/(K − 1) = 0 by (66).
To avoid τ^{(r,q)} = 0, we use the following two CM-steps to re-estimate ξ_q, ρ_q, and τ based on problem (64) whenever (66) returns τ^{(r,q)} = 0; a sketch of the full cycle, including this fallback, follows the list below.
  • First CM-step: Estimate ξ q and ρ q but hold τ = τ ( r , q 1 ) fixed. Then, problem (64) is simplified to
    $$\min_{\xi_q\in\Omega,\,\sigma_q\geq 0}\log\mathrm{Det}(\mathbf{Q}_q)+\frac{1}{\tau^{(r,q-1)}}\mathrm{Tr}\left(\mathbf{Q}_q^{-1}\hat{\mathbf{P}}_q^{(r)}\right).\tag{69}$$
    Following (21) and (22), ξ q ( r ) and ρ q ( r , q ) are obtained by
    $$\xi_q^{(r)}=\xi_q^{(r,q)}=\arg\max_{\xi_q\in\Omega}\mathrm{Tr}\left(\mathbf{\Gamma}_q\hat{\mathbf{P}}_q^{(r)}\right),\tag{70}$$

    $$\rho_q^{(r,q)}=\sigma_q^{(r,q)}\tau^{(r,q-1)}=\max\left\{\left(e_q^{(r)}-\tau^{(r,q-1)}\right)/K,\,0\right\}.\tag{71}$$
  • Second CM-step: Estimate τ but hold ξ q = ξ q ( r ) and σ q = σ q ( r , q ) fixed. Then, problem (64) is simplified to
    $$\min_{\tau>0} K\log(\tau)+\frac{1}{\tau}\mathrm{Tr}\left([\mathbf{Q}_q^{(r)}]^{-1}\hat{\mathbf{P}}_q^{(r)}\right),\tag{72}$$

    where Q_q^{(r)} = σ_q^{(r,q)} b(ξ_q^{(r)})b^H(ξ_q^{(r)}) + I_K. We obtain τ^{(r,q)} by

    $$\tau^{(r,q)}=\frac{1}{K}\mathrm{Tr}\left([\mathbf{Q}_q^{(r)}]^{-1}\hat{\mathbf{P}}_q^{(r)}\right)=\frac{\tau^{(r,q-1)}+d_q^{(r)}}{K},\tag{73}$$

    where d_q^{(r)} = Tr((I_K − Γ_q^{(r)})P̂_q^{(r)}) ≥ 0 and τ^{(r,q)} > 0 if τ^{(r,q−1)} > 0.
Moreover, the other parameter estimates are not updated at this cycle, and their iterative values are
$$\xi_g^{(r,q)}=\xi_g^{(r,q-1)},\;\forall g\neq q.\tag{74}$$
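The sketch below assembles the qth SAGE cycle for the random model under the same assumptions as before: the E-step (62), the M-step (65)-(67), and the CM-step fallback (70), (71), and (73) for the rare case in which (66) returns τ^{(r,q)} = 0.

```python
import numpy as np

def sage_cycle_random(P_w, q, etas, rhos, tau, grid):
    """The q-th SAGE cycle for the random model: E-step (62), M-step (65)-(67),
    with the CM-step fallback (70)-(71), (73). P_w is the sample covariance."""
    K = P_w.shape[0]
    Bs = [steering_vector(e, K) for e in etas]
    N_w = sum(r * np.outer(b, b.conj()) for r, b in zip(rhos, Bs)) + tau * np.eye(K)
    N_q = rhos[q] * np.outer(Bs[q], Bs[q].conj()) + tau * np.eye(K)
    T = N_q @ np.linalg.inv(N_w)
    P_q = N_q - T @ N_q + T @ P_w @ T.conj().T            # E-step (62)
    A = np.column_stack([steering_vector(e, K) for e in grid])
    score = np.real(np.einsum('ki,kl,li->i', A.conj(), P_q, A))
    etas, rhos = etas.copy(), rhos.copy()
    etas[q] = grid[np.argmax(score)]                      # (65)
    e_q = score.max() / K                                 # Tr(Gamma_q P_q)
    tr_q = np.real(np.trace(P_q))
    d_q = tr_q - e_q                                      # Tr((I - Gamma_q) P_q)
    if e_q > tr_q / K:
        new_tau = d_q / (K - 1)                           # (66), first case
        rhos[q] = (e_q - tr_q / K) / (K - 1)              # (67)
    else:
        new_tau = tr_q / K                                # (66), second case
        rhos[q] = 0.0
    if new_tau == 0.0:                                    # rare: re-estimate via the
        rhos[q] = max((e_q - tau) / K, 0.0)               # two CM-steps (70)-(71)
        new_tau = (tau + d_q) / K                         # and (73)
    return etas, rhos, new_tau
```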

6. Properties of the Proposed EM, MEM, and SAGE Algorithms

6.1. Convergence Point

It is easy to verify that the above EM-type algorithms satisfy standard regularity conditions [5,21,26] and always converge to stationary points of J ( Ψ , τ ) . Of course, the convergence points of these EM-type algorithms depend on their initial points. To generate appropriate initial points, we can employ the method presented in [10] using the deterministic signal model.

6.2. Complexity and Stability

At the rth iteration, the computational burdens of the above EM-type algorithms lie in solving the G maximization problems
$$\xi_g^{(r)}=\arg\max_{\xi_g\in\Omega}\mathrm{Tr}\left(\mathbf{\Gamma}_g\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g.\tag{75}$$
Thus, these EM-type algorithms have almost the same computational complexity per iteration; consequently, if an algorithm converges faster, it requires fewer iterations and, hence, fewer computations.
However, when the powers of the sources are unequal, we have found via simulation that the DOA estimates of multiple sources, updated by (75), tend to collapse onto the true DOA of the source with the largest power. Accordingly, these EM-type algorithms may be unstable. To address this issue, we can reduce the difference between ξ_g^{(r)} and ξ_g^{(r−1)} by merely increasing Tr(Γ_g P̂_g^{(r)}) rather than maximizing it, i.e., by requiring

$$\mathrm{Tr}\left(\mathbf{\Gamma}_g^{(r)}\hat{\mathbf{P}}_g^{(r)}\right)\geq\mathrm{Tr}\left(\mathbf{\Gamma}_g^{(r-1)}\hat{\mathbf{P}}_g^{(r)}\right),\;\forall g,\tag{76}$$

which guarantees the monotonicity of these EM-type algorithms [3]. As a good example, Algorithm 1 below has given excellent simulation results.
Algorithm 1 Gradient ascent with backtracking line search
1: y(η_g) ≜ K · Tr(Γ_g P̂_g^{(r)}), η_g = η_g^{(r−1)} ∈ (0, π) (radian).
2: while |y′(η_g)| > 0.001 do
3:   s = 0.1(π − η_g)/y′(η_g) if y′(η_g) > 0; s = −0.1η_g/y′(η_g) if y′(η_g) < 0.
4:   while y(η_g + s y′(η_g)) < y(η_g) + 0.3 s [y′(η_g)]² do
5:     s = 0.5 s.
6:   end while
7:   η_g = η_g + s y′(η_g) ∈ (0, π) (radian).
8: end while
9: η_g^{(r)} = η_g.
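A runnable Python rendering of Algorithm 1 is given below for reference. The function names y and dy (the objective K · Tr(Γ_g P̂_g^{(r)}) as a function of η_g and its derivative) are ours; the paper's pseudocode leaves the derivative y′(η_g) implicit, so treat this as a sketch.

```python
import numpy as np

def backtracking_ascent(y, dy, eta0, tol=1e-3):
    """Gradient ascent on y(eta) over (0, pi) with a backtracking (Armijo)
    line search, mirroring Algorithm 1. y is the objective, dy its derivative,
    and eta0 the previous iterate eta_g^(r-1)."""
    eta = eta0
    while abs(dy(eta)) > tol:
        g = dy(eta)
        # initial step keeps the trial point strictly inside (0, pi)
        s = 0.1 * (np.pi - eta) / g if g > 0 else 0.1 * eta / (-g)
        # Armijo condition: halve s until sufficient increase is achieved
        while y(eta + s * g) < y(eta) + 0.3 * s * g * g:
            s = 0.5 * s
        eta = eta + s * g
    return eta
```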

7. Simulation Results

We give simulation results to compare the proposed EM-type algorithms. For convenience, the array is assumed to be a uniformly spaced linear array, in which ζ_k = [(χ/2)(k − 1), 0, 0]^T. We set G = 2, L = 20, and β = (1/G)1 in the EM algorithm. Here, μ_1 = μ_2 = 90° is known, while η_1 and η_2 need to be estimated. M in the deterministic signal model is also generated from independent random numbers m_g(t) ~ CN(0, ρ_g). All of the algorithms adopt the stopping criterion ‖ξ^{(r+1)} − ξ^{(r)}‖ ≤ 0.001°. Algorithm 1 is designed to search for the η_g^{(r)}'s satisfying (76) [27]. Moreover, M^{(0)} is the all-ones matrix, ρ^{(0)} = 1, τ^{(0)} = (1/G)1 (for the MEM algorithm), and τ^{(0)} = 1. In this section, the EM, MEM, and SAGE algorithms are simply written as EM, MEM, and SAGE, respectively.
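As a hypothetical end-to-end illustration, the driver below reproduces the Figure 1 setup for the deterministic-model EM algorithm using the sketches from the previous sections. A coarse grid scan stands in for Algorithm 1's gradient search, so it is illustrative rather than a faithful reproduction of the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
K, L = 10, 20
etas_true = np.deg2rad([20.0, 80.0])            # eta_1 = 20 deg, eta_2 = 80 deg
powers = [10 ** 0.2, 10 ** 0.4]                 # rho_1 = 2 dB, rho_2 = 4 dB
tau = 10 ** 0.4                                 # tau = 4 dB
W = simulate_snapshots(etas_true, powers, tau, K, L, rng)

etas = np.deg2rad([24.0, 84.0])                 # initial DOA estimates
M_est = np.ones((2, L), dtype=complex)          # M^(0) is all-ones
beta = np.array([0.5, 0.5])                     # beta = (1/G) 1
grid = np.deg2rad(np.linspace(0.5, 179.5, 1791))
for r in range(1000):
    prev = etas.copy()
    etas, M_est = em_step_deterministic(W, etas, M_est, beta, grid)
    if np.linalg.norm(np.rad2deg(etas - prev)) <= 0.001:   # stopping criterion
        break
print("estimated DOAs (deg):", np.rad2deg(etas))
```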

7.1. Deterministic Signal Model

To compare the convergence of EM, MEM, and SAGE, Figure 1 plots their J(Ψ^{(r)}, τ^{(r)})'s, η_1^{(r)}'s, and η_2^{(r)}'s for one trial. In Figure 1, K = 10, η_1 = 20°, η_2 = 80°, ρ_1 = 2 dB, ρ_2 = 4 dB, τ = 4 dB, η_1^{(0)} = 24°, and η_2^{(0)} = 84°. It is easy to see that EM, MEM, and SAGE converge to a consistent (η_1, η_2) estimate given an accurate initial point. Moreover, EM converges similarly to MEM, while SAGE converges faster than EM and MEM.
Figure 2 and Figure 3 show two scatter plots of (η_1, η_2) estimates under 200 trials. In Figure 2, K = 10, η_1 = 25°, η_2 = 75°, ρ_1 = 4 dB, ρ_2 = 2 dB, τ = 4 dB, η_1^{(0)} = 40°, and η_2^{(0)} = 60°. Moreover, the numbers of desirable points obtained by EM, MEM, and SAGE are 68, 72, and 179, respectively. In Figure 3, K = 10, η_1 = 70°, η_2 = 78°, ρ_1 = 2 dB, ρ_2 = 4 dB, τ = 4 dB, η_1^{(0)} = 50°, and η_2^{(0)} = 58°. Moreover, the numbers of desirable points obtained by EM, MEM, and SAGE are 159, 157, and 190, respectively. Figure 2 and Figure 3 imply that, given a poor initial point, SAGE can avoid convergence to an undesirable stationary point of J(Ψ, τ) more efficiently than EM and MEM.
Note that in each of Figure 2 and Figure 3, SAGE requires the least processing time due to its fastest convergence and thus performs the fewest computations. Moreover, the two sources in Figure 2 are not closely spaced, so it is very difficult to mix them up, and the desirable points in Figure 2 are centered around the true position (25°, 75°). However, the two sources in Figure 3 are closely spaced, and the desirable points are centered around (78°, 70°) or (70°, 78°); i.e., these EM-type algorithms are likely to mix up closely spaced sources.
According to these simulations, we can conclude that (1) EM converges similarly to MEM, and (2) SAGE outperforms EM and MEM.

7.2. Random Signal Model

To compare the convergence of EM, MEM, and SAGE, Figure 4 plots their J(Ψ^{(r)}, τ^{(r)})'s, η_1^{(r)}'s, and η_2^{(r)}'s for one trial. In Figure 4, K = 10, η_1 = 20°, η_2 = 80°, ρ_1 = 4 dB, ρ_2 = 4 dB, τ = 4 dB, η_1^{(0)} = 24°, and η_2^{(0)} = 84°. We can also observe that, given an accurate initial point, EM, MEM, and SAGE converge to a consistent (η_1, η_2) estimate. Moreover, EM converges similarly to MEM, while SAGE converges faster than EM and MEM.
Figure 5 and Figure 6 show two scatter plots of (η_1, η_2) estimates under 200 trials. In Figure 5, K = 10, η_1 = 25°, η_2 = 75°, ρ_1 = 4 dB, ρ_2 = 2 dB, τ = 4 dB, η_1^{(0)} = 40°, and η_2^{(0)} = 60°. Moreover, the numbers of desirable points obtained by EM, MEM, and SAGE are 185, 186, and 175, respectively. In Figure 6, K = 10, η_1 = 70°, η_2 = 78°, ρ_1 = 2 dB, ρ_2 = 1 dB, τ = 4 dB, η_1^{(0)} = 55°, and η_2^{(0)} = 63°. Moreover, the numbers of desirable points obtained by EM, MEM, and SAGE are 161, 161, and 172, respectively. Figure 5 and Figure 6 imply that EM converges similarly to MEM, but compared to EM and MEM, SAGE is less efficient at avoiding convergence to an undesirable stationary point of J(Ψ, τ) in Figure 5 and more efficient in Figure 6. Note that in each of Figure 5 and Figure 6, SAGE requires the least processing time due to its fastest convergence and thus performs the fewest computations. In addition, these EM-type algorithms are likely to mix up closely spaced sources, so the desirable points in Figure 5 are centered around (25°, 75°) while the desirable points in Figure 6 are centered around (78°, 70°) or (70°, 78°).
Based on the above figures, we can conclude that (1) EM converges similarly to MEM, and (2) SAGE cannot always outperform EM and MEM.

7.3. Deterministic and Random Signal Models

Snapshots from the random signal model can be processed by the EM-type algorithms derived for the deterministic signal model, which means that we can compare these algorithms across both signal models. The above simulation results have shown that EM converges similarly to MEM, so for simplicity we only compare EM and SAGE for both signal models in this subsection.
Since both signal models have the same DOA parameter ξ , the stopping criterion ξ ( r + 1 ) ξ ( r ) 0.001 ° is suitable. Figure 7 shows a scatter plot of ( η 1 , η 2 ) estimates under 50 trials. In Figure 7, K = 10 , η 1 = 50 ° , η 2 = 100 ° , ρ 1 = 4 dB , ρ 2 = 4 dB , τ = 4 dB , η 1 ( 0 ) = 55 ° , and η 2 ( 0 ) = 95 ° . We can see that EM and SAGE for both signal models yield close ( η 1 , η 2 ) estimates for each trial.
Based on Figure 7, Figure 8 compares the numbers of iterations required by these algorithms. We can observe that EM for the deterministic signal model generally needs more iterations than EM for the random signal model. The reason is that EM for the deterministic signal model must update more parameter estimates at each iteration and, thus, converges more slowly than EM for the random signal model. Moreover, SAGE for the deterministic signal model generally needs fewer iterations than SAGE for the random signal model. More importantly, SAGE for the deterministic signal model always requires the fewest iterations in every trial. Thus, we can conclude that SAGE for the deterministic signal model is superior to the other algorithms in computational cost.
Figure 9 shows the root mean square error (RMSE) performance of DOA estimation obtained by the SAGE algorithm for the deterministic and random signal models. In Figure 9, η_1 = 60°, η_2 = 120°, ρ_1 = 0 dB, ρ_2 = 1 dB, τ = 3 dB, η_1^{(0)} = 55°, and η_2^{(0)} = 115°. Each RMSE is computed from 1000 independent realizations. We can observe that as the number of sensors K increases, the SAGE algorithm for each signal model yields smaller RMSEs, which indicates that increasing the number of sensors improves the accuracy of DOA estimation.

8. Conclusions

In this paper, we applied and designed the EM and SAGE algorithms for DOA estimation in unknown uniform noise and proposed a new MEM algorithm applicable to this noise assumption. Next, we improved these EM-type algorithms to ensure stability when the powers of the sources are unequal. After this improvement, simulation results illustrated that the EM algorithm converges similarly to the MEM algorithm, the SAGE algorithm outperforms the EM and MEM algorithms for the deterministic signal model, and the SAGE algorithm cannot always outperform the EM and MEM algorithms for the random signal model. In addition, simulation results indicated that when these EM-type algorithms process the same snapshots from the random signal model, the SAGE algorithm for the deterministic signal model can require the fewest iterations and computations.

Author Contributions

This paper was co-authored by M.-Y.G. and B.L. Conceptualization, M.-Y.G.; methodology, M.-Y.G.; software, M.-Y.G.; validation, M.-Y.G. and B.L.; formal analysis, M.-Y.G.; investigation, M.-Y.G.; resources, B.L.; data curation, M.-Y.G.; writing—original draft preparation, M.-Y.G.; writing—review and editing, B.L.; supervision, B.L.; funding acquisition, B.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Open Research Project of Jiangsu Provincial Key Laboratory of Photonic and Electronic Materials Sciences and Technology, grant number NJUZDS 2022-008.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available because they were generated by simulating the signal models described in the paper rather than drawn from publicly available datasets.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Krim, H.; Viberg, M. Two decades of array signal processing research: The parametric approach. IEEE Signal Process. Mag. 1996, 13, 67–94. [Google Scholar] [CrossRef]
  2. Godara, L.C. Application of antenna arrays to mobile communications. II. Beam-forming and direction-of-arrival considerations. Proc. IEEE 1997, 85, 1195–1245. [Google Scholar] [CrossRef]
  3. Dempster, A.P.; Laird, N.M.; Rubin, D.B. Maximum likelihood from incomplete data via the EM algorithm. J. R. Stat. Soc. Ser. B (Methodol.) 1977, 39, 1–38. [Google Scholar]
  4. Meng, X.; Dyk, D.V. The EM algorithm–an old folk-song sung to a fast new tune. J. R. Stat. Soc. Ser. B (Methodol.) 1997, 59, 511–567. [Google Scholar] [CrossRef]
  5. Feder, M.; Weinstein, E. Parameter estimation of superimposed signals using the EM algorithm. IEEE Trans. Acoust. Speech Signal Process. 1988, 36, 477–489. [Google Scholar] [CrossRef]
  6. Miller, M.I.; Fuhrmann, D.R. Maximum-likelihood narrow-band direction finding and the EM algorithm. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1560–1577. [Google Scholar] [CrossRef]
  7. Fessler, J.A.; Hero, A.O. Space-alternating generalized expectation-maximization algorithm. IEEE Trans. Signal Process. 1994, 42, 2664–2677. [Google Scholar] [CrossRef]
  8. Chung, P.; Bohme, J.F. Comparative convergence analysis of EM and SAGE algorithms in DOA estimation. IEEE Trans. Signal Process. 2001, 49, 2940–2949. [Google Scholar] [CrossRef]
  9. Gong, M.-Y.; Lyu, B. Alternating maximization and the EM algorithm in maximum-likelihood direction finding. IEEE Trans. Veh. Technol. 2021, 70, 9634–9645. [Google Scholar] [CrossRef]
  10. Ziskind, I.; Wax, M. Maximum likelihood localization of multiple sources by alternating projection. IEEE Trans. Acoust. Speech Signal Process. 1988, 36, 1553–1560. [Google Scholar] [CrossRef]
  11. Stoica, P.; Nehorai, A. MUSIC, maximum likelihood, and Cramer-Rao bound. IEEE Trans. Acoust. Speech Signal Process. 1989, 37, 720–741. [Google Scholar] [CrossRef]
  12. Stoica, P.; Nehorai, A. Performance study of conditional and unconditional direction-of-arrival estimation. IEEE Trans. Acoust. Speech Signal Process. 1990, 38, 1783–1795. [Google Scholar] [CrossRef]
  13. Liao, B.; Chan, S.; Huang, L.; Guo, C. Iterative methods for subspace and DOA estimation in nonuniform noise. IEEE Trans. Signal Process. 2016, 64, 3008–3020. [Google Scholar] [CrossRef]
  14. Liao, B.; Huang, L.; Guo, C.; So, H.C. New approaches to direction-of-arrival estimation with sensor arrays in unknown nonuniform noise. IEEE Sens. J. 2016, 16, 8982–8989. [Google Scholar] [CrossRef]
  15. Esfandiari, M.; Vorobyov, S.A.; Alibani, S.; Karimi, M. Non-iterative subspace-based DOA estimation in the presence of nonuniform noise. IEEE Signal Process. Lett. 2019, 26, 848–852. [Google Scholar] [CrossRef]
  16. Esfandiari, M.; Vorobyov, S.A. A novel angular estimation method in the presence of nonuniform noise. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Singapore, 22–27 May 2022. [Google Scholar]
  17. Yang, J.; Yang, Y.; Lu, J.; Yang, L. Iterative methods for DOA estimation of correlated sources in spatially colored noise fields. Signal Process. 2021, 185, 108100. [Google Scholar] [CrossRef]
  18. Selva, J. Efficient computation of ML DOA estimates under unknown nonuniform sensor noise powers. Signal Process. 2023, 205, 108879. [Google Scholar] [CrossRef]
  19. Pesavento, M.; Gershman, A.B. Maximum-likelihood direction-of-arrival estimation in the presence of unknown nonuniform noise. IEEE Trans. Signal Process. 2001, 49, 1310–1324. [Google Scholar] [CrossRef]
  20. Chen, C.E.; Lorenzelli, F.; Hudson, R.E.; Yao, K. Stochastic maximum-likelihood DOA estimation in the presence of unknown nonuniform noise. IEEE Trans. Signal Process. 2008, 56, 3038–3044. [Google Scholar] [CrossRef]
  21. Meng, M.; Rubin, D.B. Maximum likelihood estimation via the ECM algorithm: A general framework. Biometrika 1993, 80, 267–278. [Google Scholar] [CrossRef]
  22. Chung, P.; Bohme, J.F. DOA estimation using fast EM and SAGE algorithms. Signal Process. 2002, 82, 1753–1762. [Google Scholar] [CrossRef]
  23. Rhodes, I.B. A tutorial introduction to estimation and filtering. IEEE Trans. Autom. Control 1971, 16, 688–706. [Google Scholar] [CrossRef]
  24. Jaffer, A.G. Maximum likelihood direction finding of stochastic sources: A separable solution. In Proceedings of the International Conference on Acoustics, Speech, and Signal Processing (ICASSP), New York, NY, USA, 11–14 April 1988. [Google Scholar]
  25. Stoica, P.; Nehorai, A. On the concentrated stochastic likelihood function in array signal processing. Circuits Syst. Signal Process. 1995, 14, 669–674. [Google Scholar] [CrossRef]
  26. Wu, C.F.J. On the convergence properties of the EM algorithm. Ann. Stat. 1983, 11, 95–103. [Google Scholar]
  27. Boyd, S.; Vandenberghe, L. Convex Optimization, 1st ed.; Cambridge University Press: New York, NY, USA, 2004; pp. 463–466. [Google Scholar]
Figure 1. J(Ψ^{(r)}, τ^{(r)}), η_1^{(r)}, and η_2^{(r)} comparison.
Figure 2. Scatter plot of (η_1, η_2) estimates.
Figure 3. Scatter plot of (η_1, η_2) estimates.
Figure 4. J(Ψ^{(r)}, τ^{(r)}), η_1^{(r)}, and η_2^{(r)} comparison.
Figure 5. Scatter plot of (η_1, η_2) estimates.
Figure 6. Scatter plot of (η_1, η_2) estimates.
Figure 7. Scatter plot of (η_1, η_2) estimates.
Figure 8. Numbers of iterations of different algorithms.
Figure 9. RMSEs of the SAGE algorithm.
