Article

Sequential Estimation of Relative Transfer Function in Application of Acoustic Beamforming

Intelligent Signal Processing Lab., Korea University, Seoul 136-713, Korea
Information 2020, 11(11), 505; https://doi.org/10.3390/info11110505
Submission received: 18 September 2020 / Revised: 16 October 2020 / Accepted: 26 October 2020 / Published: 28 October 2020
(This article belongs to the Special Issue Computational Intelligence for Audio Signal Processing)

Abstract

In this paper, a sequential approach is proposed to estimate the relative transfer functions (RTFs) used in developing a generalized sidelobe canceller (GSC). The latency in calibrating microphone arrays for the GSC, often suffered by conventional approaches involving batch operations, is significantly reduced in the proposed sequential method. This is accomplished by generating an RTF immediately from the initial input segments and then updating it as the input stream continues. Experimental results based on the mean square error (MSE) criterion show that the proposed method outperforms both the conventional batch approach and recently introduced least mean squares approaches.

1. Introduction

Acoustic beamforming using a microphone array has been considered one of the most effective front-end tools for enhancing acoustic signal quality in speech communication and automatic speech recognition. Recently proposed acoustic beamforming techniques based on the generalized sidelobe canceller (GSC), such as [1,2,3], have clearly shown that precise estimation of the relative transfer functions (RTFs) between the microphones of an array is critical for the effective performance of a beamformer. The estimated RTF plays a key role in calibrating the microphone array to compensate for the signal leakage problem [1] in the GSC, and it is used for constructing matched beamformers [2] in scenarios where the desired signal is contaminated by directional non-stationary interference, such as a competing speaker.
The aim of this paper is to develop an effective method to estimate the RTFs for acoustic beamforming. Existing least-squares-based methods, such as the batch least squares (BLS) approach, require a set of input data blocks for initial calibration [1,2]. As the input data block grows larger for improved calibration, the latency issue becomes more significant. In addition, the RTF estimate deteriorates as the target moves, and continued large displacement of the target would eventually make the beamformer ineffective. An adaptive form of RTF estimation using least mean squares (LMS) was introduced previously [3]. This adaptive method requires an additional speech detector to determine whether the detected acoustic signal contains any speech. As a result, the overall RTF estimation process becomes sensitive to the performance of the speech detector, which also adds significant computational load. LMS is best suited to detecting an acoustic signal over a wide area of interest, as in wide-area surveillance applications: its fast directional response allows it to zero in on a new sound source rapidly and to generate accurate RTF estimates accordingly. This feature, however, is not required in human-computer interface applications, in which the human subject typically changes direction slowly. With LMS-type algorithms, noise from other directions may prompt the algorithm to reinitialize the RTFs and cause the beamformer to listen in the wrong direction. We developed our algorithm with the human-computer speech interface as the main application; therefore, RTF updates driven by fast directional changes of the sound source are unnecessary and may only hinder performance when noise is present.
To counter these problems and to further improve the performance, we propose an RTF estimation method that employs sequential-mode least squares (SLS) [4]. This offers several advantages over the existing methods. First, it provides flexibility in the way the acoustic data are collected for estimating the RTFs. BLS-type methods require that acoustic data be collected for a set period of time before the RTF can be estimated: in our experiments implementing a BLS-based method, a collection period of at least 3.2 s was required to obtain RTF estimates sufficient for reasonable GSC performance. The proposed method instead updates the RTF on a frame-by-frame basis, so no minimum acoustic collection period is needed.
The other advantage is the memory efficiency achievable in hardware-based processing. The BLS implementation requires, per source, a memory of {(# of microphones − 1) × (2 × # of frequency bins) × (# of frames used in estimation) × (# of bytes per variable)} bytes. The proposed method, in contrast, updates the RTF per frame and therefore only requires memory storage sufficient for a single frame.
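As a concrete illustration, the following sketch compares the two memory footprints; the array size, FFT length, frame count, and data type below are illustrative assumptions, not values taken from a specific hardware configuration.

```python
# Toy comparison of BLS vs. SLS memory footprints for RTF estimation.
# All parameter values below are illustrative assumptions.

def bls_memory_bytes(num_mics, num_bins, num_frames, bytes_per_value=4):
    # (M - 1) RTFs, complex values (2 floats per bin), stored for every frame.
    return (num_mics - 1) * (2 * num_bins) * num_frames * bytes_per_value

def sls_memory_bytes(num_mics, num_bins, bytes_per_value=4):
    # The sequential update only needs storage for a single frame.
    return (num_mics - 1) * (2 * num_bins) * 1 * bytes_per_value

if __name__ == "__main__":
    M, bins, frames = 4, 257, 200  # e.g., 512-point FFT, 3.2 s of 16 ms frames
    print("BLS:", bls_memory_bytes(M, bins, frames), "bytes")
    print("SLS:", sls_memory_bytes(M, bins), "bytes")
```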

2. Problem Formulation

Let s(m) denote the target signal to which the beam should be focused, where m is the discrete time index. Then, the observed signal at the ith microphone of an array, yi(m), is assumed to be given by
$$y_i(m) = h_i(m) * s(m) + n_i(m), \quad i = 1, \ldots, M \tag{1}$$
where $*$ denotes convolution, $h_i(m)$ represents the acoustic impulse response between the $i$th microphone and the target, $M$ denotes the number of microphones in the array, and $n_i(m)$ is ambient noise assumed to be stationary and uncorrelated with the source signal.
As illustrated in Figure 1, the signal observed at each microphone is the target signal filtered by the respective acoustic transfer function (ATF) between the source and that microphone, plus the noise signal. To steer a beam toward the target, we need to estimate the ATFs accurately in order to recover the original signal correctly. However, precise estimation of the ATF is very difficult because it depends on further parameters such as microphone directional responses and on boundary conditions such as room dimensions and wall reflective properties. Instead of the ATF, the RTF can be used to steer the beam toward the target effectively [1,2]. The goal of this paper is to estimate the RTF efficiently.
The signal model of interest in this paper is formed in the short-time Fourier transform (STFT) domain. In the STFT domain, (1) can be rewritten as
$$Y_i(l,\omega) = H_i(\omega)\,S(l,\omega) + N_i(l,\omega) \tag{2}$$
where l and ω denote the frame and the frequency index, respectively.
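For readers implementing the model, the per-frame spectra $Y_i(l,\omega)$ can be obtained with an off-the-shelf STFT. The following minimal sketch uses scipy.signal.stft; the frame length and overlap are assumed values, not the paper's settings.

```python
# Minimal sketch: obtain Y_i(l, w) for all microphones via the STFT.
# Frame length and overlap are illustrative assumptions.
import numpy as np
from scipy.signal import stft

def stft_frames(y, fs=16000, frame_len=512):
    """y: (num_mics, num_samples) array of microphone signals.
    Returns Y with shape (num_mics, num_bins, num_frames)."""
    _, _, Y = stft(y, fs=fs, nperseg=frame_len,
                   noverlap=frame_len // 2, axis=-1)
    return Y
```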
The RTF is defined as the ratio between the transfer function of the $i$th microphone and that of the reference microphone. Here, we have chosen the leftmost one, which is the first microphone, as the reference:
$$P_i(\omega) \triangleq \frac{H_i(\omega)}{H_1(\omega)}, \quad i = 1, \ldots, M \tag{3}$$
An RTF is estimated when the current input frame is determined to contain an acoustic event caused by the target. The observation $Y_i(l,\omega)$ in (2) is rearranged by using (3) as
$$Y_i(l,\omega) = H_i(\omega)\,S(l,\omega) + N_i(l,\omega) = P_i(\omega)\,Y_1(l,\omega) + U_i(l,\omega), \tag{4}$$
where
$$U_i(l,\omega) = N_i(l,\omega) - P_i(\omega)\,N_1(l,\omega). \tag{5}$$
The goal here is to efficiently estimate the RTF $P_i(\omega)$ from the given observations $Y_i(l,\omega)$.

3. Sequential Estimation of RTF

In (4) and (5), we assume that the analysis interval of the STFT is sufficiently long for the observed signal in the $l$th frame to be considered stationary. In addition, we have assumed that the ambient noise $n_i(m)$ is stationary. Thus, the cross power spectral density (CPSD) between $Y_i$ and $Y_1$ in the $l$th frame is written as
$$\Phi_{Y_iY_1}(l,\omega) = P_i(\omega)\,\Phi_{Y_1Y_1}(l,\omega) + \Phi_{U_iY_1}(\omega). \tag{6}$$
Note that since $U_i$ is uncorrelated with $S$, $\Phi_{U_iY_1}$ is independent of the time index $l$.
Let $\hat{\Phi}_{Y_iY_1}(l,\omega)$ denote an estimate of $\Phi_{Y_iY_1}(l,\omega)$; then, by using (6), it can be rewritten as
$$\hat{\Phi}_{Y_iY_1}(l,\omega) = P_i(\omega)\,\hat{\Phi}_{Y_1Y_1}(l,\omega) + \hat{\Phi}_{U_iY_1}(l,\omega) = P_i(\omega)\,\hat{\Phi}_{Y_1Y_1}(l,\omega) + \Phi_{U_iY_1}(\omega) + \varepsilon_i(l,\omega), \tag{7}$$
where $\varepsilon_i(l,\omega) = \hat{\Phi}_{U_iY_1}(l,\omega) - \Phi_{U_iY_1}(\omega)$ is the estimation error. We now consider the acoustic data of a finite duration, corresponding to the first $l$ frames of the analysis segment of the input signal, for estimating $P_i(\omega)$. Via the BLS approach, $P_i(\omega)$ can be obtained from the following overdetermined system of equations:
$$\begin{bmatrix} \hat{\Phi}_{Y_iY_1}(1,\omega) \\ \hat{\Phi}_{Y_iY_1}(2,\omega) \\ \vdots \\ \hat{\Phi}_{Y_iY_1}(l,\omega) \end{bmatrix} = \begin{bmatrix} \hat{\Phi}_{Y_1Y_1}(1,\omega) & 1 \\ \hat{\Phi}_{Y_1Y_1}(2,\omega) & 1 \\ \vdots & \vdots \\ \hat{\Phi}_{Y_1Y_1}(l,\omega) & 1 \end{bmatrix} \begin{bmatrix} P_i(\omega) \\ \Phi_{U_iY_1}(\omega) \end{bmatrix} + \begin{bmatrix} \varepsilon_i(1,\omega) \\ \varepsilon_i(2,\omega) \\ \vdots \\ \varepsilon_i(l,\omega) \end{bmatrix} \tag{8}$$
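For one frequency bin, the BLS solution of (8) can be sketched as below. The paper does not specify the CPSD estimator, so the instantaneous cross-periodogram $Y_i(l)\,Y_1^*(l)$ used here is an illustrative assumption (a smoothed estimate would also fit the formulation).

```python
# Sketch of the BLS solution of (8) for one frequency bin.
# The per-frame CPSD estimates use the instantaneous cross-periodogram,
# which is an assumption; the paper does not prescribe the estimator.
import numpy as np

def bls_rtf(Y_i, Y_1):
    """Y_i, Y_1: complex arrays of length l (one frequency bin, l frames).
    Returns BLS estimates of P_i(w) and Phi_UiY1(w)."""
    phi_i1 = Y_i * np.conj(Y_1)              # CPSD estimates, shape (l,)
    phi_11 = np.abs(Y_1) ** 2                # auto-PSD estimates, shape (l,)
    A = np.column_stack([phi_11.astype(complex),
                         np.ones(len(phi_11), dtype=complex)])
    theta, *_ = np.linalg.lstsq(A, phi_i1, rcond=None)
    return theta[0], theta[1]                # (RTF, noise CPSD term)
```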
The idea behind the SLS is to recursively update the least squares estimate as new observations are acquired [4]. The following quantities are defined for sequentially solving (8):
$$\mathbf{y}_i(l,\omega) = \left[ \hat{\Phi}_{Y_iY_1}(1,\omega) \;\; \hat{\Phi}_{Y_iY_1}(2,\omega) \;\; \cdots \;\; \hat{\Phi}_{Y_iY_1}(l,\omega) \right]^T \tag{9}$$
$$\mathbf{a}(l,\omega) = \left[ \hat{\Phi}_{Y_1Y_1}(l,\omega) \;\; 1 \right] \tag{10}$$
$$\mathbf{A}(l,\omega) = \begin{bmatrix} \hat{\Phi}_{Y_1Y_1}(1,\omega) & 1 \\ \hat{\Phi}_{Y_1Y_1}(2,\omega) & 1 \\ \vdots & \vdots \\ \hat{\Phi}_{Y_1Y_1}(l,\omega) & 1 \end{bmatrix} = \begin{bmatrix} \mathbf{A}(l-1,\omega) \\ \mathbf{a}(l,\omega) \end{bmatrix} \tag{11}$$
$$\boldsymbol{\varepsilon}_i(l,\omega) = \left[ \varepsilon_i(1,\omega) \;\; \varepsilon_i(2,\omega) \;\; \cdots \;\; \varepsilon_i(l,\omega) \right]^T \tag{12}$$
$$\boldsymbol{\theta}_i(\omega) = \left[ P_i(\omega) \;\; \Phi_{U_iY_1}(\omega) \right]^T \tag{13}$$
Then, (8) can be rewritten as
$$\mathbf{y}_i(l,\omega) = \begin{bmatrix} \mathbf{y}_i(l-1,\omega) \\ \hat{\Phi}_{Y_iY_1}(l,\omega) \end{bmatrix} \tag{14}$$
where
$$\mathbf{y}_i(l-1,\omega) = \mathbf{A}(l-1,\omega)\,\boldsymbol{\theta}_i(\omega) + \boldsymbol{\varepsilon}_i(l-1,\omega)$$
$$\hat{\Phi}_{Y_iY_1}(l,\omega) = \mathbf{a}(l,\omega)\,\boldsymbol{\theta}_i(\omega) + \varepsilon_i(l,\omega)$$
Let $\hat{\boldsymbol{\theta}}_i(l,\omega)$ denote the SLS solution at the $l$th frame when the measurement $\hat{\Phi}_{Y_iY_1}(l,\omega)$ is given. By using (10) and (14) and the given measurement vector $\mathbf{y}_i(l,\omega)$, $\hat{\boldsymbol{\theta}}_i(l,\omega)$ is determined as follows (note that $\omega$ is omitted in the following derivation for compactness of the expressions):
$$\hat{\boldsymbol{\theta}}_i(l) = \left\{ \mathbf{A}^T(l)\mathbf{A}(l) \right\}^{-1} \mathbf{A}^T(l)\,\mathbf{y}_i(l) = \left\{ \mathbf{A}^T(l-1)\mathbf{A}(l-1) + \mathbf{a}^T(l)\mathbf{a}(l) \right\}^{-1} \left\{ \mathbf{A}^T(l-1)\,\mathbf{y}_i(l-1) + \mathbf{a}^T(l)\,\hat{\Phi}_{Y_iY_1}(l) \right\}. \tag{15}$$
Let D(l) denote the inverse of the Gram matrix of A(l) such that
$$\mathbf{D}(l) = \left\{ \mathbf{A}^T(l)\mathbf{A}(l) \right\}^{-1} \tag{16}$$
Use of the matrix-inversion lemma and (15) leads us to
$$\left\{ \mathbf{A}^T(l-1)\mathbf{A}(l-1) + \mathbf{a}^T(l)\mathbf{a}(l) \right\}^{-1} = \left\{ \mathbf{D}^{-1}(l-1) + \mathbf{a}^T(l)\mathbf{a}(l) \right\}^{-1} = \mathbf{D}(l-1) - \mu(l)\,\mathbf{D}(l-1)\,\mathbf{a}^T(l)\,\mathbf{a}(l)\,\mathbf{D}(l-1) \tag{17}$$
where
$$\mu(l) = \left\{ 1 + \mathbf{a}(l)\,\mathbf{D}(l-1)\,\mathbf{a}^T(l) \right\}^{-1} \quad \left( \text{or, equivalently, } \mu(l)\,\mathbf{a}(l)\,\mathbf{D}(l-1)\,\mathbf{a}^T(l) = 1 - \mu(l) \right)$$
Then, substituting (16) and (17) into (15) yields the SLS solution, which is recursively updated:
$$\begin{aligned}
\hat{\boldsymbol{\theta}}_i(l) &= \left\{ \mathbf{D}(l-1) - \mu(l)\mathbf{D}(l-1)\mathbf{a}^T(l)\mathbf{a}(l)\mathbf{D}(l-1) \right\} \left\{ \mathbf{A}^T(l-1)\mathbf{y}_i(l-1) + \mathbf{a}^T(l)\hat{\Phi}_{Y_iY_1}(l) \right\} \\
&= \mathbf{D}(l-1)\mathbf{A}^T(l-1)\mathbf{y}_i(l-1) - \mu(l)\mathbf{D}(l-1)\mathbf{a}^T(l)\mathbf{a}(l)\mathbf{D}(l-1)\mathbf{A}^T(l-1)\mathbf{y}_i(l-1) \\
&\quad + \mathbf{D}(l-1)\mathbf{a}^T(l)\hat{\Phi}_{Y_iY_1}(l) - \mu(l)\mathbf{D}(l-1)\mathbf{a}^T(l)\mathbf{a}(l)\mathbf{D}(l-1)\mathbf{a}^T(l)\hat{\Phi}_{Y_iY_1}(l) \\
&= \hat{\boldsymbol{\theta}}_i(l-1) - \mu(l)\mathbf{D}(l-1)\mathbf{a}^T(l)\mathbf{a}(l)\hat{\boldsymbol{\theta}}_i(l-1) + \mathbf{D}(l-1)\mathbf{a}^T(l)\hat{\Phi}_{Y_iY_1}(l) - \mathbf{D}(l-1)\mathbf{a}^T(l)\left(1-\mu(l)\right)\hat{\Phi}_{Y_iY_1}(l) \\
&= \hat{\boldsymbol{\theta}}_i(l-1) - \mu(l)\mathbf{D}(l-1)\mathbf{a}^T(l)\mathbf{a}(l)\hat{\boldsymbol{\theta}}_i(l-1) + \mu(l)\mathbf{D}(l-1)\mathbf{a}^T(l)\hat{\Phi}_{Y_iY_1}(l) \\
&= \hat{\boldsymbol{\theta}}_i(l-1) + \mu(l)\,\mathbf{D}(l-1)\,\mathbf{a}^T(l)\left\{ \hat{\Phi}_{Y_iY_1}(l) - \mathbf{a}(l)\,\hat{\boldsymbol{\theta}}_i(l-1) \right\}.
\end{aligned} \tag{18}$$
$P_i(\omega)$, the first element of $\hat{\boldsymbol{\theta}}_i(l)$, estimates the RTF between the $i$th microphone and the speaker and is the key quantity to be captured here. Since we have assumed the background noise is stationary, by using (7), (10) and (13), the term $\{\hat{\Phi}_{Y_iY_1}(l) - \mathbf{a}(l)\hat{\boldsymbol{\theta}}_i(l-1)\}$ in (18) leads to the estimation error of the RTF as follows:
$$\hat{\Phi}_{Y_iY_1}(l) - \mathbf{a}(l)\,\hat{\boldsymbol{\theta}}_i(l-1) = P_i\,\hat{\Phi}_{Y_1Y_1}(l) + \hat{\Phi}_{U_iY_1}(l) - \hat{P}_i(l-1)\,\hat{\Phi}_{Y_1Y_1}(l) - \hat{\Phi}_{U_iY_1}(l-1) \approx \hat{\Phi}_{Y_1Y_1}(l)\left\{ P_i - \hat{P}_i(l-1) \right\}. \tag{19}$$
Equations (18) and (19) tell us that this error is reflected in the update of the current RTF estimate with gain $\mu(l)\,\mathbf{D}(l-1)\,\mathbf{a}^T(l)$.
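For illustration, the matrix-form recursion (16)-(18) for a single frequency bin can be sketched as follows; the variable names and the initialization of $\mathbf{D}$ are our assumptions, not specifications from the paper.

```python
# Minimal sketch of the matrix-form SLS recursion (16)-(18) for one frequency
# bin. Initializing D as a large multiple of the identity is an assumption.
import numpy as np

def sls_step(theta, D, phi_11, phi_i1):
    """One SLS update for a single frequency bin.
    theta: current estimate [P_i, Phi_UiY1], complex, shape (2,)
    D:     current inverse Gram matrix D(l-1), complex, shape (2, 2)
    phi_11, phi_i1: current-frame auto-/cross-PSD estimates (scalars)."""
    a = np.array([phi_11, 1.0], dtype=complex)     # a(l), Eq. (10)
    mu = 1.0 / (1.0 + a @ D @ a)                   # mu(l), below Eq. (17)
    gain = mu * (D @ a)                            # mu(l) D(l-1) a^T(l)
    theta = theta + gain * (phi_i1 - a @ theta)    # Eq. (18)
    D = D - mu * np.outer(D @ a, a @ D)            # Eq. (17)
    return theta, D
```

The first component of theta then tracks the RTF estimate $\hat{P}_i(l,\omega)$.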
Rearranging (18) to derive the RTF update formula leads us to
$$\hat{P}_i(l) = \hat{P}_i(l-1) + c(l)\left\{ \hat{\Phi}_{Y_iY_1}(l) - \hat{P}_i(l-1)\,\hat{\Phi}_{Y_1Y_1}(l) \right\} \tag{20}$$
where
$$c(l) = \frac{(l-1)\,\hat{\Phi}_{Y_1Y_1}(l) - \gamma(l-1)}{(l-1)\,\hat{\Phi}_{Y_1Y_1}^2(l) - 2\gamma(l-1)\,\hat{\Phi}_{Y_1Y_1}(l) - \gamma^2(l-1) + l\,\beta(l-1)} \tag{21}$$
$$\beta(l) \triangleq \sum_{k=1}^{l} \hat{\Phi}_{Y_1Y_1}^2(k) = \beta(l-1) + \hat{\Phi}_{Y_1Y_1}^2(l) \tag{22}$$
$$\gamma(l) \triangleq \sum_{k=1}^{l} \hat{\Phi}_{Y_1Y_1}(k) = \gamma(l-1) + \hat{\Phi}_{Y_1Y_1}(l) \tag{23}$$
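Equivalently, the scalar update (20)-(23) can be vectorized over all frequency bins. The sketch below is one possible implementation; the zero initialization and the instantaneous CPSD estimates are illustrative assumptions.

```python
# Sketch of the scalar SLS RTF update (20)-(23), applied independently per
# frequency bin. Initialization and the CPSD estimator are assumptions.
import numpy as np

class SequentialRTF:
    def __init__(self, num_bins):
        self.P = np.zeros(num_bins, dtype=complex)  # RTF estimates, Eq. (20)
        self.beta = np.zeros(num_bins)              # running sum of PSD^2, Eq. (22)
        self.gamma = np.zeros(num_bins)             # running sum of PSD, Eq. (23)
        self.l = 0                                  # frame counter

    def update(self, Y_i, Y_1):
        """Y_i, Y_1: complex spectra of the current frame, shape (num_bins,)."""
        self.l += 1
        l = self.l
        phi_11 = np.abs(Y_1) ** 2                   # auto-PSD estimate
        phi_i1 = Y_i * np.conj(Y_1)                 # cross-PSD estimate
        num = (l - 1) * phi_11 - self.gamma         # Eq. (21) numerator
        den = ((l - 1) * phi_11**2 - 2 * self.gamma * phi_11
               - self.gamma**2 + l * self.beta)     # Eq. (21) denominator
        c = np.divide(num, den, out=np.zeros_like(num),
                      where=np.abs(den) > 1e-12)    # gain c(l), guarded
        self.P += c * (phi_i1 - self.P * phi_11)    # Eq. (20)
        self.beta += phi_11**2                      # Eq. (22)
        self.gamma += phi_11                        # Eq. (23)
        return self.P
```

Feeding successive STFT frames of the $i$th and the reference microphones into update() yields a per-frame RTF track.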
Equation (18) resembles the update of the recursive least squares (RLS) algorithm. Unlike RLS, however, the SLS formulation seeks the solution that minimizes the total estimation error with equal weight placed on each error component, from the start of the adaptation process to the latest time frame. The RLS technique, on the other hand, weighs the contribution of each error component according to its temporal proximity to the present time by assigning a "forgetting factor" between zero and one, as shown below [5]:
$$\zeta(l,i) = \lambda^{l-i}, \quad i = 1, 2, \ldots, l \tag{24}$$
with the minimized total error defined as
$$E(l) = \sum_{i=1}^{l} \zeta(l,i)\,|\varepsilon(i)|^2 \tag{25}$$
It should be apparent that $\lambda$ equals 1 in the case of the SLS method. With $\lambda$ less than 1, the RLS method exhibits adaptability with a limited memory, while the SLS method retains its memory indefinitely. In this sense, the SLS method yields parameters identical to those calculated by the BLS method up to the batch length of the BLS method. It can be inferred that the SLS method results in smaller errors than the RLS method as long as the sound source remains in the same direction. Owing to its shorter effective memory, the RLS method is expected to produce a better solution in the early stage of adaptation after a significant change in the sound source direction. Nevertheless, the SLS method eventually reduces the overall error below that of the RLS once it accumulates enough input data to adapt to the change. A case study comparing the SLS and the RLS for least-squares estimation of average power in CMOS circuits found that the SLS delivered better performance [6].
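For comparison, the matrix-form sketch above becomes an RLS recursion if the inverse Gram matrix is discounted by $\lambda$ before each update, and setting $\lambda = 1$ recovers the SLS. This follows the standard RLS formulation [5] and is not part of the proposed method.

```python
# RLS-style variant of sls_step: lam < 1 forgets old data; lam == 1 is SLS.
import numpy as np

def rls_step(theta, D, phi_11, phi_i1, lam=0.98):
    a = np.array([phi_11, 1.0], dtype=complex)
    D = D / lam                                    # exponentially discount memory
    mu = 1.0 / (1.0 + a @ D @ a)
    gain = mu * (D @ a)
    theta = theta + gain * (phi_i1 - a @ theta)
    D = D - mu * np.outer(D @ a, a @ D)
    return theta, D
```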
One disadvantage of having an infinite influential memory, as is the case with the SLS, is that the method may not be nimble in adapting to an abrupt change of the sound source. One way of correcting this problem is to reset the memory once it is determined that a change in the sound source has occurred. If there is an alternate means of alerting the beamformer to a significant change, we contend that the SLS technique yields a better overall result.
In considering the convergence of the parameter estimates as $l \to \infty$, Nassiri-Toussi and Ren [7] showed that parameters estimated by least-squares-type minimization algorithms converge to their true values provided that the estimation error is white noise. The numerical errors and system noise here can reasonably be treated as white noise. Under this condition, the least squares solution converges as the estimation period grows, and since the proposed method reformulates the BLS estimation to enable parameter estimation in a sequential manner, the SLS estimates inherit this property. The estimates can behave unstably during the first few hundred milliseconds; however, with a sufficient number of input samples, the SLS estimates are expected to converge to those derived from the BLS estimation.
The next question, then, concerns the length of input data sufficient for tolerable performance of the estimated RTF. We experimentally analyzed the convergence of the RTF estimates in terms of both speed and accuracy relative to the values obtained from the BLS.

4. Experiments

The mean square error (MSE) is used for evaluating the performance. The MSE represents the difference between the target RTF and the estimated RTF. It is measured at each segmented frame of the input signal for every microphone except the first (reference) microphone, and is defined as
$$\mathrm{MSE}(l) = \frac{1}{M-1} \sum_{i=2}^{M} \frac{1}{\pi} \int_{0}^{\pi} \left| P_i(\omega) - \hat{P}_i(l,\omega) \right|^2 d\omega \tag{26}$$
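In a discrete implementation, the integral in (26) reduces to an average over frequency bins; a minimal sketch follows, where the array shapes are our assumptions.

```python
# Discrete approximation of the MSE metric (26) over the frequency bins.
import numpy as np

def rtf_mse(P_true, P_est):
    """P_true, P_est: complex arrays of shape (M-1, num_bins) holding the
    target and estimated RTFs for microphones 2..M at one frame."""
    per_mic = np.mean(np.abs(P_true - P_est) ** 2, axis=1)  # per-mic integral
    return np.mean(per_mic)                                 # average over mics
```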
The target RTF is obtained with a room impulse response (RIR) generator [8], which synthesizes impulse responses for a specified simulated environment. The input signals are generated by filtering the speech signal with the RIR. The simulated room has a size of 4 × 6 × 3 m (width, length and height), and the reverberation time is set to 0.128 s. The four-microphone array (spaced by 5 cm) is located in the middle of the room, as depicted in Figure 2. The distance between the sound source and the microphone array is 0.3 m. The source is initially set to −45° with respect to the center normal of the microphone array, as depicted in Figure 2. The experiment began with the source at −45° for the first 14 s; the source then moved instantaneously to −40°, where the signal was collected for the next 10.5 s. Finally, the source was moved to −10°, and the acoustic signal was recorded for the remainder of the experiment. It should be noted that we do not need to know the exact location of the sound object, only whether its location has changed. To detect such changes, we rely on the visual sensor/algorithm described in [9]. This accounts for the situation in which the sound object moves without making any sound, since a sound-based object tracking algorithm can usually lose the object's track during a long pause (silence).
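The setup can be reproduced approximately in Python, assuming the rir_generator package (a Python port of the RIR generator in [8]) is available; the sampling rate, array height, and source-angle geometry below are our assumptions, and the random signal stands in for a real speech recording.

```python
# Sketch of synthesizing the experiment's inputs with the rir_generator
# package, a Python port of [8]. Sampling rate, heights, and the angle
# convention are illustrative assumptions.
import numpy as np
import rir_generator as rir
from scipy.signal import fftconvolve

fs = 16000                                    # assumed sampling rate
room = [4.0, 6.0, 3.0]                        # width, length, height (m)
center = np.array([2.0, 3.0, 1.5])            # array center, middle of the room
mics = [center + [dx, 0.0, 0.0]               # 4 mics, 5 cm spacing
        for dx in (-0.075, -0.025, 0.025, 0.075)]
angle = np.deg2rad(-45.0)                     # initial source angle
src = center + 0.3 * np.array([np.sin(angle), np.cos(angle), 0.0])  # 0.3 m away

h = rir.generate(c=340, fs=fs, r=np.array(mics), s=src, L=room,
                 reverberation_time=0.128, nsample=2048)  # (nsample, 4)

speech = np.random.randn(fs * 5)              # placeholder for a speech signal
y = np.stack([fftconvolve(speech, h[:, i])[:len(speech)] for i in range(4)])
```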
We conducted the evaluation to compare two conventional methods, the BLS method [1,2] and the LMS method [3], with the proposed SLS method. Figure 3 and Table 1 summarize the MSEs evaluated for the three methods. The "no reset" label in Figure 3a and Table 1 means that the RTF estimates of the BLS and SLS methods were not reinitialized after the movement of the target. The proposed method shows a lower MSE than the BLS method on average because the cumulative nature of its RTF estimation yields more accurate values in the long run. The same cumulative nature, however, caused it to have greater errors in the earlier time frames after the speaker moved to −10° (at 24.5 s). The adaptive nature of the SLS-based estimator corrected its RTF values to values more accurate than those computed by the batch method after about five seconds. The errors exhibited by the SLS after 24.5 s are of an artificial nature in that the speaker location changed abruptly from −40° to −10°. In actual beamforming applications that isolate a subject speaker from others, it is safe to assume that the speaker moves from one location to another at moderate speed; it is very unlikely that a human speaker would move instantaneously by 30°, as was the case in the experiment. The cumulative and gradual nature of the RTF update in the SLS approach is therefore more suitable for tracking and isolating a moving speaker.
In comparison to the LMS method, Figure 3a shows that the performance of the proposed method declined when the sound source moved abruptly by a large angle. Since our sequential method does not include a speech detector that would prompt re-initialization of the RTF estimates upon movement of the source, the data corresponding to previous source positions adversely affect the RTF estimate when the source moves by a large angle. Owing to the speech detector coupled with its recursively updated RTF estimation, the LMS method shows reasonable performance after the sound source moved abruptly. The LMS method, however, was shown to be sensitive to noise, as its performance degraded significantly in the 5–9 s and 19–21 s intervals. Incorporating the speech detector into the processing may have led to incorrect RTF estimates as the detector reacted to false alarms in these time intervals.
In another comparison of the three methods, depicted in Figure 3b, the RTFs were re-initialized for both the batch method and the sequential method when the target source moved. Since the LMS method inherently adapts to environmental changes [3], its initialization was unnecessary. In this evaluation, the proposed method shows the best performance in Table 1. The batch method needs an input data block for its initialization to estimate the RTF; therefore, high MSE values appear in the durations corresponding to the initial input data blocks: 0–3.2 s, 14.5–17.7 s and 25–28.2 s. For the BLS to be capable of estimating the RTF, a block of at least 3.2 s was required. The experiments do not consider the case in which the user moves within a single BLS block, so that the results reflect the performance of the algorithm itself rather than the effect of insufficient data. Regardless of the block size and algorithm, the RTF value should be re-initialized with the corresponding data; otherwise, the result is as depicted in Figure 3a. That experiment simulated the following situation:
  • The targeted user started to speak at −45° and paused.
  • (5° change) The user moved to −40° and spoke again at 14 s, then paused.
  • (30° change) The user moved again to −10° and spoke again at 24.5 s.
Figure 3a shows that if the RTF is not re-initialized, the estimation error grows as the angular displacement increases: comparing the 5° change (−45° to −40°) with the 30° change (−40° to −10°), the MSE gap increased significantly for the 30° change.
In the two experiments conducted, the proposed SLS method with re-initialization of the RTFs demonstrated the best performance among the considered methods in the case of a moving sound source. This result is promising, considering that angular position changes of a sound source of interest occur frequently in real-life environments, and such position changes can be detected by another sensor, as assumed in [9]. It must be noted, however, that the proposed method performed well when the source location change was moderate, such as the 5° displacement considered in the experiment. For following acoustic speech from a slowly moving source, the SLS may perform well even without any source position change detection in the current implementation.

5. Conclusions

We have presented a sequential approach to estimating the RTF for beamforming. The SLS method recursively updates the RTF estimates with the current input signal. Thus, it requires a minimal initialization period for data collection and results in a significant reduction in the memory requirement. In addition to these advantages over conventional estimation methods, the proposed SLS is shown to improve the accuracy of the estimated RTF. We therefore conclude that the proposed method is efficient and effective in estimating the RTF, and limited experimental trials have shown that it exhibits notable improvements over existing methods.

Funding

This research received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Gannot, S.; Burshtein, D.; Weinstein, E. Signal enhancement using beamforming and nonstationarity with applications to speech. IEEE Trans. Signal Process. 2001, 49, 1614–1626.
  2. Reuven, G.; Gannot, S.; Cohen, I. Dual-source transfer-function generalized sidelobe canceller. IEEE Trans. Audio Speech Lang. Process. 2008, 16, 711–727.
  3. Cohen, I. Relative transfer function identification using speech signals. IEEE Trans. Speech Audio Process. 2004, 12, 451–459.
  4. Scharf, L. Statistical Signal Processing: Detection, Estimation, and Time Series Analysis; Addison-Wesley: Boston, MA, USA, 1991; pp. 384–386.
  5. Haykin, S. Adaptive Filter Theory, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1996; p. 564.
  6. Murugavel, A.K.; Ranganathan, N.; Chandramouli, R.; Chavali, S. Least-square estimation of average power in digital CMOS circuits. IEEE Trans. Very Large Scale Integr. (VLSI) Syst. 2002, 10, 55–58.
  7. Nassiri-Toussi, K.; Ren, W. On the convergence of least squares estimates in white noise. IEEE Trans. Autom. Control 1994, 39, 364–368.
  8. RIR Generator. Available online: https://www.audiolabs-erlangen.de/fau/professor/habets/software/rir-generator (accessed on 28 October 2020).
  9. Beh, J.; Lee, T.; Ahn, S.; Kim, H.; Han, D.K.; Ko, H. Enabling directional human-robot speech interface via adaptive beamforming and spatial noise reduction. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, USA, 28 October–2 November 2007; pp. 3454–3459.
Figure 1. Observed signals modeling in the microphone array.
Figure 2. Experimental setting.
Figure 3. MSE as a function of time: (a) "no reset" case; (b) "reset" case.
Table 1. Averaged MSEs.

Method | "No Reset" Case (Averaged 0–24.5 s, Figure 3a) | "Reset" Case (Averaged over Entire Evaluation Period, Figure 3b)
BLS    | 0.5978                                         | 0.4817
LMS    | 0.2360                                         | 0.1723
SLS    | 0.2052                                         | 0.1372