Article

Options for Performing DNN-Based Causal Speech Denoising Using the U-Net Architecture †

Department of Electronic Engineering, National I-Lan University, No. 1, Sec. 1, Shen-Lung Road, I-Lan 26047, Taiwan
* Author to whom correspondence should be addressed.
This paper is an expanded version of Hu, H.-T.; Tsai, H.-H.; Lee, T.-T. Suitable domains for causal speech denoising using DNN with U-Net architecture. In Proceedings of the 7th International Conference on Knowledge Innovation and Invention 2024 (ICKII 2024), Nagoya, Japan, 16–18 August 2024.
Appl. Syst. Innov. 2024, 7(6), 120; https://doi.org/10.3390/asi7060120
Submission received: 30 September 2024 / Revised: 21 November 2024 / Accepted: 26 November 2024 / Published: 29 November 2024
(This article belongs to the Section Information Systems)

Abstract: Speech enhancement technology seeks to improve the quality and intelligibility of speech signals degraded by noise, particularly in telephone communications. Recent advancements have focused on leveraging deep neural networks (DNN), especially U-Net architectures, for effective denoising. In this study, we evaluate the performance of a 6-level skip-connected U-Net constructed using either conventional convolution activation blocks (CCAB) or innovative global local former blocks (GLFB) across different processing domains: temporal waveform, short-time Fourier transform (STFT), and short-time discrete cosine transform (STDCT). Our results indicate that the U-Nets achieve higher signal-to-noise ratio (SNR) and perceptual evaluation of speech quality (PESQ) scores when applied in the STFT and STDCT domains, with comparable short-time objective intelligibility (STOI) scores across all domains. Notably, the GLFB-based U-Net outperforms its CCAB counterpart in metrics such as CSIG, CBAK, COVL, and PESQ, while maintaining fewer learnable parameters. Furthermore, we propose domain-specific composite loss functions, considering the acoustic and perceptual characteristics of the spectral domain, to enhance the perceptual quality of denoised speech. Our findings provide valuable insights that can guide the optimization of DNN designs for causal speech denoising.

1. Introduction

Most recorded speech signals are affected by noise, which degrades quality and hinders intelligibility. Speech enhancement techniques aim to maximize the perceptual quality of speech signals disturbed by background noise and reverberation. Background noise may include environmental sounds and instrumental interference, while reverberation arises from reflections in the transmission path. Although both types of distortion can coexist and complicate the enhancement task, effective solutions are achievable using deep neural networks (DNNs) [1,2], which aim to predict clean speech from corrupted inputs. Accordingly, the scope of this study is limited to speech denoising, without loss of generality.
Speech denoising is crucial for applications such as audio and video calls, hearing aids, and automatic speech recognition systems. While traditional statistical signal processing approaches have addressed this problem for years, recent research has shifted towards machine learning techniques that learn from real-world data. Following the success of deep learning in various classification and regression tasks [3], there has been a growing interest in applying DNNs for speech denoising.
The core concept of DNN-based speech denoising involves training models to learn the complex mapping from noise-corrupted speech representations to their clean counterparts. This kind of approach offers two significant advantages: (1) it operates without requiring knowledge of the statistical properties of the speech and noise, and (2) it can handle fast-varying non-stationary noise. Several effective DNN architectures have emerged, with the U-Net becoming a prominent choice for speech denoising. Initially designed for biomedical image segmentation [4], the U-Net has proven highly adaptable and effective for speech applications [2,5,6,7,8,9]. Its U-shaped structure comprises an encoder and a decoder linked by a bottleneck. The encoder compresses input data into a lower-dimensional representation, capturing essential features, while the decoder reconstructs the data to its original dimensions with improved output. The U-Net architecture preserves high-resolution features, facilitates hierarchical feature extraction, supports efficient training, and enhances generalization, making it a powerful tool in DNN-based speech denoising.
When employing U-Net for speech denoising, the input can be represented in various forms, such as time-domain waveforms or spectral transformations (STFT and STDCT). Previous studies often focused on a specific domain and adjusted the U-Net’s structure for optimization. However, discussions on the applicability of a U-Net model across multiple domains remain scarce. This paper aims to develop a versatile U-Net model and evaluate its denoising efficacy on narrowband speech sampled at 8 kHz. Through comparative analyses of experimental results, we hope to identify the domain that offers the greatest advantages for effective speech denoising.
The contributions of this paper are threefold. Firstly, we establish a comparison framework for assessing U-Net performance across different processing domains. Secondly, our examination of classical and advanced U-Nets justifies the design choices for layer configurations, facilitating a balance between model complexity and computational efficiency. Thirdly, upon identifying the optimal domain for DNN-based speech denoising, we explore the use of composite loss functions to enhance perceptual quality further.
The remainder of this paper is structured as follows: Section 2 outlines recent technical developments in the field. Section 3 discusses the U-Net architecture for speech denoising, including network design, causality implementation, input arrangements, and loss functions across domains. Section 4 presents experimental settings and performance evaluations. Conclusions are drawn in Section 5.

2. Related Works and Research Planning

Early DNN-based speech denoising methods often utilized time-frequency representations to analyze spectral features over time [5]. Recent trends indicate that incorporating phase information can significantly enhance speech quality [10,11]. Because phase information is present in both the raw temporal waveform and its spectral transforms, DNN-based phase-aware speech denoising can be carried out straightforwardly using the speech waveform as the input [12,13,14]. For denoising conducted in the spectral domain [2,5,6,7,8], the short-time Fourier transform (STFT) representation, with the real and imaginary (RI for short) components arranged in sequence, is the most popular choice.
DNN-based speech enhancement can be implemented in the form of mapping or masking. The masking approach estimates a suppression gain for each target value [2,15], while mapping directly predicts the output values [5,16,17]. Although the masking approach provides an auxiliary constraint that improves consistency with desired outputs, its advantages diminish as DNNs become more proficient.
Causality is crucial for real-time applications, as it ensures that DNNs only utilize past and present features. Two common methods for implementing causal DNN-based speech denoising include recurrent networks [18,19] (such as long short-term memory, LSTM [20]) and frame-buffering techniques that compile past frame data into a buffer to guide the DNN during denoising [21,22].
A well-defined loss function is essential for training DNNs in speech denoising, as it quantifies the alignment between the DNN’s predictions and target outputs. Here, the goal is to minimize noise while preserving original speech characteristics, balancing the trade-off between denoising and potential speech distortion. Recent advancements in loss functions that optimize both magnitude and phase spectra show promise [10]. Additionally, power compression has proven beneficial in enhancing denoising performance [23].
In this study, we adopt the frame-buffering approach to construct two representative causal U-Nets (classical and advanced) to evaluate the most suitable processing domain for speech denoising. The classical U-Net utilizes multi-level layers of conventional convolution and activation, while the advanced U-Net employs global local former blocks (GLFB) as introduced in [24]. After identifying the optimal domain, we will develop domain-specific loss functions tailored to the domain’s characteristics.

3. Speech Denoising

Based on the discussions above, we elaborate on the processing framework, DNN architecture, input arrangement, and loss function required for subsequent investigation.

3.1. Processing Framework

A noisy speech signal $y[n]$ is commonly modeled as the sum of clean speech $x[n]$ and additive noise $z[n]$, i.e.,
$$y[n] = x[n] + z[n] \quad (1)$$
Since $y[n]$ may vary in length, the denoising process is generally performed using a frame-based overlap-and-add (OLA) method. That is, the noisy speech signal $y[n]$ is first divided into frames of fixed length, each overlapping with its adjacent frames, and then weighted by a window $w[n]$, expressed as follows:
$$\begin{aligned} y_w^{(m)}[n] &= y[n + mL_s]\,w[n] && \text{for } 0 \le n \le L_f - 1,\; 0 \le m \le M - 1, \\ &= y[i]\,w[i - mL_s] && \text{for } i = n + mL_s,\; 0 \le i \le (M-1)L_s + L_f - 1, \end{aligned} \quad (2)$$
where $y_w^{(m)}[n]$ denotes the windowed noisy speech signal at the $m$th frame, $L_f$ represents the frame length, and $L_s$ corresponds to the shift distance between succeeding frames. Thus, the overlap between two adjacent frames is $L_f - L_s$ samples. The function $w[n]$ is the periodic form of the Hamming window:
$$w[n] = \begin{cases} 0.54 - 0.46\cos\!\left(\dfrac{2\pi n}{L_f}\right), & n = 0, 1, \ldots, L_f - 1, \\ 0, & \text{otherwise}. \end{cases} \quad (3)$$
After frame partition, each $y_w^{(m)}[n]$ can be fed into the DNN model to perform denoising, and the result generated by the DNN is the denoised frame, termed $\hat{y}_w^{(m)}[n]$. Assembling the denoised frames yields a complete speech segment. Notably, during the re-synthesis stage of the OLA, the amplitude must be rescaled synchronously, as follows:
$$\hat{y}[i] = \frac{\displaystyle\sum_{m=0}^{M-1} \hat{y}_w^{(m)}[n]\Big|_{n = i - mL_s}}{\displaystyle\sum_{m=0}^{M-1} w[i - mL_s]} = \frac{\displaystyle\sum_{m=0}^{M-1} \hat{y}(i)\, w[i - mL_s]}{\displaystyle\sum_{m=0}^{M-1} w[i - mL_s]}, \quad (4)$$
where $\hat{y}(i)$ denotes the denoised output. The denominator in the above expression restores the amplitude to the original level regardless of which window is used. Figure 1 illustrates the concept of DNN-based speech denoising in the temporal domain, where the DNN is responsible for mapping $y_w^{(m)}[n]$ to $\hat{y}_w^{(m)}[n]$ in each frame.
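To make the OLA procedure concrete, the following NumPy sketch (illustrative only, not the MATLAB code used in this study) implements Equations (2)-(4) with the periodic Hamming window of Equation (3); the helper names frame_signal and overlap_add are our own labels.

```python
import numpy as np

def hamming_periodic(L_f):
    # Periodic Hamming window, Equation (3).
    n = np.arange(L_f)
    return 0.54 - 0.46 * np.cos(2 * np.pi * n / L_f)

def frame_signal(y, L_f=256, L_s=64):
    # Split y into overlapping frames and apply the window, Equation (2).
    w = hamming_periodic(L_f)
    M = 1 + (len(y) - L_f) // L_s                 # number of complete frames
    frames = np.stack([y[m * L_s:m * L_s + L_f] * w for m in range(M)])
    return frames, w

def overlap_add(frames, w, L_s=64):
    # Re-synthesize a signal from (possibly denoised) frames, Equation (4):
    # the summed window envelope in the denominator restores the amplitude.
    M, L_f = frames.shape
    length = (M - 1) * L_s + L_f
    num = np.zeros(length)
    den = np.zeros(length)
    for m in range(M):
        num[m * L_s:m * L_s + L_f] += frames[m]
        den[m * L_s:m * L_s + L_f] += w
    return num / den

# Round-trip check: framing followed by OLA reproduces the covered samples.
y = np.random.randn(8000)
frames, w = frame_signal(y)
y_rec = overlap_add(frames, w)
assert np.allclose(y_rec, y[:len(y_rec)])
```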
If the object processed by the DNN is a sequence of spectral components, a common practice is to convert the windowed speech signal $y_w^{(m)}[n]$ using the discrete Fourier transform (DFT). The resulting output is widely known as the STFT representation, termed $Y_{DFT}^{(m)}[k]$:
$$Y_{DFT}^{(m)}[k] = \sum_{n=0}^{L_f-1} y_w^{(m)}[n]\, e^{-i\frac{2\pi}{L_f}kn}, \quad k = 0, 1, 2, \ldots, L_f - 1. \quad (5)$$
A well-designed DNN is expected to recover the clean STFT coefficients $X_{DFT}^{(m)}[k]$ from the noisy input $Y_{DFT}^{(m)}[k]$; that is, its output $\hat{Y}_{DFT}^{(m)}[k]$ should satisfy $\hat{Y}_{DFT}^{(m)}[k] \approx X_{DFT}^{(m)}[k]$. To obtain the denoised speech signal, one must first convert $\hat{Y}_{DFT}^{(m)}[k]$ to $\hat{y}_w^{(m)}[n]$ through the inverse DFT, as below, and then plug $\hat{y}_w^{(m)}[n]$ into Formula (4):
$$y_w^{(m)}[n] = \frac{1}{L_f}\sum_{k=0}^{L_f-1} Y_{DFT}^{(m)}[k]\, e^{i\frac{2\pi}{L_f}kn}, \quad n = 0, 1, 2, \ldots, L_f - 1. \quad (6)$$
Figure 2 depicts the denoising process in the frequency domain, where we employ the spectrogram to provide an insightful inspection of STFT coefficients across frames.
STDCT is another option in addition to STFT. If the denoising DNN operates in the STDCT domain, the input changes from $y_w^{(m)}[n]$ to its DCT transformation $Y_{DCT}^{(m)}[k]$, defined below:
$$Y_{DCT}^{(m)}[k] = \sqrt{\frac{2}{L_f}} \sum_{n=0}^{L_f-1} y_w^{(m)}[n]\, \frac{1}{\sqrt{1 + \delta[k]}} \cos\!\left[\frac{\pi k}{L_f}\left(n + \frac{1}{2}\right)\right], \quad k = 0, 1, 2, \ldots, L_f - 1, \quad (7)$$
where $\delta[k]$ denotes the Kronecker delta function, equal to 1 when $k = 0$ and 0 otherwise. The Kronecker delta term thus only affects the direct-current coefficient $Y_{DCT}^{(m)}[0]$ and makes the transformation matrix orthogonal.
Similar to the situation in the STFT domain, the processed STDCT coefficients $\hat{Y}_{DCT}^{(m)}[k]$ need to be converted into waveform frames before applying the OLA (i.e., Equation (4)) to retrieve the denoised speech signal. The formula for converting $Y_{DCT}^{(m)}[k]$ back to $y_w^{(m)}[n]$ is the inverse (orthonormal) DCT:
$$y_w^{(m)}[n] = \sqrt{\frac{2}{L_f}} \sum_{k=0}^{L_f-1} Y_{DCT}^{(m)}[k]\, \frac{1}{\sqrt{1 + \delta[k]}} \cos\!\left[\frac{\pi k}{L_f}\left(n + \frac{1}{2}\right)\right], \quad n = 0, 1, 2, \ldots, L_f - 1. \quad (8)$$
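As a minimal sketch, under the assumption that the orthonormal DCT-II/DCT-III pair in SciPy matches Equations (7) and (8), a single frame can be moved into and out of the STDCT domain as follows (illustrative, not the implementation used in this study).

```python
import numpy as np
from scipy.fft import dct, idct

L_f = 256
y_w = np.random.randn(L_f)                      # one windowed frame

# Equation (7): forward STDCT (orthonormal DCT-II).
Y_dct = dct(y_w, type=2, norm='ortho')

# ... a denoising DNN would operate on Y_dct here ...

# Equation (8): inverse STDCT back to the waveform frame, ready for the OLA.
y_rec = idct(Y_dct, type=2, norm='ortho')

assert np.allclose(y_rec, y_w)                          # orthogonal transform: exact round trip
assert np.isclose(np.sum(y_w**2), np.sum(Y_dct**2))     # energy preserved (Parseval)
```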

3.2. DNN Architecture

The U-Net architecture is widely recognized for its effectiveness in denoising tasks. Building on previous discussions, we propose a 6-level skip-connected U-Net for our experiments, implementing it with two distinct component modules: the CCAB and the more advanced GLFB.
Our classical U-Net implemented with CCABs is based on [6,24], with two essential modifications. Firstly, we replace masking estimation with direct mapping at the final output. Secondly, following the principle for causal speech denoising in [22], we incorporate a frame buffer that collects input data from the current and past seven frames, expanding the input from one-dimensional sequences to two-dimensional (2D) feature maps. The other settings include a frame length $L_f$ of 256 and a shift distance $L_s$ of 64, resulting in a 32 ms window span with an 8 ms stride and a 40 ms delay in real-time processing. Consequently, the input to the U-Net is an array of size $256 \times 8$.
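The causal frame-buffering arrangement can be sketched as follows; the helper causal_inputs and the zero-padding at the start of an utterance are illustrative assumptions rather than the exact implementation used here.

```python
import numpy as np

L_f, N_PAST = 256, 7        # frame length and number of past frames kept in the buffer

def causal_inputs(features):
    """features: (M, L_f) array of per-frame feature vectors (waveform, STFT-RI, or STDCT).
    Returns an (M, L_f, N_PAST + 1) array; the last column is the current frame and the
    preceding columns hold the seven previous frames (zero-padded at the utterance start)."""
    M = features.shape[0]
    padded = np.vstack([np.zeros((N_PAST, L_f)), features])
    return np.stack([padded[m:m + N_PAST + 1].T for m in range(M)])

feats = np.random.randn(100, L_f)               # 100 frames of some feature sequence
x = causal_inputs(feats)
print(x.shape)                                  # (100, 256, 8): one 256 x 8 map per frame
```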
The main architecture of the classical U-Net, depicted in Figure 3, features symmetrical encoder and decoder structures [6,24], each comprising six layers of CCABs. Each submodule on the encoder side contains a convolution followed by layer normalization [25] and a leaky rectified linear unit (ReLU) [26], while the convolution is changed to a transposed convolution in the decoder. Figure 4 presents the CCABs used in the classical U-Net. A two-layer dense block [27] is added at the encoder-decoder junction to enhance latent feature integration. In Figure 3, we also label the hyperparameter settings of involved convolutions in the form of “F: kernel size, output channels, S: strides.”
The contracting path of the encoder compresses the input features into a compact representation. Meanwhile, the expanding path of the decoder reconstructs the target output. Feature maps that capture local details from previous layers in the contracting path are concatenated with the upsampled feature maps in the expanding pipeline via skip connections.
Before feeding the data into the encoder, we additionally project the input features into higher dimensional space, i.e., keeping the size of the feature map unchanged but increasing the number of channels. The number of channels in the feature map is doubled for every two down-sampling layers and halved for every two up-sampling layers. Another projection layer at the terminal end is responsible for projecting the denoised features back into a single-channel output.
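For orientation, a CCAB-style encoder/decoder submodule might look like the following PyTorch sketch; the kernel sizes, strides, padding, and channel counts shown here are placeholders rather than the exact hyperparameters labeled in Figure 3.

```python
import torch
import torch.nn as nn

class ChannelLayerNorm(nn.Module):
    """Layer normalization applied over the channel axis of an (N, C, H, W) tensor."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        x = x.permute(0, 2, 3, 1)                # (N, H, W, C)
        x = self.norm(x)
        return x.permute(0, 3, 1, 2)             # back to (N, C, H, W)

def encoder_ccab(in_ch, out_ch, kernel=(5, 2), stride=(2, 1)):
    """Convolution -> layer normalization -> leaky ReLU (encoder side)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel, stride, padding=(2, 0)),
        ChannelLayerNorm(out_ch),
        nn.LeakyReLU(0.2),
    )

def decoder_ccab(in_ch, out_ch, kernel=(5, 2), stride=(2, 1)):
    """Transposed convolution -> layer normalization -> leaky ReLU (decoder side)."""
    return nn.Sequential(
        nn.ConvTranspose2d(in_ch, out_ch, kernel, stride, padding=(2, 0)),
        ChannelLayerNorm(out_ch),
        nn.LeakyReLU(0.2),
    )

x = torch.randn(1, 16, 256, 8)                   # a 256 x 8 input map with 16 channels
print(encoder_ccab(16, 16)(x).shape)             # the feature axis is halved by the stride
```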
The advanced U-Net retains the overall architecture but replaces the CCABs with the GLFBs developed in [24]. The GLFB shares structural features with the transformer architecture [28], namely its global and local modeling. As depicted in Figure 5, which shows the detailed composition of a GLFB, the global section involves pointwise convolution, depthwise separable convolution, gating, and channel attention, while the local section mainly involves pointwise convolutions and gating. Inside the GLFB, the gating mechanism replaces the commonly used activation function. Because the input and output of a GLFB have the same dimensions, addition-based skip connections are used in the advanced U-Net to maintain dimensional consistency. Furthermore, the advanced U-Net requires auxiliary down-sampling and up-sampling for feature extraction and expansion, using convolution with a kernel size of 2 and a stride of 2 for down-sampling and pixel shuffle [29] for up-sampling. Figure 6 shows these configurations.
While smaller kernel sizes in the GLFB stacks could further reduce parameters, our focus remains on optimizing denoising performance rather than minimizing model size, so we retain the same kernel sizes as the convolutional filters in the classical U-Net. Even so, the advanced U-Net with GLFBs reduces the number of learnable parameters from 612 K to 238.6 K, i.e., to roughly 39% of the classical model's size. Notably, the above U-Net structure (either classical or advanced) is adaptable to spectral- and temporal-domain denoising tasks, using STFT or STDCT inputs and the corresponding inverse transformations to reconstruct speech waveforms.

3.3. Input Arrangements

Input arrangement is crucial to the learning speed and ultimate performance of the U-Net. As indicated earlier in the introduction, both temporal and spectral representations of noisy speech can be adopted as input for denoising tasks. When performing speech denoising in the temporal domain, the input consists solely of sequences of noisy speech waveforms. Figure 7 illustrates the input arrangement in the temporal domain. When considering a transformed domain, the noisy speech signals must be converted into the designated domain. For instance, the STDCT sequence is obtained by applying Equation (7) to a frame of the noisy speech signal. Since an STDCT sequence contains only real values, it can directly serve as input for the U-Net. Figure 8 shows the input arrangement of STDCT coefficients.
The situation is somewhat different when using STFT coefficients as the U-Net’s input, as STFT coefficients consist of complex values with conjugate symmetry between their first and second halves. Let us assume the number of points used in the DFT equals the frame length. Due to the conjugate symmetry of the STFT coefficients, only half of the coefficients are needed to preserve all available information. For a DFT sequence with an even length, the coefficients corresponding to the direct current (DC) and Nyquist frequency (NF) have only real values and require special attention in the data arrangement. Researchers typically adopt the first half of the DFT coefficients plus one extra (i.e., the NF) as input. However, this arrangement appears redundant, as the imaginary parts at both ends are essentially null. To address this issue, we take the first half of the DFT coefficients as input, arranging each coefficient’s real and imaginary parts in alternating order to form a real-valued sequence [30].
Additionally, we insert the real NF value into the imaginary position of the DC term. The final arrangement is shown in Figure 9. This arrangement maintains a consistent input dimension regardless of the domain selected for denoising, allowing the proposed U-Net to be applied across various domains without the need for dimensional rescaling.
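A NumPy sketch of this packing (a schematic reading of Figure 9, with hypothetical helper names) is shown below; the round-trip check confirms that no information is lost.

```python
import numpy as np

def pack_stft_frame(y_w):
    """Pack one windowed frame into a real-valued STFT-RI input of length L_f."""
    L_f = len(y_w)
    Y = np.fft.fft(y_w)                     # full DFT of the real-valued frame
    half = Y[:L_f // 2]                     # bins 0 .. L_f/2 - 1 (conjugate-symmetric rest dropped)
    packed = np.empty(L_f)
    packed[0::2] = half.real                # real parts in even positions
    packed[1::2] = half.imag                # imaginary parts in odd positions
    packed[1] = Y[L_f // 2].real            # Nyquist (real) fills the null imaginary slot of DC
    return packed

def unpack_stft_frame(packed):
    """Undo the packing and return the time-domain frame."""
    L_f = len(packed)
    half = packed[0::2] + 1j * packed[1::2]
    nyquist = half[0].imag                  # recover the Nyquist value from the DC slot
    half[0] = half[0].real                  # restore the purely real DC term
    Y = np.concatenate([half, [nyquist], np.conj(half[1:][::-1])])
    return np.fft.ifft(Y).real

y_w = np.random.randn(256)
assert np.allclose(unpack_stft_frame(pack_stft_frame(y_w)), y_w)
```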

3.4. Loss Functions

Loss functions are critical for training DNNs because they guide the optimization process and gauge how well a DNN performs by measuring the difference between the ideal values and its predictions. It can be shown that, regardless of the domain chosen for speech denoising, using the mean squared error (MSE) to compute the gradient for parameter updating has the same effect. We justify this argument through Parseval’s theorem. Let $\xi[n]$ denote the difference between the denoised speech $\hat{y}_w[n]$ and the clean speech $x_w[n]$ in a single frame, and let $\Xi_{DFT}[k] = \mathrm{DFT}\{\xi[n]\}$ and $\Xi_{DCT}[k] = \mathrm{DCT}\{\xi[n]\}$ be the results of applying the DFT and DCT to $\xi[n]$. Here, we omit the frame index in the superscript while discussing data within the same frame. The MSE losses in the temporal, STFT, and STDCT domains are listed below:
$$\mathcal{L}_{MSE}\big(\hat{y}_w[n], x_w[n]\big) = \frac{1}{L_f}\sum_{n=0}^{L_f-1}\xi^2[n] = \frac{1}{L_f}\sum_{n=0}^{L_f-1}\big(\hat{y}_w[n] - x_w[n]\big)^2 \quad (9)$$
$$\mathcal{L}_{MSE}\big(\hat{Y}_{DFT}[k], X_{DFT}[k]\big) = \frac{1}{L_f}\sum_{k=0}^{L_f-1}\big|\Xi_{DFT}[k]\big|^2 = \frac{1}{L_f}\sum_{k=0}^{L_f-1}\big|\hat{Y}_{DFT}[k] - X_{DFT}[k]\big|^2 \quad (10)$$
$$\mathcal{L}_{MSE}\big(\hat{Y}_{DCT}[k], X_{DCT}[k]\big) = \frac{1}{L_f}\sum_{k=0}^{L_f-1}\Xi_{DCT}^2[k] = \frac{1}{L_f}\sum_{k=0}^{L_f-1}\big(\hat{Y}_{DCT}[k] - X_{DCT}[k]\big)^2 \quad (11)$$
According to Parseval’s theorem [31], the MSEs of $\xi[n]$, $\Xi_{DFT}[k]$, and $\Xi_{DCT}[k]$ are essentially congruent:
$$\mathcal{L}_{MSE}\big(\hat{y}_w[n], x_w[n]\big) = \frac{1}{L_f}\,\mathcal{L}_{MSE}\big(\hat{Y}_{DFT}[k], X_{DFT}[k]\big) = \mathcal{L}_{MSE}\big(\hat{Y}_{DCT}[k], X_{DCT}[k]\big) \quad (12)$$
Based on the above derivation, we can employ MSE as a universal loss function when comparing and analyzing U-Net’s performance in heterogeneous domains.
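A quick numerical check of Equation (12) is given below (illustrative only); note that the DFT is taken with the unnormalized convention of Equation (5), while the DCT is orthonormal.

```python
import numpy as np
from scipy.fft import dct

L_f = 256
xi = np.random.randn(L_f)                          # error signal within one frame

mse_time = np.mean(xi**2)                          # Equation (9)
mse_dft  = np.mean(np.abs(np.fft.fft(xi))**2)      # Equation (10), unnormalized DFT
mse_dct  = np.mean(dct(xi, type=2, norm='ortho')**2)   # Equation (11), orthonormal DCT

# Equation (12): temporal MSE = (1/L_f) x DFT-domain MSE = DCT-domain MSE.
assert np.isclose(mse_time, mse_dft / L_f)
assert np.isclose(mse_time, mse_dct)
```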

4. Experiment and Performance Evaluation

In Section 4, we delve into the experimental settings and performance evaluations essential to understanding the efficacy of our proposed models. This section outlines the datasets employed for model training, provides a comprehensive assessment of model performance, and discusses further considerations of the loss function within the STFT and STDCT domains.

4.1. Datasets for Model Training

In our experiments, the speech samples were sourced from the Centre for Speech Technology Voice Cloning ToolKit (CSTR VCTK) Corpus [32], which includes utterances from 56 individuals (28 males and 28 females). We utilized recordings from 54 of these individuals, each contributing approximately 400 sentences, as training material. The recordings of the remaining two (one male and one female) were set aside for testing. Originally sampled at 48 kHz, these files were down-sampled to 8 kHz for our tests. Noise data were incorporated from the Diverse Environments Multi-channel Acoustic Noise Database (DEMAND) [33], featuring six categories of ambient noise, each with three distinct recordings. During training, noise was randomly mixed with speech at signal-to-noise ratios (SNR) of −5, 5, 10, and 15 dB. For real-time speech denoising, we employed the OLA method with L f = 256 and L s   = 64 at a frame updating rate of 125 times per second, ensuring that processing for each frame was completed within 8 milliseconds.
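For reference, mixing a noise clip with clean speech at a prescribed SNR can be done as in the sketch below (illustrative; corpus loading, resampling, and file handling are omitted, and mix_at_snr is a hypothetical helper).

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals `snr_db`, then mix."""
    reps = int(np.ceil(len(speech) / len(noise)))      # loop the noise if it is too short
    noise = np.tile(noise, reps)[:len(speech)]
    p_speech = np.mean(speech**2)
    p_noise = np.mean(noise**2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

clean = np.random.randn(8000)                          # stand-in for an 8 kHz utterance
noise = np.random.randn(16000)                         # stand-in for a DEMAND recording
noisy = mix_at_snr(clean, noise, snr_db=-5)            # one of the training SNRs
```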
During the training phase, two percent of our data served as the validation set. We selected the Adam optimizer [34] and processed mini-batches of 2048 observations. The training process was empirically set at a maximum of 60 epochs. The best model was identified based on the lowest validation loss. We conducted the above model training in MATLAB® R2024a, utilizing an NVIDIA® 3090 GPU (NVIDIA, Santa Clara, CA, USA) to accelerate processing speed.

4.2. Performance Evaluation

We evaluated the performance of the proposed U-Nets across different domains, focusing on the speech quality and intelligibility of the denoised output. The speech enhancement metrics used included CSIG, CBAK, COVL (proposed by Hu and Loizou [35]), and the Perceptual Evaluation of Speech Quality (PESQ) [36]. CSIG rates speech signal quality, CBAK assesses background noise distortion, and COVL evaluates overall quality, all on a scale from 1 (poor) to 5 (excellent). The PESQ metric ranges from −0.5 to 4.5, with higher scores indicating better speech quality. Additionally, we used the short-time objective intelligibility (STOI) metric [37], which ranges from 0 to 1 (reported here as a percentage), to assess speech intelligibility.
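Readers wishing to reproduce such measurements outside MATLAB may use open-source Python packages for two of the metrics; the sketch below assumes the third-party pesq and pystoi packages (not used in this study) and 8 kHz mono recordings at placeholder paths.

```python
import soundfile as sf        # pip install soundfile
from pesq import pesq         # pip install pesq
from pystoi import stoi       # pip install pystoi

fs = 8000
clean, _ = sf.read("clean_8k.wav")        # placeholder paths to 8 kHz mono recordings
denoised, _ = sf.read("denoised_8k.wav")

pesq_score = pesq(fs, clean, denoised, 'nb')             # narrowband PESQ, -0.5 .. 4.5
stoi_score = stoi(clean, denoised, fs, extended=False)   # STOI, 0 .. 1
print(f"PESQ = {pesq_score:.3f}, STOI = {100 * stoi_score:.2f}%")
```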
Our tests involved corrupting clean speech with 18 different noise types at initial SNRs of −2.5, 2.5, 7.5, and 12.5 dB. We repeated each test ten times to minimize variation and averaged the results for consistency. According to the results presented in Table 1, the U-Net’s performance in enhancing SNR and improving speech quality and intelligibility was robust across all tested domains, with slight variations. For the classical U-Net constructed using CCABs, the SNR improvement was most noteworthy under low-SNR conditions, where the SNR level jumped from −2.5 dB to roughly 13 dB. By contrast, the improvement was less than 8 dB when the initial SNR was 12.5 dB. Notably, the advanced U-Net model, equipped with GLFBs, generally outperformed the classical model with CCABs. Improvements of approximately 0.48 dB in SNR and 0.033 in PESQ were observed for input taken from the STFT domain, and the gains were 0.42 dB in SNR and 0.041 in PESQ for input taken from the STDCT domain.
Our findings suggest that the U-Net’s performance was comparably strong in the STFT and STDCT domains, and marginally superior to that attained using time-domain waveform sequences directly. Furthermore, metrics sensitive to the Fourier spectra, such as CSIG, CBAK, and COVL, also showed an apparent preference for the STFT and STDCT domains over the temporal domain. This adaptability and performance consistency underline the potential of U-Net architectures for a broad range of audio processing applications.

4.3. Further Considerations of Loss Function in the STFT and STDCT Domains

The results in Table 1 demonstrate that the U-Net architecture is practical and efficient for speech enhancement. The performance discussed in the previous section is based on training U-Nets using MSE as the loss function. In DNN-based speech denoising, loss functions that integrate magnitude constraints with complex spectral optimization are commonly used to enhance speech quality and intelligibility. Additionally, power compression has been employed to improve the estimated speech quality further. Given the proposed U-Nets’ superior performance in the STFT and STDCT domains, we apply techniques in [10,23] to refine the loss function, potentially improving speech quality further:
$$\mathcal{L}_{comp}\big(\hat{Y}_{DFT}[k], X_{DFT}[k]\big) = \alpha\,\mathcal{L}_{MSE\_Mag}\big(\big|\hat{Y}_{DFT,\beta}[k]\big|, \big|X_{DFT,\beta}[k]\big|\big) + (1-\alpha)\,\mathcal{L}_{MSE\_RI}\big(\hat{Y}_{DFT,\beta}[k], X_{DFT,\beta}[k]\big) \quad (13)$$
with $\hat{Y}_{DFT,\beta}[k]$ and $X_{DFT,\beta}[k]$ defined as
$$\hat{Y}_{DFT,\beta}[k] = \big|\hat{Y}_{DFT}[k]\big|^{\beta}\,\frac{\hat{Y}_{DFT}[k]}{\big|\hat{Y}_{DFT}[k]\big|} = \big(\hat{Y}_{DFT}[k]\,\hat{Y}_{DFT}^{*}[k]\big)^{\beta/2}\,\frac{\hat{Y}_{DFT}[k]}{\sqrt{\hat{Y}_{DFT}[k]\,\hat{Y}_{DFT}^{*}[k]}};\quad
X_{DFT,\beta}[k] = \big|X_{DFT}[k]\big|^{\beta}\,\frac{X_{DFT}[k]}{\big|X_{DFT}[k]\big|} = \big(X_{DFT}[k]\,X_{DFT}^{*}[k]\big)^{\beta/2}\,\frac{X_{DFT}[k]}{\sqrt{X_{DFT}[k]\,X_{DFT}^{*}[k]}}. \quad (14)$$
It is worth noting that the second term on the right-hand side of Equation (13) also contains the phase information, as the calculation of the power spectra (namely, $\hat{Y}_{DFT}[k]\hat{Y}_{DFT}^{*}[k]$ and $X_{DFT}[k]X_{DFT}^{*}[k]$) involves both the real and imaginary components:
$$\hat{Y}_{DFT}[k]\,\hat{Y}_{DFT}^{*}[k] = \big(\mathrm{Re}\{\hat{Y}_{DFT}[k]\}\big)^2 + \big(\mathrm{Im}\{\hat{Y}_{DFT}[k]\}\big)^2 = Y_{DFT_R}^2[k] + Y_{DFT_I}^2[k],$$
$$X_{DFT}[k]\,X_{DFT}^{*}[k] = \big(\mathrm{Re}\{X_{DFT}[k]\}\big)^2 + \big(\mathrm{Im}\{X_{DFT}[k]\}\big)^2 = X_{DFT_R}^2[k] + X_{DFT_I}^2[k]. \quad (15)$$
In the above context, the subscript $\beta$ attached to a DFT coefficient indicates the exponent used to compress the magnitude. A $\beta$ value between 0 and 1 not only aligns with human auditory perception of sound intensity but also reduces the dynamic range of the spectral coefficients, thus easing network estimation. The parameter $\alpha$ is a mixing ratio that combines the magnitude loss $\mathcal{L}_{MSE\_Mag}(\cdot)$ and the STFT-RI loss $\mathcal{L}_{MSE\_RI}(\cdot)$. The resulting loss function, termed $\mathcal{L}_{comp}(\hat{Y}_{DFT}[k], X_{DFT}[k])$ and referred to as the composite mean squared error (CMSE), reflects the understanding that human auditory perception follows an approximately logarithmic scale and that sensitivities to spectral magnitudes and phases differ. Values of $\alpha = 0.5$ and $\beta = 0.5$ have been reported to achieve satisfactory results [10,23].
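A compact NumPy reading of Equations (13) and (14) is sketched below (for illustration only; the actual training code is not shown in this paper).

```python
import numpy as np

def cmse_stft(Y_hat, X, alpha=0.5, beta=0.5):
    """Composite MSE of Equation (13) for complex STFT coefficients: power
    compression per Equation (14), then a mix of magnitude and RI terms."""
    eps = 1e-12                                                       # avoid division by zero
    Y_c = (np.abs(Y_hat) ** beta) * Y_hat / (np.abs(Y_hat) + eps)     # compressed estimate
    X_c = (np.abs(X) ** beta) * X / (np.abs(X) + eps)                 # compressed target
    loss_mag = np.mean((np.abs(Y_c) - np.abs(X_c)) ** 2)              # magnitude term
    loss_ri  = np.mean(np.abs(Y_c - X_c) ** 2)                        # real + imaginary term
    return alpha * loss_mag + (1 - alpha) * loss_ri

X = np.fft.fft(np.random.randn(256))                                  # toy clean-frame spectrum
Y_hat = X + 0.1 * (np.random.randn(256) + 1j * np.random.randn(256))
print(cmse_stft(Y_hat, X, alpha=0.5, beta=0.5))
```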
Liu et al. [24] employed a similar approach with STDCT coefficients. They used a composite loss function comprising two loss values: the MSE loss calculated from absolute STDCT values and the MSE loss derived from the original polar values. Following the expression outlined in Equation (13), we formulate the composite loss function in the STDCT domain as follows:
$$\mathcal{L}_{comp}\big(\hat{Y}_{DCT}[k], X_{DCT}[k]\big) = \alpha\,\mathcal{L}_{MSE\_Mag}\big(\big|\hat{Y}_{DCT,\beta}[k]\big|, \big|X_{DCT,\beta}[k]\big|\big) + (1-\alpha)\,\mathcal{L}_{MSE\_Polar}\big(\hat{Y}_{DCT,\beta}[k], X_{DCT,\beta}[k]\big) \quad (16)$$
with $\hat{Y}_{DCT,\beta}[k]$ and $X_{DCT,\beta}[k]$ defined as follows:
$$\hat{Y}_{DCT,\beta}[k] = \mathrm{sgn}\big(\hat{Y}_{DCT}[k]\big)\,\big|\hat{Y}_{DCT}[k]\big|^{\beta};\quad X_{DCT,\beta}[k] = \mathrm{sgn}\big(X_{DCT}[k]\big)\,\big|X_{DCT}[k]\big|^{\beta}, \quad (17)$$
where $\mathrm{sgn}(\cdot)$ denotes the sign function.
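A matching sketch of the STDCT composite loss, again for illustration only, differs from the STFT version only in how the compressed coefficients are formed:

```python
import numpy as np
from scipy.fft import dct

def cmse_stdct(Y_hat, X, alpha=0.5, beta=0.5):
    """Composite MSE for real-valued STDCT coefficients: sign-preserving power
    compression, then a mix of absolute-value and signed (polar) MSE terms."""
    Y_c = np.sign(Y_hat) * np.abs(Y_hat) ** beta
    X_c = np.sign(X) * np.abs(X) ** beta
    loss_mag   = np.mean((np.abs(Y_c) - np.abs(X_c)) ** 2)            # magnitude term
    loss_polar = np.mean((Y_c - X_c) ** 2)                            # signed (polar) term
    return alpha * loss_mag + (1 - alpha) * loss_polar

X = dct(np.random.randn(256), type=2, norm='ortho')                   # toy clean-frame STDCT
Y_hat = X + 0.1 * np.random.randn(256)
print(cmse_stdct(Y_hat, X))
```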
Our experiments assessed the effects of four combinations of α and β. Specifically, the combination (α, β) = (0, 1) applies MSE directly to the target coefficients. The setting (α, β) = (0.5, 1), which jointly optimizes the magnitude and complex (RI) spectra, replicates the parameters used in prior studies [9]. The combination (α, β) = (0, 0.5) considers power compression alone, whereas (α, β) = (0.5, 0.5) engages both magnitude estimation and phase recovery with power compression integrated. The choice of α and β at 0.5 reflects their proven efficacy in STFT-based speech denoising applications, and similar results are anticipated for the STDCT sequences.
Both classical and advanced U-Net models were retrained and evaluated under the above four settings. As shown in Table 2 and Table 3, trends in response to adjustments in α and β were similar across both U-Net configurations. Modifying either α or β independently showed minimal impact on all evaluated metrics. Metrics from the advanced U-Net, which incorporates GLFBs, generally surpassed those from the classical U-Net. Our experimental results indicate that simultaneously optimizing phase and magnitude spectra without considering power compression actually lowered perceived quality. Also, applying power compression alone (α, β) = (0, 0.5) can only yield minor improvements. However, combining power compression with balanced magnitude and phase adjustments significantly enhanced speech quality, with improvements of 0.093 and 0.102 in PESQ for the classical U-Net in the STFT and STDCT domains, respectively.
Further data analyses from Table 2 and Table 3 highlight the advantages of substituting CCABs with GLFBs within the U-Net framework, regardless of compared indicators. GLFBs employ several state-of-the-art techniques, including the MetaFormer architecture [38], channel attention, gating mechanism, and depthwise separable convolution, contributing to performance enhancements. Aside from depthwise separable convolution, the specific impact of these techniques warrants further investigation.

5. Conclusions

This study evaluates a 6-level U-Net constructed with either CCABs or GLFBs to assess the efficacy of DNN-based speech denoising across various domains. To ensure causality, the U-Net employs a frame-buffering mechanism that collects feature sequences from the current and previous seven frames. Our experimental results demonstrate consistent enhancements in CSIG, CBAK, COVL, and PESQ for U-Nets operating in the STFT and STDCT domains, outperforming those in the temporal domain. Importantly, the U-Net built with GLFBs features fewer learnable parameters and enhanced denoising efficiency. Given the compatibility of STDCT and STFT with perceptual-based loss functions, we explored domain-specific composite loss functions to improve perceptual quality further. Notable improvements in PESQ and STOI scores were observed when accounting for factors like power compression and the trade-off between spectral magnitudes and phases.
In future work, we plan to expand the capabilities of our denoising DNN based on these findings. Although the proposed U-Net significantly enhances speech quality, the output still remains at 8 kHz narrowband. Our next objective is integrating super-resolution techniques into the denoising DNN to obtain high-quality 32- or even 48-kHz wideband speech.

Author Contributions

Conceptualization, H.-T.H.; data curation, H.-T.H. and T.-T.L.; formal analysis, H.-T.H.; funding acquisition, H.-T.H.; investigation, H.-T.H. and T.-T.L.; project administration, H.-T.H.; resources, H.-T.H. and T.-T.L.; software, H.-T.H. and T.-T.L.; supervision, H.-T.H.; validation, H.-T.H.; visualization, H.-T.H.; writing—original draft, H.-T.H.; writing—review and editing, H.-T.H. and T.-T.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research work was supported by the National Science and Technology Council, Taiwan (R.O.C.) under Grants NSTC 112-2221-E-197-026 and 113-2221-E-197-024.

Data Availability Statement

The speech and noise datasets analyzed during this study are accessible from the CSTR VCTK Corpus [https://www.kaggle.com/datasets/muhmagdy/valentini-noisy (accessed on 24 September 2024)] and DEMAND database [https://www.kaggle.com/datasets/chrisfilo/demand?resource=download (accessed on 24 September 2024)], respectively. The programs implemented in MATLAB® code are available upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Sun, Y.; Wang, W.; Chambers, J.A.; Naqvi, S.M. Enhanced Time-Frequency Masking by Using Neural Networks for Monaural Source Separation in Reverberant Room Environments. In Proceedings of the 2018 26th European Signal Processing Conference (EUSIPCO), Rome, Italy, 3–7 September 2018; pp. 1647–1651. [Google Scholar]
  2. Choi, H.-S.; Heo, H.; Lee, J.H.; Lee, K. Phase-aware Single-stage Speech Denoising and Dereverberation with U-Net. arXiv 2020, arXiv:2006.00687. [Google Scholar]
  3. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef] [PubMed]
  4. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
  5. Tan, K.; Wang, D. Learning Complex Spectral Mapping With Gated Convolutional Recurrent Networks for Monaural Speech Enhancement. IEEE/ACM Trans. Audio Speech Lang. Process. 2020, 28, 380–390. [Google Scholar] [CrossRef]
  6. Choi, H.-S.; Kim, J.-H.; Huh, J.; Kim, A.; Ha, J.-W.; Lee, K. Phase-aware Speech Enhancement with Deep Complex U-Net. arXiv 2019, arXiv:1903.03107. [Google Scholar]
  7. Li, A.; Liu, W.; Zheng, C.; Fan, C.; Li, X. Two Heads are Better Than One: A Two-Stage Complex Spectral Mapping Approach for Monaural Speech Enhancement. IEEE/ACM Trans. Audio Speech Lang. Process. 2021, 29, 1829–1843. [Google Scholar] [CrossRef]
  8. Yuan, W. A time–frequency smoothing neural network for speech enhancement. Speech Commun. 2020, 124, 75–84. [Google Scholar] [CrossRef]
  9. Kang, Z.; Huang, Z.; Lu, C. Speech Enhancement Using U-Net with Compressed Sensing. Appl. Sci. 2022, 12, 4161. [Google Scholar] [CrossRef]
  10. Luo, X.; Zheng, C.; Li, A.; Ke, Y.; Li, X. Analysis of trade-offs between magnitude and phase estimation in loss functions for speech denoising and dereverberation. Speech Commun. 2022, 145, 71–87. [Google Scholar] [CrossRef]
  11. Azarang, A.; Kehtarnavaz, N. A review of multi-objective deep learning speech denoising methods. Speech Commun. 2020, 122, 1–10. [Google Scholar] [CrossRef]
  12. Pandey, A.; Wang, D. Dense CNN With Self-Attention for Time-Domain Speech Enhancement. IEEE/ACM Trans. Audio Speech Lang. Process. 2020, 29, 1270–1279. [Google Scholar] [CrossRef] [PubMed]
  13. Défossez, A.; Synnaeve, G.; Adi, Y. Real Time Speech Enhancement in the Waveform Domain. arXiv 2020, arXiv:2006.12847. [Google Scholar]
  14. Germain, F.G.; Chen, Q.; Koltun, V. Speech Denoising with Deep Feature Losses. In Proceedings of the Interspeech, Hyderabad, India, 2–6 September 2018. [Google Scholar]
  15. Erdogan, H.; Hershey, J.R.; Watanabe, S.; Roux, J.L. Phase-sensitive and recognition-boosted speech separation using deep recurrent neural networks. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, Australia, 19–24 April 2015; pp. 708–712. [Google Scholar]
  16. Kulmer, J.; Mahale, P.M.B. Phase Estimation in Single Channel Speech Enhancement Using Phase Decomposition. IEEE Signal Process. Lett. 2015, 22, 598–602. [Google Scholar] [CrossRef]
  17. Lu, X.; Tsao, Y.; Matsuda, S.; Hori, C. Speech enhancement based on deep denoising autoencoder. In Proceedings of the Interspeech, Lyon, France, 25–29 August 2013. [Google Scholar]
  18. Zhao, H.; Zarar, S.; Tashev, I.; Lee, C.H. Convolutional-Recurrent Neural Networks for Speech Enhancement. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 2401–2405. [Google Scholar]
  19. Tan, K.; Wang, D. A Convolutional Recurrent Neural Network for Real-Time Speech Enhancement. In Proceedings of the Interspeech, Hyderabad, India, 2–6 September 2018; pp. 3229–3233. [Google Scholar]
  20. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef]
  21. Mirsamadi, S.; Tashev, I.J. Causal Speech Enhancement Combining Data-Driven Learning and Suppression Rule Estimation. In Proceedings of the Interspeech, San Francisco, CA, USA, 8–12 September 2016. [Google Scholar]
  22. Park, S.R.; Lee, J. A Fully Convolutional Neural Network for Speech Enhancement. In Proceedings of the Interspeech 2017, Stockholm, Sweden, 20–24 August 2017; pp. 1993–1997. [Google Scholar]
  23. Li, A.; Zheng, C.; Peng, R.; Li, X. On the importance of power compression and phase estimation in monaural speech dereverberation. JASA Express Lett. 2021, 1, 014802. [Google Scholar] [CrossRef]
  24. Liu, L.; Guan, H.; Ma, J.; Dai, W.; Wang, G.-Y.; Ding, S. A Mask Free Neural Network for Monaural Speech Enhancement. arXiv 2023, arXiv:2306.04286. [Google Scholar]
  25. Ba, J.; Kiros, J.R.; Hinton, G.E. Layer Normalization. arXiv 2016, arXiv:1607.06450. [Google Scholar]
  26. Maas, A.L. Rectifier Nonlinearities Improve Neural Network Acoustic Models. In Proceedings of the 30th International Conference on Machine Learning, Atlanta, GA, USA, 16–21 June 2013. [Google Scholar]
  27. Huang, G.; Liu, Z.; Maaten, L.V.D.; Weinberger, K.Q. Densely Connected Convolutional Networks. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2261–2269. [Google Scholar]
  28. Vaswani, A.; Shazeer, N.M.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention is All you Need. In Proceedings of the Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  29. Shi, W.; Caballero, J.; Huszár, F.; Totz, J.; Aitken, A.P.; Bishop, R.; Rueckert, D.; Wang, Z. Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1874–1883. [Google Scholar]
  30. Hu, H.-T.; Tsai, H.-H.; Lee, T.-T. Suitable domains for causal speech denoising using DNN with U-Net architecture. In Proceedings of the 7th International Conference on Knowledge Innovation and Invention 2024 (ICKII 2024), Nagoya, Japan, 16–18 August 2024. [Google Scholar]
  31. Oppenheim, A.V.; Willsky, A.S.; Nawab, S.H. Signals & Systems, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1997. [Google Scholar]
  32. Valentini-Botinhao, C.; Wang, X.; Takaki, S.; Yamagishi, J. Investigating RNN-based speech enhancement methods for noise-robust Text-to-Speech. In Proceedings of the Speech Synthesis Workshop, Sunnyvale, CA, USA, 13–15 September 2016. [Google Scholar]
  33. Thiemann, J.; Ito, N.; Vincent, E. The Diverse Environments Multi-channel Acoustic Noise Database (DEMAND): A database of multichannel environmental noise recordings. J. Acoust. Soc. Am. 2013, 133, 3591. [Google Scholar] [CrossRef]
  34. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  35. Hu, Y.; Loizou, P.C. Evaluation of Objective Quality Measures for Speech Enhancement. IEEE Trans. Audio Speech Lang. Process. 2008, 16, 229–238. [Google Scholar] [CrossRef]
  36. International Telecommunications Union. ITU-T Recommendation P.800: Methods for Subjective Determination of Transmission Quality; International Telecommunications Union: Geneva, Switzerland, 1996. [Google Scholar]
  37. Taal, C.H.; Hendriks, R.C.; Heusdens, R.; Jensen, J. An Algorithm for Intelligibility Prediction of Time–Frequency Weighted Noisy Speech. IEEE Trans. Audio Speech Lang. Process. 2011, 19, 2125–2136. [Google Scholar] [CrossRef]
  38. Yu, W.; Luo, M.; Zhou, P.; Si, C.; Zhou, Y.; Wang, X.; Feng, J.; Yan, S. MetaFormer is Actually What You Need for Vision. In Proceedings of the 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 10809–10819. [Google Scholar]
Figure 1. Speech denoising through deep neural networks in the temporal domain.
Figure 2. Speech denoising through deep neural networks in the STFT domain.
Figure 3. Network architecture for the proposed U-Net.
Figure 4. CCABs used in the encoder and decoder sides of the classical U-Net.
Figure 5. The composition of GLFB.
Figure 6. Encoder and decoder submodules used in the advanced U-Net.
Figure 7. U-Net’s input adopted in the temporal domain.
Figure 8. U-Net’s input adopted in the STDCT domain.
Figure 9. U-Net’s input adopted in the STFT domain: (a) Illustration of inserting the real NF value into the imaginary position of the DC term; (b) Final arrangement.
Table 1. Performance comparison for the classical and advanced U-Nets operating in the STFT, STDCT, and temporal domains. Columns prefixed CCAB refer to the classical U-Net with CCABs; columns prefixed GLFB refer to the advanced U-Net with GLFBs.

| Input Type | Initial SNR | CCAB SNR (dB) | CCAB CSIG | CCAB CBAK | CCAB COVL | CCAB PESQ | CCAB STOI (%) | GLFB SNR (dB) | GLFB CSIG | GLFB CBAK | GLFB COVL | GLFB PESQ | GLFB STOI (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| STFT-RI sequences | −2.5 dB | 12.97 | 3.460 | 3.055 | 3.091 | 2.811 | 80.59 | 13.73 | 3.495 | 3.114 | 3.132 | 2.851 | 81.69 |
| | 2.5 dB | 15.96 | 3.941 | 3.372 | 3.496 | 3.097 | 85.69 | 16.51 | 3.948 | 3.410 | 3.515 | 3.123 | 86.39 |
| | 7.5 dB | 18.37 | 4.322 | 3.624 | 3.814 | 3.323 | 89.03 | 18.75 | 4.334 | 3.660 | 3.838 | 3.355 | 89.65 |
| | 12.5 dB | 20.45 | 4.618 | 3.854 | 4.072 | 3.518 | 91.45 | 20.69 | 4.632 | 3.887 | 4.097 | 3.549 | 91.99 |
| | Average | 16.94 | 4.085 | 3.476 | 3.618 | 3.187 | 86.69 | 17.42 | 4.102 | 3.518 | 3.646 | 3.220 | 87.43 |
| STDCT sequences | −2.5 dB | 13.04 | 3.458 | 3.062 | 3.096 | 2.821 | 80.85 | 13.79 | 3.506 | 3.122 | 3.149 | 2.878 | 81.95 |
| | 2.5 dB | 16.04 | 3.940 | 3.375 | 3.498 | 3.102 | 85.81 | 16.53 | 3.964 | 3.421 | 3.533 | 3.145 | 86.53 |
| | 7.5 dB | 18.38 | 4.345 | 3.626 | 3.830 | 3.332 | 89.14 | 18.66 | 4.358 | 3.666 | 3.858 | 3.370 | 89.68 |
| | 12.5 dB | 20.50 | 4.661 | 3.861 | 4.101 | 3.535 | 91.61 | 20.67 | 4.660 | 3.890 | 4.116 | 3.562 | 92.00 |
| | Average | 16.99 | 4.101 | 3.481 | 3.631 | 3.198 | 86.85 | 17.41 | 4.122 | 3.525 | 3.664 | 3.239 | 87.54 |
| Waveform sequences | −2.5 dB | 13.40 | 3.338 | 3.014 | 2.996 | 2.739 | 81.18 | 13.45 | 3.313 | 3.009 | 2.980 | 2.734 | 81.13 |
| | 2.5 dB | 15.96 | 3.784 | 3.297 | 3.364 | 2.992 | 85.61 | 16.17 | 3.761 | 3.306 | 3.355 | 3.000 | 85.81 |
| | 7.5 dB | 18.21 | 4.178 | 3.547 | 3.690 | 3.218 | 88.80 | 18.30 | 4.152 | 3.543 | 3.673 | 3.218 | 88.88 |
| | 12.5 dB | 20.05 | 4.492 | 3.771 | 3.958 | 3.418 | 91.13 | 20.13 | 4.469 | 3.761 | 3.942 | 3.418 | 91.06 |
| | Average | 16.90 | 3.948 | 3.407 | 3.502 | 3.092 | 86.68 | 17.01 | 3.924 | 3.405 | 3.487 | 3.092 | 86.72 |
Table 2. Performance comparison for the classical and advanced U-Nets in the STFT domain with four loss functions exploiting the power compression and trade-off between magnitude estimation and phase recovery.

| Loss Function (α, β) | Initial SNR | CCAB SNR (dB) | CCAB CSIG | CCAB CBAK | CCAB COVL | CCAB PESQ | CCAB STOI (%) | GLFB SNR (dB) | GLFB CSIG | GLFB CBAK | GLFB COVL | GLFB PESQ | GLFB STOI (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (0, 1) | −2.5 dB | 12.97 | 3.460 | 3.055 | 3.091 | 2.811 | 80.59 | 13.73 | 3.495 | 3.114 | 3.132 | 2.851 | 81.69 |
| | 2.5 dB | 15.96 | 3.941 | 3.372 | 3.496 | 3.097 | 85.69 | 16.51 | 3.948 | 3.410 | 3.515 | 3.123 | 86.39 |
| | 7.5 dB | 18.37 | 4.322 | 3.624 | 3.814 | 3.323 | 89.03 | 18.75 | 4.334 | 3.660 | 3.838 | 3.355 | 89.65 |
| | 12.5 dB | 20.45 | 4.618 | 3.854 | 4.072 | 3.518 | 91.45 | 20.69 | 4.632 | 3.887 | 4.097 | 3.549 | 91.99 |
| | Average | 16.94 | 4.085 | 3.476 | 3.618 | 3.187 | 86.69 | 17.42 | 4.102 | 3.518 | 3.646 | 3.220 | 87.43 |
| (0.5, 1) | −2.5 dB | 12.75 | 3.638 | 3.053 | 3.190 | 2.809 | 81.02 | 13.58 | 3.737 | 3.125 | 3.277 | 2.876 | 82.41 |
| | 2.5 dB | 15.65 | 4.052 | 3.349 | 3.548 | 3.076 | 85.82 | 16.23 | 4.110 | 3.399 | 3.604 | 3.126 | 86.83 |
| | 7.5 dB | 18.00 | 4.388 | 3.600 | 3.843 | 3.305 | 89.21 | 18.46 | 4.424 | 3.635 | 3.880 | 3.339 | 89.91 |
| | 12.5 dB | 20.15 | 4.655 | 3.834 | 4.085 | 3.499 | 91.65 | 20.44 | 4.675 | 3.859 | 4.109 | 3.526 | 92.07 |
| | Average | 16.64 | 4.183 | 3.459 | 3.666 | 3.172 | 86.92 | 17.18 | 4.237 | 3.504 | 3.718 | 3.217 | 87.81 |
| (0, 0.5) | −2.5 dB | 13.30 | 3.298 | 3.024 | 2.973 | 2.758 | 80.37 | 13.82 | 3.397 | 3.098 | 3.062 | 2.821 | 81.36 |
| | 2.5 dB | 16.27 | 3.837 | 3.375 | 3.442 | 3.107 | 85.67 | 16.63 | 3.930 | 3.427 | 3.517 | 3.153 | 86.30 |
| | 7.5 dB | 18.63 | 4.318 | 3.666 | 3.844 | 3.396 | 89.25 | 18.84 | 4.389 | 3.701 | 3.903 | 3.434 | 89.58 |
| | 12.5 dB | 20.52 | 4.679 | 3.912 | 4.156 | 3.627 | 91.70 | 20.72 | 4.739 | 3.944 | 4.207 | 3.666 | 92.02 |
| | Average | 17.18 | 4.033 | 3.494 | 3.604 | 3.222 | 86.75 | 17.50 | 4.114 | 3.542 | 3.672 | 3.269 | 87.32 |
| (0.5, 0.5) | −2.5 dB | 13.08 | 3.750 | 3.120 | 3.267 | 2.844 | 81.14 | 13.93 | 3.901 | 3.222 | 3.402 | 2.949 | 82.65 |
| | 2.5 dB | 16.04 | 4.206 | 3.448 | 3.677 | 3.171 | 86.25 | 16.66 | 4.312 | 3.520 | 3.777 | 3.259 | 87.24 |
| | 7.5 dB | 18.44 | 4.569 | 3.715 | 4.007 | 3.442 | 89.74 | 18.89 | 4.639 | 3.769 | 4.078 | 3.509 | 90.30 |
| | 12.5 dB | 20.50 | 4.848 | 3.954 | 4.268 | 3.663 | 92.18 | 20.90 | 4.907 | 4.004 | 4.331 | 3.727 | 92.67 |
| | Average | 17.01 | 4.343 | 3.559 | 3.805 | 3.280 | 87.32 | 17.59 | 4.440 | 3.629 | 3.897 | 3.361 | 88.21 |
Table 3. Performance comparison for the classical and advanced U-Nets in the STDCT domain with four loss functions exploiting the power compression and trade-off between magnitude estimation and phase recovery.

| Loss Function (α, β) | Initial SNR | CCAB SNR (dB) | CCAB CSIG | CCAB CBAK | CCAB COVL | CCAB PESQ | CCAB STOI (%) | GLFB SNR (dB) | GLFB CSIG | GLFB CBAK | GLFB COVL | GLFB PESQ | GLFB STOI (%) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| (0, 1) | −2.5 dB | 13.04 | 3.458 | 3.062 | 3.096 | 2.821 | 80.85 | 13.79 | 3.506 | 3.122 | 3.149 | 2.878 | 81.95 |
| | 2.5 dB | 16.04 | 3.940 | 3.375 | 3.498 | 3.102 | 85.81 | 16.53 | 3.964 | 3.421 | 3.533 | 3.145 | 86.53 |
| | 7.5 dB | 18.38 | 4.345 | 3.626 | 3.830 | 3.332 | 89.14 | 18.66 | 4.358 | 3.666 | 3.858 | 3.370 | 89.68 |
| | 12.5 dB | 20.50 | 4.661 | 3.861 | 4.101 | 3.535 | 91.61 | 20.67 | 4.660 | 3.890 | 4.116 | 3.562 | 92.00 |
| | Average | 16.99 | 4.101 | 3.481 | 3.631 | 3.198 | 86.85 | 17.41 | 4.122 | 3.525 | 3.664 | 3.239 | 87.54 |
| (0.5, 1) | −2.5 dB | 12.91 | 3.642 | 3.077 | 3.206 | 2.837 | 81.28 | 13.19 | 3.700 | 3.103 | 3.245 | 2.857 | 81.79 |
| | 2.5 dB | 15.92 | 4.077 | 3.385 | 3.580 | 3.114 | 86.27 | 16.10 | 4.090 | 3.395 | 3.591 | 3.123 | 86.50 |
| | 7.5 dB | 18.26 | 4.412 | 3.627 | 3.870 | 3.334 | 89.48 | 18.38 | 4.407 | 3.632 | 3.871 | 3.341 | 89.67 |
| | 12.5 dB | 20.30 | 4.676 | 3.852 | 4.108 | 3.524 | 91.68 | 20.41 | 4.667 | 3.857 | 4.105 | 3.528 | 91.99 |
| | Average | 16.85 | 4.202 | 3.485 | 3.691 | 3.202 | 87.18 | 17.02 | 4.216 | 3.497 | 3.703 | 3.212 | 87.48 |
| (0, 0.5) | −2.5 dB | 13.09 | 3.345 | 3.035 | 3.008 | 2.776 | 80.32 | 13.79 | 3.401 | 3.099 | 3.065 | 2.828 | 81.41 |
| | 2.5 dB | 16.06 | 3.845 | 3.376 | 3.448 | 3.107 | 85.60 | 16.55 | 3.913 | 3.420 | 3.506 | 3.155 | 86.25 |
| | 7.5 dB | 18.43 | 4.314 | 3.658 | 3.841 | 3.390 | 89.10 | 18.84 | 4.384 | 3.703 | 3.902 | 3.440 | 89.58 |
| | 12.5 dB | 20.42 | 4.687 | 3.908 | 4.160 | 3.626 | 91.67 | 20.68 | 4.740 | 3.943 | 4.209 | 3.670 | 91.98 |
| | Average | 17.00 | 4.048 | 3.494 | 3.614 | 3.225 | 86.67 | 17.47 | 4.109 | 3.541 | 3.670 | 3.273 | 87.30 |
| (0.5, 0.5) | −2.5 dB | 13.23 | 3.792 | 3.141 | 3.303 | 2.877 | 81.48 | 13.80 | 3.889 | 3.208 | 3.386 | 2.934 | 82.40 |
| | 2.5 dB | 16.03 | 4.229 | 3.452 | 3.695 | 3.188 | 86.34 | 16.63 | 4.313 | 3.516 | 3.774 | 3.254 | 87.06 |
| | 7.5 dB | 18.52 | 4.591 | 3.726 | 4.026 | 3.460 | 89.73 | 18.86 | 4.636 | 3.764 | 4.072 | 3.503 | 90.13 |
| | 12.5 dB | 20.48 | 4.862 | 3.957 | 4.280 | 3.674 | 92.08 | 20.87 | 4.903 | 4.000 | 4.324 | 3.720 | 92.52 |
| | Average | 17.07 | 4.368 | 3.569 | 3.826 | 3.300 | 87.40 | 17.54 | 4.435 | 3.622 | 3.889 | 3.353 | 88.03 |