Evaluation of Mixed Deep Neural Networks for Reverberant Speech Enhancement

Escuela de Ingeniería Eléctrica, Universidad de Costa Rica, San José 11501-2060, Costa Rica
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Biomimetics 2020, 5(1), 1; https://doi.org/10.3390/biomimetics5010001
Received: 30 October 2019 / Revised: 6 December 2019 / Accepted: 16 December 2019 / Published: 20 December 2019
(This article belongs to the Special Issue Bioinspired Intelligence)
Speech signals are degraded in real-life environments by background noise and other factors, and processing such signals for voice recognition and voice analysis presents important challenges. One of the most difficult degradations for such systems to handle is reverberation, produced by sound-wave reflections that travel from the source to the microphone along multiple paths. To enhance signals under these adverse conditions, several deep learning-based methods have been proposed and proven effective. Recently, recurrent neural networks, especially those with long short-term memory (LSTM), have shown remarkable results in tasks involving time-dependent signal processing, such as speech. One of the most challenging aspects of LSTM networks is the high computational cost of training, which has limited extended experimentation in several cases. In this work, we evaluate hybrid neural-network models that learn different reverberation conditions without any prior information. The results show that some combinations of LSTM and perceptron layers perform well in comparison to pure LSTM networks, given a fixed number of layers. The evaluation was based on quality measurements of the signal's spectrum, the training time of the networks, and statistical validation of the results. In total, 120 artificial neural networks of eight different types were trained and compared. The results confirm that hybrid networks represent an important option for speech signal enhancement, since they reduce training time by roughly 30% in processes that can normally take days or weeks, depending on the amount of data. The hybrid networks thus offer gains in efficiency without a significant drop in quality.
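The hybrid architecture the abstract describes stacks recurrent (LSTM) layers with simple perceptron (dense) layers. The following is a minimal numpy sketch of one such stack, a single LSTM layer followed by a time-distributed dense layer; all layer sizes, weight scales, and variable names are illustrative assumptions, not values from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_layer(x_seq, W, U, b, hidden):
    """Run one LSTM layer over a sequence.
    x_seq: (T, input_dim) -> returns (T, hidden)."""
    T = x_seq.shape[0]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    outputs = np.zeros((T, hidden))
    for t in range(T):
        # All four gate pre-activations computed in one affine map.
        z = W @ x_seq[t] + U @ h + b
        i = sigmoid(z[:hidden])            # input gate
        f = sigmoid(z[hidden:2*hidden])    # forget gate
        o = sigmoid(z[2*hidden:3*hidden])  # output gate
        g = np.tanh(z[3*hidden:])          # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        outputs[t] = h
    return outputs

def dense_layer(x_seq, W, b):
    """Time-distributed perceptron layer applied frame by frame."""
    return np.tanh(x_seq @ W.T + b)

# Illustrative sizes: 50 frames, 32 spectral features, 64 hidden units.
rng = np.random.default_rng(0)
T, n_feat, n_hidden = 50, 32, 64
x = rng.standard_normal((T, n_feat))       # stand-in for degraded features

# Hybrid stack: LSTM layer, then perceptron layer back to feature size.
W = rng.standard_normal((4 * n_hidden, n_feat)) * 0.1
U = rng.standard_normal((4 * n_hidden, n_hidden)) * 0.1
b = np.zeros(4 * n_hidden)
h_seq = lstm_layer(x, W, U, b, n_hidden)

Wd = rng.standard_normal((n_feat, n_hidden)) * 0.1
bd = np.zeros(n_feat)
enhanced = dense_layer(h_seq, Wd, bd)      # enhanced spectral estimate
print(enhanced.shape)                      # (50, 32)
```

The efficiency argument follows from the per-layer cost: the dense layer is a single matrix product per frame, while the LSTM layer carries four gate computations plus a sequential dependency across frames, so replacing some LSTM layers with dense ones shrinks the training workload.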
Keywords: artificial neural network; deep learning; LSTM; speech processing
Gutiérrez-Muñoz, M.; González-Salazar, A.; Coto-Jiménez, M. Evaluation of Mixed Deep Neural Networks for Reverberant Speech Enhancement. Biomimetics 2020, 5, 1.
