Article

Exploring Efficient Neural Architectures for Linguistic–Acoustic Mapping in Text-To-Speech

1 Department of Signal Theory and Communications, Universitat Politècnica de Catalunya, 08034 Barcelona, Spain
2 Telefónica Research, 08019 Barcelona, Spain
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in IberSPEECH 2018.
‡ Current address: Amazon Research, Cambridge CB1 2GA, UK.
Appl. Sci. 2019, 9(16), 3391; https://doi.org/10.3390/app9163391
Received: 4 July 2019 / Revised: 29 July 2019 / Accepted: 2 August 2019 / Published: 17 August 2019
Conversion from text to speech relies on an accurate mapping from linguistic to acoustic symbol sequences, for which current practice employs recurrent statistical models such as recurrent neural networks. Despite the good performance of such models (in terms of low distortion in the generated speech), their recursive structure, with intermediate affine transformations, tends to make them slow to train and to sample from. In this work, we explore two different mechanisms that enhance the operational efficiency of recurrent neural networks, and we study their performance–speed trade-off. The first mechanism is based on the quasi-recurrent neural network, where expensive affine transformations are removed from temporal connections and placed only along feed-forward computational directions. The second mechanism includes a module based on the transformer decoder network, which has no recurrent connections but emulates them with attention and positional codes. Our results show that the proposed decoder networks are competitive in terms of distortion when compared to a recurrent baseline, while being significantly faster in terms of CPU and GPU inference time. The best-performing model is the one based on the quasi-recurrent mechanism, reaching the same level of naturalness as the recurrent neural network based model with a speedup of 11.2× on CPU and 3.3× on GPU.
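
For readers unfamiliar with the quasi-recurrent mechanism the abstract refers to, below is a minimal sketch of a single quasi-recurrent layer in PyTorch. This is an illustration, not the authors' code: the class name QRNNLayer and all parameter choices are hypothetical. It demonstrates the point made above: every learned affine transformation sits in a feed-forward (convolutional) path that parallelizes across time, while the temporal connection reduces to a cheap element-wise gated pooling.

```python
import torch
import torch.nn as nn


class QRNNLayer(nn.Module):
    """Minimal quasi-recurrent layer sketch (hypothetical, for illustration).

    All affine transforms live in one causal 1-D convolution that runs in
    parallel over the whole sequence; the time axis only carries an
    element-wise gated pooling with no weight matrices.
    """

    def __init__(self, in_dim, hid_dim, kernel_size=2):
        super().__init__()
        self.kernel_size = kernel_size
        # A single convolution emits candidate z, forget gate f, output gate o.
        self.conv = nn.Conv1d(in_dim, 3 * hid_dim, kernel_size)

    def forward(self, x):
        # x: (batch, time, in_dim); pad on the left so the conv is causal.
        x = x.transpose(1, 2)                           # (B, C, T)
        x = nn.functional.pad(x, (self.kernel_size - 1, 0))
        z, f, o = self.conv(x).chunk(3, dim=1)          # each (B, H, T)
        z, f, o = torch.tanh(z), torch.sigmoid(f), torch.sigmoid(o)
        # "fo-pooling": the only sequential step, purely element-wise.
        c, states = torch.zeros_like(z[..., 0]), []
        for t in range(z.size(-1)):
            c = f[..., t] * c + (1 - f[..., t]) * z[..., t]
            states.append(c)
        h = o * torch.stack(states, dim=-1)             # (B, H, T)
        return h.transpose(1, 2)                        # (B, T, H)
```

With kernel_size=2, each candidate at step t sees inputs t-1 and t. Because the recurrent loop touches no weight matrices, sampling cost per step is tiny compared to a standard RNN cell, which is consistent with the kind of CPU/GPU speedups reported in the abstract.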
Keywords: recurrent neural networks; self-attention; quasi-recurrent neural networks; deep learning; acoustic model; speech synthesis; text-to-speech

  • Externally hosted supplementary file 1
    Link: http://veu.talp.cat/efftts/
    Description: Audio samples provided as qualitative results for the reader.
MDPI and ACS Style

Pascual, S.; Serrà, J.; Bonafonte, A. Exploring Efficient Neural Architectures for Linguistic–Acoustic Mapping in Text-To-Speech. Appl. Sci. 2019, 9, 3391. https://doi.org/10.3390/app9163391

AMA Style

Pascual S, Serrà J, Bonafonte A. Exploring Efficient Neural Architectures for Linguistic–Acoustic Mapping in Text-To-Speech. Applied Sciences. 2019; 9(16):3391. https://doi.org/10.3390/app9163391

Chicago/Turabian Style

Pascual, Santiago, Joan Serrà, and Antonio Bonafonte. 2019. "Exploring Efficient Neural Architectures for Linguistic–Acoustic Mapping in Text-To-Speech" Applied Sciences 9, no. 16: 3391. https://doi.org/10.3390/app9163391
