Article

Exploring Silent Speech Interfaces Based on Frequency-Modulated Continuous-Wave Radar

by David Ferreira, Samuel Silva, Francisco Curado and António Teixeira
1 Department of Electronics, Telecommunications & Informatics, University of Aveiro, 3810-193 Aveiro, Portugal
2 Institute of Electronics and Informatics Engineering of Aveiro (IEETA), 3810-193 Aveiro, Portugal
* Authors to whom correspondence should be addressed.
Academic Editors: Bruce Denby, Tamás Gábor Csapó and Michael Wand
Sensors 2022, 22(2), 649; https://doi.org/10.3390/s22020649
Received: 1 December 2021 / Revised: 5 January 2022 / Accepted: 10 January 2022 / Published: 14 January 2022
(This article belongs to the Special Issue Future Speech Interfaces with Sensors and Machine Intelligence)
Speech is our most natural and efficient form of communication and offers strong potential to improve how we interact with machines. However, speech communication can be limited by environmental factors (e.g., ambient noise), contextual factors (e.g., the need for privacy), or health conditions (e.g., laryngectomy) that prevent the use of audible speech. In this regard, silent speech interfaces (SSI) have been proposed as an alternative, relying on technologies that do not require the production of acoustic signals (e.g., electromyography and video). Unfortunately, despite their abundance, many still face limitations regarding everyday use, e.g., being intrusive or non-portable, or raising technical (e.g., lighting conditions for video) or privacy concerns. In this context, this article explores contactless continuous-wave radar to assess its potential for SSI development. A corpus of 13 European Portuguese words was acquired from four speakers, three of whom enrolled in a second acquisition session three months later. For the speaker-dependent models, trained and tested with data from each speaker using 5-fold cross-validation, average accuracies of 84.50% and 88.00% were obtained with the Bagging (BAG) and Linear Regression (LR) classifiers, respectively. Additionally, recognition accuracies of 81.79% and 81.80% were achieved for the session-independent and speaker-independent experiments, respectively, establishing promising grounds for further exploring this technology towards silent speech recognition.
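The evaluation described in the abstract follows a standard supervised classification protocol with 5-fold cross-validation. Below is a minimal sketch, not the authors' code, of a speaker-dependent setup using scikit-learn's BaggingClassifier on placeholder features; the radar feature extraction, data shapes, and hyperparameters are assumptions for illustration only.

```python
# Minimal sketch (not the authors' pipeline): 5-fold cross-validation of a
# Bagging classifier on pre-extracted radar features for a 13-word vocabulary.
# Feature extraction from the FMCW radar signal is assumed to happen elsewhere;
# X and y below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(130, 64))    # e.g., 13 words x 10 repetitions, 64 features each (assumed shape)
y = np.repeat(np.arange(13), 10)  # word labels for the 13-word vocabulary

clf = BaggingClassifier(n_estimators=50, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy")
print(f"5-fold accuracy: {scores.mean():.2%} ± {scores.std():.2%}")
```

Stratified folds keep the per-word class balance in each split, which matters for small, balanced vocabularies like the 13-word corpus described here.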
Keywords: silent speech; continuous-wave radar; European Portuguese; machine learning

MDPI and ACS Style

Ferreira, D.; Silva, S.; Curado, F.; Teixeira, A. Exploring Silent Speech Interfaces Based on Frequency-Modulated Continuous-Wave Radar. Sensors 2022, 22, 649. https://doi.org/10.3390/s22020649

AMA Style

Ferreira D, Silva S, Curado F, Teixeira A. Exploring Silent Speech Interfaces Based on Frequency-Modulated Continuous-Wave Radar. Sensors. 2022; 22(2):649. https://doi.org/10.3390/s22020649

Chicago/Turabian Style

Ferreira, David, Samuel Silva, Francisco Curado, and António Teixeira. 2022. "Exploring Silent Speech Interfaces Based on Frequency-Modulated Continuous-Wave Radar" Sensors 22, no. 2: 649. https://doi.org/10.3390/s22020649

