Search Results (3)

Search Parameters:
Keywords = mobile-assisted pronunciation training

15 pages, 885 KiB  
Review
A Systematic Review of Empirical Mobile-Assisted Pronunciation Studies through a Perception–Production Lens
by Anne M. Stoughton and Okim Kang
Languages 2024, 9(7), 251; https://doi.org/10.3390/languages9070251 - 16 Jul 2024
Viewed by 2644
Abstract
The communicative approach to language learning, a teaching method commonly used in second language (L2) classrooms, places little to no emphasis on pronunciation training. As a result, mobile-assisted pronunciation training (MAPT) platforms provide an alternative to classroom-based pronunciation training. To date, there have been several meta-analyses and systematic reviews of mobile-assisted language learning (MALL) studies, but only a few have concentrated on pronunciation. To better understand MAPT's impact on L2 learners' perception and production of targeted pronunciation features, this study conducted a systematic review of the MAPT literature following PRISMA 2020 guidelines. Potential mobile-assisted articles were identified through searches of the ERIC, Education Full Text, Linguistics and Language Behavior Abstracts, MLA International Bibliography, and Scopus databases, as well as searches of specific journals. Criteria for inclusion were as follows: the article must be a peer-reviewed empirical or quasi-empirical research study using both experimental and control groups to assess the impact of pronunciation training; the training must have been conducted via MALL or MAPT technologies; and the study must have been published between 2014 and 2024. A total of 232 papers were identified; however, only ten articles, with a total of 524 participants, met the established criteria. Data pertaining to the participants (nationality and education level), the MAPT applications and platforms used, the pronunciation features targeted, the concentration on perception and/or production of these features, and the methods used for training and assessment were collected and discussed. Effect sizes (Cohen's d) were also calculated for each study.
The findings of this review reveal that only two of the articles assessed the impact of MAPT on L2 learners' perception of targeted features, with results indicating that MAPT did not significantly improve L2 learners' ability to perceive segmental features. In terms of production, all ten articles assessed MAPT's impact on L2 learners' production of the targeted features. The results of these assessments varied greatly, with some studies indicating a significant and large effect of MAPT and others citing non-significant gains and negligible effect sizes. This variation, in addition to differences in the types of participants, the targeted pronunciation features, and the MAPT apps and platforms used, makes it difficult to conclude that MAPT has a significant impact on L2 learners' production. Furthermore, the selected studies' concentration mostly on segmental features (i.e., phoneme and word pronunciation) is likely to have had only a limited impact on participants' intelligibility. This paper provides suggestions for further MAPT research, including increased emphasis on suprasegmental features and perception assessments, to further our understanding of the effectiveness of MAPT for pronunciation training.
(This article belongs to the Special Issue Advances in L2 Perception and Production)
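The review reports a Cohen's d effect size for each included study. As a rough illustration (the scores below are invented for the example, not taken from any reviewed study), the pooled-standard-deviation form of Cohen's d for an experimental/control comparison can be sketched as:

```python
from statistics import mean, stdev

def cohens_d(experimental, control):
    """Cohen's d with a pooled standard deviation for two independent groups."""
    n1, n2 = len(experimental), len(control)
    s1, s2 = stdev(experimental), stdev(control)
    pooled = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(experimental) - mean(control)) / pooled

# Hypothetical post-test pronunciation scores for a MAPT group and a control group
mapt_group = [78, 82, 85, 90, 74, 88]
control_group = [70, 75, 72, 80, 68, 77]
print(round(cohens_d(mapt_group, control_group), 2))
```

By the conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), a d of this magnitude would count as a large effect.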

20 pages, 2132 KiB  
Article
An Open CAPT System for Prosody Practice: Practical Steps towards Multilingual Setup
by John Blake, Natalia Bogach, Akemi Kusakari, Iurii Lezhenin, Veronica Khaustova, Son Luu Xuan, Van Nhi Nguyen, Nam Ba Pham, Roman Svechnikov, Andrey Ostapchuk, Dmitrei Efimov and Evgeny Pyshkin
Languages 2024, 9(1), 27; https://doi.org/10.3390/languages9010027 - 12 Jan 2024
Cited by 4 | Viewed by 2847
Abstract
This paper discusses the challenges posed in creating a Computer-Assisted Pronunciation Training (CAPT) environment for multiple languages. By selecting one language from each of three different language families, we show that a single environment may be tailored to cater for different target languages. We detail the challenges faced during the development of a multimodal CAPT environment comprising a toolkit that manages mobile applications using speech signal processing, visualization, and estimation algorithms. Since the underlying mathematical and phonological models, as well as the feedback production algorithms, are based on sound signal processing and modeling rather than on particular languages, the system is language-agnostic and serves as an open toolkit for developing phrasal intonation training exercises for an open selection of languages. However, it was necessary to tailor the CAPT environment to language-specific particularities in the multilingual setups, especially the additional requirements for adequate and consistent speech evaluation and feedback production. We describe our response to the challenges of visualizing and segmenting recorded pitch signals and of modeling the language melody and rhythm necessary for such a multilingual adaptation, particularly for tonal, syllable-timed, and mora-timed languages.
(This article belongs to the Special Issue Speech Analysis and Tools in L2 Pronunciation Acquisition)
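The pitch-segmentation step the abstract mentions can be illustrated with a minimal, language-agnostic sketch. Assume a frame-wise F0 track in which unvoiced frames are marked 0.0 (this convention is an assumption for the example, not the toolkit's actual data format); voiced segments can then be extracted, discarding short blips before visualization:

```python
def voiced_segments(f0, min_frames=3):
    """Split a frame-wise F0 track (0.0 = unvoiced) into voiced segments.

    Returns a list of (start_frame, end_frame_exclusive) pairs, discarding
    segments shorter than min_frames to suppress spurious pitch blips.
    """
    segments, start = [], None
    for i, hz in enumerate(f0):
        if hz > 0 and start is None:
            start = i                      # a voiced run begins
        elif hz <= 0 and start is not None:
            if i - start >= min_frames:    # keep only runs long enough to plot
                segments.append((start, i))
            start = None
    if start is not None and len(f0) - start >= min_frames:
        segments.append((start, len(f0)))  # track ended while still voiced
    return segments

# Hypothetical F0 track in Hz, 0 = unvoiced frame
track = [0, 0, 180, 185, 190, 0, 210, 0, 0, 200, 205, 208, 212, 0]
print(voiced_segments(track))
```

Each returned pair delimits one continuous pitch contour, which is the unit a visualizer or intonation-comparison step would operate on.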

22 pages, 1420 KiB  
Article
Speech Processing for Language Learning: A Practical Approach to Computer-Assisted Pronunciation Teaching
by Natalia Bogach, Elena Boitsova, Sergey Chernonog, Anton Lamtev, Maria Lesnichaya, Iurii Lezhenin, Andrey Novopashenny, Roman Svechnikov, Daria Tsikach, Konstantin Vasiliev, Evgeny Pyshkin and John Blake
Electronics 2021, 10(3), 235; https://doi.org/10.3390/electronics10030235 - 20 Jan 2021
Cited by 39 | Viewed by 7153
Abstract
This article contributes to the discourse on how contemporary computer and information technology may help improve foreign language learning, not only by supporting a better and more flexible workflow and digitizing study materials, but also by creating completely new use cases made possible by advances in signal processing algorithms. We discuss an approach and propose a holistic solution to teaching the phonological phenomena that are crucial for correct pronunciation: the phonemes; the energy and duration of syllables and pauses, which construct the phrasal rhythm; and the tone movement within an utterance, i.e., the phrasal intonation. The working prototype of the StudyIntonation Computer-Assisted Pronunciation Training (CAPT) system is a tool for mobile devices that offers a set of tasks based on a "listen and repeat" approach and gives audio-visual feedback in real time. The present work summarizes the efforts taken to enrich the current version of this CAPT tool with two new functions: phonetic transcription and rhythmic patterns of model and learner speech. Both are built on the third-party automatic speech recognition (ASR) library Kaldi, which was incorporated into the StudyIntonation signal-processing core. We also examine the scope of ASR applicability within the CAPT system workflow and evaluate the Levenshtein distance between transcriptions made by human experts and those obtained automatically by our code. We developed an algorithm for rhythm reconstruction using acoustic and language ASR models. It is also shown that even learners with sufficiently correct phoneme production often fail to produce correct phrasal rhythm and intonation; therefore, joint training of sounds, rhythm, and intonation within a single learning environment is beneficial. To mitigate recording imperfections, voice activity detection (VAD) is applied to all processed speech recordings. Trials showed that StudyIntonation can create transcriptions and process rhythmic patterns, but some specific problems with connected-speech transcription were detected. Learner feedback for pronunciation assessment was also updated: a conventional mechanism based on dynamic time warping (DTW) was combined with a cross-recurrence quantification analysis (CRQA) approach, which resulted in better discriminating ability. The CRQA metrics combined with those of DTW were shown to add to the accuracy of learner performance estimation. The major implications for computer-assisted English pronunciation teaching are discussed.
(This article belongs to the Special Issue Recent Advances in Multimedia Signal Processing and Communications)
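The Levenshtein distance the abstract evaluates (between expert and automatic phonetic transcriptions) can be sketched over phone sequences with the standard dynamic-programming recurrence; the ARPAbet-style phone labels below are illustrative, not data from the paper:

```python
def levenshtein(ref, hyp):
    """Edit distance between two token sequences, where insertions,
    deletions, and substitutions each cost 1. Uses a rolling DP row."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # delete r
                            curr[j - 1] + 1,          # insert h
                            prev[j - 1] + (r != h)))  # substitute r -> h
        prev = curr
    return prev[-1]

# Hypothetical expert vs. ASR phone transcriptions of "intonation"
expert = ["ih", "n", "t", "ah", "n", "ey", "sh", "ah", "n"]
asr    = ["ih", "n", "t", "ow", "n", "ey", "sh", "n"]
print(levenshtein(expert, asr))
```

Here the two sequences differ by one substitution and one deletion, so the distance is 2; normalizing by the reference length gives a phone error rate, a common way to score ASR transcriptions against an expert reference.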
