Search Results (2)

Search Parameters:
Keywords = Korean phonemic features

28 pages, 1093 KiB  
Article
Blended Phonetic Training with HVPT Features for EFL Children: Effects on L2 Perception and Listening Comprehension
by KyungA Lee and Hyunkee Ahn
Languages 2025, 10(6), 122; https://doi.org/10.3390/languages10060122 - 26 May 2025
Viewed by 866
Abstract
Despite being fundamental for speech processing, L2 perceptual training often receives little attention in L2 classrooms, especially among English as a Foreign Language (EFL) learners navigating complex English phonology. The current study investigates the impact of a blended phonetic training program incorporating HVPT features on enhancing L2 perception and listening comprehension skills in Korean elementary EFL learners. Fifty-seven learners, aged 11 to 12 years, participated in a four-week intervention program. They were trained on 13 consonant phonemes that are challenging for Korean learners, using multimedia tools for practice. Pre- and posttests assessed L2 perception and listening comprehension, and learners were grouped into three proficiency levels based on the listening comprehension tests. The results showed significant improvements in L2 perception (p = 0.01), with a small effect size, and in listening comprehension (p < 0.001), with small-to-medium effects; the lower-proficiency students demonstrated the largest gains. A correlation between L2 perception and listening comprehension was observed in both the pretest (r = 0.427 **) and the posttest (r = 0.479 ***). The findings underscore the importance of integrating explicit phonetic instruction with HVPT to enhance L2 listening skills among EFL learners.
(This article belongs to the Special Issue L2 Speech Perception and Production in the Globalized World)
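As a rough, hypothetical sketch of the kind of pre/post comparison and correlation analysis reported in this abstract (not the authors' actual code or data), the following Python snippet uses SciPy to run a paired t-test, compute a paired-samples Cohen's d, and correlate perception with listening scores; the score arrays are randomly generated placeholders.

# Illustrative pre/post analysis sketch; the score arrays below are
# hypothetical placeholders, not data from the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre_perception = rng.normal(60, 10, 57)    # 57 learners, as in the study
post_perception = pre_perception + rng.normal(3, 8, 57)
pre_listening = rng.normal(55, 12, 57)
post_listening = pre_listening + rng.normal(6, 9, 57)

# Paired comparison of pre- vs. posttest perception scores.
t_perc, p_perc = stats.ttest_rel(post_perception, pre_perception)

# Cohen's d for paired samples (mean difference / SD of the differences).
diff = post_perception - pre_perception
cohens_d = diff.mean() / diff.std(ddof=1)

# Pearson correlations between L2 perception and listening comprehension.
r_pre, _ = stats.pearsonr(pre_perception, pre_listening)
r_post, _ = stats.pearsonr(post_perception, post_listening)

print(f"pre/post perception: t={t_perc:.2f}, p={p_perc:.3f}, d={cohens_d:.2f}")
print(f"perception-listening correlation: pre r={r_pre:.3f}, post r={r_post:.3f}")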

11 pages, 32386 KiB  
Communication
Detecting Forged Audio Files Using “Mixed Paste” Command: A Deep Learning Approach Based on Korean Phonemic Features
by Yeongmin Son and Jae Wan Park
Sensors 2024, 24(6), 1872; https://doi.org/10.3390/s24061872 - 14 Mar 2024
Cited by 2 | Viewed by 1703
Abstract
The ubiquity of smartphones today enables the widespread use of voice recording for diverse purposes. Consequently, the submission of voice recordings as digital evidence in legal proceedings has notably increased, alongside a rise in allegations of recording file forgery. This trend highlights the growing significance of audio file authentication. This study aims to develop a deep learning methodology capable of identifying forged files, particularly those altered using the “Mixed Paste” command, a technique not previously addressed. The proposed deep learning framework is a composite model integrating a convolutional neural network and a long short-term memory model, designed around features extracted from spectrograms and sequences of Korean consonant types. The model is trained on an authentic dataset of forged audio recordings created on an iPhone, modified via “Mixed Paste”, and encoded. The hybrid model demonstrates a high accuracy of 97.5%. To validate the model’s efficacy, tests were conducted using various manipulated audio files; the findings reveal that the model’s effectiveness is not contingent on the smartphone model or the audio editing software employed. We anticipate that this research will advance the field of audio forensics through a novel hybrid model approach.
(This article belongs to the Section Intelligent Sensors)
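To make the composite architecture described above more concrete, here is a minimal Keras sketch of a CNN front end over a spectrogram feeding an LSTM classifier; the input shape, layer sizes, and the omission of the consonant-type sequence branch are illustrative assumptions, not the paper's actual model.

# Illustrative CNN + LSTM hybrid for binary forgery classification on
# spectrogram inputs; dimensions and layer sizes are assumptions.
from tensorflow.keras import layers, models

N_MELS, N_FRAMES = 128, 256          # assumed spectrogram size (mel bins x frames)

inputs = layers.Input(shape=(N_MELS, N_FRAMES, 1))
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
x = layers.MaxPooling2D(2)(x)
# Collapse the frequency axis so the time axis becomes the LSTM sequence.
x = layers.Permute((2, 1, 3))(x)                       # (time, freq, channels)
x = layers.Reshape((N_FRAMES // 4, (N_MELS // 4) * 64))(x)
x = layers.LSTM(128)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)     # forged vs. authentic

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()

Collapsing the frequency axis before the LSTM is one common way to let the recurrent layer model temporal structure in the convolutional features; the paper's actual pipeline, which also encodes Korean consonant types, would add a further input branch.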
