Search Results (383)

Search Parameters:
Keywords = listening test

20 pages, 821 KiB  
Article
The Role of Phoneme Discrimination in the Variability of Speech and Language Outcomes Among Children with Hearing Loss
by Kerry A. Walker, Jinal K. Shah, Lauren Alexander, Stacy Stiell, Christine Yoshinaga-Itano and Kristin M. Uhler
Behav. Sci. 2025, 15(8), 1072; https://doi.org/10.3390/bs15081072 - 6 Aug 2025
Abstract
This research compares speech discrimination abilities between 17 children who are hard-of-hearing (CHH) and 13 children with normal hearing (CNH), aged 9 to 36 months, using either a conditioned head turn (CHT) or conditioned play paradigm, for two phoneme pairs, /ba-da/ and /sa-ʃa/. Because CHH were tested in both aided and unaided conditions, CNH were also tested on each phoneme contrast twice to control for learning effects. When speech discrimination abilities were compared between CHH wearing hearing aids (HAs) and CNH, no statistical differences were observed in performance on stop consonant discrimination, but a significant difference was observed for fricative discrimination. Among CHH, significant benefits were observed for /ba-da/ discrimination while wearing HAs compared to the no-HA condition. All CHH were early identified, early amplified, and enrolled in parent-centered early intervention services. Under these conditions, CHH demonstrated the ability to discriminate speech comparably to CNH. Additionally, repeated testing within one month did not change speech discrimination scores, indicating good test–retest reliability. Finally, this research explored infant/toddler listening fatigue in the behavioral speech discrimination task: the CHT paradigm included returning to a contrast (i.e., /a-i/) previously shown to be easier for both CHH and CNH to discriminate, to examine whether failure to discriminate /ba-da/ or /sa-ʃa/ was due to listening fatigue or off-task behavior. Full article
(This article belongs to the Special Issue Language and Cognitive Development in Deaf Children)

34 pages, 1876 KiB  
Article
The Interaction of Target and Masker Speech in Competing Speech Perception
by Sheyenne Fishero, Joan A. Sereno and Allard Jongman
Brain Sci. 2025, 15(8), 834; https://doi.org/10.3390/brainsci15080834 - 4 Aug 2025
Abstract
Background/Objectives: Speech perception typically takes place against a background of other speech or noise. The present study investigates the effectiveness of segregating speech streams within a competing speech signal, examining whether cues such as pitch, which typically denote a difference in talker, behave in the same way as cues such as speaking rate, which typically do not denote the presence of a new talker. Methods: Native English speakers listened to English target speech within English two-talker babble of a similar or different pitch and/or a similar or different speaking rate to identify whether mismatched properties between target speech and masker babble improve speech segregation. Additionally, Dutch and French masker babble was tested to identify whether an unknown language masker improves speech segregation capacity and whether the rhythm patterns of the unknown language modulate the improvement. Results: Results indicated that a difference in pitch or speaking rate between target and masker improved speech segregation, but when both pitch and speaking rate differed, only a difference in pitch improved speech segregation. Results also indicated improved speech segregation for an unknown language masker, with little to no role of rhythm pattern of the unknown language. Conclusions: This study increases the understanding of speech perception in a noisy ecologically valid context and suggests that there is a link between a cue’s potential to denote a new speaker and its ability to aid in speech segregation during competing speech perception. Full article
(This article belongs to the Special Issue Language Perception and Processing)

13 pages, 769 KiB  
Article
A Novel You Only Listen Once (YOLO) Deep Learning Model for Automatic Prominent Bowel Sounds Detection: Feasibility Study in Healthy Subjects
by Rohan Kalahasty, Gayathri Yerrapragada, Jieun Lee, Keerthy Gopalakrishnan, Avneet Kaur, Pratyusha Muddaloor, Divyanshi Sood, Charmy Parikh, Jay Gohri, Gianeshwaree Alias Rachna Panjwani, Naghmeh Asadimanesh, Rabiah Aslam Ansari, Swetha Rapolu, Poonguzhali Elangovan, Shiva Sankari Karuppiah, Vijaya M. Dasari, Scott A. Helgeson, Venkata S. Akshintala and Shivaram P. Arunachalam
Sensors 2025, 25(15), 4735; https://doi.org/10.3390/s25154735 - 31 Jul 2025
Abstract
Accurate diagnosis of gastrointestinal (GI) diseases typically requires invasive procedures or imaging studies that pose the risk of various post-procedural complications or involve radiation exposure. Bowel sounds (BSs), though typically described during a GI-focused physical exam, are highly inaccurate and variable, with low clinical value in diagnosis. Interpretation of the acoustic characteristics of BSs, i.e., using a phonoenterogram (PEG), may aid in diagnosing various GI conditions non-invasively. Use of artificial intelligence (AI) and improvements in computational analysis can enhance the use of PEGs in different GI diseases and lead to a non-invasive, cost-effective diagnostic modality that has not been explored before. The purpose of this work was to develop an automated AI model, You Only Listen Once (YOLO), to detect prominent bowel sounds that can enable real-time analysis for future GI disease detection and diagnosis. A total of 110 2-minute PEGs sampled at 44.1 kHz were recorded using the Eko DUO® stethoscope from eight healthy volunteers at two locations, namely, left upper quadrant (LUQ) and right lower quadrant (RLQ) after IRB approval. The datasets were annotated by trained physicians, categorizing BSs as prominent or obscure using version 1.7 of Label Studio Software®. Each BS recording was split up into 375 ms segments with 200 ms overlap for real-time BS detection. Each segment was binned based on whether it contained a prominent BS, resulting in a dataset of 36,149 non-prominent segments and 6435 prominent segments. Our dataset was divided into training, validation, and test sets (60/20/20% split). A 1D-CNN augmented transformer was trained to classify these segments via the input of Mel-frequency cepstral coefficients. The developed AI model achieved area under the receiver operating curve (ROC) of 0.92, accuracy of 86.6%, precision of 86.85%, and recall of 86.08%. 
This shows that the 1D-CNN augmented transformer with Mel-frequency cepstral coefficients achieved creditable performance metrics, signifying the YOLO model’s capability to classify prominent bowel sounds that can be further analyzed for various GI diseases. This proof-of-concept study in healthy volunteers demonstrates that automated BS detection can pave the way for developing more intuitive and efficient AI-PEG devices that can be trained and utilized to diagnose various GI conditions. To ensure the robustness and generalizability of these findings, further investigations encompassing a broader cohort, inclusive of both healthy and disease states, are needed. Full article
(This article belongs to the Special Issue Biomedical Signals, Images and Healthcare Data Analysis: 2nd Edition)
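The abstract's windowing parameters imply a concrete segmentation scheme. A minimal sketch, assuming nothing beyond the stated 44.1 kHz rate, 375 ms segments, and 200 ms overlap (hence a 175 ms hop); this is an illustration, not the authors' pipeline:

```python
import numpy as np

SR = 44_100                      # sampling rate reported in the abstract (Hz)
WIN_S = 0.375                    # 375 ms analysis segment
HOP_S = 0.175                    # 200 ms overlap -> 175 ms hop between segments

def segment(signal: np.ndarray, sr: int = SR) -> np.ndarray:
    """Split a 1-D recording into overlapping fixed-length segments."""
    win, hop = int(sr * WIN_S), int(sr * HOP_S)
    # each row is one candidate segment for the MFCC front end
    return np.lib.stride_tricks.sliding_window_view(signal, win)[::hop]

# a simulated 2-minute phonoenterogram recording
pcm = np.zeros(120 * SR, dtype=np.float32)
segments = segment(pcm)
print(segments.shape)            # -> (684, 16537)
```

Each row would then be converted to Mel-frequency cepstral coefficients and classified as prominent or non-prominent.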

14 pages, 1974 KiB  
Article
Effect of Transducer Burn-In on Subjective and Objective Parameters of Loudspeakers
by Tomasz Kopciński, Bartłomiej Kruk and Jan Kucharczyk
Appl. Sci. 2025, 15(15), 8425; https://doi.org/10.3390/app15158425 - 29 Jul 2025
Abstract
Speaker burn-in is a controversial practice in the audio world, based on the belief that new devices reach optimal performance only after a certain period of use. Supporters claim it improves component flexibility, reduces initial distortion, and enhances sound quality—especially in the low-frequency range. Critics, however, emphasize the lack of scientific evidence for audible changes and point to the placebo effect in subjective listening tests. They argue that modern manufacturing and strict quality control minimize differences between new and “burned-in” devices. This study cites a standard describing a preliminary burn-in procedure, specifying the exact conditions and duration required. Objective tests revealed slight changes in speaker impedance and amplitude response after burn-in, but these differences are inaudible to the average listener. Notably, significant variation was observed between speakers of the same series, attributed to production line tolerances rather than use-related changes. The study also explored aging processes in speaker materials to better understand potential long-term effects. However, subjective listening tests showed that listeners rated the sound consistently across all test cases, regardless of whether the speaker had undergone burn-in. Overall, while minor physical changes may occur, their audible impact is negligible, especially for non-expert users. Full article

24 pages, 4226 KiB  
Article
Digital Signal Processing of the Inharmonic Complex Tone
by Tatjana Miljković, Jelena Ćertić, Miloš Bjelić and Dragana Šumarac Pavlović
Appl. Sci. 2025, 15(15), 8293; https://doi.org/10.3390/app15158293 - 25 Jul 2025
Abstract
In this paper, a set of digital signal processing (DSP) procedures tailored for the analysis of complex musical tones with prominent inharmonicity is presented. These procedures are implemented within a MATLAB-based application and organized into three submodules. The application follows a structured DSP chain: basic signal manipulation; spectral content analysis; estimation of the inharmonicity coefficient and the number of prominent partials; design of a dedicated filter bank; signal decomposition into subchannels; subchannel analysis and envelope extraction; and, finally, recombination of the subchannels into a wideband signal. Each stage in the chain is described in detail, and the overall process is demonstrated through representative examples. The concept and the accompanying application are initially intended for rapid post-processing of recorded signals, offering a tool for enhanced signal annotation. Additionally, the built-in features for subchannel manipulation and recombination enable the preparation of stimuli for perceptual listening tests. The procedures have been tested on a set of recorded tones from various string instruments, including those with pronounced inharmonicity, such as the piano, harp, and harpsichord. Full article
(This article belongs to the Special Issue Musical Acoustics and Sound Perception)
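For context on the inharmonicity coefficient mentioned above: a standard stiff-string model (a textbook formulation, not necessarily what the MATLAB application implements) places partial n at f_n = n·f0·sqrt(1 + B·n²), so B can be recovered from measured partials with a linear fit:

```python
import numpy as np

def partial_freqs(f0: float, B: float, n_partials: int) -> np.ndarray:
    """Stiff-string partials: f_n = n * f0 * sqrt(1 + B * n^2)."""
    n = np.arange(1, n_partials + 1)
    return n * f0 * np.sqrt(1.0 + B * n**2)

def estimate_B(freqs: np.ndarray, f0: float) -> float:
    """Recover B: (f_n / (n * f0))^2 - 1 is linear in n^2, with slope B."""
    n = np.arange(1, len(freqs) + 1)
    y = (freqs / (n * f0)) ** 2 - 1.0
    return float(np.polyfit(n**2, y, 1)[0])

true_B = 4e-4                                    # illustrative piano-range value
f = partial_freqs(110.0, true_B, 20)
print(round(estimate_B(f, 110.0), 6))            # -> 0.0004
```

In practice the partial frequencies would come from the spectral-analysis submodule rather than a synthetic model.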

7 pages, 426 KiB  
Proceeding Paper
Using Artificial Intelligence to Support Students in Developing Startup Products in English as a Foreign Language Course
by Wen-Chi Hu and Shih-Tsung Hsu
Eng. Proc. 2025, 98(1), 23; https://doi.org/10.3390/engproc2025098023 - 27 Jun 2025
Abstract
We explored the use of artificial intelligence (AI) in enhancing the English proficiency of students in an English as a Foreign Language (EFL) course through a startup product development curriculum. In the course, real-world business scenarios of startup companies were offered for students to analyze English communication skills on crowdfunding platforms and in product promotional videos. The EFL students used entrepreneurial skills to create and present their product videos in teams to the class, who acted as potential investors. Pre- and post-test analyses were conducted to assess the impact of AI-assisted learning on English listening and reading ability. Significant improvements were observed, suggesting that the AI-enhanced entrepreneurial experience strengthened the EFL students' listening and reading abilities. Full article

15 pages, 1545 KiB  
Article
Speech Recognition in Noise: Analyzing Phoneme, Syllable, and Word-Based Scoring Methods and Their Interaction with Hearing Loss
by Saransh Jain, Vijaya Kumar Narne, Bharani, Hema Valayutham, Thejaswini Madan, Sunil Kumar Ravi and Chandni Jain
Diagnostics 2025, 15(13), 1619; https://doi.org/10.3390/diagnostics15131619 - 26 Jun 2025
Abstract
Introduction: This study compared phoneme-, syllable-, and word-based scoring methods during word recognition in noise testing and examined their interaction with hearing loss severity. These scoring methods provide a structured framework for refining clinical audiological diagnosis by revealing underlying auditory processing at multiple linguistic levels. We highlight how scoring differences inform differential diagnosis and guide targeted audiological interventions. Methods: Pure tone audiometry and word-in-noise testing were conducted on 100 subjects with a wide range of hearing loss severity. Speech recognition was scored using phoneme-, syllable-, and word-based methods. All procedures were designed to reflect standard diagnostic protocols in clinical audiology. Discriminant function analysis examined how these scoring methods differentiate the degree of hearing loss. Results: Each method provided unique information about auditory processing: phoneme-based scoring indexed basic auditory discrimination, syllable-based scoring captured temporal and phonological processing, and word-based scoring reflected real-world listening conditions by incorporating contextual knowledge. These findings emphasize the diagnostic value of each scoring approach in clinical settings, aiding differential diagnosis and treatment planning. Conclusions: This study showed that the choice of scoring method affects how hearing loss severity is differentiated. We recommend integrating phoneme-based scoring into standard diagnostic batteries to enhance early detection and personalize rehabilitation strategies. Future research should examine integration with other speech perception tests and applicability across different clinical settings. Full article

15 pages, 1258 KiB  
Article
Are Children Sensitive to Ironic Prosody? A Novel Task to Settle the Issue
by Francesca Panzeri and Beatrice Giustolisi
Languages 2025, 10(7), 152; https://doi.org/10.3390/languages10070152 - 25 Jun 2025
Abstract
Ironic remarks are often pronounced with a distinctive intonation. It is not clear whether children rely on acoustic cues to attribute an ironic intent. This question has been only indirectly tackled, with studies that manipulated the intonation with which the final remark is pronounced within an irony comprehension task. We propose a new task that is meant to assess whether children rely on prosody to infer speakers’ sincere or ironic communicative intentions, without requiring meta-linguistic judgments (since pragmatic awareness is challenging for young children). Children listen to evaluative remarks (e.g., “That house is really beautiful”), pronounced with sincere or ironic intonation, and they are asked to identify what the speaker is referring to by selecting one of two pictures depicting an image corresponding to a literal interpretation (a luxury house) and one to its reverse interpretation (a hovel). We tested eighty children aged 3 to 11 years and found a clear developmental trend, with children consistently responding above the chance level from age seven, and there was no correlation with the recognition of emotions transmitted through the vocal channel. Full article
(This article belongs to the Special Issue Advances in the Acquisition of Prosody)
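"Above the chance level" in a two-picture task like this one is typically established with an exact binomial test. A minimal sketch; the 14-of-16 trial counts are hypothetical, not taken from the study:

```python
from math import comb

def binom_tail(k: int, n: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): one-sided test against chance."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# hypothetical child: 14 correct picture choices out of 16 two-alternative trials
print(round(binom_tail(14, 16), 4))   # -> 0.0021
```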

28 pages, 1093 KiB  
Article
Blended Phonetic Training with HVPT Features for EFL Children: Effects on L2 Perception and Listening Comprehension
by KyungA Lee and Hyunkee Ahn
Languages 2025, 10(6), 122; https://doi.org/10.3390/languages10060122 - 26 May 2025
Abstract
Despite being fundamental for speech processing, L2 perceptual training often receives little attention in L2 classrooms, especially among English as a Foreign Language (EFL) learners navigating complex English phonology. The current study investigates the impact of a blended phonetic training program incorporating HVPT features on L2 perception and listening comprehension in Korean elementary EFL learners. Fifty-seven learners, aged 11 to 12 years, participated in a four-week intervention program. They were trained on 13 consonant phonemes that are challenging for Korean learners, using multimedia tools for practice. Pre- and posttests assessed L2 perception and listening comprehension, and learners were grouped into three proficiency levels based on the listening comprehension tests. The results showed significant improvements in L2 perception (p = 0.01), with a small effect size, and in listening comprehension (p < 0.001), with small-to-medium effect sizes. The lower-proficiency students demonstrated the largest gains. L2 perception and listening comprehension were correlated at both pretest (r = 0.427 **) and posttest (r = 0.479 ***). The findings underscore the importance of integrating explicit phonetic instruction with HVPT to enhance L2 listening skills among EFL learners. Full article
(This article belongs to the Special Issue L2 Speech Perception and Production in the Globalized World)

14 pages, 2755 KiB  
Article
Objective Detection of Auditory Steady-State Responses (ASSRs) Based on Mutual Information: Receiver Operating Characteristics and Performance Across Modulation Rates and Levels
by Gavin M. Bidelman and Claire McElwain Horn
Audiol. Res. 2025, 15(3), 60; https://doi.org/10.3390/audiolres15030060 - 15 May 2025
Abstract
Background: Auditory steady-state responses (ASSRs) are sustained potentials used to assess the physiological integrity of the auditory pathway and objectively estimate hearing thresholds. ASSRs are typically analyzed using statistical procedures to remove the subjective bias of human operators. Knowing when to terminate signal averaging in ASSR testing is critical for making efficient clinical decisions and obtaining high-quality data in empirical research. Here, we report on stimulus-specific (frequency, level) properties and operating ranges of a novel ASSR detection metric based on mutual information (MI). Methods: ASSRs were measured in n = 10 normal-hearing listeners exposed to various stimuli varying in modulation rate (40, 80 Hz) and level (80–20 dB SPL). Results: MI-based classifiers applied to ASSR recordings showed that the accuracy of ASSR detection ranged from ~75 to 99% and was better for 40 compared to 80 Hz responses and for higher compared to lower stimulus levels. Receiver operating characteristics (ROCs) were used to establish normative ranges for MI for reliable ASSR detection across levels and rates (MI = 0.9–1.6). Relative to current statistics for ASSR identification (F-test), MI was a more efficient metric for determining the stopping criterion for signal averaging. Conclusions: Our results confirm that MI can be applied across a broad range of ASSR stimuli and might offer improvements to conventional objective techniques for ASSR detection. Full article
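The abstract does not specify the MI estimator, so the sketch below uses a common histogram (plug-in) estimate, applied to a simulated 40 Hz reference and a noisy phase-locked response; all signal parameters are illustrative, not the study's recording setup:

```python
import numpy as np

def mutual_info(x: np.ndarray, y: np.ndarray, bins: int = 16) -> float:
    """Histogram (plug-in) estimate of mutual information in bits."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)          # marginal of x
    py = pxy.sum(axis=0, keepdims=True)          # marginal of y
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(1)
t = np.arange(0, 4, 1 / 1000)                    # 4 s at 1 kHz
stim = np.sin(2 * np.pi * 40 * t)                # 40 Hz modulation reference
assr = stim + rng.standard_normal(t.size)        # phase-locked response + noise
noise_only = rng.standard_normal(t.size)         # no response present

print(mutual_info(stim, assr) > mutual_info(stim, noise_only))   # -> True
```

A detection rule would compare the running MI against a normative threshold (the paper reports MI = 0.9–1.6 for reliable detection) to decide when to stop averaging.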

28 pages, 19935 KiB  
Article
Effects of Violin Back Arch Height Variations on Auditory Perception
by Luca Jost, Mehmet Ercan Altinsoy and Hannes Vereecke
Acoustics 2025, 7(2), 27; https://doi.org/10.3390/acoustics7020027 - 14 May 2025
Abstract
One of the quintessential goals of musical instrument acoustics is to improve the perceived sound produced by, e.g., a violin. To achieve this, the connections between physical (mechanical and geometrical) properties and perceived sound output need to be understood. In this article, a single facet of this complex problem will be discussed using experimental results obtained for six violins of varying back arch height. This is the first investigation of its kind to focus on back arch height. It may serve to inform instrument makers and researchers alike about the variation in sound that can be achieved by varying this parameter. The test instruments were constructed using state-of-the-art methodology to best represent the theoretical case of changing back arch height on a single instrument. Three values of back arch height (12.1, 14.8 and 17.5 mm) were investigated. The subsequent perceptual tests consisted of a free sorting task in the playing situation and three two-alternative forced choice listening tests. The descriptors “round” and “warm” were found to be linked to back arch height. The trend was non-linear, meaning that both low- and high-arch height instruments were rated as possessing more of these descriptors than their medium-arch height counterparts. Additional results were obtained using stimuli created by hybrid synthesis. However, these could not be linked to those using real playing or recordings. The results of this study serve to inform violin makers about the relative importance of back arch height and its specific influence on sound output. The discussion of the applied methodology and interpretation of results may serve to inform researchers about important new directions in the field of musical instrument acoustics. Full article

12 pages, 517 KiB  
Article
Preliminary Investigation of a Novel Measure of Speech Recognition in Noise
by Linda Thibodeau, Emma Freeman, Kristin Kronenberger, Emily Suarez, Hyun-Woong Kim, Shuang Qi and Yune Sang Lee
Audiol. Res. 2025, 15(3), 59; https://doi.org/10.3390/audiolres15030059 - 13 May 2025
Abstract
Background/Objectives: Previous research has shown that listeners may use acoustic cues for speech processing that are perceived during brief segments in the noise when there is an optimal signal-to-noise ratio (SNR). This “glimpsing” effect requires higher cognitive skills than the speech tasks used in typical audiometric evaluations. Purpose: The aim of this study was to investigate the use of an online test of speech processing in noise in listeners with typical hearing sensitivity (TH, defined as thresholds ≤ 25 dB HL) who were asked to determine the gender of the subject in sentences that were presented in increasing levels of continuous and interrupted noise. Methods: This was a repeated-measures design with three factors (SNR, noise type, and syntactic complexity). Study Sample: Participants with self-reported TH (N = 153, ages 18–39 years, mean age = 20.7 years) who passed an online hearing screening were invited to complete an online questionnaire. Data Collection and Analysis: Participants completed a sentence recognition task under four SNRs (−6, −9, −12, and −15 dB), two syntactic complexity settings (subjective-relative and objective-relative center-embedded), and two noise types (interrupted and continuous). They were asked to listen to 64 sentences through their own headphones/earphones that were presented in an online format at a user-selected comfortable listening level. Their task was to identify the gender of the person performing the action in each sentence. Results: Significant main effects of all three factors as well as the SNR by noise-type two-way interaction were identified (p < 0.05). This interaction indicated that the effect of SNR on sentence comprehension was more pronounced in the continuous noise compared to the interrupted noise condition. Conclusions: Listeners with self-reported TH benefited from the glimpsing effect in the interrupted noise even under low SNRs (i.e., −15 dB). 
The evaluation of glimpsing may be a sensitive measure of auditory processing beyond the traditional word recognition used in clinical evaluations in persons who report hearing challenges and may hold promise for the development of auditory training programs. Full article
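Presenting sentences at fixed SNRs such as −6 to −15 dB amounts to scaling the masker relative to the speech. A minimal sketch of that scaling, with white noise standing in for the actual speech and babble materials:

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the masker so speech + masker sits at the requested SNR (dB)."""
    ps = np.mean(speech**2)                      # speech power
    pn = np.mean(noise**2)                       # masker power before scaling
    gain = np.sqrt(ps / (pn * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(0)
speech = rng.standard_normal(48_000)             # stand-in for a target sentence
masker = rng.standard_normal(48_000)             # stand-in for continuous babble
mixed = mix_at_snr(speech, masker, -15.0)

realized = 10 * np.log10(np.mean(speech**2) / np.mean((mixed - speech) ** 2))
print(round(realized, 1))                        # -> -15.0
```

An interrupted masker would additionally gate the scaled noise on and off, creating the brief favorable-SNR "glimpses" the study exploits.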

23 pages, 1764 KiB  
Article
Ergogenic Effects of Combined Caffeine Supplementation and Motivational Music on Anaerobic Performance in Female Handball Players: A Randomized Double-Blind Controlled Trial
by Houda Bougrine, Thierry Paillard, Nidhal Jebabli, Halil İbrahim Ceylan, Julien Maitre, Ismail Dergaa, Valentina Stefanica and Abderraouf Ben Abderrahman
Nutrients 2025, 17(10), 1613; https://doi.org/10.3390/nu17101613 - 8 May 2025
Abstract
Listening to self-selected motivational music (SSMM) during warm-ups and caffeine (CAF) intake prior to exercise can independently enhance athletic performance among female athletes. However, the potential synergistic effects of these interventions have not yet been thoroughly examined. Objective: The purpose of the study was to assess the independent and combined effects of SSMM during warm-up and pre-exercise CAF intake on maximal short-duration performance in female athletes. Methods: Seventeen female handball players (aged 16.7 ± 0.4 years) participated in a randomized, double-blind, crossover study. Each athlete completed four conditions: (i) placebo (PLA) with no interventions, (ii) music and placebo (MUS), (iii) caffeine intake only (CAF), and (iv) a combination of music and caffeine (MUS + CAF). Performance assessments included the countermovement jump (CMJ), modified agility t-test (MAT), repeated-sprint ability (RSA) test (mean and peak sprint performance), and rating of perceived exertion (RPE). Results: The MUS (p > 0.05; p < 0.01; p < 0.01; p < 0.001, respectively), CAF (all p < 0.001), and MUS + CAF (all p < 0.01) conditions significantly outperformed the PLA condition in CMJ, MAT, RSA mean, and RSA peak measures. No significant differences were observed between the CAF and MUS + CAF conditions; however, the best performances were recorded during MUS + CAF. RPE scores remained consistent across conditions. Conclusions: Warm-up routines incorporating either SSMM or a moderate dose of CAF (6 mg·kg−1) enhance anaerobic performance in female athletes. While both interventions are effective independently, CAF intake elicits a stronger effect. Although the combination did not differ significantly from CAF alone, pairing SSMM with CAF produced the best observed performances and may be the most effective strategy for optimizing anaerobic performance. Full article
(This article belongs to the Section Sports Nutrition)

17 pages, 1503 KiB  
Article
The Influence of Language Experience on Speech Perception: Heritage Spanish Speaker Perception of Contrastive and Allophonic Consonants
by Amanda Boomershine and Keith Johnson
Languages 2025, 10(5), 86; https://doi.org/10.3390/languages10050086 - 23 Apr 2025
Abstract
It is well known that a listener’s native phonological background has an impact on how speech sounds are perceived. Native speakers can distinguish sounds that serve a contrastive function in their language better than sounds that are not contrastive. However, the role of allophony in speech perception is understudied, especially among heritage speakers. This paper presents a study that directly tests the influence of the allophonic/phonemic distinction on perception by Spanish heritage speakers, comparing their results to those of late bilingual and monolingual speakers of Spanish and English in the US. Building on an earlier study, the unique contribution of this paper is an examination of the perceptual pattern shown by heritage speakers of Spanish, compared with bilingual and monolingual speakers of English and Spanish. The participants completed a similarity rating task with stimuli containing VCV sequences with the intervocalic consonants [d], [ð], and [ɾ]. The heritage speakers, who are early sequential bilinguals of Spanish and English, showed a perceptual pattern that is more like that of monolingual Spanish listeners than monolingual English listeners, but still intermediate between the two monolingual groups. Specifically, they perceived [d]/[ɾ] like the L1 Spanish participants, treating them as very different sounds. They perceived the pair [d]/[ð], which is contrastive in English but allophonic in Spanish, like the L1 Spanish participants, as fairly similar sounds. Finally, heritage speakers perceived [ɾ]/[ð], contrastive in both languages, as very different sounds, like all other participant groups. The results underscore both the importance of surface oppositions, suggesting the need to reconsider the traditional definition of contrast, and the importance of considering the level and age of exposure to the second language when studying the perception of sounds by bilingual speakers. Full article
(This article belongs to the Special Issue Phonetics and Phonology of Ibero-Romance Languages)

29 pages, 3169 KiB  
Review
Recent Developments in Investigating and Understanding Impact Sound Annoyance—A Literature Review
by Martina Marija Vrhovnik and Rok Prislan
Acoustics 2025, 7(2), 21; https://doi.org/10.3390/acoustics7020021 - 14 Apr 2025
Abstract
Impact sound, which is particularly prevalent indoors, is a major source of annoyance, necessitating a deeper and more comprehensive understanding of its implications. This literature review provides a systematic overview of recent research developments in the study of impact sound annoyance, focusing on advances in the assessment of impact sound perception through laboratory listening tests and standardization efforts. The review summarizes the listening setup, assessment procedure, and key findings of each study. The studied correlations between single-number quantities (SNQs) and annoyance ratings are summarized, and key research challenges are highlighted. Among the studies, considerable research effort has focused on the assessment of walking impact sound and the use of spectrum adaptation terms, albeit with inconsistent outcomes. Comparison with the previous literature also shows the influence of the spatial and temporal characteristics of impact sound sources on perceived annoyance, with higher spatial fidelity leading to higher annoyance ratings. Furthermore, it has been shown that the consideration of non-acoustic factors, such as noise sensitivity and visual features, is important for the assessment. Overall, this review offers a comprehensive account of recent advances in the understanding and assessment of impact sound annoyance and outlines directions for future research and standardization efforts. Full article
(This article belongs to the Special Issue Vibration and Noise (2nd Edition))
