Search Results (93)

Search Parameters:
Keywords = listening difficulties

17 pages, 506 KiB  
Article
The Use of Filled Pauses Across Multiple Discourse Contexts in Children Who Are Hard of Hearing and Children with Typical Hearing
by Charlotte Hilker, Jacob J. Oleson, Mariia Tertyshnaia, Ryan W. McCreery and Elizabeth A. Walker
Behav. Sci. 2025, 15(8), 1053; https://doi.org/10.3390/bs15081053 - 4 Aug 2025
Viewed by 46
Abstract
Filled pauses are thought to be reflections of linguistic processes (e.g., lexical retrieval, speech planning and execution). Uh may be a self-directed cue indicating that a speaker needs more time to retrieve lexical–semantic representations, whereas um serves as a listener-directed, pragmatic cue. The use of filled pauses has not been examined in children who are hard of hearing (CHH). Participants included 68 CHH and 33 children with typical hearing (CTH). Participants engaged in conversations, expository discourse, and fable retells. We analyzed filled pauses as a function of hearing status and discourse contexts and evaluated the relationship between filled pauses and language ability. CHH produced uh across discourse contexts more often than their hearing peers. CHH did not differ in their use of um relative to CTH. Both um and uh were used more often in conversational samples compared to other types of discourse. Spearman’s correlations did not show any significant associations between the rate of filled pauses and standardized language scores. These results indicate that CHH produce uh more often than CTH, suggesting that they may have difficulty retrieving lexical–semantic items during ongoing speech. This information may be useful for interventionists who are collecting language samples during assessment. Full article
(This article belongs to the Special Issue Language and Cognitive Development in Deaf Children)
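The correlational analysis mentioned in this abstract (Spearman's rho between filled-pause rate and standardized language scores) can be sketched in a few lines; the snippet below is a minimal illustration assuming a per-child data table with hypothetical file and column names, not the authors' analysis code.

```python
# Minimal sketch of the Spearman correlation described in the abstract.
# The CSV file and column names are hypothetical placeholders, not the study's data.
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("language_samples.csv")   # one row per child (hypothetical)
rho, p = spearmanr(df["filled_pause_rate"], df["standardized_language_score"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```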

11 pages, 230 KiB  
Article
Hearing and Listening Difficulties in High Schools and Universities: The Results of an Exploratory Survey of a Large Number of Students and Teachers in the Friuli-Venezia Giulia and Umbria Regions, Italy
by Valeria Gambacorta, Davide Stivalini, Niccolò Granieri, Raffaella Marchi, Alessia Fabbri, Pasquale Viola, Alessia Astorina, Ambra Fastelli, Giampietro Ricci and Eva Orzan
Audiol. Res. 2025, 15(3), 66; https://doi.org/10.3390/audiolres15030066 - 6 Jun 2025
Viewed by 547
Abstract
Background/Objectives: With the aim of describing how students and their teachers perceive and define their hearing and auditory experience in the classroom, we present the results of a questionnaire that examined the listening challenges faced by students and teachers at the University of Perugia and in four secondary schools in Friuli-Venezia Giulia, Italy. Methods: A survey was developed as part of the A.Ba.Co. project (Overcoming Communication Barriers). Closed and open-ended questions were used to collect and analyze the responses of students and teachers regarding diagnosed or only perceived hearing difficulties in daily life and the quality of listening in school classes. Results: Hearing difficulties, either clinically diagnosed or only perceived, were reported by 8–9% of students. Among teachers, reported hearing difficulties were 27.1% in high school and 12% at university (p < 0.001). The most frequent reason for less-than-optimal ease of listening in class differed between the two educational levels; 45.8% of high school students blamed it on the noise in the room compared to 18.2% of university students (p < 0.001). Conversely, 40.9% of university students connected listening difficulty with their position in the classroom compared to 9.5% (101/1065) of high school students (p < 0.001). Conclusions: Although the minimum acoustic requirements for educational facilities have been established by the UNI 11532-2 standard, it is speculated that the majority of high school and university classrooms in Italy do not meet optimal listening conditions. Furthermore, the reasons for students’ poor listening quality appear not to be fully understood by either students or teachers. In addition to the need for greater attention to physical learning spaces (advocating universal design principles), effective change will also need to involve a greater awareness of what the barriers to listening are and how much they influence both teaching and learning quality and effectiveness. Full article
25 pages, 3131 KiB  
Article
Evaluating the Clinical- and Cost-Effectiveness of Cochlear Implant Sound Processor Upgrades in Older Adults: Outcomes from a Large Australian Multicenter Study
by Paola Vittoria Incerti, Jermy Pang, Jason Gavrilis, Vicky W. Zhang, Jessica Tsiolkas, Rajan Sharma, Elizabeth Seil, Antonio Ahumada-Canale, Bonny Parkinson and Padraig Thomas Kitterick
J. Clin. Med. 2025, 14(11), 3765; https://doi.org/10.3390/jcm14113765 - 28 May 2025
Viewed by 1028
Abstract
Background: Many older Australian adults with cochlear implants (CIs) lack funding for replacement sound processors, risking complete device failure and reduced quality of life. The need for replacement CI devices for individuals with obsolete sound processors and no access to funding poses an increasing public health challenge in Australia and worldwide. We aimed to investigate the clinical and cost-effectiveness of upgrading obsolete CI sound processors in older adults. Methods: Alongside an Australian Government-funded upgrade program, a prospective, mixed-methodology study was undertaken. Participants were aged 65 and over, with obsolete Cochlear™ sound processors and no funding for replacements. This study compared speech perception in noise, as well as self-reported outcome measures, including cognition, listening effort, fatigue, device benefit, mental well-being, participation, empowerment and user experiences, between upgraded and obsolete sound processors. The economic impact of the upgrade was evaluated using two state-transition microsimulation models of adults using CIs. Results: The multi-site study ran from 20 May 2021 to 21 April 2023, with recruitment from June 2021 to May 2022. A total of 340 Cochlear™ sound processors were upgraded in 304 adults. The adults’ mean age was 77.4 years (SD 6.6), and 48.5% were female. Hearing loss onset occurred on average at 30 years (SD 21.0), with 12 years (SD 6.2) of CI use. The outcomes show significant improvements in speech understanding in noise, as well as reductions in communication difficulties, self-reported listening effort, and fatigue. Semi-structured interviews revealed that upgrades alleviated the anxiety and fear of sudden processor failure. Health economic analysis found that the cost-effectiveness of upgrades stemmed from preventing device failures, rather than from access to newer technology features. Conclusions: Our study identified significant clinical and self-reported benefits from upgrading Cochlear™ sound processors. Economic value came from avoiding scenarios where a total failure of the device renders its user unable to access sound. The evidence gathered can be used to inform policy on CI processor upgrades for older adults. Full article
(This article belongs to the Special Issue The Challenges and Prospects in Cochlear Implantation)

17 pages, 1017 KiB  
Article
Using Voice-to-Text Transcription to Examine Outcomes of AirPods Pro Receivers When Used as Part of Remote Microphone System
by Shuang Qi and Linda Thibodeau
Appl. Sci. 2025, 15(10), 5451; https://doi.org/10.3390/app15105451 - 13 May 2025
Viewed by 491
Abstract
Hearing difficulty in noise can occur in 10–15% of listeners with typical hearing in the general population of the United States. Using one’s smartphone as a remote microphone (RM) system with the AirPods Pro (AP) may be considered an assistive device given its wide availability and potentially lower price. To evaluate this possibility, the accuracy of voice-to-text transcription for sentences presented in noise was compared, when KEMAR wore an AP receiver connected to an iPhone set to function as an RM system, to the accuracy obtained when it wore a sophisticated Phonak Roger RM system. A ten-sentence list was presented for six technology arrangements at three signal-to-noise ratios (SNRs; +5, 0, and −5 dB) in two types of noise (speech-shaped and babble noise). Each sentence was transcribed by Otter AI to obtain an overall percent accuracy for each condition. At the most challenging SNR (−5 dB SNR) across both noise types, the Roger system and smartphone/AP set to noise cancelation mode showed significantly higher accuracy relative to the condition when the smartphone/AP was in transparency mode. However, the major limitation of Bluetooth signal delay when using the AP/smartphone system would require further investigation in real-world settings with human users. Full article
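The percent-accuracy measure reported here can be approximated with a simple word-matching routine; the function below is a hypothetical sketch (unordered word-level matching against the reference sentence) and not the scoring pipeline used in the study.

```python
# Hypothetical sketch of a percent-words-correct score for one transcribed sentence;
# this illustrates the kind of accuracy measure described, not the study's method.
def percent_words_correct(reference: str, transcription: str) -> float:
    """Percentage of reference words that appear in the transcription."""
    ref_words = reference.lower().split()
    remaining = transcription.lower().split()
    hits = 0
    for word in ref_words:
        if word in remaining:
            hits += 1
            remaining.remove(word)  # each transcribed word may match only once
    return 100.0 * hits / len(ref_words) if ref_words else 0.0

# Example with made-up strings: one sentence from one noise condition.
print(percent_words_correct("the boat sailed across the lake",
                            "the boat sail across the lake"))  # ≈ 83.3
```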

18 pages, 292 KiB  
Article
Listen or Read? The Impact of Proficiency and Visual Complexity on Learners’ Reliance on Captions
by Yan Li
Behav. Sci. 2025, 15(4), 542; https://doi.org/10.3390/bs15040542 - 17 Apr 2025
Viewed by 707
Abstract
This study investigates how Chinese EFL (English as a foreign language) learners of low- and high-proficiency levels allocate attention between captions and audio while watching videos, and how visual complexity (single- vs. multi-speaker content) influences caption reliance. The study employed a novel paused transcription method to assess real-time processing. A total of 64 participants (31 low-proficiency [A1–A2] and 33 high-proficiency [C1–C2] learners) viewed single- and multi-speaker videos with English captions. Misleading captions were inserted to objectively measure reliance on captions versus audio. Results revealed significant proficiency effects: Low-proficiency learners prioritized captions (reading scores > listening, Z = −4.55, p < 0.001, r = 0.82), while high-proficiency learners focused on audio (listening > reading, Z = −5.12, p < 0.001, r = 0.89). Multi-speaker videos amplified caption reliance for low-proficiency learners (r = 0.75) and moderately increased reliance for high-proficiency learners (r = 0.52). These findings demonstrate that low-proficiency learners rely overwhelmingly on captions during video viewing, while high-proficiency learners integrate multimodal inputs. Notably, increased visual complexity amplifies caption reliance across proficiency levels. Implications are twofold: Pedagogically, educators could design tiered caption removal protocols as skills improve while incorporating adjustable caption opacity tools. Technologically, future research could focus on developing dynamic captioning systems leveraging eye-tracking and AI to adapt to real-time proficiency, optimizing learning experiences. Additionally, video complexity should be calibrated to learners’ proficiency levels. Full article
(This article belongs to the Special Issue Educational Applications of Cognitive Psychology)
20 pages, 1460 KiB  
Article
Using Tangible User Interfaces (TUIs): Preliminary Evidence on Memory and Comprehension Skills in Children with Autism Spectrum Disorder
by Mariagiovanna De Luca, Ciro Rosario Ilardi, Pasquale Dolce, Angelo Rega, Raffaele Di Fuccio, Franco Rubinacci, Maria Gallucci and Paola Marangolo
Behav. Sci. 2025, 15(3), 267; https://doi.org/10.3390/bs15030267 - 25 Feb 2025
Viewed by 1026
Abstract
Autism spectrum disorder (ASD) is a complex neurodevelopmental condition involving persistent challenges with social communication, as well as memory and language comprehension difficulties. This study investigated the effects of a storytelling paradigm on language comprehension and memory skills in children with ASD. A traditional approach, using an illustrated book to deliver the narrative, was compared to a novel paradigm based on Tangible User Interfaces (TUIs) combined with multisensory stimulation. A group of 28 children (ages between 6 and 10 years old) was asked to listen to a story over four weeks, two times a week, in two different experimental conditions. The experimental group (n = 14) engaged with the story using TUIs, while the control group (n = 14) interacted with a corresponding illustrated book. Pre- and post-intervention assessments were conducted using NEPSY-II subtests on language comprehension and memory. At the end of the intervention, a trend of improved performance was found. In particular, a greater number of subjects benefited from the intervention in the experimental group compared with the control group in instruction comprehension and narrative memory-cued recall. These preliminary findings suggest that TUIs may enhance learning outcomes for children with ASD, warranting further investigation into their potential benefits. Full article
(This article belongs to the Special Issue Neural Correlates of Cognitive and Affective Processing)

18 pages, 3092 KiB  
Article
The Relations Between Sensory Modulation, Hyper Arousability and Psychopathology in Adolescents with Anxiety Disorders
by Ginan Hammud, Ayelet Avital-Magen, Hiba Jabareen, Reut Adler-Tsafir and Batya Engel-Yeger
Children 2025, 12(2), 187; https://doi.org/10.3390/children12020187 - 5 Feb 2025
Cited by 1 | Viewed by 2153
Abstract
Background: Sensory modulation may play a significant role in psychiatric conditions, including anxiety, and explain arousability levels, behavioral disorders, and functional deficits. Yet, studies about sensory modulation in adolescents with anxiety disorders are scarce. Purpose: To profile the prevalence of sensory modulation difficulties (SMDs) in adolescents with anxiety and examine their relations to arousability and psychopathology. The study compared adolescents with anxiety disorders to healthy controls using physiological measures and self-reports that reflect daily life scenarios. Then, the study examined the relationship between SMDs, arousability, and psychopathological severity in the study group. Method: Participants were 106 adolescents, aged 10.5–18 years, and their parents. The study group included 44 participants diagnosed with anxiety disorder by psychiatrists. The control group included 62 healthy participants matched by age and gender to the study group. Parents completed the demographic questionnaire and the Child Behavior Checklist (CBCL). The adolescents completed the Revised Children’s Manifest Anxiety Scale (RCMAS) and the Adolescent/Adult Sensory Profile (AASP) and underwent electrodermal activity (EDA) and pulse rate tests while listening to extreme auditory startle stimuli. Results: Based on the AASP, the study group had a higher prevalence of SMDs expressed in lower sensory seeking, difficulties in registering sensory stimuli, and higher sensory sensitivity and avoidance. The study group presented higher arousability while listening to the startles, as manifested in higher heart rate and EDA responses. The physiological results correlated with SMD levels measured by the AASP self-reports. SMDs correlated with psychopathological severity. Conclusions: SMDs may characterize adolescents with anxiety disorders and impact their arousability, symptom severity, and daily functioning. Therefore, sensory modulation should be evaluated using self-reports (to reflect real-life implications in patients’ own voices) alongside objective measures to explain daily behaviors by underlying physiological mechanisms. This may focus intervention towards better health, function, and development. Full article

18 pages, 1468 KiB  
Article
Eyes on the Pupil Size: Pupillary Response During Sentence Processing in Aphasia
by Christina Sen, Noelle Abbott, Niloofar Akhavan, Carolyn Baker and Tracy Love
Brain Sci. 2025, 15(2), 107; https://doi.org/10.3390/brainsci15020107 - 23 Jan 2025
Viewed by 953
Abstract
Background/Objectives: Individuals with chronic agrammatic aphasia demonstrate real-time sentence processing difficulties at the lexical and structural levels. Research using time-sensitive measures, such as priming and eye-tracking, has associated these difficulties with temporal delays in accessing semantic representations that are needed in real time during sentence structure building. In this study, we examined the real-time processing effort linked to sentence processing in individuals with aphasia and neurotypical, age-matched control participants as measured through pupil reactivity (i.e., pupillometry). Specifically, we investigated whether a semantically biased lexical cue (i.e., adjective) influences the processing effort while listening to complex noncanonical sentences. Methods: In this eye-tracking-while-listening study (within-subjects design), participants listened to sentences that contained either biased or unbiased adjectives (e.g., venomous snake vs. voracious snake) while viewing four images, three related to nouns in the sentence and one unrelated, but a plausible match for the unbiased adjective. Pupillary responses were collected every 17 ms throughout the entire sentence. Results: While age-matched controls demonstrated increased pupil response throughout the course of the sentence, individuals with aphasia showed a plateau in pupil response early on in the sentence. Nevertheless, both controls and individuals with aphasia demonstrated reduced processing effort in the biased adjective condition. Conclusions: Individuals with aphasia are sensitive to lexical–semantic cues despite impairments in real-time lexical activation during sentence processing. Full article
(This article belongs to the Collection Collection on Neurobiology of Language)

11 pages, 1174 KiB  
Article
Unilateral Versus Bilateral Cochlear Implants in Adults: A Cross-Sectional Questionnaire Study Across Multiple Hearing Domains
by Alessandra Pantaleo, Luigi Curatoli, Giada Cavallaro, Debora Auricchio, Alessandra Murri and Nicola Quaranta
Audiol. Res. 2025, 15(1), 6; https://doi.org/10.3390/audiolres15010006 - 20 Jan 2025
Viewed by 1357
Abstract
Aim: The aim of this study was to assess the subjective experiences of adults with different cochlear implant (CI) configurations—unilateral cochlear implant (UCI), bilateral cochlear implant (BCI), and bimodal stimulation (BM)—focusing on their perception of speech in quiet and noisy environments, music, environmental sounds, people’s voices and tinnitus. Methods: A cross-sectional survey of 130 adults who had undergone UCI, BCI, or BM was conducted. Participants completed a six-item online questionnaire, assessing difficulty levels and psychological impact across auditory domains, with responses measured on a 10-point scale. Statistical analyses were performed to compare the subjective experiences of the three groups. Results: Patients reported that understanding speech in noise and tinnitus perception were their main concerns. BCI users experienced fewer difficulties with understanding speech in both quiet (p < 0.001) and noisy (p = 0.008) environments and with perceiving non-vocal sounds (p = 0.038) compared to UCI and BM users; no significant differences were found for music perception (p = 0.099), tinnitus perception (p = 0.397), or voice naturalness (p = 0.157). BCI users also reported less annoyance in quiet (p = 0.004) and noisy (p = 0.047) environments, and in the perception of voices (p = 0.009) and non-vocal sounds (p = 0.019). Tinnitus-related psychological impact showed no significant differences between groups (p = 0.090). Conclusions: Although speech perception in noise and tinnitus remain major problems for CI users, the results of our study suggest that bilateral cochlear implantation offers significant subjective advantages over unilateral implantation and bimodal stimulation in adults, particularly in difficult listening environments. Full article

20 pages, 2569 KiB  
Article
Accentedness Perception in L2: An Investigation of Thai Monophthong Pronunciation of Chinese Students
by Peng Hou and Sarawut Kraisame
Languages 2025, 10(1), 11; https://doi.org/10.3390/languages10010011 - 15 Jan 2025
Viewed by 1072
Abstract
This paper aims to investigate the Thai monophthong pronunciation of Chinese students speaking Thai as a second language (L2), and to examine how native Thai listeners perceived these Chinese-accented Thai monophthongs. This study involves an acoustic analysis of the duration and quality of the Thai monophthongs articulated by Chinese students of Thai (n = 15) in a picture description task. The participants exhibited varying proficiency levels in different monophthongs, with the greatest difficulty being with Thai back monophthongs and certain central monophthongs, including /ɔ, ɔː/, /o, oː/, and /ɤː/. Moreover, a perception experiment among 30 native Thai listeners showed that Chinese students’ pronunciation of Thai monophthongs had varying levels of impact on accentedness perception. Specifically, /ɯ/, /ɤ/, /o/, /ɔ/, and their long counterparts significantly influenced accentedness perception. Conversely, /i/, /e/, /ɛ/, /a/, /u/, and their long counterparts showed less robustness in predicting the level of accentedness. Within the Thai monophthong inventory, teachers should prioritize the monophthongs that significantly influence accentedness perception when teaching Thai pronunciation to Chinese students. Full article

13 pages, 1049 KiB  
Review
An Overview of Dentist–Patient Communication in Quality Dental Care
by Jasmine Cheuk Ying Ho, Hollis Haotian Chai, Bella Weijia Luo, Edward Chin Man Lo, Michelle Zeping Huang and Chun Hung Chu
Dent. J. 2025, 13(1), 31; https://doi.org/10.3390/dj13010031 - 14 Jan 2025
Cited by 7 | Viewed by 6469
Abstract
Dentist–patient communication is at the core of providing quality dental care. This study aims to review the importance, challenges, strategies, and training of dentist–patient communication. The World Dental Federation (FDI) emphasizes the importance of effective communication between oral healthcare providers and patients as a critical component of high-quality care. Effective dentist–patient communication allows dentists to accurately and effectively pass on essential medical information to patients. It improves the dentist’s efficiency, boosts self-confidence, reduces occupational stress, and minimizes the risks of complaint or litigation. Moreover, it alleviates dental anxiety and fear, helps build trust between dentists and patients, addresses patients’ needs and preferences, increases patients’ adherence to improved treatment outcomes, and ultimately leads to enhanced patient satisfaction. Nonetheless, it has been widely acknowledged that dentists universally encounter the repercussions arising from suboptimal communication strategies. Time constraints, difficulties in establishing rapport, the oral-health illiteracy of the patients, the poor communication skills of the dentists, dentists’ perceptions, and language barriers often hinder dentist–patient communication. Dentists should take the patient-centered approach as a premise and acquire verbal and non-verbal communication skills to overcome these communication barriers. The patient-centered approach comprises the understanding of patients’ illness, shared decision-making, and intervention with mindfulness of the patient’s own pace. Simple, succinct, and jargon-free language should be used in verbal communication. Proper body postures and gestures are fundamental for showing positive attitudes towards patients. Communication training for dental students should involve a structured pedagogical approach that includes didactic instruction, role-playing exercises, patient interviewing, and ongoing assessments. Key components of effective communication skills training in dental education include motivational interviewing, open-ended questioning, affirmations, reflective listening, and summaries to enhance patient engagement and adherence to treatment plans. Full article
(This article belongs to the Topic Preventive Dentistry and Public Health)

34 pages, 740 KiB  
Systematic Review
Exploring the Intersection of ADHD and Music: A Systematic Review
by Phoebe Saville, Caitlin Kinney, Annie Heiderscheit and Hubertus Himmerich
Behav. Sci. 2025, 15(1), 65; https://doi.org/10.3390/bs15010065 - 13 Jan 2025
Viewed by 9988
Abstract
Attention Deficit Hyperactivity Disorder (ADHD) is a highly prevalent neurodevelopmental disorder, affecting both children and adults, which often leads to significant difficulties with attention, impulsivity, and working memory. These challenges can impact various cognitive and perceptual domains, including music perception and performance. Despite these difficulties, individuals with ADHD frequently engage with music, and previous research has shown that music listening can serve as a means of increasing stimulation and self-regulation. Moreover, music therapy has been explored as a potential treatment option for individuals with ADHD. As there is a lack of integrative reviews on the interaction between ADHD and music, the present review aimed to fill the gap in research. Following PRISMA guidelines, a comprehensive literature search was conducted across PsychInfo (Ovid), PubMed, and Web of Science. A narrative synthesis was conducted on 20 eligible studies published between 1981 and 2023, involving 1170 participants, of whom 830 had ADHD or ADD. The review identified three main areas of research: (1) music performance and processing in individuals with ADHD, (2) the use of music listening as a source of stimulation for those with ADHD, and (3) music-based interventions aimed at mitigating ADHD symptoms. The analysis revealed that individuals with ADHD often experience unique challenges in musical tasks, particularly those related to timing, rhythm, and complex auditory stimuli perception, though these deficits did not extend to rhythmic improvisation and musical expression. Most studies indicated that music listening positively affects various domains for individuals with ADHD. Furthermore, most studies of music therapy found that it can generate significant benefits for individuals with ADHD. The strength of these findings, however, was limited by inconsistencies among the studies, such as variations in ADHD diagnosis, comorbidities, medication use, and gender. Despite these limitations, this review provides a valuable foundation for future research on the interaction between ADHD and music. Full article
(This article belongs to the Special Issue Innovations in Music Based Interventions for Psychological Wellbeing)

12 pages, 1551 KiB  
Article
Prevalence of High Frequency Noise-Induced Hearing Loss Among Medical Students Using Personalized Listening Devices
by Aishwarya Gajendran, Gayathri Devi Rajendiran, Aishwarya Prateep, Harshith Satindra and Rashmika Rajendran
J. Clin. Med. 2025, 14(1), 49; https://doi.org/10.3390/jcm14010049 - 26 Dec 2024
Cited by 1 | Viewed by 2615
Abstract
The misuse of personalized listening devices (PLDs) resulting in noise-induced hearing loss (NIHL) has become a public health concern, especially among youths, including medical students. The occupational use of PLDs that produce high-intensity sounds amplifies the danger of cochlear deterioration and high-frequency NIHL, especially when used in noisy environments. This study aims to evaluate the incidence and trends of NIHL among medical students using PLDs. Background/Objectives: The purpose of this study is to assess the prevalence of high-frequency NIHL among PLD-using medical students. Methods: A semi-structured questionnaire covering details on PLD usage, exposure to noisy environments, and hearing difficulties was used to gather the data required. Conventional pure-tone audiometry with extended high-frequency audiometry was preceded by routine clinical evaluation using tuning fork tests and otoscopic examination for hearing loss assessment and to rule out middle-ear pathology. Hearing impairment was determined and categorized according to the Goodman and Clark classification system (250 Hz to 8000 Hz). SPSS version 21 was used in the analysis of the frequency data collected. Results: Out of 100 participants, 33% were found to have hearing loss on conventional PTA, with 42.9% of males and 23.5% of females affected. Bilateral hearing loss was seen in 36.4% of the cases. Left-sided hearing loss was found to be more common (28%). The duration of PLD usage had a significant correlation with hearing loss (p < 0.0001). Hearing thresholds were significantly elevated at 16 kHz and 18 kHz in both the right and left ear. Conclusions: The high prevalence of PLD misuse among medical students is a major risk factor for NIHL. To help combat chronic hearing loss, students need to be educated about safe listening levels that can prevent further damage to the cochlea and auditory system. Full article
(This article belongs to the Section Otolaryngology)

12 pages, 1645 KiB  
Article
Impact of Hearing Loss Severity on Hearing Aid Benefit Among Adult Users
by Marlena Ziemska-Gorczyca, Karolina Dżaman and Ireneusz Kantor
Healthcare 2024, 12(23), 2450; https://doi.org/10.3390/healthcare12232450 - 5 Dec 2024
Viewed by 1247
Abstract
Background: Hearing loss (HL) among older adults is a major global health concern. Hearing aids (HAs) offer an effective solution to manage HL and enhance the quality of life. However, the adoption and the consistent use of HAs remain low, making non-use a significant barrier to successful audiological rehabilitation. The aim of the study was to assess the benefit of HAs among patients with different degrees of HL and to determine the profiles of patients who have the least benefit from HAs. Methods: The HA benefits were assessed using the Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire. Participants were assigned to the study groups based on pure-tone audiometry. This paper presents the results obtained by using HAs in various listening environments among 167 patients. Results: The majority of individuals benefited from HAs in a noisy environment, while a reverberant environment provided the lowest benefit. It was observed that the degree of HL had a statistically significant impact on the benefits of HAs in terms of communication ease, reverberation, background noise, and the global score. A moderately positive correlation was observed between the unaided APHAB and the HL degree. The subjects’ APHAB scores ranged from the 50th to the 65th percentile. Additionally, women showed significantly greater improvement than men. Conclusions: HAs improved communication in everyday life situations among 91.6% of HA users. The degree of HL influences APHAB scores. Patients with a severe degree of HL achieved the greatest APHAB scores, while male patients with mild HL received the lowest benefits of HAs. The degree of HL, along with age, gender, and HA type, also plays an important role. The APHAB questionnaire is a reliable screening test for patients with hearing difficulties. Full article
(This article belongs to the Special Issue Care and Treatment of Ear, Nose, and Throat)

12 pages, 259 KiB  
Article
A Shift from an Audio- to a Video-Based Exam Format to Reflect Real-Life Clinical Interactions in the Language-Learning Classroom
by Gabriella Hild, Anna Dávidovics, Vilmos Warta and Timea Németh
Educ. Sci. 2024, 14(12), 1278; https://doi.org/10.3390/educsci14121278 - 22 Nov 2024
Cited by 1 | Viewed by 817
Abstract
Effective patient communication is vital in medical training. At a Hungarian medical university, international students in the English-medium program are required to study Hungarian for two years to prepare for clinical rotations in Hungarian hospitals. The final language assessment traditionally included an audio-based listening exam, but both students and teachers raised concerns about its difficulty and its lack of relevance to real-life clinical interactions and the students’ actual language needs. To address these issues, a needs analysis was conducted with 52 second-year international medical students through focus-group interviews after they took the exam. Based on the feedback, a video-based exam format was developed and piloted. The new format incorporated visual cues such as gestures and facial expressions, better reflecting face-to-face patient communication. A total of 38 third-year students who had previously taken the audio-based version of the exam participated in the pilot, with focus-group interviews conducted to directly compare the two formats. The majority of the students found the video-based exam more engaging and relevant to their clinical experience. The findings suggest that the video-based exam better prepares students for real-life medical communication and provides a more meaningful assessment experience, bridging the gap between language learning and clinical practice. Full article