Search Results (502)

Search Parameters:
Keywords = auditory modeling

17 pages, 1060 KB  
Article
Influence of Live Music and Tasting Assessment on Hedonic and Emotional Responses of Wine in Public Tasting Events
by Roberto Marangoni, Isabella Taglieri, Alessandro Bianchi, Chiara Sanmartin, Pierina Díaz-Guerrero, Alessandro Tonacci, Francesco Sansone and Francesca Venturi
Foods 2026, 15(3), 504; https://doi.org/10.3390/foods15030504 - 1 Feb 2026
Abstract
Wine represents one of the most complex food matrices from a sensory perspective, as its appreciation emerges from the interaction between chemical composition, perceptual mechanisms, and contextual influences. Contemporary research in oenology and sensory science increasingly recognizes wine evaluation as an integrated perceptual event shaped by cognition, memory, and affect, rather than a simple response to aroma or flavor cues. Live music is widely used in hospitality settings to enhance consumer experience; however, its specific influence on wine appreciation and emotional responses remains insufficiently explored, particularly in real-world contexts. This study investigates how two contrasting musical atmospheres—melancholic/relaxing and upbeat/motivational—modulate hedonic evaluations and emotional profiles during public wine tastings, compared with a no-music condition. Data were collected across five live tasting events (5 Wednesdays of Emotions) using structured questionnaires that included hedonic ratings and multidimensional emotional measures. Statistical analyses were conducted using non-parametric tests, meta-analytic p-value combination, and cumulative link mixed models for ordinal data. The presence of music significantly enhanced overall wine appreciation compared to the silent condition, although the magnitude and direction of the effect varied across individuals and musical styles. Upbeat/motivational music generally produced stronger and more consistent increases in liking than melancholic/relaxing music. Emotional responses—particularly positive surprise—emerged as key mediators of hedonic improvement and showed strong associations with overall liking. Preference profiling revealed distinct response patterns, indicating that auditory modulation of wine perception is not uniform across consumers. 
These findings support a crossmodal interpretation in which music shapes wine appreciation primarily through emotion-based and expectancy-related mechanisms rather than through direct sensory enhancement. By demonstrating these effects in ecologically valid tasting environments, the study highlights the role of auditory context as a meaningful component of multisensory wine experiences. Full article
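The wine study's abstract mentions "meta-analytic p-value combination" across the five tasting events without naming the method. One common choice is Fisher's method, sketched below in plain Python as an illustration — not necessarily the authors' implementation, and `fisher_combine` is a hypothetical helper name. For even degrees of freedom the chi-square survival function has a closed form, so no statistics library is needed.

```python
import math

def fisher_combine(p_values):
    """Combine k independent p-values via Fisher's method.

    The statistic X = -2 * sum(ln p_i) follows a chi-square distribution
    with 2k degrees of freedom under the global null hypothesis.
    """
    k = len(p_values)
    x = -2.0 * sum(math.log(p) for p in p_values)
    # Chi-square survival function for even df = 2k, closed form:
    # P(X > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total
```

With a single p-value the combination is the identity, and several individually weak p-values can yield a small combined p, which is the point of pooling evidence across events.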

16 pages, 1652 KB  
Article
Impact of Amplification and Noise on Subjective Cognitive Effort and Fatigue in Older Adults with Hearing Loss
by Devan M. Lander and Christina M. Roup
Brain Sci. 2026, 16(2), 182; https://doi.org/10.3390/brainsci16020182 - 31 Jan 2026
Abstract
Background/Objectives: Older adults with hearing loss frequently report increased listening effort and fatigue, particularly in complex auditory environments. These subjective experiences may reflect increased cognitive resource allocation during both auditory and visual tasks, yet the impact of hearing aids on task-related effort and fatigue remains unclear. This study examined subjective effort and fatigue in experienced older adult hearing aid users while completing cognitively demanding auditory and visual tasks in quiet and background noise, with and without hearing aids. Methods: Thirty-one adults aged 60–87 years completed a cognitive battery assessing inhibition, attention, executive function, and auditory and visual working memory across four listening conditions: aided-quiet, unaided-quiet, aided-noise, and unaided-noise. Subjective effort was measured using the NASA Task Load Index, and task-related fatigue was assessed using a situational fatigue scale. Linear mixed-effects models controlled for age and pure-tone average hearing thresholds. Results: Participants reported significantly lower effort and fatigue in quiet compared to background noise, regardless of hearing aid use. The aided-quiet condition was rated as the least effortful and fatiguing, whereas the unaided-noise condition was rated as the most demanding. Subjective effort and fatigue were moderately to strongly correlated across conditions, particularly in noise. Auditory working memory performance was significantly associated with subjective fatigue across listening conditions, while visual working memory was not associated with effort or fatigue. Hearing aid use did not produce significant reductions in effort or fatigue across conditions. Conclusions: Background noise substantially increases perceived task-related effort and fatigue during cognitively demanding auditory and visual tasks in older adults with hearing loss. 
While hearing aids did not significantly reduce effort or fatigue across conditions, optimal listening environments were associated with the lowest subjective reports. Auditory working memory emerged as a key factor related to fatigue, highlighting the interplay between hearing, cognition, and subjective listening experiences in older adulthood. Full article
19 pages, 3617 KB  
Article
Deep Learning-Based Classification of Common Lung Sounds via Auto-Detected Respiratory Cycles
by Mustafa Alptekin Engin, Rukiye Uzun Arslan, İrem Senyer Yapici, Selim Aras and Ali Gangal
Bioengineering 2026, 13(2), 170; https://doi.org/10.3390/bioengineering13020170 - 30 Jan 2026
Abstract
Chronic respiratory diseases, the third leading cause of mortality globally, can be diagnosed at an early stage through non-invasive auscultation. However, effective manual differentiation of lung sounds (LSs) requires not only sharp auditory skills but also significant clinical experience. With technological advancements, artificial intelligence (AI) has demonstrated the capability to distinguish LSs with accuracy comparable to or surpassing that of human experts. This study broadly compares the methods used in AI-based LS classification. First, respiratory cycles—the inhalation and exhalation segments of LSs, whose lengths vary across individuals, obtained and labelled under expert guidance—were automatically detected using a series of signal processing procedures, yielding a database of common LSs. This database was then classified using various time-frequency representations—spectrograms, scalograms, Mel-spectrograms and gammatonegrams—for comparison. Transfer learning with proven, pre-trained convolutional neural network (CNN) models supplied the features employed in classification. The performances of a CNN, a hybrid CNN–Long Short-Term Memory (LSTM) architecture, and support vector machines were compared as classifiers. When gammatonegrams—which capture the spectral structure of signals in the low-frequency range with high fidelity and are resistant to noise—are combined with a CNN architecture, the best classification accuracy of 97.3 ± 1.9% is obtained. Full article
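A gammatonegram replaces the spectrogram's uniform filterbank with gammatone filters that approximate cochlear frequency selectivity. As a rough, dependency-free sketch of how such a representation can be computed — not this study's pipeline, which feeds image renderings to pre-trained CNNs; the ERB parameterization is the standard Glasberg–Moore form, and the function names are chosen here for illustration:

```python
import math

def gammatone_ir(fc, fs, n_taps=512, order=4):
    """Impulse response of a gammatone filter centred at fc (Hz)."""
    erb = 24.7 + fc / 9.265          # equivalent rectangular bandwidth (Hz)
    b = 1.019 * erb                  # bandwidth parameter
    ir = []
    for k in range(n_taps):
        t = k / fs
        ir.append(t ** (order - 1) * math.exp(-2 * math.pi * b * t)
                  * math.cos(2 * math.pi * fc * t))
    peak = max(abs(v) for v in ir) or 1.0
    return [v / peak for v in ir]

def gammatonegram(signal, fs, centre_freqs, frame=256, hop=128):
    """Log-energy per (channel, frame): a gammatone analogue of a spectrogram."""
    gram = []
    for fc in centre_freqs:
        ir = gammatone_ir(fc, fs)
        # naive FIR convolution (use scipy.signal.lfilter in practice)
        filtered = [sum(ir[j] * signal[i - j] for j in range(min(len(ir), i + 1)))
                    for i in range(len(signal))]
        row = []
        for start in range(0, len(filtered) - frame + 1, hop):
            energy = sum(v * v for v in filtered[start:start + frame])
            row.append(math.log10(energy + 1e-12))
        gram.append(row)
    return gram
```

Feeding a pure tone through the bank concentrates energy in the channel whose centre frequency matches the tone, which is the behaviour a classifier exploits.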

13 pages, 2805 KB  
Article
Hemispheric Asymmetry in Cortical Auditory Processing: The Interactive Effects of Attention and Background Noise
by Anoop Basavanahalli Jagadeesh and Ajith Kumar Uppunda
Audiol. Res. 2026, 16(1), 17; https://doi.org/10.3390/audiolres16010017 - 28 Jan 2026
Abstract
Background/Objectives: Speech processing engages both hemispheres of the brain but exhibits a degree of hemispheric asymmetry. This asymmetry, however, is not fixed and can be shaped by stimulus-related and listener-related factors. The present study examined how background noise and attention influence hemispheric differences in speech processing using high-density cortical auditory evoked potentials (CAEPs). Methods: Twenty-five young adults with clinically normal hearing listened to meaningful bisyllabic Kannada words under two background conditions (quiet, speech-shaped noise) and two attentional conditions (active, passive). N1 peak amplitudes were compared between the left and right hemispheres across conditions using linear mixed-effects modeling. Results: Results revealed significantly larger N1 amplitudes in the left hemisphere and during active compared to passive listening, confirming left-hemisphere dominance for speech processing and robust attentional modulation. In contrast, background noise did not significantly modulate N1 amplitude or hemispheric asymmetry. Importantly, a significant Hemisphere × Attention interaction indicated that hemispheric asymmetry depended on attentional state, with clear left-hemisphere dominance being observed during active listening in both quiet and noise conditions, whereas hemispheric differences were reduced or absent during passive listening, irrespective of background. Conclusions: Together, these findings demonstrate that attentional engagement, rather than background noise, plays a critical role in modulating hemispheric specialization during early cortical speech processing, highlighting the adaptive nature of auditory cortical mechanisms in challenging listening environments. Full article
(This article belongs to the Section Hearing)

18 pages, 1838 KB  
Article
A Deep Learning Model for Wave V Peak Detection in Auditory Brainstem Response Data
by Jun Ma, Nak-Jun Sung, Sungjun Choi, Min Hong and Sungyeup Kim
Electronics 2026, 15(3), 511; https://doi.org/10.3390/electronics15030511 - 25 Jan 2026
Abstract
In this study, we propose a YOLO-based object detection algorithm for the automated and accurate identification of the fifth wave (Wave V) in auditory brainstem response (ABR) graphs. The ABR test plays a critical role in the diagnosis of hearing disorders, with the fifth wave serving as a key marker for clinical assessment. However, conventional manual detection is time-consuming and subject to variability depending on the examiner’s expertise. To address these limitations, we developed a real-time detection method that utilizes a YOLO object detection model applied to ABR graph images. Prior to YOLO training, we employed a U-Net-based preprocessing algorithm to automatically remove existing annotated peaks from the ABR images, thereby generating training data suitable for peak detection. The proposed model was evaluated in terms of precision, recall, and mean average precision (mAP). The experimental results demonstrate that the YOLO-based approach achieves high detection performance across these metrics, indicating its potential as an effective tool for reliable Wave V peak localization in audiological applications. Full article
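The study localizes Wave V with a YOLO detector on rendered ABR images. For intuition only, the underlying signal-level task — finding candidate peaks in a 1-D waveform — can be sketched as a naive prominence-gated local-maximum search. This is a hypothetical simplification, not the paper's method, and the `min_prominence` threshold is an assumed parameter:

```python
def find_peaks(waveform, min_prominence=0.1):
    """Return indices of local maxima whose height exceeds nearby minima
    by at least min_prominence (a crude prominence check)."""
    peaks = []
    for i in range(1, len(waveform) - 1):
        if waveform[i - 1] < waveform[i] >= waveform[i + 1]:
            # prominence relative to minima in a small neighbourhood
            left = min(waveform[max(0, i - 10):i])
            right = min(waveform[i + 1:i + 11])
            if waveform[i] - max(left, right) >= min_prominence:
                peaks.append(i)
    return peaks
```

On real ABR traces, latency windows and amplitude criteria would replace the fixed threshold, which is part of why learned detectors outperform hand-tuned rules.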

19 pages, 801 KB  
Article
The Impact of Executive Functions on Metaphonological Skills: Correlation and Treatment Implication for ADHD Children
by Adriana Piccolo, Margherita La Fauci, Carmela De Domenico, Marcella Di Cara, Alessia Fulgenzi, Noemi Mancuso, Lilla Bonanno, Maria Tresoldi, Rosalia Muratore, Caterina Impallomeni, Emanuela Tripodi and Francesca Cucinotta
J. Clin. Med. 2026, 15(2), 906; https://doi.org/10.3390/jcm15020906 - 22 Jan 2026
Abstract
Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder frequently associated with impairments in executive functions (EF). These deficits have been linked to difficulties across various cognitive domains, including metaphonological skills (MS), essential for phonological awareness and processing abilities. Background/Objectives: This pilot study examines the correlations between EF and MS in ADHD children. Methods: A total of 84 children aged 6–14 years, diagnosed with ADHD and an IQ ≥ 70, were assessed using the NEPSY-II test to evaluate executive functions and the Assessment of Metaphonological Skills Test to assess phonological processing abilities. Results: Correlational analyses and multiple regression models were employed to explore the relationships between EF and MS, focusing on attention, cognitive flexibility, and response inhibition. Rhyme was positively correlated with processing speed and negatively correlated with response inhibition. Phonemic segmentation was significantly related to auditory attention and response inhibition. Age emerged as a significant predictor of phonemic synthesis and final syllable deletion, consistent with the developmental maturation of executive and phonological abilities. Conclusions: The findings suggest that deficits in executive functioning in ADHD children are closely linked to metaphonological abilities, which play a crucial role in the acquisition of reading and writing skills. Integrating EF training into phonological interventions can help reduce learning difficulties and improve cognitive and language outcomes. Full article

22 pages, 5824 KB  
Article
In Silico Hazard Assessment of Ototoxicants Through Machine Learning and Computational Systems Biology
by Shu Luan, Chao Ji, Gregory M. Zarus, Christopher M. Reh and Patricia Ruiz
Toxics 2026, 14(1), 82; https://doi.org/10.3390/toxics14010082 - 16 Jan 2026
Abstract
Individuals across their lifespan may experience hearing loss from medications or chemicals, prompting concern about ototoxic environmental exposures. This study applies computational modeling as a screening-level hazard identification and chemical prioritization approach and is not intended to constitute a human health risk assessment or to estimate exposure- or dose-dependent ototoxic risk. We evaluated in silico drug-induced ototoxicity models on 80 environmental chemicals, excluding 4 with known ototoxicity, and analyzed 76 chemicals using fingerprinting, similarity assessment, and machine learning classification. We compared predicted environmental ototoxicants with ototoxic drugs, paired select polychlorinated biphenyls with the antineoplastic drug mitotane, and used PCB 177 as a case study to construct an ototoxicity pathway. A systems biology framework predicted and compared molecular targets of mitotane and PCB 177 to generate a network-level mechanism. The consensus model (accuracy 0.95 test; 0.90 validation) identified 18 of 76 chemicals as potential ototoxicants within acceptable confidence ranges. Mitotane and PCB 177 were both predicted to disrupt thyroid-stimulating hormone receptor signaling, suggesting thyroid-mediated pathways may contribute to auditory harm; additional targets included AhR, transthyretin, and PXR. Findings indicate overlapping mechanisms involving metabolic, cellular, and inflammatory processes. This work shows that integrated computational modeling can support virtual screening and prioritization for chemical and drug ototoxicity risk assessment. Full article
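Fingerprint-based similarity assessment, as used here to pair PCBs with known ototoxic drugs, conventionally relies on the Tanimoto coefficient over molecular fingerprint bits. A minimal sketch, representing each fingerprint as the set of its "on" bit positions (in practice toolkits such as RDKit compute this on bit vectors; the exact fingerprints used by the study are not specified in the abstract):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto (Jaccard) similarity between two molecular fingerprints
    given as iterables of 'on' bit positions."""
    a, b = set(fp_a), set(fp_b)
    inter = len(a & b)
    # |A ∩ B| / |A ∪ B|; define the empty-vs-empty case as 0.0
    return inter / (len(a) + len(b) - inter) if (a or b) else 0.0
```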
(This article belongs to the Section Novel Methods in Toxicology Research)

16 pages, 2728 KB  
Review
Advancements in Preclinical Models for NF2-Related Schwannomatosis Research
by Bo-Shi Zhang, Simeng Lu, Scott R. Plotkin and Lei Xu
Cancers 2026, 18(2), 224; https://doi.org/10.3390/cancers18020224 - 11 Jan 2026
Abstract
NF2-related Schwannomatosis (NF2-SWN) remains a disorder with few effective treatment options. Patients develop vestibular schwannomas (VSs) on both auditory nerves, which gradually impair hearing and often result in significant communication difficulties, social withdrawal, and higher rates of depression. Progress in understanding NF2-SWN biology and translating discoveries into therapies has been slowed by the absence of robust animal models that faithfully reproduce both tumor behavior and the associated neurological deficits. In this review, we summarized the development of animal models that not only reproduce tumor growth in the peripheral nerve microenvironment but also reproduce tumor-induced neurological symptoms, such as hearing loss and ataxia. We further highlight the currently available organotypic models for NF2-SWN. Together, these systems provide an essential foundation for advancing mechanistic studies and accelerating the development of effective therapies for this devastating disorder. Full article
(This article belongs to the Special Issue Advancements in Preclinical Models for Solid Cancers)

14 pages, 537 KB  
Article
Startle Habituation and Vagally Mediated Heart Rate Variability Influence the Use of Emotion Regulation Strategies
by Xiao Yang, Fang Fang and Angela Ximena Babb
Psychol. Int. 2026, 8(1), 2; https://doi.org/10.3390/psycholint8010002 - 7 Jan 2026
Abstract
Emotion regulation refers to the processes through which people modulate their emotional experiences and expressions, and difficulties in these processes underpin many forms of psychopathology. According to the process model, emotion regulation encompasses five classes of strategies, commonly grouped into antecedent-focused strategies (e.g., cognitive reappraisal) and response-focused strategies (e.g., expressive suppression). These strategies involve both explicit and implicit processes, which can be objectively assessed using physiological indices. The present study examined the effects of startle habituation and vagally mediated heart rate variability (vmHRV) on the use of cognitive appraisal and suppression. Forty-nine college-aged participants were recruited, and their resting heart rate variability (HRV) and response habituation to an auditory startle-eliciting stimulus were measured. Emotion regulation strategies were assessed by a self-report questionnaire. Multiple regressions were used to analyze the effects of startle habituation, vmHRV, and their interaction on emotion regulation strategies. Results indicated that, although suppression was not associated with any physiological indices in the regression models, cognitive reappraisal was predicted by both vmHRV and startle habituation. Notably, vmHRV and startle habituation interacted such that the positive association between vmHRV and cognitive reappraisal emerged only among individuals who exhibited slow startle habituation. These findings have practical implications for the prevention and treatment of psychopathology, as well as for promoting more adaptive emotion regulation in daily life. Full article
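The abstract does not state which vmHRV index was computed; a common time-domain choice is RMSSD over interbeat (RR) intervals, since successive-difference measures chiefly reflect vagal influence on the heart. A minimal sketch under that assumption:

```python
import math

def rmssd(rr_intervals_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms),
    a standard time-domain index of vagally mediated HRV."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```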
(This article belongs to the Section Neuropsychology, Clinical Psychology, and Mental Health)

17 pages, 4220 KB  
Brief Report
New Digital Workflow for the Use of a Modified Stimulating Palatal Plate in Infants with Down Syndrome
by Maria Joana Castro, Cátia Severino, Jovana Pejovic, Marina Vigário, Miguel Palha, David Casimiro de Andrade and Sónia Frota
Dent. J. 2026, 14(1), 26; https://doi.org/10.3390/dj14010026 - 4 Jan 2026
Abstract
Background/Objectives: Down Syndrome (DS) is frequently associated with oral-motor dysmorphologies, like oral hypotonia, tongue protrusion, short palate, and malocclusion, compromising the oral functions of sucking, chewing, swallowing, and speech production. Therapeutic interventions with stimulating palatal plates (SPP) have been proposed to prevent and improve oral-motor dysmorphologies in DS. This study proposes a new digital workflow for the manufacturing and use of a modified SPP. Methods: We report the application of the new workflow to five clinical cases, all infants with DS showing oral-motor disorders, aged between 5 and 11 months. The workflow is described step-by-step, from the mouth scanning protocol and model printing to SPP manufacturing and delivering, and assessment of oral-morphological features and language abilities via video captures and parental questionnaires. Key novel features include an SPP with an acrylic extension with a pacifier terminal and, importantly, the use of an infant-friendly intraoral scanner. Results: The new workflow had good acceptability by infants and parents, offering a safe, easy-to-implement, and feasible solution for SPP design, as it avoided the high risks associated with impression materials. It also supported the use of the SPP to promote tongue stimulation, retraction, and overall oral-muscle function in oral-motor disorders in children with DS, especially in infants. Conclusions: Within the limitations of the current study, it was shown that the proposed digital workflow constitutes a viable and infant-friendly approach to the production and use of a modified SPP, and thus promises to contribute to improving oral morphology and auditory-motor language abilities. Full article
(This article belongs to the Section Digital Technologies)

12 pages, 2092 KB  
Article
Development and In Vivo Evaluation of a Novel Bioabsorbable Polylactic Acid Middle Ear Ventilation Tube
by Ying-Chang Lu, Chi-Chieh Chang, Ping-Tun Teng, Chien-Hsing Wu, Hsuan-Hsuan Wu, Chiung-Ju Lin, Tien-Chen Liu, Yen-Hui Chan and Chen-Chi Wu
J. Funct. Biomater. 2026, 17(1), 25; https://doi.org/10.3390/jfb17010025 - 30 Dec 2025
Abstract
Background: Otitis media with effusion (OME) is a widespread condition that causes hearing impairment, particularly in pediatric populations. Existing non-absorbable tubes often require elective or unplanned removal surgery. Bioabsorbable polylactic acid (PLA) offers a promising alternative due to its inherent biocompatibility and tunable degradation characteristics. In this study, we designed, fabricated, and comprehensively evaluated a novel PLA middle-ear ventilation tube. Methods: Bioabsorbable PLA tubes were designed and fabricated based on commercial models. In vitro biocompatibility was assessed according to ISO 10993 guidelines. A guinea pig model was used to perform in vivo evaluations, including otoscopic examinations, auditory brainstem response (ABR) measurements, micro-computed tomography (micro-CT) imaging, and histological analyses. Results: The PLA tubes were successfully designed and fabricated, exhibiting dimensions comparable to those of commercially available products. In vitro testing confirmed their biocompatibility. In vivo observations revealed that the PLA segments remained stable, with no significant inflammation detected. ABR measurements revealed no adverse impacts on hearing function. Micro-CT imaging confirmed tube integrity and indicated initial signs of degradation over a 9-month period, as evidenced by radiographic morphology. Histological analyses indicated a favorable tissue response with minimal foreign body reaction. Conclusions: The developed PLA middle-ear ventilation tube represents a highly promising alternative to conventional non-absorbable tubes. It demonstrates excellent biocompatibility, preserves auditory function, and exhibits a controlled degradation profile. This preclinical study provides strong support for further investigation and subsequent clinical trials to validate its safety and efficacy in human patients. Full article
(This article belongs to the Special Issue Biomaterials for Wound Healing and Tissue Repair)

17 pages, 1042 KB  
Article
Cross-Cultural Identification of Acoustic Voice Features for Depression: A Cross-Sectional Study of Vietnamese and Japanese Datasets
by Phuc Truong Vinh Le, Mitsuteru Nakamura, Masakazu Higuchi, Lanh Thi My Vuu, Nhu Huynh and Shinichi Tokuno
Bioengineering 2026, 13(1), 33; https://doi.org/10.3390/bioengineering13010033 - 27 Dec 2025
Abstract
Acoustic voice analysis demonstrates potential as a non-invasive biomarker for depression, yet its generalizability across languages remains underexplored. This cross-sectional study aimed to identify a set of cross-culturally consistent acoustic features for depression screening using distinct Vietnamese and Japanese voice datasets. We analyzed anonymized recordings from 251 participants, comprising 123 Vietnamese individuals assessed via the self-report Beck Depression Inventory (BDI) and 128 Japanese individuals assessed via the clinician-rated Hamilton Depression Rating Scale (HAM-D). From 6373 features extracted with openSMILE, a multi-stage selection pipeline identified 12 cross-cultural features, primarily from the auditory spectrum (AudSpec), Mel-Frequency Cepstral Coefficients (MFCCs), and logarithmic Harmonics-to-Noise Ratio (logHNR) domains. The cross-cultural model achieved a combined Area Under the Curve (AUC) of 0.934, with performance disparities observed between the Japanese (AUC = 0.993) and Vietnamese (AUC = 0.913) cohorts. This disparity may be attributed to dataset heterogeneity, including mismatched diagnostic tools and differing sample compositions (clinical vs. mixed community). Furthermore, the limited number of high-risk cases (n = 33) warrants cautious interpretation regarding the reliability of reported AUC values for severe depression classification. These findings suggest the presence of a core acoustic signature related to physiological psychomotor changes that may transcend linguistic boundaries. This study advances the exploration of global vocal biomarkers but underscores the need for prospective, standardized multilingual trials to overcome the limitations of secondary data analysis. Full article
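The reported AUCs have a direct probabilistic reading via the Mann–Whitney U statistic: AUC is the probability that a randomly chosen depressed case scores higher than a randomly chosen non-depressed one. A dependency-free sketch of that computation (illustrative only; the study presumably used standard ML tooling):

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney formulation: fraction of
    (positive, negative) pairs ranked correctly, ties counted as half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

On this reading, the cohort gap (0.993 vs. 0.913) means correct pairwise ranking fails roughly nine times as often in the Vietnamese data as in the Japanese data.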
(This article belongs to the Special Issue Voice Analysis Techniques for Medical Diagnosis)

14 pages, 396 KB  
Article
Advancing Pediatric Cochlear Implant Care Through a Multidisciplinary Telehealth Model: Insights from Implementation and Family Perspectives
by Chrisanda Marie Sanchez, Jennifer Coto, Jordan Ian McNair, Domitille Lochet, Alexandria Susan Mestres, Christina Sarangoulis, Meredith A. Holcomb and Ivette Cejas
Children 2026, 13(1), 39; https://doi.org/10.3390/children13010039 - 26 Dec 2025
Abstract
Background/Objectives: Multidisciplinary care is the gold-standard approach for delivering comprehensive pediatric healthcare. For children undergoing cochlear implant (CI) evaluation, multiple appointments are required to assess candidacy, set realistic expectations, and counsel families on rehabilitation and the psychosocial impact of hearing loss. Established pediatric CI users also need coordinated follow-up to address ongoing auditory, educational, and psychosocial needs. This study evaluated the satisfaction and family perspectives of the implementation of a virtual, team-based multidisciplinary model for both CI candidates and established CI users. Methods: Thirty-nine children and their families participated in discipline-specific telehealth consultations, including audiology, listening and spoken language (LSL) therapy, psychology, and educational services, followed by a 60 min multidisciplinary team meeting. Team meetings occurred during pre-implantation and at six months post-activation for CI candidates. Team meetings for established CI users were scheduled following completion of individual consultations. Providers summarized findings from their individual visits before transitioning to a caregiver-led discussion. Post-visit surveys assessed satisfaction and perceived benefit from the multidisciplinary model. Results: Thirty-nine dyads were enrolled (11 Pre-CI; 28 Established CI). Caregivers were predominantly mothers (89.7%), most identified as Hispanic (55.3%) and White (71.1%). Over half of children identified as Hispanic (59%) and White (71.8%); most were diagnosed with hearing loss at birth (55.9%). Satisfaction with the virtual model was uniformly high: 100% of caregivers were satisfied or very satisfied, and most rated care quality as “very good” or “excellent.” LSL therapy was most frequently rated as the most beneficial visit (70% Pre-CI; 45% Established CI). 
Caregivers strongly preferred ongoing team-based care, with 55–80% reporting that they would like it to occur every six months and 95–100% preferring remote meetings. Conclusions: A virtual multidisciplinary model offers a high-quality, family-centered approach for both CI evaluations and ongoing management of established CI users. By integrating simultaneous team-based sessions, this model not only supports the ‘whole child’ but also strengthens the family system by improving communication, streamlining care, and reducing the burden of multiple in-person appointments. Families consistently report high levels of satisfaction with the convenience, clarity, and collaboration provided through virtual team visits. Incorporating routine check-ins with families is essential to ensure their needs are addressed, reinforce progress, and guide timely, targeted interventions that maximize each child’s developmental outcomes. Full article
(This article belongs to the Special Issue Hearing Loss in Children: The Present and a Challenge for Future)

16 pages, 4487 KB  
Article
A Modeling Approach to Aggregated Noise Effects of Offshore Wind Farms in the Canary and North Seas
by Ion Urtiaga-Chasco and Alonso Hernández-Guerra
J. Mar. Sci. Eng. 2026, 14(1), 2; https://doi.org/10.3390/jmse14010002 - 19 Dec 2025
Viewed by 518
Abstract
Offshore wind farms (OWFs) represent an increasingly important renewable energy source, yet their environmental impacts, particularly underwater noise, require systematic study. Estimating the operational source level (SL) of a single turbine and predicting sound pressure levels (SPLs) at sensitive locations can be challenging. Here, we integrate a turbine SL prediction algorithm with open-source propagation models in a Jupyter Notebook (version 7.4.7) to streamline aggregated SPL estimation for OWFs. Species-specific audiograms and weighting functions are included to assess potential biological impacts. The tool is applied to four planned OWFs, two in the Canary region and two in the Belgian and German North Seas, under conservative assumptions. Results indicate that at 10 m/s wind speed, a single turbine’s SL reaches 143 dB re 1 µPa in the one-third octave band centered at 160 Hz. Sensitivity analyses indicate that variations in wind speed can raise the operational source level at 160 Hz by up to approximately 2 dB re 1 µPa²/Hz above the nominal value used in this study, while differences in sediment type can produce transmission loss variations ranging from 0 dB to on the order of 100 dB, depending on bathymetry and range. Maximum SPLs of 112 dB re 1 µPa are predicted within OWFs, decreasing to ~50 dB re 1 µPa at ~100 km. Within OWFs, Low-Frequency (LF) cetaceans and Phocid Carnivores in Water (PCW) would likely perceive the noise; National Marine Fisheries Service (NMFS) auditory-injury thresholds for marine mammals are not exceeded, but behavioral-harassment thresholds may be crossed. Outside the farms, only LF audiograms are crossed. In high-traffic North Sea regions, OWF noise is largely masked, whereas in lower-noise areas, such as the Canary Islands, it can exceed ambient levels, highlighting the importance of site-specific assessments, accurate ambient noise monitoring, and propagation modeling for ecological impact evaluation. Full article
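The aggregation step this abstract describes combines the incoherent contributions of many turbines on a power basis. A minimal sketch of that arithmetic, using a generic spherical-spreading propagation law rather than the study's open-source propagation models (the turbine ranges, absorption coefficient, and function names below are illustrative assumptions, not the published tool):

```python
import math

def received_level(source_level_db, distance_m, alpha_db_per_km=0.0):
    """Received level from one source, assuming simple spherical
    spreading (TL = 20*log10(r)) plus optional linear absorption.
    This is a textbook propagation law, not a replacement for the
    range-dependent models used in the study."""
    tl = 20 * math.log10(distance_m) + alpha_db_per_km * distance_m / 1000.0
    return source_level_db - tl

def aggregate_spl(levels_db):
    """Combine incoherent sound levels on a power basis:
    SPL_tot = 10 * log10( sum_i 10^(L_i / 10) )."""
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_db))

# Example: the 143 dB re 1 uPa single-turbine SL from the abstract,
# received from three hypothetical turbines at different ranges.
levels = [received_level(143.0, r) for r in (500.0, 800.0, 1200.0)]
total = aggregate_spl(levels)
```

Because levels add on a power scale rather than a decibel scale, two equal 100 dB contributions combine to about 103 dB, not 200 dB, which is why the aggregated field inside a farm only modestly exceeds a single turbine's received level.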

13 pages, 654 KB  
Article
Singing Behavior and Availability of Golden-Cheeked Warblers
by Jennifer L. Reidy
Birds 2025, 6(4), 66; https://doi.org/10.3390/birds6040066 - 18 Dec 2025
Viewed by 616
Abstract
Incomplete detection during auditory point counts includes a component arising when individuals are present but silent (“availability”). If the probability of being ‘available’ is less than one and is not random with respect to time or space, population estimates that fail to address availability will be biased. I recorded minute-by-minute singing of 60 male Golden-cheeked Warblers (Setophaga chrysoparia) in 2010–2011 (133 surveys; 6517 min) to estimate availability, evaluate predictors, and provide survey guidance. The per-minute availability was 0.45 (95% confidence interval [CI]: 0.37–0.54). Availability was higher for unpaired versus paired males (0.82 [0.64–0.92] versus 0.30 [0.20–0.42]) and when ≥1 conspecific was singing (0.61 [0.46–0.75] versus 0.54 [0.39–0.68]). Availability declined across both day of year and hour of day. Aggregating to common survey lengths, the probability of ≥1 song per bin increased with duration but showed the same temporal declines: 3 min = 0.61 (0.52–0.70), 5 min = 0.72 (0.63–0.79), and 10 min = 0.83 (0.74–0.90). Temperature had a modest positive effect, clearest at the 10 min bins. Interaction terms among day, hour, and temperature were unsupported (all likelihood-ratio tests p > 0.10). These findings indicate that availability is <1 and varies predictably with day and time, implying that point-count protocols should standardize survey windows or model availability explicitly. Full article
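The relationship between per-minute availability and per-bin availability can be illustrated with a short sketch. The 0/1 singing record below is hypothetical, not the study's data; it shows how song bouts (clustered singing minutes) make per-bin availability lower than the 1 − (1 − p)^k value that independent minutes would predict, which is the same pattern as the reported estimates (per-minute 0.45 but only 0.61 at 3 min, versus ~0.83 under independence):

```python
def bin_availability(minute_songs, bin_length):
    """Fraction of non-overlapping bins of `bin_length` minutes that
    contain at least one song, given a per-minute 0/1 singing record
    for one survey."""
    bins = [minute_songs[i:i + bin_length]
            for i in range(0, len(minute_songs) - bin_length + 1, bin_length)]
    return sum(1 for b in bins if any(b)) / len(bins)

# Hypothetical 12-minute record with two song bouts.
record = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0]

per_minute = sum(record) / len(record)      # 5/12 ~ 0.417
three_min = bin_availability(record, 3)     # 2 of 4 bins -> 0.5
independent = 1 - (1 - per_minute) ** 3     # ~0.80 if minutes were independent
```

Because bouts concentrate songs into a few bins, the empirical 3 min value (0.5) falls well below the independence prediction (~0.80), so bin-level availability has to be estimated or modeled directly, as in the study, rather than extrapolated from the per-minute rate.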
