Search Results (11)

Search Parameters:
Keywords = monaural localization

18 pages, 4559 KiB  
Article
Evaluating Auditory Localization Capabilities in Young Patients with Single-Side Deafness
by Alessandro Aruffo, Giovanni Nicoli, Marta Fantoni, Raffaella Marchi, Edoardo Carini and Eva Orzan
Audiol. Res. 2025, 15(4), 85; https://doi.org/10.3390/audiolres15040085 - 9 Jul 2025
Viewed by 234
Abstract
Background/Objectives: Unilateral hearing loss (UHL), particularly single-sided deafness (SSD), disrupts spatial hearing in children, leading to academic and social challenges. This study aimed to (1) compare azimuthal sound-localization accuracy and compensatory strategies between children with SSD and their normal-hearing (NH) peers within a virtual reality environment, and (2) investigate sound-localization performance across various azimuths by contrasting left-SSD (L-SSD) and right-SSD (R-SSD) groups. Methods: A cohort of 44 participants (20 NH, 24 SSD) performed sound localization tasks in a 3D virtual environment. Unsigned azimuth error (UAE), unsigned elevation error (UEE), and head movement distance were analyzed across six azimuthal angles (−75° to 75°) at 0° elevation. Non-parametric statistics (Mann–Whitney U tests, Holm–Bonferroni correction) compared performance between NH and SSD groups and within SSD subgroups (L-SSD vs. R-SSD). Results: The SSD group exhibited significantly higher UAE (mean: 22.4° vs. 3.69°, p < 0.0001), UEE (mean: 5.95° vs. 3.77°, p < 0.0001) and head movement distance (mean: 0.35° vs. 0.12°, p < 0.0001) compared with NH peers, indicating persistent localization deficits and compensatory effort. Within the SSD group, elevation performance was superior to azimuthal accuracy (mean UEE: 3.77° vs. mean UAE: 22.4°). Participants with R-SSD exhibited greater azimuthal errors at rightward angles (45° and 75°) and at −15°, as well as increased elevation errors at 75°. Hemifield-specific advantages were strongest at extreme lateral angles (75°). Conclusions: Children with SSD rely on insufficient compensatory head movements to resolve monaural spatial ambiguity in order to localize sounds. Localization deficits and the effort associated with the localization task call for action in addressing these issues in dynamic environments such as the classroom.
L-SSD subjects outperformed R-SSD peers, highlighting hemispheric specialization in spatial hearing and the need to study its neural basis to develop targeted rehabilitation and classroom support. The hemifield advantages described in this study call for further data collection and research on the topic. Full article
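The Holm–Bonferroni step-down correction named in the Methods can be sketched in a few lines of plain Python. This is a generic illustration with made-up p-values, not the authors' analysis code:

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Step-down Holm-Bonferroni procedure: returns a list of booleans
    (True = null hypothesis rejected) in the original input order."""
    m = len(p_values)
    # Sort p-values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for rank, idx in enumerate(order):
        # Compare the k-th smallest p-value against alpha / (m - k).
        if p_values[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break  # step-down: once one test fails, no further rejections
    return reject

# Example: six per-azimuth p-values from NH-vs-SSD comparisons (fabricated).
p = [0.0001, 0.004, 0.03, 0.012, 0.2, 0.008]
print(holm_bonferroni(p))  # [True, True, False, True, False, True]
```

Note that, unlike a flat Bonferroni cut of alpha/m for every test, the step-down thresholds loosen as smaller p-values are rejected, which is what gives Holm's method its extra power.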

14 pages, 2263 KiB  
Article
Contralateral Routing of Signal Disrupts Monaural Sound Localization
by Sebastian A. Ausili and Hillary A. Snapp
Audiol. Res. 2023, 13(4), 586-599; https://doi.org/10.3390/audiolres13040051 - 3 Aug 2023
Cited by 1 | Viewed by 2850
Abstract
Objectives: In the absence of binaural hearing, individuals with single-sided deafness can adapt to use monaural level and spectral cues to improve their spatial hearing abilities. Contralateral routing of signal is the most common form of rehabilitation for individuals with single-sided deafness. However, little is known about how these devices affect monaural localization cues, which single-sided deafness listeners may become reliant on. This study aimed to investigate the effects of contralateral routing of signal hearing aids on localization performance in azimuth and elevation under monaural listening conditions. Design: Localization was assessed in 10 normal hearing adults under three listening conditions: (1) normal hearing (NH), (2) unilateral plug (NH-plug), and (3) unilateral plug and CROS aided (NH-plug + CROS). Monaural hearing simulation was achieved by plugging the ear with E-A-Rsoft™ FX™ foam earplugs. Stimuli consisted of 150 ms high-pass noise bursts (3–20 kHz), presented in a random order from fifty locations spanning ±70° in the horizontal and ±30° in the vertical plane at 45, 55, and 65 dBA. Results: In the unilateral plugged listening condition, participants demonstrated good localization in elevation and a response bias in azimuth for signals directed at the open ear. A significant decrease in performance in elevation occurs with the contralateral routing of signal hearing device on, evidenced by significant reductions in response gain and low r2 value. Additionally, performance in azimuth is further reduced for contralateral routing of signal aided localization compared to the simulated unilateral hearing loss condition. Use of the contralateral routing of signal device also results in a reduction in promptness of the listener response and an increase in response variability. 
Conclusions: Results suggest contralateral routing of signal hearing aids disrupt monaural spectral and level cues, which leads to detriments in localization performance in both the horizontal and vertical dimensions. Increased reaction time and increased variability in responses suggest localization is more effortful when wearing the contralateral routing of signal device. Full article
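The "response gain" reported in these results is conventionally the least-squares slope of response location regressed on target location (gain near 1 = responses track the stimulus; near 0 = responses are insensitive to it). The sketch below assumes that convention; the toy numbers are illustrative, not the study's data:

```python
def response_gain(targets, responses):
    """Least-squares slope of response location vs. target location."""
    n = len(targets)
    mt = sum(targets) / n
    mr = sum(responses) / n
    cov = sum((t - mt) * (r - mr) for t, r in zip(targets, responses))
    var = sum((t - mt) ** 2 for t in targets)
    return cov / var

targets = [-30, -15, 0, 15, 30]          # stimulus azimuths (degrees)
perfect = [-30, -15, 0, 15, 30]          # ideal listener
compressed = [-6, -3, 0, 3, 6]           # responses collapse toward midline
print(response_gain(targets, perfect))     # 1.0
print(response_gain(targets, compressed))  # 0.2
```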

14 pages, 572 KiB  
Article
Binaural Listening with Head Rotation Helps Persons with Blindness Perceive Narrow Obstacles
by Takahiro Miura, Naoyuki Okochi, Junya Suzuki and Tohru Ifukube
Int. J. Environ. Res. Public Health 2023, 20(8), 5573; https://doi.org/10.3390/ijerph20085573 - 19 Apr 2023
Cited by 3 | Viewed by 2021
Abstract
Orientation and mobility (O&M) are important abilities that people with visual impairments use in their independent performance of daily activities. In orientation, people with total blindness pinpoint nonsounding objects and sounding objects. The ability to perceive nonsounding objects is called obstacle sense, wherein people with blindness recognize the various characteristics of an obstacle using acoustic cues. Although body movement and listening style may enhance the sensing of obstacles, experimental studies on this topic are lacking. Elucidating their contributions to obstacle sense may lead to the further systematization of techniques of O&M training. This study sheds light on the contribution of head rotation and binaural hearing to obstacle sense among people with blindness. We conducted an experiment on the perceived presence and distance of nonsounding obstacles, which varied width and distance, for participants with blindness under the conditions of binaural or monaural hearing, with or without head rotation. The results indicated that head rotation and binaural listening can enhance the localization of nonsounding obstacles. Further, when people with blindness are unable to perform head rotation or use binaural hearing, their judgment can become biased in favor of the presence of an obstacle due to risk avoidance. Full article

14 pages, 2948 KiB  
Article
Audiovisual Training in Virtual Reality Improves Auditory Spatial Adaptation in Unilateral Hearing Loss Patients
by Mariam Alzaher, Chiara Valzolgher, Grégoire Verdelet, Francesco Pavani, Alessandro Farnè, Pascal Barone and Mathieu Marx
J. Clin. Med. 2023, 12(6), 2357; https://doi.org/10.3390/jcm12062357 - 17 Mar 2023
Cited by 12 | Viewed by 3111
Abstract
Unilateral hearing loss (UHL) leads to an alteration of binaural cues resulting in a significant increase in spatial errors in the horizontal plane. In this study, nineteen patients with UHL were recruited and randomized in a cross-over design into two groups: a first group (n = 9) that received spatial audiovisual training in the first session and non-spatial audiovisual training in the second session (2 to 4 weeks after the first session), and a second group (n = 10) that received the same training in the opposite order (non-spatial and then spatial). A sound localization test using head-pointing (LOCATEST) was completed prior to and following each training session. The results showed a significant decrease in head-pointing localization errors after spatial training for group 1 (24.85° ± 15.8° vs. 16.17° ± 11.28°; p < 0.001). The number of head movements during the spatial training for the 19 participants did not change (p = 0.79); nonetheless, the hand-pointing errors and reaction times significantly decreased at the end of the spatial training (p < 0.001). This study suggests that audiovisual spatial training can induce spatial adaptation to a monaural deficit through the optimization of effective head movements. Virtual reality systems are relevant tools that can be used in clinics to develop training programs for patients with hearing impairments. Full article
(This article belongs to the Special Issue Innovative Technologies and Translational Therapies for Deafness)

15 pages, 2315 KiB  
Article
Hybrid Dilated and Recursive Recurrent Convolution Network for Time-Domain Speech Enhancement
by Zhendong Song, Yupeng Ma, Fang Tan and Xiaoyi Feng
Appl. Sci. 2022, 12(7), 3461; https://doi.org/10.3390/app12073461 - 29 Mar 2022
Cited by 10 | Viewed by 2585
Abstract
In this paper, we propose a fully convolutional neural network based on recursive recurrent convolution for monaural speech enhancement in the time domain. The proposed network is an encoder-decoder structure using a series of hybrid dilated modules (HDM). The encoder creates low-dimensional features of a noisy input frame. In the HDM, dilated convolution is used to expand the receptive field of the network model, while standard convolution compensates for the local information that dilated convolution under-utilizes. The decoder is used to reconstruct enhanced frames. The recursive recurrent convolutional network uses a GRU to reduce the number of training parameters and the structural complexity. State-of-the-art results are achieved on two commonly used speech datasets. Full article
(This article belongs to the Special Issue Automatic Speech Recognition)
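The receptive-field growth that motivates dilated convolution follows a simple rule for a stack of stride-1 layers: rf = 1 + Σ (k − 1)·d over the layers. A quick sketch; the kernel sizes and dilation rates below are assumed for illustration, not taken from the paper's configuration:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field (in samples) of stacked stride-1 1-D convolutions:
    rf = 1 + sum((k - 1) * d) over the layers."""
    return 1 + sum((k - 1) * d for k, d in zip(kernel_sizes, dilations))

# A hypothetical hybrid module: one standard conv (d=1) plus three
# dilated convs (d=2, 4, 8), all with kernel size 3.
kernels = [3, 3, 3, 3]
dilations = [1, 2, 4, 8]
print(receptive_field(kernels, dilations))  # 31
```

The same four layers without dilation (d=1 everywhere) would cover only 9 samples, which is why dilation is the standard way to widen context without adding parameters.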

16 pages, 1942 KiB  
Study Protocol
Functional Reorganization of the Central Auditory System in Children with Single-Sided Deafness: A Protocol Using fNIRS
by Marie-Noëlle Calmels, Yohan Gallois, Mathieu Marx, Olivier Deguine, Soumia Taoui, Emma Arnaud, Kuzma Strelnikov and Pascal Barone
Brain Sci. 2022, 12(4), 423; https://doi.org/10.3390/brainsci12040423 - 22 Mar 2022
Cited by 13 | Viewed by 3486
Abstract
In children, single-sided deafness (SSD) affects the development of linguistic and social skills and can impede educational progress. These difficulties may relate to cortical changes that occur following SSD, such as reduced inter-hemispheric functional asymmetry and maladaptive brain plasticity. To investigate these neuronal changes and their evolution in children, a non-invasive technique is required that is little affected by motion artifacts. Here, we present a research protocol that uses functional near-infrared spectroscopy (fNIRS) to evaluate the reorganization of cortical auditory asymmetry in children with SSD; it also examines how the cortical changes relate to auditory and language skills. The protocol is designed for children whose SSD has not been treated, because hearing restoration can alter both brain reorganization and behavioral performance. We propose a single-center, cross-sectional study that includes 30 children with SSD (congenital or acquired moderate-to-profound deafness) and 30 children with normal hearing (NH), all aged 5–16 years. The children undergo fNIRS during monaural and binaural stimulation, and the pattern of cortical activity is analyzed using measures of the peak amplitude and area under the curve for both oxy- and deoxyhemoglobin. These cortical measures can be compared between the two groups of children, and analyses can be run to determine whether they relate to binaural hearing (speech-in-noise and sound localization), speech perception and production, and quality of life (QoL). The results could be of relevance for developing individualized rehabilitation programs for SSD, which could reduce patients’ difficulties and prevent long-term neurofunctional and clinical consequences. Full article
(This article belongs to the Special Issue Advances in Hearing Loss Diagnosis and Management)
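The two cortical measures named in the protocol, peak amplitude and area under the curve of the oxy-/deoxyhemoglobin response, are straightforward to compute; a minimal sketch with a fabricated HbO trace (the numbers are purely illustrative):

```python
def peak_and_auc(times, values):
    """Peak amplitude and trapezoidal area under the curve of a sampled
    hemodynamic response (e.g. HbO concentration change over time)."""
    peak = max(values)
    auc = sum((values[i] + values[i + 1]) / 2 * (times[i + 1] - times[i])
              for i in range(len(times) - 1))
    return peak, auc

# Toy HbO trace sampled at 1 s intervals: rise, plateau, return to baseline.
t = [0, 1, 2, 3, 4, 5, 6]
hbo = [0.0, 0.4, 1.0, 1.2, 1.0, 0.4, 0.0]
peak, auc = peak_and_auc(t, hbo)
print(peak)           # 1.2
print(round(auc, 6))  # 4.0
```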

22 pages, 2446 KiB  
Article
Binaural Heterophasic Superdirective Beamforming
by Yuzhu Wang, Jingdong Chen, Jacob Benesty, Jilu Jin and Gongping Huang
Sensors 2021, 21(1), 74; https://doi.org/10.3390/s21010074 - 25 Dec 2020
Cited by 8 | Viewed by 3132
Abstract
The superdirective beamformer, while attractive for processing broadband acoustic signals, often suffers from the problem of white noise amplification. Its application therefore requires well-designed acoustic arrays with sensors of extremely low self-noise, which is difficult if not impossible to attain. In this paper, a new binaural superdirective beamformer is proposed, which is divided into two sub-beamformers. Based on findings in psychoacoustics, these two filters are designed to be orthogonal to each other, making the white noise components in the binaural beamforming outputs incoherent while maximizing the output interaural coherence of the diffuse noise, which is important for the brain to localize the sound source of interest. As a result, the signal of interest in the binaural superdirective beamformer’s outputs is in phase while the white noise components have random phase, so the human auditory system can better separate the acoustic signal of interest from white noise by listening to the outputs of the proposed approach. Experimental results show that the derived binaural superdirective beamformer is superior to its conventional monaural counterpart. Full article
(This article belongs to the Section Physical Sensors)
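The claim that orthogonal filters yield incoherent white-noise components follows from a one-line identity: for zero-mean white noise n with covariance sigma^2 * I, the expected cross-correlation of the two outputs is E[(w1.n)(w2.n)] = sigma^2 * (w1 . w2), which vanishes when w1 and w2 are orthogonal. A minimal numeric check (the filter vectors are arbitrary examples, not the paper's designs):

```python
def expected_cross_corr(w1, w2, sigma2=1.0):
    """Expected cross-correlation E[(w1.n)(w2.n)] of two filter outputs
    when n is zero-mean white noise with covariance sigma2 * I.
    It reduces to sigma2 * (w1 . w2): orthogonal filters give 0."""
    return sigma2 * sum(a * b for a, b in zip(w1, w2))

w1 = [1.0, 1.0, 1.0, 1.0]
w2 = [1.0, -1.0, 1.0, -1.0]  # orthogonal to w1
print(expected_cross_corr(w1, w2))  # 0.0
print(expected_cross_corr(w1, w1))  # 4.0 (self-correlation of w1's output)
```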

13 pages, 7265 KiB  
Review
Hearing with One Ear: Consequences and Treatments for Profound Unilateral Hearing Loss
by Hillary A. Snapp and Sebastian A. Ausili
J. Clin. Med. 2020, 9(4), 1010; https://doi.org/10.3390/jcm9041010 - 3 Apr 2020
Cited by 57 | Viewed by 11976
Abstract
There is an increasing global recognition of the negative impact of hearing loss and its association with many chronic health conditions. The deficits and disabilities associated with profound unilateral hearing loss, however, continue to be under-recognized and lack public awareness. Profound unilateral hearing loss significantly impairs spatial hearing abilities, which are reliant on the complex interaction of monaural and binaural hearing cues. Unilaterally deafened listeners lose access to critical binaural hearing cues, which leads to a reduced ability to understand speech in competing noise and to localize sounds. The functional deficits of profound unilateral hearing loss have a substantial impact on socialization, learning and work productivity. In recognition of this, rehabilitative solutions such as the rerouting of signal and hearing implants are on the rise. This review focuses on the latest insights into the deficits of profound unilateral hearing impairment, and current treatment approaches. Full article
(This article belongs to the Special Issue Therapies for Hearing Loss)

19 pages, 8257 KiB  
Article
Monaural Sound Localization Based on Reflective Structure and Homomorphic Deconvolution
by Yeonseok Park, Anthony Choi and Keonwook Kim
Sensors 2017, 17(10), 2189; https://doi.org/10.3390/s17102189 - 23 Sep 2017
Cited by 7 | Viewed by 4962
Abstract
The asymmetric structure around the receiver provides a particular time delay for the specific incoming propagation. This paper designs a monaural sound localization system based on the reflective structure around the microphone. The reflective plates are placed to present a direction-wise time delay, which is naturally applied to the sound source by convolution. The received signal is processed by homomorphic deconvolution, which applies the real cepstrum and inverse cepstrum sequentially to derive the autocorrelation of the propagation response and estimate the dominant time delay. Once the system accurately estimates the time delay, the delay model identifies the corresponding reflection for localization. Because of structural limitations, the localization process performs the estimation in two stages: range and angle. A software toolchain combining propagation physics and algorithm simulation was used to derive the optimal 3D-printed structure. Acoustic experiments in an anechoic chamber show that 79.0% of the study-range data from the isotropic signal is properly detected by the response value, and 87.5% of the specific-direction data from the study-range signal is properly estimated by the response time. The product of both rates gives an overall hit rate of 69.1%. Full article
(This article belongs to the Section Physical Sensors)
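Homomorphic deconvolution rests on the real cepstrum, the inverse DFT of the log magnitude spectrum, in which an echo at delay d shows up as a peak at quefrency d. A minimal sketch using a synthetic impulse-plus-echo signal rather than the paper's reflective structure; the direct O(N^2) DFT is for clarity only:

```python
import cmath
import math

def real_cepstrum(x):
    """Real cepstrum: inverse DFT of the log magnitude spectrum.
    Direct O(N^2) DFT for clarity; an FFT would be used in practice."""
    n = len(x)
    spectrum = [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    log_mag = [math.log(abs(s)) for s in spectrum]
    return [sum(log_mag[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def estimate_delay(x):
    """Quefrency of the largest cepstral peak (excluding lag 0), searched
    over the first half of the symmetric real cepstrum."""
    c = real_cepstrum(x)
    half = c[1:len(c) // 2]
    return 1 + max(range(len(half)), key=lambda i: half[i])

# Unit impulse plus a single attenuated echo 5 samples later.
x = [0.0] * 32
x[0], x[5] = 1.0, 0.5
print(estimate_delay(x))  # 5
```

The echo contributes a term log(1 + a·e^(-iωd)) to the log spectrum, whose leading cepstral component lands at lag d, which is why the peak search recovers the delay.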

22 pages, 4163 KiB  
Article
Near-Field Sound Localization Based on the Small Profile Monaural Structure
by Youngwoong Kim and Keonwook Kim
Sensors 2015, 15(11), 28742-28763; https://doi.org/10.3390/s151128742 - 13 Nov 2015
Cited by 10 | Viewed by 5668
Abstract
The acoustic wave around a sound source in the near-field area presents unconventional properties in the temporal, spectral, and spatial domains due to the propagation mechanism. This paper investigates a near-field sound localizer in a small profile structure with a single microphone. The asymmetric structure around the microphone provides a distinctive spectral variation that can be recognized by the dedicated algorithm for directional localization. The physical structure consists of ten pipes of different lengths in a vertical fashion and rectangular wings positioned between the pipes in radial directions. The sound from an individual direction travels through the nearest open pipe, which generates the particular fundamental frequency according to the acoustic resonance. The Cepstral parameter is modified to evaluate the fundamental frequency. Once the system estimates the fundamental frequency of the received signal, the length of arrival and angle of arrival (AoA) are derived by the designed model. From an azimuthal distance of 3–15 cm from the outer body of the pipes, the extensive acoustic experiments with a 3D-printed structure show that the direct and side directions deliver average hit rates of 89% and 73%, respectively. The closer positions to the system demonstrate higher accuracy, and the overall hit rate performance is 78% up to 15 cm away from the structure body. Full article
(This article belongs to the Section Physical Sensors)
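Mapping a measured fundamental frequency back to a pipe, and hence a direction sector, can be sketched under the textbook open-ended-pipe assumption f0 = c / (2L); the pipe lengths and the one-pipe-per-sector mapping below are hypothetical, and end corrections are ignored:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def pipe_f0(length_m):
    """Fundamental resonance of an open-ended pipe: f0 = c / (2 L)."""
    return SPEED_OF_SOUND / (2.0 * length_m)

def classify_direction(measured_f0, pipe_lengths):
    """Index of the pipe whose resonance is closest to the measured
    fundamental; each pipe index maps to one azimuthal sector."""
    f0s = [pipe_f0(length) for length in pipe_lengths]
    return min(range(len(f0s)), key=lambda i: abs(f0s[i] - measured_f0))

# Ten hypothetical pipe lengths (m), one per 36-degree sector.
lengths = [0.05 + 0.01 * i for i in range(10)]  # 5 cm ... 14 cm
print(classify_direction(pipe_f0(0.08), lengths))  # 3
```

Because f0 is inversely proportional to pipe length, distinct lengths give well-separated resonances, which is what makes a nearest-frequency classifier workable.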

24 pages, 2693 KiB  
Article
Monaural Sound Localization Based on Structure-Induced Acoustic Resonance
by Keonwook Kim and Youngwoong Kim
Sensors 2015, 15(2), 3872-3895; https://doi.org/10.3390/s150203872 - 6 Feb 2015
Cited by 10 | Viewed by 6848
Abstract
A physical structure such as a cylindrical pipe controls the propagated sound spectrum in a predictable way that can be used to localize the sound source. This paper designs a monaural sound localization system based on multiple pyramidal horns around a single microphone. The acoustic resonance within the horn provides a periodicity in the spectral domain known as the fundamental frequency which is inversely proportional to the radial horn length. Once the system accurately estimates the fundamental frequency, the horn length and corresponding angle can be derived by the relationship. The modified Cepstrum algorithm is employed to evaluate the fundamental frequency. In an anechoic chamber, localization experiments over azimuthal configuration show that up to 61% of the proper signal is recognized correctly with 30% misfire. With a speculated detection threshold, the system estimates direction 52% in positive-to-positive and 34% in negative-to-positive decision rate, on average. Full article
(This article belongs to the Special Issue Acoustic Waveguide Sensors)