Search Results (10)

Search Parameters:
Keywords = binaural auralization

33 pages, 46059 KB  
Article
Real and Virtual Lecture Rooms: Validation of a Virtual Reality System for the Perceptual Assessment of Room Acoustical Quality
by Angela Guastamacchia, Riccardo Giovanni Rosso, Giuseppina Emma Puglisi, Fabrizio Riente, Louena Shtrepi and Arianna Astolfi
Acoustics 2024, 6(4), 933-965; https://doi.org/10.3390/acoustics6040052 - 30 Oct 2024
Cited by 2 | Viewed by 3760
Abstract
Enhancing the acoustical quality in learning environments is necessary, especially for hearing aid (HA) users. When in-field evaluations cannot be performed, virtual reality (VR) can be adopted for acoustical quality assessments of existing and new buildings, contributing to the acquisition of subjective impressions in lab settings. To ensure an accurate spatial reproduction of the sound field in VR for HA users, multi-speaker-based systems can be employed to auralize a given environment. However, most systems require considerable effort in terms of cost, size, and construction. This work deals with the validation of a VR system based on a 16-speaker array synced with a VR headset, arranged to be easily replicated in small non-anechoic spaces and suitable for HA users. Both objective and subjective validations are performed against a real university lecture room of 800 m³ with a mid-frequency reverberation time of 2.3 s. Binaural and monaural room acoustic parameters are compared between measurements in the real lecture room and its lab reproduction. To validate the audiovisual experience, 32 normal-hearing subjects were administered the Igroup Presence Questionnaire (IPQ) on the overall sense of perceived presence. The outcomes confirm that the system is a promising and feasible tool to predict the perceived acoustical quality of a room.
(This article belongs to the Special Issue Acoustical Comfort in Educational Buildings)
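As a concrete illustration of the room acoustic parameters compared in this validation, the minimal Python sketch below estimates reverberation time from a room impulse response via Schroeder backward integration, the standard ISO 3382-style approach behind figures like the 2.3 s mid-frequency value quoted above. This is an assumption-laden sketch on synthetic data, not the authors' code.

```python
import numpy as np

def reverberation_time(rir, fs, db_start=-5.0, db_end=-35.0):
    """Estimate T30 (decay rate between -5 and -35 dB, extrapolated to 60 dB)."""
    energy = np.asarray(rir, dtype=float) ** 2
    # Schroeder backward integral: energy remaining after each instant, in dB.
    edc = np.cumsum(energy[::-1])[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0])
    # Linear fit to the decay curve within the evaluation range.
    idx = np.where((edc_db <= db_start) & (edc_db >= db_end))[0]
    slope, _ = np.polyfit(idx / fs, edc_db[idx], 1)  # dB per second (negative)
    return -60.0 / slope

# Toy usage: exponentially decaying noise with a true T60 of 2.3 s.
fs = 16000
t = np.arange(int(2.5 * fs)) / fs
rir = np.random.randn(t.size) * 10.0 ** (-3.0 * t / 2.3)  # -60 dB at t = 2.3 s
print(f"Estimated T30: {reverberation_time(rir, fs):.2f} s")
```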

21 pages, 1923 KB  
Article
Binaural Auralization of Room Acoustics with a Highly Scalable Wave-Based Acoustics Simulation
by Takumi Yoshida, Takeshi Okuzono and Kimihiro Sakagami
Appl. Sci. 2023, 13(5), 2832; https://doi.org/10.3390/app13052832 - 22 Feb 2023
Cited by 4 | Viewed by 3822
Abstract
This paper proposes an efficient binaural room-acoustics auralization method, an essential goal of room-acoustics modeling. The method uses a massively parallel wave-based room-acoustics solver based on a dispersion-optimized explicit time-domain finite element method (TD-FEM). The binaural auralization uses a hybrid technique of first-order Ambisonics (FOA) and head-related transfer functions. Ambisonics encoding uses room impulse responses computed by the parallel wave-based solver, which can model sound absorbers with complex-valued surface impedance. Details are given of the novel procedure for computing the expansion coefficients of the spherical harmonics composing the FOA signal. This report is the first to present a parallel wave-based solver able to simulate room impulse responses within practical computational times in an HPC cloud environment. A meeting room problem with 35 million degrees of freedom (DOF) and a classroom problem with 100 million DOF are used to test the parallel performance on up to 6144 CPU cores. The potential of the proposed method is then demonstrated via an auditorium acoustics simulation up to 5 kHz with 750 million DOF. Room-acoustics auralization is performed for two acoustic treatment scenarios, with room-acoustics evaluations using the FOA signal, the binaural room impulse response, and four room acoustical parameters. The auditorium simulation showed that the proposed method enables binaural room-acoustics auralization within 13,000 s using 6144 cores.
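The FOA-plus-HRTF hybrid the abstract describes can be illustrated with a minimal sketch: encode a signal into first-order B-format for a given direction, then render it binaurally through virtual loudspeakers convolved with HRIRs. This is a generic textbook pipeline under assumed conventions (ACN channel order, a basic sampling decoder, placeholder unit-impulse HRIRs), not the paper's implementation.

```python
import numpy as np

def foa_encode(signal, azimuth, elevation):
    """Mono signal at (azimuth, elevation), radians -> FOA channels (W, Y, Z, X)."""
    w = signal * 1.0
    x = signal * np.cos(elevation) * np.cos(azimuth)
    y = signal * np.cos(elevation) * np.sin(azimuth)
    z = signal * np.sin(elevation)
    return np.stack([w, y, z, x])  # assumed ACN channel order

def foa_to_binaural(bformat, speaker_dirs, hrirs_left, hrirs_right):
    """Basic sampling decoder: virtual speaker feeds convolved with HRIRs."""
    w, y, z, x = bformat
    out_left, out_right = 0.0, 0.0
    for (az, el), hl, hr in zip(speaker_dirs, hrirs_left, hrirs_right):
        # Sample the first-order sound field in this virtual speaker's direction.
        feed = 0.5 * (w + np.cos(el) * np.cos(az) * x
                      + np.cos(el) * np.sin(az) * y
                      + np.sin(el) * z)
        out_left = out_left + np.convolve(feed, hl)
        out_right = out_right + np.convolve(feed, hr)
    return out_left, out_right

# Toy usage: a source at 90 degrees left over a square of four virtual speakers.
sig = np.random.randn(480)
b = foa_encode(sig, np.radians(90.0), 0.0)
dirs = [(np.radians(a), 0.0) for a in (45.0, 135.0, 225.0, 315.0)]
h = np.zeros(64); h[0] = 1.0  # placeholder unit-impulse "HRIRs"
left, right = foa_to_binaural(b, dirs, [h] * 4, [h] * 4)
```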

18 pages, 3158 KB  
Article
Creating Audio Object-Focused Acoustic Environments for Room-Scale Virtual Reality
by Constantin Popp and Damian T. Murphy
Appl. Sci. 2022, 12(14), 7306; https://doi.org/10.3390/app12147306 - 20 Jul 2022
Cited by 10 | Viewed by 6731
Abstract
The affordances of room-scale virtual reality (VR) in movement and interactivity create new challenges for the design of virtual acoustic environments for VR experiences. Such environments are typically constructed from virtual interactive objects accompanied by an Ambisonic bed and an off-screen ("invisible") music soundtrack, with the Ambisonic bed, music, and virtual acoustics describing the aural features of an area. This methodology becomes problematic in room-scale VR, as the player cannot approach or interact with such background sounds, which contradicts the player's motion aurally and limits interactivity. Written from a sound designer's perspective, the paper addresses these issues by proposing a novel, musically inclusive methodology that reimagines an acoustic environment predominantly using objects governed by multimodal rule-based systems and spatialized in six degrees of freedom using 3D binaural audio exclusively, while minimizing the use of Ambisonic beds and non-diegetic music. The methodology is implemented using off-the-shelf, creator-oriented tools and is evaluated through the development of a standalone, narrative, prototype room-scale VR experience. The experience's target platform is a mobile, untethered VR system based on head-mounted displays, inside-out tracking, head-mounted loudspeakers or headphones, and hand-held controllers. The authors apply their methodology to the generation of ambiences based on sound-based music, sound effects, and virtual acoustics. The proposed methodology benefits the interactivity and spatial behavior of virtual acoustic environments but may be constrained by platform and project limitations.
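A hedged sketch of the core 6DoF bookkeeping behind object-based binaural spatialization follows: each audio object's world position is re-expressed in the listener's head-local frame, yielding the per-frame azimuth, elevation, distance, and a simple gain a binaural renderer would consume. The yaw-only rotation and the inverse-distance gain law are simplifying assumptions, not the authors' method.

```python
import numpy as np

def object_to_local(obj_pos, head_pos, head_yaw):
    """World-space object position -> (azimuth, elevation, distance, gain)
    in the listener's head frame. head_yaw is rotation about the vertical
    axis in radians; a full implementation would use the head quaternion."""
    v = np.asarray(obj_pos, float) - np.asarray(head_pos, float)
    c, s = np.cos(-head_yaw), np.sin(-head_yaw)   # undo the head rotation
    x, y = c * v[0] - s * v[1], s * v[0] + c * v[1]
    z = v[2]
    dist = float(np.sqrt(x * x + y * y + z * z))
    azimuth = float(np.arctan2(y, x))
    elevation = float(np.arcsin(z / max(dist, 1e-9)))
    gain = 1.0 / max(dist, 0.1)  # inverse-distance law, clamped near the head
    return azimuth, elevation, dist, gain

# Toy usage: object 2 m ahead-left of a listener who has turned 45 degrees.
print(object_to_local([2.0, 1.0, 1.6], [0.0, 0.0, 1.6], np.pi / 4))
```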

17 pages, 11946 KB  
Article
Towards Child-Appropriate Virtual Acoustic Environments: A Database of High-Resolution HRTF Measurements and 3D-Scans of Children
by Hark Simon Braren and Janina Fels
Int. J. Environ. Res. Public Health 2022, 19(1), 324; https://doi.org/10.3390/ijerph19010324 - 29 Dec 2021
Cited by 10 | Viewed by 4715
Abstract
Head-related transfer functions (HRTFs) play a significant role in modern acoustic experiment designs through the auralization of three-dimensional virtual acoustic environments. This technique makes it possible to create close-to-real-life situations, including room-acoustic effects, background noise, and multiple sources, in a controlled laboratory environment. While adult HRTF databases are widely available to the research community, datasets of children are not. To fill this gap, children aged 5–10 years were recruited among 1st and 2nd year primary school children in Aachen, Germany. Their HRTFs were measured in a hemi-anechoic chamber at a 5-degree × 5-degree resolution. Special care was taken to reduce motion artifacts by means of fast measurement routines. To complement the HRTF measurements with the anthropometric data needed for individualization methods, a high-resolution 3D scan of the head and upper torso of each participant was recorded. The HRTF measurement took around 3 min. The children's head movement during that time was larger than that of adult participants in comparable experiments but was generally kept within 5 degrees of rotary and 1 cm of translatory motion; adult participants only exhibit this range of motion in longer measurements. A comparison of the HRTF measurements to the KEMAR artificial head shows that it is not representative of an average child HRTF. Differences can be seen both in the spectrum and in the interaural time delay (ITD), with ITD differences of 70 μs on average and a maximum difference of 138 μs. For both spectrum and ITD, the KEMAR more closely resembles the 95th percentile of the range of the children's data. This warrants a closer look at using child-specific HRTFs in the binaural presentation of virtual acoustic environments in the future.
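ITD figures like the 70 μs average difference reported here are commonly extracted from measured HRIR pairs by cross-correlation; the sketch below shows that generic method on synthetic impulse pairs. It is an illustration under assumptions (sign convention, sample rate), not the database's processing pipeline.

```python
import numpy as np

def itd_from_hrirs(hrir_left, hrir_right, fs):
    """ITD in seconds from an HRIR pair (positive: sound reaches the left ear first)."""
    xcorr = np.correlate(hrir_right, hrir_left, mode="full")
    lag = int(np.argmax(np.abs(xcorr))) - (len(hrir_left) - 1)
    return lag / fs

# Toy check: the right-ear impulse arrives 10 samples late at 48 kHz (~208 us).
fs = 48000
left = np.zeros(256); left[20] = 1.0
right = np.zeros(256); right[30] = 1.0
print(f"ITD: {itd_from_hrirs(left, right, fs) * 1e6:.0f} us")
```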

16 pages, 533 KB  
Review
Sound Localization and Lateralization by Bilateral Bone Conduction Devices, Middle Ear Implants, and Cartilage Conduction Hearing Aids
by Kimio Shiraishi
Audiol. Res. 2021, 11(4), 508-523; https://doi.org/10.3390/audiolres11040046 - 30 Sep 2021
Cited by 13 | Viewed by 7460
Abstract
Sound localization in daily life is one of the important functions of binaural hearing. Bilateral bone conduction devices (BCDs), middle ear implants, and cartilage conduction hearing aids have often been applied for patients with conductive hearing loss (CHL) or mixed hearing loss, resulting, for example, from bilateral microtia and aural atresia. In this review, factors affecting the accuracy of sound localization with bilateral BCDs, middle ear implants, and cartilage conduction hearing aids are classified into four categories: (1) type of device, (2) experimental conditions, (3) participants, and (4) pathways from the stimulus sound to the two cochleae. Studies from the past 10 years on sound localization and lateralization with BCDs, middle ear implants, and cartilage conduction hearing aids are discussed. Most studies showed benefits of bilateral devices for sound localization or lateralization. However, judgment accuracy was generally lower, and localization errors tended to be larger, than for normal hearing. Moreover, the accuracy of sound localization with bilateral BCDs varied considerably among patients. Further research on sound localization is necessary to analyze the complicated mechanism of bone conduction, including suprathreshold air conduction, with bilateral devices.
(This article belongs to the Special Issue Bone and Cartilage Conduction)

7 pages, 872 KB  
Article
Benefits of Cartilage Conduction Hearing Aids for Speech Perception in Unilateral Aural Atresia
by Sakie Akasaka, Tadashi Nishimura, Hiroshi Hosoi, Osamu Saito, Ryota Shimokura, Chihiro Morimoto and Tadashi Kitahara
Audiol. Res. 2021, 11(2), 284-290; https://doi.org/10.3390/audiolres11020026 - 17 Jun 2021
Cited by 12 | Viewed by 3223
Abstract
Severe conductive hearing loss due to unilateral aural atresia leads to auditory and developmental disorders, such as difficulty hearing in challenging situations. Bone conduction devices compensate for the disability but have several disadvantages. The aim of this study was to evaluate the benefits of cartilage conduction (CC) hearing aids for speech perception in unilateral aural atresia. Eleven patients with unilateral aural atresia were included, each using a CC hearing aid in the atretic ear. Speech recognition scores in the binaural hearing condition were obtained at low speech levels to evaluate the contribution of the aided atretic ear to speech perception, and were also obtained with and without the presentation of noise. These assessments were compared between the unaided and aided atretic ear conditions. Speech recognition scores at low speech levels were significantly improved under the aided atretic ear condition (p < 0.05). A CC hearing aid in the unilateral atretic ear did not significantly improve the speech recognition score under a symmetrical noise presentation condition. The binaural hearing benefit of CC hearing aids in unilateral aural atresia was considered predominantly a diotic summation; other benefits of binaural hearing remain to be investigated.
(This article belongs to the Special Issue Bone and Cartilage Conduction)
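For readers wanting to reproduce this kind of paired comparison, the sketch below shows a generic nonparametric test of the sort often applied to small paired samples (n = 11). The scores are made up for illustration; the paper's actual data and choice of test are not reproduced here.

```python
# Hypothetical paired scores (% correct), unaided vs. aided atretic ear.
from scipy.stats import wilcoxon

unaided = [40, 55, 35, 60, 45, 50, 30, 65, 42, 48, 38]
aided = [62, 70, 55, 72, 60, 68, 50, 78, 58, 66, 54]
stat, p = wilcoxon(unaided, aided)  # paired, nonparametric
print(f"Wilcoxon W = {stat}, p = {p:.4f}")
```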

22 pages, 7414 KB  
Article
Auralization of High-Order Directional Sources from First-Order RIR Measurements
by Markus Zaunschirm, Franck Zagala and Franz Zotter
Appl. Sci. 2020, 10(11), 3747; https://doi.org/10.3390/app10113747 - 28 May 2020
Cited by 4 | Viewed by 4015
Abstract
Can auralization of a highly directional source in a room succeed if it employs a room impulse response (RIR) measurement or simulation relying only on a first-order directional source? This contribution presents a model and evaluation of a source-and-receiver-directional Ambisonic RIR capture and processing approach (SRD ARIR) based on a small set of responses from a first-order source to a first-order receiver. To enhance the directional resolution, we extend the Ambisonic spatial decomposition method (ASDM) to upscale the first-order resolution of both source and receiver to higher orders. To evaluate the method, a listening experiment was conducted based on first-order SRD-ARIR measurements, into which the higher-order directivity of the icosahedral loudspeaker (IKO), a directional source with well-studied perceptual effects, was inserted. The results show how the proposed method performs and compares to alternative rendering methods based on measurements taken in the same acoustic environment, e.g., multiple-orientation binaural room impulse responses (MOBRIRs) from the physical IKO to a KU-100 dummy head, or higher-order SRD ARIRs from the IKO to an em32 Eigenmike. For optimal externalization, our experiments exploit the benefits of virtual reality, using a highly realistic visualization on a head-mounted display and a user interface for reporting localization by placing interactive visual objects in the virtual space.
(This article belongs to the Section Acoustics and Vibrations)
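The directional analysis underlying ASDM-style upscaling can be sketched generically: per time frame, estimate a dominant direction of arrival from the first-order pseudo-intensity vector, then re-encode the omni signal at that direction at a higher Ambisonic order. The sketch below covers only the DOA step and omits the source-directivity and diffuseness handling of the paper's method.

```python
import numpy as np

def frame_doa(w, x, y, z):
    """Dominant direction of arrival (azimuth, elevation, radians) for one
    frame of first-order B-format samples, from the pseudo-intensity vector."""
    intensity = np.array([np.mean(w * x), np.mean(w * y), np.mean(w * z)])
    azimuth = np.arctan2(intensity[1], intensity[0])
    elevation = np.arctan2(intensity[2], np.hypot(intensity[0], intensity[1]))
    return azimuth, elevation

# Toy usage: a 440 Hz plane wave encoded from 45 degrees azimuth.
t = np.linspace(0.0, 0.01, 480)
s = np.sin(2 * np.pi * 440.0 * t)
az0 = np.radians(45.0)
print(np.degrees(frame_doa(s, s * np.cos(az0), s * np.sin(az0), 0.0 * s)))
```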

16 pages, 1795 KB  
Article
Representation of Multiple Acoustic Sources in a Virtual Image of the Field of Audition from Binaural Synthetic Aperture Processing as the Head is Turned
by Duncan Tamsett
Robotics 2019, 8(1), 1; https://doi.org/10.3390/robotics8010001 - 23 Dec 2018
Cited by 2 | Viewed by 5368
Abstract
The representation of multiple acoustic sources in a virtual image of the field of audition based on binaural synthetic-aperture computation (SAC) is described using simulated inter-aural time delay (ITD) data; directions to the acoustic sources may be extracted from the image. ITDs for multiple acoustic sources at an effective instant in time are implied, for example, by multiple peaks in the coefficients of a short-time-base (≈2.25 ms for an antenna separation of 0.15 m) cross-correlation function (CCF) of the acoustic signals received at the antennae. The CCF coefficients for such peaks, at the time delays measured for a given orientation of the head, are then distended over lambda circles in a short-time-base instantaneous acoustic image of the field of audition. Numerous successive short-time-base images generated as the head is turned are integrated into a mid-time-base (up to, say, 0.5 s) acoustic image of the field of audition; this integration as the head turns constitutes a SAC. The intersections of many lambda circles at points in the SAC acoustic image generate maxima in the integrated CCF coefficient values recorded in the image, and the positions of these maxima represent the directions to the acoustic sources. The source locations so derived provide input to a process managing the long-time-base (tens of seconds or more) acoustic image of the field of audition, which represents the robot's persistent acoustic environmental world view. The virtual images could optionally be displayed on monitors external to the robot to assist system debugging and inspire ongoing development.
(This article belongs to the Special Issue Feature Papers)
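The CCF-to-lambda-circle mapping the abstract describes reduces, in sketch form, to cross-correlating the two ear signals, taking peak lags as ITD candidates, and converting each to an angle via λ = arccos(cτ/d) for antenna separation d. The sketch below is deliberately naive (largest-value peak picking, an assumed sign convention), not the author's implementation.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SEPARATION = 0.15       # antenna separation d, m (as in the abstract)

def lambda_angles(left, right, fs, n_peaks=3):
    """Angles between the auditory axis and candidate source directions."""
    max_lag = int(np.ceil(SEPARATION / SPEED_OF_SOUND * fs))  # physical ITD range
    xcorr = np.correlate(left, right, mode="full")
    mid = len(right) - 1                        # index of zero lag
    lags = np.arange(-max_lag, max_lag + 1)
    window = xcorr[mid - max_lag: mid + max_lag + 1]
    peak_lags = lags[np.argsort(window)[-n_peaks:]]  # naive: largest values
    cos_lam = np.clip(SPEED_OF_SOUND * peak_lags / fs / SEPARATION, -1.0, 1.0)
    return np.arccos(cos_lam)
```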

18 pages, 2759 KB  
Article
Synthetic Aperture Computation as the Head is Turned in Binaural Direction Finding
by Duncan Tamsett
Robotics 2017, 6(1), 3; https://doi.org/10.3390/robotics6010003 - 12 Mar 2017
Cited by 4 | Viewed by 7302
Abstract
Binaural systems measure instantaneous time/level differences between the acoustic signals received at the ears to determine angles λ between the auditory axis and the directions to acoustic sources. An angle λ locates a source on a small circle of colatitude (a lambda circle) on a sphere symmetric about the auditory axis. As the head is turned while listening to a sound, acoustic energy over successive instantaneous lambda circles is integrated in a virtual/subconscious field of audition. The directions in azimuth and elevation to maxima in integrated acoustic energy, or to points of intersection of lambda circles, are the directions to acoustic sources. This process in a robotic system, or an equivalent neural implementation in nature, delivers its solutions to the aurally informed world view. The process is analogous to migration applied to seismic profiler data, and to that in synthetic aperture radar/sonar systems. A slanting auditory axis, e.g., as possessed by some species of owl, sweeps the surface of a cone as the head is turned about a single axis. Thus, the plane in which the auditory axis turns continuously changes, enabling robustly unambiguous directions to acoustic sources to be determined.
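The synthetic-aperture integration itself can be sketched as accumulation on a direction grid: for each head orientation, every grid cell whose angle to the rotated auditory axis matches a measured λ receives that observation's weight, and maxima in the accumulated image mark source directions. The grid resolution, tolerance, and yaw-only axis model below are arbitrary assumptions, not the author's system.

```python
import numpy as np

def accumulate_sac(observations, grid_deg=2.0, tol=np.radians(3.0)):
    """observations: iterable of (head_yaw, lam, weight), angles in radians.
    Returns an (elevation x azimuth) image whose maxima mark source directions."""
    az = np.radians(np.arange(-180.0, 180.0, grid_deg))
    el = np.radians(np.arange(-90.0, 90.0 + grid_deg, grid_deg))
    azg, elg = np.meshgrid(az, el)
    # Unit vectors of all candidate directions on the grid.
    dirs = np.stack([np.cos(elg) * np.cos(azg),
                     np.cos(elg) * np.sin(azg),
                     np.sin(elg)])
    image = np.zeros_like(azg)
    for head_yaw, lam, weight in observations:
        # Interaural axis in world coordinates for this head orientation.
        axis = np.array([np.cos(head_yaw), np.sin(head_yaw), 0.0])
        angle = np.arccos(np.clip(np.tensordot(axis, dirs, axes=1), -1.0, 1.0))
        image += weight * (np.abs(angle - lam) < tol)  # distend over the circle
    return image
```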

16 pages, 1839 KB  
Article
Synthesis of a Virtual Urban Soundscape
by Monika Rychtáriková, Martin Jedovnický, Andrea Vargová and Christ Glorieux
Buildings 2014, 4(2), 139-154; https://doi.org/10.3390/buildings4020139 - 15 May 2014
Cited by 5 | Viewed by 6864
Abstract
The main research question addressed in this article is to what extent it is possible to predict statistical noise levels such as L5 and L95 on an urban public square based on information about the square's functionality, the activities going on, and the architecture of the surrounding buildings. The same information is also exploited to auralize the soundscape of the virtual square, in order to assess the disturbance people perceive from traffic noise by means of laboratory listening tests based on binaural sound recordings acquired in situ and incorporated into simulations that evoke typical acoustic situations. Auralizations were carried out with two calculation algorithms (ray tracing and the image source method) and two acoustic scenarios (an anechoic situation and a virtually reconstructed square in Odeon®). The statistical noise levels calculated from the auralized soundscapes compare well with in situ measurements. The listening test results also show significant differences in people's perception of traffic noise depending on their origin.
(This article belongs to the Special Issue Architectural, Urban and Natural Soundscapes)
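The statistical noise levels at the heart of this study are percentile levels: L5 and L95 are the levels exceeded 5% and 95% of the time, respectively. The sketch below computes them from a synthetic time series of short-term A-weighted levels; it illustrates the definition, not the paper's measurement chain.

```python
import numpy as np

def statistical_levels(level_series):
    """L5 and L95 in dB(A) from a series of short-term sound levels."""
    l5 = np.percentile(level_series, 95)   # level exceeded 5% of the time
    l95 = np.percentile(level_series, 5)   # level exceeded 95% of the time
    return l5, l95

# Toy usage: one hour of 1 s levels fluctuating around 60 dB(A).
levels = 60.0 + 5.0 * np.random.randn(3600)
print("L5 = %.1f dB(A), L95 = %.1f dB(A)" % statistical_levels(levels))
```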
