Immersive 3D Audio: From Architecture to Automotive

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Acoustics and Vibrations".

Deadline for manuscript submissions: closed (31 May 2022)
I3DA 2021 International Conference

Special Issue Editors


Guest Editor
Department of Architecture, University of Bologna, 40126 Bologna, Italy
Interests: acoustics; room acoustics; musical acoustics; emulation of nonlinear acoustic systems; 3D auralisation; multiple arrays in 3D acoustic measurements; noise barriers

Guest Editor
Department of Architecture, University of Bologna, Bologna, Italy
Interests: acoustic systems; 3D auralization; multiple arrays in 3D acoustic measurements; acoustics; room acoustics; musical acoustics; emulation of nonlinear acoustic systems

Special Issue Information

Dear Colleagues,

Immersive and 3D audio is one of the most important topics in audio engineering, room acoustics, automotive acoustics, and virtual and augmented reality, with applications ranging from music composition to automotive sound systems and 3D audio for gaming.

This Special Issue collects selected papers presented at the I3DA Conference “Immersive 3D Audio: from Architecture to Automotive”, held 8–10 September 2021, which provided world-leading scientists with a venue to present the latest research in the field and to discuss new directions and collaborations.

This Special Issue provides an opportunity to collect studies on 3D audio and on new methodologies and technologies in this field. It is intended for scientists, researchers, PhD students, and curators who wish to present high-level unpublished research using both theoretical and experimental approaches.

We welcome the submission of original manuscripts, case studies, and review papers in the field of room and building acoustics and related topics, such as measurement methods in architectural acoustics, methods for 3D sound reproduction, 3D audio for cinemas, augmented and virtual reality for audio and video, new materials for room acoustics, signal processing in 3D audio, and musical acoustics.

Prof. Dr. Lamberto Tronchin
Dr. Francesca Merli
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • architectural acoustics
  • 3D auralization
  • signal processing in acoustics
  • nonlinear acoustics in auralization
  • building acoustics
  • musical acoustics

Published Papers (5 papers)


Research

16 pages, 5989 KiB  
Article
Measurements of Room Acoustic and Thermo-Hygrometric Parameters—A Case Study
by Nicola Granzotto, Ruoran Yan and Lamberto Tronchin
Appl. Sci. 2023, 13(5), 2905; https://doi.org/10.3390/app13052905 - 24 Feb 2023
Cited by 9
Abstract
Equipment, sound sources, operators, microphone placement, calculation techniques, and thermo-hygrometric measurement conditions all have an impact on the measurement of impulse responses when several channels are present. However, the thermo-hygrometric variables, although a significant factor affecting the assessment of acoustic characteristics, are commonly overlooked in research. The effects of altering temperature, relative humidity, and air velocity on acoustic parameters are investigated in this paper through measurements carried out in an experimental room. The patterns of fluctuation of a range of room acoustic parameters are examined, data are acquired, and statistical analyses based on R (a language and environment for statistical computing and graphics) are performed in order to ascertain the relationship between the variation of acoustic parameters and the variation of thermo-hygrometric parameters. Finally, a statistical analysis reveals relationships between thermal and hygrometric variables and interior acoustic characteristics.
(This article belongs to the Special Issue Immersive 3D Audio: From Architecture to Automotive)
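As a back-of-the-envelope illustration of why thermo-hygrometric conditions matter (a sketch, not taken from the paper), the speed of sound in air depends on temperature, so the arrival times of reflections in an impulse response drift between measurement sessions:

```python
import math

def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in dry air (m/s) at temperature temp_c (deg C)."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

def arrival_time_ms(path_m: float, temp_c: float) -> float:
    """Travel time in milliseconds for a reflection path of length path_m."""
    return 1000.0 * path_m / speed_of_sound(temp_c)

# A 17 m reflection path shifts by roughly 0.8 ms between 15 C and 25 C,
# enough to move impulse-response peaks between repeated measurements.
shift = arrival_time_ms(17.0, 15.0) - arrival_time_ms(17.0, 25.0)
```

Humidity and air velocity act on top of this, mainly through air absorption at high frequencies, which is why the paper treats the thermo-hygrometric variables jointly.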

13 pages, 1579 KiB  
Article
Ear Centering for Accurate Synthesis of Near-Field Head-Related Transfer Functions
by Ayrton Urviola, Shuichi Sakamoto and César D. Salvador
Appl. Sci. 2022, 12(16), 8290; https://doi.org/10.3390/app12168290 - 19 Aug 2022
Cited by 1
Abstract
The head-related transfer function (HRTF) is a major tool in spatial sound technology. The HRTF for a point source is defined as the ratio between the sound pressure at the ear position and the free-field sound pressure at a reference position. The reference is typically placed at the center of the listener’s head. When using the spherical Fourier transform (SFT) and distance-varying filters (DVF) to synthesize HRTFs for point sources very close to the head, the spherical symmetry of the model around the head center does not allow for distinguishing between the ear position and the head center. Ear centering is a technique that overcomes this source of inaccuracy by translating the reference position. Hitherto, plane-wave (PW) translation operators have yielded effective ear centering when synthesizing far-field HRTFs. We propose spherical-wave (SW) translation operators for the ear centering required in the accurate synthesis of near-field HRTFs. We contrasted the performance of PW and SW ear centering. The synthesis errors decreased consistently when applying SW ear centering, and the enhancement was observed up to the maximum frequency determined by the spherical grid.
(This article belongs to the Special Issue Immersive 3D Audio: From Architecture to Automotive)
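A minimal sketch of the underlying idea (assumed notation, not the authors' code): the exact spherical-wave translation from the head center to the ear is the ratio of free-field point-source pressures, whereas a plane-wave operator applies only a phase shift, so the two diverge for nearby sources:

```python
import cmath
import math

def green(r: float, k: float) -> complex:
    """Free-field Green's function of a point source at distance r."""
    return cmath.exp(-1j * k * r) / (4 * math.pi * r)

def sw_translation(r_center: float, r_ear: float, k: float) -> complex:
    """Spherical-wave operator: exact pressure ratio ear/center for a point source."""
    return green(r_ear, k) / green(r_center, k)

def pw_translation(r_center: float, r_ear: float, k: float) -> complex:
    """Plane-wave operator: phase shift only, valid in the far field."""
    return cmath.exp(-1j * k * (r_ear - r_center))

k = 2 * math.pi * 1000 / 343.0  # wavenumber at 1 kHz
d = 0.09                        # assumed ear offset from head center, m

# Far source (2 m): PW and SW nearly agree; near source (0.25 m): amplitudes diverge.
far_err = abs(sw_translation(2.0, 2.0 - d, k) - pw_translation(2.0, 2.0 - d, k))
near_err = abs(sw_translation(0.25, 0.25 - d, k) - pw_translation(0.25, 0.25 - d, k))
```

The growing amplitude error of the PW operator in the near field is what motivates the SW operators proposed in the paper.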

20 pages, 5795 KiB  
Article
A Multi-Source Separation Approach Based on DOA Cue and DNN
by Yu Zhang, Maoshen Jia, Xinyu Jia and Tun-Wen Pai
Appl. Sci. 2022, 12(12), 6224; https://doi.org/10.3390/app12126224 - 19 Jun 2022
Abstract
Multiple sound source separation in a reverberant environment has become popular in recent years. To improve the quality of the separated signal in a reverberant environment, a separation method based on a DOA cue and a deep neural network (DNN) is proposed in this paper. Firstly, a pre-processing model based on non-negative matrix factorization (NMF) is utilized for recorded signal dereverberation, which makes source separation more efficient. Then, we propose a multi-source separation algorithm combining sparse and non-sparse component points recovery to obtain each sound source signal from the dereverberated signal. For sparse component points, the dominant sound source for each sparse component point is determined by a DOA cue. For non-sparse component points, a DNN is used to recover each sound source signal. Finally, the signals separated from the sparse and non-sparse component points are matched by temporal correlation to obtain each sound source signal. Both objective and subjective evaluation results indicate that, compared with an existing method, the proposed separation approach shows better performance in high-reverberation environments.
(This article belongs to the Special Issue Immersive 3D Audio: From Architecture to Automotive)
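As a toy illustration of the sparse-component step (hypothetical data, not the authors' implementation), each sparse time-frequency point can be assigned to the source whose DOA is angularly closest:

```python
def assign_by_doa(point_doas, source_doas):
    """Assign each sparse time-frequency point to the source with the
    closest direction of arrival (azimuths in degrees, on a circle)."""
    def ang_dist(a, b):
        d = abs(a - b) % 360.0
        return min(d, 360.0 - d)
    return [min(range(len(source_doas)),
                key=lambda i: ang_dist(p, source_doas[i]))
            for p in point_doas]

# Two sources at 30 and 120 degrees; noisy per-point DOA estimates.
labels = assign_by_doa([28.0, 35.0, 118.0, 125.0, 355.0], [30.0, 120.0])
```

Non-sparse points, where several sources overlap in a single time-frequency bin, are exactly the case this nearest-DOA rule cannot resolve, which is where the paper's DNN comes in.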

24 pages, 1828 KiB  
Article
Interpolating the Directional Room Impulse Response for Dynamic Spatial Audio Reproduction
by Jiahong Zhao, Xiguang Zheng, Christian Ritz and Daeyoung Jang
Appl. Sci. 2022, 12(4), 2061; https://doi.org/10.3390/app12042061 - 16 Feb 2022
Cited by 9
Abstract
Virtual reality (VR) is increasingly important for exploring the real world, which has partially moved to virtual workplaces. In order to create an immersive sense of presence in a simulated scene, VR needs to reproduce spatial audio that describes the three-dimensional acoustic characteristics of the counterpart physical environment. When the user moves, this reproduction should be updated dynamically, which poses practical challenges because the bandwidth for continuously transmitting audio and video scene data may be limited. This paper proposes an interpolation approach for dynamic spatial audio reproduction that uses the directional and reverberant acoustic characteristics at a limited number of positions, represented as a first-order Ambisonics encoding of the room impulse response (RIR), called the directional RIR (DRIR). We decompose two known DRIRs into reflection components, interpolate the early dominant components for DRIR synthesis, and use DRIR recordings to evaluate accuracy. Results indicate that the proposed method interpolates more accurately than two comparative approaches, particularly in a simulated small room where most direction-of-arrival estimation errors of early components are below five degrees. These findings suggest that the proposed approach yields precise interpolated DRIRs from limited data, which is vital for dynamic spatial audio reproduction in VR applications.
(This article belongs to the Special Issue Immersive 3D Audio: From Architecture to Automotive)
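A minimal sketch of the first-order Ambisonics (B-format) representation the paper builds on (standard encoding equations; the interpolation itself is not shown): a reflection arriving from a given direction is encoded into (W, X, Y, Z) channels, and its DOA can be recovered from the pseudo-intensity:

```python
import math

def foa_encode(sample: float, azimuth_deg: float, elevation_deg: float = 0.0):
    """Encode a mono sample into first-order Ambisonics B-format (W, X, Y, Z)."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)            # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)
    y = sample * math.sin(az) * math.cos(el)
    z = sample * math.sin(el)
    return w, x, y, z

def doa_from_foa(w: float, x: float, y: float) -> float:
    """Estimate azimuth (deg) from the pseudo-intensity (W times velocity channels)."""
    return math.degrees(math.atan2(w * y, w * x))

w, x, y, z = foa_encode(1.0, azimuth_deg=40.0)
est = doa_from_foa(w, x, y)  # recovers the encoding azimuth for a lone reflection
```

Per-reflection DOA estimates of this kind are what the method aligns and interpolates between measurement positions.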

22 pages, 5747 KiB  
Article
Locating Image Sources from Multiple Spatial Room Impulse Responses
by Otto Puomio, Nils Meyer-Kahlen and Tapio Lokki
Appl. Sci. 2021, 11(6), 2485; https://doi.org/10.3390/app11062485 - 10 Mar 2021
Cited by 5
Abstract
Measured spatial room impulse responses have been used to compare acoustic spaces. One way to analyze and render such responses is to apply parametric methods, yet those methods have been bound to single measurement locations. This paper introduces a method that locates image sources from spatial room impulse responses measured at multiple source and receiver positions. The method aligns the measurements to a common coordinate frame and groups stable direction-of-arrival estimates to find image source positions. The performance of the method is validated with three case studies—one small room and two concert halls. The studies show that the method is able to locate the most prominent image sources even in complex spaces, providing new insights into available spatial room impulse response (SRIR) data and a starting point for six degrees of freedom (6DoF) acoustic rendering.
(This article belongs to the Special Issue Immersive 3D Audio: From Architecture to Automotive)
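A first-order image source is simply the source position mirrored across a wall plane; a minimal sketch of that geometry (hypothetical coordinates, not from the paper):

```python
def mirror_across_wall(src, wall_point, wall_normal):
    """First-order image source: reflect src across the plane through
    wall_point with unit normal wall_normal (all 3-tuples, meters)."""
    # Signed distance from the source to the wall plane.
    d = sum((s - p) * n for s, p, n in zip(src, wall_point, wall_normal))
    # Move the source twice that distance along the normal.
    return tuple(s - 2.0 * d * n for s, n in zip(src, wall_normal))

# Source at (1, 2, 1.5) in a shoebox room; wall x = 0 with normal (1, 0, 0).
img = mirror_across_wall((1.0, 2.0, 1.5), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0))
```

The paper works in the opposite direction: from DOA estimates that stay stable across measurement positions, it triangulates where such mirrored positions must lie.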
