
A Real-Time Sound Field Rendering Processor

RIKEN Advanced Institute for Computational Science, Kobe, Hyogo 650-0047, Japan
Research Center for Advanced Computing Infrastructure, Japan Advanced Institute of Science & Technology, Nomi, Ishikawa 923-1292, Japan
Department of Architecture and Architectural Engineering, Kyoto University, Kyoto 615-8540, Japan
Department of Electrical Engineering and Information Technology, Tohoku Gakuin University, Sendai, Miyagi 980-8511, Japan
Faculty of Science and Engineering, Doshisha University, Kyotanabe, Kyoto 610-0321, Japan
Author to whom correspondence should be addressed.
Academic Editor: Vesa Välimäki
Appl. Sci. 2018, 8(1), 35;
Received: 3 November 2017 / Revised: 5 December 2017 / Accepted: 18 December 2017 / Published: 28 December 2017
(This article belongs to the Special Issue Sound and Music Computing)
Real-time sound field rendering is computationally and memory intensive. Traditional rendering systems based on software simulation are limited by memory bandwidth and the number of arithmetic units; the computation is time-consuming, and the sample rate of the output sound is low because each time step takes long to compute. In this work, a processor with a hybrid architecture is proposed to speed up computation and raise the sample rate of the output sound, and an interface is developed for system scalability: many chips can simply be cascaded to enlarge the simulated area. To render a three-minute Beethoven piece in a small shoebox room with dimensions of 1.28 m × 1.28 m × 0.64 m, the field-programmable gate array (FPGA)-based prototype machine with the proposed architecture carries out the rendering at run time, whereas a software simulation parallelized with OpenMP takes about 12.70 min on a personal computer (PC) with 32 GB of random access memory (RAM) and an Intel i7-6800K six-core processor running at 3.4 GHz. The throughput of the software simulation is about 194 M grids/s, while that of the prototype machine is 51.2 G grids/s, even though the prototype machine runs at a much lower clock frequency than the PC. The rendering processor, with one processing element (PE) and interfaces, consumes about 238,515 gates after fabrication in the 0.18 µm process technology from ROHM Semiconductor Co., Ltd. (Kyoto, Japan), and its power consumption is about 143.8 mW.
Keywords: sound field rendering; FPGA; FDTD
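The abstract and keywords point to the finite-difference time-domain (FDTD) method as the underlying rendering scheme: the room is discretized into a grid, and the pressure at every grid point is updated once per output sample, which is why throughput is reported in grids/s. The paper itself does not spell out the update equations here, so the following is only an illustrative sketch of a standard second-order explicit FDTD update for the 3-D acoustic wave equation; the array layout, function name, and the Courant parameter are assumptions, not the authors' implementation.

```python
import numpy as np

def fdtd_step(p, p_prev, courant2):
    """One explicit FDTD time step for the 3-D acoustic wave equation.

    p, p_prev : pressure fields at time steps n and n-1 (3-D arrays)
    courant2  : (c*dt/dx)**2; stability requires c*dt/dx <= 1/sqrt(3) in 3-D
    Boundary cells are left untouched (a crude rigid boundary for brevity).
    """
    # 7-point discrete Laplacian on the interior of the grid
    lap = (-6.0 * p[1:-1, 1:-1, 1:-1]
           + p[2:, 1:-1, 1:-1] + p[:-2, 1:-1, 1:-1]
           + p[1:-1, 2:, 1:-1] + p[1:-1, :-2, 1:-1]
           + p[1:-1, 1:-1, 2:] + p[1:-1, 1:-1, :-2])
    p_next = p.copy()
    # Leapfrog update: p(n+1) = 2*p(n) - p(n-1) + (c*dt/dx)^2 * Laplacian
    p_next[1:-1, 1:-1, 1:-1] = (2.0 * p[1:-1, 1:-1, 1:-1]
                                - p_prev[1:-1, 1:-1, 1:-1]
                                + courant2 * lap)
    return p_next
```

In this picture, throughput in grids/s is simply (grid points) × (updates per second), which is what makes a deeply pipelined hardware PE attractive: it can update many grid points per clock cycle even at a modest clock frequency.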
MDPI and ACS Style

Yiyu, T.; Inoguchi, Y.; Otani, M.; Iwaya, Y.; Tsuchiya, T. A Real-Time Sound Field Rendering Processor. Appl. Sci. 2018, 8, 35.
