Open Access Article
Appl. Sci. 2018, 8(1), 35; https://doi.org/10.3390/app8010035

A Real-Time Sound Field Rendering Processor

1 RIKEN Advanced Institute for Computational Science, Kobe, Hyogo 650-0047, Japan
2 Research Center for Advanced Computing Infrastructure, Japan Advanced Institute of Science & Technology, Nomi, Ishikawa 923-1292, Japan
3 Department of Architecture and Architectural Engineering, Kyoto University, Kyoto 615-8540, Japan
4 Department of Electrical Engineering and Information Technology, Tohoku Gakuin University, Sendai, Miyagi 980-8511, Japan
5 Faculty of Science and Engineering, Doshisha University, Kyotanabe, Kyoto 610-0321, Japan
* Author to whom correspondence should be addressed.
Academic Editor: Vesa Valimaki
Received: 3 November 2017 / Revised: 5 December 2017 / Accepted: 18 December 2017 / Published: 28 December 2017
(This article belongs to the Special Issue Sound and Music Computing)

Abstract

Real-time sound field rendering is computationally intensive and memory-intensive. Traditional rendering systems based on computer simulation are limited by memory bandwidth and the number of arithmetic units; the computation is time-consuming, and the sample rate of the output sound is low because of the long computation time at each time step. In this work, a processor with a hybrid architecture is proposed to speed up computation and improve the sample rate of the output sound, and an interface is developed for system scalability, allowing many chips to be simply cascaded to enlarge the simulated area. To render a three-minute Beethoven wave sound in a small shoebox room with dimensions of 1.28 m × 1.28 m × 0.64 m, the field-programmable gate array (FPGA)-based prototype machine with the proposed architecture carries out the sound rendering in real time, whereas the software simulation with OpenMP parallelization takes about 12.70 min on a personal computer (PC) with 32 GB of random access memory (RAM) and an Intel i7-6800K six-core processor running at 3.4 GHz. The throughput of the software simulation is about 194 M grids/s, while that of the prototype machine is 51.2 G grids/s, even though the clock frequency of the prototype machine is much lower than that of the PC. The rendering processor with a processing element (PE) and interfaces consumes about 238,515 gates when fabricated with the 0.18 µm process technology from ROHM Semiconductor Co., Ltd. (Kyoto, Japan), and its power consumption is about 143.8 mW.
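The abstract identifies the rendering scheme only through its FDTD keyword, so the sketch below is a generic, single-threaded reference for what one finite-difference time-domain update step involves, not the processor's actual scheme. The 0.02 m grid spacing, the 64 × 64 × 32 grid derived from the 1.28 m × 1.28 m × 0.64 m room, the impulse source, and the omitted boundary treatment are all illustrative assumptions, not parameters stated in the paper.

/* Minimal 3-D FDTD sketch for the acoustic wave equation (assumptions noted above). */
#include <stdio.h>
#include <string.h>

#define NX 64   /* 1.28 m / 0.02 m (assumed spacing) */
#define NY 64   /* 1.28 m / 0.02 m                   */
#define NZ 32   /* 0.64 m / 0.02 m                   */

static double p_prev[NX][NY][NZ], p_curr[NX][NY][NZ], p_next[NX][NY][NZ];

int main(void)
{
    const double c  = 343.0;               /* speed of sound, m/s                    */
    const double dx = 0.02;                /* assumed grid spacing, m                */
    const double dt = dx / (c * 1.7321);   /* time step at the 3-D CFL limit (1/sqrt(3)) */
    const double k2 = (c * dt / dx) * (c * dt / dx);

    p_curr[NX / 2][NY / 2][NZ / 2] = 1.0;  /* impulse source at the room centre */

    for (int n = 0; n < 1000; n++) {
        /* Leapfrog update on interior points; boundary cells are left untouched here. */
        for (int i = 1; i < NX - 1; i++)
            for (int j = 1; j < NY - 1; j++)
                for (int k = 1; k < NZ - 1; k++) {
                    double lap = p_curr[i + 1][j][k] + p_curr[i - 1][j][k]
                               + p_curr[i][j + 1][k] + p_curr[i][j - 1][k]
                               + p_curr[i][j][k + 1] + p_curr[i][j][k - 1]
                               - 6.0 * p_curr[i][j][k];
                    p_next[i][j][k] = 2.0 * p_curr[i][j][k] - p_prev[i][j][k] + k2 * lap;
                }
        /* Rotate the three time levels. */
        memcpy(p_prev, p_curr, sizeof p_curr);
        memcpy(p_curr, p_next, sizeof p_next);
    }
    printf("pressure at receiver: %e\n", p_curr[NX / 4][NY / 4][NZ / 4]);
    return 0;
}

Counting one grid-point update as one unit of work, the quoted throughputs imply a speed-up of roughly 51.2 G / 194 M ≈ 264× for the FPGA prototype over the OpenMP software simulation.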
Keywords: sound field rendering; FPGA; FDTD

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Share & Cite This Article

MDPI and ACS Style

Yiyu, T.; Inoguchi, Y.; Otani, M.; Iwaya, Y.; Tsuchiya, T. A Real-Time Sound Field Rendering Processor. Appl. Sci. 2018, 8, 35.
