Open Access Article
Appl. Sci. 2017, 7(11), 1204; doi:10.3390/app7111204

EigenScape: A Database of Spatial Acoustic Scene Recordings

Audio Lab, Department of Electronic Engineering, University of York, Heslington, York YO10 5DQ, UK
* Author to whom correspondence should be addressed.
Academic Editor: Tapio Lokki
Received: 23 October 2017 / Revised: 8 November 2017 / Accepted: 21 November 2017 / Published: 22 November 2017
(This article belongs to the Special Issue Sound and Music Computing)

Abstract

The classification of acoustic scenes and events is an emerging area of research in the field of machine listening. Most of the research conducted so far uses spectral features extracted from monaural or stereophonic audio rather than spatial features extracted from multichannel recordings. This is partly due to the lack, thus far, of a substantial body of spatial recordings of acoustic scenes. This paper formally introduces EigenScape, a new database of fourth-order Ambisonic recordings of eight different acoustic scene classes. The potential applications of a spatial machine listening system are discussed before detailed information on the recording process and dataset is provided. A baseline spatial classification system using directional audio coding (DirAC) techniques is detailed, and results from this classifier are presented. The classifier gives good overall scene classification accuracy across the dataset: seven of the eight scene classes are classified with greater than 60% accuracy, and overall accuracy improves by 11% relative to Mel-frequency cepstral coefficient (MFCC) features. Further analysis of the results suggests potential improvements to the classifier. It is concluded that the results validate the new database and show that spatial features can characterise acoustic scenes and as such are worthy of further investigation.
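The DirAC techniques mentioned in the abstract estimate sound direction from the acoustic intensity vector of an Ambisonic (B-format) signal. As a minimal sketch of the idea (not the paper's actual implementation, which uses fourth-order recordings and a full feature pipeline), the azimuth of a source can be recovered from first-order channels by time-averaging the products of the omnidirectional channel with the two horizontal figure-of-eight channels:

```python
import numpy as np

def dirac_azimuth(w, x, y):
    """Estimate source azimuth (radians) from first-order B-format channels.

    The time-averaged products <w*x> and <w*y> are proportional to the
    horizontal components of the active acoustic intensity vector, whose
    direction points toward the (negated) energy flow, i.e. the source.
    """
    ix = np.mean(w * x)
    iy = np.mean(w * y)
    return np.arctan2(iy, ix)

# Synthetic plane wave arriving from 60 degrees azimuth,
# encoded to first-order B-format (SN3D-style gains for illustration).
theta = np.deg2rad(60.0)
t = np.linspace(0.0, 1.0, 8000)
s = np.sin(2 * np.pi * 440 * t)   # mono source signal
w = s / np.sqrt(2)                # omnidirectional channel
x = s * np.cos(theta)             # front-back figure-of-eight
y = s * np.sin(theta)             # left-right figure-of-eight

est_deg = np.degrees(dirac_azimuth(w, x, y))  # recovers ~60 degrees
```

In a full DirAC analysis this estimate is computed per time-frequency bin, alongside a diffuseness measure; statistics of those quantities over time are what a spatial classifier would use as features.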
Keywords: soundscape; acoustic environment; acoustic scene; ambisonics; spatial audio; Eigenmike; machine learning; dataset; recordings

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Green, M.C.; Murphy, D. EigenScape: A Database of Spatial Acoustic Scene Recordings. Appl. Sci. 2017, 7, 1204.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.


Appl. Sci. EISSN 2076-3417, published by MDPI AG, Basel, Switzerland