
Open Access Article
Appl. Sci. 2018, 8(8), 1397; https://doi.org/10.3390/app8081397

Deep Learning for Audio Event Detection and Tagging on Low-Resource Datasets

Machine Listening Lab, Centre for Digital Music (C4DM), Queen Mary University of London, London E1 4NS, UK
* Author to whom correspondence should be addressed.
Received: 15 June 2018 / Revised: 11 August 2018 / Accepted: 14 August 2018 / Published: 18 August 2018
(This article belongs to the Special Issue Computational Acoustic Scene Analysis)

Abstract

In training a deep learning system to perform audio transcription, two practical problems may arise. Firstly, most datasets are weakly labelled, having only a list of events present in each recording without any temporal information for training. Secondly, deep neural networks need a very large amount of labelled training data to achieve good performance, yet in practice it is difficult to collect enough samples for most classes of interest. In this paper, we propose factorising the final task of audio transcription into multiple intermediate tasks in order to improve training performance on low-resource datasets of this kind. We evaluate three data-efficient approaches to training a stacked convolutional and recurrent neural network for the intermediate tasks. Our results show that the different training methods have different advantages and disadvantages.
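As the abstract notes, a weakly labelled recording lists only which events occur, with no onset or offset times. For an intermediate tagging task, such a label becomes a multi-hot target vector over the label vocabulary. The sketch below is illustrative only: the class names and the helper function are assumptions for demonstration, not taken from the paper.

```python
import numpy as np

# Hypothetical label vocabulary, for illustration only.
CLASSES = ["dog_bark", "siren", "speech", "car_horn"]

def weak_label_to_multihot(events, classes=CLASSES):
    """Map a weak label (list of event tags, no timing) to a
    multi-hot target vector with one entry per class."""
    target = np.zeros(len(classes), dtype=np.float32)
    for event in events:
        target[classes.index(event)] = 1.0
    return target

# A recording weakly labelled as containing speech and a siren:
print(weak_label_to_multihot(["speech", "siren"]))  # [0. 1. 1. 0.]
```

Targets of this form can be used with a per-class binary loss for tagging, while event detection additionally requires recovering the temporal information that weak labels omit.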
Keywords: deep learning; multi-task learning; audio event detection; audio tagging; weak learning; low-resource data

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Supplementary material

Share & Cite This Article

MDPI and ACS Style

Morfi, V.; Stowell, D. Deep Learning for Audio Event Detection and Tagging on Low-Resource Datasets. Appl. Sci. 2018, 8, 1397.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Appl. Sci. EISSN 2076-3417, published by MDPI AG, Basel, Switzerland.