Special Issue "Sound and Music Computing"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computer Science and Electrical Engineering".

Deadline for manuscript submissions: 31 October 2017

Special Issue Editors

Guest Editor
Prof. Dr. Tapio Lokki

Aalto University, Department of Computer Science, Espoo, Finland
Interests: virtual acoustics; spatial sound; psychoacoustics
Co-Guest Editor
Prof. Dr. Stefania Serafin

Aalborg University Copenhagen, Denmark
Interests: multimodal interfaces; sonic interaction design
Co-Guest Editor
Prof. Dr. Meinard Müller

International Audio Laboratories Erlangen, Friedrich-Alexander Universität Erlangen-Nürnberg, Erlangen, Germany
Interests: music information retrieval; music processing; audio signal processing
Co-Guest Editor
Prof. Dr. Vesa Välimäki

Aalto University, Department of Signal Processing and Acoustics, Espoo, Finland
Interests: audio signal processing; sound synthesis

Special Issue Information

Dear Colleagues,

Sound and music computing is a young and highly multidisciplinary research field. It combines scientific, technological, and artistic methods to produce, model, and understand audio and sonic arts with the help of computers. Sound and music computing borrows methods, for example, from computer science, electrical engineering, mathematics, musicology, and psychology.

In this Special Issue, we want to address recent advances in the following topics:

• Analysis, synthesis, and modification of sound
• Automatic composition, accompaniment, and improvisation
• Computational musicology and mathematical music theory
• Computer-based music analysis
• Computer music languages and software
• High-performance computing for audio
• Interactive performance systems and new interfaces
• Multi-modal perception and emotion
• Music information retrieval
• Music games and educational tools
• Music performance analysis and rendering
• Robotics and music
• Room acoustics modeling and auralization
• Social interaction in sound and music computing
• Sonic interaction design
• Sonification
• Soundscapes and environmental arts
• Spatial sound
• Virtual reality applications and technologies for sound and music

Submissions are invited for both original research and review articles. Additionally, invited papers based on excellent contributions to recent conferences in this field, for example the 2017 Sound and Music Computing Conference (SMC-17), will be included in this Special Issue. We hope that this collection of papers will serve as an inspiration for those interested in sound and music computing.

Prof. Dr. Tapio Lokki,
Prof. Dr. Stefania Serafin,
Prof. Dr. Meinard Müller,
Prof. Dr. Vesa Välimäki
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

• audio signal processing
• computer interfaces
• computer music
• multimedia
• music cognition
• music control and performance
• music information retrieval
• music technology
• sonic interaction design
• virtual reality

Published Papers (3 papers)


Research

Open Access Article: Supporting an Object-Oriented Approach to Unit Generator Development: The Csound Plugin Opcode Framework
Appl. Sci. 2017, 7(10), 970; doi:10.3390/app7100970
Received: 31 July 2017 / Revised: 8 September 2017 / Accepted: 18 September 2017 / Published: 21 September 2017
Abstract
This article presents a new framework for unit generator development in Csound that supports a fully object-oriented programming approach. It introduces the concept of unit generators and opcodes and their central role in music programming languages in general, and in Csound in particular. The layout of an opcode from the perspective of the Csound C-language API is presented, with some outline code examples. This is followed by a discussion that places the unit generator within the object-oriented paradigm and motivates full C++ programming support, which is provided by the Csound Plugin Opcode Framework (CPOF). The design of CPOF is then explored in detail, supported by several opcode examples. The article concludes by discussing two key applications of object orientation and their respective instances in the Csound code base.
(This article belongs to the Special Issue Sound and Music Computing)
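The unit generator concept at the heart of the abstract above can be illustrated with a minimal object-oriented sketch. This is written in Python rather than CPOF's C++, and all class and method names are illustrative, not part of the Csound or CPOF API: the point is only that a unit generator is a stateful object that produces one block of samples per processing call.

```python
import math

class UnitGenerator:
    """Base class: a unit generator produces one block of samples per call."""
    def __init__(self, sample_rate=44100, block_size=64):
        self.sample_rate = sample_rate
        self.block_size = block_size

    def process(self):
        raise NotImplementedError

class SineOsc(UnitGenerator):
    """A sine oscillator as a stateful, opcode-like unit generator object."""
    def __init__(self, freq=440.0, amp=1.0, **kw):
        super().__init__(**kw)
        self.freq, self.amp = freq, amp
        self.phase = 0.0  # state carried across processing calls

    def process(self):
        incr = 2 * math.pi * self.freq / self.sample_rate
        out = []
        for _ in range(self.block_size):
            out.append(self.amp * math.sin(self.phase))
            self.phase = (self.phase + incr) % (2 * math.pi)
        return out

# Tiny parameters so one block spans exactly one cycle of the sine
osc = SineOsc(freq=1.0, amp=0.5, sample_rate=8, block_size=8)
block = osc.process()
```

In CPOF itself, the same separation of initialization and per-block processing is expressed through C++ class templates, with the framework handling registration with the Csound engine.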
Open Access Article: A Two-Stage Approach to Note-Level Transcription of a Specific Piano
Appl. Sci. 2017, 7(9), 901; doi:10.3390/app7090901
Received: 22 July 2017 / Revised: 25 August 2017 / Accepted: 29 August 2017 / Published: 2 September 2017
Abstract
This paper presents a two-stage transcription framework for a specific piano, combining deep learning and spectrogram factorization techniques. In the first stage, two convolutional neural networks (CNNs) are used to recognize the notes of the piano preliminarily; note verification for the specific instrument is conducted in the second stage. The note recognition stage is independent of the individual piano: one CNN detects onsets and another estimates the probabilities of pitches at each detected onset, yielding candidate pitches at candidate onsets. During note verification, templates for the specific piano are generated to model the note attack for each pitch. The spectrogram of the segment around each candidate onset is then factorized using the attack templates of the candidate pitches; in this way, the pitches are picked out by their note activations and the onsets are refined. Experiments show that CNNs outperform other types of neural networks in both onset detection and pitch estimation, and that the combination of two CNNs yields better note recognition performance than a single CNN. Note verification further improves transcription performance. In the transcription of a specific piano, the proposed system achieves a note-wise F-measure of 82%, outperforming the state of the art.
(This article belongs to the Special Issue Sound and Music Computing)
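The first stage described in this abstract, combining an onset detector's output with per-onset pitch probabilities into candidate notes, can be sketched as follows. The CNNs are replaced here by precomputed probability arrays, and the threshold values and function names are assumptions for illustration, not the paper's actual parameters:

```python
def candidate_notes(onset_probs, pitch_probs, onset_thresh=0.5, pitch_thresh=0.4):
    """First-stage combination (conceptual sketch):
    onset_probs: per-frame onset probability from one network
    pitch_probs: per-frame dict {midi_pitch: probability} from another
    Returns (frame, pitch) pairs at detected onsets -- candidates for
    second-stage verification against piano-specific attack templates."""
    notes = []
    for frame, p_on in enumerate(onset_probs):
        # a local peak above the threshold counts as a detected onset
        prev = onset_probs[frame - 1] if frame > 0 else 0.0
        nxt = onset_probs[frame + 1] if frame + 1 < len(onset_probs) else 0.0
        if p_on >= onset_thresh and p_on >= prev and p_on >= nxt:
            for pitch, p_pitch in pitch_probs[frame].items():
                if p_pitch >= pitch_thresh:
                    notes.append((frame, pitch))
    return notes

onsets = [0.1, 0.9, 0.2, 0.1, 0.7, 0.3]
pitches = [{}, {60: 0.8, 64: 0.3}, {}, {}, {67: 0.6}, {}]
print(candidate_notes(onsets, pitches))  # [(1, 60), (4, 67)]
```

In the paper's second stage, each such candidate would then be verified by factorizing the spectrogram around the onset against piano-specific attack templates.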

Open Access Article: A Low Cost Wireless Acoustic Sensor for Ambient Assisted Living Systems
Appl. Sci. 2017, 7(9), 877; doi:10.3390/app7090877
Received: 31 July 2017 / Revised: 24 August 2017 / Accepted: 25 August 2017 / Published: 27 August 2017
Abstract
Ambient Assisted Living (AAL) has become an attractive research topic due to growing interest in the remote monitoring of older people. Developments in sensor technologies and advances in wireless communications make it possible to offer smart assistance remotely and to monitor those people in their own homes, increasing their quality of life. In this context, Wireless Acoustic Sensor Networks (WASN) provide a suitable way to implement AAL systems that can infer hazardous situations from the identification of environmental sounds. Nevertheless, no sensor solution has yet combined low cost with high performance. In this paper, we report the design and implementation of a wireless acoustic sensor, located at the edge of a WASN, for recording and processing environmental sounds in AAL systems for personal healthcare. It offers the following significant advantages: low cost, small size, and audio sampling and computation capabilities for on-board audio processing. The proposed wireless acoustic sensor records audio samples at a sampling frequency of at least 10 kHz with 12-bit resolution. It can also perform audio signal processing without compromising the sample rate or the energy consumption, by using a new microcontroller released in the last quarter of 2016. The proposed low cost wireless acoustic sensor has been verified using four randomness tests for statistical analysis and a classification system for the recorded sounds based on audio fingerprints.
(This article belongs to the Special Issue Sound and Music Computing)
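Audio-fingerprint classification of the kind mentioned in this abstract can be sketched minimally: reduce each spectral frame to its strongest band, then label an input by the reference fingerprint with the fewest per-frame mismatches. This is a generic stand-in for illustration, not the paper's actual fingerprint or matcher; the sound labels and toy spectra below are invented:

```python
def fingerprint(frames):
    """Compact audio fingerprint (conceptual sketch): for each spectral
    frame, keep only the index of the strongest band."""
    return [max(range(len(f)), key=f.__getitem__) for f in frames]

def classify(frames, reference_prints):
    """Label the input by the reference fingerprint with the fewest
    per-frame mismatches."""
    fp = fingerprint(frames)
    def mismatches(label):
        return sum(a != b for a, b in zip(fp, reference_prints[label]))
    return min(reference_prints, key=mismatches)

# Toy spectra: 3 frames x 4 bands; "alarm" concentrates energy in band 3
refs = {
    "alarm":    [3, 3, 3],
    "footstep": [0, 1, 0],
}
sample = [[0.1, 0.2, 0.1, 0.9], [0.0, 0.1, 0.2, 0.8], [0.2, 0.1, 0.0, 0.7]]
print(classify(sample, refs))  # "alarm"
```

A real edge-device implementation would compute the frames from a short-time spectrum and use a more robust fingerprint, but the match-against-references structure is the same.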

Journal Contact

MDPI AG
Applied Sciences Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18