Special Issue "Sound and Music Computing -- Music and Interaction"

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Acoustics and Vibrations".

Deadline for manuscript submissions: 30 September 2019.

Special Issue Editors

Guest Editor
Prof. Dr. Stefania Serafin

Multisensory Experience Lab, Department of Architecture, Design and Media Technology, Aalborg University, 2450 Copenhagen SV, Denmark
Interests: multimodal interfaces; sonic interaction design; virtual and augmented reality
Guest Editor
Prof. Dr. Federico Avanzini

Lab of Music Informatics, Department of Computer Science, University of Milan, Via Celoria 18, 20133 Milano, Italy
Guest Editor
Prof. Dr. Isabel Barbancho

ATIC Research Group, Dept. Ingeniería de Comunicaciones, E.T.S.I. Telecomunicación, Universidad de Málaga, Andalucía Tech, Campus de Teatinos s/n, Málaga 29071, Spain
Interests: musical acoustics; signal processing; multimedia applications; audio content analysis; serious games and new methods for music learning
Guest Editor
Prof. Dr. Lorenzo J. Tardón

ATIC Research Group, Dept. Ingeniería de Comunicaciones, E.T.S.I. Telecomunicación, Universidad de Málaga, Andalucía Tech, Campus de Teatinos s/n, Málaga 29071, Spain
Interests: serious games; digital audio and image processing; pattern analysis and recognition; applications of signal processing techniques and methods

Special Issue Information

Dear colleagues,

Sound and Music Computing is a highly multidisciplinary research field. It combines scientific, technological, and artistic methods to produce, model, and understand audio and sonic arts with the help of computers. Sound and music computing borrows methods from computer science, electrical engineering, mathematics, musicology, psychology, etc.

In this Special Issue, we welcome papers covering a wide selection of topics related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, etc. Accordingly, the following topics will be considered:

  • Acoustics and psychoacoustics;
  • AI and music performance;
  • Analysis/synthesis of the singing voice;
  • Applications in audio and music;
  • Architectural acoustics modeling and auralization;
  • Assistive technologies;
  • Audio and music for AR/VR;
  • Audio and music for games;
  • Audio interactions;
  • Audio recognition and birdsong;
  • Auditory display;
  • Automatic music generation/accompaniment systems;
  • Bioacoustic modeling;
  • Biomusic and sound installations;
  • Computational archeomusicology;
  • Computational musicology;
  • Computational ethnomusicology;
  • Computational ornithomusicology;
  • Computer-aided real-time composition;
  • Computer music software and programming languages;
  • Data sonification;
  • Digital signal processing;
  • Digital systems of tuning;
  • Ethics of sound and new technologies;
  • Gesture, motion, and music;
  • History and aesthetics of electroacoustic music;
  • Immersive audio/soundscape environments;
  • Interaction and improvisation;
  • Interactive environments for voice training;
  • Interactive performance systems;
  • Jazz performance and machine learning;
  • Mathematical music theory;
  • Music and robotics;
  • Music games and music for games;
  • Music information retrieval;
  • Music technology in education;
  • Music therapy and technology for special needs;
  • New interfaces for musical expression;
  • New musical instruments;
  • Perception and cognition of sound and music;
  • Recording and mastering automation techniques;
  • Sonification;
  • Sound/music and the neurosciences;
  • Spatial sound and spatialization techniques;
  • Physical models for sound synthesis;
  • VR applications and technologies for sound and music.

Submissions are invited for both original research and review articles. Additionally, invited papers based on excellent contributions to the 2019 Sound and Music Computing Conference SMC-19 will be included. We hope that this collection of papers will serve as an inspiration for those interested in sound and music computing.

Prof. Dr. Stefania Serafin
Prof. Dr. Federico Avanzini
Prof. Dr. Isabel Barbancho
Prof. Dr. Lorenzo J. Tardón
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1500 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Audio signal processing
  • Computer music
  • Multimedia
  • Music cognition
  • Music information retrieval
  • Music technology
  • Sonic interaction design
  • Virtual reality
  • Interaction with music
  • Serious games for music

Published Papers (3 papers)


Research


Open Access Article
State-of-the-Art Model for Music Object Recognition with Deep Learning
Appl. Sci. 2019, 9(13), 2645; https://doi.org/10.3390/app9132645
Received: 23 May 2019 / Revised: 11 June 2019 / Accepted: 26 June 2019 / Published: 29 June 2019
Abstract
Optical music recognition (OMR) is an area of music information retrieval, and music object detection is a key part of the OMR pipeline. Notes record pitch and duration and carry semantic information; note recognition is therefore the core of music score recognition. This paper proposes an end-to-end detection model based on a deep convolutional neural network and feature fusion. The model processes the entire image directly and outputs the symbol categories together with the pitch and duration of notes. The resulting recognition model for general music symbols achieves 0.92 duration accuracy and 0.96 pitch accuracy.
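As a hedged illustration of the symbol-level accuracies reported above (0.92 for duration, 0.96 for pitch), the sketch below computes per-attribute accuracy over aligned predicted and ground-truth note lists. `note_accuracy` is a hypothetical helper for exposition, not code from the paper.

```python
def note_accuracy(predicted, ground_truth):
    """Fraction of aligned notes whose predicted attribute (e.g. MIDI
    pitch or duration class) matches the ground-truth annotation."""
    if not ground_truth:
        raise ValueError("empty ground truth")
    matches = sum(p == t for p, t in zip(predicted, ground_truth))
    return matches / len(ground_truth)

# Example: MIDI pitch labels for four detected notes; three of four match.
pitch_acc = note_accuracy([60, 62, 64, 67], [60, 62, 65, 67])  # → 0.75
```

In practice the predicted and ground-truth note lists must first be aligned to the score; the simple element-wise comparison above assumes a one-to-one correspondence.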
(This article belongs to the Special Issue Sound and Music Computing -- Music and Interaction)

Open Access Article
Adaptive Refinements of Pitch Tracking and HNR Estimation within a Vocoder for Statistical Parametric Speech Synthesis
Appl. Sci. 2019, 9(12), 2460; https://doi.org/10.3390/app9122460
Received: 8 May 2019 / Revised: 12 June 2019 / Accepted: 13 June 2019 / Published: 16 June 2019
Abstract
Recent studies in text-to-speech synthesis have shown the benefit of using a continuous pitch estimate: one that interpolates the fundamental frequency (F0) even when voicing is not present. However, continuous F0 is still sensitive to additive noise in speech signals and suffers from short-term errors (when it changes rather quickly over time). To alleviate these issues, three adaptive techniques are developed in this article for achieving a robust and accurate F0: (1) we weight the pitch estimates with the state noise covariance in an adaptive Kalman-filter framework; (2) we iteratively apply a time-axis warping to the input frame signal; and (3) we optimize all F0 candidates using an instantaneous-frequency-based approach. Additionally, the second goal of this study is to introduce an extension of a novel continuous-based speech synthesis system (i.e., one in which all parameters are continuous). We propose adding a new excitation parameter, the Harmonic-to-Noise Ratio (HNR), to the voiced and unvoiced components to indicate the degree of voicing in the excitation and to reduce the buzziness introduced by the vocoder. Results based on objective and perceptual tests demonstrate that the voice built with the proposed framework gives state-of-the-art speech synthesis performance while outperforming the previous baseline.
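To make the two central ideas of the abstract concrete, the sketch below shows (a) a continuous F0 contour obtained by interpolating over unvoiced frames, and (b) a rough frame-level HNR estimate from the normalized autocorrelation at the pitch lag. This is a minimal illustration under stated assumptions, not the authors' vocoder: `continuous_f0` and `hnr_db` are hypothetical helper names, and autocorrelation-based HNR is only a simple stand-in for the excitation parameter the paper proposes.

```python
import numpy as np

def continuous_f0(f0, voiced):
    """Interpolate F0 linearly across unvoiced regions to obtain a
    continuous pitch contour; edge regions are held constant."""
    f0 = np.asarray(f0, dtype=float)
    voiced = np.asarray(voiced, dtype=bool)
    if not voiced.any():
        return np.zeros_like(f0)
    idx = np.arange(len(f0))
    return np.interp(idx, idx[voiced], f0[voiced])

def hnr_db(frame, f0_hz, fs):
    """Rough harmonics-to-noise ratio (dB) of one frame, from the
    normalized autocorrelation at the lag of one pitch period."""
    x = np.asarray(frame, dtype=float)
    x = x - x.mean()
    lag = int(round(fs / f0_hz))
    r0 = np.dot(x, x)
    r = np.dot(x[:-lag], x[lag:]) / r0
    r = min(max(r, 1e-6), 1 - 1e-6)  # clamp to (0, 1)
    return 10.0 * np.log10(r / (1.0 - r))
```

A periodic frame yields a high autocorrelation at the pitch lag and hence a high HNR; adding noise lowers the correlation and the estimated HNR, which is the degree-of-voicing behavior the excitation parameter is meant to capture.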
(This article belongs to the Special Issue Sound and Music Computing -- Music and Interaction)

Other


Open Access Meeting Report
16th Sound and Music Computing Conference SMC 2019 (28–31 May 2019, Malaga, Spain)
Appl. Sci. 2019, 9(12), 2492; https://doi.org/10.3390/app9122492
Received: 5 June 2019 / Accepted: 9 June 2019 / Published: 19 June 2019
Abstract
The 16th Sound and Music Computing Conference (SMC 2019) took place in Malaga, Spain, 28–31 May 2019, and was organized by the Application of Information and Communication Technologies (ATIC) research group of the University of Malaga (UMA). The associated SMC 2019 Summer School took place 25–28 May 2019, and the First International Day of Women in Inclusive Engineering, Sound and Music Computing Research (WiSMC 2019) took place on 28 May 2019. The SMC 2019 topics of interest included a wide selection of topics related to acoustics, psychoacoustics, music, technology for music, audio analysis, musicology, sonification, music games, machine learning, serious games, immersive audio, sound synthesis, etc.
(This article belongs to the Special Issue Sound and Music Computing -- Music and Interaction)
Appl. Sci. (EISSN 2076-3417) is published by MDPI AG, Basel, Switzerland.