Open Access Article

Melody Extraction and Encoding Method for Generating Healthcare Music Automatically

Department of Multimedia Engineering, Dongguk University-Seoul, Seoul 04620, Korea
* Author to whom correspondence should be addressed.
Electronics 2019, 8(11), 1250; https://doi.org/10.3390/electronics8111250
Received: 28 August 2019 / Revised: 25 October 2019 / Accepted: 29 October 2019 / Published: 31 October 2019
(This article belongs to the Special Issue Electronic Solutions for Artificial Intelligence Healthcare)
The strong relationship between music and health has helped show that soft, peaceful classical music can significantly reduce people’s stress; however, it is difficult to identify and collect enough such music to build a library. A system is therefore required that can automatically generate similar classical music from a small amount of input music. Melody is the main element that reflects the rhythm and emotion of a musical work; consequently, most research on automatic music generation is based on melody. Because melody varies frequently within musical bars, bars are used as the basic units of composition. Bar-based automatic music generation therefore requires both melody extraction techniques and bar-based encoding methods. This paper proposes a method that handles melody track extraction and bar encoding. First, the melody track is extracted using a pitch-based term frequency–inverse document frequency (TF-IDF) algorithm and a feature-based filter. Then, four specific features of the notes within each bar are encoded into a fixed-size matrix. We conducted experiments on verification data to evaluate the track extraction performed with the TF-IDF algorithm and the filter, judging accuracy by whether the extracted track was a melody track; the method achieved an accuracy of 94.7%, demonstrating that it can accurately extract melody tracks. In summary, this paper presents methods for automatically extracting melody tracks from MIDI files and encoding them on a per-bar basis, which facilitates music generation with deep neural networks. To help such networks generate higher-quality music that benefits human health, the data preprocessing methods presented here should be improved in future work.
Keywords: deep learning; encoding; feature engineering; melody; music generation; healthcare; term frequency–inverse document frequency
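
As an illustration of the pitch-based TF-IDF extraction step described in the abstract, the following minimal Python sketch treats each MIDI track as a "document" whose "terms" are its pitch numbers and picks the highest-scoring track as the melody candidate. This is one plausible reading of the approach rather than the authors' exact formulation: the smoothed IDF weighting, the function names (tfidf_track_scores, pick_melody_track), the use of plain pitch-number lists instead of a parsed MIDI file, and the toy data are all assumptions made for illustration.

import math
from collections import Counter

def tfidf_track_scores(tracks):
    # Hypothetical sketch: score each track by the TF-IDF weight of its pitches.
    # Each track is a "document"; each MIDI pitch number (0-127) is a "term".
    n_tracks = len(tracks)
    df = Counter()                       # document frequency of each pitch
    for track in tracks:
        for pitch in set(track):
            df[pitch] += 1
    scores = []
    for track in tracks:
        tf = Counter(track)
        total = len(track) or 1          # avoid division by zero on empty tracks
        score = 0.0
        for pitch, count in tf.items():
            # Smoothed IDF; pitches shared by many tracks (e.g., chord tones)
            # are down-weighted.
            idf = math.log((1 + n_tracks) / (1 + df[pitch])) + 1.0
            score += (count / total) * idf
        scores.append(score)
    return scores

def pick_melody_track(tracks):
    # Return the index of the track whose pitch content is most distinctive.
    scores = tfidf_track_scores(tracks)
    return max(range(len(scores)), key=scores.__getitem__)

if __name__ == "__main__":
    # Toy pitch sequences: a melodic line, a chord track, and a pad that
    # shares the chord tones.
    tracks = [
        [72, 74, 76, 77, 79, 77, 76, 74],   # candidate melody
        [60, 64, 67, 60, 64, 67, 60, 64],   # block chords
        [60, 64, 67, 48, 60, 64, 67, 48],   # pad doubling chord tones
    ]
    print("melody track index:", pick_melody_track(tracks))   # prints 0

In this toy input, the chord and pad tracks share pitches and therefore receive lower IDF weight, so the line with the most distinctive pitch content is selected; the feature-based filter applied in the paper after this scoring step is omitted here.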
MDPI and ACS Style

Li, S.; Jang, S.; Sung, Y. Melody Extraction and Encoding Method for Generating Healthcare Music Automatically. Electronics 2019, 8, 1250.
