Article

Enhancement of Conventional Beat Tracking System Using Teager–Kaiser Energy Operator †

Matej Istvanek, Zdenek Smekal, Lubomir Spurny and Jiri Mekyska
1 Department of Telecommunications, Faculty of Electrical Engineering and Communication, Brno University of Technology, Technicka 12, 61600 Brno, Czech Republic
2 Department of Musicology, Faculty of Arts, Masaryk University, Janackovo namesti 2a, 60200 Brno, Czech Republic
* Author to whom correspondence should be addressed.
† This paper is an extended version of our paper published in the 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary, 1–3 July 2019.
Appl. Sci. 2020, 10(1), 379; https://doi.org/10.3390/app10010379
Submission received: 30 October 2019 / Revised: 15 December 2019 / Accepted: 1 January 2020 / Published: 4 January 2020

Abstract
Beat detection systems are widely used in the music information retrieval (MIR) research field for the computation of tempo and beat time positions in audio signals. One of the most important parts of these systems is usually onset detection. There is an understandable tendency to employ the most accurate onset detector. However, there are options to increase the accuracy of global tempo (GT) estimation and of beat position detection at the expense of less accurate onset detection. The aim of this study is to introduce an enhancement of a conventional beat detector. The enhancement is based on the Teager–Kaiser energy operator (TKEO), which pre-processes the input audio signal before the spectral flux calculation. The proposed approach is first evaluated in terms of its ability to estimate the GT and beat positions of given audio tracks, compared with the same conventional system without the proposed enhancement. The accuracy of the GT and average beat deviation (ABD) estimation is tested on a manually labelled reference database. Finally, the system is used for the analysis of a string quartet music database. The results suggest that the TKEO lowers onset detection accuracy but improves the GT and ABD estimation. The average deviation from the reference GT in the reference database is 9.99 BPM (11.28%), which improves on the conventional methodology, whose average deviation is 18.19 BPM (17.74%). This study has a pilot character and provides some suggestions for improving the beat tracking system for music analysis.


1. Introduction

Onset time in audio signal analysis represents the time position of a relevant sound event, usually the moment a music tone is created. Onset detection functions are algorithms that capture onsets (onset time positions), and thus ideally all tones in an audio recording; they describe how the onset structure evolves over the duration of a particular recording. Offsets of tones (the end time positions of tones in a signal) can also be detected, e.g., see [1,2], but beat tracking systems do not need such information to work properly. A conventional beat tracking system is usually based on the calculation of the repetitiveness of the dominant components in an onset function (onset curve), and its output represents a temporal framework, i.e., the time instances where a person would tap when listening to the corresponding piece of music. That is why it is important to have a robust and computationally efficient onset detector. Calculation of the beat positions and the global tempo (GT) is important for musicologists and for complex music analysis. With such automated systems, tempo and agogic changes can be measured much faster than with a manual approach alone, so musicologists have to spend less time correcting the calculated beat positions. Therefore, we define a new parameter, the average beat deviation (ABD): the average deviation of the calculated beat positions from the reference beat positions.
Most onset detectors are based on energy changes in spectra, i.e., the calculation of spectral flux. For bowed string instruments, there is a method called SuperFlux that can suppress vibrato in an expressive performance and reduce the number of false-positive detections [3]. Some methods enhance spectral flux onset detection using logarithmic spectral compression and then compute the cyclic tempogram for tempo analysis [4]. There is also a method that calculates tempograms using the Predominant Local Pulse [5]. Besides, onset detection and beat detection can be performed with several toolboxes and libraries such as the Tempogram Toolbox [6], LibROSA [7], the MIR Toolbox [8], etc. [9]. State-of-the-art onset detectors are usually based on deep neural networks [10,11] that use spectral components and parameters as their inputs. Beat detection systems benefit from solid onset detectors, on top of which periodicity is identified [6,8,12,13,14]. As in other MIR fields, neural networks are also used.
While onset detection in percussive music is considered to be highly accurate (already at the MIREX 2012 evaluation [15], algorithms achieved F-measure values greater than 0.95 for percussive sounds), detection of soft onsets produced by bowed string or woodwind instruments is still challenging. Although many improvements in onset detection have been made, no system is truly universal for all musical instruments and all types of music.
This work aims to enhance the conventional beat tracking system and to improve the tempo analysis methodology published in [16,17] using a more sophisticated approach to the creation of the tempo structure, based on an automated beat tracking system with the Teager–Kaiser energy operator (TKEO) included. This nonlinear energy operator is used, e.g., for the improvement of onset detection in EMG (electromyography) signals [18], to decompose audio into amplitude and frequency modulation components [19], for the detection of Voice Onset Time [20], or as a highly efficient technique for line-of-sight (LOS) estimation in WCDMA mobile positioning [21]. So far, there is no extensive study on the use of the TKEO for the analysis of musical instruments.
Since we focus on the detection of onsets of melody instruments with low-energy attacks, we concentrate on onset and beat detection methods based on spectral changes. We did not choose probabilistic models because they are usually susceptible to noisy recordings, which can be a problem in the case of old recordings.
The rest of the paper is organised as follows: Section 2 describes the onset detection function, the Teager–Kaiser energy operator, the proposed enhancement of the conventional beat tracking system and the beat detection method. It shows how the TKEO changes the spectra and therefore the resulting onset detection. It then introduces the reference database and the string quartet database used for the GT and ABD estimation. Furthermore, a possible application is shown and the system evaluation is defined. Results are reported in Section 3 and discussed in Section 4. Finally, conclusions are given in Section 5.

2. Dataset and Methods

2.1. Onset Detection

Usually, onset detection algorithms use some pre-processing steps to reduce redundant information and to improve detection accuracy. In this study, we propose a new method of pre-processing based on the TKEO. The TKEO, $\Psi\{s(t)\}$, is a nonlinear energy operator that can be calculated using the following formula:

$$\Psi\{s(t)\} = \left(\frac{\mathrm{d}s(t)}{\mathrm{d}t}\right)^{2} - s(t)\cdot\frac{\mathrm{d}^{2}s(t)}{\mathrm{d}t^{2}},$$
i.e., we compute the square of the first derivative (which denotes the square of the rate of signal change) and then subtract the signal multiplied by its second derivative (which determines the acceleration at that point). Because the operator works with the rate of change, slow variations of the signal are suppressed and fast temporal changes are emphasised. It is known that the faster the temporal changes, the higher the frequency components that appear in the spectrum. By taking the first derivative into account, we therefore increase the magnitude of the higher frequencies of the spectrum [22].
In our discrete approach, we first downsample the input signal $x[n]$ to 22,050 Hz. Next, we apply the TKEO, i.e., we calculate the corresponding discrete non-causal form:

$$\Psi[x[n]] = x^{2}[n] - x[n-1]\, x[n+1],$$
which creates an energy profile of the given audio sample. In comparison to the conventional squared energy operator, the TKEO also takes the signal's frequency into account [23] and it can take negative values; see, e.g., Figure 1. Differences in the spectra of the same audio track (a clarinet recording) are shown in Figure 2. It is interesting how the dominant spectral components have changed: the clarinet naturally has strong odd harmonics, but the TKEO has changed their magnitude.
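To make this pre-processing step concrete, the following minimal sketch shows one possible NumPy implementation of the discrete non-causal TKEO described above; it is an illustration under our own assumptions (the function name, the boundary handling and the example file name are placeholders), not the exact code used in this study.

```python
import numpy as np
import librosa


def tkeo(x: np.ndarray) -> np.ndarray:
    """Discrete non-causal Teager-Kaiser energy operator:
    psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    psi = np.empty_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    # The first and last samples have no left/right neighbour;
    # copy the nearest computed values (an arbitrary boundary choice).
    psi[0], psi[-1] = psi[1], psi[-2]
    return psi


# Load the input signal downsampled to 22,050 Hz and apply the TKEO.
x, sr = librosa.load("clarinet_solo.wav", sr=22050)  # hypothetical file name
energy_profile = tkeo(x)
```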
In the following step, we calculate the onset envelope using a perceptual model. We use the Short-Time Fourier Transform (STFT) with a Hann window (hop size: 512 samples) and then convert the result to a perceptual model with a log-power mel-frequency representation: 120 mel bands, a maximum frequency of 10 kHz and a minimum frequency of 27.5 Hz. We obtain the matrix $|X[m,k]|$, where $m$ denotes the index of the frame and $k$ the frequency bin or the index of the mel band. These settings were inspired by the SuperFlux calculation [3].
In the next step, we calculate the spectral flux. The basic version of spectral flux is defined as the $\ell_{1}$-norm of the difference of consecutive frames [24]:

$$\mathrm{SF}[m] = \frac{1}{K}\sum_{k=0}^{K-1} H\big(|X[m+1,k]| - |X[m,k]|\big),$$

for $m = 0, 1, 2, \ldots, M-2$, where $H(x) = (x + |x|)/2$ is the half-wave rectifier, $M$ is the number of frames, and $K$ is half the number of STFT frequency bins, or the number of mel bands. The half-wave rectifier sets negative values to zero, and the positive differences are summed across all frequency bands. Spectral flux thus tells us how the energy in the spectra changes over time. Finally, a peak-picking function is applied (default LibROSA settings) to identify the time positions of onsets and therefore of new tones in the audio signal.
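The whole onset-envelope pipeline described above can be sketched as follows. This is a simplified reconstruction using the stated parameters (Hann window, hop of 512 samples, 120 mel bands, 27.5 Hz–10 kHz); the FFT length of 2048 samples is our assumption, and `energy_profile` and `sr` come from the TKEO sketch above.

```python
import numpy as np
import librosa

HOP = 512

# Log-power mel-frequency representation (perceptual model).
S = librosa.feature.melspectrogram(
    y=energy_profile, sr=sr, n_fft=2048, hop_length=HOP, window="hann",
    n_mels=120, fmin=27.5, fmax=10000.0, power=2.0)
X = librosa.power_to_db(S)  # |X[m, k]| in dB, shape (n_mels, n_frames)

# Spectral flux: half-wave rectified frame-to-frame difference,
# averaged over the K mel bands (the spectral flux formula above).
diff = np.diff(X, axis=1)                # X[m+1, k] - X[m, k]
sf = np.maximum(diff, 0.0).mean(axis=0)

# Peak picking on the onset curve (default LibROSA settings).
onset_frames = librosa.onset.onset_detect(
    onset_envelope=sf, sr=sr, hop_length=HOP)
onset_times = librosa.frames_to_time(onset_frames, sr=sr, hop_length=HOP)
```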
An example of this system based on the mel-frequency representation, but without the TKEO, is shown in Figure 3. It represents a solo clarinet part. The onset function detected many false peaks and marked positions where no tones were played. For comparison, Figure 4 shows the same signal, but in this case pre-processed by the TKEO. The peak-picking function now marks all real onsets with better accuracy and without any false-positive detections. The colorbar in dB (Figure 5) is presented separately to allow proper alignment of the spectrogram and the onset function, but it is the same for all spectrograms (produced by the matplotlib package) in this paper.
As we can see in the second spectrogram (Figure 4), the energy in the spectra has changed, the frequencies no longer correspond exactly to the original signal, and new tones are sharpened and much clearer. We give this example for a good reason: the recording of a solo clarinet was the only audio track in which the accuracy of the onset detection function was improved. Adding the TKEO to this conventional detection method lowered the general detection accuracy; it decreased the number of detected false positives but also decreased the number of true positives. The cause of this phenomenon is explained in the following Section 2.2. We suggest that the general effect of the TKEO on the onset detection function for woodwind instruments should be tested in more detail.

2.2. TKEO Influence

We applied the proposed method with the TKEO included to more recordings and observed that in cases where the tones are fast (e.g., a violin playing thirty-second notes), or where the energy difference is very low, the method does not detect every onset properly. Adding the TKEO increased the detection tolerance to fast changes in the signal. This means that the operator added additional "latency" to the signal values. It also decreased the ability of the system to capture low-energy spectral components. In general, fewer onsets were detected; only strong and rhythmically more important onsets remained. This is the advantage of the TKEO in the system: it suppresses less dominant spectral components and very fast tones, even though onset detectors are usually tuned to do the opposite.
Figure 6 and Figure 7 show another analysed track, a violin solo in a very fast tempo. There is a clear difference between the spectrograms of the described detector and the same detector with the TKEO included. Most of the tones are quite visible in the spectrogram of the first figure. In the system with the TKEO, however, the changes in the spectrum are vaguer and blurrier, which means that the onset function detected fewer onsets (especially between the 1st and the 4th second of this track). In this case, the conventional system detected more onsets correctly, but that still does not indicate that its estimation of the GT would also be more accurate.

2.3. Tempo Representation

To create a tempo structure of given recordings, we need a representation of tempo, i.e., of how the density of onsets, or more precisely the repetitiveness of significant onsets, is distributed. This can be done by several techniques; in this study, we focused on the dynamic beat tracking system proposed in [12]. This system estimates beat positions in an onset envelope and uses them to pick the right peaks within a given interval (around the default tempo). The default tempo is set before the calculation (or it is estimated automatically from the autocorrelation function with respect to the standard 120 BPM) and therefore has to be estimated by listening to the particular audio track, or from the sheet music, for the system to work as intended. The calculated peak positions can deviate from the default tempo within adjustable boundaries (depending on the settings; e.g., Ellis reports approximately 10% [12]). The parameter "tightness", which corresponds to the detection tolerance around the default tempo, was set to 50 in all cases. At first, this looks like an inappropriate method for the varying tempo of string quartet music (the second database), but with good parameterization and segmentation of particular motifs, it fits our needs.
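A hedged sketch of this beat tracking step, using the dynamic-programming tracker available in LibROSA (an implementation of the method of [12]), is shown below; `sf`, `sr` and `HOP` come from the previous sketches, and the start tempo of 80 BPM is only an example of a default tempo derived from listening or from the EAT.

```python
import librosa

# Dynamic-programming beat tracker (Ellis [12]): the default tempo steers
# the periodicity search and "tightness" controls the allowed deviation.
tempo, beat_times = librosa.beat.beat_track(
    onset_envelope=sf, sr=sr, hop_length=HOP,
    start_bpm=80.0,    # default tempo, e.g., estimated from the sheet music
    tightness=50,      # detection tolerance used in all our experiments
    units="time")
print(f"Estimated tempo: {float(tempo):.2f} BPM")
```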
Beat detectors are based on the calculation of beats in an audio signal and therefore capture the metric structure only from an elementary point of view. Usually, there is not enough information to divide beats into bars without manual correction, but with proper segmentation, a MIDI reference and dynamic time warping (DTW) techniques, this is possible [25]. However, one does not need such a method to calculate the GT of a given track. In this case, we focused only on the GT and the ABD. Figure 8 shows how this system picks onset candidates from the onset curve and creates the beat positions by using periodicity information.
Figure 9 shows the estimated time positions of beats at the beginning of a string quartet segment. As we can see, the system uses periodicity information to calculate beat positions even at places where no onsets are detected: in this specific part, the second violin and the viola play very quietly (and no onset is detected) and then a violin solo begins. Between the 6th and the 10th second of this track, there are strong onsets in the calculated onset curve. Their periodicity information is then used to fill the gap in the silent part of the recording, which is one of the advantages of the dynamic programming search system.
The disadvantage of such a beat tracking system is the adjustable default tempo: the algorithm searches for beat positions within a given interval, but there is no guarantee that the true beat positions lie within the specified limits (also with respect to the tolerance parameter). The reference global tempo can be misleading if the recording is rhythmically unstable or the tempo changes significantly over time. A similar problem exists with the metric pulse: if the system detects 100 BPM as the GT and the reference is 50 BPM, it does not mean that the system is completely wrong. That is why we also calculated the ABD.

2.4. Dataset

First of all, we tested whether the TKEO improves the estimation of the GT in general. The GT is derived from the median of the differences between consecutive beat time positions throughout the whole analysed track. For this purpose, we used the SMC_MIREX database [26], which consists of diverse recordings, from classical pieces to guitar solos. The recordings are sampled at 44.1 kHz. Their annotations contain manually corrected beat time positions, which are used as a reference.
Music played by string quartets is very specific: the tempo can be more or less stable, but musical ornaments, intended gaps, fermatas, or other expressive musical attributes can be present, and every musician has her/his own style of agogic performance. If we define meaningful musical parts by choosing important musical motifs, we can create segments that can be processed separately.
The second dataset consists of 33 different interpretations of String Quartet No. 1 in E minor, "From My Life", composed by the Czech composer Bedřich Smetana. We also included two interpretations played by an orchestra. We divided the first movement into six segments of musical motifs according to their musical meaning. The first movement consists of an introduction (Beg), exposition (A), coda (B), development (C), recapitulation (D) and the final coda (E). For every segment, we calculated the estimated average tempo (EAT), without any expressive elements or information about beat positions, using the physical length of the tracks and the rhythmic patterns in the sheet music (illustrated by the sketch below). The EAT is used as a reference tempo for setting the default-tempo parameter in the beat tracking system. The first page of the sheet music is provided as an example in Appendix A.
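As a small illustration of the EAT computation, the sketch below divides the number of quarter notes of a motif (taken from Table 2) by the physical length of the corresponding segment; the segment duration used here is hypothetical.

```python
# Quarter notes per motif of the first movement (see Table 2).
QUARTER_NOTES = {"Beg": 280, "A": 160, "B": 32, "C": 184, "D": 244, "E": 148}


def estimated_average_tempo(motif: str, duration_s: float) -> float:
    """EAT in BPM: quarter notes of the motif divided by its length in minutes."""
    return 60.0 * QUARTER_NOTES[motif] / duration_s


# Hypothetical example: an introduction (Beg) segment lasting 208 seconds.
print(round(estimated_average_tempo("Beg", 208.0), 2))  # ~80.77 BPM
```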

2.5. Application

Beat tracking systems are used in music analysis software for complex tempo, timbre, dynamics or other music analyses. An example of such freeware is Sonic Visualiser [27]. Figure 10 shows an example of a tempo analysis of string quartet music from the second tested database. The first pane is the visualisation of the audio waveform, the second one is the spectrogram and the last one is a layer of manually corrected beat positions. The beat positions were calculated automatically by the beat tracking system called BeatRoot [28] (a Vamp plugin) and then corrected by trained ears. The green line shows how the tempo evolves over time, i.e., whether the audio track is locally slowing down or speeding up. The method proposed in this paper has not been developed as a Vamp plugin for Sonic Visualiser.
Musicologists can then draw conclusions from the measurement results. An automated beat tracking system is able to reduce the time of analysis significantly. For example, if we measure the EAT of the first motif of the second database for each recording, we get interesting results. One common assumption is that nowadays we usually play the same piece of classical music faster than in the past. Figure 11 shows that this assumption may not be correct: there is a trend (see the slope of the linear regression line based on the sum of squares) that older recordings are, on average, at a faster pace. We do not have enough audio recordings to declare this a fact, but the tendency is there. However, when we plot the EAT of the entire first movement (Figure 12), the tempo decrease is not so evident. Each black dot represents one interpretation and the blue line is a trend line. The sample from the year 1928 was an outlier and was therefore not considered in the regression analysis.

2.6. System Evaluation

During the analysis, we first used the reference dataset to determine the accuracy of the GT and ABD estimation. We computed the GT of each track by the proposed beat tracking method using both the proposed onset detection function (the default system, DS) and the same onset detection function with the TKEO (the system with the TKEO, TS). Then we compared the reference values (the annotations of the dataset) of each tested track with the values estimated by the DS and the TS. The reference tempo was obtained as 60 (from the BPM definition) divided by the median of the time differences between consecutive beat time positions. Then we calculated the median (Me) and the mean value (x̄) of the time differences of consecutive beats for all recordings, and also for those recordings in which the average was less than 1 s. The latter represents the ABD of tracks that were close to the reference tempo (some recordings deviated by more than ~20 BPM in the GT when tested; they were excluded from the extended ABD testing).
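A minimal sketch of these two evaluation quantities is given below; `beat_times` holds the detected beat positions (in seconds) of one track, `ref_times` holds the manually annotated reference beats, and matching each detected beat to the nearest reference beat is our own simplification of the procedure described in the text.

```python
import numpy as np


def global_tempo(beat_times: np.ndarray) -> float:
    """GT in BPM: 60 divided by the median inter-beat interval."""
    return 60.0 / float(np.median(np.diff(beat_times)))


def average_beat_deviation(beat_times: np.ndarray, ref_times: np.ndarray) -> float:
    """ABD: mean absolute deviation of detected beats from the nearest
    reference beat positions (a simplified reading of the ABD)."""
    deviations = [np.min(np.abs(ref_times - t)) for t in beat_times]
    return float(np.mean(deviations))


gt = global_tempo(beat_times)
abd = average_beat_deviation(beat_times, ref_times)
```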
Next, we analysed the string quartet database. First, all 33 recordings were divided into six segments with a relatively steady tempo, and then all motifs were tested by the TS and the DS to estimate the GT. We computed the reference EAT of all segments of each interpretation (Table 1) by calculating the number of quarter notes (Table 2) and dividing it by the time length of each recording. The complete table is given in Table A1. Finally, the EAT and the computed GT were compared. The systems were implemented in Python (especially the NumPy and LibROSA packages).

3. Results

Table 3 presents the results of the GT detection based on the first database for the 30 analysed tracks; the complete table is given in Table A2. The average deviation from the reference tempo was 9.99 BPM (11.28%) in the case of the TS and 18.19 BPM (17.74%) in the case of the DS. The least accurate estimation occurred for the recordings of a solo acoustic guitar. We also applied the t-test (paired two-sample for means) to each system (compared to the reference); the p-value is 0.038 for the TS and 0.024 for the DS (α = 0.05). Next, Table 4 presents the general results of the GT testing: the median, mean, standard deviation, relative standard deviation and variance for each tested system and the deviations from the reference tempo. The mean value of the reference GT was 76.78 BPM, while the average computed GT was 83.75 BPM for the TS and 88.97 BPM for the DS.
Table 5 shows the mean value and the median of the ABD testing for all analysed tracks. The average difference between consecutive beat time positions of the reference and the TS was 2.30 s, and 2.84 s for the DS. Table 6 shows the average of the arithmetic mean and of the median of the time-difference values for the recordings in which the ABD was less than 1 s. This applies to 11 recordings for the TS (37% of the tested database) and 9 recordings for the DS (30%); the TS detected the right metric pulse in more recordings than the DS. The average deviations from the reference beat positions were 0.39 s and 0.29 s for the TS, and 0.95 s and 0.36 s for the DS, respectively.
Table 2 presents the extent of each motif of the first movement of the second database (in bars) and the corresponding number of quarter notes, from which the EAT was calculated. Table 1 contains the results based on the EAT of all motifs of our second database, i.e., 33 different interpretations of String Quartet No. 1 in E minor, "From My Life". Finally, Table 7 shows the difference between the estimated GT and the EAT for both proposed systems; the complete table is in Table A3. The average deviation is 6.42 BPM for the TS and 6.59 BPM for the DS. Due to the nature of the results of the second dataset, no further statistical processing of the values was used.
Figure 13 shows the differences between the reference GT and the GT calculated by the TS and the DS for the first database. The TS generally follows the reference tempo more accurately, mainly because it more often determined the correct metric pulse. The DS shows greater local deviations of the GT on the tested tracks.

4. Discussion

Generally, the newly proposed method provided some improvements on the reference database. We analysed 30 tracks and the results are reported in Table 4. The results suggest that the TKEO can help the proposed beat tracking system to pick better onset candidates for the beat positions and to slightly improve the GT calculation. The difference was about 8 BPM on average for all tested recordings of the first database. However, many recordings yielded the same estimated GT for both methods. Then, the ABD was calculated. We used the reference database with manually corrected beat positions to determine the accuracy of both systems. We did not use F-measures, but rather the average differences between consecutive beats; this gives us an idea of how close the beat tracking was to the reference positions. The p-values show that there is a difference between the two systems. The system with the TKEO generally reported a lower ABD for all settings used. The results suggest that the TKEO pre-processing improved the accuracy of the beat tracking system. This does not apply to the general onset detection function: onset detection accuracy was reduced in most cases, the only exception being the recording of the clarinet.
As far as the string quartet database is concerned, the results were again slightly in favour of the system based on TKEO. All 33 recordings of the second database were tested. The difference between the average deviation from the EAT of the TS and the DS was only 0.17 BPM, and therefore both systems had more or less the same detection accuracy. We chose such complex music to see how the enhancement would deal with a very difficult task. The actual usefulness of the application also depends on the settings of selected parameters, not just on the TKEO pre-processing.
The idea of using the TKEO in the pre-processing stage was to help the onset detection function to find more relevant onsets and therefore to enhance the beat tracking system in terms of choosing better candidates for beat positions. It reduced the number of insignificant onsets detected. Onset detection accuracy was usually reduced, but the final beat detection output may be more stable, since the algorithm chooses from fewer, more important onsets. This is useful for analysing tracks where we expect a stable, non-agogic rhythm. We tested the effect of the TKEO to see how the output detection function would behave. We did not change parameters such as the tightness of the beat tracking system for each tested track; the correct setting (chosen for the particular piece of music) would yield better results for complex music analysis.
A limitation of this study is that the EAT of the string quartet database may serve as a reference value for the beat tracking system, but it is not the actual GT of a particular track, since we cannot include any expressive elements in it, and it does not provide any information about beat positions or local tempo changes. The same applies to the reference global tempo. In an enhanced interpretation analysis, we need to track all beat positions in the segment and compare them to the real beat positions. However, in this case, we analysed relatively stable tracks with no abrupt tempo changes. In the future, we would like to use this system to create a database of segmented string quartet music with additional information about manually corrected beat positions. The impact of the TKEO on audio recordings will be tested in more detail in our future work.
Cooperation between researchers and musicologists is a crucial part of such interdisciplinary projects and of the MIR research field. Different background knowledge and tendencies can lead to mutual misunderstandings, but both sides can benefit greatly from each other. Projects like this one form an important bridge between computer scientists, MIR researchers and musicologists.

5. Conclusions

This study introduces an enhancement of the conventional beat tracking system by adding the TKEO into the pre-processing stage. It briefly describes the onset detection function and the beat tracking method with its possible application. The onset detection accuracy decreased in most analyzed tracks, but the accuracy of the GT detection and the ABD detection increased.
The influence of the TKEO was tested on different recordings, and it was found that in the case of woodwind instruments the TKEO increased the onset detection accuracy. This phenomenon will be studied in our future work, where we would like to focus on possible applications of the TKEO to music recordings in general. The TKEO changes the magnitude of the frequency components in a signal and acts as a filter, which could be the cause of the increased onset detection accuracy, e.g., in the clarinet example.
The estimation of the GT was improved on the reference database. The average deviation from the reference GT in the reference database is 9.99 BPM (11.28%), which improves on the conventional methodology, whose average deviation is 18.19 BPM (17.74%). The p-values indicate that there is a clear difference between the proposed systems. Both systems were also tested on the string quartet database; in this case, however, the results are not convincing. The proposed TS will be further used in the subsequent music analysis of the string quartet database. The aim is to create an automated system for capturing beat positions that are as close as possible to the actual beat positions in the recordings, even for complex music such as string quartets. In this way, it is possible to minimize the time required for manual processing and labelling. This study has a pilot character and provides some suggestions for improving the beat tracking system for music analysis.

Author Contributions

Conceptualization, M.I., Z.S. and L.S.; methodology, M.I.; software, M.I.; validation, M.I.; formal analysis, M.I., Z.S., L.S. and J.M.; investigation, M.I.; resources, M.I.; data curation, M.I.; writing—original draft preparation, M.I.; visualization, M.I.; supervision, Z.S., L.S. and J.M.; project administration, Z.S.; funding acquisition, Z.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the grant LO1401. For the research, infrastructure of the SIX Center was used.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABD    Average Beat Deviation
BPM    Beats Per Minute
DS     Default System without the TKEO
EAT    Estimated Average Tempo
EMG    Electromyography
GT     Global Tempo
MIR    Music Information Retrieval
STFT   Short-Time Fourier Transform
TKEO   Teager–Kaiser Energy Operator
TS     System with the TKEO included

Appendix A. Sheet Music—The First Page of the First Movement

Figure A1. The beginning of the first movement.

Appendix B. Complete Tables and Results

Table A1. The EAT of all motifs of the second database.

Track   Beg     A       B       C       D       E
CD01    80.61   69.37   34.41   88.56   55.60   74.50
CD02    77.80   69.03   44.14   81.84   59.52   72.43
CD03    77.93   73.19   41.60   87.09   62.36   79.14
CD04    80.48   75.08   48.98   80.37   69.14   80.29
CD05    69.54   66.78   32.83   80.31   54.60   71.15
CD06    72.74   72.14   42.29   74.41   61.98   76.16
CD07    75.94   65.71   39.33   83.06   58.04   78.17
CD08    69.69   66.42   34.47   79.98   53.66   75.13
CD09    83.43   72.48   40.13   83.70   60.57   72.67
CD10    82.92   72.18   40.70   88.56   61.31   74.12
CD11    70.92   63.49   43.34   73.65   56.77   67.07
CD12    82.35   70.91   48.36   83.76   63.02   77.76
CD13    69.27   61.71   45.28   79.08   54.63   70.93
CD14    81.28   69.06   46.33   88.24   61.20   77.35
CD15    74.60   68.23   30.19   85.12   54.28   69.92
CD16    79.06   68.87   39.59   87.81   59.46   76.42
CD17    71.01   58.38   37.35   75.82   51.37   68.52
CD18    72.79   71.59   51.06   75.13   64.15   70.81
CD19    74.14   73.17   56.74   80.47   69.02   73.39
CD20    77.35   74.48   51.75   82.27   68.16   80.73
CD21    77.16   71.72   47.76   82.49   64.75   73.03
CD22    73.36   62.91   42.74   81.54   55.77   74.87
CD23    73.04   65.89   34.78   80.50   53.31   77.35
CD24    78.50   78.14   58.36   79.81   70.06   80.80
CD25    75.61   72.73   44.04   80.70   62.30   73.63
CD26    83.75   77.92   46.27   93.48   66.58   84.73
CD27    82.60   76.43   49.74   83.26   68.00   76.29
CD28    72.92   65.80   48.48   80.62   62.24   71.73
CD29    70.25   63.09   37.87   74.09   55.33   58.12
CD30    68.26   65.35   38.17   70.01   56.31   67.07
CD31    73.78   71.06   43.15   79.71   59.63   75.56
CD32    83.58   72.07   40.00   82.14   60.55   71.04
CD33    76.92   63.24   42.05   74.62   56.26   68.31
All values are in BPM—Beats Per Minute.
Table A2. Reference GT and computed GT of the reference database.

Track No.   Reference (BPM)   TS (BPM)   DS (BPM)   TS Dev. (BPM)   DS Dev. (BPM)
1           48.15             47.85      47.85      0.30            0.30
2           66.99             73.83      73.83      6.84            6.84
3           68.00             95.70      95.70      27.70           27.70
4           60.41             48.75      68.00      11.66           7.59
5           39.71             42.36      42.36      2.65            2.65
6           62.76             47.85      47.85      14.91           14.91
7           53.67             54.98      54.98      1.31            1.31
8           136.05            136.00     143.55     0.05            7.50
9           55.15             56.17      56.17      1.02            1.02
10          75.86             80.75      78.30      4.89            2.44
11          91.63             95.70      95.70      4.07            4.07
12          87.27             86.13      184.57     1.14            97.30
13          93.75             99.38      95.70      5.63            1.95
14          75.38             86.13      89.10      10.75           13.72
15          35.34             42.36      42.36      7.02            7.02
16          70.01             66.26      66.26      3.75            3.75
17          72.20             73.83      73.83      1.63            1.63
18          82.87             89.10      117.45     6.23            34.58
19          41.99             46.98      42.36      4.99            0.37
20          80.65             99.38      123.05     18.73           42.40
21          72.73             83.35      78.30      10.62           5.57
22          35.09             44.55      63.02      9.46            27.93
23          89.71             172.27     172.27     82.56           82.56
24          51.81             51.68      51.68      0.13            0.13
25          63.56             99.38      103.36     35.82           39.80
26          117.46            129.20     129.20     11.74           11.74
27          200.00            198.77     184.57     1.23            15.43
28          116.73            117.45     61.52      0.72            55.21
29          95.09             83.35      123.05     11.74           27.96
30          63.36             63.02      63.02      0.34            0.34
Average     76.78             83.75      88.97      9.99            18.19
p-value                       0.038      0.024
TS—System with the TKEO; DS—Default system without the TKEO; Dev.—deviation from the reference global tempo; BPM—Beats Per Minute; p-value for the t-Test (Paired Two Sample for Means), α = 0.05.
Table A3. Differences between the estimated GT and the EAT for both systems.

          TS                                              DS
Track     Beg     A       B       C       D       E       Beg     A       B       C       D       E
CD01      15.09   13.98   6.61    3.73    5.92    3.80    8.49    22.92   6.61    3.73    1.82    3.80
CD02      14.49   0.81    6.53    1.51    6.74    3.57    14.49   9.27    2.84    1.51    3.50    1.40
CD03      14.36   1.41    7.15    5.20    4.94    6.99    11.17   10.16   11.13   5.20    13.64   6.99
CD04      0.27    0.92    34.37   0.38    17.16   3.06    0.27    3.22    31.77   0.38    9.16    3.06
CD05      4.29    3.06    4.62    0.44    10.00   7.15    8.76    11.52   10.24   3.04    21.40   4.89
CD06      3.26    6.16    8.38    1.59    4.28    4.59    3.26    6.16    5.56    1.59    14.02   4.59
CD07      7.41    8.12    8.42    3.07    9.96    2.58    7.41    6.07    14.50   3.07    15.79   7.96
CD08      11.06   9.58    7.21    3.37    10.94   5.62    11.06   9.58    6.55    0.77    10.94   5.62
CD09      8.86    5.82    3.67    2.43    15.43   3.33    8.86    5.82    3.22    2.43    15.43   5.63
CD10      12.78   6.12    15.47   0.54    12.52   4.18    9.37    3.82    10.98   0.54    3.89    6.63
CD11      7.38    4.51    16.75   2.35    7.83    2.77    5.08    0.47    16.75   2.35    0.65    0.93
CD12      1.00    6.31    7.81    3.01    3.24    2.99    1.00    7.39    5.47    0.41    1.58    5.59
CD13      9.03    14.29   10.89   1.67    5.46    2.90    6.73    19.04   8.55    1.67    15.21   2.90
CD14      11.01   11.69   8.65    0.86    8.64    3.40    11.01   6.94    8.65    4.05    0.32    8.78
CD15      11.53   12.52   8.38    1.01    5.81    10.83   8.75    12.52   7.81    1.01    19.55   19.18
CD16      13.23   7.13    12.09   1.29    10.38   2.59    13.23   9.43    5.74    1.29    8.54    1.88
CD17      7.29    13.40   5.72    2.48    4.80    5.31    9.74    0.96    9.63    0.18    2.46    9.78
CD18      5.51    6.71    10.46   0.87    1.13    21.48   3.21    6.71    0.62    0.87    4.06    0.97
CD19      1.86    10.18   4.01    0.28    11.73   4.91    1.86    7.58    13.10   0.28    9.28    4.91
CD20      3.40    8.87    0.98    1.08    3.56    0.02    3.40    6.27    0.98    1.08    0.16    11.56
CD21      6.19    4.28    2.91    0.86    11.25   7.72    8.97    6.58    10.97   0.86    0.15    7.72
CD22      7.39    3.35    11.09   1.81    14.07   11.26   12.77   1.69    9.99    1.81    5.75    3.43
CD23      16.06   3.95    10.55   2.85    2.86    3.40    16.06   10.11   5.59    2.85    24.99   3.40
CD24      0.20    5.21    1.73    0.94    10.69   8.30    2.25    2.61    0.37    0.94    10.69   2.55
CD25      2.69    8.02    0.51    2.65    16.00   2.37    2.69    10.62   4.71    2.65    9.57    2.37
CD26      8.54    6.14    11.15   2.22    16.77   7.56    5.35    6.14    11.15   2.22    14.17   4.37
CD27      3.53    6.92    5.24    0.09    15.35   0.29    3.53    6.92    4.09    0.09    12.75   4.46
CD28      5.38    10.20   2.19    2.73    7.60    9.02    0.91    1.20    3.20    2.73    7.60    11.62
CD29      8.05    1.51    8.27    4.21    10.93   0.61    8.05    1.57    19.55   4.21    7.69    0.70
CD30      7.74    2.33    7.16    0.17    2.48    22.03   0.26    5.26    6.38    2.01    9.95    6.76
CD31      4.52    2.77    6.54    1.04    10.21   0.44    4.52    2.77    14.27   1.04    14.20   0.44
CD32      12.12   3.93    3.07    1.21    13.28   2.79    12.12   8.68    6.98    1.21    15.45   7.26
CD33      3.83    6.60    9.63    0.79    5.26    3.47    3.83    6.60    9.63    0.79    11.74   5.52
Average   7.56    6.57    8.13    1.78    9.01    5.49    6.92    7.17    8.71    1.78    9.58    5.38
Result    6.42 (TS)                                       6.59 (DS)
All values are in BPM—Beats Per Minute.

References

1. Benetos, E.; Dixon, S. Polyphonic music transcription using note onset and offset detection. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), Prague, Czech Republic, 22–27 May 2011.
2. D'Amario, S.; Daffern, H.; Bailes, F. A new method of onset and offset detection in ensemble singing. Logop. Phoniatr. Vocol. 2019, 44, 143–158.
3. Böck, S.; Widmer, G. Maximum filter vibrato suppression for onset detection. In Proceedings of the 16th International Conference on Digital Audio Effects (DAFx-13), Maynooth, Ireland, 2–5 September 2013.
4. Grosche, P.; Müller, M. Cyclic Tempogram—A Mid-level Tempo Representation for Music Signals. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Dallas, TX, USA, 14–19 March 2010.
5. Grosche, P.; Müller, M. Extracting Predominant Local Pulse Information from Music Recordings. IEEE Trans. Audio Speech Lang. Process. 2011, 19, 1688–1701.
6. Grosche, P.; Müller, M. Tempogram Toolbox: MATLAB tempo and pulse analysis of music recordings. In Proceedings of the 12th International Conference on Music Information Retrieval, Miami, FL, USA, 24–28 October 2011.
7. LibROSA. Available online: https://librosa.github.io/librosa/ (accessed on 3 January 2020).
8. Lartillot, O.; Toiviainen, P. A Matlab Toolbox for Musical Feature Extraction From Audio. In Proceedings of the 31st Annual Conference of the Gesellschaft für Klassifikation e.V., Breisgau, Germany, 7–9 March 2007.
9. Zapata, J.R.; Davies, M.E.P.; Gómez, E. Multi-feature beat tracking. IEEE Trans. Audio Speech Lang. Process. 2014, 22, 816–825.
10. Schlüter, J.; Böck, S. Improved musical onset detection with convolutional neural networks. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing, Florence, Italy, 4–9 May 2014.
11. Eyben, F.; Böck, S.; Schuller, B.; Graves, A. Universal onset detection with bidirectional long-short term memory neural networks. In Proceedings of the 11th International Society for Music Information Retrieval Conference (ISMIR), Utrecht, The Netherlands, 9–13 August 2010.
12. Ellis, D. Beat Tracking by Dynamic Programming. J. New Music Res. Spec. Issue Beat Tempo Extr. 2007, 36, 51–60.
13. Böck, S.; Krebs, F.; Widmer, G. A Multi-Model Approach To Beat Tracking Considering Heterogeneous Music Styles. In Proceedings of the 15th International Society for Music Information Retrieval Conference (ISMIR), Taipei, Taiwan, 27–31 October 2014.
14. Srinivasamurthy, A. A Data-Driven Bayesian Approach to Automatic Rhythm Analysis of Indian Art Music. Ph.D. Thesis, Pompeu Fabra University, Barcelona, Spain, October 2016.
15. 2012:MIREX Home. Available online: https://www.music-ir.org/mirex/wiki/2012:MIREX_Home (accessed on 3 January 2020).
16. Cook, N. Beyond the Score: Music as Performance; Oxford University Press: Oxford, UK, 2013.
17. Bowen, J.A. Tempo, duration, and flexibility: Techniques in the analysis of performance. J. Musicol. Res. 1996, 16, 111–156.
18. Solnik, S.; DeVita, P.; Rider, P.; Long, B.; Hortobágyi, T. Teager-Kaiser Operator improves the accuracy of EMG onset detection independent of signal-to-noise ratio. Acta Bioeng. Biomech. 2008, 10, 65.
19. Kopparapu, S.K.; Pandharipande, M.; Sita, G. Music and vocal separation using multiband modulation based features. In Proceedings of the Symposium on Industrial Electronics and Applications (ISIEA), Penang, Malaysia, 3–5 October 2010.
20. Das, S.; Hansen, J. Detection of Voice Onset Time (VOT) for Unvoiced Stops (/p/, /t/, /k/) Using the Teager Energy Operator (TEO) for Automatic Detection of Accented English. In Proceedings of the 6th Nordic Signal Processing Symposium (NORSIG), Espoo, Finland, 9–11 June 2004.
21. Hamila, R.; Lakhzouri, A.; Lohan, E.S.; Renfors, M. A Highly Efficient Generalized Teager-Kaiser-Based Technique for LOS Estimation in WCDMA Mobile Positioning. EURASIP J. Appl. Signal Process. 2005, 5, 698–708.
22. Kvedalen, E. Signal Processing Using the Teager Energy Operator and Other Nonlinear Operators. Master's Thesis, University of Oslo, Oslo, Norway, May 2003.
23. Dimitriadis, D.; Potamianos, A.; Maragos, P. A Comparison of the Squared Energy and Teager-Kaiser Operators for Short-Term Energy Estimation in Additive Noise. IEEE Trans. Signal Process. 2009, 57, 2569–2581.
24. Müller, M. Fundamentals of Music Processing; Springer: Berlin, Germany, 2015; pp. 309–311.
25. Konz, V. Automated Methods for Audio-Based Music Analysis with Applications to Musicology. Ph.D. Thesis, Saarland University, Saarbrücken, Germany, 2012.
26. Holzapfel, A.; Davies, M.E.P.; Zapata, J.R.; Oliveira, J.; Gouyon, F. Selective Sampling for Beat Tracking Evaluation. IEEE Trans. Audio Speech Lang. Process. 2012, 20, 2539–2548.
27. Sonic Visualiser. Available online: https://www.sonicvisualiser.org/ (accessed on 3 January 2020).
28. Dixon, S. An Interactive Beat Tracking and Visualisation System. In Proceedings of the 2001 International Computer Music Conference (ICMC 2001), Havana, Cuba, 7–22 September 2001.
Figure 1. Original signal and the same signal after application of the TKEO pre-processing.
Figure 2. Spectrograms of the same clarinet recording—the second one is using a TKEO step.
Figure 3. Spectrogram and onset detection function for a solo clarinet without the TKEO.
Figure 4. Spectrogram and onset detection function for a solo clarinet with the TKEO.
Figure 5. Colorbar.
Figure 6. Spectrogram and onset detection function for a solo violin without TKEO.
Figure 7. Spectrogram and onset detection function for a solo violin using TKEO.
Figure 8. Comparison of the onset and beat positions.
Figure 9. Estimated beat positions by the dynamic programming system.
Figure 10. Possible application of the beat tracking system.
Figure 11. Results of the EAT calculation for the first motif of the string quartet database.
Figure 12. Results of the EAT calculation for the entire first movement of the string quartet database.
Figure 13. Visualisation of the GT computation—Ref, TS and DS estimation.
Table 1. The EAT of all motifs of the second database.

Track   Beg     A       B       C       D       E
CD01    80.61   69.37   34.41   88.56   55.60   74.50
CD02    77.80   69.03   44.14   81.84   59.52   72.43
CD03    77.93   73.19   41.60   87.09   62.36   79.14
…       …       …       …       …       …       …
CD33    76.92   63.24   42.05   74.62   56.26   68.31
All values are in BPM—Beats Per Minute.
Table 2. Calculation of quarter notes in all motifs.

Motif           Beginning   A         B         C         D         E
Bars            1–70        71–110    111–118   119–164   165–225   226–262
Quarter notes   280         160       32        184       244       148
Table 3. Reference GT and computed GT of the reference database.

Track No.   Reference (BPM)   TS (BPM)   DS (BPM)   TS Dev. (BPM)   DS Dev. (BPM)
1           48.15             47.85      47.85      0.30            0.30
2           66.99             73.83      73.83      6.84            6.84
3           68.00             95.70      95.70      27.70           27.70
…           …                 …          …          …               …
30          63.36             63.02      63.02      0.34            0.34
Average     76.78             83.75      88.97      9.99            18.19
p-value                       0.038      0.024
TS—System with the TKEO; DS—Default system without the TKEO; Dev.—deviation from the reference global tempo; BPM—Beats Per Minute; p-value for the t-Test (Paired Two Sample for Means), α = 0.05.
Table 4. Results of GT testing—the reference database.

Type        Me      x̄       sd      rsd     var
Reference   77.11   76.78   33.01   0.43    1089.50
TS          82.05   83.75   37.30   0.45    1391.05
DS          76.07   88.97   41.05   0.46    1685.16
Dev. TS     5.31    9.99    15.75   1.58    247.93
Dev. DS     7.26    18.19   24.08   1.32    579.71
Me—median; x̄—mean value; sd—standard deviation; rsd—relative standard deviation; var—variance; TS—System with the TKEO; DS—System without the TKEO; Dev.—deviation from the reference global tempo.
Table 5. Results of the ABD testing for all recordings.

               TS (s)   DS (s)
x̄              2.30     2.84
Me             1.81     2.57
sd of the x̄    1.90     2.17
sd of the Me   2.14     2.31
TS—System with the TKEO; DS—System without the TKEO; Me—median; x̄—mean value; sd—standard deviation.
Table 6. Results of the ABD testing for recordings with the average ABD < 1 s.

          Dev. < 1 s in the Average of TS      Dev. < 1 s in the Average of DS
          TS             DS                    TS             DS
          x̄      Me      x̄      Me             x̄      Me      x̄      Me
Average   0.39   0.38    0.95   0.70           0.29   0.12    0.36   0.22
TS—System with the TKEO; DS—System without the TKEO; Dev.—deviation from the reference beat positions; Me—median; x̄—mean value.
Table 7. Differences between the estimated GT and the EAT for both systems.

          TS                                              DS
Track     Beg     A       B       C       D       E       Beg     A       B       C       D       E
CD01      15.09   13.98   6.61    3.73    5.92    3.80    8.49    22.92   6.61    3.73    1.82    3.80
CD02      14.49   0.81    6.53    1.51    6.74    3.57    14.49   9.27    2.84    1.51    3.50    1.40
CD03      14.36   1.41    7.15    5.20    4.94    6.99    11.17   10.16   11.13   5.20    13.64   6.99
…         …       …       …       …       …       …       …       …       …       …       …       …
CD33      3.83    6.60    9.63    0.79    5.26    3.47    3.83    6.60    9.63    0.79    11.74   5.52
Average   7.56    6.57    8.13    1.78    9.01    5.49    6.92    7.17    8.71    1.78    9.58    5.38
Result    6.42 (TS)                                       6.59 (DS)
All values are in BPM—Beats Per Minute.
