Article

Wearable Haptic Music Player with Multi-Feature Extraction Using Spectral Flux and Yin Algorithms

by Aaron Benjmin R. Alcuitas, Thad Jacob T. Tiong, Hang-Hong Kuo and Aaron Raymond See *
Department of Electronic Engineering, National Chin-Yi University of Technology, No. 57, Section 2, Zhongshan Rd., Taiping District, Taichung City 411030, Taiwan
* Author to whom correspondence should be addressed.
Electronics 2025, 14(18), 3658; https://doi.org/10.3390/electronics14183658
Submission received: 11 August 2025 / Revised: 7 September 2025 / Accepted: 13 September 2025 / Published: 16 September 2025
(This article belongs to the Special Issue Intelligent Computing and System Integration)

Abstract

Vibrotactile feedback synchronized with audio through haptic music players (HMPs) creates a synergistic effect that has been shown to improve the music listening experience. However, current HMPs are still unable to efficiently retrieve multiple music features, hindering application scalability and jeopardizing long-term user engagement. This study introduces a wearable HMP that utilizes piezoelectric actuators and a novel audio-tactile rendering algorithm that uses YIN to extract pitch and spectral flux to extract rhythm. Building upon prior work, the system additionally features a modified discretization step and software optimization to improve multi-feature extraction and the tactile display of music. The pitch, melody/timbre, and rhythm displays were validated using Mean Absolute Error (MAE), Dynamic Time Warping (DTW) distance, and accuracy, respectively, yielding normalized averages of MAE = 0.1020 and DTW = 0.1518, and a rhythmic pattern accuracy of 97.56%. The YIN algorithm was shown to greatly improve the tactile display of vocals, with slight improvements for bass and accompaniments, while spectral flux and software optimizations significantly improved the rhythm display. The wearable HMP effectively communicates multiple music features without the pitfalls of prior approaches. Future research can improve the system’s audio-tactile signal fidelity and explore the qualitative merits of multi-feature extraction in HMPs.

1. Introduction

1.1. Background

The synergy of sound with its mechanical vibrations in the music playing and listening experience, such as from instruments and speakers, suggests that the sense of touch could afford an even greater appreciation of music [1,2]. For this purpose, numerous haptic music players (HMPs) that integrate vibrotactile feedback with audio stimuli have been developed [3]. The form factor of HMPs in prior research ranges from chairs [4,5,6,7,8,9,10], belts [10,11,12], bracelets [6,13,14], and jackets [15,16] to, most recently, gloves [17,18,19,20,21], owing to the high concentration of tactile receptors in the hands. Regardless of form, multiple user studies have shown that vibration signals with dynamics similar to the original audio add more enjoyment to the music listening experience [4,18,19,21,22].
However, the issue that heavily limits the development of high-quality HMPs is their inability to communicate multiple music features, such as pitch, rhythm, melody, and timbre, in an efficient manner. Despite overall positive feedback, users may still feel “bored” when there is too much focus on only one or a few music features [23], jeopardizing long-term use beyond the scope of prior user studies. One of the root causes is the kind of actuators used, as the overwhelming majority of HMPs use either voice coil actuators (VCAs) [4,17,18,19] or coin motors [20,21], and oftentimes transduce the audio signals without further signal processing in order to preserve their vibrations. Although VCAs have wide frequency ranges, the human skin can perceive vibrations only up to 1 kHz, which is minuscule compared to the 20 kHz auditory perception limit [5,17,20]. Because of this perception limit, designers of HMPs that use VCAs need to cap their frequency range at 1 kHz [24], diminishing the communicated pitch. On the other hand, the vibration frequencies of coin motors cannot be controlled for the purposes of HMPs, so they are often used solely to simulate tempo instead [20,21]. While attempts have been made to communicate a wider frequency range using these actuators, such as assigning different actuators to frequency subsets, the discrete, rather than upward or downward, movements created by these setups displayed rhythm instead of pitch or melody, as users cannot determine the exact frequency emitted by each VCA [4,5,25]. These biological constraints indicate that methods beyond transducing the signal, and actuators other than VCAs or coin motors, need to be explored if multiple music features are to be presented in future HMPs.
Another cause is the excessive complexity involved in extracting basic music features, particularly pitch and rhythm. Aside from transducing audio signals, a dominant method for extracting pitch is the creation and/or use of MIDI data [1,5,6,24,25,26], which contains note information corresponding to various levels of pitch. Unfortunately, integrating MIDI data into the HMP development process requires the manual use of external software such as a Digital Audio Workstation [25,26], slowing implementation and preventing scalable, mobile HMP applications. For instance, one HMP study even created a MIDI player from scratch to display pitch [1], but this approach exacerbates the difficulty of modifying such a system to also extract other music features. Additionally, working with MIDI data requires either access to the music’s MIDI file [24] or the capacity to accurately convert audio to MIDI, which remains tedious and extremely difficult with present technology. The field has yet to provide an algorithm that, without merely transducing audio signals or using MIDI files, could take only an audio file as input and convert it into tactile signals containing multiple music features for HMP applications.
Moreover, the beat extraction in prior studies is mostly unspecified and unverified, with one example taking the beats per minute (bpm) of musical excerpts and creating 102 ms beats that match them [20]. Another practice is to use a low-frequency bass band and define beats as energy peaks in the resulting signal [23,27,28], under the assumption that the sound already contains enough haptic information on its own, the same principle behind transducing the signal [27]. However, since the bass band reaches up to only 200 Hz [23], this method cannot extract beats from higher frequency ranges where drums are usually situated. As volume is inevitably a factor in an energy-based approach [23], current methods are also susceptible to erroneously treating volume fluctuations and noise as beats, while missing beats that occur simultaneously at a softer volume. Thus, it becomes necessary to create an HMP application that utilizes a frequency-based beat extraction approach that is robust to the noise and polyphony common in modern music. Furthermore, only a handful of studies extracted and displayed timbre from music [5,26], underscoring the overall limitations of current HMPs in multi-feature extraction (Table 1).

1.2. Research Question

In this study, we introduce an HMP wearable device (HMP-WD) that uses piezoelectric actuators, equipped with an algorithm capable of extracting pitch, melody, rhythm, and timbre from an audio file and embedding them in tactile signals for mobile applications. We build upon our previous work [29] by enhancing its multi-feature extraction in four stages. First, we now utilize the YIN algorithm [30], a time-domain autocorrelation-based algorithm used for monophonic pitch tracking, making it ideal for detecting pitch in isolated vocal and instrument tracks. YIN is efficient, highly accurate, and robust to noise, making it a suitable alternative to MIDI in HMP applications. Our prior method used spectral peaks for pitch estimation; however, the results demonstrated that the tactile signals had a slight difficulty replicating the coarse pitch contours created from the original audio. Second, we now use spectral flux for beat estimation as opposed to our previous energy-based approach. Unlike energy-based methods, spectral flux detects subtle spectral changes independent of volume while emphasizing transients and sharp attacks. While the coarseness of its resulting signal makes it difficult for tactile pitch display, this quality may make it subjectively better for determining beats in polyphonic audio. Third, we discretize the pitch to eight levels rather than four, hypothesizing that this will enable the tactile signals to replicate pitch contours more accurately. Fourth, we implement software optimization on the mobile application to reduce latencies in synchronizing the visual, audio, and haptic feedback. In addition to evaluating the HMP-WD’s multi-feature display, this study will also compare both approaches for future HMP applications.
The study proceeds as follows: in Section 2, we further discuss the system components and the algorithms employed for audio-tactile rendering. In Section 3, we present the signal fidelity testing, which evaluates the HMP-WD’s ability to communicate music features from a wide frequency range while preserving the music’s original melodic contours and rhythmic patterns. In Section 4, we analyze the results and their implications on the efficacy of the methods for HMP-WDs. This work may provide the solution for current limitations on multi-feature extraction and display for future HMP applications.

2. Materials and Methods

The HMP-WD is composed of (1) the haptic glove, (2) the mobile app, and (3) the rendering system. Given an audio file, the audio-tactile rendering system extracts the music features pitch, melody, rhythm, and timbre using source separation, spectral flux, YIN, discretization, and feature-pin mapping, embedding the extracted features into tactile patterns. The tactile patterns are stored in the mobile app, and when a song is played, the app synchronizes the visual and audio feedback with the tactile feedback as it sends the patterns to the haptic glove. The haptic glove processes the patterns and activates the corresponding pins. Figure 1 illustrates the system architecture of the HMP-WD.
The haptic glove consists of P20 braille cells (metec Ingenieur GmbH, Stuttgart, Germany) connected to a custom-designed printed circuit board (PCB), with both housed in 3D-printed enclosures and attached to off-the-shelf gloves. The PCB features an STM32F10 microcontroller and a JDY33 Bluetooth module, which enables it to receive tactile patterns from the mobile app through Bluetooth Low Energy (BLE) communication. The system is powered by a 5 V lithium battery with a 2 A current rating and supports automatic recharging. A DC-DC boost converter, managed by an LM3488 controller, supplies 185 V for the actuators and at least 3.3 V for control. The glove’s components strike the balance of portability, efficiency, and reliability required for HMP-WD applications.
Each finger is equipped with two piezoelectric actuators, each containing eight pins that can deliver a minimum tactile force of 17 cN and a fast 50 ms dot rising time, well within the human tactile sensitivity range (≥10 cN, 1–100 ms), providing high-resolution vibrotactile feedback that can communicate rapid changes in music features. The HMP-WD takes advantage of the piezoelectric actuator’s unique ability for fine localization, which is also made possible by its position at the user’s fingertips, where 44–60% of the Pacinian corpuscles of the hand are situated [31]. While no prior HMP appears to have utilized piezoelectric actuators for audio-tactile feedback, the notion has been expressed before, as these actuators were known to have extremely fast response times and wide frequency ranges that fit the requirements for the tactile display of music [32]. Recently, the same configuration of actuators has been used to provide localized cutaneous force feedback in robotics-assisted surgery systems [33] and virtual reality immersion [34,35], highlighting its reliability for HMP applications. Figure 2 presents the components and control pipeline of the haptic gloves used by the HMP-WD.
The mobile app, developed using Flutter for cross-platform compatibility, delivers synchronized visual, audio, and vibrotactile stimuli through real-time BLE communication with the haptic gloves while the music plays. The feature-pin mapping assigns instruments to different fingers to communicate timbre, with the percussive elements assigned to the index and middle fingers, as these are the fingers commonly used when moving to the rhythm in finger-tapping studies [36]. Among the remaining melodic elements, the bass is assigned to the thumb as it has the lowest frequency band [23], while the vocals and accompaniment are sent to the ring and pinky fingers, respectively. The percussive elements are ordered from the lowest frequency band to the highest in a down-up, left-right manner: kick and snare are assigned to the index finger, while toms and cymbals belong to the middle finger. A similar behavior can be observed in the melodic elements’ pitch display, where a pitch movement from level 0 to 4 is rendered as an upward movement of the pins, while a movement from 0 to 8 is rendered as an upward, rightward, and then upward movement to communicate melody. The rightward movement signifying an increase in pitch mirrors the behavior of HMPs that utilize tactile illusions [24,26], where lower-pitched notes are assigned to the left side and higher-pitched notes to the right. An advantage of using piezoelectric actuators for this application, however, is the ability to further localize pitch information at different levels solely on the fingertips and without using tactile illusions, providing the space necessary to assign instruments to their own fingers. Figure 3 shows the mobile application with the assignment of music elements to their corresponding pins, and a minimal sketch of this mapping is given below.
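For illustration only, the following Python dictionary sketches the feature-pin mapping described above; the key names, finger labels, and fields are hypothetical and do not reflect the identifiers used in the released application.

```python
# Illustrative feature-pin mapping (hypothetical identifiers, not the app's actual ones).
FEATURE_PIN_MAP = {
    "bass":    {"finger": "thumb",  "type": "melodic",    "levels": 8},
    "vocals":  {"finger": "ring",   "type": "melodic",    "levels": 8},
    "other":   {"finger": "pinky",  "type": "melodic",    "levels": 8},
    "kick":    {"finger": "index",  "type": "percussive", "band_hz": (30, 150)},
    "snare":   {"finger": "index",  "type": "percussive", "band_hz": (150, 800)},
    "toms":    {"finger": "middle", "type": "percussive", "band_hz": (800, 5000)},
    "cymbals": {"finger": "middle", "type": "percussive", "band_hz": (5000, 12000)},
}
```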
The rendering system consists of Python software (version 3.11.5) that takes an audio file path, extracts music features, and embeds them in tactile signals. The algorithm sequentially performs (1) source separation, (2) pitch and onset detection, and (3) discretization and feature-pin mapping.
Source separation isolates different instruments from the music into their own audio files. To achieve this, we employed a pre-trained hybrid separation model [37] that separates the audio into four stems: drums, bass, vocals, and other accompaniments, with the latter three comprising the melodic tracks. The model uses a Transformer architecture and leverages both spectrogram-based and waveform-based techniques to perform high-accuracy source separation. Since drums are unpitched percussive sounds, the drum stem is further split into four frequency subsets, which will later be used to detect beat onsets. The subdivisions were heuristically selected based on where a distinct drum sound is best heard, which are 30–150 Hz for kicks, 150–800 Hz for snares, 800–5000 Hz for toms, and 5000–12,000 Hz for cymbals. These were then separated from the drum stem using a Butterworth bandpass filter, resulting in four percussive tracks.
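As a rough sketch of this step, the code below splits an already-separated drum stem into the four heuristic bands with a Butterworth bandpass filter; it assumes the four stems have been produced beforehand by a pre-trained separation model such as Demucs, and the sampling rate and filter order are illustrative choices rather than the authors’ exact settings.

```python
import librosa
from scipy.signal import butter, sosfiltfilt

# Heuristic drum sub-bands from the text (Hz).
DRUM_BANDS = {
    "kick":    (30, 150),
    "snare":   (150, 800),
    "toms":    (800, 5000),
    "cymbals": (5000, 12000),
}

def split_drum_stem(drum_stem_path, sr=44100, order=4):
    """Split a separated drum stem into four percussive tracks via bandpass filtering."""
    y, sr = librosa.load(drum_stem_path, sr=sr, mono=True)
    tracks = {}
    for name, (lo, hi) in DRUM_BANDS.items():
        sos = butter(order, (lo, hi), btype="bandpass", fs=sr, output="sos")
        tracks[name] = sosfiltfilt(sos, y)  # zero-phase filtering avoids phase distortion
    return tracks, sr
```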
Pitch and onset detection performs feature extraction on each of the separated audio tracks, preventing features of one track from being overlapped by another. The process begins by removing noise from the audio at the signal level using two filters. A Butterworth high-pass filter removes low-frequency noise, including microphone rumbles and external movements. Afterward, a median filter replaces each sample with the median of its window to remove impulse noise such as clicks and pops. Denoising is an essential preprocessing step to make voiced signal parts and real beats more salient for detection, so it is first applied to both melodic and percussive tracks.
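A minimal sketch of this two-stage denoising is shown below; the high-pass cutoff and median window size are assumptions chosen for illustration, not the authors’ exact settings.

```python
from scipy.signal import butter, sosfiltfilt, medfilt

def denoise(y, sr, highpass_hz=40.0, order=4, median_kernel=5):
    """Remove low-frequency rumble, then impulse noise, from a separated track."""
    sos = butter(order, highpass_hz, btype="highpass", fs=sr, output="sos")
    y = sosfiltfilt(sos, y)                    # suppress rumble and movement noise
    y = medfilt(y, kernel_size=median_kernel)  # suppress clicks and pops
    return y
```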
To extract pitch information, autocorrelation measures the similarity between a time-lagged version of the signal and the original, which repeats periodically at certain lag values. The best match after zero lag occurs at the fundamental period, revealing the signal’s pitch. The autocorrelation function can be defined as
$$ R(\tau) = \sum_{n} x[n] \cdot x[n + \tau], $$
where x[n] is the signal sample at time n, τ is the time lag, and R(τ) is the similarity between the signal and itself shifted by the time lag. While autocorrelation has shown robustness to amplitude changes without the need for frequency analysis, it tends to perform poorly in the presence of harmonics, which occur simultaneously with the pitch. The YIN algorithm improves on autocorrelation by using the difference and cumulative mean normalized difference functions to suppress harmonic peaks and amplify the true pitch [30]. While our previous method utilized the Short-Time Fourier Transform and Quadratically Interpolated Fast Fourier Transform for spectral peak estimation [38], these are just as, if not more, susceptible to the noise and harmonics that disrupt accurate pitch detection, suggesting that YIN may be a more fitting approach for HMP applications.
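A pitch-tracking sketch using librosa’s implementation of YIN is given below; the frequency bounds and the 100 ms hop (matching the later binning) are illustrative assumptions.

```python
import librosa

def extract_pitch(y, sr, fmin=50.0, fmax=1000.0, hop_ms=100):
    """Estimate a fundamental-frequency contour with YIN, one value per 100 ms frame."""
    hop = int(sr * hop_ms / 1000)
    f0 = librosa.yin(y, fmin=fmin, fmax=fmax, sr=sr,
                     frame_length=4 * hop, hop_length=hop)
    return f0  # Hz, one estimate per frame
```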
To detect beat onsets, spectral flux computes how much the frequency-domain representation of the audio changes at each frame. Sharp increases in the spectrum indicate potential points where a percussive instrument was used, signifying a beat onset. Changes in the spectrum are quantified by the onset strength envelope which is given by
$$ o(t) = \sum_{f} \max\bigl(0,\; S(f, t) - S(f, t-1)\bigr), $$
where o(t) is the onset strength at time frame t, f is the frequency bin index, S is the spectrum, and the max function implements half-wave rectification. From the envelope, local thresholds are determined to pick peaks that represent beat onsets [39]. While spectral flux has been used for onset detection in pitched instruments [40,41], it has also shown reliability in beat onset detection for rhythm-based applications [42,43]. Energy peaks, as used in our previous method, remain a simple approach that is effective when beats always coincide with loudness spikes, but spectral flux looks beyond loudness into the frequency content while remaining efficient enough for use in HMPs.
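The sketch below computes the spectral-flux-based onset strength envelope and picks peaks using librosa; the hop length is an assumption, and the peak-picking thresholds are left at the library defaults rather than the local thresholds tuned in this study.

```python
import librosa

def detect_beat_onsets(y, sr, hop_length=512):
    """Return the onset strength envelope and beat onset times (seconds)."""
    env = librosa.onset.onset_strength(y=y, sr=sr, hop_length=hop_length)
    onsets = librosa.onset.onset_detect(onset_envelope=env, sr=sr,
                                        hop_length=hop_length, units="time")
    return env, onsets
```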
To remove unreliable pitch and beat detections as well as noise, the Root Mean Square (RMS) energy is calculated. The RMS energy of an audio signal is defined as
$$ \mathrm{RMS} = \sqrt{\frac{1}{N} \sum_{n=1}^{N} x[n]^{2}}, $$
where x[n] is the signal frame and N is the number of samples in the frame. The RMS energy is then expressed in decibels and compared against a predefined threshold, set to 35 dB in this study, to remove frames where the estimated pitch or beat onsets are too quiet and are therefore likely false detections or noise. As the pitch values generally follow non-normal distributions that exhibit skewness and platykurtosis, Tukey’s method is used to detect and remove outliers, applying a standard interquartile range rule with a multiplier of 1.5 to clean the pitch contours.
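The cleaning stage can be sketched as follows: frames quieter than the decibel threshold are zeroed, then Tukey’s IQR rule removes pitch outliers. Interpreting the 35 dB threshold as dB relative to the track’s peak RMS is an assumption made for this illustration.

```python
import numpy as np
import librosa

def clean_pitch(f0, y, sr, hop_length, db_threshold=-35.0, iqr_k=1.5):
    """Gate quiet frames by RMS energy and remove pitch outliers with Tukey's method."""
    rms = librosa.feature.rms(y=y, hop_length=hop_length)[0]
    rms_db = librosa.amplitude_to_db(rms, ref=np.max)   # dB relative to peak (assumption)
    n = min(len(f0), len(rms_db))
    f0 = f0[:n].copy()
    f0[rms_db[:n] < db_threshold] = 0.0                 # drop likely false detections

    voiced = f0[f0 > 0]
    if voiced.size:
        q1, q3 = np.percentile(voiced, [25, 75])
        lo, hi = q1 - iqr_k * (q3 - q1), q3 + iqr_k * (q3 - q1)
        f0[(f0 > 0) & ((f0 < lo) | (f0 > hi))] = 0.0    # Tukey outlier removal
    return f0
```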
The final step discretizes the continuous pitch contours into eight levels based on their percentile relative to the maximum pitch value in the audio track; since beat onsets are already in binary form, they are not discretized further. To ensure uniform spacing for BLE communication, the discretized pitch values and beat onsets are aligned to 100 ms time bins, with empty bins set to zero. The binned values are formatted in a dictionary and converted to strings, and the resulting tactile patterns correspond to the feature-pin mappings, which are later stored in the mobile app and sent to the haptic gloves. Figure 4 illustrates the proposed audio-tactile rendering algorithm.
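A sketch of the discretization, binning, and packing step is shown below; mapping non-zero pitch values to eight levels by their position within the track’s non-zero range follows the description above, while the dictionary layout and string serialization are assumptions for illustration.

```python
import json
import numpy as np

def discretize_pitch(f0, n_levels=8):
    """Map non-zero pitch values to levels 1..n_levels; 0 marks silent bins."""
    levels = np.zeros(len(f0), dtype=int)
    voiced = f0 > 0
    if voiced.any():
        lo, hi = f0[voiced].min(), f0[voiced].max()
        frac = (f0[voiced] - lo) / max(hi - lo, 1e-9)
        levels[voiced] = np.clip(np.ceil(frac * n_levels), 1, n_levels).astype(int)
    return levels

def pack_pattern(stem_name, pitch_levels, onset_bins):
    """Format one stem's 100 ms-binned features as a string for BLE transfer (assumed layout)."""
    pattern = {stem_name: {"pitch": pitch_levels.tolist(), "onsets": list(onset_bins)}}
    return json.dumps(pattern)
```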

3. Results

3.1. Signal Fidelity Testing

As in our previous work [29], the HMP-WD should be validated in terms of its ability to display the extracted music features while maintaining the original audio’s melodic contour and rhythmic patterns. Tactile displays of pitch, melody/timbre, and rhythm are evaluated using mean absolute error (MAE), dynamic time warping (DTW) distance, and accuracy, respectively, with respect to the cleaned audio signals. In addition to testing the current methods and device, the results of both studies are compared. In particular, our previous work used spectral peaks for pitch estimation, energy peaks for beat detection, and four levels in pitch discretization, and it had no software optimization. Since then, we have split the in-app timer that synchronized the visual, audio, and vibrotactile stimuli into two, enabling parallel processing and reducing latencies.
In this experiment, the mobile app recorded the tactile signals transmitted to the haptic glove with their corresponding timestamps, generating one log file per song across five selected tracks. The songs were chosen randomly subject to the criteria of containing vocals and multiple instruments and covering a variety of genres (Table 2), following the approach of related studies [6]. The log files contain the tactile patterns produced by the piezoelectric actuators as output, representing the data after it had been processed by both the audio-tactile rendering algorithm and the wearable HMP. After the experiments, the log files were used in Python to reconstruct the tactile signals for comparison with their parent audio signals. The previous and current methods are then compared in terms of signal fidelity (Table 3).

3.1.1. Pitch Display

Figure 5a plots the pitch of the cleaned original audio for the vocals stem, while Figure 5b plots the pitch levels of the tactile signals designated for the vocals pin group. The bass, vocals, and others stems for both audio and tactile signals were normalized before MAE was computed using
$$ \mathrm{MAE} = \frac{1}{N} \sum_{i=1}^{N} \left| y_i - \hat{y}_i \right|, $$
where N is the number of time bins, y is the normalized pitch from the audio, and ŷ is the normalized pitch from the tactile signals. The MAE measures the average difference between the two pitch contours, with lower values indicating that the tactile signals convey the original audio’s pitch more accurately. Across all melodic elements, the average MAE is 0.1020 with a standard deviation of 0.0451 (Table 4). Figure 5c illustrates a comparison between the normalized tactile and audio signals. While discretization causes the tactile signals to slightly deviate from the original pitch, the heights and depths of the pitch contour are still generally preserved.
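Under the assumption that both sequences have already been aligned to the same 100 ms bins, the MAE computation reduces to a few lines of NumPy:

```python
import numpy as np

def normalized_mae(audio_pitch, tactile_pitch):
    """MAE between audio and tactile pitch sequences, each normalized to [0, 1]."""
    y = np.asarray(audio_pitch, dtype=float)
    y_hat = np.asarray(tactile_pitch, dtype=float)
    y = y / max(y.max(), 1e-9)
    y_hat = y_hat / max(y_hat.max(), 1e-9)
    return float(np.mean(np.abs(y - y_hat)))
```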

3.1.2. Melody/Timbre Display

DTW is applied to assess the similarity between audio and tactile pitch sequences while considering potential temporal misalignment. In both sequences, Euclidean distances between corresponding points are used to construct a distance matrix. We recursively compute the accumulated cost matrix before extracting the optimal warping path, which is used to calculate the Normalized DTW distance using
$$ \mathrm{Normalized\ DTW} = \frac{1}{K} \sum_{k=1}^{K} d\bigl(a_{i_k}, b_{j_k}\bigr), $$
where K is the length of the warping path, a_{i_k} and b_{j_k} are the DTW-aligned pair of values from the audio and tactile sequences at path index k, and d is the Euclidean distance between them. The size of K corresponds to the duration of the song in seconds multiplied by 10, as 10 discrete time bins are used per second. To prevent the DTW distance from growing with K, the raw DTW distance is normalized by K. Furthermore, the boundary, monotonicity, and continuity constraints are applied to the warping path to account for the full duration of both signals, to maintain temporal ordering, and to avoid point skips, respectively. Across all melodic elements, the average normalized DTW distance is 0.1518 with a standard deviation of 0.0729 (Table 5). Figure 6 visualizes the DTW alignment paths for the bass, vocals, and others stems using the previous and current methods. Paths that are nearly diagonal indicate strong temporal alignment with minimal warping, highlighting the tactile signals’ capacity to preserve movements of pitch. In contrast, paths that curve away from the diagonal reflect temporal misalignment caused by latency and lower signal resolution, disrupting pitch movements.
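A sketch of this computation using librosa.sequence.dtw is shown below; the library enforces the boundary, monotonicity, and step constraints, and dividing the accumulated cost by the warping-path length K yields the normalized distance. The exact implementation used in this study may differ.

```python
import numpy as np
import librosa

def normalized_dtw(audio_seq, tactile_seq):
    """Normalized DTW distance between two 1-D pitch-level sequences."""
    a = np.atleast_2d(np.asarray(audio_seq, dtype=float))    # shape (1, N)
    b = np.atleast_2d(np.asarray(tactile_seq, dtype=float))  # shape (1, M)
    D, wp = librosa.sequence.dtw(X=a, Y=b, metric="euclidean")
    return float(D[-1, -1]) / len(wp)  # accumulated cost / path length K
```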

3.1.3. Rhythm Display

Accuracy quantifies how closely the rhythm of the tactile signal matches that of the original audio. Actuations are considered accurate only if a beat onset was detected for the particular 100 ms time bin, while any violations of this are considered an inaccuracy in rhythm display. The accuracy is computed as
$$ \mathrm{ACC} = \frac{1}{N} \sum_{i=1}^{N} \mathbf{1}\bigl(y_i = \hat{y}_i\bigr), $$
where N is the total number of time bins, y_i indicates whether a beat onset occurs in the audio at bin i, ŷ_i indicates whether a tactile actuation occurs at bin i, and 1(y_i = ŷ_i) is an indicator function that returns 1 when the tactile and audio signals match and 0 otherwise. Figure 7 illustrates an example of tactile pin actuations compared to the beat onsets from the original audio. The tactile actuations are almost always aligned with the beat onsets; however, latency sometimes causes delayed pin resets, leading to missed beats. The rhythmic pattern accuracy of the HMP-WD is 97.56% with a standard deviation of 0.0285 (Table 6).
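Assuming both signals are expressed as binary indicators per 100 ms bin, the accuracy is simply the fraction of matching bins:

```python
import numpy as np

def rhythm_accuracy(audio_onsets, tactile_onsets):
    """Fraction of 100 ms bins where tactile actuation matches the audio beat onsets."""
    y = np.asarray(audio_onsets)        # 1 if a beat onset falls in the bin, else 0
    y_hat = np.asarray(tactile_onsets)  # 1 if a pin actuation falls in the bin, else 0
    return float(np.mean(y == y_hat))
```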

3.2. Algorithm Processing Time

To demonstrate that the methods used are relatively efficient, the HMP-WD should be validated in terms of processing speed, particularly how long both the previous and current methods take to fully convert an audio file into multi-feature tactile signals. To accomplish this, a Jupyter Notebook (version 6.5.4) was made for each method, in which the algorithms were executed in sequential order for each of the five songs. The first code cell starts the timer, which ends when the last code cell runs, displaying the processing time. The results show that using spectral flux and YIN for multi-feature extraction increased processing time by only a few seconds on average compared to our previous method (Table 7).
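The timing procedure is equivalent to wrapping the full pipeline in a wall-clock timer, as in the sketch below; render_song_to_tactile_patterns is a hypothetical name standing in for the sequence of steps in Section 2.

```python
import time

def time_pipeline(song_path, render_song_to_tactile_patterns):
    """Measure end-to-end rendering time for one song (wall-clock seconds)."""
    start = time.perf_counter()
    render_song_to_tactile_patterns(song_path)  # full audio-to-tactile conversion
    return time.perf_counter() - start
```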

4. Discussion

The HMP-WD attained relatively high scores in its respective metrics on the tactile display of pitch, melody, timbre, and rhythm, and improvements were also observed when using spectral flux and YIN in the feature extraction process. Furthermore, these improvements came with only a small increase in the algorithm’s processing time compared to our previous method.
For pitch display, the current methods were formulated with the hypothesis that modifying the discretization step from four levels to eight would reduce the MAE of the tactile signals, as the distances between each discretized time point and the original pitch would become smaller. Compared to the previous method, whose MAE was 0.1240, the current implementation showed an overall improvement with an error of 0.1020, deviating from the original audio by only 10.2%. While improvements on the bass stem are negligible, the accompaniment and vocal stems showed significant improvements, with the latter reducing MAE by 0.0416.
The disparity of results among the three stems can be explained by the non-zero pitch ranges in the original audio, which play an integral role during discretization. It is important to note that when the eight levels are determined during the discretization step, only the non-zero pitch range is considered. The rationale is that starting from zero reduces touch-perceptible level changes as the original pitch increases. For instance, a stem whose non-zero pitch range is 400–600 Hz would feel more muffled than one with a range of 200–400 Hz, even if both have the same melody, because the former would actually utilize fewer of the upper levels than the latter. Conversely, if both utilize only their non-zero pitch range, the movements in pitch would be equally perceptible regardless of raw value, leading to better melody displays. Returning to the point on disparity, the bass stem will almost never have a higher raw pitch value than the other stems, and it will generally have a smaller non-zero pitch range. In one song, the estimated range for bass was 50–160 Hz, while the ranges were 50–725 Hz for vocals and 50–650 Hz for accompaniments. This means that, after discretization, the pitch levels for bass are shifted relatively higher or lower than those of the other stems. Despite the disparity of results across stems, increasing the number of levels during discretization was still shown to improve the HMP-WD’s pitch display.
For the melody/timbre display, the hypotheses are that using YIN would lead to more stable pitch estimates, allowing the tactile signals to replicate them better than the coarse pitch contours retrieved using spectral peaks, and that increasing the number of discretized pitch levels would allow the tactile signals to capture finer pitch movements. Compared to the previous DTW distance of 0.1392, the current implementation appears to display the overall melody worse, with a distance of 0.1518. On closer inspection, the driving forces behind the overall distance increase are the bass and accompaniment stems, whose distances increased by about 0.04 each, even as the vocal stem’s distance decreased by 0.0518.
Although it is reliable for general pitch estimation, YIN is best applied to monophonic vocal signals, as observed in other studies [44,45]. Thus, the significant decrease in DTW distance for the vocal stem is not surprising and rather supports the working hypotheses. On the other hand, the accompaniment stem is usually polyphonic, with some songs containing multiple instruments such as piano and guitar. YIN still estimates the pitch in polyphonic signals, but because of harmonization, the resulting pitch contours are relatively rougher, making it more difficult for the tactile signals to accurately replicate these sharp pitch movements. Similarly, while the bass stem is usually monophonic, its percussive aspects inevitably lead to rough pitch contours even when using YIN, and when combined with its pitch display issues discussed earlier, the resulting melody display is also affected. The working hypotheses were proven for the vocal stem, whereas future research can explore pitch estimation or discretization methods that improve the melody display for the bass and accompaniment stems.
For the rhythm display, we sought to improve the results of our previous approach using spectral flux and software optimization. Since spectral flux detects beat onsets at the frequency level rather than finding energy peaks, it no longer includes volume spikes that may merely come from noise, leading to reduced payloads and latencies. At the same time, we optimized the mobile application’s synchronization with the haptic gloves and music playback. As a result, the accuracy for all drum frequency subsets was drastically improved. Unlike related applications that immediately attempted to detect beats from the raw audio, the methods in this study account for beats with softer volume and operate on four different frequency subsets to further isolate each type of drum, obtained from a source separation process that isolates the drum stem from the vocals, bass, and accompaniments.
With these results, the HMP-WD has been shown to effectively display pitch, melody, timbre, and rhythm using only an audio file, extracting features and communicating them across a near-auditory frequency range. It retains the characteristics of the original audio while improving upon prior approaches in four key ways. First, this study uses piezoelectric actuators, which offer fine localization capabilities and can communicate frequencies beyond 1000 Hz through vibrotactile feedback, unlike other popular actuators. Among HMP studies with a glove form factor, this device aligns with prior work that applied stimuli to the fingertips [19,20,21] instead of only the phalanges, back of the hand, or palm [17,18]. The use of a mobile app to transmit tactile signals via Bluetooth has been explored before [19], but this study advances the approach by using BLE to improve portability. Second, the study introduces an algorithm that does not require a song’s MIDI file to extract pitch information, proposing YIN and discretization as an efficient and effective alternative. Unlike previous studies limited by external software [25,26] and a fixed number of notes [1,5,6,24,25,26], the proposed methods generate continuous pitch contours that can be discretized into any number of levels, adjusting for the number of actuators. This also removes the need for tactile illusions to increase pitch display capacity [24,26]. Third, this study provides a framework for beat extraction in HMP applications. While prior studies mentioned aligning vibrations with a beat [20], their accuracy was not validated. The proposed methods allow for beat extraction without relying on a bass band limited to 200 Hz [23,27,28] and suggest a frequency-based approach to detect soft-volume beats rather than an energy-based approach [23]. Fourth, source separation is used to separate the sounds of different instruments to display timbre, similar to prior studies [5,26].
A limitation of the proposed algorithm is that the audio file must be preprocessed before it can be used in the HMP-WD. However, despite using spectral flux and YIN for multi-feature extraction, the methods take an average of only 180.45 s to process an audio file from start to finish for permanent, offline use. This is a minimal increase of less than 10 s compared to our previous methods, highlighting the overall efficiency of the audio-tactile rendering algorithm.

5. Conclusions

With normalized MAE and DTW distances of 0.1020 and 0.1518, respectively, and a rhythmic pattern accuracy of 97.56%, the wearable HMP effectively communicates multiple musical features. The proposed audio-tactile rendering algorithm achieves this without the pitfalls of prior approaches. Using piezoelectric actuators, this HMP also overcomes the 1000 Hz frequency range barrier of previous devices. Additionally, the HMP-WD is more efficient, with an average preprocessing time of only 180.45 s and no requirement for MIDI files or external software. The study also proposes a volume-independent method to extract beats from music for HMP applications and a way to display timbre information using source separation and feature-pin mapping.
The findings suggest that the HMP-WD can communicate the pitch, melody, timbre, and rhythm of music at an auditory frequency range while retaining most of the characteristics of the original audio. The YIN algorithm was shown to greatly improve the tactile display of vocals and slightly improve the display of bass and accompaniments. Spectral flux and software optimizations significantly improved rhythm display.
Future research should focus on reducing the MAE and DTW distances of melodic displays and validating the results with more song data. Other actuators could be explored to improve the results, and adjustments to the methods could be made, such as using probabilistic YIN [45] or related source separation, beat extraction, and discretization methods. Subjective experiences with the device could also be validated through user testing. Furthermore, these methods could be used to enhance the music instrument playing experience [22] or integrated with other modalities like VR [2]. Future work could also study the human emotional and cognitive perception of music using the HMP-WD.

Author Contributions

Conceptualization, A.R.S.; methodology, A.B.R.A.; software, A.B.R.A.; validation, A.B.R.A. and T.J.T.T.; formal analysis, A.B.R.A. and T.J.T.T.; investigation, A.B.R.A. and T.J.T.T.; resources, A.R.S.; data curation, A.B.R.A.; writing—original draft preparation, A.B.R.A. and T.J.T.T.; writing—review and editing, A.R.S. and H.-H.K.; visualization, A.B.R.A.; supervision, A.R.S. and H.-H.K.; project administration, A.R.S.; funding acquisition, A.R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Science and Technology Council of Taiwan under grant numbers NSTC 113-2221-E-167-049 and the Ministry of Education of Taiwan under grant number 13001110179-EDU.

Data Availability Statement

The data presented in this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HMP | Haptic Music Player
HMP-WD | Haptic Music Player—Wearable Device
PCB | Printed Circuit Board
BLE | Bluetooth Low Energy
MAE | Mean Absolute Error
DTW | Dynamic Time Warping

References

  1. Balandra, A.; Mitake, H.; Hasegawa, S. Haptic Music Player—Synthetic audio-tactile stimuli generation based on the notes’ pitch and instruments’ envelope mapping. In Proceedings of the International Conference on New Interfaces for Musical Expression, Brisbane, Australia, 11–15 July 2016; pp. 90–95. [Google Scholar]
  2. Venkatesan, T.; Wang, Q.J. Feeling connected: The role of haptic feedback in VR concerts and the impact of haptic music players on the music listening experience. Arts 2023, 12, 148. [Google Scholar] [CrossRef]
  3. Remache-Vinueza, B.; Trujillo-León, A.; Zapata, M.; Sarmiento-Ortiz, F.; Vidal-Verdú, F. Audio-tactile rendering: A review on technology and methods to convey musical information through the sense of touch. Sensors 2021, 21, 6575. [Google Scholar] [CrossRef] [PubMed]
  4. Jack, R.; McPherson, A.; Stockman, T. Designing tactile musical devices with and for deaf users: A case study. In Proceedings of the International Conference on the Multimodal Experience of Music, Sheffield, UK, 23–25 March 2015; pp. 23–25. [Google Scholar]
  5. Karam, M.; Russo, F.A.; Branje, C.; Price, E.; Fels, D.I. Towards a model human cochlea: Sensory substitution for crossmodal audio-tactile displays. In Proceedings of the Graphics Interface, Windsor, ON, Canada, 28–30 May 2008; pp. 267–274. [Google Scholar]
  6. Alves Araujo, F.; Lima Brasil, F.; Candido Lima Santos, A.; de Sousa Batista Junior, L.; Pereira Fonseca Dutra, S.; Eduardo Coelho Freire Batista, C. Auris system: Providing vibrotactile feedback for hearing impaired population. BioMed Res. Int. 2017, 2017, 2181380. [Google Scholar] [CrossRef] [PubMed]
  7. Fontana, F.; Camponogara, I.; Vallicella, M.; Ruzzenente, M.; Cesari, P. An exploration on whole-body and foot-based vibrotactile sensitivity to melodic consonance. In Proceedings of the 13th International Conference in Sound and Music Computing (SMC 2016), Hamburg, Germany, 31 August–3 September 2016; pp. 143–150. [Google Scholar]
  8. Yamazaki, R.; Ohkura, M. Affective evaluation while listening to music with vibrations to the body. In Proceedings of the International Conference on Applied Human Factors and Ergonomics, Orlando, FL, USA, 21–25 July 2018; pp. 379–385. [Google Scholar]
  9. Nanayakkara, S.; Taylor, E.; Wyse, L.; Ong, S.H. An enhanced musical experience for the deaf: Design and evaluation of a music display and a haptic chair. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Boston, MA, USA, 4–9 April 2009; pp. 337–346. [Google Scholar]
  10. Petry, B.; Huber, J.; Nanayakkara, S. Scaffolding the music listening and music making experience for the deaf. In Assistive Augmentation; Springer: Singapore, 2017; pp. 23–48. [Google Scholar]
  11. Yamazaki, Y.; Mitake, H.; Oda, R.; Wu, H.-H.; Hasegawa, S.; Takekoshi, M.; Tsukamoto, Y.; Baba, T. Hapbeat: Single DOF wide range wearable haptic display. In ACM SIGGRAPH 2017 Emerging Technologies; Association for Computing Machinery: New York, NY, USA, 2017; pp. 1–2. [Google Scholar]
  12. Egloff, D.C.; Wanderley, M.M.; Frissen, I. Haptic display of melodic intervals for musical applications. In Proceedings of the 2018 IEEE Haptics Symposium (HAPTICS), San Francisco, CA, USA, 25–28 March 2018; pp. 284–289. [Google Scholar]
  13. Petry, B.; Illandara, T.; Elvitigala, D.S.; Nanayakkara, S. Supporting rhythm activities of deaf children using music-sensory-substitution systems. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–10. [Google Scholar]
  14. La Versa, B.; Peruzzi, I.; Diamanti, L.; Zemolin, M. MUVIB: Music and vibration. In Proceedings of the 2014 ACM International Symposium on Wearable Computers: Adjunct Program, Seattle, WA, USA, 13–17 September 2014; pp. 65–70. [Google Scholar]
  15. Hattwick, I.; Franco, I.; Giordano, M.; Egloff, D.; Wanderley, M.M.; Lamontagne, V.; Arawjo, I.A.; Salter, C.L.; Martinucci, M. Composition Techniques for the Ilinx Vibrotactile Garment. In Proceedings of the ICMC, Denton, TX, USA, 25 September–1 October 2015. [Google Scholar]
  16. Hashizume, S.; Sakamoto, S.; Suzuki, K.; Ochiai, Y. Livejacket: Wearable music experience device with multiple speakers. In Proceedings of the International Conference on Distributed, Ambient, and Pervasive Interactions, Las Vegas, NV, USA, 15–20 July 2018; pp. 359–371. [Google Scholar]
  17. Young, G.; Murphy, D.; Weeter, J. Audio-tactile glove. In Proceedings of the International Conference on Digital Audio Effects (DAFx), Maynooth, Ireland, 2–5 September 2013. [Google Scholar]
  18. Mazzoni, A.; Bryan-Kinns, N. Mood glove: A haptic wearable prototype system to enhance mood music in film. Entertain. Comput. 2016, 17, 9–17. [Google Scholar] [CrossRef]
  19. Enriquez, K.; Palacios, M.; Pallo, D.; Guerrero, G. SENSE: Sensory component VR application for hearing impaired people to enhance the music experience. In Proceedings of the 2020 15th Iberian Conference on Information Systems and Technologies (CISTI), Sevilla, Spain, 24–27 June 2020; pp. 1–6. [Google Scholar]
  20. Lucía, M.J.; Revuelta, P.; García, Á.; Ruiz, B.; Vergaz, R.; Cerdán, V.; Ortiz, T. Vibrotactile captioning of musical effects in audio-visual media as an alternative for deaf and hard of hearing people: An EEG study. IEEE Access 2020, 8, 190873–190881. [Google Scholar] [CrossRef]
  21. Alvaro, G.-L.; Ricardo, V.B.; Manuel, S.P.J.; Tomás, O.; Víctor, C. New haptic systems for elicit emotions in audio-visual events for hearing impaired people. Procedia Comput. Sci. 2024, 237, 533–543. [Google Scholar] [CrossRef]
  22. Papetti, S.; Järveläinen, H.; Schiesser, S. Interactive vibrotactile feedback enhances the perceived quality of a surface for musical expression and the playing experience. IEEE Trans. Haptics 2021, 14, 635–645. [Google Scholar] [CrossRef] [PubMed]
  23. Hwang, I.; Lee, H.; Choi, S. Real-time dual-band haptic music player for mobile devices. IEEE Trans. Haptics 2013, 6, 340–351. [Google Scholar] [CrossRef] [PubMed]
  24. Remache-Vinueza, B.; Trujillo-León, A.; Clim, M.-A.; Sarmiento-Ortiz, F.; Topon-Visarrea, L.; Jensenius, A.R.; Vidal-Verdú, F. Mapping monophonic MIDI tracks to vibrotactile stimuli using tactile illusions. In Proceedings of the International Workshop on Haptic and Audio Interaction Design, London, UK, 25–26 August 2022; pp. 115–124. [Google Scholar]
  25. Trivedi, U.; Alqasemi, R.; Dubey, R. Wearable musical haptic sleeves for people with hearing impairment. In Proceedings of the 12th ACM International Conference on Pervasive Technologies Related to Assistive Environments, Island of Rhodes, Greece, 5–7 June 2019; pp. 146–151. [Google Scholar]
  26. Moora, R.V.; Prabhakar, G. Tactile melodies: A desk-mounted haptics for perceiving musical experiences. arXiv 2024, arXiv:2408.06449. [Google Scholar] [CrossRef]
  27. Chang, A.; O’Sullivan, C. Audio-haptic feedback in mobile phones. In Proceedings of the CHI’05 Extended Abstracts on Human Factors in Computing Systems, Portland, OR, USA, 2–7 April 2005; pp. 1264–1267. [Google Scholar]
  28. Mayor, O. An adaptative real-time beat tracking system for polyphonic pieces of audio using multiple hypotheses. In Proceedings of the MOSART Workshop on Current Research Directions in Computer Music, Barcelona, Spain, 15–17 November 2001. [Google Scholar]
  29. See, A.R.; Alcuitas, A.B.R.; Tiong, T.J.T.; Sasing, V.J.A. Music Tactalizer: A Wearable Haptic Music Player with Multi-Feature Audio-Tactile Rendering. In Proceedings of the 2025 Seventh International Symposium on Computer, Consumer and Control (IS3C), Taichung, Taiwan, 27–30 June 2025; pp. 1–4. [Google Scholar]
  30. De Cheveigné, A.; Kawahara, H. YIN, a fundamental frequency estimator for speech and music. J. Acoust. Soc. Am. 2002, 111, 1917–1930. [Google Scholar] [CrossRef] [PubMed]
  31. Stark, B.; Carlstedt, T.; Hallin, R.; Risling, M. Distribution of human Pacinian corpuscles in the hand: A cadaver study. J. Hand Surg. Br. Eur. Vol. 1998, 23, 370–372. [Google Scholar] [CrossRef] [PubMed]
  32. Papetti, S.; Saitis, C. Musical Haptics; Springer Nature: Berlin, Germany, 2018. [Google Scholar]
  33. See, A.R.; Tiong, T.J.; De Guzman, L.B.; Contee, K.; Lim, G.G.; Yebes, C.S. Development of Localized Cutaneous Force Feedback System for Robotics Assisted Surgery Systems. Procedia Comput. Sci. 2024, 246, 1160–1169. [Google Scholar] [CrossRef]
  34. Abad, A.C.; Reid, D.; Ranasinghe, A. A novel untethered hand wearable with fine-grained cutaneous haptic feedback. Sensors 2022, 22, 1924. [Google Scholar] [CrossRef] [PubMed]
  35. Sasing, V.J.A.; See, A.R.; Tiong, T.J.T.; Alcuitas, A.B.R.; Kuo, H.-H.; Seepold, R. Development of Cutaneous Feedback Haptic Glove for VR Industrial Training. In Proceedings of the 2025 1st International Conference on Consumer Technology (ICCT-Pacific), Matsue, Japan, 29–31 March 2025; pp. 1–4. [Google Scholar]
  36. Sun, Q.; Li, S.; Yao, Z.; Feng, Y.-L.; Mi, H. PalmBeat: A Kinesthetic Way to Feel Groove With Music. In Proceedings of the 12th Augmented Human International Conference, Geneva, Switzerland, 27–28 May 2021; pp. 1–8. [Google Scholar]
  37. Défossez, A.; Usunier, N.; Bottou, L.; Bach, F. Demucs: Deep extractor for music sources with extra unlabeled data remixed. arXiv 2019, arXiv:1909.01174. [Google Scholar] [CrossRef]
  38. Smith, J.O., III. Spectral Audio Signal Processing; W3K: São Leopoldo, Brazil, 2011. [Google Scholar]
  39. Alonso, M.; Richard, G.; David, B. Extracting note onsets from musical recordings. In Proceedings of the 2005 IEEE International Conference on Multimedia and Expo, Amsterdam, The Netherlands, 6–8 July 2005; p. 4. [Google Scholar]
  40. Benetos, E. Pitched instrument onset detection based on auditory spectra. In Proceedings of the 10th International Society for Music Information Retrieval Conference (ISMIR 2009), Kobe, Japan, 26–30 October 2009. [Google Scholar]
  41. Dixon, S. Simple spectrum-based onset detection. MIREX 2006 2006, 62. Available online: https://www.eecs.qmul.ac.uk/~simond/pub/2006/mirex-onset.pdf (accessed on 8 August 2025).
  42. Cantri, F.M.; Darojah, Z.; Ningrum, E.S. Cumulative Scores Based for Real-Time Music Beat Detection System. In Proceedings of the 2019 International Electronics Symposium (IES), Surabaya, Indonesia, 27–28 September 2019; pp. 293–298. [Google Scholar]
  43. Yamada, M.; Matsuo, A. Development of rhythm practice supporting system with real-time onset detection. In Proceedings of the 2015 International Conference on Advances in Electrical Engineering (ICAEE), Dhaka, Bangladesh, 17–19 December 2015; pp. 153–156. [Google Scholar]
  44. Li, X.; He, C. Incorporating Cumulative Mean Normalized Difference Function Towards Intepretable Monophonic Singing Voice Pitch Extraction. In Proceedings of the 2024 9th International Conference on Intelligent Computing and Signal Processing (ICSP), Xi’an, China, 19–21 April 2024; pp. 709–713. [Google Scholar]
  45. Mauch, M.; Dixon, S. pYIN: A fundamental frequency estimator using probabilistic threshold distributions. In Proceedings of the 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Florence, Italy, 4–9 May 2014; pp. 659–663. [Google Scholar]
Figure 1. The rendering system converts an audio file to tactile patterns, which the mobile app synchronizes with music and visuals and passes to the haptic gloves to provide vibrotactile feedback.
Figure 2. The HMP-WD consists of piezoelectric actuators controlled by a customized printed circuit board and organized in a glove form factor: (a) parts of the device and (b) the control pipeline.
Figure 3. The HMP-WD mobile application and feature-pin mapping.
Figure 4. The improved multi-feature audio-tactile rendering algorithm utilizes spectral flux and YIN algorithms to extract rhythm and pitch before discretization to tactile signals.
Figure 5. Comparison of pitch contours between audio and tactile signals of the vocal stem from song 3: (a) pitch contours of the cleaned audio; (b) pitch contours of the tactile signals; (c) a 10 s pitch contour comparison plot.
Figure 6. DTW alignment paths for the same song: (a–c) previous vs. (d–f) current methods.
Figure 7. Comparison of beat onsets between audio and tactile signals of the “toms” frequency subset from the drums stem of a song, showing some missed onsets in the tactile signal.
Table 1. Overview of research on related HMPs.

HMP | Actuators | Form | Methods | Features
Model Human Cochlea [5] | Speaker | Belt | MIDI 3, TM 4, FM 5 | Pitch, Timbre
Audio-Tactile Glove [17] | Voice coil | Glove | Transduction | Pitch
Dual-Band HMP [23] | DMA 1 | Handheld | Energy-based | Rhythm
Mood Glove [18] | Voice coil | Glove | Transduction | Rhythm
HMP (Spidar-G2) [1] | DC Motors | Installation | MIDI, ADSR 6 filter | Pitch
Musical Haptic Sleeves [25] | Speaker | Sleeve | MIDI, transduction | Pitch
SENSE [19] | Voice coil | Glove | Transduction | Rhythm
Vibrotactile Captioning [20] | ERM 2 | Glove | BPM Detection | Tempo
Hap-phones [24] | Voice coil | Actuators | MIDI, Tactile Illusions | Rhythm, Tempo, Melody
Haptic gloves to elicit emotions [21] | ERM | Glove | BPM Detection | Tempo
Tactile Melodies [26] | ERM | Installation | MIDI, Tactile Illusions | Pitch, Melody, Rhythm, Timbre
Music Tactalizer (ours) [29] | Piezoelectric | Glove | Spectral Flux, YIN | Pitch, Melody, Rhythm, Timbre
1 Dual-Mode Actuator; 2 Eccentric Rotating Mass; 3 Musical Instrument Digital Interface; 4 Track Model; 5 Frequency Model; 6 Attack, Decay, Sustain, and Release.
Table 2. Songs used for the technical evaluation.

Song Number | Artist/s—Title | Genre/s
1 | Maroon 5—She Will Be Loved | Pop Rock, Soft Rock
2 | Tame Impala—The Less I Know The Better | Psychedelic Pop, Disco, Funk, Pop
3 | Mark Ronson & Bruno Mars—Uptown Funk | Funk, Contemporary Soul, Electronica
4 | Adele—Rolling In The Deep | Pop, Blues, R&B
5 | Rihanna & Jay-Z—Umbrella | R&B/Soul, Pop
Table 3. Summary of normalized signal fidelity testing results for previous and current HMP-WD.

Method | MAE (Bass) | MAE (Vocal) | MAE (Other) | DTW Dist. (Bass) | DTW Dist. (Vocal) | DTW Dist. (Other) | Accuracy (Kick) | Accuracy (Snare) | Accuracy (Toms) | Accuracy (Cym)
Previous | 0.1151 | 0.1296 | 0.1272 | 0.1015 | 0.1931 | 0.1230 | 0.82 | 0.79 | 0.76 | 0.76
Current | 0.1147 | 0.0880 | 0.1033 | 0.1470 | 0.1413 | 0.1670 | 0.98 | 0.98 | 0.97 | 0.97
Table 4. MAE of tactile pitch display from pitch of audio signals, by each melodic element.

Melodic Element | Song 1 | Song 2 | Song 3 | Song 4 | Song 5 | Average MAE
Bass | 0.1829 | 0.1095 | 0.0754 | 0.1230 | 0.0829 | 0.1147
Vocal | 0.1047 | 0.1122 | 0.0326 | 0.1123 | 0.0784 | 0.0880
Other | 0.1450 | 0.2017 | 0.0276 | 0.1112 | 0.0311 | 0.1033
Average MAE | 0.1442 | 0.1411 | 0.0452 | 0.1155 | 0.0641 | 0.1020
Table 5. Normalized DTW distances of tactile melodic display from melody of audio signals, by each melodic element.

Melodic Element | Song 1 | Song 2 | Song 3 | Song 4 | Song 5 | Avg. Norm. DTW Dist.
Bass | 0.2196 | 0.1484 | 0.1160 | 0.1552 | 0.0960 | 0.1470
Vocal | 0.1143 | 0.1934 | 0.0445 | 0.2137 | 0.1406 | 0.1413
Other | 0.3000 | 0.3350 | 0.0262 | 0.1419 | 0.0317 | 0.1670
Avg. Norm. DTW Dist. | 0.2113 | 0.2256 | 0.0622 | 0.1703 | 0.0894 | 0.1518
Table 6. Accuracy of tactile rhythm display from rhythm of audio, by each percussive element.

Percussive Element | Song 1 | Song 2 | Song 3 | Song 4 | Song 5 | Avg. Accuracy
Kick | 0.9386 | 0.9926 | 0.9915 | 1.0000 | 0.9982 | 0.9842
Snare | 0.9229 | 0.9821 | 0.9908 | 1.0000 | 0.9978 | 0.9787
Toms | 0.9200 | 0.9775 | 0.9875 | 0.9758 | 0.9978 | 0.9717
Cymbals | 0.9222 | 0.9834 | 0.9878 | 0.9469 | 0.9982 | 0.9677
Avg. Accuracy | 0.9259 | 0.9839 | 0.9894 | 0.9807 | 0.9980 | 0.9756
Table 7. Audio-tactile rendering algorithm processing time comparison: previous vs. current.

Method | Song 1 | Song 2 | Song 3 | Song 4 | Song 5 | Avg. Time (s)
Previous | 185.04 | 168.20 | 182.00 | 148.35 | 185.31 | 173.78
Current | 167.40 | 168.36 | 204.17 | 168.13 | 194.17 | 180.45
