Article

Diffusion-Inspired Masked Language Modeling for Symbolic Harmony Generation on a Fixed Time Grid

by Maximos Kaliakatsos-Papakostas 1,2,3,*, Dimos Makris 1,3, Konstantinos Soiledis 1,3, Konstantinos-Theodoros Tsamis 1,3, Vassilis Katsouros 1,2 and Emilios Cambouropoulos 4

1 Department of Music Technology and Acoustics, Hellenic Mediterranean University, 74100 Rethymno, Greece
2 Institute of Language and Speech Processing, Athena RC, 15125 Marousi, Greece
3 Archimedes, Athena RC, 15125 Marousi, Greece
4 School of Music Studies, Aristotle University of Thessaloniki, 57001 Thessaloniki, Greece
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(17), 9513; https://doi.org/10.3390/app15179513
Submission received: 1 August 2025 / Revised: 22 August 2025 / Accepted: 28 August 2025 / Published: 29 August 2025
(This article belongs to the Special Issue The Age of Transformers: Emerging Trends and Applications)

Abstract

We present a novel encoder-only Transformer model for symbolic music harmony generation, based on a fixed time-grid representation of melody and harmony. Inspired by denoising diffusion processes, our model progressively unmasks harmony tokens over a sequence of discrete stages, learning to reconstruct the full harmonic structure from partial context. Unlike autoregressive models, this formulation enables flexible, non-sequential generation and supports explicit control over harmony placement. The model is stage-aware, receiving timestep embeddings analogous to diffusion timesteps, and is conditioned on both a binary piano roll and a pitch class roll to capture melodic context. We explore two unmasking schedules—random token revealing and midpoint doubling—both requiring a fixed and significantly reduced number of model calls at inference time. While our approach achieves competitive performance with strong autoregressive baselines (GPT-2 and BART) across several harmonic metrics, its key advantages lie in controllability, structured decoding with fixed inference steps, and alignment with musical structure. Ablation studies further highlight the role of stage awareness and pitch class conditioning. Our results position this method as a viable and interpretable alternative for symbolic harmony generation and a foundation for future work on structured, controllable musical modeling.

1. Introduction

Transformer architectures are increasingly being studied for a wide range of sequence generation tasks. Melodic harmonization in symbolic music is a particularly interesting case. Given a melodic sequence (a series of notes), the goal is to generate a harmonic sequence (a series of chords) that is coherent and musically compatible with the melody. This task requires that the generated chords align not only with the local melodic context but also maintain harmonic coherence across the entire sequence. In this way, the decision for each chord must integrate information from both the melody and harmony, spanning local transitions (e.g., chord to chord) and global structures (e.g., recurring patterns). These properties make melodic harmonization a rich domain for exploring advanced sequence modeling methods.
Melodic harmonization has been approached with a variety of neural sequence models, including bidirectional LSTMs [1,2,3,4] and Transformer-based architectures [5,6,7,8]. In these approaches, harmonization is often framed as a translation or summarization task, where the melody is “translated” into a compatible harmonic sequence that can also serve as a reduction or harmonic abstraction or summarization of the original melody. Notably, all existing melodic harmonization methods, to our knowledge, rely on autoregressive decoding, which generates chords sequentially one at a time. Additionally, such methods consider chord rhythm patterns that include chord repetitions as part of the melodic harmonization process. This differs from a stricter definition of melodic harmonization as assigning harmony or chords to melody segments and disregarding chord repetitions.
Symbolic music diffusion has evolved in parallel with developments in image-based diffusion models, although such models have not yet been applied to melodic harmonization. Some approaches perform diffusion in a continuous approximation of the discrete token space [9,10], while others apply diffusion in the latent space of a VAE [11]. For discrete representations, recent work treats the symbolic music surface as piano roll images and applies diffusion in the image domain using U-Net architectures [12,13,14,15,16,17].
Recent advances in discrete diffusion models for language have further demonstrated the potential of gradually denoising or unmasking sequences in non-autoregressive settings. Approaches like MaskGIT [18] and D3PMs [19] apply diffusion-style generation over discrete tokens rather than continuous vectors. These methods leverage iterative refinement, enabling flexible conditioning and often faster generation than autoregressive models. While diffusion was initially developed for continuous data in vision [20], its adaptation to discrete spaces has opened promising directions for text, code, and symbolic domains. Inspired by this trend, our work brings such iterative, flexible generation to the domain of melodic harmonization, which has remained underexplored in the context of diffusion.
Closer to our proposed approach are methods that combine transformer-based models with ideas from diffusion. These include models that apply diffusion to transformer logits [21] or to discrete latent codes learned through VQ-VAE [22]. Particularly relevant is the token-based masking and unmasking strategy of discrete diffusion probabilistic models (D3PMs) proposed in [23], where generation is performed by gradually unmasking tokens using a transformer encoder. This class of models allows flexible conditioning, enabling parts of the musical sequence to be fixed while the rest is generated. Such flexibility is difficult to achieve in autoregressive models, where left-to-right generation limits global control without backtracking.
This paper presents a novel encoder-only transformer model for melodic harmonization, trained using a discrete diffusion-inspired method for gradually unmasking harmony tokens. (The code is available at https://github.com/NeuraLLMuse/GridMLMelHarm (accessed on 22 August 2025) under the Apache 2.0 license. The dataset is subject to copyright and is available upon request.) The melody is provided as input in a binary piano roll format augmented with pitch class representations, which are directly embedded and injected into the encoder. The output of the model is a sequence of chord symbols that cover segments of the melody, reflecting pure harmonic rhythm (points where chord symbols change, disregarding chord repetitions). The key contributions are as follows:
  • We propose an encoder-only Transformer architecture for melodic harmonization generation, avoiding the limitations of autoregressive decoders and enabling efficient bidirectional context modeling.
  • We introduce two discrete diffusion-inspired training schemes for symbolic music harmonization implemented through progressive unmasking strategies compatible with masked language modeling objectives.
  • We design and evaluate a binary piano roll melody representation, incorporating both pitch-time and pitch class-time grids, that is injected directly into the encoder input stream for effective melody-to-harmony conditioning.
We evaluate the proposed model against autoregressive baselines trained and tested on the same datasets, highlighting that our non-autoregressive, context-aware generation approach is a viable alternative to autoregressive models that runs faster (a constant number of model calls regardless of harmonization length) and additionally enables interactive controllability through predefined chord constraints. The purpose and usefulness of melodic harmonization systems that enable the use of chord constraints have been well documented in the literature. The purpose of this paper is to examine the proposed architecture and the diffusion-inspired unmasking strategies. Evaluating the chord constraint capabilities would require extensive subjective evaluation with music experts, which is beyond the scope of this paper. An online version (https://huggingface.co/spaces/NeuraLLMuse/masked-mel-harm-gradio, accessed on 22 August 2025) of the proposed system variations that accepts chord constraints is available for exploration.

2. Method

This section presents the symbolic music representation, the encoder architecture with stage-aware conditioning, and the progressive unmasking strategy of our model. We begin by describing the discretized input encoding for the melody, harmony, and time signature, followed by the architectural components and training dynamics.

2.1. Input Representation

We represent each musical piece on a fixed temporal grid based on 16th-note intervals, corresponding to four positions per beat. All events are quantized and aligned to this grid to ensure temporal consistency across samples.
The melody is encoded using two components concatenated along the feature axis: a binary piano roll and a pitch class roll (pc-roll). The piano roll is a binary matrix $\mathbf{m} \in \{0,1\}^{T \times P}$, where $T = 256$ is the number of 16th-note time steps and $P = 88$ corresponds to the MIDI pitches in the range from 21 (A0) to 108 (C8). The pc-roll is a binary matrix $\mathbf{c} \in \{0,1\}^{T \times 12}$ encoding the pitch class (chroma) of the melody notes at each timestep; a similar melody encoding scheme was followed in [6]. This chroma channel encourages the model to reason over the harmonic context rather than the surface pitch alone. The concatenated melody input thus has a shape of $(T, 100)$.
The harmony is represented as a sequence of discrete chord tokens drawn from a fixed vocabulary $V$, denoted as $\mathbf{y} \in V^{T}$. To ensure uniformity across enharmonic or symbolic variants (e.g., Cmaj7 vs. C), chord symbols are normalized using the MIR_eval [24] standard (e.g., C:maj7). The vocabulary consists of $12 \times 29 = 348$ chord types, where 12 denotes the pitch class of the root and 29 denotes the possible chord qualities. The harmony is aligned to the 16th-note grid but changes less frequently than the melody. Each chord is repeated across all time steps that fall within its duration. For instance, if C:maj7 spans two beats, then it occupies eight consecutive grid positions. If no chord is present at some point in the harmonization, then a special “no chord” token (<nc>) is employed, while a sequence of trailing <pad> tokens is used to fill the harmonization if it finishes before the fixed 256-token duration.
A separate vector, $\mathbf{g} \in \{0,1\}^{16}$, encodes the time signature. This is a 16-dimensional binary vector whose first 14 bits represent the numerator (a one-hot over the values 2 to 15) and whose final 2 bits represent the denominator (a one-hot over 4 or 8). This vector is prepended to the sequence and enables the model to capture the metrical structure.
Thus, the full input sequence has a length of 513, with 1 position for the time signature and 256 positions each for the melody and harmony. The time signature vector is prepended at position 0. After that, each melody timestep contributes its 100-dimensional piano roll and pc-roll features (positions 1–256), followed by the chord tokens (positions 257–512). During training, only chord tokens may be masked, depending on the stage (see Section 2.3).
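To make this layout concrete, the following Python sketch builds the three input components for a toy fragment on the 16th-note grid. Function names, the toy note list, and the toy vocabulary are illustrative assumptions (and the <pad> handling for short pieces is omitted); this is not the released code.

```python
import numpy as np

T, P = 256, 88            # 16th-note steps; MIDI pitches 21 (A0) to 108 (C8)
PITCH_LOW = 21

def build_melody_rolls(notes):
    """notes: list of (onset_step, duration_steps, midi_pitch) tuples."""
    piano_roll = np.zeros((T, P), dtype=np.float32)
    pc_roll = np.zeros((T, 12), dtype=np.float32)
    for onset, dur, midi_pitch in notes:
        for step in range(onset, min(onset + dur, T)):
            piano_roll[step, midi_pitch - PITCH_LOW] = 1.0
            pc_roll[step, midi_pitch % 12] = 1.0
    return np.concatenate([piano_roll, pc_roll], axis=1)   # shape (T, 100)

def build_time_signature(numerator, denominator):
    """16-dim vector: 14 one-hot bits for numerators 2..15, 2 bits for denominators 4/8."""
    g = np.zeros(16, dtype=np.float32)
    g[numerator - 2] = 1.0
    g[14 if denominator == 4 else 15] = 1.0
    return g

def build_harmony_grid(chord_onsets, vocab):
    """chord_onsets: list of (onset_step, chord_symbol); each chord is repeated
    over every 16th-note position it covers, until the next chord onset."""
    tokens = ["<nc>"] * T
    for idx, (onset, symbol) in enumerate(chord_onsets):
        end = chord_onsets[idx + 1][0] if idx + 1 < len(chord_onsets) else T
        for step in range(onset, end):
            tokens[step] = symbol
    return [vocab[token] for token in tokens]

# Toy example (illustrative values): a short 4/4 fragment
melody = build_melody_rolls([(0, 4, 60), (4, 4, 64), (8, 8, 67)])
g = build_time_signature(4, 4)
vocab = {"<nc>": 0, "C:maj7": 1, "G:maj": 2}                 # toy vocabulary
harmony_ids = build_harmony_grid([(0, "C:maj7"), (8, "G:maj")], vocab)
print(melody.shape, g.shape, len(harmony_ids))               # (256, 100) (16,) 256
```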

2.2. Model Architecture

The core of the proposed method is an encoder-only Transformer architecture inspired by BERT [25] and adapted for generation via masked language modeling (MLM). The model predicts chord tokens conditioned on a fixed time signature, the melodic context, and the visible (i.e., unmasked) portion of the harmony. During inference, the harmony sequence is initially fully masked using <mask> tokens.
At each successive unmasking stage $t$, a stage embedding vector $\mathbf{s}_t \in \mathbb{R}^{d}$ is computed by the model. This embedding provides the model with information about the current training or generation stage, analogous to the timestep embeddings used in diffusion models, allowing it to adjust its predictions based on how much of the harmony is already visible. At each unmasking stage, a partially masked harmony sequence $\mathbf{y}_{\text{in}}^{(t)}$ is provided as part of the overall input. Some portion of the masked tokens needs to be predicted and unmasked, depending on $t$, progressively revealing the complete harmony sequence. During training, the model learns this iterative unmasking process by estimating the conditional distribution over the target tokens given the visible context:
$$p_\theta\!\left(\mathbf{y}_{\text{target}}^{(t)} \mid \mathbf{y}_{\text{in}}^{(t)}, \mathbf{g}, \mathbf{m}, \mathbf{c}, t\right),$$
where $\mathbf{y}_{\text{target}}^{(t)}$ contains the subset of harmony tokens to be predicted at stage $t$. This formulation enables harmonization in a non-autoregressive manner while preserving the ability to condition on both the melodic input and previously revealed chords.
Figure 1 illustrates the proposed architecture. The key differences from a standard Transformer encoder are as follows:
  • Binary melodic and time signature inputs: The inputs for the time signature ($\mathbf{g}$) and the melody and pitch classes ($\mathbf{m}$ and $\mathbf{c}$, respectively) are binary vectors rather than token embeddings. The corresponding model outputs for those components are ignored during both training and inference (denoted as “ignored output” in Figure 1). The model is trained only to predict harmony tokens.
  • Trainable positional embeddings: Positional embeddings, $\mathbf{p}$, are randomly initialized and trained jointly with the model parameters. This design allows the model to learn how relative positions in the 16th-note grid correspond to meaningful temporal relationships between the melody and harmony, rather than imposing a fixed positional prior.
  • Stage-awareness conditioning: A stage embedding layer $E_s$ receives an integer index indicating the current unmasking stage $t$. The stage embedding vector $\mathbf{s}_t = E_s(t)$ is replicated across the 513 “time steps” and concatenated to the output of the positional embedding layer (which also spans 513 “time steps”). This augmented input is then passed through a projection $W_x$ that maps the concatenated input to the dimensionality of the Transformer encoder, producing the final input to the encoder.
The overall input of the model is formed by passing the time signature ($\mathbf{g}$), the melody ($\mathbf{m}$) concatenated with the pitch classes ($\mathbf{c}$) for each time step, and the currently unmasked harmony ($\mathbf{y}_{\text{in}}^{(t)}$) through dedicated trainable feedforward layers to match the dimensionality of the Transformer model. (The trainable layer for $\mathbf{y}_{\text{in}}^{(t)}$, annotated as $E_y$, is actually an embedding layer, i.e., linear with no bias, since it is intended to map one-hot tokens to the dimensionality of the model.) The final input of the model is obtained by
$$\mathbf{z}^{(t)} = \operatorname{concat}\!\left( W_g(\mathbf{g}),\; W_{m\_c}\!\left(\operatorname{concat}(\mathbf{m}, \mathbf{c})\right),\; E_y\!\left(\mathbf{y}_{\text{in}}^{(t)}\right) \right).$$
At each step, this overall input embedding is combined with positional and stage information to form the final input to the Transformer encoder model as follows:
$$\mathbf{x}^{(t)} = W_x\!\left(\operatorname{concat}\!\left(\mathbf{z}^{(t)} + \mathbf{p},\; \mathbf{s}_t\right)\right),$$
where $W_x$ is a linear projection layer that maps the concatenated input to the dimensionality of the encoder.
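A minimal PyTorch sketch of how the projections in Equations (2) and (3) could be wired together is given below. Layer names, the model dimension, and the vocabulary size (348 chord symbols plus special tokens) are assumptions for illustration and do not necessarily mirror the released implementation.

```python
import torch
import torch.nn as nn

class GridInputEmbedding(nn.Module):
    def __init__(self, d_model=512, vocab_size=352, n_stages=10, seq_len=513):
        super().__init__()
        self.W_g = nn.Linear(16, d_model)             # time signature vector -> d_model
        self.W_m_c = nn.Linear(100, d_model)          # piano roll + pc-roll  -> d_model
        self.E_y = nn.Embedding(vocab_size, d_model)  # chord / <mask> / <pad> tokens
        self.pos = nn.Parameter(torch.randn(seq_len, d_model))  # trainable positions
        self.E_s = nn.Embedding(n_stages, d_model)    # stage (timestep) embedding
        self.W_x = nn.Linear(2 * d_model, d_model)    # projection after concatenation

    def forward(self, g, melody, chord_ids, stage):
        # g: (B, 16), melody: (B, 256, 100), chord_ids: (B, 256), stage: (B,)
        z = torch.cat([
            self.W_g(g).unsqueeze(1),      # (B, 1, d)   -- Equation (2)
            self.W_m_c(melody),            # (B, 256, d)
            self.E_y(chord_ids),           # (B, 256, d)
        ], dim=1)                          # (B, 513, d)
        s = self.E_s(stage).unsqueeze(1).expand(-1, z.size(1), -1)  # replicate stage embedding
        x = self.W_x(torch.cat([z + self.pos, s], dim=-1))          # Equation (3)
        return x   # fed to a standard nn.TransformerEncoder
```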

2.3. Diffusion-Inspired Unmasking in Training and Generation

Two strategies are examined for progressively revealing masked harmony tokens during training and inference. Both are inspired by the denoising process of diffusion models but adapted to the discrete nature of our symbolic music task. In our formulation, the harmony tokens are initially fully masked and gradually unmasked following the procedures below:
  • Random n % Unmasking: At each stage t, n% of the remaining masked harmony tokens are randomly selected and unmasked. This introduces stochasticity and encourages the model to generalize across diverse partially observed contexts. While any value of n can be valid under this procedure, values of 5 and 10 are examined in the results.
  • Midpoint Doubling: Inspired by binary subdivision and the hierarchical structure of music, this deterministic strategy reveals tokens at the midpoints between previously unmasked tokens, effectively doubling the number of visible tokens at each step. This results in a structured, coarse-to-fine unmasking trajectory.
Both strategies can be viewed as discrete analogues of diffusion processes, where masking corresponds to the addition of noise and prediction corresponds to denoising.
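The two schedules can be sketched as simple index-selection routines. The following Python functions are an illustrative reconstruction under our own naming, not the released code; the random schedule re-shuffles the remaining masked positions at every stage, and the midpoint schedule starts from a single anchor position.

```python
import random

def random_unmask_schedule(seq_len=256, n_percent=10, seed=None):
    """Yield the set of newly revealed positions at each stage (random n% strategy)."""
    rng = random.Random(seed)
    masked = list(range(seq_len))
    per_stage = max(1, round(seq_len * n_percent / 100))
    while masked:
        rng.shuffle(masked)
        newly_revealed, masked = masked[:per_stage], masked[per_stage:]
        yield sorted(newly_revealed)

def midpoint_doubling_schedule(seq_len=256):
    """Yield newly revealed positions, roughly doubling the visible set via midpoints."""
    revealed = {0}                       # start from a single anchor position
    yield [0]
    while len(revealed) < seq_len:
        anchors = sorted(revealed) + [seq_len]
        new = set()
        for left, right in zip(anchors[:-1], anchors[1:]):
            mid = (left + right) // 2
            if mid not in revealed:
                new.add(mid)
        revealed |= new
        yield sorted(new)

# Midpoint doubling over 256 positions finishes in log2(256) + 1 = 9 stages
print(sum(1 for _ in midpoint_doubling_schedule(256)))   # 9
```

With n = 10% the random schedule takes 10 stages and with n = 5% it takes 20 stages, matching the values of T_s discussed below.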
According to each unmasking strategy, let $\mathcal{M}^{(t)} \subseteq \{1, \ldots, T\}$ be the set of masked token positions at stage $t$ and $\mathcal{U}^{(t)} \subseteq \mathcal{M}^{(t-1)}$ be the set of tokens newly selected for unmasking, so that $\mathcal{M}^{(t-1)} = \mathcal{M}^{(t)} \cup \mathcal{U}^{(t)}$. The input sequence is defined element-wise as follows:
$$y_i^{(t)} = \begin{cases} y_i, & i \notin \mathcal{M}^{(t)} \\ \texttt{<mask>}, & i \in \mathcal{M}^{(t)} \end{cases}$$
and the prediction targets at this step are
$$\mathbf{y}_{\text{target}}^{(t)} = \left\{ y_i \mid i \in \left(\mathcal{M}^{(t)}\right)^{c} \right\},$$
where $\left(\mathcal{M}^{(t)}\right)^{c}$ is the complement of $\mathcal{M}^{(t)}$ in the set $\{1, \ldots, T\}$, i.e., all the token positions that are not masked at unmasking stage $t$.
During training, we minimize the masked language modeling loss by computing the cross-entropy loss only over the newly unmasked tokens at each stage:
$$\mathcal{L}^{(t)} = -\sum_{i \in (\mathcal{M}^{(t)})^{c}} \log p_\theta\!\left(y_i \mid \mathbf{y}_{\text{in}}^{(t)}, \mathbf{g}, \mathbf{m}, \mathbf{c}, t\right).$$
The full loss is the sum over all stages:
$$\mathcal{L} = \sum_{t=1}^{T_s} \mathcal{L}^{(t)},$$
where $T_s$ is the total number of unmasking stages.
For midpoint doubling, the number of steps needed to unmask a sequence of $s$ time steps is $\log_2(s) + 1$, since every step doubles the total number of unmasked tokens. In the current application, every harmonic sequence has a length of $s = 256$, and therefore $T_s = 9$. For comparable unmasking steps between the two examined strategies, we initially set $n = 10\%$ in the random $n\%$ unmasking, and thus $T_s = 10$. Since 10 unmasking steps could be considered a relatively small number compared with the total of 256 tokens that need to be revealed (on average, 25.6 tokens are revealed per step), we also tested $n = 5\%$ (leading to $T_s = 20$) to examine whether introducing more steps brought any improvement. A more elaborate sensitivity analysis could be performed to find the optimal value of $n$ for the random $n\%$ strategy, but given the high cost of training resources, in this paper we only heuristically examine $n = 10\%$ and $n = 5\%$.
Practically, for each item in the batch at each training step, the following occurs (see the training-step sketch below):
  (a) A stage index t is sampled, which determines the unmasking level and which tokens are to be predicted.
  (b) A partial “visible” harmony sequence $\mathbf{y}_{\text{in}}^{(t)}$ is defined using one of the unmasking strategies, and the set of target tokens to be learned, $(\mathcal{M}^{(t)})^{c}$, is identified.
  (c) The model is trained to predict the current target-masked tokens using the melody and the partially visible harmony context.
The melody and stage conditioning are fully visible throughout, while the harmony is incrementally revealed across the stages.
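Steps (a)–(c) can be summarized in a short PyTorch-style training-step sketch for the random n% strategy. It follows the prose description above (loss over the tokens newly revealed at the sampled stage); the mask token id, helper names, and model interface are our own assumptions, and the released implementation may differ in details.

```python
import torch
import torch.nn.functional as F

MASK_ID = 349   # assumed id of the <mask> token in the chord vocabulary

def sample_masks(stage, seq_len, n_percent=10):
    """Random n% schedule: positions already visible before this stage,
    and positions to be predicted (newly revealed) at this stage."""
    per_stage = max(1, round(seq_len * n_percent / 100))
    order = torch.randperm(seq_len)                       # a random reveal order
    visible = order[: (stage - 1) * per_stage]
    targets_now = order[(stage - 1) * per_stage : stage * per_stage]
    return visible, targets_now

def training_step(model, batch, n_stages, optimizer, n_percent=10):
    g, melody, target_ids = batch                         # (B,16), (B,256,100), (B,256)
    B, T = target_ids.shape
    input_ids = torch.full_like(target_ids, MASK_ID)      # start from a fully masked harmony
    loss_mask = torch.zeros_like(target_ids, dtype=torch.bool)
    stages = torch.randint(1, n_stages + 1, (B,))         # (a) sample a stage index per item
    for b in range(B):
        visible, targets_now = sample_masks(int(stages[b]), T, n_percent)
        input_ids[b, visible] = target_ids[b, visible]    # (b) already-revealed chords stay visible
        loss_mask[b, targets_now] = True                  #     mark the tokens to predict now
    logits = model(g, melody, input_ids, stages)          # (c) stage-aware forward pass: (B, 256, |V|)
    loss = F.cross_entropy(logits[loss_mask], target_ids[loss_mask])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```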
At inference time, the harmony sequence is initially set to all <mask> tokens. The model is then applied iteratively, following one of the two unmasking schedules until the full sequence is generated. For the random n% strategy, new positions to unmask can be selected either by choosing the top n% most confident predictions or by sampling based on the model’s confidence distribution. In the midpoint doubling strategy, the unmasking schedule is strictly deterministic. In both cases, once the next positions are selected, their token values can be assigned using sampling strategies from the predicted token distributions:
$$\hat{\mathbf{y}}^{(t)} \sim p_\theta\!\left(\cdot \mid \mathbf{y}_{\text{in}}^{(t)}, \mathbf{g}, \mathbf{m}, \mathbf{c}, t\right),$$
and the input is updated as follows:
$$\mathbf{y}_{\text{in}}^{(t+1)} = \mathbf{y}_{\text{in}}^{(t)} \cup \hat{\mathbf{y}}^{(t)}.$$
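The generation loop for the random n% schedule with confidence-based position selection can be sketched as follows. The model interface, mask id, and batching assumptions are ours; this is an illustrative reconstruction, not the released inference code.

```python
import torch

@torch.no_grad()
def generate_harmony(model, g, melody, n_stages=10, n_percent=10,
                     mask_id=349, temperature=1.0):
    """Iteratively unmask a fully masked harmony sequence (random n% schedule)."""
    B, T = melody.size(0), melody.size(1)
    harmony = torch.full((B, T), mask_id, dtype=torch.long)
    per_stage = max(1, round(T * n_percent / 100))
    for t in range(1, n_stages + 1):
        logits = model(g, melody, harmony, torch.full((B,), t))   # stage-aware call
        probs = torch.softmax(logits / temperature, dim=-1)       # (B, T, |V|)
        confidence, _ = probs.max(dim=-1)                         # per-position confidence
        still_masked = harmony.eq(mask_id)
        confidence = confidence.masked_fill(~still_masked, -1.0)  # only pick masked slots
        k = min(per_stage, int(still_masked[0].sum()))
        if k == 0:
            break
        _, positions = confidence.topk(k, dim=-1)                 # most confident positions
        for b in range(B):
            idx = positions[b]
            # sample token values from the predicted distributions at the chosen positions
            sampled = torch.multinomial(probs[b, idx], num_samples=1).squeeze(-1)
            harmony[b, idx] = sampled
    return harmony
```

For the midpoint doubling strategy, the `positions` selected at each stage would instead come from the deterministic midpoint schedule sketched in Section 2.3.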

3. Experimental Set-up and Dataset

The proposed method was evaluated in both in-domain and out-of-domain scenarios against autoregressive baselines, along with two ablated versions. Two methods of comparison were employed that showed different aspects of the generated music: one based on symbolic music metrics and one based on the audio rendering of the symbolic music. This section presents the models under comparison, dataset and training details, and the evaluation metrics and protocols.

3.1. Model Comparison

We compared the two variants (random n% and midpoint doubling) of the proposed model with two autoregressive baselines. For each of the two proposed variants, we also included two ablated versions that isolate the effects of specific design choices (see the dash-bordered boxes in Figure 1):
  (i) Random Unmasking (Rn): Trained and generated using the random n% (for n = 10 and n = 5) unmasking strategy described in Section 2.3;
  (ii) Midpoint Doubling (MD): Trained and generated using the midpoint doubling schedule, including both the pitch class roll input and stage-aware embeddings;
  (iii) No pc-roll (Rn-NPC and MD-NPC): Identical to (i) and (ii) but without the pitch class roll input; only the binary 88-note piano roll is used to represent the melody;
  (iv) No Stage Awareness (Rn-NS and MD-NS): Identical to (i) and (ii) but without stage embeddings, making the model unaware of its position in the unmasking process.
We also included two autoregressive baselines:
  (v) GPT-2 [26]: A decoder-only Transformer trained to autoregressively generate harmony tokens conditioned on a symbolic melody sequence;
  (vi) BART [27]: An encoder-decoder Transformer where the encoder processes the melody and the decoder autoregressively generates the harmony.
To align with common autoregressive practice, both baseline models used a different tokenization scheme from the proposed approach. In particular, melody sequences were tokenized as sequences of discrete symbols using one-hot tokens, rather than the binary piano roll and pitch class roll representations used in our model. The autoregressive token vocabulary included standard special tokens (e.g., <s>, <e>, <mask>, <unk>, and <pad>), along with additional domain-specific tokens, namely <bar>, indicating the start of a new bar; <rest>, marking rests; position_BxSD, indicating the onset position within a bar (with B being the beat and SD an eighth-step subdivision); and P:X, representing the MIDI pitch value.
The onset format BxSD quantizes the beat subdivisions into eight values (used subdivisions: 0, 0.16, 0.25, 0.33, 0.5, 0.66, 0.75, and 0.83). A single time signature was assumed per piece and encoded as ts_NxD, where N is the numerator and D the denominator (either 4 or 8).
Harmony tokenization shares special tokens (e.g., padding, bar, and position) with melody tokenization and begins each sequence with a special <h> token. Chords are represented using MIR_eval [24] chord symbols, consistent with the proposed model. Unlike our model, autoregressive models do not repeat chord symbols since the chord position and duration are encoded explicitly via position tokens.
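To give a flavor of this baseline melody tokenization, the toy sketch below emits tokens in the spirit of the scheme just described. The exact token spellings (in particular how the subdivision is written after the “x”) are assumptions for illustration only and may not match the baselines' actual tokenizer.

```python
SUBDIVISIONS = [0, 0.16, 0.25, 0.33, 0.5, 0.66, 0.75, 0.83]   # eighth-step subdivisions

def tokenize_melody(notes, numerator=4, denominator=4):
    """notes: list of (bar_index, beat_index, subdivision, midi_pitch).
    Emits <s>, ts_NxD, <bar>, position_BxSD and P:X tokens (formatting assumed)."""
    tokens = ["<s>", f"ts_{numerator}x{denominator}"]
    current_bar = -1
    for bar, beat, subdiv, midi_pitch in notes:
        if bar != current_bar:
            tokens.append("<bar>")
            current_bar = bar
        sd = SUBDIVISIONS.index(subdiv)          # index into the eight subdivision values
        tokens.append(f"position_{beat}x{sd}")
        tokens.append(f"P:{midi_pitch}")
    tokens.append("<e>")
    return tokens

print(tokenize_melody([(0, 0, 0, 60), (0, 1, 0.5, 62), (1, 0, 0, 64)]))
```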
For the comparison with baseline models to be meaningful and fair, it was necessary to develop the above-mentioned baseline models, since specific fundamental limitations make existing state-of-the-art (SoA) models incompatible for comparison. The first limitation is that SoA models that perform melodic harmonization (e.g., [5,6,8]) do not consider a strict definition of harmonic rhythm, as we do in our model. In other words, SoA models consider chord rhythmic patterns via chord repetitions within bars, a fact that makes their output incompatible for comparison with the ground truth we used and with the output of our model. A second limitation is that SoA models either output accompaniment instead of harmonization [5], i.e., discrete notes that potentially form elaborate patterns that need an additional inference step to be transformed into chord symbols, or use chord symbols from a restricted dictionary [6,8]. For comparison, in [6], 6 chord qualities for all 12 roots were considered, leading to 72 total chord symbols, while our approach considers 29 qualities for 12 roots, leading to 348 chord symbols in total. Therefore, a direct comparison with SoA models would require significant post-processing and additional inference steps that would possibly distort their output and lead to unclear results.

3.2. Data and Training

All models were trained on data splits from the HookTheory dataset [2], which contains 15,440 pieces in MIDI format. The pieces in the dataset were modified to reflect harmonic rhythm, i.e., the locations where chords change within each bar. Chord repetitions that reflect rhythm beyond the harmonic rhythm were removed; the only chord repetition that was allowed was at the beginning of a bar, when the starting chord of that bar was the ending chord of the previous bar. To address tonal imbalance, we applied key normalization: major-mode pieces were transposed to C major and minor-mode pieces to A minor using the Krumhansl key finding algorithm [28]. This follows prior work [6,7] and leverages the shared pitch class structures of C major and A minor [29]. We used a 95%/5% split between training and validation/testing, resulting in 14,679 training pieces and 761 validation/test pieces. We evaluated the models trained on the training set in two distinct settings:
  • In domain: In this setting, models were evaluated on the validation and test split of the HookTheory dataset.
  • Out of domain: Here, evaluation was conducted on a separate collection of 650 jazz standard lead sheets, again transposed to C major or A minor using the Krumhansl key profiles.
Given the scarcity of high-quality, large-scale data for melodic harmonization, we followed an in-domain and out-of-domain evaluation approach, as demonstrated in [6]. Therefore, training and in-domain evaluation were performed on the HookTheory dataset, and a jazz dataset was employed for out-of-domain evaluation. In contrast to [6], we did not use the Chord Melody Dataset (CMD) [30], since it includes pieces with restrictions on the number of chords in each bar and on the time resolution of the notes. Instead, we used our own curated dataset of jazz standard melody harmonizations, which is also larger than the CMD (650 vs. 473 pieces).
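The key normalization step described above can be sketched with music21, whose default key analysis is based on Krumhansl-style key profiles; this is a minimal illustration and not necessarily the toolchain used for the actual preprocessing.

```python
from music21 import converter, interval, pitch

def normalize_key(path):
    """Transpose a piece to C major (major mode) or A minor (minor mode)."""
    score = converter.parse(path)
    detected = score.analyze('key')          # Krumhansl-style key profile analysis
    target = pitch.Pitch('C') if detected.mode == 'major' else pitch.Pitch('A')
    shift = interval.Interval(detected.tonic, target)
    return score.transpose(shift)
```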
For all autoregressive baselines, we restricted the maximum input length to 16 bars, corresponding to 256 tokens for both melody and harmony sequences. Evaluation was likewise performed on 16-bar segments.
Optimization used the Adam optimizer with cosine learning rate scheduling and a 5% warm-up phase. The models were trained for 50 epochs on the key-normalized (to C major or A minor) HookTheory dataset, and validation was performed at the end of each epoch. We saved model checkpoints whenever the validation loss reached a new minimum.

3.3. Evaluation Metrics and Protocols

First, the training performance was assessed for each variation of the proposed method by measuring the validation loss, accuracy, perplexity, and normalized token entropy. Then, we evaluated harmonization quality using two distinct methods of comparison by harmonizing melodies from the evaluation datasets. One focused on interpretable metrics that could be measured from the symbolic representation of the data, and one compared the embeddings of a pretrained symbolic music model. The symbolic music metrics provide a detailed picture of the advantages and disadvantages of the compared methods in comparison to the ground truth, i.e., the original harmonization in the evaluation dataset, while the embeddings evaluation provides an overall assessment of how similar the generated harmonization was to the original or to a representative piece of a given dataset.

3.3.1. Training Performance Assessment

The validation set loss and accuracy for each variation of the proposed method indicate how well each variation captured the distribution to be learned. Accuracy was measured as the percentage of times that the argmax of the logits corresponded to the actual tokens in the validation set, averaged across the unmasking stages. Since the later stages are provided with a larger number of unmasked tokens, the model is largely carrying out the easy task of filling in token repetitions, and therefore the accuracy values were expected to be generally high. Two additional metrics were considered alongside loss and accuracy. The first was perplexity, which accounts for the model's uncertainty normalized over the sequence length and offers a measure of predictive confidence that is independent of the absolute token count. The second was normalized token entropy, which provides a measure of distributional uncertainty adjusted for the vocabulary size.
The perplexity was computed for each tokenized sequence as follows:
$$\text{ppl} = \frac{1}{S} \sum_{t=1}^{S} \exp\!\left( -\frac{1}{T} \sum_{j=1}^{T} \ln p_\theta\!\left(x_j \mid \mathbf{y}_{\text{in}}^{(t)}, \mathbf{g}, \mathbf{m}, \mathbf{c}, t\right) \right),$$
where $S$ is the number of unmasking stages, $T$ is the number of tokens in the sequence, and $p_\theta(x_j \mid \mathbf{y}_{\text{in}}^{(t)}, \mathbf{g}, \mathbf{m}, \mathbf{c}, t)$ denotes the probability assigned by the model to the correct token at position $j$ given the entire context. This measure of perplexity is the average perplexity across all unmasking stages. Perplexity quantifies the model's average uncertainty when predicting each token. A value of 1 indicates perfect certainty and correctness at every step, while a value of, for example, 4 implies that the model's predictions are as uncertain as choosing uniformly among four equally likely options.
Normalized token entropy was computed for each sequence as follows:
$$\tilde{H} = -\frac{1}{S \cdot T \cdot \log_2 |V|} \sum_{t=1}^{S} \sum_{i=1}^{T} \sum_{j=1}^{|V|} p_\theta\!\left(x_{ij} \mid \mathbf{y}_{\text{in}}^{(t)}, \mathbf{g}, \mathbf{m}, \mathbf{c}, t\right) \log_2 p_\theta\!\left(x_{ij} \mid \mathbf{y}_{\text{in}}^{(t)}, \mathbf{g}, \mathbf{m}, \mathbf{c}, t\right),$$
where $|V|$ is the vocabulary size and $p_\theta(x_{ij} \mid \mathbf{y}_{\text{in}}^{(t)}, \mathbf{g}, \mathbf{m}, \mathbf{c}, t)$ is the model's predicted probability of the token at vocabulary index $j$ at sequence position $i$ given the context. This measure captures the average entropy of the model's full predictive distribution across all unmasking stages and all positions, normalized by the maximum possible entropy $\log_2 |V|$. A value of 0 indicates that the model is entirely confident in its predictions (assigning all probability mass to a single token), whereas a value of 1 suggests that the model is maximally uncertain.
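Both training-performance measures can be computed directly from the model's logits. The sketch below (names and the single-stage framing are our assumptions) mirrors Equations (9) and (10) for one unmasking stage; the reported values average these quantities over all S stages.

```python
import torch
import torch.nn.functional as F

def stage_perplexity(logits, target_ids):
    """exp of the negative mean log-probability of the correct tokens (one stage)."""
    log_probs = F.log_softmax(logits, dim=-1)                             # (T, |V|)
    token_logp = log_probs.gather(1, target_ids.unsqueeze(1)).squeeze(1)  # (T,)
    return torch.exp(-token_logp.mean()).item()

def stage_normalized_entropy(logits):
    """Mean predictive entropy normalized by log2|V| (one stage)."""
    probs = F.softmax(logits, dim=-1)
    log2_probs = torch.log2(probs.clamp_min(1e-12))
    entropy = -(probs * log2_probs).sum(dim=-1)                           # (T,)
    max_entropy = torch.log2(torch.tensor(float(logits.size(-1))))
    return (entropy.mean() / max_entropy).item()

# ppl and H̃ as reported: average stage_perplexity / stage_normalized_entropy over the S stages.
```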

3.3.2. Symbolic Music Metrics

This evaluation process incorporated a comprehensive set of music-specific metrics that assess three core aspects: chord progression structure, harmony–melody alignment, and rhythmic coherence. The first two categories, chord progression and chord–melody harmonicity, follow established frameworks [2] and have been widely adopted [6,7,31]. The third, harmonic rhythm, focuses on the temporal placement of chords and complements our analysis of token positioning [32].
A. Chord Progression Coherence and Diversity
  (i) Chord histogram entropy (CHE) measures how evenly chords are distributed in a piece. Higher values reflect greater harmonic variety.
  (ii) Chord coverage (CC) counts the number of distinct chord types used, indicating the breadth of harmonic vocabulary.
  (iii) Chord tonal distance (CTD) computes the average tonal distance between adjacent chords, where lower values suggest smoother, more connected progressions.
B. Chord–Melody Harmonicity
  (i) The chord-tone-to-non-chord-tone ratio (CTnCTR) measures the proportion of melody notes that match chord tones or are near passing tones. Higher values imply stronger harmonic support for the melody.
  (ii) The Pitch Consonance Score (PCS) assigns consonance scores to melody–chord intervals based on standard musical intervals. Higher scores indicate more consonant melodic writing.
  (iii) The melody–chord tonal distance (MCTD) evaluates the average tonal distance between the melody notes and underlying chords. Lower values indicate closer harmonic alignment.
C. Harmonic Rhythm Coherence and Diversity
  (i) Harmonic rhythm histogram entropy (HRHE) measures the diversity in the timing of chord changes. Higher entropy suggests more rhythmically varied progressions.
  (ii) Harmonic rhythm coverage (HRC) counts distinct rhythmic patterns of chord placement. Higher values indicate a wider range of rhythmic usage.
  (iii) The chord beat strength (CBS) scores how aligned chord onsets are with the metrical strength. Lower scores imply alignment with strong beats, and higher scores reflect more syncopated rhythms.
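As an illustration of how these metrics operate, chord histogram entropy (CHE) and chord coverage (CC) can be computed from the sequence of chord symbols in a piece. The sketch below follows the usual definitions from the cited framework [2] (Shannon entropy of the chord histogram, natural log as a convention) and is not the authors' exact implementation.

```python
import math
from collections import Counter

def chord_histogram_entropy(chord_sequence):
    """Shannon entropy of the chord histogram; higher = more harmonic variety."""
    counts = Counter(chord_sequence)
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total) for c in counts.values())

def chord_coverage(chord_sequence):
    """Number of distinct chord types used in the piece."""
    return len(set(chord_sequence))

progression = ["C:maj", "A:min", "F:maj", "G:maj", "C:maj", "G:maj"]   # toy progression
print(round(chord_histogram_entropy(progression), 4), chord_coverage(progression))
```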

3.3.3. FMD

In addition to symbolic evaluations, we assessed harmonization quality using the Fréchet Music Distance (FMD) [33], a metric adapted from the Fréchet Inception Distance (FID) that compares the embedding distributions of generated outputs to a reference set. It compares distributions of high-level musical embeddings to evaluate the quality of compositions in MIDI and score form. The in-domain and out-of-domain reference sets were used here as well. Two FMD scores were calculated: the FMD (internal), where generated outputs were compared to the reference MIDIs (“real” subset) of both the in-domain and out-of-domain set-ups, and the FMD (POP909), where the models' outputs were compared to the standard versions of the MIDIs in the POP909 dataset [34]:
$$\text{FD} = \lVert \mu_r - \mu_e \rVert^{2} + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_e - 2\left( \Sigma_r \Sigma_e \right)^{1/2} \right),$$
where $\mu_r, \mu_e$ are the mean vectors and $\Sigma_r, \Sigma_e$ are the covariance matrices of the reference and test distributions, respectively, while $\operatorname{Tr}(\cdot)$ is the matrix trace.
For the FMD calculation, we extracted symbolic music embeddings using the CLaMP2 encoder [35]. We further assessed model-level differences for the domains using non-parametric paired tests on the per-piece metric results. For each model comparison, we performed Wilcoxon signed-rank tests on the matched pairs of scores (each pair representing the same musical piece evaluated under two conditions) to determine whether any observed improvement was statistically significant [36]. To control for multiple comparisons, we applied a Benjamini–Hochberg false discovery rate correction, marking differences as significant only if they met a corrected α = 0.05 threshold.
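A sketch of the Fréchet distance of Equation (11) over two embedding sets, together with the Wilcoxon signed-rank test and Benjamini–Hochberg correction used for significance testing, is given below (numpy/scipy; the CLaMP2 embedding extraction itself is not shown, and function names are our own).

```python
import numpy as np
from scipy import linalg, stats

def frechet_distance(emb_ref, emb_gen):
    """Fréchet distance between reference and generated embedding sets (Equation (11))."""
    mu_r, mu_e = emb_ref.mean(axis=0), emb_gen.mean(axis=0)
    sigma_r = np.cov(emb_ref, rowvar=False)
    sigma_e = np.cov(emb_gen, rowvar=False)
    covmean = linalg.sqrtm(sigma_r @ sigma_e)        # matrix square root of the product
    if np.iscomplexobj(covmean):
        covmean = covmean.real                       # drop tiny imaginary round-off
    diff = mu_r - mu_e
    return float(diff @ diff + np.trace(sigma_r + sigma_e - 2.0 * covmean))

def wilcoxon_p(per_piece_a, per_piece_b):
    """Wilcoxon signed-rank test on matched per-piece scores of two models."""
    _, p_value = stats.wilcoxon(per_piece_a, per_piece_b)
    return p_value

def benjamini_hochberg(p_values, alpha=0.05):
    """Boolean significance flags under Benjamini-Hochberg FDR control."""
    p = np.asarray(p_values)
    order = np.argsort(p)
    m = len(p)
    largest_ok = 0
    for rank, idx in enumerate(order, start=1):      # step-up procedure
        if p[idx] <= rank / m * alpha:
            largest_ok = rank
    flags = np.zeros(m, dtype=bool)
    flags[order[:largest_ok]] = True
    return flags
```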

4. Results

This section presents the evaluation of the generated melodic harmonizations against ground-truth harmonic sequences. All compared models shared similar architectural profiles, with eight layers and eight attention heads per layer (both for the encoder and decoder in the case of BART). The total number of parameters was comparable across the models, being approximately 26 million for the proposed encoder-only model and around 35 million for BART. Training was performed on an NVIDIA RTX 3080 GPU (11 GB VRAM), using a batch size of 20 for approximately 12 h per model. For each training session, the model with the best validation loss was saved, and the results are reported using these models.
During inference, the autoregressive models generated results using beam search with five beams, because the produced results had a better long-term structure than with any other sampling method. Additionally, with beam search, the generated results were consistent across runs and better reflected the “best take” of the models in each melodic harmonization task. For the proposed method, simple multinomial sampling from Equation (8) was performed with a temperature of 1.0. This temperature value was chosen because the results did not differ much across temperature values between 0.5 and 1.5, so we preferred the value that leaves the logit distribution unaltered.

4.1. Training Performance

Table 1 shows the validation loss, accuracy, perplexity, and normalized token entropy results that correspond to the best (smallest) validation loss achieved during training for the examined variations and the ablated versions of the proposed model. The Rn versions achieved better validation loss with similar accuracy to the MD versions. With the exception of the R05-NS ablation, which achieved the “best” values across the board, the non-ablated versions achieved marginally “better” performance for all metrics than their ablated counterparts. The terms “best” and “better” are used in quotes to indicate that superiority in the training metrics is not necessarily reflected in the music-related metrics, as we will see in the following subsections.
Accuracy parity among the variations shows that the winning token predictions were similar across variations; what differed in the loss was how confidently the logits expressed those accurate argmax decisions. Smaller losses were achieved by the models that were more certain about their decisions, i.e., whose predicted probabilities matched the zeros and ones of the target one-hot distributions more closely. This is also reflected by the perplexity and normalized token entropy values, which were correlated with the loss values.

4.2. Music Metrics

On the in-domain HookTheory validation set (Table 2), our proposed non-autoregressive variants consistently surpassed the autoregressive baselines across most music-theoretic dimensions. Specifically, the full MD model demonstrated superior chord progression variety (CHE), highlighting its ability to generate harmonically diverse sequences. The MD-NPC variant, despite lacking pitch class information, notably excelled in harmonic rhythm complexity and rhythmic diversity (HRHE and HRC, respectively), suggesting a unique emphasis on rhythmic structuring independent of chromatic detail. The MD-NS variant, without explicit stage-awareness, stood out by effectively recovering a wide chord vocabulary (CC) and achieving natural rhythmic placement (CBS), indicating that positional context alone provided meaningful structural cues.
The stochastic variants (Rn) performed remarkably well across all chord–melody harmonicity metrics (CTnCTR, PCS, and MCTD), reaffirming their strength in aligning chord choices closely with melodic content. However, this harmonic alignment came at the expense of diminished harmonic and rhythmic variety, reflecting their tendency toward simpler chord progressions and less diverse rhythmic structures. The same happened with the autoregressive baselines (GPT-2 and BART), which were reasonably effective at achieving chord–melody harmonicity—though still below the stochastic variants—but demonstrated notable limitations in chord progression and rhythmic coherence metrics. Their comparatively weaker performance in terms of harmonic diversity and rhythm highlights the inherent limitations associated with sequential, unidirectional decoding, particularly when modeling harmonically and rhythmically complex musical contexts.
When evaluated with the out-of-domain jazz standards (Table 3), the performance characteristics shifted notably. The MD-NS model emerged prominently by demonstrating superior chord progression coherence and diversity across key related metrics (CHE, CC, and CTD). Remarkably, it also excelled in chord-beat alignment (CBS), reinforcing the effectiveness of positional regularities alone for capturing sophisticated rhythmic and harmonic structures in unfamiliar musical contexts. The absence of explicit stage-awareness embeddings seemed to benefit generalization by encouraging the model to infer richer harmonic structures from temporal positions. On the other hand, the MD-NPC variant excelled specifically at rhythmic complexity metrics (HRHE and HRC), underlining the role of rhythmic structuring independent of detailed pitch class information.
The stochastic unmasking variant (R10) again achieved strong chord–melody alignment (CTnCTR and MCTD), while a similar variant (R05-NS) topped the pitch consonance score (PCS). However, these high alignment metrics appear to be connected to their comparatively limited harmonic and rhythmic diversity, mirroring the pattern observed in the in-domain evaluation. The consistently simpler harmonic vocabulary and restricted rhythmic complexity may facilitate closer melodic-harmonic consonance. Finally, BART and GPT-2 significantly underperformed across all music metrics, including those related to chord–melody harmonicity, contrasting the previous in-domain results, which showed relatively better melodic-harmonic alignment. This substantial drop in out-of-domain generalization performance further underscores the advantages of the proposed non-autoregressive, diffusion-inspired strategies for modeling harmonically and rhythmically rich musical styles outside of their original training distribution.

4.3. FMD Scores

The results in Table 4 and Figure 2 show that when the internal datasets were used for validation, the diffusion-inspired non-autoregressive variants consistently achieved the best scores, with the MD-NS model leading in-domain (FMD = 7.7068) and the MD-NPC model leading out-of-domain (FMD = 55.15). Statistical comparisons using FDR-corrected Wilcoxon tests show that these differences were significant for most adjacent model pairs, particularly between the top-performing MD variants and the first tier of random-unmasking variants (e.g., R10 and R05). The autoregressive baselines GPT-2 and BART performed substantially worse, indicating limited generalization under the FMD scores.
When evaluated with POP909-based embeddings, all models showed closer performance, with the MD models leading in-domain, followed by the baseline models. In contrast, the baseline models performed better in the out-of-domain harmonization task (FMD = 638.06), i.e., the pop-style POP909 set was better aligned with the output of the baselines, even though the task was to produce jazz outputs. The low POP909 FMD scores of the baseline models (GPT-2 and BART) in both settings might suggest that they “rigidly” produce pop-style outputs even when prompted with jazz-style melodies, thereby remaining closer to pop-style harmonic distributions. This fact, in combination with the better out-of-domain FMD scores of our method, indicates that our methodology is more flexible in capturing out-of-domain nuances, whereas the baseline models follow the training style less flexibly.

5. Conclusions

This paper introduced a novel approach to melodic harmonization using a non-autoregressive, encoder-only transformer trained under a discrete diffusion-inspired objective. By formulating harmonization as a progressive unmasking task on a fixed time grid, our model generates harmony sequences through iterative refinement rather than left-to-right decoding. The framework employs a piano roll representation for the melody, with additional experiments exploring the impact of pitch class conditioning using a pitch class roll.
Empirical evaluations on both in-domain (HookTheory) and out-of-domain (jazz standards) datasets show that our models consistently outperformed autoregressive baselines (GPT-2 and BART) in the key metrics of harmonic diversity, rhythmic variety, and melody–harmony alignment. They also achieved the lowest Fréchet Music Distance (FMD) scores relative to the ground truth data, indicating that bidirectional refinement enables more musically coherent and perceptually convincing harmonizations. Ablation studies further underscored the importance of pitch class information and the structure of the unmasking schedule for stylistic generalization.
Despite these strengths, several limitations remain. The current unmasking schedule is hand-designed; future work could explore mathematically informed [37] or learnable policies, or even continuous diffusion at the level of logits [38]. The midpoint doubling unmasking strategy is well motivated by the hierarchical nature of musical structure; however, indicating bar locations with special tokens could enable more elaborate strategies that adapt to bar-level organization, e.g., starting the harmonization from the last bar containing melody notes. Additionally, while the model generalizes reasonably well to jazz standards, broader evaluation across diverse musical traditions and genres is necessary to assess its full potential. Future directions also include subjective evaluation of the framework's flexibility for interactive use cases, such as enabling user-defined chord constraints.
Overall, this work positions discrete diffusion-inspired modeling as a compelling alternative to autoregressive approaches for symbolic melodic harmonization. It offers faster execution (a constant number of model calls regardless of the harmonization length), greater structural control, stylistic versatility, and the potential for expansion into interactive and adaptive music generation systems.

Author Contributions

Conceptualization, M.K.-P., E.C. and V.K.; methodology, M.K.-P., D.M., K.S. and K.-T.T.; software, M.K.-P., D.M. and K.S.; validation, D.M., K.S. and K.-T.T.; formal analysis, D.M., K.S. and K.-T.T.; resources, V.K. and M.K.-P.; data curation, D.M. and K.-T.T.; writing—original draft preparation, M.K.-P.; writing—review and editing, all authors; supervision, M.K.-P. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by project MIS 5154714 of the National Recovery and Resilience Plan Greece 2.0, funded by the European Union under the NextGenerationEU Program.

Data Availability Statement

Data are subject to copyright and are available upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
LSTM	Long short-term memory
VAE	Variational autoencoder
MLM	Masked language modeling
BERT	Bidirectional encoder representations from transformers
BART	Bidirectional and auto-regressive transformers
GPT-2	Generative Pre-trained Transformer 2
MD	Midpoint doubling
Rn	Random n%
NPC	No pitch classes
NS	No stage
ppl	Perplexity
CHE	Chord histogram entropy
CC	Chord coverage
CTD	Chord tonal distance
CTnCTR	Chord-tone-to-non-chord-tone ratio
PCS	Pitch Consonance Score
MCTD	Melody–chord tonal distance
HRHE	Harmonic rhythm histogram entropy
HRC	Harmonic rhythm coverage
CBS	Chord beat strength
FID	Fréchet Inception Distance
FMD	Fréchet Music Distance

References

  1. Lim, H.; Rhyu, S.; Lee, K. Chord generation from symbolic melody using BLSTM networks. arXiv 2017, arXiv:1712.01011. [Google Scholar] [CrossRef]
  2. Yeh, Y.C.; Hsiao, W.Y.; Fukayama, S.; Kitahara, T.; Genchel, B.; Liu, H.M.; Dong, H.W.; Chen, Y.; Leong, T.; Yang, Y.H. Automatic melody harmonization with triad chords: A comparative study. J. New Music. Res. 2021, 50, 37–51. [Google Scholar] [CrossRef]
  3. Chen, Y.W.; Lee, H.S.; Chen, Y.H.; Wang, H.M. SurpriseNet: Melody harmonization conditioning on user-controlled surprise contours. arXiv 2021, arXiv:2108.00378. [Google Scholar]
  4. Costa, L.F.; Barchi, T.M.; de Morais, E.F.; Coca, A.E.; Schemberger, E.E.; Martins, M.S.; Siqueira, H.V. Neural networks and ensemble based architectures to automatic musical harmonization: A performance comparison. Appl. Artif. Intell. 2023, 37, 2185849. [Google Scholar] [CrossRef]
  5. Huang, C.Z.A.; Vaswani, A.; Uszkoreit, J.; Shazeer, N.; Simon, I.; Hawthorne, C.; Dai, A.M.; Hoffman, M.D.; Dinculescu, M.; Eck, D. Music Transformer. arXiv 2018, arXiv:1809.04281. [Google Scholar]
  6. Rhyu, S.; Choi, H.; Kim, S.; Lee, K. Translating melody to chord: Structured and flexible harmonization of melody with transformer. IEEE Access 2022, 10, 28261–28273. [Google Scholar] [CrossRef]
  7. Huang, J.; Yang, Y.H. Emotion-driven melody harmonization via melodic variation and functional representation. arXiv 2024, arXiv:2407.20176. [Google Scholar] [CrossRef]
  8. Wu, S.; Wang, Y.; Li, X.; Yu, F.; Sun, M. Melodyt5: A unified score-to-score transformer for symbolic music processing. arXiv 2024, arXiv:2407.02277. [Google Scholar]
  9. Mittal, G.; Engel, J.; Hawthorne, C.; Simon, I. Symbolic music generation with diffusion models. arXiv 2021, arXiv:2103.16091. [Google Scholar] [CrossRef]
  10. Lv, A.; Tan, X.; Lu, P.; Ye, W.; Zhang, S.; Bian, J.; Yan, R. Getmusic: Generating any music tracks with a unified representation and diffusion framework. arXiv 2023, arXiv:2305.10841. [Google Scholar] [CrossRef]
  11. Zhang, J.; Fazekas, G.; Saitis, C. Fast diffusion gan model for symbolic music generation controlled by emotions. arXiv 2023, arXiv:2310.14040. [Google Scholar] [CrossRef]
  12. Atassi, L. Generating symbolic music using diffusion models. arXiv 2023, arXiv:2303.08385. [Google Scholar] [CrossRef]
  13. Min, L.; Jiang, J.; Xia, G.; Zhao, J. Polyffusion: A diffusion model for polyphonic score generation with internal and external controls. arXiv 2023, arXiv:2307.10304. [Google Scholar] [CrossRef]
  14. Li, S.; Sung, Y. Melodydiffusion: Chord-conditioned melody generation using a transformer-based diffusion model. Mathematics 2023, 11, 1915. [Google Scholar] [CrossRef]
  15. Wang, Z.; Min, L.; Xia, G. Whole-song hierarchical generation of symbolic music using cascaded diffusion models. arXiv 2024, arXiv:2405.09901. [Google Scholar] [CrossRef]
  16. Huang, Y.; Ghatare, A.; Liu, Y.; Hu, Z.; Zhang, Q.; Sastry, C.S.; Gururani, S.; Oore, S.; Yue, Y. Symbolic music generation with non-differentiable rule guided diffusion. arXiv 2024, arXiv:2402.14285. [Google Scholar] [CrossRef]
  17. Zhang, J.; Fazekas, G.; Saitis, C. Mamba-Diffusion Model with Learnable Wavelet for Controllable Symbolic Music Generation. arXiv 2025, arXiv:2505.03314. [Google Scholar] [CrossRef]
  18. Chang, H.; Zhang, H.; Jiang, L.; Liu, C.; Freeman, W.T. Maskgit: Masked generative image transformer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 19–20 June 2022; pp. 11315–11325. [Google Scholar]
  19. Austin, J.; Johnson, D.D.; Ho, J.; Tarlow, D.; Van Den Berg, R. Structured denoising diffusion models in discrete state-spaces. Adv. Neural Inf. Process. Syst. 2021, 34, 17981–17993. [Google Scholar]
  20. Ho, J.; Jain, A.; Abbeel, P. Denoising diffusion probabilistic models. Adv. Neural Inf. Process. Syst. 2020, 33, 6840–6851. [Google Scholar]
  21. Jonason, N.; Casini, L.; Sturm, B.L. SYMPLEX: Controllable Symbolic Music Generation using Simplex Diffusion with Vocabulary Priors. arXiv 2024, arXiv:2405.12666. [Google Scholar] [CrossRef]
  22. Zhang, J.; Fazekas, G.; Saitis, C. Composer style-specific symbolic music generation using vector quantized discrete diffusion models. In Proceedings of the 2024 IEEE 34th International Workshop on Machine Learning for Signal Processing (MLSP), London, UK, 22–25 September 2024; IEEE: New York, NY, USA, 2024; pp. 1–6. [Google Scholar]
  23. Plasser, M.; Peter, S.; Widmer, G. Discrete diffusion probabilistic models for symbolic music generation. arXiv 2023, arXiv:2305.09489. [Google Scholar] [CrossRef]
  24. Raffel, C.; McFee, B.; Humphrey, E.J.; Salamon, J.; Nieto, O.; Liang, D.; Ellis, D.P.; Raffel, C.C. MIR_EVAL: A Transparent Implementation of Common MIR Metrics. In Proceedings of the ISMIR, Taipei, Taiwan, 27–31 October 2014; Volume 10, p. 2014. [Google Scholar]
  25. Devlin, J.; Chang, M.W.; Lee, K.; Toutanova, K. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA, 2–7 June 2019; Volume 1, pp. 4171–4186. [Google Scholar]
  26. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog 2019, 1, 9. [Google Scholar]
  27. Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; Zettlemoyer, L. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv 2019, arXiv:1910.13461. [Google Scholar]
  28. Krumhansl, C.L. Cognitive Foundations of Musical Pitch; Oxford University Press: Oxford, UK, 2001. [Google Scholar]
  29. Hahn, S.; Yin, J.; Zhu, R.; Xu, W.; Jiang, Y.; Mak, S.; Rudin, C. SentHYMNent: An Interpretable and Sentiment-Driven Model for Algorithmic Melody Harmonization. In Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Barcelona, Spain, 25–29 August 2024; pp. 5050–5060. [Google Scholar]
  30. Hiehn, S. Chord Melody Dataset. 2019. Available online: https://github.com/shiehn/chord-melody-dataset (accessed on 21 August 2025).
  31. Sun, C.E.; Chen, Y.W.; Lee, H.S.; Chen, Y.H.; Wang, H.M. Melody harmonization using orderless NADE, chord balancing, and blocked Gibbs sampling. In Proceedings of the ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada, 6–11 June 2021; IEEE: New York, NY, USA, 2021; pp. 4145–4149. [Google Scholar]
  32. Wu, S.; Yang, Y.; Wang, Z.; Li, X.; Sun, M. Generating chord progression from melody with flexible harmonic rhythm and controllable harmonic density. EURASIP J. Audio Speech Music Process. 2024, 2024, 4. [Google Scholar] [CrossRef]
  33. Retkowski, J.; Stępniak, J.; Modrzejewski, M. Frechet music distance: A metric for generative symbolic music evaluation. arXiv 2024, arXiv:2412.07948. [Google Scholar] [CrossRef]
  34. Wang, Z.; Chen, K.; Jiang, J.; Zhang, Y.; Xu, M.; Dai, S.; Bin, G.; Xia, G. POP909: A Pop-song Dataset for Music Arrangement Generation. In Proceedings of the 21st International Conference on Music Information Retrieval, ISMIR, Montreal, QC, Canada, 11–16 October 2020. [Google Scholar]
  35. Wu, S.; Wang, Y.; Yuan, R.; Guo, Z.; Tan, X.; Zhang, G.; Zhou, M.; Chen, J.; Mu, X.; Gao, Y.; et al. Clamp 2: Multimodal music information retrieval across 101 languages using large language models. arXiv 2024, arXiv:2410.13267. [Google Scholar] [CrossRef]
  36. Conover, W.J. Practical Nonparametric Statistics; John Wiley & Sons: Hoboken, NJ, USA, 1999. [Google Scholar]
  37. Sahoo, S.; Arriola, M.; Schiff, Y.; Gokaslan, A.; Marroquin, E.; Chiu, J.; Rush, A.; Kuleshov, V. Simple and effective masked diffusion language models. Adv. Neural Inf. Process. Syst. 2024, 37, 130136–130184. [Google Scholar]
  38. Lou, A.; Meng, C.; Ermon, S. Discrete diffusion modeling by estimating the ratios of the data distribution. arXiv 2023, arXiv:2310.16834. [Google Scholar]
Figure 1. Model overview.
Figure 2. FMD scores.
Table 1. Validation data metrics that correspond to the training epoch with the best validation loss.

Metric   MD      R10     R05     MD-NPC  R10-NPC R05-NPC MD-NS   R10-NS  R05-NS
loss     0.0625  0.0486  0.0498  0.0661  0.0497  0.0499  0.0669  0.0486  0.0437
acc      0.9860  0.9868  0.9869  0.9846  0.9873  0.9870  0.9846  0.9871  0.9883
ppl      1.0651  1.0503  1.0516  1.0690  1.0512  1.0515  1.0698  1.0500  1.0451
H̃        0.0737  0.0300  0.0324  0.0768  0.0279  0.0271  0.0735  0.0259  0.0211
Table 2. Quantitative evaluation in the in-domain validation dataset. Mean values were calculated, and the closest values to the ground truth are bolded.

Model    CHE     CC      CTD     CTnCTR  PCS     MCTD    HRHE    HRC     CBS
GT       1.4126  4.9841  0.9743  0.8369  0.4745  1.3467  0.5432  2.3139  0.3413
MD       1.3484  4.5963  0.7491  0.6848  0.2912  1.5164  0.4461  2.1279  0.1271
MD-NPC   1.3276  4.6042  0.7360  0.6805  0.2839  1.5162  0.4887  2.2374  0.1415
MD-NS    1.3315  4.9208  0.8091  0.7013  0.3102  1.5019  0.7521  3.0422  0.2285
R10      0.4487  1.9683  0.3970  0.8168  0.4234  1.3961  0.2220  1.5369  0.0601
R10-NPC  0.1341  1.2889  0.1071  0.8037  0.4140  1.4151  0.0871  1.2241  0.0246
R10-NS   0.2978  1.6385  0.2731  0.7945  0.4052  1.4201  0.1762  1.4261  0.0506
R05      0.0406  1.0712  0.0269  0.8092  0.4214  1.4099  0.0269  1.0474  0.0053
R05-NPC  0.4088  1.7678  0.3319  0.7496  0.3897  1.4542  0.1642  1.3746  0.0422
R05-NS   0.1659  1.3403  0.1498  0.7987  0.4076  1.4183  0.0671  1.1583  0.0179
BART     1.0248  3.1306  0.9595  0.7703  0.4119  1.4292  0.0787  1.1358  0.1863
GPT-2    0.7991  2.5725  0.7786  0.7644  0.3962  1.4416  0.0144  1.0237  0.0605
Table 3. Quantitative evaluation on the out-of-domain test dataset. Mean values were calculated, and the closest values to the ground truth are bolded.

Model    CHE     CC       CTD     CTnCTR  PCS     MCTD    HRHE    HRC     CBS
GT       2.2043  11.6558  0.8823  0.8320  0.3169  1.4028  0.5107  2.0570  0.2468
MD       1.4138  5.1311   0.6023  0.6028  0.2355  1.5650  0.2889  1.8707  0.0631
MD-NPC   1.4231  5.2895   0.5846  0.6064  0.2345  1.5673  0.3874  2.2171  0.0814
MD-NS    1.4562  5.8304   0.7592  0.6129  0.2486  1.5559  0.6899  3.2133  0.1722
R10      0.6084  2.6254   0.3580  0.7125  0.3406  1.4514  0.2341  1.7338  0.0502
R10-NPC  0.1296  1.3181   0.0968  0.6858  0.3181  1.4875  0.1102  1.2971  0.0258
R10-NS   0.2679  1.6742   0.1851  0.6844  0.3185  1.4854  0.1801  1.5181  0.0432
R05      0.0385  1.0760   0.0219  0.6881  0.3220  1.4849  0.0169  1.0456  0.0032
R05-NPC  0.5698  2.2038   0.4438  0.6396  0.2966  1.5246  0.2153  1.5847  0.0422
R05-NS   0.1396  1.3219   0.0947  0.6836  0.3177  1.4883  0.0508  1.1600  0.0114
BART     0.9422  3.1749   0.6604  0.4523  0.2442  1.6567  0.0359  1.0627  0.0327
GPT-2    0.4724  1.8783   0.4281  0.4548  0.2441  1.6649  0.0017  1.0039  0.0016
Table 4. Internal and external FMD scores (lower is better), sorted by score within domain, with FDR-corrected p values and significance flags (* for p < 0.05) for adjacent models.

FMD (Internal), In-Domain (HookTheory)
Model     FMD (Internal)  p       Sig
MD-NS     7.7068          –
MD        7.8868          0.0402  *
MD-NPC    7.9254          0.3336
R10       15.9320         0.0000  *
R05-NPC   16.0106         0.0000  *
R10-NS    18.8581         0.0000  *
R05-NS    21.2572         0.6148
R10-NPC   21.7507         0.8510
R05       23.7962         0.8510
BART      35.8410         0.0000  *
GPT-2     39.4392         0.6754

FMD (Internal), Out-of-Domain (JazzStandards)
Model     FMD (Internal)  p       Sig
MD-NPC    55.1527         –
MD-NS     57.1864         0.0171  *
MD        60.1773         0.4622
R05-NPC   99.1633         0.0000  *
R10       100.4137        0.0000  *
R10-NS    109.6748        0.0000  *
R10-NPC   114.7309        0.0010  *
R05-NS    114.8798        0.9814
R05       118.6610        0.0859
BART      154.7940        0.0000  *
GPT-2     159.8600        0.0004  *

FMD (POP909), In-Domain (HookTheory)
Model     FMD (POP909)    p       Sig
MD-NS     602.2391        –
MD        604.7602        0.7767
MD-NPC    604.8949        0.7767
BART      606.6170        0.7767
GPT-2     610.9195        0.1321
R05-NPC   613.5104        0.0145  *
R10       617.7817        0.0000  *
R10-NS    619.8417        0.4983
R05-NS    622.8565        0.0298  *
R10-NPC   625.5839        0.0039  *
R05       626.4688        0.7767

FMD (POP909), Out-of-Domain (JazzStandards)
Model     FMD (POP909)    p       Sig
GPT-2     638.0580        –
BART      642.0034        0.0015  *
MD-NS     653.8520        0.0000  *
MD        655.2597        0.5850
MD-NPC    656.6548        0.2650
R05-NPC   661.8715        0.5558
R10       667.1043        0.0000  *
R10-NS    669.5915        0.5558
R10-NPC   671.6342        0.2650
R05-NS    671.8228        0.9679
R05       674.9689        0.0141  *