Article

Towards Generative Interest-Rate Modeling: Neural Perturbations Within the Libor Market Model

School of Electrical Engineering, Computing and Mathematical Sciences, Curtin University, Kent Street, Bentley, Perth, WA 6102, Australia
J. Risk Financial Manag. 2026, 19(1), 82; https://doi.org/10.3390/jrfm19010082
Submission received: 30 November 2025 / Revised: 5 January 2026 / Accepted: 12 January 2026 / Published: 21 January 2026
(This article belongs to the Special Issue Quantitative Finance in the Era of Big Data and AI)

Abstract

This study proposes a neural-augmented Libor Market Model (LMM) for swaption surface calibration that enhances expressive power while maintaining the interpretability, arbitrage-free structure, and numerical stability of the classical framework. Classical LMM parametrizations, based on exponential decay volatility functions and static correlation kernels, are known to perform poorly in sparsely quoted and long-tenor regions of swaption volatility cubes. Machine learning–based diffusion models offer flexibility but often lack transparency, stability, and measure-consistent dynamics. To reconcile these requirements, the present approach embeds a compact neural network within the volatility and correlation layers of the LMM, constrained by structural diagnostics, low-rank correlation construction, and HJM-consistent drift. Empirical tests across major currencies (EUR, GBP, USD) and multiple quarterly datasets from 2024 to 2025 show that the neural-augmented LMM consistently outperforms the classical model. Improvements of approximately 7–10% in implied volatility RMSE and 10–15% in PV RMSE are observed across all datasets, with no deterioration in any region of the surface. These results reflect the model’s ability to represent cross-tenor dependencies and surface curvature beyond the reach of classical parametrizations, while remaining economically interpretable and numerically tractable. The findings support hybrid model designs in quantitative finance, where small neural components complement robust analytical structures. The approach aligns with ongoing industry efforts to integrate machine learning into regulatory-compliant pricing models and provides a pathway for future generative LMM variants that retain an arbitrage-free diffusion structure while learning data-driven volatility geometry.

1. Introduction

The Libor Market Model (LMM), also known as the Brace–Gatarek–Musiela model, remains one of the most well-established frameworks for interest rate option pricing (Andersen & Piterbarg, 2010; Brace et al., 1997; Rebonato, 2002). Its appeal lies in the explicit modeling of discretely compounded forward rates, the transparent separation of volatility and correlation parameters (James & Webber, 2000; Karlsson et al., 2017), and the interpretability of its structural assumptions (Henry-Labordère, 2008; Hull, 2022). These features have made the model a staple for pricing and hedging swaptions across trading desks and risk management environments.
Despite its wide adoption, the classical LMM is limited by its reliance on parametric volatility and correlation specifications. Such parametrizations often fail to reproduce the full swaption volatility cube observed in modern markets, particularly its expiry–tenor curvature and cross-sectional structure, especially in sparsely quoted and long-tenor regions (Hagan et al., 2002; Obłój, 2007; Rebonato, 2004). Extensions such as SABR–LMM hybrids (Rebonato & White, 2009) address some smile features but introduce additional assumptions and can remain insufficient for jointly matching expiry and tenor structures.
Recent advances in machine learning have reopened the question of whether established financial models could be enhanced without sacrificing their interpretability. Neural stochastic differential equations (Chen et al., 2018; Kidger et al., 2021) and deep learning approaches to high-dimensional PDEs and BSDEs (Han et al., 2018; Huré et al., 2019) provide powerful tools for approximating complex diffusions, while physics-informed neural networks integrate structural PDE constraints directly into training (Berg & Nystrom, 2018; Raissi et al., 2019). However, most such methods sacrifice transparency or introduce latent representations whose financial interpretation is unclear (so-called black-box approaches). For option-pricing applications, where calibration stability, risk factor interpretability, and the traceability of sensitivities are essential, full replacement of classical structures is rarely acceptable.
Within this context, the present work views neural augmentation not as a substitute for the LMM’s analytical foundation but as a constrained overlay. Neural networks are introduced solely to parameterize volatility and correlation structures within the established forward-rate dynamics, and only under diagnostics designed to enforce model-consistent properties (Alaya et al., 2021; Horváth et al., 2021; Liu et al., 2019). This preserves the interpretability of the diffusion and drift structure while allowing greater expressiveness in matching empirical swaption surfaces. Such a design is aligned with the aims of the Special Issue, which emphasizes applications of modern machine learning tools that enhance, rather than replace, domain-specific models.
Although the industry now calibrates swaptions using OIS discounting and benchmarks such as EURIBOR, SONIA, and SOFR, the purpose of the SOFR discussion here is not to recast the LMM for backward-looking compounded rates. Rather, it illustrates that several calibration difficulties encountered when applying LMM-style models to SOFR (e.g., synthetic term construction, normal-volatility quoting conventions) reflect structural misalignment between forward-looking and backward-looking benchmarks. The proposed neural augmentation demonstrates how to tackle certain deficiencies of classical parametrizations that arise in both IBOR and SOFR settings, without requiring modifications to the LMM's interpretability. The introduction of neural components raises natural questions regarding the necessity of classical extensions such as SABR overlays or jump-diffusion terms. In this framework, neither is included. SABR is excluded because the neural parametrization provides sufficient functional expressiveness to generate strike-dependent shapes in principle. This study calibrates only to ATM quotes due to sparse and inconsistent OTM data, and the smile plots in Appendix A are illustrative (not calibrated) (Richert & Buch, 2022). Jumps are excluded because under swaption-only calibration, jump intensity and diffusive volatility are not separately identifiable in a stable manner (Belomestny & Schoenmakers, 2009; Glasserman & Kou, 2003; Glasserman & Merener, 2003; Steinrücke et al., 2015). Avoiding these components prevents parameter proliferation and maintains the clarity of the diffusion-based interpretation.
The objective of this study is twofold: (i) to improve the calibration of the LMM to the ATM slice of the swaption volatility cube and (ii) to do so while preserving the model’s transparent structure. Neural augmentation is therefore treated not as flexibility for its own sake but as a targeted mechanism to address long-standing calibration deficiencies without altering the economic interpretation of the model. Subsequent sections detail the construction of the neural parametrizations, the diagnostics used to enforce structural consistency, and the empirical performance relative to classical specifications.

2. Materials and Methods

2.1. Market Data and Instruments

2.1.1. Yield Curves

The empirical analysis is based on Bloomberg swaption volatility cubes for USD-SOFR, EUR-EURIBOR, and GBP-SONIA, together with the corresponding discount and projection curves (Andersen & Piterbarg, 2010; Hull, 2022; Rebonato, 2002). For each valuation date, input CSV files provide par yields for the relevant overnight-indexed swap (OIS) curves used for discounting and IBOR-linked swap curves used for projecting forward rates. These are converted into zero-coupon discount factors $P(0,T)$ through a standard bootstrapping routine external to the present framework (Hull, 2022; James & Webber, 2000; Rebonato, 2002). The resulting discount function is represented numerically by a callable interpolant $\mathrm{DF}_0(T)$, which returns $P(0,T)$ for maturities $T$.
Given a tenor structure $\{T_0, \dots, T_N\}$ with accrual factors $\delta_i = T_{i+1} - T_i$, forward rates at time 0 are defined by
$$L_i(0) = \frac{1}{\delta_i}\left(\frac{P(0, T_i)}{P(0, T_{i+1})} - 1\right), \qquad i = 0, \dots, N-1.$$
This is consistent with standard LMM conventions (Andersen & Piterbarg, 2010; Brace et al., 1997; Rebonato, 2002). The par swap rates $S_0$ needed for swaption pricing are computed from the same discount curve: for a swap starting at $T_k$ with $m$ accrual periods,
$$A_0 = \sum_{j=k}^{k+m-1} \delta_j\, P(0, T_{j+1}), \qquad S_0 = \frac{P(0, T_k) - P(0, T_{k+m})}{A_0},$$
ensuring internal consistency between the forward-rate and swap-rate inputs (Andersen & Piterbarg, 2010; Brace et al., 1997; Hull, 2022; Rebonato, 2002).
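The forward-rate and par-swap-rate formulas above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: the function names and the flat 3% continuously compounded example curve are hypothetical.

```python
import numpy as np

def forward_rates(P, delta):
    """Simple forwards L_i(0) = (P(0,T_i)/P(0,T_{i+1}) - 1) / delta_i."""
    return (P[:-1] / P[1:] - 1.0) / delta

def par_swap_rate(P, delta, k, m):
    """Par rate of a swap starting at T_k with m accrual periods,
    computed from the same discount factors as the forwards."""
    annuity = np.sum(delta[k:k + m] * P[k + 1:k + m + 1])
    return (P[k] - P[k + m]) / annuity

# Example: flat 3% continuously compounded curve on a quarterly tenor grid
T = np.arange(0.0, 5.25, 0.25)          # T_0 = 0, ..., T_N = 5y
P = np.exp(-0.03 * T)                   # discount factors P(0, T)
delta = np.diff(T)                      # accrual factors
L = forward_rates(P, delta)             # forwards L_i(0)
S0 = par_swap_rate(P, delta, k=4, m=8)  # 1y-into-2y par swap rate
```

By construction, the par rate of a single-period swap equals the corresponding forward, which is exactly the internal-consistency property the text emphasizes.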

2.1.2. Swaption Volatility Surface

The calibration target is the ATM swaption volatility surface for USD-SOFR, EUR-EURIBOR, and GBP-SONIA; restricting the empirical study to ATM quotes is sensible because reliable OTM/smile grids are comparatively sparse and inconsistent across dates and currencies, and full-smile calibration is left as future work. Market inputs are provided as normal (Bachelier) ATM-implied volatilities $\sigma_N$, quoted in basis points on an expiry–tenor grid for each currency (Hull, 2022; Rebonato, 2002). Expiry and swap-tenor labels (e.g., 6M, 1Y, 5Y) are parsed into year fractions, and the union of all expiries and underlying payment dates is merged with the yield-curve pillars to define the simulation tenor array (Andersen & Piterbarg, 2010; Rebonato, 2002).
For each grid point, an annuity-consistent conversion from normal to Black ATM volatility is performed. Let $F$ denote the ATM forward swap rate for expiry $T_{\mathrm{exp}}$ and underlying swap annuity $A_0$. Under the normal model, the per-annuity price of an ATM payer swaption is
$$\mathrm{PV}^{N}_{\mathrm{ATM}} = \sigma_N \sqrt{\frac{T_{\mathrm{exp}}}{2\pi}}.$$
Under the Black model with volatility $\sigma_B$, the corresponding per-annuity price is
$$\mathrm{PV}^{B}_{\mathrm{ATM}}(\sigma_B) = F\left[\Phi\!\left(\tfrac{1}{2}\sigma_B\sqrt{T_{\mathrm{exp}}}\right) - \Phi\!\left(-\tfrac{1}{2}\sigma_B\sqrt{T_{\mathrm{exp}}}\right)\right],$$
where $\Phi$ is the standard normal CDF. For each grid point, $\sigma_B$ is obtained by numerically solving
$$\mathrm{PV}^{B}_{\mathrm{ATM}}(\sigma_B) = \mathrm{PV}^{N}_{\mathrm{ATM}},$$
using a robust root-finding procedure with adaptive bracketing and safe fallbacks (Hagan et al., 2002; Hull, 2022; Rebonato, 2002; Rebonato & White, 2009). The resulting Black-ATM surface $\sigma^{\mathrm{mkt}}_{\mathrm{ATM}}(T_{\mathrm{exp}}, \tau)$ constitutes the primary calibration target.
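The normal-to-Black ATM conversion can be illustrated with a minimal sketch. Here a plain bisection stands in for the paper's adaptive-bracketing root finder, and all function names and parameter values are hypothetical; since $\mathrm{PV}^{B}_{\mathrm{ATM}}$ is monotone in $\sigma_B$, bisection on a wide bracket suffices.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pv_atm_normal(sigma_n, T):
    """Per-annuity Bachelier ATM payer price: sigma_N * sqrt(T / (2*pi))."""
    return sigma_n * math.sqrt(T / (2.0 * math.pi))

def pv_atm_black(F, sigma_b, T):
    """Per-annuity Black ATM payer price: F * [Phi(h) - Phi(-h)], h = sigma_b*sqrt(T)/2."""
    h = 0.5 * sigma_b * math.sqrt(T)
    return F * (phi(h) - phi(-h))

def normal_to_black_atm(sigma_n, F, T, lo=1e-8, hi=5.0, tol=1e-12):
    """Bisection solve of PV_B(sigma_b) = PV_N for the Black ATM vol."""
    target = pv_atm_normal(sigma_n, T)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if pv_atm_black(F, mid, T) < target:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Example: 80 bp normal vol, 3% forward, 2y expiry
sigma_b = normal_to_black_atm(sigma_n=0.0080, F=0.03, T=2.0)
```

For small vols the two prices are approximately proportional ($\sigma_B \approx \sigma_N / F$), which gives a useful sanity check on the solver's output.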

2.2. Classical LIBOR Market Model Specification

2.2.1. Forward Dynamics

Let $\{T_0, \dots, T_N\}$ be a fixed tenor structure with associated forward rates $L_i(t)$ for accrual periods $[T_i, T_{i+1}]$. Under the forward measure $Q^{T_{i+1}}$ and numeraire $P(t, T_{i+1})$, the LMM specifies
$$dL_i(t) = \sigma_i(t)\, L_i(t)\, dW_i^{(i+1)}(t),$$
where $\sigma_i(t)$ is the instantaneous volatility and $W_i^{(i+1)}$ is a Brownian motion under $Q^{T_{i+1}}$ (Andersen & Piterbarg, 2010; Brace et al., 1997; Rebonato, 2002). When written under a common terminal measure $Q^{T_N}$, the coupled dynamics take the form
$$dL_i(t) = \mu_i(t)\, L_i(t)\, dt + \sigma_i(t)\, L_i(t)\, dW_i(t),$$
with drift given by the HJM-style no-arbitrage condition
$$\mu_i(t) = -\sigma_i(t) \sum_{j=i+1}^{N-1} \frac{\delta_j L_j(t)\, \rho_{ij}\, \sigma_j(t)}{1 + \delta_j L_j(t)},$$
and instantaneous correlations
$$\mathbb{E}\big[\, dW_i(t)\, dW_j(t)\, \big] = \rho_{ij}\, dt.$$

2.2.2. Functional Volatility and Correlation

The classical benchmark uses low-dimensional functional forms:
$$\sigma_i^{\mathrm{cl}}(t) = a \exp\big(-b\, \tau_i(t)\big), \qquad \tau_i(t) = \max(T_i - t,\, 0),$$
$$\rho_{ij}^{\mathrm{cl}} = \exp\big(-\beta\, |i - j|\big),$$
with parameters $(a, b, \beta)$ to be calibrated. This structure reduces the number of free parameters and reflects the empirical decay of volatilities and correlations across maturities (James & Webber, 2000; Karlsson et al., 2017; Rebonato, 2002, 2004).
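The classical three-parameter benchmark is compact enough to sketch directly; the function names and parameter values below are illustrative.

```python
import numpy as np

def classical_vol(a, b, T_forw, t):
    """Exponential-decay instantaneous vol: sigma_i(t) = a * exp(-b * max(T_i - t, 0))."""
    tau = np.maximum(T_forw - t, 0.0)
    return a * np.exp(-b * tau)

def classical_corr(beta, N):
    """Static exponential correlation kernel: rho_ij = exp(-beta * |i - j|)."""
    idx = np.arange(N)
    return np.exp(-beta * np.abs(idx[:, None] - idx[None, :]))

# Example: 10 annual forwards, moderate decay parameters
sigma = classical_vol(a=0.20, b=0.10, T_forw=np.arange(1.0, 11.0), t=0.0)
rho = classical_corr(beta=0.05, N=10)
```

The exponential kernel is symmetric with unit diagonal and positive definite, which is why it is a convenient baseline; its limitation, as the text notes, is that a single $\beta$ cannot capture richer cross-tenor dependence.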

2.3. Neural Parametrization of Volatility and Correlation

Architecture

To enhance calibration power while preserving interpretability, the classical parametrization is overlaid with a compact neural network. Network $f_\theta$ takes as input the current time $t$ and forward-rate vector $L(t) = (L_0(t), \dots, L_{N-1}(t))$ and outputs perturbations to the classical volatility and a low-rank representation of the correlation structure:
$$(\Delta\sigma(t),\, B(t)) = f_\theta\big(t, L(t)\big),$$
where $\Delta\sigma(t) \in \mathbb{R}^N$ and $B(t) \in \mathbb{R}^{N \times r}$ for a small rank $r$ (typically $r \in \{2, 3\}$). The effective volatility and correlation are then defined via
$$\tilde{C}(t) = B(t)\, B(t)^{\top},$$
$$D(t) = \mathrm{diag}\Big(\sqrt{\max\big(\mathrm{diag}(\tilde{C}(t)),\, \varepsilon_d\big)}\Big),$$
$$C_0(t) = D(t)^{-1}\, \tilde{C}(t)\, D(t)^{-1},$$
$$\rho(t) = \tfrac{1}{2}\big(C_0(t) + C_0(t)^{\top}\big) + \varepsilon I.$$
Here, $\rho(t)$ is obtained through the diagonal rescaling of $\tilde{C}(t)$ to (approximately) unit diagonal, followed by symmetrization and a small diagonal jitter term to improve conditioning; PSD is ensured by construction via $\tilde{C}(t) = B(t) B(t)^{\top}$ (Henry-Labordère, 2008; Rebonato, 2004).
The network itself is a shallow multi-layer perceptron with small hidden layers (8–16 units), smooth activations, and explicit output clipping,
$$\Delta\sigma_i(t) \in [\underline{\Delta},\, \overline{\Delta}], \qquad B_{ij}(t) \in [\underline{b},\, \overline{b}],$$
to avoid extreme values and improve numerical stability.
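The low-rank correlation construction described above can be sketched as follows; the helper name and the tolerance values are illustrative, with a random rank-2 factor standing in for the network output $B(t)$.

```python
import numpy as np

def lowrank_correlation(B, eps_d=1e-8, eps=1e-5):
    """Well-conditioned correlation matrix from a rank-r factor B (N x r):
    C_tilde = B B^T (PSD by construction), diagonal rescaling to unit
    diagonal, then symmetrization plus a small diagonal jitter."""
    C_tilde = B @ B.T
    d = np.sqrt(np.maximum(np.diag(C_tilde), eps_d))
    C0 = C_tilde / np.outer(d, d)                        # approx. unit diagonal
    return 0.5 * (C0 + C0.T) + eps * np.eye(B.shape[0])  # symmetrize + jitter

rng = np.random.default_rng(0)
rho = lowrank_correlation(rng.standard_normal((8, 2)))   # N = 8, rank r = 2
```

Because $\tilde{C} = B B^{\top}$ is PSD and the rescaling preserves that property, the jitter $\varepsilon I$ makes the result strictly positive definite, so the Cholesky factorization needed later in the simulation always succeeds.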
Jump components are not modeled in the final specification; jump-related heads are disabled and no Poisson or jump-amplitude parameters enter the SDEs.

2.4. Monte Carlo Simulation and Swaption Pricing

2.4.1. Path Simulation

Both classical and neural-augmented LMMs are simulated under the terminal measure $Q^{T_N}$ using an Euler–Maruyama scheme with log-normal updates to preserve the positivity of forward rates (Andersen & Piterbarg, 2010; Brace et al., 1997; Glasserman & Merener, 2003; Rebonato, 2002). For a time step of size $\Delta t$, the update for forward rate $L_i(t)$ is
$$L_i(t + \Delta t) = L_i(t)\, \exp\!\Big[\Big(\mu_i(t) - \tfrac{1}{2}\sigma_i(t)^2\Big)\Delta t + \sigma_i(t)\sqrt{\Delta t}\, \big(\Gamma(t)\, \epsilon_t\big)_i\Big],$$
where $\epsilon_t \sim \mathcal{N}(0, I)$, and $\Gamma(t)$ is the Cholesky factor of the correlation matrix $\rho(t)$ after normalization/symmetrization and jitter.
Simulation was fully vectorized: a fixed number of time steps n steps and Monte Carlo paths n paths were used, with quantities stored in rank-3 tensors ( time , path , forward ) . Classical and neural simulations shared the same numerical grid to permit direct comparison.
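A vectorized version of the log-normal Euler step can be sketched as below. This is a minimal sketch under simplifying assumptions (zero drift, a static exponential correlation); the function name and parameter values are illustrative, not the paper's code.

```python
import numpy as np

def log_euler_step(L, sigma, mu, Gamma, dt, rng):
    """One log-normal Euler-Maruyama update, vectorized over paths.

    L:     (n_paths, N) current forwards
    sigma: (N,) instantaneous vols
    mu:    (n_paths, N) terminal-measure drifts
    Gamma: (N, N) Cholesky factor of the correlation matrix
    """
    eps = rng.standard_normal(L.shape)   # iid N(0, I) draws per path
    dW = eps @ Gamma.T                   # correlated increments (Gamma @ eps per path)
    return L * np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * dW)

rng = np.random.default_rng(1)
N, n_paths, dt = 10, 200, 0.25
L = np.full((n_paths, N), 0.03)          # flat 3% initial forwards
sigma = np.full(N, 0.2)
idx = np.arange(N)
Gamma = np.linalg.cholesky(np.exp(-0.05 * np.abs(idx[:, None] - idx[None, :])))
L_next = log_euler_step(L, sigma, np.zeros((n_paths, N)), Gamma, dt, rng)
```

The exponential update guarantees positivity of every simulated forward, which is the property the text highlights as the reason for using log-normal rather than plain Euler steps.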

2.4.2. Swaption Pricing and Implied Volatility

For each expiry–tenor pair $(T_{\mathrm{exp}}, \tau)$ present in the market surface, the model-implied ATM payer swaption price was computed from simulated paths. Let $S(T_{\mathrm{exp}})$ denote the simulated par swap rate at the option expiry, constructed from the simulated discount factors. The discounted payoff for each path is
$$\Pi = \max\big(S(T_{\mathrm{exp}}) - S_0,\, 0\big)\, A_0,$$
discounted back to time 0 using the simulated or initial discount curve. The Monte Carlo estimator of the present value is
$$\widehat{\mathrm{PV}}_{\mathrm{MC}} = \frac{1}{n_{\mathrm{paths}}} \sum_{k=1}^{n_{\mathrm{paths}}} \Pi^{(k)}.$$
An implied Black-ATM volatility $\sigma^{\mathrm{model}}_{\mathrm{ATM}}(T_{\mathrm{exp}}, \tau)$ is then recovered by inverting the Black-ATM pricing formula with $F = S_0$ and annuity $A_0$, ensuring comparability with the market surface and consistency with standard practice (Andersen & Piterbarg, 2010; Hull, 2022; Rebonato, 2002).

2.5. Calibration Objectives and Diagnostics

2.5.1. Vega-Weighted Swaption Data Loss

The primary calibration objective is a vega-weighted least-squares error between market and model-implied ATM Black volatilities over all valid surface points (Hagan et al., 2002; Rebonato, 2002; Rebonato & White, 2009):
$$\mathcal{L}_{\mathrm{data}} = \frac{1}{N_{\mathrm{cells}}} \sum_{(T_{\mathrm{exp}}, \tau)} \left(\frac{\widehat{\mathrm{PV}}_{\mathrm{MC}}(T_{\mathrm{exp}}, \tau) - \mathrm{PV}^{\mathrm{mkt}}(T_{\mathrm{exp}}, \tau)}{\mathrm{Vega}^{\mathrm{mkt}}(T_{\mathrm{exp}}, \tau)}\right)^{2}.$$
Here, $\mathrm{PV}^{\mathrm{mkt}}$ is the market price implied by the market Black-ATM volatility, and $\mathrm{Vega}^{\mathrm{mkt}}$ is the corresponding Black vega. This normalizes errors in "volatility points" and emphasizes regions where the market is most sensitive.
To reduce computational cost, a mini-batch strategy is employed: at each training step, only a small random subset of surface cells is used to estimate L data . Over the course of training, all cells are visited repeatedly.
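The vega-weighted objective and its mini-batch estimator can be sketched as follows; the function name and the synthetic surface data are illustrative, not the paper's implementation.

```python
import numpy as np

def vega_weighted_loss(pv_model, pv_mkt, vega_mkt, batch_idx=None):
    """Mean squared vega-normalized pricing error over (a subset of) surface cells."""
    err = (pv_model - pv_mkt) / vega_mkt     # error expressed in volatility points
    if batch_idx is not None:                # mini-batch over surface cells
        err = err[batch_idx]
    return np.mean(err ** 2)

# Synthetic surface: model mispriced by exactly 2 vol bps at every cell,
# so the loss should equal (0.0002)^2 regardless of the vega levels.
rng = np.random.default_rng(2)
pv_mkt = rng.uniform(0.001, 0.01, 50)
vega = rng.uniform(0.1, 0.5, 50)
pv_model = pv_mkt + 0.0002 * vega
loss = vega_weighted_loss(pv_model, pv_mkt, vega)
batch_loss = vega_weighted_loss(pv_model, pv_mkt, vega, np.arange(0, 50, 2))
```

Dividing by vega before squaring is what makes errors comparable across cells with very different price sensitivities, as the text notes.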

2.5.2. Structural Regularization and Diagnostics

In addition to the data loss, a light structural regularizer is introduced. Rather than computing a full second-order pricing PDE residual, a first-order proxy penalizes rapid time variation and large gradients of the neural outputs, following ideas from physics-informed and BSDE-based deep learning for pricing and control (Berg & Nystrom, 2018; Han et al., 2018; Huré et al., 2019; Raissi et al., 2019):
$$\mathcal{L}_{\mathrm{struct}} = \mathbb{E}_{(t, L)}\Big[\big\|\partial_t \sigma(t, L)\big\|^{2} + \big\|\nabla_L \sigma(t, L)\big\|^{2} + \big\|\sigma(t, L)\big\|^{2}\Big],$$
where expectations are approximated using states visited along simulated paths. This encourages smoothness in time and state space without incurring the numerical overhead and instability of nested automatic differentiation.
The set of the following diagnostics is tracked but not directly optimized:
  • A minimum eigenvalue of ρ ( t ) over time and paths (PSD check);
  • The fraction of simulated forward rates that become negative (positivity check);
  • Deviations from martingale conditions for discounted swap rates;
  • Gradient norms of network parameters and incidence of NaN/Inf values.
These statistics are used to tune regularization weights and learning rates but are not explicitly included in L total .

2.5.3. Total Objective

The overall loss used for neural calibration is
$$\mathcal{L}_{\mathrm{total}} = \lambda_{\mathrm{data}}\, \mathcal{L}_{\mathrm{data}} + \lambda_{\mathrm{struct}}\, \mathcal{L}_{\mathrm{struct}},$$
with $\lambda_{\mathrm{data}} \gg \lambda_{\mathrm{struct}}$ to prioritize market fit while maintaining a minimal level of smoothness and stability.

2.6. Pricing Error Diagnostics

In the ATM setting used throughout the empirical study, the strike is $K = S_0$ at each expiry–tenor grid point. Model prices are compared against market-implied Black prices:
$$\mathrm{PV}^{\mathrm{Black}} = \mathrm{DF}_{\mathrm{pay}} \cdot \Delta \cdot \mathrm{BlackCall}(F, K, \sigma_{\mathrm{mkt}}, T).$$
Pricing errors are evaluated both in absolute basis points and as vega-weighted deviations:
$$\text{Absolute error (bp)} = 10^{4} \cdot \big|\mathrm{PV}^{\mathrm{model}} - \mathrm{PV}^{\mathrm{market}}\big|,$$
$$\text{Vega-weighted error} = \frac{\mathrm{PV}^{\mathrm{model}} - \mathrm{PV}^{\mathrm{market}}}{\mathrm{BlackVega}(F, K, T, \sigma_{\mathrm{mkt}})}.$$
Model-implied volatilities $\hat{\sigma}$ are obtained by inverting the Black formula:
$$\mathrm{PV}^{\mathrm{model}} = \mathrm{DF}_{\mathrm{pay}} \cdot \Delta \cdot \mathrm{BlackCall}(F, K, \hat{\sigma}, T).$$
The implied volatility error is computed as
$$\text{IV error} = \hat{\sigma}^{\mathrm{model}} - \sigma^{\mathrm{market}},$$
and is reported separately for the classical and neural models.

2.7. Bucketed RMSE Analysis

To assess calibration quality across the swaption surface, we group swaption quotes into two-dimensional buckets defined by expiry and underlying swap tenor. Specifically,
  • Expiry buckets: $T \in [0, 1],\ (1, 2],\ (2, 5],\ (5, 10],\ (10, 30]$;
  • Tenor buckets: $\tau \in [1, 2],\ (2, 5],\ (5, 10],\ (10, 30]$.
Within each bucket, we compute the root mean squared error (RMSE) for implied volatility errors and for price errors:
$$\mathrm{RMSE}(\sigma) = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \big(\hat{\sigma}_i - \sigma_i^{\mathrm{mkt}}\big)^{2}},$$
$$\mathrm{RMSE}(\mathrm{bp}) = 10^{4} \cdot \sqrt{\frac{1}{N} \sum_{i=1}^{N} \big(\mathrm{PV}_i^{\mathrm{model}} - \mathrm{PV}_i^{\mathrm{market}}\big)^{2}},$$
where $N$ is the number of quotes in the bucket, $\hat{\sigma}_i$ is the model-implied volatility, and $\sigma_i^{\mathrm{mkt}}$ is the corresponding market-implied volatility. Price RMSE is reported in basis points by scaling with $10^{4}$.
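The bucketed RMSE computation can be sketched as below; the function name, bucket edges, and synthetic quotes are illustrative (the edges match the buckets listed above), and empty buckets are reported as NaN.

```python
import numpy as np

def bucketed_rmse(expiry, tenor, err, expiry_edges, tenor_edges):
    """RMSE of errors within each (expiry, tenor) bucket; NaN for empty buckets."""
    ne, nt = len(expiry_edges) - 1, len(tenor_edges) - 1
    out = np.full((ne, nt), np.nan)
    ei = np.digitize(expiry, expiry_edges) - 1   # bucket index per quote
    ti = np.digitize(tenor, tenor_edges) - 1
    for i in range(ne):
        for j in range(nt):
            mask = (ei == i) & (ti == j)
            if mask.any():
                out[i, j] = np.sqrt(np.mean(err[mask] ** 2))
    return out

expiry_edges = [0, 1, 2, 5, 10, 30]
tenor_edges = [1, 2, 5, 10, 30]
rng = np.random.default_rng(3)
rmse = bucketed_rmse(rng.uniform(0, 30, 200), rng.uniform(1, 30, 200),
                     rng.normal(0, 0.1, 200), expiry_edges, tenor_edges)
```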

2.8. Drop-One Expiry Jackknife

To test robustness to surface segmentation, we evaluate calibration accuracy under a leave-one-expiry-out protocol. For each expiry bucket $T_{\mathrm{drop}}$, we recompute the RMSE after excluding all swaptions with expiry in $T_{\mathrm{drop}}$:
$$\mathrm{RMSE}^{\mathrm{Jackknife}}_{-T_{\mathrm{drop}}} = \mathrm{RMSE}\big(\text{IV errors for all swaptions with } T \notin T_{\mathrm{drop}}\big).$$
This diagnostic highlights whether the reported improvements are concentrated in (or driven by) a particular expiry region, rather than being broadly distributed across the surface.

2.9. Statistical Significance Tests

Paired statistical tests were applied to determine whether neural calibration improves pricing accuracy in a statistically meaningful way:
  • Paired t-test: It tests mean error differences across all instruments.
  • Wilcoxon signed-rank test: A non-parametric test on absolute error ranks.
  • Cohen’s d: It measures the effect size of the improvement:
    $$d = \frac{\bar{X}_{\mathrm{classic}} - \bar{X}_{\mathrm{neural}}}{s}, \qquad s = \text{pooled standard deviation}.$$
  • Proportion improved: The fraction of instruments for which the neural RMSE is lower than the classical RMSE.
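The four diagnostics above can be sketched together in one helper; this assumes SciPy is available, and the function name and the synthetic error vectors (neural errors scaled to be roughly 10% smaller) are illustrative, not the paper's code.

```python
import numpy as np
from scipy import stats

def compare_models(err_classic, err_neural):
    """Paired significance tests on per-instrument absolute pricing errors."""
    a, b = np.abs(err_classic), np.abs(err_neural)
    t_stat, t_p = stats.ttest_rel(a, b)            # paired t-test on mean difference
    w_stat, w_p = stats.wilcoxon(a, b)             # non-parametric signed-rank test
    pooled = np.sqrt(0.5 * (a.var(ddof=1) + b.var(ddof=1)))
    cohens_d = (a.mean() - b.mean()) / pooled      # effect size of the improvement
    prop_improved = np.mean(b < a)                 # fraction of instruments improved
    return {"t_p": t_p, "wilcoxon_p": w_p, "d": cohens_d, "improved": prop_improved}

# Synthetic example with n = 213 instruments, matching the per-dataset count
rng = np.random.default_rng(4)
e_classic = rng.normal(0, 0.20, 213)
e_neural = 0.9 * e_classic + rng.normal(0, 0.01, 213)
res = compare_models(e_classic, e_neural)
```

Reporting the Wilcoxon test alongside the t-test guards against non-normal error distributions, and the proportion-improved statistic shows whether gains are broad-based rather than driven by a few instruments.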

2.10. Summary Metrics

All diagnostics contribute to headline metrics, including
  • Overall implied volatility RMSE: IV RMSE ;
  • Overall pricing RMSE in basis points: BP RMSE ;
  • Percentage improvement: $\%\,\mathrm{Improvement} = \dfrac{\mathrm{RMSE}_{\mathrm{classic}} - \mathrm{RMSE}_{\mathrm{neural}}}{\mathrm{RMSE}_{\mathrm{classic}}} \times 100$;
  • Statistical test p-values and effect sizes.
Together, these diagnostics validate the neural LMM’s ability to improve fit and maintain robustness across expiries, tenors, and calibration conditions.

2.11. Training Procedure and Numerical Safeguards

2.11.1. Optimization Scheme

The neural parameters θ are initialized around the classical solution ( a , b , β ) and optimized using a stochastic gradient method with a conservative learning rate, similar in spirit to other neural extensions of the LMM (Horváth et al., 2021; Liu et al., 2019; Sridi & Bilokon, 2023). Training proceeds in micro-batches: for each time step in the simulation grid, a small subset of paths (e.g., 8) and a mini-batch of swaption surface cells are used to compute L total and its gradient. The number of gradient updates per time step is adaptively bounded based on recent loss levels, with explicit caps to avoid excessive computation.

2.11.2. Stability Mechanisms

Several numerical safeguards are employed—gradient clipping, output clipping, correlation projection, and finite-value checks—to maintain the stability of the neural-augmented dynamics and avoid numerical arbitrage (Horváth et al., 2021; Kidger et al., 2021; Marshall et al., 2024):
  • Gradient clipping: Global norm clipping is applied to parameter gradients to prevent exploding updates.
  • Value clipping: Neural outputs are clipped to predefined bounds before constructing volatilities and correlation factors.
  • Correlation projection: The correlation output is normalized to unit scale, symmetrized, and diagonally jittered to ensure stable Cholesky factorization.
  • Numerics checks: All intermediate tensors involved in loss computation are passed through finite-value checks; NaNs or Infs trigger diagnostic flags rather than silent failure.
  • Shared code path: Classical and neural simulations share the same simulation and pricing routines, with the neural block deactivated when f θ is absent, ensuring that the classical benchmark remains a stable point of reference.
These choices collectively convert the initially overparameterized and numerically fragile prototype into a tractable, interpretable, and computationally manageable neural-augmented LMM suitable for swaption surface calibration.

3. Discussion

3.1. Computational Environment and Performance

3.1.1. Hardware and Software Configuration

All experiments were conducted on a single workstation with the following specifications:
  • CPU: Intel® Core i7-8700K @ 3.70 GHz (6 cores/12 threads);
  • System Memory: 16 GB RAM;
  • GPU: NVIDIA GeForce RTX 2080 SUPER (8 GB VRAM);
  • Driver/CUDA: NVIDIA driver 440.33.01, CUDA 10.2;
  • Operating Environment: Docker container (tensorflow/tensorflow:2.12.0-gpu-jupyter);
  • Deep Learning Framework: TensorFlow 2.12.0.
GPU utilization during experiments remained within nominal thermal and power envelopes, with no concurrent GPU workloads active. All reported timings correspond to wall-clock measurements on this single-GPU system.

3.1.2. Model Configuration and Runtime Parameters

The numerical experiments used the following simulation and calibration setup:
  • Number of Monte Carlo paths: n paths = 200 ;
  • Time step: Δ t = 0.25 ;
  • Number of time steps: n steps (model-dependent);
  • Mini-batch size: equal to the number of surface grid cells;
  • Optimization: gradient-based calibration using automatic differentiation.
Initialization and Parameterization Details (Directly from the Code)
The model is a compact MLP with a shared trunk and two heads:
$$[t,\, L(t)] \in \mathbb{R}^{N+1} \;\xrightarrow{\ \mathrm{Dense}(4,\, \tanh)\ }\; h(t, L),$$
$$h(t, L) \;\xrightarrow{\ \text{vol head: } \mathrm{Dense}(8,\, \mathrm{ReLU}) \to \mathrm{Dense}(N)\ }\; \text{vol params},$$
$$h(t, L) \;\xrightarrow{\ \text{corr head: } \mathrm{Dense}(4,\, \mathrm{ReLU}) \to \mathrm{Dense}\big(N(N+1)/2\big)\ }\; \text{triangular params},$$
$$h(t, L) \;\xrightarrow{\ \mathrm{Dense}(1,\, \mathrm{sigmoid})\ }\; w(t, L).$$
All computations are in float32 (explicit casts in call and T_forw are stored as tf.float32). Dense-layer initializers are the TensorFlow/Keras defaults (kernel: Glorot/Xavier uniform; bias: zeros), since no custom initializers are specified in the class.

3.1.3. Volatility Map (Fixed by Code)

The code implements a tenor-decaying baseline
$$\tau_i(t) = \big(T_i^{\mathrm{forw}} - t\big)^{+}, \qquad \sigma_i^{\mathrm{base}}(t) = a_{\mathrm{vol}} \exp\big(-b_{\mathrm{decay}}\, \tau_i(t)\big),$$
and a small multiplicative perturbation produced by the network:
$$u_i(t, L) = \tanh\big(\mathrm{vol\_head}(h(t, L))_i\big) \in [-1, 1], \qquad \sigma_i(t) = \sigma_i^{\mathrm{base}}(t)\, \exp\big(s_{\mathrm{pert}}\, u_i(t, L)\big),$$
with $s_{\mathrm{pert}} \equiv$ perturb_scale $= 0.15$. For numerical stability, the output volatility is explicitly clipped in the code to
$$\sigma_i(t) \in [10^{-6},\, 3.0].$$
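The volatility map above can be sketched in NumPy; the function name is hypothetical, while the tanh bound, perturbation scale of 0.15, and the clip to $[10^{-6}, 3.0]$ follow the values stated in the text.

```python
import numpy as np

def perturbed_vol(a_vol, b_decay, T_forw, t, u, perturb_scale=0.15):
    """Exp-decay baseline vol times a bounded multiplicative neural perturbation.

    u is the raw network output; tanh bounds it to [-1, 1], so the
    perturbation factor lies in [exp(-0.15), exp(0.15)]."""
    tau = np.maximum(T_forw - t, 0.0)
    sigma_base = a_vol * np.exp(-b_decay * tau)
    sigma = sigma_base * np.exp(perturb_scale * np.tanh(u))
    return np.clip(sigma, 1e-6, 3.0)     # explicit stability clip, as in the text

# Example: 5 annual forwards, extreme raw network outputs to exercise the bounds
sigma = perturbed_vol(0.20, 0.10, np.arange(1.0, 6.0), t=0.0,
                      u=np.array([3.0, -3.0, 0.0, 1.0, -1.0]))
```

The tanh-plus-scale design means the network can shift each vol by at most about ±16% of its baseline, which is the mechanism the overfitting discussion below relies on.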

3.1.4. Correlation Map

The neural correlation candidate is SPSD by construction. The correlation head outputs a vector that is reshaped into a lower-triangular matrix $T(t, L)$ (via tfp.math.fill_triangular). The diagonal is forced to be strictly positive,
$$T_{ii}(t, L) \leftarrow \mathrm{softplus}\big(T_{ii}(t, L)\big) + 10^{-4},$$
and the SPSD covariance is formed as
$$\Sigma_{\mathrm{nn}}(t) = T(t, L)\, T(t, L)^{\top}.$$
This is converted to a correlation matrix through diagonal rescaling:
$$C_{\mathrm{nn}}(t) = D(t)^{-1}\, \Sigma_{\mathrm{nn}}(t)\, D(t)^{-1}, \qquad D(t) = \mathrm{diag}\Big(\sqrt{\max\big(\mathrm{diag}(\Sigma_{\mathrm{nn}}(t)),\, 10^{-8}\big)}\Big).$$
Finally, symmetry and conditioning are enforced by the same code-level step applied twice:
$$C \leftarrow \tfrac{1}{2}\big(C + C^{\top}\big) + 10^{-5} I.$$
Importantly, PSD is ensured by the $T T^{\top}$ construction and correlation normalization.
The learned correlation is anchored to an exponential baseline
$$C_{ij}^{\mathrm{base}} = \exp\big(-\beta_{\mathrm{opt}}\, |i - j|\big),$$
and mixed with the neural candidate using a sigmoid gate with an explicit cap,
$$w(t, L) = \mathrm{corr\_mix}\big(h(t, L)\big) \cdot w_{\max}, \qquad w_{\max} \equiv \text{w\_cap} = 0.15,$$
so that $w(t, L) \in [0, 0.15]$ in the provided code. The final correlation is
$$C(t) = \big(1 - w(t, L)\big)\, C^{\mathrm{base}} + w(t, L)\, C_{\mathrm{nn}}(t),$$
followed by the symmetrization/jitter step above.
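The anchored mixing step can be sketched as follows; the function name is hypothetical, a normalized rank-2 factor stands in for the neural correlation candidate, and the cap of 0.15 and jitter of $10^{-5}$ match the values stated in the text.

```python
import numpy as np

def gated_correlation(C_base, C_nn, gate_raw, w_cap=0.15, jitter=1e-5):
    """Convex mix of baseline and neural correlation with a capped sigmoid gate,
    followed by the symmetrization/jitter step."""
    w = w_cap / (1.0 + np.exp(-gate_raw))     # sigmoid gate scaled to [0, w_cap]
    C = (1.0 - w) * C_base + w * C_nn
    return 0.5 * (C + C.T) + jitter * np.eye(C.shape[0])

N = 6
idx = np.arange(N)
C_base = np.exp(-0.05 * np.abs(idx[:, None] - idx[None, :]))  # exponential anchor

# Neural candidate: normalized rank-2 factor model (PSD by construction)
rng = np.random.default_rng(5)
B = rng.standard_normal((N, 2))
S = B @ B.T
d = np.sqrt(np.diag(S))
C_nn = S / np.outer(d, d)

C = gated_correlation(C_base, C_nn, gate_raw=2.0)
```

Because both inputs are PSD correlation matrices and the mix is convex, the gated result stays PSD with unit diagonal (up to the jitter), so the learned structure can only deviate modestly from the exponential anchor, as the overfitting discussion below emphasizes.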

3.1.5. Overfitting Mitigation Under Mini-Batch Calibration (Clarified Using the Code Design)

Overfitting is mitigated primarily by structural constraints that are explicit in the implementation:
  • Low capacity: The shared representation comprises only four units, and the heads are small (eight and four hidden units), limiting function complexity.
  • Bounded, small volatility deviations: Perturbations are bounded by tanh and scaled by perturb_scale = 0.15 , and then volatility is clipped to [ 10 6 , 3.0 ] . This prevents the network from fitting idiosyncratic quotes by extreme local volatility distortions.
  • PSD-by-construction correlation with anchored mixing: The correlation candidate is constrained via T T and correlation normalization, and its impact is restricted by a capped mixing weight w ( t , L ) [ 0 , 0.15 ] (with w_cap non-trainable in the provided code). Thus, the learned correlation can only deviate modestly from the interpretable exponential baseline unless the cap is explicitly relaxed.
  • Numerical regularization: Small diagonal jitter ( 10 5 I ) is added to the correlation output, improving conditioning and reducing sensitivity to mini-batch noise.

3.1.6. Runtime Performance

Figure 1 provides a visual comparison in seconds between wall-clock runtimes for the classical and neural implementations under identical hardware conditions.
For example, for Q2 of year 2024, the resulting end-to-end computational overhead of the neural approach corresponds to an approximate slowdown factor of
$$\text{Slowdown} \approx \frac{0.218 + 20.141}{0.602 + 16.086} = \frac{20.359}{16.688} \approx 1.22\times,$$
relative to the classical calibration–simulation pipeline. For some runs, the classical pipeline is slightly slower due to the BLAS threading/CPU state. Despite the increased runtime, the neural formulation enables substantially richer model expressiveness and end-to-end differentiability, which are not available in the classical setting.

3.2. Model Comparison on OOS and Long-Tenor Holdout

For each dataset (currency–quarter), we compared a classic model against a neural model using the root mean squared error (RMSE) measured in volatility points. Given targets y i and predictions y ^ i on a test set of size n,
$$\mathrm{RMSE} = \sqrt{\frac{1}{n} \sum_{i=1}^{n} \big(y_i - \hat{y}_i\big)^{2}}.$$
A lower RMSE indicates better out-of-sample fit.

3.2.1. Per-Row 80/20 Masked-Holdout Evaluation

Figure 2 shows the masked-holdout RMSE computed on a fixed 20% subset of tenor cells selected independently within each expiry row (dataset-specific mask). For each dataset d, we denote the classic and neural test errors by RMSE classic ( d ) and RMSE neural ( d ) . Bars are shown side-by-side to make absolute error differences visually comparable across datasets.

3.2.2. Long-Tenor Holdout Evaluation

Figure 3 evaluates both models on a long-tenor holdout subset defined by a tenor cutoff of
$$\mathcal{D}^{(d)}_{\mathrm{holdout}} = \big\{(x_i, y_i) \in \mathcal{D}^{(d)} : T_{\mathrm{end}}(i) > T_0\big\}, \qquad T_0 = 8.5.$$
RMSE is then computed on D holdout ( d ) using Equation (33). This isolates performance on the long end of the surface, where extrapolation risk is typically higher.

3.2.3. Relative Improvement Summary

To summarize the benefit of the neural model, Figure 4 plots the percentage improvement versus the classic baseline:
$$\Delta\%(d) = \frac{\mathrm{RMSE}^{(d)}_{\mathrm{classic}} - \mathrm{RMSE}^{(d)}_{\mathrm{neural}}}{\mathrm{RMSE}^{(d)}_{\mathrm{classic}}} \times 100.$$
Positive Δ % ( d ) indicates that the neural model reduces error; negative values would indicate degradation. Two bars per dataset are shown: one for the OOS per-row test set and one for the long-tenor holdout test set.

3.2.4. Classical vs. Neural Scatter

Figure 5 plots $\big(\mathrm{RMSE}^{(d)}_{\mathrm{classic}},\, \mathrm{RMSE}^{(d)}_{\mathrm{neural}}\big)$ for each dataset $d$. The diagonal line $y = x$ marks equal performance:
$$\mathrm{RMSE}^{(d)}_{\mathrm{neural}} = \mathrm{RMSE}^{(d)}_{\mathrm{classic}}.$$
Points below the diagonal correspond to $\mathrm{RMSE}^{(d)}_{\mathrm{neural}} < \mathrm{RMSE}^{(d)}_{\mathrm{classic}}$ (neural better), while points above indicate the opposite. The distance to the diagonal provides a visual cue for effect size.

3.2.5. Interpretation Note

Absolute RMSE levels differ across currencies/quarters due to underlying market regime and dataset scale. For a cross-dataset comparison of model gain, Equation (35) (percentage reduction) is the most comparable metric, while Figure 2 and Figure 3 emphasize absolute error magnitudes.

3.3. Implied Volatility Error

Primary metrics (headline IV/PV RMSE):

Currency  Year  Quarter  IV RMSE (Classic)  IV RMSE (Neural)  Δ IV (%)  PV RMSE bp (Classic)  PV RMSE bp (Neural)  Δ PV (%)  n
EUR       2024  Q2       0.216              0.200             7.69      289.8                 258.3                10.87     213
EUR       2024  Q3       0.216              0.195             9.70      306.3                 265.2                13.44     213
EUR       2024  Q4       0.231              0.207             10.21     293.7                 251.7                14.32     213
EUR       2025  Q2       0.172              0.155             10.13     277.0                 236.1                14.74     213
GBP       2024  Q2       0.160              0.148             7.63      230.3                 201.9                12.33     213
GBP       2024  Q3       0.159              0.147             7.41      229.8                 202.4                11.93     213
GBP       2024  Q4       0.147              0.137             6.69      225.7                 202.1                10.49     213
GBP       2025  Q2       0.133              0.120             9.50      238.4                 205.9                13.64     213
USD       2024  Q2       0.175              0.160             8.63      253.8                 222.8                12.23     213
USD       2024  Q3       0.187              0.170             8.96      267.5                 231.1                13.61     213
USD       2024  Q4       0.160              0.145             9.22      258.9                 226.2                12.61     213
USD       2025  Q2       0.155              0.141             8.56      259.1                 226.6                12.52     213
Secondary metrics (vega-weighted vol-point RMSE):
Currency | Year | Quarter | VW Vol-Pts RMSE (Classic) | VW Vol-Pts RMSE (Neural) | Δ VW (%) | n
EUR | 2024 | Q2 | 0.213 | 0.197 | 7.17 | 213
EUR | 2024 | Q3 | 0.211 | 0.192 | 8.92 | 213
EUR | 2024 | Q4 | 0.226 | 0.205 | 9.42 | 213
EUR | 2025 | Q2 | 0.170 | 0.154 | 9.63 | 213
GBP | 2024 | Q2 | 0.159 | 0.147 | 7.39 | 213
GBP | 2024 | Q3 | 0.158 | 0.146 | 7.17 | 213
GBP | 2024 | Q4 | 0.146 | 0.137 | 6.50 | 213
GBP | 2025 | Q2 | 0.132 | 0.120 | 9.26 | 213
USD | 2024 | Q2 | 0.173 | 0.159 | 8.31 | 213
USD | 2024 | Q3 | 0.185 | 0.169 | 8.55 | 213
USD | 2024 | Q4 | 0.158 | 0.144 | 8.90 | 213
USD | 2025 | Q2 | 0.153 | 0.141 | 8.25 | 213
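The vega-weighted vol-point RMSE reported above weights each squared error by normalized vega, so liquid, high-sensitivity quotes dominate the metric. The sketch below is an illustrative implementation under that assumption; the function name and sample inputs are placeholders, not the paper's code or data.

```python
import numpy as np

def vega_weighted_rmse(market_vols, model_vols, vegas):
    """Vega-weighted RMSE in vol points.

    Weights are normalized to sum to one, so with equal vegas this
    reduces to the ordinary (unweighted) RMSE.
    """
    w = np.asarray(vegas, dtype=float)
    w = w / w.sum()                                    # normalize vega weights
    err = np.asarray(model_vols, dtype=float) - np.asarray(market_vols, dtype=float)
    return float(np.sqrt(np.sum(w * err ** 2)))
```

The near-agreement between the vega-weighted and headline IV RMSE improvements in the tables above suggests the gains are not concentrated in low-vega corners of the surface.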
The empirical results of this study demonstrate that the neural augmentation of an OIS-discounted, EURIBOR/SONIA/SOFR-referenced LMM framework yields robust and systematic improvements in swaption surface calibration relative to classical parametric specifications (Brace et al., 1997; Horváth et al., 2021; Rebonato, 2002). Across all currencies examined (EUR, GBP, USD) and multiple quarterly datasets from 2024 to 2025, the neural-augmented LMM achieved reductions in implied volatility RMSE of approximately 7–10% and reductions in PV RMSE of 10–15% relative to the classical exponential decay volatility and correlation parameterization (Andersen & Piterbarg, 2010; Hull, 2022; Rebonato, 2002). These gains were observed uniformly across surface regions—including sparsely quoted and long-dated tenors—indicating that the improvement is neither an artifact of local overfitting nor sensitivity to specific surface segments (Karlsson et al., 2017; Rebonato, 2004). The consistency of these results reinforces the central thesis of this work: the neural perturbations of LMM volatility and correlation structures can outperform classical specifications while maintaining interpretability, numerical discipline, and arbitrage-aware dynamics (Andersen & Piterbarg, 2010; Henry-Labordère, 2008; Horváth et al., 2021).
From the perspective of prior research, these findings lie at the intersection of classical analytical models (e.g., SABR and HJM-type formulations) and modern machine learning approaches such as neural SDEs and PINN-based pricing networks (Hagan et al., 2002; Obłój, 2007; Rebonato, 2002). SABR remains widely used for EURIBOR-, SONIA-, and SOFR-referenced swaptions, but its single-factor structure and cross-maturity parameter inconsistency—documented extensively in both the pre- and post-LIBOR literature—limit its robustness in sparse-tenor and long-dated regions (Hagan et al., 2002; Obłój, 2007). The neural-augmented LMM does not require smile data and produces a globally consistent volatility–correlation structure, thereby addressing a well-known limitation of SABR (Hagan et al., 2002; Obłój, 2007; Richert & Buch, 2022).
Machine learning diffusion models offer greater flexibility but often sacrifice interpretability and require nontrivial arbitrage enforcement (Chen et al., 2018; Kidger et al., 2021; Raissi et al., 2019). Pure neural SDEs introduce latent state coordinates and non-economic drift structures, which complicate model validation, stress testing, and risk sensitivity analysis (Horváth et al., 2021; Kidger et al., 2021). In contrast, the present hybrid approach integrates a compact neural network into the LMM’s volatility and correlation layers while preserving HJM drift consistency, log-normal forward-rate dynamics, and the established economic interpretation of forward rates (Andersen & Piterbarg, 2010; Brace et al., 1997; Rebonato, 2002). This ensures that neural components enhance—rather than replace—the structured dynamics of the LMM.
The improvements observed here arise precisely in those regions where classical exponential decay parametrizations are known to underperform: mid-to-long expiries and long-dated underlying swaps (Karlsson et al., 2017; Rebonato, 2004). These regions of the OIS-discounted swaption surface—particularly long-dated SONIA and SOFR tenors—suffer from structural sparsity and elevated uncertainty, making them historically difficult for rigid parametric specifications (Hagan et al., 2002; Rebonato, 2004). The neural-augmented LMM captures subtle maturity-dependent curvature and cross-tenor geometry that classical forms cannot express. The low-rank factor structure for correlation perturbations introduces flexibility without violating positive semidefiniteness (PSD) or requiring full-matrix calibration, which is historically unstable (Henry-Labordère, 2008; Rebonato, 2004).
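One standard way a low-rank factor structure guarantees a valid (PSD, unit-diagonal) correlation matrix is to normalize the rows of a loading matrix B and set C = B Bᵀ. The sketch below illustrates this construction only; the random loadings and rank are placeholders, not the paper's calibrated parametrization.

```python
import numpy as np

def low_rank_correlation(B: np.ndarray) -> np.ndarray:
    """Build a rank-k correlation matrix from an (n x k) loading matrix.

    Normalizing each row of B to unit Euclidean norm makes C = B @ B.T
    positive semidefinite by construction, with ones on the diagonal.
    """
    Bn = B / np.linalg.norm(B, axis=1, keepdims=True)  # unit-norm rows
    return Bn @ Bn.T

# Illustrative: 10 forward rates driven by 3 factors (random placeholder loadings)
rng = np.random.default_rng(0)
C = low_rank_correlation(rng.standard_normal((10, 3)))
```

Because validity holds for any loadings, a neural perturbation of B (rather than of C directly) cannot break PSD, which is one reason low-rank constructions calibrate more stably than full-matrix approaches.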
Although numerical safeguards (correlation normalization with symmetrization/jitter, log-normal updates, gradient clipping) appear in the methodology, they do not alter the model’s economic structure; they simply preserve numerical stability in the presence of neural perturbations. The backbone LMM remains intact, demonstrating the usefulness of the classical model as a scaffold for neural refinement.
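The correlation safeguards mentioned above (symmetrization, diagonal jitter, renormalization) can be sketched as a single projection step. This is an illustrative reconstruction of the generic technique, not the paper's exact routine; the jitter size is a placeholder.

```python
import numpy as np

def stabilize_correlation(C: np.ndarray, jitter: float = 1e-8) -> np.ndarray:
    """Project a (possibly perturbed) matrix back to a valid correlation matrix.

    Symmetrize, add a small diagonal jitter to absorb round-off-level
    negative eigenvalues, then rescale so the diagonal is exactly one.
    """
    C = 0.5 * (C + C.T)                    # symmetrize
    C = C + jitter * np.eye(len(C))        # jitter the diagonal
    d = np.sqrt(np.diag(C))
    return C / np.outer(d, d)              # renormalize to unit diagonal
```

Note these steps only repair numerical noise; they do not change the economic content of the correlation surface, consistent with the point made above.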
Although this work was calibrated to OIS-discounted swaption cubes referencing EURIBOR, SONIA, and SOFR—the post-LIBOR standard—the findings retain relevance for any forward-rate-based model under multi-curve discounting. The difficulty of adapting traditional LMM structures to backward-looking compounded benchmarks highlights the potential for neural overlays to mitigate parametric rigidity (Alaya et al., 2021). Although a full SOFR-specific extension is beyond the present scope, the success of neural perturbations here suggests promising directions for multi-curve or backward-looking frameworks.
In summary, the neural-augmented LMM improves calibration accuracy across EURIBOR-, SONIA-, and SOFR-referenced swaption surfaces while preserving the interpretability and arbitrage-aware structure expected of modern OIS-based interest-rate models (Horváth et al., 2021; Liu et al., 2019). The hybrid design balances interpretability with expressive power, offering a reproducible and regulatorily acceptable path toward machine learning–enhanced interest-rate modeling.

4. Conclusions

This work proposes and evaluates a neural-augmented extension of an OIS-discounted LMM referencing EURIBOR/SONIA/SOFR, enhancing swaption surface calibration without compromising interpretability or structural guarantees. Unlike full neural diffusion models that replace the underlying structure, the present method preserves forward-rate dynamics, HJM drift consistency, and log-normal evolution, modifying only the volatility and correlation layers through a compact, regularized neural network (Andersen & Piterbarg, 2010; Brace et al., 1997; Hagan et al., 2002; Horváth et al., 2021; Kidger et al., 2021).
Empirical evaluation across the EUR-EURIBOR, GBP-SONIA, and USD-SOFR swaption datasets from 2024 to 2025 demonstrated systematic improvements over classical exponential decay formulations (Hull, 2022; Rebonato, 2002). Improvements of 7–10% in implied volatility RMSE and 10–15% in PV RMSE were observed across all datasets, with no deterioration in any region of the surface. This suggests that neural perturbations address structural deficiencies in classical parametrizations, especially in long-dated or sparsely quoted regions (Karlsson et al., 2017; Rebonato, 2004).
These findings underline the value of hybrid classical–neural designs in OIS-discounted settings, where the transparency of the LMM coexists with the expressive power needed to match modern EURIBOR, SONIA, and SOFR volatility geometry (Andersen & Piterbarg, 2010; Henry-Labordère, 2008). This structure aligns with model-risk requirements and integrates naturally into existing trading-desk calibration pipelines.
The observed improvements in Table 1 (see Appendix A for extensive supporting charts) suggest broader implications for OIS-based markets, where backward-looking compounded benchmarks such as SOFR, SONIA, and €STR introduce modeling challenges that neural perturbations may help mitigate (Liu et al., 2019; Richert & Buch, 2022). Future work will focus on broader empirical validation once consistent non-ATM swaption grids become available, and on applying the same constrained-overlay mechanism to richer classical backbones (e.g., multi-factor LMM variants) rather than changing the core methodology (Kidger et al., 2021; Liu et al., 2019; Richert & Buch, 2022). Hybrid designs such as the one presented here offer a path toward next-generation interest-rate models that remain both interpretable and data-driven.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Swaption data are available from Bloomberg under the following tickers: SWAPTION VOLATILITY CUBE USD-SOFR/EUR-EURIBOR/GBP-SONIA.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
ATM: At-the-money;
BSDE: Backward stochastic differential equation;
CDF: Cumulative distribution function;
DF: Discount factor;
EUR: Euro area currency (euro);
EURIBOR: Euro Interbank Offered Rate;
FRA: Forward-rate agreement;
GAN: Generative adversarial network;
GBP: British pound sterling;
HJM: Heath–Jarrow–Morton model;
IBOR: Interbank Offered Rate (generic benchmark family);
IV: Implied volatility;
LMM: Libor market model;
MC: Monte Carlo;
MLP: Multi-layer perceptron;
OIS: Overnight indexed swap (discounting curve);
OLS: Ordinary least squares;
PDF: Probability density function;
PINN: Physics-informed neural network;
PV: Present value;
PDE: Partial differential equation;
PSD: Positive semidefinite;
Q: Risk-neutral probability measure;
RMSE: Root mean squared error;
SABR: Stochastic Alpha Beta Rho volatility model;
SDE: Stochastic differential equation;
SOFR: Secured Overnight Financing Rate;
SONIA: Sterling Overnight Index Average;
USD: United States dollar;
VW: Vega-weighted.

Appendix A. Charts

Appendix A.1. USD Surface Comparison

Jrfm 19 00082 i001
Jrfm 19 00082 i002

Appendix A.2. EUR Surface Comparison

Jrfm 19 00082 i003
Jrfm 19 00082 i004

Appendix A.3. GBP Surface Comparison

Jrfm 19 00082 i005
Jrfm 19 00082 i006

Appendix A.4. USD Correlation Difference

Jrfm 19 00082 i007

Appendix A.5. EUR Correlation Difference

Jrfm 19 00082 i008

Appendix A.6. GBP Correlation Difference

Jrfm 19 00082 i009

Appendix A.7. USD Model Implied Smile

Jrfm 19 00082 i010

Appendix A.8. EUR Model Implied Smile

Jrfm 19 00082 i011

Appendix A.9. GBP Model Implied Smile

Jrfm 19 00082 i012

Appendix A.10. USD NN Volatility Output Across Time and Tenor

Jrfm 19 00082 i013

Appendix A.11. EUR NN Volatility Output Across Time and Tenor

Jrfm 19 00082 i014

Appendix A.12. GBP NN Volatility Output Across Time and Tenor

Jrfm 19 00082 i015

Appendix A.13. USD Correlation Mix Weight

Jrfm 19 00082 i016

Appendix A.14. EUR Correlation Mix Weight

Jrfm 19 00082 i017

Appendix A.15. GBP Correlation Mix Weight

Jrfm 19 00082 i018

Appendix A.16. USD Volatility Term Structure

Jrfm 19 00082 i019

Appendix A.17. EUR Volatility Term Structure

Jrfm 19 00082 i020

Appendix A.18. GBP Volatility Term Structure

Jrfm 19 00082 i021

References

  1. Alaya, M. B., Kebaier, A., & Sarr, D. (2021). Deep calibration of interest rates model. arXiv, arXiv:2110.15133.
  2. Andersen, L. B., & Piterbarg, V. V. (2010). Interest rate modeling, Volume III: Products and risk management. Atlantic Financial Press.
  3. Belomestny, D., & Schoenmakers, J. (2009). A jump-diffusion Libor model and its robust calibration (Tech. Rep. No. RQUF-2008-0135). Weierstrass Institute for Applied Analysis and Stochastics (WIAS).
  4. Berg, J., & Nystrom, K. (2018). A unified deep artificial neural network approach to partial differential equations in complex geometries. Neurocomputing, 317, 28–41.
  5. Brace, A., Gatarek, D., & Musiela, M. (1997). The market model of interest rate dynamics. Mathematical Finance, 7(2), 127–147.
  6. Chen, R. T., Rubanova, Y., Bettencourt, J., & Duvenaud, D. K. (2018). Neural ordinary differential equations. Advances in Neural Information Processing Systems, 31.
  7. Glasserman, P., & Kou, S. (2003). The term structure of simple forward rates with jump risk. Mathematical Finance, 13(3), 383–410.
  8. Glasserman, P., & Merener, N. (2003). Numerical solution of jump-diffusion LIBOR market models. Finance and Stochastics, 7(1), 1–27.
  9. Hagan, P. S., Kumar, D. K., Lesniewski, A. S., & Woodward, D. E. (2002). Managing smile risk. Wilmott Magazine, 1, 84–108.
  10. Han, J., Jentzen, A., & E, W. (2018). Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences, 115(34), 8505–8510.
  11. Henry-Labordère, P. (2008). Analysis, geometry, and modeling in finance: Advanced methods in option pricing. Chapman and Hall/CRC.
  12. Horváth, B., Muguruza, A., & Tomas, J. (2021). Deep learning volatility: A deep neural network perspective on pricing and calibration in (rough) volatility models. Quantitative Finance, 21(11), 1731–1749.
  13. Hull, J. C. (2022). Options, futures, and other derivatives (11th ed.). Pearson Education.
  14. Huré, C., Pham, H., & Warin, X. (2019). Deep neural networks algorithms for stochastic control problems on finite horizon: Numerical applications. ESAIM: Proceedings and Surveys, 65, 32–50.
  15. James, J., & Webber, N. (2000). Interest rate modelling. Wiley.
  16. Karlsson, P., Pilz, K. F., & Schlögl, E. (2017). Calibrating a market model with stochastic volatility to commodity and interest rate risk. Quantitative Finance, 17(6), 907–925.
  17. Kidger, P., Foster, J., Li, X., Oberhauser, H., & Lyons, T. (2021). Neural SDEs as infinite-dimensional GANs. arXiv, arXiv:2106.01845.
  18. Liu, S., Borovykh, A., Grzelak, L. A., & Oosterlee, C. W. (2019). A neural network-based framework for financial model calibration. Journal of Mathematics in Industry, 9, 1–24.
  19. Marshall, N., Xiao, K. L., Agarwala, A., & Paquette, E. (2024). The dynamics of SGD with gradient clipping in high dimensions. arXiv, arXiv:2406.11733.
  20. Obłój, J. (2007). Fine-tune your smile: Correction to Hagan et al.'s formula. arXiv, arXiv:0708.0998.
  21. Raissi, M., Perdikaris, P., & Karniadakis, G. E. (2019). Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378, 686–707.
  22. Rebonato, R. (2002). Modern pricing of interest-rate derivatives: The Libor market model and beyond. Princeton University Press.
  23. Rebonato, R. (2004). Volatility and correlation: The perfect hedger and the fox. John Wiley & Sons.
  24. Rebonato, R., & White, M. (2009). Linking caplets and swaptions prices in the LMM–SABR model. The Journal of Computational Finance, 13(2), 1–43.
  25. Richert, I., & Buch, R. (2022). Interpolation of missing swaption volatility data using Gibbs sampling on variational autoencoders. arXiv, arXiv:2204.10400.
  26. Sridi, A., & Bilokon, P. (2023). Applying deep learning to calibrate stochastic volatility models. arXiv, arXiv:2309.07843.
  27. Steinrücke, M., Zagst, R., & Swishchuk, A. (2015). The Markov-switching jump-diffusion LIBOR market model. Quantitative Finance, 15(3), 455–476.
Figure 1. Classic vs. Neural implementation runtime.
Jrfm 19 00082 g001
Figure 2. Per-row 80/20 masked holdout: test RMSE comparison between classic and neural models.
Jrfm 19 00082 g002
Figure 3. Long-tenor holdout (T_end > 8.5): test RMSE comparison between classical and neural models.
Jrfm 19 00082 g003
Figure 4. Relative improvement of neural vs. classical models on test RMSE (%): OOS per-row and long-tenor holdout. Positive values indicate lower RMSE for the neural model.
Jrfm 19 00082 g004
Figure 5. Scatter plot of OOS per-row test RMSE: neural vs. classical. The diagonal y = x indicates equal performance; points below the line favor the neural model.
Jrfm 19 00082 g005
Table 1. Summary of calibration improvements from neural-augmented LMM relative to the classical model.
Dataset | IV RMSE Improvement (%) | PV RMSE Improvement (%)
EUR 2024 Q2 | 7.69 | 10.87
EUR 2024 Q3 | 9.70 | 13.44
EUR 2024 Q4 | 10.21 | 14.32
EUR 2025 Q2 | 10.13 | 14.74
GBP 2024 Q2 | 7.63 | 12.33
GBP 2024 Q3 | 7.41 | 11.93
GBP 2024 Q4 | 6.69 | 10.49
GBP 2025 Q2 | 9.50 | 13.64
USD 2024 Q2 | 8.63 | 12.23
USD 2024 Q3 | 8.96 | 13.61
USD 2024 Q4 | 9.22 | 12.61
USD 2025 Q2 | 8.56 | 12.52

Share and Cite

MDPI and ACS Style

Knezevic, A. Towards Generative Interest-Rate Modeling: Neural Perturbations Within the Libor Market Model. J. Risk Financial Manag. 2026, 19, 82. https://doi.org/10.3390/jrfm19010082
