Article

Domain Knowledge Preservation in Financial Machine Learning: Evidence from Autocallable Note Pricing

1 PRISM Sorbonne, Université Paris 1 Panthéon-Sorbonne, 17 rue de la Sorbonne, 75005 Paris, France
2 Computer Science and Smart Systems, Faculté des Sciences et Techniques de Tanger, University Abdelmalek Essaadi, Tangier 90000, Morocco
* Author to whom correspondence should be addressed.
Risks 2025, 13(7), 128; https://doi.org/10.3390/risks13070128
Submission received: 8 April 2025 / Revised: 23 June 2025 / Accepted: 25 June 2025 / Published: 1 July 2025

Abstract

Machine learning applications in finance commonly employ feature decorrelation techniques developed for generic statistical problems. We investigate whether this practice appropriately addresses the unique characteristics of financial data, where correlations often encode fundamental economic relationships rather than statistical noise. Using autocallable structured notes as a laboratory, we demonstrate that preserving natural financial correlations outperforms conventional orthogonalization approaches. Our analysis covers autocallable notes with quarterly coupon payments, a dual barrier structure, and embedded down-and-in up-and-out put options, priced using analytical methods with automatic differentiation for Greeks’ computation. Across neural networks, gradient boosting, and hybrid architectures, basic financial features achieve superior performance compared to decorrelated alternatives, with RMSE improvements ranging from 43% to 191%; the largest improvements are observed in CatBoost. The component-wise analysis reveals complex interactions between autocall mechanisms and higher-order sensitivities, particularly affecting vanna and volga patterns near barrier levels. These findings provide empirical evidence that financial machine learning benefits from domain-specific feature engineering principles that preserve economic relationships.

1. Introduction

The application of machine learning techniques to financial problems presents opportunities to enhance methodologies through domain-specific considerations. A fundamental principle in conventional machine learning practice involves feature decorrelation to eliminate presumed redundancy and improve model performance (James et al. 2013). However, financial data exhibit correlations that frequently encode essential economic relationships developed through market mechanisms, arbitrage activities, and risk management practices, suggesting potential benefits for specialized approaches rather than treating them as statistical noise requiring elimination.
Our findings contribute to the growing evidence that financial machine learning benefits from domain-specific approaches. Building upon the insights of Zeng and Rao (2025) on domain knowledge integration and the feature engineering research gaps identified by Maddulety (2019), our results provide systematic evidence that preserving economic correlations enhances model performance across diverse architectures. The consistency of improvements across neural networks, gradient boosting, and hybrid ensembles suggests fundamental characteristics of financial data that distinguish it from generic statistical applications, supporting the development of specialized methodological frameworks for computational finance.
This study investigates the hypothesis that preserving natural financial correlations can outperform conventional decorrelation approaches in machine learning applications. We employ autocallable structured notes as an empirical laboratory due to their complex mathematical structure, combining multiple financial relationships within a single instrument. These products exhibit pricing functions with mixed regularity properties including smooth regions amenable to neural network approximation and barrier-dependent discontinuities suitable for tree-based methods.
Autocallable notes provide an ideal testing ground because they incorporate several fundamental financial relationships simultaneously. The interaction between spot price movements, volatility dynamics, barrier proximity effects, and time decay creates natural correlations that reflect essential pricing mechanics. The embedded down-and-in up-and-out put option generates additional complexity through path-dependent terms that affect higher-order sensitivities in economically meaningful patterns.
Our analysis extends beyond a simple performance comparison to investigate the mathematical foundations underlying the observed differences. Through component-wise decomposition and automatic differentiation techniques, we examine how feature engineering choices affect a model's ability to capture complex sensitivity patterns, including vanna and volga behaviors near barrier levels. The systematic investigation reveals that conventional statistical optimization (specifically, correlation elimination through orthogonalization) directly conflicts with the preservation of economically meaningful relationships in financial data. While orthogonal features achieve statistical decorrelation and reduce variance inflation factors, they obscure natural dependencies between spot prices, volatility, barrier levels, and time decay that are essential for accurate autocallable pricing. This creates a trade-off where statistical optimization criteria may degrade rather than improve machine learning performance in financial applications.
The remainder of this paper is organized as follows. Section 2 reviews the literature on computational finance and machine learning applications in derivatives pricing. Section 3 details our autocallable note pricing framework and feature engineering approaches. Section 4 presents empirical results comparing basic and orthogonal features across multiple architectures. Section 5 discusses implications for financial machine learning and risk management. Section 6 concludes this study with directions for future research.

2. Literature Review

The intersection of machine learning and financial derivatives pricing has evolved rapidly, with recent contributions demonstrating both promising applications and methodological challenges. Ruf and Wang (2020) establish neural network approaches for American option pricing, while Beck et al. (2021) develop deep learning solutions for high-dimensional partial differential equations relevant to derivatives’ valuation. However, these contributions primarily focus on computational efficiency rather than examining whether conventional machine learning practices suit financial domain characteristics.
Feature engineering literature in machine learning commonly prescribes correlation elimination through orthogonalization techniques. The underlying assumption treats correlation as a redundancy that impedes learning algorithms by creating multicollinearity problems. Principal component analysis, variance inflation factor reduction, and other decorrelation methods are standard practice across machine learning applications (Jolliffe and Cadima 2016).
Recent financial machine learning literature has begun questioning whether generic statistical techniques appropriately address domain-specific requirements. Gu et al. (2020) demonstrate that asset pricing applications benefit from careful consideration of economic theory in feature construction. L. Chen et al. (2023) show that incorporating financial constraints improves neural network performance in portfolio optimization contexts. These findings suggest that financial applications may require domain-aware approaches rather than generic statistical optimization.
The literature on structured products provides limited guidance on appropriate machine learning methodologies, despite the significant academic attention these instruments have received since their introduction. Autocallable notes, first formally analyzed in the academic literature by Wilmott et al. (1995), represent a class of structured products that combine traditional bond features with embedded barrier options to create early redemption mechanisms. The mathematical foundations for these products emerged from the barrier option literature of the 1990s (Merton 1973; Rich 1994), which established the theoretical framework for path-dependent derivatives’ pricing.
The academic treatment of autocallable structures gained momentum in the early 2000s, with Nelken (2000) providing a comprehensive analysis of structured product mechanics and Wystup (2006) extending the framework to foreign exchange applications. Guillaume (2015) develops analytical approaches for autocallable valuation but does not address machine learning applications. Deng et al. (2012) establish pricing frameworks for autocallable notes without investigating computational learning techniques, while Paletta and Tunaru (2022) examine Bayesian approaches but focus on calibration rather than machine learning feature engineering.
The evolution of autocallable products reflects broader trends in structured finance, as documented by Telpner (2009), where traditional risk–return profiles are modified through embedded option strategies. The complexity of these instruments, particularly their path-dependent characteristics and multiple embedded optionalities, creates unique challenges for computational approaches that distinguish them from simpler derivative structures (Lipton 2001).
Recent advances in computational derivatives’ pricing have demonstrated both the potential and challenges of applying machine learning to structured products. Davis et al. (2021) establish that tree-based techniques can achieve significant computational improvements in derivatives’ pricing while maintaining accuracy, demonstrating orders-of-magnitude speed improvements for exotic derivatives. Zhang et al. (2024) develop unified pricing frameworks for autocallable products using Markov chain approximation, revealing the mathematical complexity inherent in barrier-dependent structures. Cui et al. (2024) propose machine learning approaches for multi-asset autocallable pricing that achieve 250× computational acceleration over Monte Carlo methods while improving hedging performance through distributional reinforcement learning.
The financial machine learning literature has increasingly recognized that domain-specific approaches may outperform generic statistical methods. Zeng and Rao (2025) demonstrate that incorporating domain knowledge through explainable feature construction significantly improves financial prediction accuracy compared to conventional preprocessing approaches. Maddulety (2019) identifies feature engineering as a critical research gap in banking risk management applications, noting that while machine learning techniques show promise, limited research addresses whether standard preprocessing methods suit financial data characteristics. Our investigation contributes to this research direction.

3. Pricing Formula and Feature Engineering

3.1. Autocallable Note Structure and Pricing Framework

We analyze autocallable notes with the following specification: a notional amount of 100, quarterly observation dates over a one-year maturity, a 5% annual coupon rate paid quarterly when autocall conditions are not triggered, an 80% knock-in barrier, a 100% autocall barrier, and a 90% put strike for the embedded barrier option. This configuration represents typical market structures and provides sufficient complexity for investigating machine learning approaches.
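For reference, the contract terms above can be collected in a small container; the sketch below is our own (the paper does not prescribe an API, so the class and field names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class AutocallableSpec:
    """Contract terms of the autocallable note analyzed in the paper.
    Names and the helper method are our assumptions, not the paper's code."""
    notional: float = 100.0          # N
    coupon_rate: float = 0.05        # 5% annual, paid quarterly
    maturity: float = 1.0            # years
    observations: int = 4            # quarterly observation dates
    autocall_barrier: float = 1.00   # 100% of initial level
    knock_in_barrier: float = 0.80   # 80% of initial level
    put_strike: float = 0.90         # 90% of initial level

    def quarterly_coupon(self) -> float:
        # Coupon paid per observation date when no autocall has occurred.
        return self.notional * self.coupon_rate * self.maturity / self.observations

spec = AutocallableSpec()
```

With these defaults the per-quarter coupon is 100 × 5% / 4 = 1.25 per 100 of notional.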
The total price decomposes into four interacting components:
$$V_{\text{total}} = V_{\text{coupon}} + V_{\text{autocall}} + V_{\text{notional}} - V_{\text{DIUO}}$$
where each component has the following analytical representation:
$$V_{\text{coupon}} = \sum_{i=1}^{4} C \, e^{-r t_i} \, P(\tau_U > t_i)$$
$$V_{\text{autocall}} = (N + C) \sum_{i=1}^{4} e^{-r t_i} \left[ P(\tau_U > t_{i-1}) - P(\tau_U > t_i) \right]$$
$$V_{\text{notional}} = N \, e^{-rT} \, P(\tau_U > T)$$
$$V_{\text{DIUO}} = \text{down-and-in up-and-out put value}$$
Here, $C$ represents the quarterly coupon payment, $N$ the notional amount, $\tau_U = \inf\{t \geq 0 : S_t \geq U\}$ the first passage time to the autocall barrier $U$, and $P(\tau_U > t)$ the survival probability.
The survival probabilities follow from the reflection principle for geometric Brownian motion:
$$P(\tau_U > t) = \Phi\!\left(\frac{\ln(U/S_0) - (r - \sigma^2/2)\,t}{\sigma\sqrt{t}}\right) - \left(\frac{U}{S_0}\right)^{2(r - \sigma^2/2)/\sigma^2} \Phi\!\left(\frac{\ln(S_0/U) - (r - \sigma^2/2)\,t}{\sigma\sqrt{t}}\right)$$
where Φ denotes the standard normal cumulative distribution function. These analytical expressions enable exact computation of autocall probabilities and coupon survival rates without Monte Carlo approximation.
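The survival probability and the coupon leg lend themselves to direct implementation. The sketch below is our own pure-Python rendering (the paper works with vectorized PyTorch code; function names and the `coupon_leg` helper are illustrative):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def survival_prob(S0: float, U: float, r: float, sigma: float, t: float) -> float:
    """P(tau_U > t): probability that GBM started at S0 < U stays below the
    autocall barrier U through time t (standard reflection-principle result)."""
    mu = r - 0.5 * sigma ** 2
    b = log(U / S0)                      # log-distance to the barrier
    vol = sigma * sqrt(t)
    return (norm_cdf((b - mu * t) / vol)
            - (U / S0) ** (2.0 * mu / sigma ** 2) * norm_cdf((-b - mu * t) / vol))

def coupon_leg(N, annual_coupon, r, S0, U, sigma, obs=(0.25, 0.5, 0.75, 1.0)):
    """Discounted quarterly coupons, each paid only if no autocall has
    occurred by its observation date (V_coupon in the decomposition)."""
    C = N * annual_coupon / len(obs)     # per-quarter coupon amount
    return sum(C * exp(-r * t) * survival_prob(S0, U, r, sigma, t) for t in obs)
```

As expected, the survival probability decreases with horizon and approaches one when the barrier is far above spot.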
The coupon component represents quarterly payments conditional on non-autocall events. The autocall component captures the early redemption value when the underlying breaches the upper barrier. The notional component reflects final principal repayment probability. The down-and-in up-and-out put option provides capital protection when the knock-in barrier is breached but becomes worthless if the autocall barrier is subsequently triggered.
We employ exact analytical pricing methods rather than Monte Carlo simulation to ensure that observed performance differences reflect feature engineering choices rather than numerical approximation artifacts. This approach eliminates the confounding effects of Monte Carlo convergence errors, path discretization bias, and random sampling variation that could obscure the relationship between feature construction and model performance. The analytical framework enables generation of precisely labeled training data across the entire parameter space, providing clean experimental conditions essential for isolating feature engineering effects. The DIUO put valuation uses reflection method techniques with Fourier series expansion for computational efficiency (Carr and Linetsky 2006).
For the DIUO put option with barriers L (knock-in) and U (autocall), strike K, and maturity T, the analytical value is
$$V_{\text{DIUO}} = e^{-rT}\,\mathbb{E}^{\mathbb{Q}}\!\left[(K - S_T)^+\,\mathbf{1}_{\{\tau_L \leq T,\;\tau_U > T\}}\right] = e^{-rT}\left(P_{\text{vanilla}} - P_{\text{reflection}}\right)$$
where $P_{\text{vanilla}}$ represents the standard down-and-in put value and $P_{\text{reflection}}$ captures the barrier interaction effects through the reflection principle:
$$P_{\text{reflection}} = \sum_{n=1}^{\infty} c_n \exp\!\left(-\frac{1}{2}\sigma^2 T \left(\frac{n\pi}{\ln(U/L)}\right)^{2}\right) \sin\!\left(\frac{n\pi \ln(S_0/L)}{\ln(U/L)}\right)$$
with coefficients $c_n$ determined by the boundary conditions at both barriers. This approach enables the generation of exact pricing labels across the parameter space while maintaining computational tractability for large-scale machine learning applications.
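The computational tractability of the series comes from the Gaussian decay of its terms in $n$. The sketch below evaluates a single term; since the paper does not reproduce the boundary-condition coefficients $c_n$, we use $c_n = 1$ as an explicit placeholder purely to illustrate the decay rate:

```python
from math import log, exp, pi, sin

def reflection_term(n, S0, L, U, sigma, T, c_n=1.0):
    """n-th term of the reflection-series expansion of P_reflection.
    c_n = 1 is a PLACEHOLDER: the true coefficients depend on the boundary
    conditions at both barriers and are not reproduced in the paper."""
    width = log(U / L)                                   # log-width of the corridor
    decay = exp(-0.5 * sigma ** 2 * T * (n * pi / width) ** 2)
    return c_n * decay * sin(n * pi * log(S0 / L) / width)
```

For $\sigma = 0.2$, $T = 1$, $L = 80$, $U = 100$, the $n = 2$ term is already several orders of magnitude smaller than the $n = 1$ term, so a handful of terms suffices in practice.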

3.2. Parameter Space and Data Generation

Our dataset employs Latin Hypercube Sampling across a seven-dimensional parameter space including spot price, autocall barrier, knock-in barrier, volatility, time to maturity, risk-free rate, and coupon rate. The sampling protocol ensures systematic coverage while respecting no-arbitrage constraints inherent in autocallable structures.
Parameter ranges reflect economically realistic conditions: spot prices from 70% to 130% of the notional, volatilities from 10% to 50% annually, time to maturity from 0.25 to 1.0 years, risk-free rates from 0% to 5% annually, and coupon rates from 2% to 8% annually. Barrier levels maintain economically meaningful relationships, with autocall barriers above spot prices and knock-in barriers below initial levels.
The constraint enforcement mechanism ensures financial validity through a proportional adjustment rather than rejection sampling. This approach preserves the statistical properties of Latin Hypercube Sampling while maintaining systematic parameter space coverage. The resulting dataset contains 3000 valid parameter combinations with corresponding exact pricing labels computed through analytical methods.

3.3. Feature Engineering Framework

We compare two distinct feature engineering approaches to investigate the impact of correlation preservation versus elimination on model performance.
Basic features employ raw financial parameters directly: spot price, autocall barrier, knock-in barrier, volatility, time to maturity, risk-free rate, and coupon rate. This representation preserves natural economic relationships while maintaining direct interpretability essential for financial applications. The correlations present in these features reflect fundamental financial principles including volatility–option value relationships, barrier proximity effects, and time decay patterns.
Orthogonal features follow conventional machine learning practice for multicollinearity reduction through mathematical transformations designed to minimize correlation while preserving information content. The transformations include
$$\text{Risk-neutral drift} = (r - \sigma^2/2)\,T$$
$$\text{Barrier geometry} = \ln(U/L)$$
$$\text{Relative position} = \frac{S_0 - U}{L\,\sigma\sqrt{T}}$$
$$\text{Total volatility} = \sigma\sqrt{T}$$
$$\text{Discount factor} = e^{-rT}$$
$$\text{Risk-adjusted yield} = \frac{C}{\sigma + 0.01}$$
$$\text{Moneyness indicator} = \ln(S_0/K)$$
where U and L represent the autocall and knock-in barriers, respectively, and K represents the put strike. These features achieve statistical decorrelation but obscure direct economic relationships.
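For concreteness, the seven transformations can be assembled from the raw contract parameters in a few lines. The function below is our sketch (names are illustrative, and the grouping of the relative-position ratio follows our reading of the published formula):

```python
from math import log, sqrt, exp

def orthogonal_features(S0, U, L, K, sigma, T, r, C):
    """Map the seven raw financial parameters to the paper's orthogonalized
    feature set (dict keys and exact ratio grouping are our assumptions)."""
    return {
        "risk_neutral_drift": (r - 0.5 * sigma ** 2) * T,
        "barrier_geometry": log(U / L),
        "relative_position": (S0 - U) / (L * sigma * sqrt(T)),
        "total_volatility": sigma * sqrt(T),
        "discount_factor": exp(-r * T),
        "risk_adjusted_yield": C / (sigma + 0.01),
        "moneyness_indicator": log(S0 / K),
    }
```

Note how each output mixes several raw inputs: the transformation trades the direct economic readability of spot, barriers, and volatility for reduced pairwise correlation.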
The comparison framework enables systematic investigation of whether correlation elimination improves or degrades model performance in financial contexts. The orthogonal transformation achieves substantial variance inflation factor reduction while the basic features maintain high correlations, reflecting natural financial relationships.

3.4. Machine Learning Architecture

Our model selection directly addresses the mathematical heterogeneity inherent in autocallable pricing functions, which exhibit both smooth analytical regions and barrier-induced discontinuities within the same valuation surface. Neural networks excel in smooth regions where the pricing function belongs to Sobolev spaces with continuous derivatives, while gradient boosting algorithms naturally handle threshold effects and jump discontinuities through tree-based splitting mechanisms. Hybrid ensemble approaches enable systematic investigation of whether feature engineering effects persist across fundamentally different approximation paradigms, strengthening the generalizability of our findings.
The neural network architecture employs multi-layer perceptrons with hidden dimensions [256, 128, 64], batch normalization for numerical stability, and dropout regularization. Training uses AdamW optimization with early stopping based on validation loss to prevent overfitting. The architecture balances approximation capability with computational efficiency suitable for real-time applications.
CatBoost gradient boosting provides tree-based approximation, which is particularly effective for bounded variation functions characteristic of barrier options. The algorithm handles threshold effects naturally through splitting mechanisms while maintaining computational efficiency. Configuration parameters include a tree depth of 6, learning rate of 0.1, and early stopping based on validation performance.
The hybrid ensemble combines neural network global approximation with gradient boosting residual modeling. The approach recognizes that different algorithms excel in different regions of the pricing surface by training neural networks on overall structure and gradient boosting on remaining patterns. This decomposition aligns algorithmic strengths with mathematical properties of the underlying pricing function.
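The sequential wiring described above (global model first, residual model on what it missed) is architecture-agnostic. The sketch below shows the pattern with two toy stand-ins; in the paper the global model is the MLP and the residual model is CatBoost:

```python
class HybridEnsemble:
    """Sequential hybrid: a global model fits the target, a residual model
    fits what the first model missed; predictions are summed. Any objects
    exposing fit(X, y) / predict(X) work (MLP + CatBoost in the paper)."""
    def __init__(self, global_model, residual_model):
        self.global_model = global_model
        self.residual_model = residual_model

    def fit(self, X, y):
        self.global_model.fit(X, y)
        residuals = [yi - pi for yi, pi in zip(y, self.global_model.predict(X))]
        self.residual_model.fit(X, residuals)
        return self

    def predict(self, X):
        return [g + r for g, r in zip(self.global_model.predict(X),
                                      self.residual_model.predict(X))]

# Toy stand-ins (NOT the paper's models) used only to illustrate the wiring.
class MeanModel:
    def fit(self, X, y): self.m = sum(y) / len(y)
    def predict(self, X): return [self.m] * len(X)

class Memorize:
    def fit(self, X, y): self.table = {tuple(x): yi for x, yi in zip(X, y)}
    def predict(self, X): return [self.table[tuple(x)] for x in X]
```

The decomposition is additive by construction: whatever structure the global model captures is subtracted out before the residual model trains, aligning each learner with its part of the pricing surface.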

Functional Space Decomposition and Algorithm Selection

The mathematical foundation for our hybrid approach leverages function space decomposition theory and the complementary approximation strengths of different machine learning architectures. Autocallable pricing functions exhibit mixed regularity: smooth behavior in regions away from barriers (suitable for neural network approximation) and jump discontinuities near barrier levels (optimal for tree-based methods).
Following approximation theory (Adams and Fournier 2003), neural networks achieve optimal convergence rates $O(n^{-k/d})$ for smooth functions in Sobolev spaces $W^{k,2}(\Omega)$, where $k \geq 2$ represents the degree of smoothness and $d$ the input dimension. CatBoost excels for functions of bounded variation $BV(\Omega)$ through tree splitting mechanisms that naturally approximate piecewise constant functions (DeVore 1998).
Our hybrid approach exploits functional decomposition:
$$V(x) = V_{\text{smooth}}(x) + V_{\text{rough}}(x)$$
where the neural network captures smooth components and CatBoost models the residual discontinuities. This architectural choice ensures that each algorithm operates within its optimal functional space, explaining the systematic performance patterns observed across feature engineering approaches and providing a theoretical justification for our empirical methodology.

3.5. Greeks’ Computation and Analysis

Higher-order sensitivity analysis employs automatic differentiation through PyTorch functional transformations. The framework enables vectorized computation of first- and second-order derivatives across the parameter space:
$$\Delta = \frac{\partial V}{\partial S}, \qquad \Gamma = \frac{\partial^2 V}{\partial S^2}, \qquad \text{Vega} = \frac{\partial V}{\partial \sigma}, \qquad \text{Vanna} = \frac{\partial^2 V}{\partial S\,\partial \sigma}, \qquad \text{Volga} = \frac{\partial^2 V}{\partial \sigma^2}$$
Component-wise differentiation isolates the contribution of individual pricing elements to overall sensitivity patterns:
$$\frac{\partial V_{\text{total}}}{\partial \theta} = \frac{\partial V_{\text{coupon}}}{\partial \theta} + \frac{\partial V_{\text{autocall}}}{\partial \theta} + \frac{\partial V_{\text{notional}}}{\partial \theta} - \frac{\partial V_{\text{DIUO}}}{\partial \theta}$$
where θ represents any parameter (spot price, volatility, time, etc.). The automatic differentiation approach ensures numerical accuracy while maintaining computational efficiency for systematic Greeks’ analysis. The component decomposition reveals how autocall mechanisms, coupon payments, and barrier options contribute differently to sensitivity patterns, providing insights into the mathematical sources of pricing complexity.
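The exact-derivative principle behind automatic differentiation can be illustrated without PyTorch. The sketch below implements forward-mode AD with hyper-dual numbers, which carry two first derivatives and one mixed second derivative exactly (no finite-difference error), and applies it to a vanilla Black–Scholes call as a stand-in for the full autocallable price (the paper differentiates the full pricing function via PyTorch functional transforms; all names here are ours):

```python
from math import exp, log, sqrt, pi, erf

class HD:
    """Hyper-dual number (v, d1, d2, d12): value, two first-derivative
    slots, and the mixed second derivative, propagated exactly."""
    def __init__(self, v, d1=0.0, d2=0.0, d12=0.0):
        self.v, self.d1, self.d2, self.d12 = v, d1, d2, d12

    @staticmethod
    def _lift(o):
        return o if isinstance(o, HD) else HD(o)

    def _chain(self, f, fp, fpp):
        # Chain rule through a scalar function with derivatives fp, fpp.
        return HD(f, fp * self.d1, fp * self.d2,
                  fp * self.d12 + fpp * self.d1 * self.d2)

    def __add__(self, o):
        o = HD._lift(o)
        return HD(self.v + o.v, self.d1 + o.d1, self.d2 + o.d2, self.d12 + o.d12)
    __radd__ = __add__

    def __sub__(self, o):
        o = HD._lift(o)
        return HD(self.v - o.v, self.d1 - o.d1, self.d2 - o.d2, self.d12 - o.d12)

    def __rsub__(self, o):
        return HD._lift(o).__sub__(self)

    def __mul__(self, o):
        o = HD._lift(o)
        return HD(self.v * o.v,
                  self.d1 * o.v + self.v * o.d1,
                  self.d2 * o.v + self.v * o.d2,
                  self.d12 * o.v + self.d1 * o.d2 + self.d2 * o.d1 + self.v * o.d12)
    __rmul__ = __mul__

    def __truediv__(self, o):
        o = HD._lift(o)
        return self * o._chain(1.0 / o.v, -1.0 / o.v ** 2, 2.0 / o.v ** 3)

def hd_log(x):
    x = HD._lift(x)
    return x._chain(log(x.v), 1.0 / x.v, -1.0 / x.v ** 2)

def hd_sqrt(x):
    x = HD._lift(x); s = sqrt(x.v)
    return x._chain(s, 0.5 / s, -0.25 / s ** 3)

def hd_ncdf(x):
    # Standard normal CDF; Phi' = phi, Phi'' = -x * phi.
    x = HD._lift(x)
    phi = exp(-0.5 * x.v ** 2) / sqrt(2.0 * pi)
    return x._chain(0.5 * (1.0 + erf(x.v / sqrt(2.0))), phi, -x.v * phi)

def bs_call(S, K, r, sigma, T):
    """Vanilla Black-Scholes call; S and sigma may be hyper-dual seeds."""
    sqT = hd_sqrt(T)
    d1 = (hd_log(S / K) + (r + 0.5 * sigma * sigma) * T) / (sigma * sqT)
    d2 = d1 - sigma * sqT
    return S * hd_ncdf(d1) - K * exp(-r * T) * hd_ncdf(d2)

# Seeding S in slot 1 and sigma in slot 2 makes .d1 = delta, .d2 = vega,
# and .d12 = vanna of the returned price, all exact to machine precision.
greeks = bs_call(HD(100.0, 1, 0, 0), 100.0, 0.05, HD(0.2, 0, 1, 0), 1.0)
```

Seeding both derivative slots with the same variable yields pure second derivatives: `bs_call(HD(S0, 1, 1, 0), ...)` returns gamma in `.d12`, and seeding sigma in both slots returns volga. The same component-wise decomposition applies term by term, since differentiation is linear over the four pricing components.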

4. Results

4.1. Model Performance Comparison

Table 1 presents comprehensive performance metrics comparing basic and orthogonal feature approaches across all model architectures.
The results demonstrate systematic and substantial performance degradation when conventional machine learning feature engineering is applied to financial data. CatBoost experiences the most severe degradation with a 191% RMSE increase, suggesting that tree-based splitting mechanisms perform optimally when operating on economically meaningful variables rather than mathematical combinations. Neural networks show significant degradation of 73%, while hybrid ensembles demonstrate intermediate patterns with 134% performance deterioration.
The differential performance patterns align with functional analysis theory, where CatBoost’s superior baseline performance reflects its natural alignment with the bounded variation characteristics of barrier-induced pricing discontinuities (DeVore 1998). Neural networks, while exhibiting larger degradation percentages, maintain reasonable absolute performance due to their effectiveness in Sobolev spaces characterizing smooth pricing regions (Cybenko 1989).
Notably, the hybrid ensemble achieves intermediate performance (RMSE: 0.4012) between the individual components (CatBoost: 0.3122, MLP: 0.5465), rather than outperforming both. This suggests that for autocallable pricing within our parameter ranges, the function exhibits stronger bounded variation characteristics than smooth Sobolev properties. The sequential ensemble design, where the MLP captures global structure before CatBoost models the residuals, may inadvertently reduce CatBoost's effectiveness by providing preprocessed rather than raw financial features. This observation indicates that the mixed regularity hypothesis, while theoretically sound, may not reflect the empirical dominance of discontinuous components in autocallable pricing, pointing toward the need for weighted or adaptive ensemble strategies that can dynamically favor the more suitable algorithm based on local function properties.
The consistency of degradation across diverse architectures indicates that the phenomenon reflects fundamental characteristics of financial data rather than algorithm-specific effects. The superior performance of basic features suggests that natural financial correlations encode essential information for accurate pricing rather than representing statistical redundancy requiring elimination.
Table 2 shows a comprehensive comparison across all model architectures and feature engineering approaches. The table includes RMSE, MAE, $R^2$ scores, training times, and evaluation times, demonstrating the systematic superiority of basic features while confirming computational efficiency across all approaches.
Concurrently, Figure 1 reveals the feature engineering paradox in financial machine learning.

4.2. Component-Wise Analysis

The pricing function decomposition in Figure 2 reveals how individual components contribute to overall mathematical complexity and machine learning challenges. The coupon component exhibits relatively smooth behavior suitable for neural network approximation, except near barrier transitions, where discrete autocall probabilities create discontinuities. The autocall component demonstrates binary characteristics, generating sharp transitions that favor tree-based methods.
The notional component follows exponential decay patterns modified by survival probability interactions, creating smooth global structure with localized barrier effects. The DIUO put option contributes the most mathematical sophistication through reflection method series expansion, generating oscillatory patterns with multiple local extrema despite its modest contribution to the total option value.
The component analysis demonstrates that autocallable pricing complexity emerges from interactions between structurally different mathematical elements rather than individual component sophistication. This mixed regularity justifies the hybrid ensemble approach while explaining why feature engineering choices significantly affect model performance.
The price surface demonstrates several key characteristics that justify our hybrid machine learning approach. Smooth regions away from barriers exhibit continuous derivatives suitable for neural network approximation, while sharp transitions near barrier levels create discontinuities that favor tree-based methods. The interaction between volatility and barrier proximity creates complex curvature patterns that require sophisticated modeling techniques.

4.3. Greeks’ Analysis and Sensitivity Patterns

Automatic differentiation analysis in Figure 3 reveals complex higher-order sensitivity patterns that extend beyond traditional option theory predictions. Delta surfaces exhibit expected increasing relationships with spot price but demonstrate sharp transitions near barriers where discrete autocall probabilities create step-function behavior. Gamma patterns show extreme concentration near autocall barriers with spikes exceeding traditional option levels.
Vanna and volga surfaces demonstrate systematic sign reversals across economically relevant parameter ranges. Vanna exhibits multiple sign changes, particularly near barrier transitions where spot–volatility interactions create complex cross-derivative patterns. Volga shows similar reversals concentrated near barrier levels where volatility changes affect both barrier reach probabilities and relative importance of different barrier effects.
The component-wise Greeks’ decomposition isolates how the DIUO put option creates disproportionate effects on higher-order derivatives. While contributing approximately 15–20% of the total option value, the DIUO put accounts for 60% of vanna complexity and 40% of volga sign reversal patterns. This disproportionate impact demonstrates the component’s critical role in determining sensitivity patterns and associated hedging challenges, providing empirical support for the theoretical barrier option analysis of Carr and Linetsky (2006) and extending it to structured product contexts.
The Greeks’ analysis reveals several critical insights for practical risk management that complement recent advances in computational derivatives’ pricing (Ruf and Wang 2020). The vanna sign reversals occur in multiple economically relevant zones, creating hedging challenges that traditional delta/gamma approaches cannot adequately address, consistent with the findings of Bossens et al. (2010) on volatility surface construction. The concentration of gamma near autocall barriers creates rebalancing requirements that may prove costly in practice, while volga patterns demonstrate how volatility risk management becomes increasingly complex near barrier levels where second-order volatility effects exhibit non-monotonic behavior.

5. Discussion

5.1. Implications for Financial Machine Learning

Our findings suggest that financial machine learning can benefit from domain-specific best practices that complement conventional statistical approaches, building upon the existing literature. The systematic performance improvements observed across diverse algorithms suggest that the phenomenon reflects characteristics of financial data structures and provides insights for algorithm design. These results contribute to advancing financial machine learning applications through a consideration of domain knowledge in model design alongside established statistical techniques.
Sirignano and Cont (2019) demonstrate that while universal approximation theorems suggest neural networks can learn any relationship, practical effectiveness depends on feature representation. Our findings extend this observation to show that economically informed feature transformations can complement mathematically optimal approaches. This perspective aligns with the domain adaptation literature in machine learning (Ganin and Lempitsky 2015), providing specialized techniques for financial contexts, as explored by Kouw and Loog (2019).
The validation framework contributions represent advances to the methodological literature. Harvey et al. (2016) emphasize the importance of economic significance alongside statistical significance in financial research. Our component-wise decomposition approach provides a framework for validating that models capture genuine financial structure, addressing considerations about overfitting and data mining bias in financial machine learning applications (Bailey et al. 2014).
The computational considerations from orthogonal features, while not immediately apparent in our static analysis, may prove relevant for dynamic trading environments. Farmer and Skouras (2013) demonstrate that optimization for different objectives can complement accuracy considerations in algorithmic trading. The balance between interpretability and computational efficiency represents an area for advancing both theoretical understanding and practical implementation, as explored in a recent study by Finn et al. (2017) on adaptive learning systems.

5.2. Research Foundation and Extensions

Our analysis establishes a foundation for advancing financial machine learning through domain-specific methodologies, while highlighting opportunities for mathematical and computational extensions. The synthetic data framework provides controlled experimental conditions, enabling a theoretical validation of our findings and establishing benchmarks for a future empirical validation. This controlled environment reveals patterns that may be obscured by market microstructure effects in real-world applications, providing theoretical insights (Hasbrouck 2007).
The focus on autocallable notes within controlled parameter ranges establishes a proof of concept that can be extended to other structured products and market conditions. Madan et al. (1998) demonstrate that different derivative structures exhibit mathematical relationships, suggesting that our domain knowledge preservation principle can be generalized through systematic frameworks. The Black–Scholes foundation with constant parameters provides a theoretical baseline for investigating how the principles extend to more sophisticated models incorporating stochastic volatility, jump processes, and other market complexities (Gatheral and Jacquier 2014).
The framework provides a foundation for hybrid approaches that combine domain knowledge preservation with statistical optimization. Rather than viewing these approaches as competing alternatives, our results suggest possibilities for adaptive feature engineering methodologies that preserve essential economic relationships while optimizing statistical properties through established mathematical techniques.
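To make the contrast between the two feature philosophies concrete, the following minimal sketch (illustrative only; the variable names, the toy spot–volatility relationship, and the PCA-style rotation are our assumptions, not the paper's implementation) builds a correlated "basic" feature set and its orthogonalized counterpart, then measures the decorrelation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical "basic" feature set for an autocallable: spot, volatility,
# and time to maturity, with a natural spot/vol link (leverage effect) kept.
spot = rng.uniform(80.0, 120.0, n)
vol = 0.2 + 0.3 * (100.0 - spot) / 100.0 + rng.normal(0.0, 0.02, n)
ttm = rng.uniform(0.1, 3.0, n)
basic = np.column_stack([spot, vol, ttm])

# Orthogonalized alternative: rotate centered features onto uncorrelated axes
# (the right singular vectors give a PCA-style rotation).
centered = basic - basic.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
orthogonal = centered @ vt.T

def max_abs_corr(x):
    """Largest off-diagonal |correlation| across feature pairs."""
    c = np.corrcoef(x, rowvar=False)
    return np.abs(c - np.eye(c.shape[0])).max()

print(max_abs_corr(basic))       # substantial: spot and vol are linked by design
print(max_abs_corr(orthogonal))  # near zero: decorrelation succeeded
```

The rotation removes the spot/volatility correlation essentially exactly, but any economic meaning attached to the original axes is rotated away with it, which is precisely the trade-off our results quantify.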

Functional Space Classification for Financial Products

Our application of the W^{k,2}(Ω) ⊕ BV(Ω) decomposition to autocallables suggests a systematic approach for classifying financial products by their functional space properties and selecting appropriate algorithms accordingly.
The classification framework would analyze pricing functions to identify smooth regions (suited to neural networks) versus discontinuous regions (better handled by tree-based methods). For autocallables, we demonstrated that discontinuities concentrate at barrier levels, but the principle extends to other structured products with different discontinuity patterns.
This provides practical guidance for algorithm selection in financial ML applications. Rather than applying generic ensemble methods, practitioners could classify their specific product’s mathematical properties and apply our functional space decomposition principle systematically. The approach leverages our theoretical foundation (Adams and Fournier 2003; DeVore et al. 2021) while providing actionable implementation guidance.
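As an illustration of how such a classification might be automated, the sketch below (a hypothetical helper; the function name, threshold, and toy barrier payoff are our assumptions) flags grid regions where discretized curvature spikes, which is where barrier-style discontinuities sit:

```python
import numpy as np

def classify_regions(price_fn, grid, kink_threshold=10.0):
    """Label interior grid points as 'smooth' or 'kinked' via second differences.

    Hypothetical helper: regions whose discretized curvature is large relative
    to the median are candidates for tree-based learners; smooth regions suit
    neural networks. The threshold is illustrative, not calibrated.
    """
    values = np.array([price_fn(s) for s in grid])
    curvature = np.abs(np.diff(values, 2))       # second-difference magnitude
    scale = np.median(curvature) + 1e-12         # robust scale; avoid divide-by-zero
    labels = np.where(curvature > kink_threshold * scale, "kinked", "smooth")
    return labels                                # aligned with grid[1:-1]

# Toy payoff with a kink at 90 and a barrier-style jump at spot = 100.
payoff = lambda s: max(s - 90.0, 0.0) if s < 100.0 else 5.0
grid = np.linspace(80.0, 120.0, 401)
labels = classify_regions(payoff, grid)
print(set(labels))  # both regimes present; the kinked cluster sits near the barrier
```

In a full implementation the labels would route each region to the appropriate learner, mirroring the functional space decomposition principle described above.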

6. Conclusions

This investigation establishes that domain-specific machine learning approaches can enhance performance in financial applications where correlations encode fundamental economic relationships. Our systematic analysis using autocallable structured notes reveals performance improvements of 43–191% when natural financial correlations are preserved through domain-aware feature engineering techniques. These findings provide an empirical foundation for advancing financial machine learning through specialized methodologies.
The results establish that correlation can represent essential economic information rather than statistical redundancy requiring elimination in financial contexts, contributing to the literature on domain-specific machine learning approaches (Gu et al. 2020; López de Prado 2020). Natural relationships between spot prices, volatility, barrier levels, and time decay encode essential pricing mechanics that can be leveraged for improved model performance. This principle provides a foundation for developing financial machine learning systems that integrate domain knowledge with statistical optimization.
The systematic nature of observed performance improvements across diverse algorithms provides evidence for potential applicability of the domain knowledge preservation principle to other financial instruments. This foundation enables the development of mathematical and machine learning techniques that can enhance both our theoretical understanding and practical applications in computational finance, as demonstrated by the recent advances in related areas (R. Chen et al. 2018; Li et al. 2020; Peters et al. 2017).

Author Contributions

Conceptualization, M.A.; methodology, M.A. and E.L.S.; writing—original draft preparation, M.A.; writing—review and editing, L.E. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Synthetic data generation code implementing the pricing framework and machine learning architectures will be made available upon reasonable request. Parameter ranges, constraint mechanisms, and model configurations are fully documented to enable replication.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

1. Adams, Robert Alexander, and John Joseph Francis Fournier. 2003. Sobolev Spaces. New York: Academic Press.
2. Bailey, David Harold, Jonathan Michael Borwein, Marcos López de Prado, and Qiji Jim Zhu. 2014. Pseudo-mathematics and financial charlatanism: The effects of backtest overfitting on out-of-sample performance. Notices of the AMS 61: 458–71.
3. Beck, Christian, Sebastian Becker, Philipp Grohs, Nor Jaafari, and Arnulf Jentzen. 2021. Solving the Kolmogorov PDE by means of deep learning. Journal of Scientific Computing 88: 1–28.
4. Bossens, Frédéric, Grégory Rayee, Nikolaos Skantzos, and Griselda Deelstra. 2010. Vanna-Volga methods applied to FX derivatives: From theory to market practice. International Journal of Theoretical and Applied Finance 12: 1107–31.
5. Carr, Peter, and Vadim Linetsky. 2006. A jump to default extended CEV model: An application of Bessel processes. Finance and Stochastics 10: 303–30.
6. Chen, Luyang, Markus Pelger, and Jason Zhu. 2023. Deep learning in asset pricing. Management Science 70: 714–50.
7. Chen, Ricky Tian Qi, Yulia Rubanova, Jesse Bettencourt, and David Kyle Duvenaud. 2018. Neural ordinary differential equations. Paper presented at the Advances in Neural Information Processing Systems 31 (NeurIPS 2018), Montreal, QC, Canada, December 3–8; pp. 6571–83.
8. Cui, Zhenyu, Jiawei Zhang, and Xiaoping Li. 2024. Hedging and Pricing Structured Products Featuring Multiple Underlying Assets. arXiv arXiv:2411.01121.
9. Cybenko, George. 1989. Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals and Systems 2: 303–14.
10. Davis, Jesse, Laurens Devos, Sebastiaan Reyners, and Wim Schoutens. 2021. Gradient boosting for quantitative finance. Journal of Computational Finance 24: 4.
11. Deng, Geng, Craig Joseph McCann, and Olivia Wang. 2012. Are Reverse Convertible Bonds Suitable Investments for Retail Investors? Journal of Investing 21: 35–50.
12. DeVore, Ronald Arthur. 1998. Nonlinear approximation. Acta Numerica 7: 51–150.
13. DeVore, Ronald Arthur, Boris Hanin, and Guergana Petrova. 2021. Neural network approximation. Acta Numerica 30: 327–444.
14. Farmer, James Doyne, and Spyros Skouras. 2013. An ecological perspective on the future of computer trading. Quantitative Finance 13: 325–46.
15. Finn, Chelsea, Pieter Abbeel, and Sergey Levine. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. Paper presented at the 34th International Conference on Machine Learning, Sydney, Australia, August 6–11; pp. 1126–35.
16. Ganin, Yaroslav, and Victor Lempitsky. 2015. Unsupervised domain adaptation by backpropagation. Paper presented at the 32nd International Conference on Machine Learning, Lille, France, July 6–11; pp. 1180–89.
17. Gatheral, James, and Antoine Jacquier. 2014. Arbitrage-free SVI volatility surfaces. Quantitative Finance 14: 59–71.
18. Gu, Shihao, Bryan Kelly, and Dacheng Xiu. 2020. Empirical asset pricing via machine learning. The Review of Financial Studies 33: 2223–73.
19. Guillaume, Fabrice. 2015. Accelerated pricing of autocallable products and barrier options. International Journal of Financial Engineering 2: 1550015.
20. Harvey, Campbell Russell, Yan Liu, and Heqing Zhu. 2016. ... and the cross-section of expected returns. The Review of Financial Studies 29: 5–68.
21. Hasbrouck, Joel. 2007. Empirical Market Microstructure: The Institutions, Economics, and Econometrics of Securities Trading. New York: Oxford University Press.
22. James, Gareth, Daniela Witten, Trevor Hastie, and Robert Tibshirani. 2013. An Introduction to Statistical Learning. New York: Springer.
23. Jolliffe, Ian Thomas, and Jorge Cadima. 2016. Principal component analysis: A review and recent developments. Philosophical Transactions of the Royal Society A 374: 20150202.
24. Kouw, Wouter Maurits, and Marco Loog. 2019. A review of domain adaptation without target labels. IEEE Transactions on Pattern Analysis and Machine Intelligence 41: 766–81.
25. Li, Zongyi, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. 2020. Neural operator: Graph kernel-based model for partial differential equations. arXiv arXiv:2003.03485.
26. Lipton, Alexander. 2001. Mathematical Methods for Foreign Exchange: A Financial Engineer’s Approach. Singapore: World Scientific.
27. López de Prado, Marcos. 2020. Machine learning for asset managers. In Elements in Quantitative Finance. Cambridge: Cambridge University Press.
28. Madan, Dilip Ballabh, Peter Carr, and Eric Chi Chang. 1998. The variance gamma process and option pricing. Review of Finance 2: 79–105.
29. Maddulety, Kishore. 2019. Machine Learning in Banking Risk Management: A Literature Review. Risks 7: 29.
30. Merton, Robert Cox. 1973. Theory of rational option pricing. The Bell Journal of Economics and Management Science 4: 141–83.
31. Nelken, Israel. 2000. Pricing, Hedging and Trading Exotic Options. New York: McGraw-Hill.
32. Paletta, Tommaso, and Radu Tunaru. 2022. A Bayesian view on autocallable pricing and risk management. The Journal of Derivatives 29: 97–127.
33. Peters, Jonas, Dominik Janzing, and Bernhard Schölkopf. 2017. Elements of Causal Inference: Foundations and Learning Algorithms. Cambridge, MA: MIT Press.
34. Rich, Don Richard. 1994. The mathematical foundations of barrier option-pricing theory. Advances in Futures and Options Research 7: 267–311.
35. Ruf, Johannes, and Weiguan Wang. 2020. Neural networks for option pricing and hedging: A literature review. Journal of Computational Finance 24: 1–46.
36. Sirignano, Justin, and Rama Cont. 2019. Universal features of price formation in financial markets: Perspectives from deep learning. Quantitative Finance 19: 1449–59.
37. Telpner, Jonathan. 2009. A Guide to Structured Products. Hoboken: John Wiley & Sons.
38. Wilmott, Paul, Sam Howison, and Jeff Dewynne. 1995. The Mathematics of Financial Derivatives: A Student Introduction. Cambridge: Cambridge University Press.
39. Wystup, Uwe. 2006. FX Options and Structured Products. Chichester: John Wiley & Sons.
40. Zeng, Shuai, and Deepika Rao. 2025. Synergizing domain knowledge and machine learning: Intelligent early fraud detection enhanced by earnings management analysis. International Review of Finance 25: e70021.
41. Zhang, Jiawei, Xiaoping Li, and Zhenyu Cui. 2024. Pricing and hedging autocallable products by Markov chain approximation. Review of Derivatives Research 27: 123–51.
Figure 1. (a) RMSE comparison: basic features consistently outperform orthogonal alternatives across neural networks, CatBoost, and hybrid ensembles. (b) R² superiority for domain-preserving features. (c) VIF analysis confirming successful multicollinearity reduction through orthogonalization, alongside the associated performance trade-off. (d) Prediction error distributions highlighting the systematic accuracy degradation when economic relationships are artificially decorrelated; basic features show errors more tightly clustered around zero.
Figure 2. Component-wise analysis of the autocallable pricing structure showing (a) the coupon surface, (b) the autocall component with binary characteristics, (c) the notional component with exponential decay, (d) the DIUO put barrier adjustment, and (e) the total value. The mixed regularity properties across components justify the hybrid ensemble approach.
Figure 3. Two-dimensional heatmap analysis of higher-order Greeks for total autocallable price revealing (a) overall price surface, (b) delta surface with barrier transitions, (c) gamma concentration near autocall levels, (d) vega patterns with volatility dependence, (e) vanna sign reversals across parameter space, and (f) volga complexity from DIUO put interactions. The systematic sign reversals create practical hedging challenges beyond traditional delta/gamma approaches.
Table 1. Model performance comparison: basic versus orthogonal features.
Model Architecture | Basic Features RMSE | Orthogonal Features RMSE | Performance Degradation | R² Basic vs. Orthogonal
Neural Network | 0.5465 | 0.9478 | +73% | 0.9861 vs. 0.9581
CatBoost | 0.3122 | 0.9100 | +191% | 0.9955 vs. 0.9614
Hybrid Ensemble | 0.4012 | 0.9384 | +134% | 0.9925 vs. 0.9590
Table 2. Detailed performance metrics.
Model-Feature | RMSE | MAE | R² | Training Time (s) | Evaluation Time (µs)
MLP-Basic | 0.5465 | 0.4246 | 0.9861 | 0.9 | 5.2
MLP-Orthogonal | 0.9478 | 0.7510 | 0.9581 | 0.7 | 3.0
CatBoost-Basic | 0.3122 | 0.2299 | 0.9955 | 2.7 | 3.7
CatBoost-Orthogonal | 0.9100 | 0.7088 | 0.9614 | 0.9 | 2.8
Hybrid-Basic | 0.4012 | 0.3021 | 0.9925 | 2.5 | 5.0
Hybrid-Orthogonal | 0.9384 | 0.7337 | 0.9590 | 1.5 | 4.5
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Ahnouch, M.; Elaachak, L.; Le Saout, E. Domain Knowledge Preservation in Financial Machine Learning: Evidence from Autocallable Note Pricing. Risks 2025, 13, 128. https://doi.org/10.3390/risks13070128


