Article

Learning Parameter Dependence for Fourier-Based Option Pricing with Tensor Trains

1 Department of Physics, The University of Tokyo, Tokyo 113-0033, Japan
2 Department of Physics, Saitama University, Saitama 338-8570, Japan
3 Center for Quantum Information and Quantum Biology, The University of Osaka, Osaka 560-0043, Japan
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(11), 1828; https://doi.org/10.3390/math13111828
Submission received: 18 April 2025 / Revised: 22 May 2025 / Accepted: 28 May 2025 / Published: 30 May 2025

Abstract:
A long-standing issue in mathematical finance is the speed-up of option pricing, especially for multi-asset options. A recent study has proposed to use tensor train learning algorithms to speed up Fourier transform (FT)-based option pricing, utilizing the ability of tensor trains to compress high-dimensional tensors. In this study, we focus on another usage of the tensor train, which is to compress functions, including their parameter dependence. Here, we propose a pricing method, where, by a tensor train learning algorithm, we build tensor trains that approximate functions appearing in FT-based option pricing with their parameter dependence and efficiently calculate the option price for varying input parameters. As a benchmark test, we run the proposed method to price a multi-asset option for various values of volatilities or present asset prices. We show that, in the tested cases involving up to 11 assets, the proposed method outperforms Monte Carlo-based option pricing with 10^6 paths in terms of computational complexity while keeping better accuracy.
MSC:
91G60; 65D05; 15A69; 65K05

1. Introduction

Financial firms are conducting demanding numerical calculations in their business. One of the most prominent ones is option pricing. An option is a financial contract in which one party, upon specific conditions being met, pays an amount (payoff) determined by the prices of underlying assets such as stocks and bonds to the other party. Concerning the time when the payoff occurs, the simplest and most common type of option is the European type, which this paper hereafter focuses on: at the predetermined future time (maturity) T, the payoff v(S(T)) is determined by the underlying asset prices S(T) at that time. For example, in a single-asset European call (put) option, one party has the right to buy (sell) an asset at the predetermined price, i.e., strike K, at maturity T, and the corresponding payoff function is v(S(T)) = max{c(S(T) − K), 0}, where c = 1 and −1 for a call and put option, respectively. In addition to this simple one, various types of options are traded and constitute a large part of the financial industry.
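For illustration, this call/put payoff can be written as a one-line function (a minimal Python sketch; the function name is ours):

```python
# Payoff of a single-asset European option at maturity T:
# v(S_T) = max{c (S_T - K), 0}, with c = +1 for a call and c = -1 for a put.
def payoff(s_T: float, strike: float, c: int = 1) -> float:
    return max(c * (s_T - strike), 0.0)
```

For instance, a call struck at 100 pays 20 when the asset finishes at 120 and nothing when it finishes at 80, while the corresponding put pays 20 in the latter case.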
Pricing options appropriately is needed for making a profit and managing the risk of loss in option trading. According to the theory in mathematical finance (as typical textbooks in this area, we refer to refs. [1,2]), the price of an option is given by the expectation of the discounted payoff in the contract, with a stochastic model on the dynamics of underlying asset prices assumed. Except for limited cases with simple contract conditions and models, the analytical formula for the option price is not available, and thus we need to resort to numerical calculation. In the rapidly changing financial market, quick and accurate pricing is vital in option trading, but it is a challenging task, for which long-lasting research has been conducted. In particular, pricing multi-asset options, whose payoff depends on the prices of multiple underlying assets, is often demanding. Analytical formulas are available for limited cases: ref. [3] gave the formula for options on the minimum or maximum of two assets, but not for the case with three or more assets. Ref. [4] gave the formal solution for the price of that type of option with n assets, written with the cumulative distribution function of the high-dimensional normal distribution, but it needs to be numerically evaluated in general cases. Pricing methodologies such as PDE-based solvers (e.g., the finite difference method [5]) and tree models [6] suffer from the so-called curse of dimensionality, which means the exponential increase in computational complexity with respect to the asset number, and the Monte Carlo method, which may evade the exponential complexity, has a slow convergence rate. The quasi-Monte Carlo method is more effective than the Monte Carlo method up to a certain level of the dimension but has worst-case complexity exponential with respect to the dimension, although methods to mitigate this have been proposed [7,8,9].
At any rate, in the rapidly changing financial market, the pursuit of the speed-up of option pricing can never go too far.
Motivated by these points, recently, applications of quantum computing to option pricing have been considered actively (see ref. [10] as a comprehensive review). For example, many studies have focused on applications of the quantum algorithm for Monte Carlo integration [11], which provides the quadratic quantum speed-up over the classical counterpart. Unfortunately, running such a quantum algorithm requires fault-tolerant quantum computers, which may take decades to be developed.
In light of this, applications of quantum-inspired classical algorithms, that is, tensor network (TN) algorithms, to option pricing have also been studied as presently available solutions [12,13,14,15]. Among them, this paper focuses on the application of tensor train (TT) [16] learning to the FT-based option pricing method [17,18], following the original proposal in ref. [13]. This option pricing method is based on converting the integration for the expected payoff in the space of the asset prices S(T) to that in the Fourier space, namely, the space of z, the wavenumbers corresponding to the logarithm of S(T). After this conversion, the numerical integration is performed more efficiently in many cases. Unfortunately, the FT-based method with naive grid-based integration suffers from the curse of dimensionality, with its computational complexity increasing exponentially for multi-asset options.
On the other hand, the tensor network is the technique originally developed in quantum many-body physics to express state vectors with exponentially large dimensions efficiently (see [19,20] as reviews), and recently, it has also been utilized in various fields such as machine learning [21,22,23], quantum field theory [24,25,26], and partial differential equations [14,27,28,29].
A recent study [13] proposed using tensor train learning algorithms to accelerate FT-based option pricing by leveraging the ability of tensor networks to compress high-dimensional functions of z involved in FT-based option pricing. Specifically, the authors constructed TTs, a kind of tensor network, approximating those functions by using a TT learning algorithm called tensor cross interpolation (TCI) [25,30,31,32], which allowed them to evaluate the relevant integrals more efficiently and achieve a significant speed-up in FT-based option pricing in their test cases. As another approach for accelerating FT-based multi-asset option pricing, please refer to [33].
Here, we would like to point out an issue in this TT-based method. That is, we need to rerun TCI to obtain TTs and compute the option price each time the input parameters, such as volatility and initial asset price, are changed. According to our numerical experiments, the TT learning method for FT-based option pricing [13] takes a longer time than the Monte Carlo method, which means that the TT-based method does not have a computational time advantage.
To address this issue, we focus on another usage of tensor trains that can embed parameter dependence [34,35,36,37] to make FT-based option pricing more efficient. Namely, we learn TTs that approximate the functions including not only the dependence on z but also that on parameters in the asset price model, such as the volatilities and the present asset prices, by a single application of TCI for each function. We use these tensor trains to evaluate the integral including parameter dependence and perform fast option pricing in response to various parameter changes (refer to Figure 1). Note that this is an advantage of the proposed method not only over the original TT-based method in ref. [13] but also over other aforementioned methods such as PDE-based solvers, tree models, and the (quasi-)Monte Carlo method. When parameters are changed and an option is repriced, these methods must be rerun from scratch, in contrast to the proposed method, in which we just reuse the TT that is evaluated efficiently.
For evaluation, we consider two benchmark scenarios in which we vary volatilities and present stock prices under the Black–Scholes model, focusing specifically on a min-option. In the test cases, it is seen that for up to 11 assets, the computational complexity of our proposed method, measured by the number of elementary operations, is smaller than that of the Monte Carlo method with 10^6 paths by a factor of O(10^5) (see Figure 2). We also confirm numerically that in the tested cases, the accuracy of our method is within the statistical error of the Monte Carlo method with 10^6 paths. In summary, these results indicate that, at least in the tested cases, our proposed method offers significant advantages in computational complexity while keeping better accuracy.
In the context of TT-based approximations of high-dimensional functions, incorporating parameter dependence into TTs has been considered in some fields [34,35,36,37]. However, to the best of our knowledge, our study is the first to take such an approach in FT-based option pricing in order to make it more efficient for varying parameters, which provides practical benefits in the rapidly changing financial market.

2. Tensor Train

A d-way tensor F_{x_1,…,x_d}, where each local index x_l, l = 1,…,d, has a local dimension N, can be decomposed into a TT format with a low-rank structure. The TT decomposition of F_{x_1,…,x_d} can be expressed as follows:
F_{x_1,\dots,x_d} \approx \sum_{l_1=1}^{\chi_1} \cdots \sum_{l_{d-1}=1}^{\chi_{d-1}} F^{(1)}_{l_1, x_1} F^{(2)}_{l_1 l_2, x_2} \cdots F^{(d)}_{l_{d-1}, x_d} = \prod_{i=1}^{d} F^{(i)}_{x_i},    (1)
where F^{(i)}_{x_i} denotes each three-way tensor core, l_i represents the virtual bond index, and χ_i is the dimension of the virtual bond. One of the main advantages of the TT format is that it significantly reduces computational complexity and memory requirements by reducing the bond dimensions χ_i. A tensor train is mathematically equivalent to a matrix product state (MPS) [38].
This is an equivalent expression to the wave function F_{x_1,…,x_d} of a quantum system with d N-level qudits, as follows:
|F\rangle = \sum_{x_1,\dots,x_d} \sum_{l_1=1}^{\chi_1} \cdots \sum_{l_{d-1}=1}^{\chi_{d-1}} F^{(1)}_{l_1, x_1} F^{(2)}_{l_1 l_2, x_2} \cdots F^{(d)}_{l_{d-1}, x_d} |x\rangle,    (2)
where |x⟩ = |x_1⟩ ⊗ ⋯ ⊗ |x_d⟩ is the tensor product of |x_1⟩, …, |x_d⟩, the basis states from |1⟩ to |N⟩.
In the same manner, the 2d-way tensor F^{y_1,…,y_d}_{x_1,…,x_d}, where each local index x_l has local dimension N_l and each local index y_l has local dimension M_l, l = 1,…,d, can be expressed as the product of four-way tensor cores [F^{y_i}_{a_{i-1} x_i a_i}]^{(i)}:
F^{y_1,\dots,y_d}_{x_1,\dots,x_d} \approx \sum_{a_1=1}^{\chi_1} \cdots \sum_{a_{d-1}=1}^{\chi_{d-1}} [F^{y_1}_{a_0 x_1 a_1}]^{(1)} \cdots [F^{y_d}_{a_{d-1} x_d a_d}]^{(d)} = \prod_{i=1}^{d} [F^{y_i}_{x_i}]^{(i)},    (3)
which is called a tensor train operator (TTO) or matrix product operator (MPO) [38].
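As a concrete illustration of how a TT is used, the following sketch (Python with NumPy; the function name is ours) evaluates a single entry of a TT by sweeping a row vector through the cores; fixing x_i slices each core to a χ_{i−1} × χ_i matrix, so one entry costs O(dχ²) operations:

```python
import numpy as np

def tt_entry(cores, indices):
    """Evaluate one entry F[x_1, ..., x_d] of a tensor train.

    cores[i] has shape (chi_{i-1}, N, chi_i), with chi_0 = chi_d = 1;
    fixing x_i slices each core to a chi_{i-1} x chi_i matrix, and the
    entry is the product of these matrices.
    """
    v = np.ones(1)
    for core, x in zip(cores, indices):
        v = v @ core[:, x, :]  # contract the shared virtual bond
    return v.item()
```

For a rank-1 TT whose cores hold vectors a^(i), the entry reduces to the product of the selected vector components.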

2.1. Compression Techniques

We introduce the two compression techniques used in this study.

2.1.1. Tensor Cross Interpolation

Tensor cross interpolation (TCI) is a technique to compress tensors corresponding to discretized multivariate functions with a low-rank TT representation. Here, we consider a tensor that, with grid points set in R^d, has entries F_{x_1,…,x_d} equal to F(x_1,…,x_d), the values of a function F on the grid points. Although we here denote the indexes of the tensor and the variables of the function by the same symbols x_1,…,x_d for illustrative presentation, we assume that, in reality, the grid points in R^d are labeled by integers and the indexes of the tensor denote those integers. Leaving the detailed explanation to refs. [25,30,31,32], we describe its outline. It learns a TT using the values of the target function F_{x_1,x_2,…,x_d} at indexes (x_1, x_2, …, x_d) adaptively sampled according to specific rules. TCI actively inserts adaptively chosen interpolation points (pivots) from the sample points to learn the TT, which can be seen as a type of active learning. It gives estimated values of the function at points across the entire domain, although we use only the function values at a small number of sample points in learning. This is the key advantage of TCI and is particularly useful for compressing target tensors with a vast number of elements, contrary to singular value decomposition (SVD), which requires access to the full tensor. Note that TCI is a heuristic method, which means its effectiveness heavily depends on the internal algorithm to choose the pivots and the initial set of points selected randomly.
In this study, when we learn a TT from a function with TCI, we add pivots so that the following error in the maximum norm is minimized down to the tolerance ϵ_TCI:
\epsilon_{\max} = \frac{\| F_{x_1, x_2, \dots, x_d} - \tilde{F}_{\mathrm{TT}} \|_{\max}}{\| F_{x_1, x_2, \dots, x_d} \|_{\max}},    (4)
where F_{x_1,x_2,…,x_d} is the target tensor, F̃_TT is its low-rank approximation, and the maximum norm is evaluated as the maximum of the absolute values of the entries at the pivots selected so far. Here, we assume that we can access arbitrary elements of the target tensor; note that we do not need to store all of its elements. The computational complexity of TCI is roughly proportional to the number of elements in the TT, which is O(dχ²N) with χ_1,…,χ_{d−1} fixed to χ. In addition, since zero may be included among the reference function values, the error is normalized by ‖F_{x_1,x_2,…,x_d}‖_max.

2.1.2. Singular Value Decomposition

In this study, we use singular value decomposition (SVD) to further compress the TTs obtained by TCI with its error threshold ϵ_TCI set sufficiently low. This is carried out by first canonicalizing the TT using QR decompositions and then performing the compression via SVD, discarding singular values that are smaller than the tolerance ϵ_SVD set for each bond. This tolerance is defined by
\epsilon_{\mathrm{SVD}} = \frac{\| \tilde{F}_{\mathrm{TT}} - \tilde{F}'_{\mathrm{TT}} \|_F^2}{\| \tilde{F}_{\mathrm{TT}} \|_F^2},    (5)
where ‖·‖_F indicates the Frobenius norm, F̃_TT is the TT obtained from TCI, and F̃′_TT is the TT after the SVD. For more technical details, readers are referred to refs. [16,38].
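The bond-wise truncation can be sketched as follows (Python with NumPy; names are ours). The sketch assumes the TT has been canonicalized up to the bond in question, so that discarding small singular values of the merged two-core matrix gives the optimal local truncation:

```python
import numpy as np

def truncate_bond(A, B, eps):
    """Compress the virtual bond between two neighboring cores by SVD.

    A has shape (chi_l, N, chi), B has shape (chi, M, chi_r). Singular
    values whose squared relative weight is below eps are discarded.
    """
    chi_l, N, chi = A.shape
    _, M, chi_r = B.shape
    # Merge the two cores across the bond into a matrix and decompose it.
    theta = A.reshape(chi_l * N, chi) @ B.reshape(chi, M * chi_r)
    U, s, Vt = np.linalg.svd(theta, full_matrices=False)
    w = (s / np.linalg.norm(s)) ** 2      # relative squared weights
    keep = max(1, int((w > eps).sum()))   # always keep at least one value
    A_new = U[:, :keep].reshape(chi_l, N, keep)
    B_new = (s[:keep, None] * Vt[:keep]).reshape(keep, M, chi_r)
    return A_new, B_new
```

Applied to a bond whose merged matrix has one dominant singular value, the routine shrinks the bond dimension to 1 while leaving the represented tensor essentially unchanged.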

3. Fourier Transform-Based Option Pricing Aided by Tensor Cross Interpolation

3.1. Fourier Transform-Based Option Pricing

In this paper, we consider the underlying asset prices S ( t ) = ( S 1 ( t ) , , S d ( t ) ) in the Black–Scholes (BS) model described by the following stochastic differential equation:
dS_m(t) = r S_m(t)\, dt + \sigma_m S_m(t)\, dW_m(t).    (6)
Here, W_1(t), …, W_d(t) are the Brownian motions with constant correlation matrix (ρ_{mn}), namely,
dW_m(t)\, dW_n(t) = \rho_{mn}\, dt,    (7)
where r ∈ R and σ_1, …, σ_d > 0 are constant parameters called the risk-free interest rate and the volatilities, respectively. The present time is set to t = 0, and the present asset prices are denoted by S_0 = (S_{1,0}, …, S_{d,0}).
We consider European-type options, in which the payoff v(S(T)) depends on the asset prices S(T) at the maturity T. According to the theory of option pricing, the price V of such an option is given by the expectation of the discounted payoff:
V(p) = \mathbb{E}\left[ e^{-rT} v(S(T)) \,\middle|\, S_0 \right] = e^{-rT} \int_{\mathbb{R}^d} v(\exp(x))\, q(x|x_0)\, dx,    (8)
where we define exp(x) := (e^{x_1}, …, e^{x_d}). q(x|x_0) is the probability density function of x := (log S_1(T), …, log S_d(T)), the log asset prices at T, conditioned on the present value x_0 = (log S_{1,0}, …, log S_{d,0}). In the BS model defined by (6), q(x|x_0) is given by the d-variate normal distribution:
q(x|x_0) = \frac{1}{\sqrt{(2\pi)^d \det \Sigma}} \exp\left( -\frac{1}{2} (x - \mu)^T \Sigma^{-1} (x - \mu) \right),    (9)
where Σ := (σ_m σ_n ρ_{mn} T)_{mn} is the covariance matrix of x and μ := x_0 + (rT − σ_1²T/2, …, rT − σ_d²T/2). Note that, in Equation (8), we denote the option price by V(p), indicating its dependence on the parameters p, such as the volatilities σ = (σ_1, …, σ_d) and the present asset prices S_0.
In FT-based option pricing, we rewrite Formula (8) as an integral in the Fourier space:
V(p) = \frac{e^{-rT}}{(2\pi)^d} \int_{\mathbb{R}^d + i\alpha} \phi(-z)\, \hat{v}(z)\, dz.    (10)
Here, z = ( z 1 , , z d ) is the wavenumber vector corresponding to x .
\phi(z) := \mathbb{E}\left[ e^{i z \cdot x} \,\middle|\, x_0 \right] = \int_{\mathbb{R}^d} e^{i z \cdot x} q(x|x_0)\, dx    (11)
is the characteristic function, and in the BS model, it is given by
\phi(z) = \exp\left( i \sum_{m=1}^{d} z_m \mu_m - \frac{T}{2} \sum_{m=1}^{d} \sum_{k=1}^{d} \sigma_m \sigma_k z_m z_k \rho_{mk} \right).    (12)
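The characteristic function above translates directly into code; a minimal sketch (Python with NumPy; the function name is ours), which also accepts the complex arguments needed on the shifted integration contour used later:

```python
import numpy as np

def bs_char_func(z, mu, sigma, rho, T):
    """Characteristic function phi(z) = E[exp(i z.x)] of the log asset
    prices at maturity T under the multivariate Black-Scholes model.

    Accepts complex z, as required on the shifted contour.
    """
    z = np.asarray(z, dtype=complex)
    mu = np.asarray(mu, float)
    sigma = np.asarray(sigma, float)
    # Quadratic form sum_{m,k} sigma_m sigma_k rho_{mk} z_m z_k
    quad = z @ (np.outer(sigma, sigma) * np.asarray(rho, float)) @ z
    return np.exp(1j * z @ mu - 0.5 * T * quad)
```

In one dimension this reduces to the familiar normal characteristic function exp(izμ − σ²Tz²/2), which provides a simple correctness check.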
\hat{v}(z) := \int_{\mathbb{R}^d} e^{i z \cdot x} v(\exp(x))\, dx is the Fourier transform of the payoff function v, and its explicit formula is known for some types of options. For example, for a European min-call option with strike K, which we will consider in our numerical demonstration, the payoff function is
v_{\min}(S(T)) = \max\{ \min\{ S_1(T), \dots, S_d(T) \} - K,\, 0 \}    (13)
and its Fourier transform is as follows [39]:
\hat{v}_{\min}(z) = \frac{(-1)^{d-1}\, K^{1 + i \sum_{m=1}^{d} z_m}}{\left( 1 + i \sum_{m=1}^{d} z_m \right) \prod_{m=1}^{d} (i z_m)}.    (14)
Note that for v̂_min(z) to be well defined, z ∈ C^d must be taken so that Im z_m > 0 and Σ_{m=1}^d Im z_m > 1. The vector α ∈ R^d in Equation (10) characterizes the integration contour respecting the above conditions on Im z_m and is taken so that α_m > 0 and Σ_{m=1}^d α_m > 1.
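A sketch of the min-call payoff transform (Python with NumPy; the function name is ours). The overall sign is fixed by requiring that the d = 1 case reduce to the standard call-payoff transform ∫ e^{izx}(e^x − K)^+ dx = K^{1+iz}/((1+iz)(iz)), which the test checks against direct numerical integration:

```python
import numpy as np

def vhat_min(z, K, d):
    """Fourier transform of the min-call payoff for a length-d complex
    vector z on the contour Im z_m > 0, sum_m Im z_m > 1."""
    z = np.asarray(z, dtype=complex)
    s = 1.0 + 1j * z.sum()
    return (-1.0) ** (d - 1) * K ** s / (s * np.prod(1j * z))
```

For d = 1 and K = 1 at z = 0.5 + 2i, the value agrees with a midpoint-rule quadrature of the defining integral to high accuracy.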
In the numerical calculation of Equation (10), we approximate it by discretization:
V(p) \approx \frac{e^{-rT}}{(2\pi)^d} \sum_{j_1, \dots, j_d = -N/2}^{N/2} \phi(-z_{\mathrm{gr},j} - i\alpha)\, \hat{v}(z_{\mathrm{gr},j} + i\alpha)\, \Delta_{\mathrm{vol}}.    (15)
Here, the even natural number N is the number of the grid points in one dimension, and z_{gr,j} is the grid point specified by the integer vector j as
z_{\mathrm{gr},j} := (\eta j_1, \dots, \eta j_d),    (16)
where η is the constant integration step size in each direction, and Δ_vol := η^d is the volume element. η is a hyperparameter that must be appropriately determined. In our demonstration, it is set to 0.4 or 0.3, which yields accurate results (see the Numerical Demonstration section for details).

3.2. Fourier Transform-Based Option Pricing with Tensor Trains

Note that to compute the sum in Equation (15), we need to evaluate ϕ and v̂ exponentially many times with respect to the asset number d. This is not feasible for large d. Thus, to reduce the computational complexity, following ref. [13], we consider approximating ϕ and v̂ by TTs. For the tensor ϕ_{j_1,…,j_d} (resp. v̂_{j_1,…,j_d}), whose entry with index j is ϕ(−z_{gr,j} − iα) (resp. v̂(z_{gr,j} + iα)), we construct a TT approximation ϕ̃_{j_1,…,j_d} (resp. ṽ_{j_1,…,j_d}) by TCI. Then, we approximately calculate Equation (15) by
V(p) \approx \frac{e^{-rT}}{(2\pi)^d} \sum_{j_1, \dots, j_d = -N/2}^{N/2} \tilde{\phi}_{j_1,\dots,j_d}\, \tilde{v}_{j_1,\dots,j_d}\, \Delta_{\mathrm{vol}},    (17)
where each index j_i (i = 1,…,d) runs from −N/2 to N/2, and the corresponding integration range for each variable is from −ηN/2 to ηN/2. We must suitably choose both the number of grid points N and the integration step size η, which serve as hyperparameters in this approach.
Thanks to TCI, we can obtain the approximate TTs, avoiding the evaluations of ϕ and v ^ at all the grid points. In addition, given the TTs, we can compute the sum in (17) as the contraction of two TTs without exponentially many iterations: with the bond dimensions at most χ , the number of multiplications and additions is of order O ( d N χ 3 ) .
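The contraction of two TTs sharing their physical indices can be sketched as follows (Python with NumPy; names are ours); each step updates an environment matrix over the open bonds, giving the O(dNχ³) count stated above:

```python
import numpy as np

def tt_inner(cores_a, cores_b):
    """Plain bilinear contraction sum_x A[x] B[x] of two tensor trains
    sharing their physical indices (no complex conjugation is taken).

    cores_*[i] has shape (chi_left, N, chi_right); the left and right
    boundary bonds have dimension 1.
    """
    E = np.ones((1, 1))  # environment over the two open left bonds
    for A, B in zip(cores_a, cores_b):
        T = np.einsum('ab,axc->bxc', E, A)   # absorb A's next core
        E = np.einsum('bxc,bxd->cd', T, B)   # absorb B's core, sum over x
    return E.item()
```

For rank-1 TTs representing product functions, the result factorizes into a product of per-site sums, which makes the routine easy to verify.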
Hereafter, we simply call this approach of FT-based option pricing aided by TTs "TT-based option pricing".

3.3. Monte Carlo-Based Option Pricing

Here, we also make a brief description of the Monte Carlo (MC)-based option pricing. It is a widely used approach in practice, and we take it as a comparison target in our numerical demonstration of TT-based option pricing.
In the MC-based approach, we estimate the expectation in Equation (8) by the average of the payoffs in the sample paths:
V(p) \approx e^{-rT} \times \frac{1}{N_{\mathrm{path}}} \sum_{i=1}^{N_{\mathrm{path}}} v(\exp(x_i)),    (18)
where x_1, …, x_{N_path} are i.i.d. samples from q(x|x_0). On how to sample multivariate normal variables, we leave the details to textbooks (e.g., ref. [40]) and just mention that it requires operations more complicated than simple multiplications and additions, such as evaluations of some elementary functions. Furthermore, calculating the payoff v from the normal variable x_i involves exponentiation. In the MC simulation for d assets with N_path paths, the number of such operations is O(d N_path), and we hereafter estimate the computational complexity of MC-based option pricing by this.
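A minimal sketch of this estimator for the min-call under the BS model (Python with NumPy; the function name and the fixed seed are ours):

```python
import numpy as np

def mc_min_call(S0, K, r, sigma, rho, T, n_path, seed=0):
    """Monte Carlo estimate of a d-asset European min-call price, by
    averaging discounted payoffs over sampled terminal log prices."""
    rng = np.random.default_rng(seed)
    S0 = np.asarray(S0, float)
    sigma = np.asarray(sigma, float)
    cov = np.outer(sigma, sigma) * np.asarray(rho, float) * T
    mu = np.log(S0) + (r - 0.5 * sigma ** 2) * T
    x = rng.multivariate_normal(mu, cov, size=n_path)  # log S(T) samples
    payoff = np.maximum(np.exp(x).min(axis=1) - K, 0.0)
    return float(np.exp(-r * T) * payoff.mean())
```

For d = 1, the min-call is a vanilla call, so the estimate can be checked against the Black–Scholes closed-form value (about 8.433 for S_0 = K = 100, r = 0.01, σ = 0.2, T = 1) within the statistical error.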

4. Learning Parameter Dependence with Tensor Trains

4.1. Outline

Option prices V ( p ) depend on input parameters p such as the volatilities σ and the present asset prices S 0 . In the rapidly changing financial market, these input parameters vary from time to time, which causes the change in the option price. Therefore, if we have a function, i.e., tensor trains, that efficiently outputs an accurate approximation of the option price for various values of the input parameters, it provides a large benefit to practical business.
Then, extending the aforementioned FT-based option pricing method with TTs, we propose a new scheme to quickly compute the option price in response to changes in the input parameter set. Using TCI, we obtain TTs that approximate ϕ, incorporating the parameter dependence of this function, and v̂. The outline is illustrated in Figure 1. Here, focusing not on all the parameters but on a part of them, we take p as a d-dimensional vector, e.g., either σ or S_0. Considering ϕ as a function of not only z but also p, we set in the space of z and p the grid points (z_{gr,j}, p_{gr,k}) labeled by the index vectors j and k. Then, as illustrated in Figure 1(a1), we run TCI to obtain the TTs ϕ̃_{j_1,k_1,…,j_d,k_d} and ṽ_{j_1,…,j_d} that, respectively, approximate the tensors ϕ_{j_1,k_1,…,j_d,k_d} and v̂_{j_1,…,j_d}, whose entries are the values of ϕ and v̂ at the grid points (z_{gr,j}, p_{gr,k}) and z_{gr,j}, respectively. We reduce the bond dimensions of these TTs using SVD. As shown in Figure 1(a2), we then contract adjacent core tensors pairwise and obtain a tensor train operator (TTO) ϕ̃^{k_1,…,k_d}_{j_1,…,j_d}. Then, as shown in Figure 1(a3), we contract this TTO ϕ̃^{k_1,…,k_d}_{j_1,…,j_d} and the TT ṽ_{j_1,…,j_d} along the index vector j as follows:
\tilde{V}_{k_1,\dots,k_d} = \sum_{j_1, \dots, j_d = -N/2}^{N/2} \tilde{\phi}^{\,k_1,\dots,k_d}_{j_1,\dots,j_d}\, \tilde{v}_{j_1,\dots,j_d}.    (19)
Then, we optimize the bond dimensions of Ṽ_{k_1,…,k_d} using SVD. Thus, as shown in Figure 1(a4), we obtain a new function Ṽ_{k_1,…,k_d} whose inputs are the parameters and whose output is the option price. Having this TT, we obtain the option price for a specified value of p as illustrated in Figure 1b: by fixing the index k of Ṽ_{k_1,…,k_d} to the value (k̂_1,…,k̂_d) corresponding to the specified p, we obtain
V(p_{\mathrm{gr},\hat{k}}) \approx \frac{e^{-rT}}{(2\pi)^d} \tilde{V}_{\hat{k}_1,\dots,\hat{k}_d}\, \Delta_{\mathrm{vol}}.    (20)
Here, we mention the ordering of the local indices of the TT. The two core tensors that correspond to z_j and p_j of the same asset are arranged next to each other, i.e., the order is (z_1 p_1 z_2 p_2 ⋯ z_d p_d). In the numerical demonstration, we have found that this arrangement allows us to compress the TTs with parameter dependence while maintaining the accuracy of the option pricing. On the other hand, if the core tensors on z and those on p are completely separated, i.e., (z_1 z_2 ⋯ z_d p_1 p_2 ⋯ p_d), the accuracy of the option pricing deteriorates since TCI fails to learn these tensor trains. However, the optimal arrangement of the local indices may vary, for example, depending on the correlation matrix: intuitively, the core tensors corresponding to highly correlated assets should be placed nearby. Although we do not discuss it in detail, this is an important topic for future research.
In the two test cases for our proposed method in the Numerical Demonstration section, we will identify p as the volatility σ or the present asset price S 0 , the varying market parameters that particularly affect the option prices.

4.2. Computational Complexity

The computational complexity of TT-based option pricing, which involves computing the specific tensor component Ṽ_{k̂_1,…,k̂_d} for the fixed values of k̂_1,…,k̂_d, is O(d χ_Ṽ²) [38]. Here, we denote the maximum bond dimension of Ṽ_{k_1,…,k_d} by χ_Ṽ. In fact, the bond dimension depends on the bond index, and it is necessary to account for this in an accurate evaluation of the number of operations. Indeed, we consider this point in evaluating the computational complexity of TT-based option pricing demonstrated in the Discussion section.
Here, we ignore the computational complexity of all the processes in Figure 1a and consider only that of the process in Figure 1b, which runs after we obtain the TT. This is reasonable if we can use plenty of time to perform these preparatory tasks before we need fast option pricing. As discussed in the Discussion section, such a situation is reasonably found in practice.

5. Numerical Demonstration

5.1. Details

Now, as a demonstration, we apply the proposed method to pricing a d-asset European min-call option in the BS model. In the following, we describe the parameter values used in this study, the software used, and how the errors were evaluated.

5.1.1. Ranges of Volatility and Initial Stock Price

With respect to p, on which the TTs learn the dependence of the functions ϕ and v̂, we take two test cases: p is the volatilities σ or the present asset prices S_0. In the proposed method, we need to set the range in the space of p and the grid points in it. For each volatility σ_m, we set the range to σ_m ∈ [0.15, 0.25), where the center σ_m = 0.2 is a typical value of the Nikkei Stock Average Volatility Index [41] and the range width ±0.05 covers the changes in this index on most days. For each S_{m,0}, we set the range to S_{m,0} ∈ [90, 120), which corresponds to the 20% variation in the asset price centered at 100. The lower bound is set not to 80 but to 90 because the price of the option we take as an example is negligibly small for S_0 < 90. For both σ_m and S_{m,0}, we set 100 equally spaced grid points in the range, so the total number of grid points in the space of p is 100^d.

5.1.2. Other Parameters

The other parameters for option pricing are fixed to the values summarized in Table 1. The values of α are adopted from the previous study [13], and we confirmed that the solution remains stable when these values are varied slightly. N and η are set sufficiently large and small, respectively, so that the accuracy of the proposed approach becomes better than that of the MC method (Table 1).

5.1.3. Error Evaluation

We do not have the exact price of the multivariate min-call option since there is no known analytic formula for it. Instead, we regard the option price computed by the MC-based method with very many paths, concretely 5 × 10^7 paths, as the true value. The error of the option price computed by the proposed method is evaluated by calculating the mean absolute error from the true value. As a comparison target for our method, we use an MC-based approach with 10^6 paths. As the error of the MC-based option price, we compute the half-width of its 95% confidence interval, namely 1.96σ/√n_MC, where σ is the sample standard deviation of the discounted payoff in the MC run and the path number n_MC is 10^6. We assess the accuracy of our method by verifying whether its error is below that of the MC-based method.
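The MC error bar used here can be computed as follows (a one-line sketch; the function name is ours):

```python
import numpy as np

def mc_ci_halfwidth(discounted_payoffs):
    """Half-width 1.96 * s / sqrt(n_MC) of the 95% confidence interval,
    with s the sample standard deviation of the discounted payoffs."""
    x = np.asarray(discounted_payoffs, float)
    return 1.96 * x.std(ddof=1) / np.sqrt(x.size)
```

The sample standard deviation uses the unbiased (n − 1) normalization, matching the usual confidence-interval construction.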
Here, an issue is that the number of possible combinations of the parameter p is 100^d, and thus we cannot test all of them. Thus, we randomly select 100 combinations and perform option pricing for each of them. We compare the mean absolute error of our method over the 100 parameter sets with the one obtained from the Monte Carlo simulations with the same parameter setting.

5.1.4. Software and Hardware Used in This Study

TensorCrossInterpolation.jl (v0.9.16) [42] was used for learning tensor trains from functions appearing in Fourier-based option pricing. The Monte Carlo simulations were carried out using tf-quant-finance (v0.0.1.dev34) [43]. Parallelization was not employed in either case. GPUs were not utilized in any of the calculations. The computations were performed on a 2023 MacBook Pro featuring a 12-core Apple M2 Max processor and 32 GB of 400 GB/s unified memory.

5.2. Results

We show the results for the computational complexity, time, and accuracy of TT-based and MC-based option pricing when two parameters σ and S 0 are varied.
The results are summarized in Table 2. In particular, the computational complexity versus d is plotted in Figure 2. The mean absolute error in TT-based option prices among runs for 100 random parameter sets is represented by e_TT. For the same parameter sets, we compute the mean of the absolute values of the error in the MC-based option price with 10^6 paths and denote it by e_MC. The computational complexity and time of TT-based option pricing are represented by c_TT and t_TT, respectively, and those of MC-based pricing with 10^6 paths are denoted by c_MC and t_MC. To maintain the desired accuracy of option pricing, we set the tolerance of TCI sufficiently low, concretely ϵ_TCI = 10^{−9}. Subsequently, we reduce the bond dimension by SVD, with the tolerance of SVD set to ϵ_SVD^{ϕ,v̂} = 1.0 × 10^{−6} for ϕ and v̂. The maximum bond dimensions of the TTs for ϕ and v̂ are denoted as χ_ϕ and χ_v̂, respectively, in Table 2. In the SVD on Ṽ_{k_1,…,k_d} obtained by contracting the TTs for ϕ and v̂, we set the tolerance ϵ_SVD^Ṽ = 1.0 × 10^{−6}, and the maximum bond dimension of the resulting TT is denoted by χ_Ṽ.

5.2.1. The Case of Varying Volatilities

Table 2a shows the computational results of TT-based option pricing when we consider σ dependence. TT-based option pricing demonstrates advantages in terms of computational complexity and time over the MC-based method. The bond dimensions of the TT results for d = 5 and d = 10 are depicted in Figure 3a. By applying SVD to the TTs trained via TCI, the bond dimensions between tensors related to p_m and z_{m+1} could generally be maintained at around 10. The details of this compression by SVD are described in Appendix A. The maximum bond dimension χ_Ṽ of Ṽ_{k_1,…,k_d} is maintained at 2, or even 1 when d = 10, 11. Therefore, the computational complexity is much lower than that of MC-based option pricing. In addition, the error in the TT-based result is smaller than that in the MC-based result in all the tested cases.

5.2.2. The Case of Varying Initial Stock Prices

Table 2b shows the results of TT-based option pricing when we consider S_0 dependence. TT-based option pricing again demonstrates superiority over the Monte Carlo method in terms of computational complexity and time. For d = 5 and 10, after compression using SVD and contractions, the bond dimensions between tensors related to p_m and z_{m+1} were reduced to around 10 (refer to Figure 3b), and the maximum bond dimension χ_Ṽ of Ṽ_{k_1,…,k_d} is 2. Compared with the MC-based method with 10^6 paths, the TT-based method again gives a smaller error for every value of d.

5.2.3. Randomness in Learning the TTs

Here, we comment on the randomness in learning the TTs and the error it induces. Note that TCI is a heuristic method: depending on the choice of the initial points, the learning might not work well, i.e., the error defined by Equation (4) might not fall below the threshold. To assess this fluctuation of the accuracy, for d = 10 in the case of varying σ, we evaluated the mean and standard deviation of the error of the TT-based method over 20 runs, with initial points randomly selected in each. The mean was 0.00230 and the standard deviation was 4.74 × 10⁻⁹, which indicates that the accuracy fluctuation of TT-based option pricing is very small.

5.2.4. Total Computational Time for Obtaining the TTs

We now mention the total computational time for obtaining the TTs for φ, v̂, and Ṽ_{k_1,…,k_d} through TCI and SVD. We focus on TCI because its cost dominates that of SVD. It took about 12.5 min for d = 11 in the case that the S₀ dependence was involved. This is sufficiently short for the practical use case of the TT-based method exemplified in the Discussion section. We also note that, when we do not incorporate the parameter dependence into the tensor train for φ, as in Ref. [13], TCI takes a much longer time than the MC-based method: 7.3 s for d = 11 in the case of σ dependence. In this setup, we set η = 0.48, ε_TCI = 10⁻⁶, and N = 50, and all other parameters were the same as the default settings listed in Table 1. This means that rerunning TCI every time the parameters vary forfeits the time advantage of the TT-based method over the MC-based one.

6. Discussion

We have proposed a method that employs a single run of TCI to learn TTs incorporating the parameter dependence of the function in Equation (12), enabling fast option pricing for varying parameters. In this study, we considered scenarios with varying volatilities and present stock prices as benchmarks for the proposed method. Up to d = 11, we demonstrated superiority in both computational complexity and time. Note that the implementation of the MC-based method may have room for improvement in terms of computational complexity and time. We have also seen that, in all the tested cases, the error of the TT-based result is smaller than that of the MC-based result with 10⁶ paths.
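For reference, an MC baseline of the kind compared against here can be sketched as follows. This is a generic geometric-Brownian-motion sampler with a European min-call payoff chosen for illustration; the exact payoff and implementation used in the benchmarks may differ.

```python
import numpy as np

def mc_price(S0, sigma, corr, r, T, K, n_paths, seed=0):
    """Monte Carlo price of a European min-call under the multi-asset
    Black-Scholes model (illustrative baseline, not the exact benchmark code).
    """
    rng = np.random.default_rng(seed)
    d = len(S0)
    L = np.linalg.cholesky(corr)                    # correlate the Brownians
    z = rng.standard_normal((n_paths, d)) @ L.T
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST.min(axis=1) - K, 0.0)    # min-call payoff
    return float(np.exp(-r * T) * payoff.mean())
```

The cost grows linearly in both d and the number of paths, which is why c_MC ≈ d × 10⁶ in Table 2, whereas the statistical error shrinks only as the inverse square root of the number of paths.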
Now, let us consider how the proposed method can benefit practical business in financial firms. One expected usage is as follows: at night, when the financial market is closed, we learn the TTs and perform the contractions; during the day, when the market is open, we use the TT of parameter-dependent option prices to quickly price the option for the fluctuating parameters. If computational speed in the daytime is the priority and some overnight precomputation is acceptable, this operation is beneficial. In light of this, it is reasonable to compare the computational complexity of the TT-based method after the TTs are obtained with that of the MC-based method, neglecting the learning process and contraction.
Finally, we discuss future research directions. To make our method more practical, it is desirable to compress many kinds of input parameters, including both σ and S₀, into a single TT. For example, taking into account the dependence on the parameters of the option contract, such as the maturity T and the strike K, would enable us to price different option contracts with a single set of TTs. Although this is a promising approach, there may be issues; for example, it is non-trivial whether TTs incorporating many parameter dependencies have a low-rank structure. We leave such a study for future work. Furthermore, to enhance the practical benefit, we should extend our methodology to more advanced settings in option pricing. With respect to the pricing model, while this paper has considered the BS model, more advanced models such as the local volatility model [44], stochastic volatility models [45,46], and Lévy models [47,48] are also used in practice. Applying our method to such models, with the dependencies on their parameters incorporated into TTs, is an interesting direction. Expanding the scope to broader types of products is also desired. While this paper focused on European-type options, more complicated ones such as American and Bermudan options are also traded widely, and pricing them is more time-consuming. Whether our method can be extended to such options is an important and interesting challenge.

Author Contributions

Conceptualization: R.S.; methodology: R.S.; software: R.S.; validation: R.S.; investigation: R.S.; data curation: R.S. and H.T.; writing – original draft preparation: R.S.; writing – review and editing: R.S. and K.M.; visualization: R.S. and H.T.; supervision: K.M.; project administration: R.S.; funding acquisition: R.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Japan Society for the Promotion of Science (JSPS), Grant No. 23KJ0295. K. M. is supported by the MEXT Quantum Leap Flagship Program (MEXT Q-LEAP), Grant No. JPMXS0120319794, JSPS KAKENHI Grant No. JP22K11924, and JST COI-NEXT Program Grant No. JPMJPF2014. This work was supported by the Center of Innovation for Sustainable Quantum AI (JST Grant Number JPMJPF2221).

Data Availability Statement

The data supporting the findings of this study are available within the article. Additional raw data are available from the corresponding author on reasonable request.

Acknowledgments

We are grateful to Marc K. Ritter, Hiroshi Shinaoka, and Jan von Delft for providing access to TensorCrossInterpolation.jl for TCI [42]. We deeply appreciate Marc K. Ritter and Hiroshi Shinaoka for their critical remarks on the further improvement of our proposed method. R. S. thanks the Quantum Software Research Hub at Osaka University for the opportunity to participate in the study group on applications of tensor trains to option pricing. R.S. is grateful to Yusuke Himeoka, Yuta Mizuno, Wataru Mizukami, and Hiroshi Shinaoka since R.S. was inspired to conduct this study by their collaboration.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Figure A1 shows the mean absolute error e_TT when applying SVD with varying ε_SVD^(φ,v̂) to the TTs incorporating the σ dependence at d = 10, obtained through TCI. Here, we fixed the SVD tolerance ε_SVD^(Ṽ) to 10⁻⁶. The tolerance ε_SVD^(φ,v̂) should be chosen to keep e_TT smaller than e_MC. From Figure A1, it can be seen that e_TT increases sharply between ε_SVD^(φ,v̂) = 10⁻⁶ and 10⁻², suggesting that a setting around 1.0 × 10⁻⁶ is appropriate for keeping e_TT smaller than e_MC. By compressing the bond dimension with SVD, the computational time for the contraction of these optimized TTs can also be kept low.
Figure A1. The mean absolute error e_TT when applying SVD with various ε_SVD^(φ,v̂) values to the TTs incorporating the σ dependence at d = 10.
The fact that the error e_TT can be kept small through SVD suggests that the TTs obtained by TCI contain redundant information, which SVD, as an optimal low-rank approximation in the Frobenius norm, effectively removes. In addition, it is surprising that in the analysis with ε_SVD^(φ,v̂) = 10⁻⁶, e_TT decreased compared to before the compression by SVD. We consider that the error contained in the TTs obtained by TCI happened to be eliminated through SVD. We expect that this phenomenon does not occur generally; in fact, it did not occur for other asset numbers or parameters.
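To make the role of the tolerance concrete, the rank kept by an SVD truncation under a Frobenius-norm criterion can be computed as sketched below. This is a minimal illustration of one common criterion; the exact rule used in the implementation may differ, and the helper name is ours.

```python
import numpy as np

def rank_from_tol(s, eps):
    """Smallest kept rank such that the discarded singular-value tail
    satisfies ||s[keep:]|| <= eps * ||s|| (relative Frobenius criterion).

    s: singular values in descending order.
    """
    tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # tail[k] = ||s[k:]||
    ok = tail <= eps * tail[0]                     # tail[0] = ||s||
    keep = int(np.argmax(ok)) if ok.any() else len(s)
    return max(keep, 1)                            # always keep at least one
```

Loosening eps from 10⁻⁶ toward 10⁻² discards progressively larger tails, which is consistent with the sharp growth of e_TT seen in Figure A1 over that range.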

References

  1. Hull, J.C. Options Futures and Other Derivatives; Pearson: London, UK, 2003. [Google Scholar]
  2. Shreve, S.E. Stochastic Calculus for Finance I & II; Springer: New York, NY, USA, 2004. [Google Scholar]
  3. Stulz, R. Options on the minimum or the maximum of two risky assets: Analysis and applications. J. Financ. Econ. 1982, 10, 161–185. [Google Scholar] [CrossRef]
  4. Johnson, H. Options on the maximum or the minimum of several assets. J. Financ. Quant. Anal. 1987, 22, 277–283. [Google Scholar] [CrossRef]
  5. Duffy, D.J. Finite Difference Methods in Financial Engineering: A Partial Differential Equation Approach; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  6. Clewlow, L.; Strickland, C. Implementing Derivative Models; John Wiley & Sons: Hoboken, NJ, USA, 1998. [Google Scholar]
  7. Bayer, C.; Siebenmorgen, M.; Tempone, R. Smoothing the payoff for efficient computation of Basket option prices. Quant. Financ. 2018, 18, 491–505. [Google Scholar] [CrossRef]
  8. Liu, S.; Owen, A.B. Preintegration via Active Subspace. SIAM J. Numer. Anal. 2023, 61, 495–514. [Google Scholar] [CrossRef]
  9. Bayer, C.; Hammouda, C.B.; Papapantoleon, A.; Samet, M.; Tempone, R. Quasi-Monte Carlo for Efficient Fourier Pricing of Multi-Asset Options. arXiv 2024, arXiv:2403.02832. [Google Scholar]
  10. Herman, D.; Googin, C.; Liu, X.; Sun, Y.; Galda, A.; Safro, I.; Pistoia, M.; Alexeev, Y. Quantum computing for finance. Nat. Rev. Phys. 2023, 5, 450–465. [Google Scholar] [CrossRef]
  11. Montanaro, A. Quantum speedup of Monte Carlo methods. Proc. R. Soc. A 2015, 471, 20150301. [Google Scholar] [CrossRef]
  12. Glau, K.; Kressner, D.; Statti, F. Low-Rank Tensor Approximation for Chebyshev Interpolation in Parametric Option Pricing. SIAM J. Financ. Math. 2020, 11, 897–927. [Google Scholar] [CrossRef]
  13. Kastoryano, M.; Pancotti, N. A highly efficient tensor network algorithm for multi-asset Fourier options pricing. arXiv 2022, arXiv:2203.02804. [Google Scholar]
  14. Patel, R.; Hsing, C.W.; Sahin, S.; Jahromi, S.S.; Palmer, S.; Sharma, S.; Michel, C.; Porte, V.; Abid, M.; Aubert, S.; et al. Quantum-Inspired Tensor Neural Networks for Partial Differential Equations. arXiv 2022, arXiv:2208.02235. [Google Scholar]
  15. Bayer, C.; Eigel, M.; Sallandt, L.; Trunschke, P. Pricing High-Dimensional Bermudan Options with Hierarchical Tensor Formats. SIAM J. Financ. Math. 2023, 14, 383–406. [Google Scholar] [CrossRef]
  16. Oseledets, I.V. Tensor-Train Decomposition. SIAM J. Sci. Comput. 2011, 33, 2295–2317. [Google Scholar] [CrossRef]
  17. Carr, P.; Madan, D. Option valuation using the fast Fourier transform. J. Comput. Financ. 1999, 2, 61–73. [Google Scholar] [CrossRef]
  18. Lewis, A.L. A Simple Option Formula for General Jump-Diffusion and Other Exponential Lévy Processes. 2001. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=282110 (accessed on 22 May 2025).
  19. Orús, R. A practical introduction to tensor networks: Matrix product states and projected entangled pair states. Ann. Phys. 2014, 349, 117–158. [Google Scholar] [CrossRef]
  20. Okunishi, K.; Nishino, T.; Ueda, H. Developments in the Tensor Network—From Statistical Mechanics to Quantum Entanglement. J. Phys. Soc. Jpn. 2022, 91, 062001. [Google Scholar] [CrossRef]
  21. Stoudenmire, E.; Schwab, D.J. Supervised Learning with Tensor Networks. In Proceedings of the Advances in Neural Information Processing Systems, Barcelona, Spain, 5–10 December 2016; Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., Garnett, R., Eds.; Curran Associates, Inc.: Sydney, Australia, 2016; Volume 29. [Google Scholar]
  22. Novikov, A.; Trofimov, M.; Oseledets, I. Exponential machines. arXiv 2016, arXiv:1605.03795. [Google Scholar] [CrossRef]
  23. Sozykin, K.; Chertkov, A.; Schutski, R.; Phan, A.H.; Cichocki, A.; Oseledets, I. TTOpt: A Maximum Volume Quantized Tensor Train-based Optimization and its Application to Reinforcement Learning. arXiv 2022, arXiv:2205.00293. [Google Scholar]
  24. Shinaoka, H.; Wallerberger, M.; Murakami, Y.; Nogaki, K.; Sakurai, R.; Werner, P.; Kauch, A. Multiscale Space-Time Ansatz for Correlation Functions of Quantum Systems Based on Quantics Tensor Trains. Phys. Rev. X 2023, 13, 021015. [Google Scholar] [CrossRef]
  25. Núñez Fernández, Y.; Jeannin, M.; Dumitrescu, P.T.; Kloss, T.; Kaye, J.; Parcollet, O.; Waintal, X. Learning Feynman Diagrams with Tensor Trains. Phys. Rev. X 2022, 12, 041018. [Google Scholar] [CrossRef]
  26. Takahashi, H.; Sakurai, R.; Shinaoka, H. Compactness of quantics tensor train representations of local imaginary-time propagators. arXiv 2024, arXiv:2403.09161. [Google Scholar] [CrossRef]
  27. Ye, E.; Loureiro, N.F.G. Quantum-inspired method for solving the Vlasov-Poisson equations. Phys. Rev. E 2022, 106, 035208. [Google Scholar] [CrossRef] [PubMed]
  28. Kornev, E.; Dolgov, S.; Pinto, K.; Pflitsch, M.; Perelshtein, M.; Melnikov, A. Numerical solution of the incompressible Navier–Stokes equations for chemical mixers via quantum-inspired Tensor Train Finite Element Method. arXiv 2023, arXiv:2305.10784. [Google Scholar]
  29. Gourianov, N.; Lubasch, M.; Dolgov, S.; van den Berg, Q.Y.; Babaee, H.; Givi, P.; Kiffner, M.; Jaksch, D. A quantum-inspired approach to exploit turbulence structures. Nat. Comput. Sci. 2022, 2, 30–37. [Google Scholar] [CrossRef] [PubMed]
  30. Oseledets, I.; Tyrtyshnikov, E. TT-cross approximation for multidimensional arrays. Linear Algebra Appl. 2010, 432, 70–88. [Google Scholar] [CrossRef]
  31. Dolgov, S.; Savostyanov, D. Parallel cross interpolation for high-precision calculation of high-dimensional integrals. Comput. Phys. Commun. 2020, 246, 106869. [Google Scholar] [CrossRef]
  32. Ritter, M.K.; Núñez Fernández, Y.; Wallerberger, M.; von Delft, J.; Shinaoka, H.; Waintal, X. Quantics Tensor Cross Interpolation for High-Resolution Parsimonious Representations of Multivariate Functions. Phys. Rev. Lett. 2024, 132, 056501. [Google Scholar] [CrossRef]
  33. Bayer, C.; Ben Hammouda, C.; Papapantoleon, A.; Samet, M.; Tempone, R. Optimal damping with a hierarchical adaptive quadrature for efficient Fourier pricing of multi-asset options in Lévy models. J. Comput. Financ. 2023, 27, 43–86. [Google Scholar] [CrossRef]
  34. Shashua, A.; Levin, A. Linear image coding for regression and classification using the tensor-rank principle. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, Kauai, HI, USA, 8–14 December 2001; Volume 1, p. I-I. [Google Scholar] [CrossRef]
  35. Vasilescu, M.A.O.; Terzopoulos, D. Multilinear Analysis of Image Ensembles: TensorFaces. In Computer Vision—ECCV 2002: Proceedings of the 7th European Conference on Computer Vision Copenhagen, Denmark, 28–31 May 2002; Heyden, A., Sparr, G., Nielsen, M., Johansen, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2002; pp. 447–460. [Google Scholar]
  36. Ion, I.G.; Wildner, C.; Loukrezis, D.; Koeppl, H.; De Gersem, H. Tensor-train approximation of the chemical master equation and its application for parameter inference. J. Chem. Phys. 2021, 155, 034102. [Google Scholar] [CrossRef]
  37. Ballani, J.; Grasedyck, L. Hierarchical Tensor Approximation of Output Quantities of Parameter-Dependent PDEs. SIAM/ASA J. Uncertain. Quantif. 2015, 3, 852–872. [Google Scholar] [CrossRef]
  38. Schollwöck, U. The density-matrix renormalization group in the age of matrix product states. Ann. Phys. 2011, 326, 96–192. [Google Scholar] [CrossRef]
  39. Eberlein, E.; Glau, K.; Papapantoleon, A. Analysis of Fourier Transform Valuation Formulas and Applications. Appl. Math. Financ. 2010, 17, 211–240. [Google Scholar] [CrossRef]
  40. Glasserman, P. Monte Carlo Methods in Financial Engineering; Springer: New York, NY, USA, 2004; Volume 53. [Google Scholar]
  41. Nikkei Stock Average Volatility Index. Available online: https://indexes.nikkei.co.jp/en/nkave/index/profile?idx=nk225vi (accessed on 29 December 2024).
  42. Fernández, Y.N.; Ritter, M.K.; Jeannin, M.; Li, J.W.; Kloss, T.; Louvet, T.; Terasaki, S.; Parcollet, O.; von Delft, J.; Shinaoka, H.; et al. Learning tensor networks with tensor cross interpolation: New algorithms and libraries. arXiv 2024, arXiv:2407.02454. [Google Scholar] [CrossRef]
  43. Google. tf-Quant-Finance. 2023. Available online: https://github.com/google/tf-quant-finance (accessed on 29 December 2024).
  44. Dupire, B. Pricing with a smile. Risk 1994, 7, 18–20. [Google Scholar]
  45. Heston, S.L. A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev. Financ. Stud. 1993, 6, 327–343. [Google Scholar] [CrossRef]
  46. Hagan, P.S.; Kumar, D.; Lesniewski, A.S.; Woodward, D.E. Managing Smile Risk. Wilmott Mag. 2002, 1, 84–108. [Google Scholar]
  47. Madan, D.B.; Seneta, E. The Variance Gamma (V.G.) Model for Share Market Returns. J. Bus. 1990, 63, 511–524. [Google Scholar] [CrossRef]
  48. Barndorff-Nielsen, O. Processes of normal inverse Gaussian type. Financ. Stoch. 1997, 2, 41–68. [Google Scholar] [CrossRef]
Figure 1. Fast option pricing based on TNs proposed in this study. In (a)(1), we learn TTs for the parameter-dependent functions φ and v̂ using TCI and reduce the bond dimension of these TTs using SVD. In (a)(2), we then contract the tensors associated with p_m and z_m in the TT for φ, resulting in a TTO. In (a)(3), we take the contraction of the TTO and the TT with respect to the index vector j. In (a)(4), we further compress the bond dimension of the resulting TT using SVD. In (b), we use this TT to perform fast option pricing for a specified parameter p.
Figure 2. The computational complexities of TT-based and MC-based option pricing, denoted as c_TT and c_MC, versus the number of assets d. We consider including the parameter dependencies on σ and S₀ in the TTs, and the number of paths in MC is 10⁶.
Figure 3. The bond dimensions χ_l of each bond l for φ and v̂, obtained through TCI and SVD, incorporating dependencies on (a) σ and (b) S₀. The odd bond l = 2i − 1 connects the core tensors for j_m and k_m, the indices for z_m and p_m, respectively, and the even bond l = 2i connects those for k_m and j_{m+1}. Note that v̂ does not depend on parameters, and thus the bond l takes only even values 2i (connecting the core tensors for z_m and z_{m+1}). It is noteworthy that the graph of bond dimensions exhibits a characteristic jagged shape: the bond dimensions between sites for j_m and k_m, which are related to the same asset, are large, and those between sites for k_m and j_{m+1}, which are related to different assets, are small.
Table 1. The input parameters used in the experiment, except for σ and S₀. When either σ or S₀ is selected as the parameter dependency p, the other is fixed at the value shown in the table. For the case where S₀ is varied and d = 11, we use (N, η) = (200, 0.3); in all other cases, we use (N, η) = (100, 0.4).
T   r     K    S₀   σ    α    ρ_mn (m ≠ n)   (N, η)
1   0.01  100  100  0.2  5/d  1/3            (200, 0.3) if d = 11 and p = S₀; (100, 0.4) otherwise
Table 2. The results of TT-based option pricing incorporating (a) σ and (b) S₀ dependence. Here, we set the ranges σ_m ∈ [0.15, 0.25) and S_{m,0} ∈ [90, 120) and place 100 equally spaced grid points within these ranges. t_TT and t_MC represent the average computational times over 100 measurements.
(a) σ

d    e_TT       e_MC      c_TT   c_MC        t_TT [s]       t_MC [s]       χ_φ   χ_v̂   χ_Ṽ
5    0.00178    0.00606   16     5.0 × 10⁶   4.91 × 10⁻⁷    2.84 × 10⁻¹    16    11    2
6    0.00154    0.00503   20     6.0 × 10⁶   6.37 × 10⁻⁷    3.38 × 10⁻¹    16    11    2
7    0.00134    0.00428   24     7.0 × 10⁶   9.12 × 10⁻⁷    3.71 × 10⁻¹    17    11    2
8    0.00136    0.00372   28     8.0 × 10⁶   1.75 × 10⁻⁶    4.09 × 10⁻¹    18    11    2
9    0.000867   0.00329   32     9.0 × 10⁶   2.47 × 10⁻⁶    4.51 × 10⁻¹    20    11    2
10   0.00229    0.00294   10     1.0 × 10⁷   1.18 × 10⁻⁶    5.11 × 10⁻¹    20    11    1
11   0.000554   0.00265   11     1.1 × 10⁷   1.16 × 10⁻⁶    5.04 × 10⁻¹    20    11    1
(b) S₀

d    e_TT       e_MC      c_TT   c_MC        t_TT [s]       t_MC [s]       χ_φ   χ_v̂   χ_Ṽ
5    0.00151    0.00773   16     5.0 × 10⁶   7.67 × 10⁻⁷    3.99 × 10⁻¹    18    11    2
6    0.00122    0.00640   20     6.0 × 10⁶   6.34 × 10⁻⁷    5.15 × 10⁻¹    19    11    2
7    0.00112    0.00547   24     7.0 × 10⁶   9.21 × 10⁻⁷    5.37 × 10⁻¹    21    10    2
8    0.000973   0.00477   28     8.0 × 10⁶   1.63 × 10⁻⁶    6.18 × 10⁻¹    23    11    2
9    0.000686   0.00424   32     9.0 × 10⁶   1.19 × 10⁻⁶    7.95 × 10⁻¹    24    11    2
10   0.000662   0.00377   36     1.0 × 10⁷   1.47 × 10⁻⁶    8.77 × 10⁻¹    24    11    2
11   0.00114    0.00339   40     1.1 × 10⁷   1.93 × 10⁻⁶    8.43 × 10⁻¹    25    13    2
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
