Abstract
This paper studies a p-layers deep learning artificial neural network (DLANN) for European multi-asset options. Firstly, a p-layers DLANN is constructed with undetermined weights and bias. Secondly, discrete data built from the terminal values of the partial differential equation (PDE) and from points satisfying the PDE of multi-asset options are fed into the p-layers DLANN. Thirdly, using the least square error as the objective function, the weights and bias of the DLANN are trained. To optimize the objective function, the partial derivatives with respect to the weights and bias of the DLANN are carefully derived. Moreover, to improve computational efficiency, a time-segment DLANN is proposed. Numerical examples are presented to confirm the accuracy, efficiency, and stability of the proposed p-layers DLANN. Computational examples show that the DLANN’s relative errors remain small for different numbers of assets. In the future, the p-layers DLANN can be extended to American options, Asian options, Lookback options, and so on.
Keywords:
multi-asset option; European option; high-dimensional PDE; artificial neural network; p-layers
MSC:
65M99; 68W25; 65D12
1. Introduction
In the last 30 years, different methods and techniques for multi-asset option pricing have been developed. For pricing options on one to three assets, the radial basis function (RBF) method uses so-called radial basis functions to approximate European and American options [1,2,3,4,5,6,7,8,9,10]. For the low-dimensional case, semi-analytical and analytical solutions of European and American options can be obtained [11,12]. The finite difference (FD) method for solving an option’s partial differential equation (PDE) was derived under the Black–Scholes model and Heston model [13,14,15,16,17]. Monte Carlo simulations have been applied to calorimeter systems, multi-option problems, and the SABR model [18,19,20]. Laplace transform or Mellin transform methods for some classical options have been presented [15,21,22,23,24]. The willow tree (WT) method under Lévy processes was established [25,26].
In general, the RBF, FD, and WT methods can easily handle only lower-dimensional cases. Analytical or semi-analytical methods can only be applied to some simple or classical options. Although Monte Carlo simulation is suitable for any high-dimensional problem, it is computationally inefficient, consuming significant CPU time and storage space, and in practice it is often restricted to relatively simple models.
In the past decade, several scholars have applied artificial neural networks (ANNs) to compute option values. Anderson and Ulrych used deep neural networks to price American options [27]. Carverhill and Cheuk proposed an ANN method for option pricing and hedging [28]. Based on a residual ANN, Gan and Liu discussed option pricing [29]. Kapllani and Teng provided a deep learning artificial neural network to solve nonlinear backward stochastic differential equations (BSDEs) [30]. For estimating option prices, Lee and Son used artificial neural networks to predict arbitrage-free American options [31]. Mary and Salchenberger proposed an ANN model [32] to estimate option prices. Nikolas and Lorenz discussed a deep learning method to interpolate between BSDEs and PINNs [33]. Balkin et al. considered a machine learning algorithm to solve stochastic delay differential equations [34]. Teng et al. discussed combining artificial neural networks with classical volatility prediction models for option pricing [35,36]. Umeorah, Mashele, and Agbaeze provided a barrier option pricing method with an ANN [37]. Wang proposed an artificial neural network prediction model for stock index options [38]. None of these approaches, however, works directly from the option’s PDE.
Unlike general data fitting problems, there are many difficulties in applying artificial neural networks to option pricing. We know that these options satisfy the Black–Scholes PDE. In order to apply an artificial neural network to option pricing, a proper discretization of the PDE must be carried out. Using an artificial neural network to approximate the PDE of an option is the main difficulty and innovation of this paper.
In this paper, a p-layers deep learning artificial neural network (DLANN) is considered for multi-dimensional option pricing. After a p-layers DLANN is established, the PDE of a multi-asset option is expressed in the ANN. By minimizing the objective function of the DLANN, all parameters (including the weights and bias) are trained. The p-layers DLANN is relatively easy to implement, shows the capability for multi-asset option pricing, and is much faster than other schemes (such as a Monte Carlo simulation). It is noted that this paper is an improved version of Zhou et al. [39].
This paper is arranged as follows. In Section 2, we introduce the PDE of a multi-asset model. In Section 3, a p-layers deep learning ANN (p-DLANN) computational frame is proposed. The derivatives with respect to the p-DLANN’s weights and bias are listed. Then, the update formulas of the weights and bias in the DLANN are given. Section 4 lists some examples to confirm the efficiency and accuracy of the proposed algorithms. In Section 5, some conclusions and future work are proposed. Finally, Appendix A provides some appendices, including derivative formulas of the weights and bias and some computational results.
2. Model of Multi-Asset Options
Let $S(t) = (S_1(t), \ldots, S_d(t))^{\top}$, with $\top$ denoting transposition, be the $d$ asset prices at time $t$, and let $S_i(t)$, $i = 1, \ldots, d$, be modeled by Brownian motion together with a drift term under a no-arbitrage assumption, i.e., the $S_i(t)$ are governed by the stochastic differential equations (SDEs)
$$ \mathrm{d}S_i(t) = (r - q_i)\,S_i(t)\,\mathrm{d}t + \sigma_i S_i(t)\,\mathrm{d}W_i(t), \qquad i = 1, \ldots, d. $$
In SDEs (1), $r$ represents the risk-free interest rate, $q_i$ represents the dividend yield, and $\sigma_i$ represents the volatility of asset $i$. $W_i(t)$, $i = 1, \ldots, d$, are $d$ standard Brownian motions, and it is assumed that
$$ \mathrm{d}W_i(t)\,\mathrm{d}W_j(t) = \rho_{ij}\,\mathrm{d}t, \qquad i, j = 1, \ldots, d, $$
where $\rho_{ij}$ are the correlation coefficients between $W_i(t)$ and $W_j(t)$. Then, the PDE of the $d$-asset option value emerges as
$$ \frac{\partial V}{\partial t} + \frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d} \rho_{ij}\sigma_i\sigma_j S_i S_j \frac{\partial^2 V}{\partial S_i \partial S_j} + \sum_{i=1}^{d} (r - q_i)\,S_i \frac{\partial V}{\partial S_i} - rV = 0, $$
where $V = V(S_1, \ldots, S_d, t)$.
Let the remaining time to maturity be $\tau = T - t$, and let $x_i = \ln S_i$ for $i = 1, \ldots, d$; then, the $d$-asset option value is rewritten as
$$ \frac{\partial v}{\partial \tau} = \mathcal{L} v, $$
where $v(x_1, \ldots, x_d, \tau) = V(S_1, \ldots, S_d, t)$, and the linear differential operator $\mathcal{L}$ is defined as
$$ \mathcal{L} v = \frac{1}{2}\sum_{i=1}^{d}\sum_{j=1}^{d} a_{ij}\,\frac{\partial^2 v}{\partial x_i \partial x_j} + \sum_{i=1}^{d} b_i\,\frac{\partial v}{\partial x_i} - r v, $$
with coefficients $a_{ij} = \rho_{ij}\sigma_i\sigma_j$ and $b_i = r - q_i - \sigma_i^2/2$.
For a European option, $v$ has the initial values at $\tau = 0$ (also called terminal conditions or payoff functions, at time $t = T$)
System (4)–(6) can be written as
For put options with geometric mean payoff functions, the initial values of v are taken as
with strike price K. In the general case, System (7) has no analytical solutions, and a certain numerical method is needed to determine the option value in domain , for example, the Monte Carlo simulation, RBF numerical solution, finite difference method, and artificial neural network method.
With the geometric mean payoff function defined by (8), the put option governed by System (7) has an analytical solution (see Expression (42)). However, for a put option with an arithmetic mean payoff function
the option has no analytical solution. So, the numerical scheme is a feasible method to price a multi-asset option governed by (7) and (9). In this paper, we propose a p-layers DLANN to price the multi-asset option values.
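Since the displayed payoff formulas were lost in extraction, the two payoffs can be sketched as follows in Python (the paper's experiments use Matlab); equal asset weights are assumed, and the function names are ours:

```python
import numpy as np

def geometric_mean_put_payoff(S, K):
    """Geometric-mean put payoff: max(K - (prod_i S_i)^(1/d), 0).

    S : array of shape (d,) or (n, d) holding asset prices.
    """
    S = np.atleast_2d(S)
    G = np.prod(S, axis=1) ** (1.0 / S.shape[1])
    return np.maximum(K - G, 0.0)

def arithmetic_mean_put_payoff(S, K):
    """Arithmetic-mean put payoff: max(K - (1/d) sum_i S_i, 0)."""
    S = np.atleast_2d(S)
    A = np.mean(S, axis=1)
    return np.maximum(K - A, 0.0)
```

Because the geometric mean never exceeds the arithmetic mean, the geometric-mean put payoff is always at least as large as the arithmetic-mean one at the same prices.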
3. Computational Frame of DLANN
3.1. Structure of DLANN
A deep learning artificial neural network (DLANN) is defined as shown in Figure 1. In Figure 1, the p-layers ANN includes an input layer, hidden layers, and an output layer. For example, a five-layer network is specified by the numbers of neural units in each of its five layers. We will see in Section 4 that this structure is powerful for European option pricing.
Figure 1.
Graphical depiction of the artificial neural network with an input layer, three hidden layers, and an output layer.
The evolution of the p-layers DLANN is represented as matrix expressions,
where is the input data in the layer, is the output in the layer ℓ, and is the transform function (or activation function). The algebraic expression is given by
where is an input data column vector , and are the activation functions, i.e.,
The derivatives and are
which can be found in Appendix A.2. Other activation functions could be used, but this paper only uses functions of the above form.
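As an illustration, a common activation choice whose first and second derivatives are expressible in terms of g itself is the logistic sigmoid; the paper's exact g is not shown in this extraction, so the form below is an assumption:

```python
import numpy as np

def g(u):
    """Logistic sigmoid activation (an assumed example)."""
    return 1.0 / (1.0 + np.exp(-u))

def g1(u):
    """First derivative: g'(u) = g(u) * (1 - g(u))."""
    s = g(u)
    return s * (1.0 - s)

def g2(u):
    """Second derivative: g''(u) = g(u) * (1 - g(u)) * (1 - 2 g(u))."""
    s = g(u)
    return s * (1.0 - s) * (1.0 - 2.0 * s)
```

Expressing g' and g'' through g itself, as in Appendix A.2, avoids re-evaluating the exponential during backpropagation.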
The weights and bias in the ANN are defined as
for . We call (10) or (11) a p-layers deep learning artificial neural network (denoted as p-DLANN). Using the p-DLANN network has the following challenges:
- (1)
- It is relatively hard to compute the partial derivatives, and , for , , and . We must update the parameters and with an appropriate learning rate .
- (2)
- For a deep learning neural network p-DLANN (), it is somewhat complicated to compute the partial derivatives of and for and .
- (3)
- In option pricing, to use the deep learning networks p-DLANN to solve the PDE, we must first obtain the discrete PDE.
The output of the network is denoted by
with input data . We denote by the network output with input data under parameters . In addition, we denote by with N space and time input data
The purpose of our p-DLANN structure is to determine the optimal weights and bias . For this purpose, we use an iteration algorithm, i.e., we modify the values of the parameters and until some objective function is less than the pre-specified error .
For convenience, we list some of the symbols used in this paper in Table 1.
Table 1.
Some symbols for DLANN.
In the following paragraphs, we give the definition of the objective function for the DLANN, the mean square error (MSE),
with being the error at points , with and , and being the error of the PDE (7), with and being one of values
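The total objective thus combines the initial-condition error and the PDE residual error. A minimal sketch, with `net` and `pde_residual` as assumed placeholders for the trained network and the discretized PDE residual:

```python
import numpy as np

def mse_objective(net, ic_points, ic_values, pde_points, pde_residual):
    """Total MSE = initial-condition error + PDE residual error.

    net          : callable mapping an input point (x, tau) to a scalar output
    ic_points    : points at tau = 0 where the payoff is known
    ic_values    : payoff values at those points
    pde_points   : interior points where the discretized PDE must hold
    pde_residual : callable giving the PDE residual of `net` at a point
    """
    e_ic = np.mean([(net(p) - v) ** 2 for p, v in zip(ic_points, ic_values)])
    e_pde = np.mean([pde_residual(net, p) ** 2 for p in pde_points])
    return e_ic + e_pde
```

Both terms are mean squares, so a perfectly trained network drives the objective to zero only if it matches the payoff at τ = 0 and satisfies the PDE in the interior.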
3.2. Update of Weights and Bias
Firstly, we consider the at .
For the initial conditions, we have . Then, the of p-DLANN at is defined by
The partial derivatives of with respect to the weights are
for , , and .
The partial derivatives of with respect to are
for and . In detail, we have
The deduction of (24) can be seen in Appendix A.1.2. Observing (21) and (24), we have a relationship between and ,
for , , and . This relationship can simplify our computation and save some CPU time.
Secondly, we discuss the at discrete points for . The discrete errors of the PDE at points are defined as
where
for . Here, differential operators , and .
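The paper derives the network's derivatives analytically; as an independent sanity check, the residual of the transformed PDE can also be approximated with central finite differences. The helper below is our own sketch, with `a` and `b` the second- and first-order coefficient arrays of the operator and `r` the risk-free rate:

```python
import numpy as np

def pde_residual_fd(net, x, tau, a, b, r, h=1e-4):
    """Finite-difference approximation of v_tau - L v at (x, tau)."""
    d = len(x)
    v0 = net(x, tau)
    # time derivative by central difference
    v_tau = (net(x, tau + h) - net(x, tau - h)) / (2 * h)
    e = np.eye(d) * h
    Lv = -r * v0
    for i in range(d):
        vp, vm = net(x + e[i], tau), net(x - e[i], tau)
        Lv += b[i] * (vp - vm) / (2 * h)            # first derivative term
        Lv += 0.5 * a[i][i] * (vp - 2 * v0 + vm) / h**2   # diagonal second derivative
        for j in range(i + 1, d):
            # a is symmetric, so the i<j pair carries (a_ij + a_ji)/2 = a_ij
            vpp = net(x + e[i] + e[j], tau)
            vmm = net(x - e[i] - e[j], tau)
            vpm = net(x + e[i] - e[j], tau)
            vmp = net(x - e[i] + e[j], tau)
            Lv += a[i][j] * (vpp + vmm - vpm - vmp) / (4 * h**2)
    return v_tau - Lv
```

With d = 1, a = [[σ²]], b = [r − σ²/2], and q = 0, the function v = exp(x) (the stock price itself) satisfies the PDE exactly, so its residual should be near zero.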
The partial derivatives are recursively computed as
for
By careful analysis, we have recursive derivation formulas as follows:
for ;; ; and , with and being defined in (13). In Equation (31), the derivatives and are defined as in (29).
The terminal values in (29) and (31) are set as
We define the partial derivatives
for , , and . We define the partial derivatives
for and .
The partial derivatives in (33) are found as
for , , and . The partial derivatives in (34) are found as
for and . The , and are computed according to (29)–(32).
We define the increments of and as follows:
with the definitions of and as in (33) and (34).
Then, we update the network’s weights and bias as follows:
where is the learning rate, and and are defined by (26) and (37).
Finally, we define the total error as in (17) and
with being defined by (28). To minimize the objective function , we have
We obtain the trained network with input data and optimal parameters by iterating algorithm (38). Finally, we can compute
for any input data .
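The update (38) is plain gradient descent applied to every layer's weights and bias. A one-step sketch in Python (the paper's code is in Matlab):

```python
import numpy as np

def gradient_step(weights, biases, grad_w, grad_b, alpha):
    """One gradient-descent update: theta <- theta - alpha * dE/dtheta.

    weights, biases : lists of per-layer arrays
    grad_w, grad_b  : gradients of the objective with matching shapes
    alpha           : learning rate
    """
    new_w = [W - alpha * gW for W, gW in zip(weights, grad_w)]
    new_b = [b - alpha * gb for b, gb in zip(biases, grad_b)]
    return new_w, new_b
```

Iterating this step with the gradients from (26) and (37) until the total error falls below the prescribed tolerance yields the trained parameters.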
Algorithm 1 gives the detailed procedure of DLANN for option pricing.
Algorithm 1: p-layers DLANN for multi-asset option pricing in time region .
3.3. Time-Segment DLANN
Algorithm 2 gives a time-segment DLANN, labeled TSDLANN. For this algorithm, firstly, we use DLANN to solve the options in time region with initial values . Then, we use DLANN to compute the options in a time interval with initial values . Repeating this process, the options are solved by DLANN in each time interval , with initial values . The experiments described in Section 4.5 show the TSDLANN can improve the calculation accuracy and reduce the CPU cost.
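The segment-by-segment idea can be sketched as follows, with `train_on_interval` standing in for one run of DLANN training on a single time interval (an assumed placeholder, not the paper's exact routine):

```python
def time_segment_solve(train_on_interval, payoff, taus):
    """Time-segment training: solve on each [tau_{k-1}, tau_k], feeding the
    previous segment's output forward as the next segment's initial values.

    train_on_interval(initial_values, t0, t1) -> trained net for [t0, t1],
        where `initial_values` is a function of the space point alone
    payoff : initial values at tau = 0
    taus   : increasing breakpoints [0, tau_1, ..., tau_m]
    """
    nets = []
    initial = payoff
    for t0, t1 in zip(taus[:-1], taus[1:]):
        net = train_on_interval(initial, t0, t1)
        nets.append(net)
        # the trained net evaluated at the segment's end time becomes
        # the initial condition for the next segment
        initial = lambda x, net=net, t1=t1: net(x, t1)
    return nets
```

Each segment's network only has to represent the solution over a short time span, which is what makes the per-segment training cheaper.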
Algorithm 2: p-TSDLANN in .
4. Numerical Examples
4.1. Parameters Setting
In the experiments, we used Matlab R2016a on a computer (12th Gen Intel(R), Core(TM), i7-12700H, 2.30 GHz, RAM 16.0G, HUAWEI) to implement some numerical cases. We set the parameters of a multi-asset option as follows. The dimension of PDE was set as . The coefficients were set as and for . The dividend yields were set as , and the risk-free interest was set as . The volatilities were set as for . The strike prices were set as . The asset ratios were set as for . The maturity date was taken as .
The discrete stock prices were taken as with . The time to maturity T was discretized as with and . for , and for . The input data for network training were expressed by with each being one of and each being one of . So, the total number of training data was . The input data were taken as the initial data with the network output being the payoff functions (see (8) and (9)). The remaining data with were taken as the input data corresponding to the discrete PDE.
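The training set described above is a Cartesian product of the discrete log-prices and the discrete times, split into the τ = 0 points carrying the payoff condition and the interior points carrying the discretized PDE. A sketch (function name is ours):

```python
import numpy as np
from itertools import product

def training_grid(x_values, tau_values, d):
    """All combinations of d discrete log-prices with the discrete times.

    Returns (initial, interior): points with tau = 0 carry the payoff
    (initial) condition; the remaining points discretize the PDE.
    """
    initial, interior = [], []
    for xs in product(x_values, repeat=d):
        for tau in tau_values:
            point = (np.array(xs), tau)
            (initial if tau == 0.0 else interior).append(point)
    return initial, interior
```

The grid size grows as (number of prices)^d times the number of time levels, which is why the paper keeps the per-asset grids coarse for larger d.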
For , the network layers were taken as , and , and the training number was taken as in the ANN structure. For , the network layers were taken as , and , and the training number was taken as . When the MSE was increasing, we ended the DLANN training. In Section 4.2 and Section 4.3, we give some computational results with . In Section 4.4, we list some computational results with . In Section 4.5, we list some computational results of TSDLANN.
Throughout the training process, we used the following techniques:
- (1)
- The learning rate was set as initially. Whenever the objective function did not decrease, the learning rate was reduced by a factor of 0.5, and this halving was applied repeatedly throughout the training process.
- (2)
- The initial parameter values and were set randomly between .
- (3)
- To speed up the training process, at each iteration we only used part of the training data to update the weights and the bias . Simply, at the ith iteration, we used the data sequence labeled by as the input data of the p-DLANN.
- (4)
- If the MSE of the simulated solutions was not ideal, we reset the initial values of the weights and bias randomly. This technique may prevent the optimization process from becoming trapped in a local minimum instead of reaching the global minimum.
- (5)
- The simulated result was for any input data (see expression (41)) contained within the envelope of , with trained parameters and .
To compare the results, we list the options computed by Monte Carlo simulation. The Monte Carlo algorithm for multi-asset options is described in Algorithm 3. For the geometric mean payoff function, we modified the simulated option at Step 3.
Algorithm 3: Monte Carlo algorithm for multi-asset option pricing in .
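The body of Algorithm 3 was lost in extraction; the following is a standard one-step Monte Carlo sketch for a European basket put under correlated geometric Brownian motion (our own implementation, which may differ in detail from the paper's procedure):

```python
import numpy as np

def mc_basket_put(S0, K, r, q, sigma, corr, T, payoff="arithmetic",
                  n_paths=100_000, seed=0):
    """Monte Carlo price of a European basket put under correlated GBM.

    Each terminal price is S_i(T) = S_i(0) exp((r - q_i - sigma_i^2/2) T
    + sigma_i sqrt(T) Z_i), with Z drawn from correlated standard normals;
    the discounted mean payoff estimates the option value.
    """
    rng = np.random.default_rng(seed)
    S0, sigma, q = (np.asarray(v, float) for v in (S0, sigma, q))
    L = np.linalg.cholesky(np.asarray(corr, float))   # correlate the normals
    Z = rng.standard_normal((n_paths, len(S0))) @ L.T
    ST = S0 * np.exp((r - q - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)
    if payoff == "geometric":
        mean = ST.prod(axis=1) ** (1.0 / len(S0))
    else:
        mean = ST.mean(axis=1)
    return np.exp(-r * T) * np.maximum(K - mean, 0.0).mean()
```

For d = 1, both payoff choices coincide and the estimate should agree with the Black–Scholes put price up to the Monte Carlo standard error.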
4.2. Numerical Results with Geometric Mean Payoff Function
With the geometric mean payoff function defined by (8), a put option governed by System (7) has an analytical solution (see the literature [40]),
where is the cumulative normal distribution function, and
with
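The displayed formulas of (42) were lost in extraction; a standard reconstruction (cf. [40]), hedged here as the textbook result rather than a verbatim copy of the paper's equations, uses the fact that the geometric average $G = \bigl(\prod_{i=1}^{d} S_i\bigr)^{1/d}$ of lognormal assets is itself lognormal with effective parameters

```latex
\hat\sigma^2 = \frac{1}{d^2}\sum_{i=1}^{d}\sum_{j=1}^{d}\rho_{ij}\sigma_i\sigma_j,
\qquad
\hat\mu = \frac{1}{d}\sum_{i=1}^{d}\Bigl(r - q_i - \tfrac{\sigma_i^2}{2}\Bigr)
          + \frac{\hat\sigma^2}{2},
% so the geometric-mean put has the Black--Scholes-type value
v(\tau) = K e^{-r\tau}\,\mathcal N(-d_2) - G\, e^{(\hat\mu - r)\tau}\,\mathcal N(-d_1),
\qquad
d_1 = \frac{\ln(G/K) + (\hat\mu + \hat\sigma^2/2)\,\tau}{\hat\sigma\sqrt{\tau}},
\qquad d_2 = d_1 - \hat\sigma\sqrt{\tau}.
```

Here $\mathcal N$ is the cumulative normal distribution function; the price is the single-asset Black–Scholes put formula applied to $G$ with volatility $\hat\sigma$ and drift $\hat\mu$.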
We list some numerical examples to illustrate the errors among the DLANN numerical solutions, the Monte Carlo solutions, and the analytical solutions.
Table 2, Table 3, Table 4 and Table 5 list the 4∼DLANN solutions (labeled by ‘DLANN’), Monte Carlo solutions (labeled by ‘MC’), and the analytical solutions (labeled by ‘Anal.’) with , time to expire date , and different strike prices of . In Table 2, Table 3, Table 4 and Table 5, the fifth column and the sixth column are the absolute errors (labeled by ‘ERR’) and the relative errors (labeled by ‘RE’), respectively. Figure 2, Figure 3, Figure 4 and Figure 5 plot the analytical solutions, Monte Carlo solutions, and 4∼DLANN solutions at different asset points with the following parameters: , , and .
Table 2.
Computational results of 4∼DLANN for geometric mean payoff function with .
Table 3.
Computational results of 4∼DLANN for geometric mean payoff function with .
Table 4.
Computational results of 4∼DLANN for geometric mean payoff function with .
Table 5.
Computational results of 4∼DLANN for geometric mean payoff function with .
Figure 2.
4∼DLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with and . (a) and (b) .
Figure 3.
4∼DLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with and . (a) and (b) .
Figure 4.
4∼DLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with and . (a) and (b) .
Figure 5.
4∼DLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with and . (a) and (b) .
From Table 2, Table 3, Table 4 and Table 5, we see the absolute errors are less than , and the relative errors are about between the 4∼DLANN solutions and analytical solutions, which illustrates that our 4∼DLANN scheme is efficient and accurate. Moreover, the computational results are quite stable. From Figure 2, Figure 3, Figure 4 and Figure 5, we see the 4∼DLANN solutions are very close to the Monte Carlo solutions and analytical solutions.
Figure 6 and Figure 7 record the error evolution path with respect to the training number L for different values of d. We can see from these figures that the errors decrease quickly as the iteration number L increases. Figure 8 records the learning rate path . The learning rate fluctuates between 0 and 0.1.
Figure 6.
4∼DLANN MSE vs. training number L with and . (a) and (b) .
Figure 7.
4∼DLANN MSE vs. the iteration number L with and . (a) and (b) .
Figure 8.
4∼DLANN learning rate path with . (a) and ; (b) and .
4.3. Numerical Results with Arithmetic Mean Payoff Function
In this subsection, we list some results of the put options with arithmetic mean payoff functions
This type of option has no analytical solution, so we used the Monte Carlo simulation (the procedure can be seen in Algorithm 3) for comparison.
Table 6, Table 7, Table 8 and Table 9 list the results of the 4-DLANN solutions (labeled by ‘DLANN’) and the Monte Carlo simulation (labeled by ‘MC’). From these tables, we see the absolute errors (labeled by ‘ERR’) are less than , and the relative errors (labeled by ‘RE’) are less than , which illustrates that our DLANN algorithm is effective and accurate.
Table 6.
Computational results of 4∼DLANN for arithmetic mean payoff function with and .
Table 7.
Computational results of 4∼DLANN for arithmetic mean payoff function with and .
Table 8.
Computational results of 4∼DLANN for arithmetic mean payoff function with and .
Table 9.
Computational results of 4∼DLANN for arithmetic mean payoff function with and .
Figure 9a, Figure 10a, Figure 11a and Figure 12a plot the solutions obtained by the 4-DLANN method and the Monte Carlo simulation with , , and . Figure 9b, Figure 10b, Figure 11b and Figure 12b plot the errors with respect to the number of iterations L. From these figures, we see the errors decrease quickly as the iteration number L increases.
Figure 9.
(a) 4∼DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: , and .
Figure 10.
(a) 4∼DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: , and .
Figure 11.
(a) 4∼DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: , and .
Figure 12.
(a) 4∼DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: , and .
4.4. Numerical Results with
We set the DLANN parameters as and for the arithmetic payoff functions. Table 10 lists some DLANN solutions and Monte Carlo solutions. We see the absolute errors and relative errors are less than and , respectively. Figure 13a plots the DLANN solutions, Monte Carlo solutions, and analytical solutions for the geometric payoff functions. Figure 13b plots the errors with respect to the iteration number L. From the table and figure, we see the errors between the DLANN and MC are less than , which illustrates that our 5-DLANN is efficient and accurate.
Table 10.
Computational results of 5∼DLANN for arithmetic mean payoff function with and .
Figure 13.
(a) 5∼DLANN computational options vs. Monte Carlo solutions for geometric mean payoff function. (b) Iteration error process. Parameters: , and .
Figure 14 plots the DLANN solutions and the errors’ evolution with respect to the iteration number L under the parameters for arithmetic payoff functions. Again, these numerical results confirm that our DLANN is accurate and efficient.
Figure 14.
(a) 6∼DLANN computational options vs. Monte Carlo solutions for arithmetic mean payoff function. (b) Iteration error process. Parameters: , and .
The computational results of the DLANN with larger p are more accurate, which illustrates that our DLANN is convergent with respect to the number of ANN layers p.
4.5. Numerical Results for TSDLANN
Using the p-TSDLANN, as shown in Algorithm 2, we computed the options with geometric payoff functions. Figure 15 plots the TSDLANN solutions with , compared with the analytical solutions and Monte Carlo solutions, from which we see the p-TSDLANN solutions are very close to the analytical solutions. The experiments show that, at the same accuracy, the CPU time of this algorithm is about half that of the p-DLANN.
Figure 15.
4∼TSDLANN computational options, analytical solutions, and Monte Carlo solutions for geometric mean payoff function with . (a) and (b) .
5. Conclusions
This paper introduces a p-layers deep learning ANN to price European multi-asset options based on the discrete PDE. By setting the layer sizes of the DLANN structure, the multi-asset options are simulated correctly and efficiently. The key point is obtaining the discrete formula of the PDE established by the option valuation and then computing the gradients of the DLANN with respect to the net weights and net bias . By setting the learning rate , the DLANN is trained well, and the optimal parameters and are obtained. Lastly, the option prices are simulated by inputting the asset log prices and remaining time into the trained DLANN.
Numerical examples are provided for geometric mean payoff functions and arithmetic mean payoff functions. These examples show that the p-layers DLANN has a powerful capability for multi-asset European option pricing. The relative errors of the DLANN options are less than , and this calculation accuracy is comparable to that of the Monte Carlo simulation. Moreover, we propose a so-called TSDLANN, which improves the accuracy and saves CPU time.
In the future, we will prove the convergence of DLANN under certain conditions. The challenge is how to analyze the errors between the actual options and computed options by DLANN. Moreover, we will consider applying the DLANN to more complex options, such as multi-asset American options, multi-asset Asian options, and multi-asset barrier options, which is somewhat complicated.
Author Contributions
Methodology, Z.Z.; Software, H.W. and Y.W.; Formal analysis, C.K.; Resources, Y.L. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the National Natural Science Foundation of China (Grant No. 12171409) and the Key Project of the Hunan Provincial Department of Education (Grant No. 21A0533).
Data Availability Statement
The authors confirm that the data supporting the findings of this study are available within the article.
Conflicts of Interest
The authors declare no conflicts of interest.
Appendix A
Appendix A.1. Deducing the Derivatives for Weights and Bias
Appendix A.1.1. Deducing the Derivatives for Weights
The partial derivatives of are
for . The partial derivatives of are
for and . Similarly, we have
for and .
for and .
for , , and , with the function
Appendix A.1.2. Deducing the Derivatives for Bias
The derivatives of are
for . The derivatives of are
for . Similarly, we have
for . The derivatives of are
for . The derivatives of are
for .
Appendix A.2. Deducing g′(U) and g′′(U)
The function is
So, the derivative is
and the second derivative is
References
- Álvarez, D.; González-Rodríguez, P.; Kindelan, M. A local radial basis function method for the Laplace-Beltrami operator. J. Sci. Comput. 2021, 86, 28. [Google Scholar] [CrossRef]
- Banei, S.; Shanazari, K. On the convergence analysis and stability of the RBF-adaptive method for the forward-backward heat problem in 2D. Appl. Numer. Math. 2021, 159, 297–310. [Google Scholar] [CrossRef]
- Bastani, A.F.; Ahmadi, Z.; Damircheli, D. A radial basis collocation method for pricing American options under regime-switching jump-diffusion models. Appl. Numer. Math. 2013, 65, 79–90. [Google Scholar] [CrossRef]
- Bollig, E.; Flyer, N.; Erlebacher, G. Solution to PDEs using radial basis function finite-differences (RBF-FD) on multiple GPUs. J. Comput. Phys. 2012, 231, 7133–7151. [Google Scholar] [CrossRef]
- Fornberg, B.; Lehto, E. Stabilization of RBF-generated finite difference methods for convective PDEs. J. Comput. Phys. 2011, 230, 2270–2285. [Google Scholar] [CrossRef]
- Fornberg, B.; Piret, C. A stable algorithm for flat radial basis functions on a sphere. SIAM J. Sci. Comput. 2007, 30, 60–80. [Google Scholar] [CrossRef]
- Fornberg, B.; Piret, C. On choosing a radial basis function and a shape parameter when solving a convective PDE on a sphere. J. Comput. Phys. 2008, 227, 2758–2780. [Google Scholar] [CrossRef]
- Larsson, E.; Shcherbakov, V.; Heryudono, A. A least squares radial basis function partition of unity method for solving PDEs. SIAM J. Sci. Comput. 2017, 39, 2538–2563. [Google Scholar] [CrossRef]
- Li, H.G.; Mollapourasl, R.; Haghi, M. A local radial basis function method for pricing options under the regime switching model. J. Sci. Comput. 2019, 79, 517–541. [Google Scholar] [CrossRef]
- Shcherbakov, V.; Larsson, E. Radial basis function partition of unity methods for pricing vanilla Basket options. Comput. Math. Appl. 2016, 71, 185–200. [Google Scholar] [CrossRef]
- Sunday, E.; Olabisi, U.; Enahoro, O. Analytical solutions of the Black–Scholes pricing model for European option valuation via a projected differential transformation method. Entropy 2015, 17, 7510–7521. [Google Scholar] [CrossRef]
- Zhao, W.J. An artificial boundary method for American option pricing under the CEV model. SIAM J. Numer. Anal. 2008, 46, 2183–2209. [Google Scholar]
- Chiarella, C.; Kang, B.; Meyer, G.H. The numerical solution of the American option pricing problem-finite difference and transform approaches. World Sci. Books 2014, 127, 161–168. [Google Scholar]
- in’t Hout, K.; Foulon, S. ADI finite difference schemes for option pricing in the Heston model with correlation. Int. J. Numer. Anal. Model. 2008, 7, 303–320. [Google Scholar]
- in ’t Hout, K.; Weideman, J.A.C. A contour integral method for the Black–Scholes and Heston equations. SIAM J. Sci. Comput. 2011, 33, 763–785. [Google Scholar] [CrossRef]
- Pang, H.; Sun, H. Fast numerical contour integral method for fractional diffusion equations. J. Sci. Comput. 2016, 66, 41–66. [Google Scholar] [CrossRef]
- Song, L.; Wang, W. Solution of the fractional Black-Scholes option pricing model by finite difference method. Abstr. Appl. Anal. 2013, 2013, 194286. [Google Scholar] [CrossRef]
- Gabriel, T.A.; Amburgey, J.D.; Bishop, B.L. CALOR: A Monte Carlo Program Package for the Design and Analysis of Calorimeter Systems. 2024, In FORTRAN IV, osti Information Bridge Server. Available online: https://www.osti.gov/servlets/purl/7215451/ (accessed on 13 December 2024).
- Gamba, A. An Extension of Least Squares Monte Carlo Simulation for Multi-Options Problems; 2002. pp. 1–40. Available online: https://www.realoptions.org/papers2002/Gamba.pdf (accessed on 13 December 2024).
- Rodriguez, A.L.; Grzelak, L.A.; Oosterlee, C.W. On an efficient multiple time-step Monte Carlo simulation of the SABR model. Soc. Sci. Electron. Publ. 2016, 17, 1549–1565. [Google Scholar] [CrossRef]
- Ma, J.; Zhou, Z. Convergence analysis of iterative Laplace transform methods for the coupled PDEs from regime-switching option pricing. J. Sci. Comput. 2018, 75, 1656–1674. [Google Scholar] [CrossRef]
- Ma, J.; Zhou, Z. Fast Laplace transform methods for the PDE system of Parisian and Parasian option pricing. Sci. China Math. 2022, 65, 1229–1246. [Google Scholar] [CrossRef]
- Panini, R.; Srivastav, R.P. Option pricing with Mellin transforms. Math. Comput. Model. 2004, 40, 43–56. [Google Scholar] [CrossRef]
- Zhou, Z.; Ma, J.; Sun, H. Fast Laplace transform methods for free-boundary problems of fractional diffusion equations. J. Sci. Comput. 2018, 74, 49–69. [Google Scholar] [CrossRef]
- Zhou, Z.; Xu, W. Robust willow tree method under Lévy processes. J. Comput. Appl. Math. 2023, 424, 114982. [Google Scholar] [CrossRef]
- Zhou, Z.; Xu, W. Joint calibration of S&P 500 and VIX options under local stochastic volatility models. Int. J. Financ. Econ. 2024, 29, 273–310. [Google Scholar]
- Anderson, D.; Ulrych, U. Accelerated American option pricing with deep neural networks. Quant. Financ. Econ. 2023, 7. [Google Scholar] [CrossRef]
- Carverhill, A.P.; Cheuk, T.H.F. Alternative Neural Network Approach for Option Pricing and Hedging; Social Science Electronic Publishing: Rochester, NY, USA, 2024; pp. 1–17. [Google Scholar] [CrossRef]
- Gan, L.; Liu, W. Option pricing based on the residual neural network. Comput. Econ. 2023, 63, 1327–1347. [Google Scholar] [CrossRef]
- Kapllani, L.; Teng, L. Deep learning algorithms for solving high-dimensional nonlinear backward stochastic differential equations. Discret. Contin. Dyn. Syst. Ser. B 2024, 29, 1695–1729. [Google Scholar] [CrossRef]
- Lee, Y.; Son, Y. Predicting arbitrage-free American option prices using artificial neural network with pseudo inputs. Ind. Eng. Manag. Syst. 2021, 20, 119–129. [Google Scholar] [CrossRef]
- Mary, M.; Salchenberger, L. A neural network model for estimating option prices. Appl. Intell. 1993, 3, 193–206. [Google Scholar]
- Nikolas, N.; Lorenz, R. Interpolating Between BSDEs and PINNs: Deep Learning, for Elliptic and Parabolic Boundary Value Problems. J. Mach. Learn. 2024, 2, 31–64. [Google Scholar] [CrossRef]
- Balkin, R.; Ceniceros, H.D.; Hu, R. Stochastic Delay Differential Games: Financial Modeling and Machine Learning Algorithms. J. Mach. Learn. 2024, 3, 23–63. [Google Scholar] [CrossRef]
- Teng, Y.; Li, Y.; Wu, X. Option volatility investment strategy: The combination of neural network and classical volatility prediction model. Discret. Dyn. Nat. Soc. 2022, 2022, 8952996. [Google Scholar] [CrossRef]
- Tung, W.L.; Quek, C. Financial volatility trading using a self-organising neural-fuzzy semantic network and option straddle-based approach. Expert Syst. Appl. 2011, 38, 4668–4688. [Google Scholar] [CrossRef]
- Umeorah, N.; Mashele, P.; Agbaeze, O.M.J.C. Barrier Options and Greeks: Modeling with Neural Networks. Axioms 2023, 12, 384. [Google Scholar] [CrossRef]
- Wang, H. Nonlinear neural network forecasting model for stock index option price: Hybrid GJRCGARCH approach. Expert Syst. Appl. 2009, 36, 564–570. [Google Scholar] [CrossRef]
- Zhou, Z.; Wu, H.; Li, Y.; Kang, C.; Wu, Y. Three-layer artificial neural network for pricing multi-asset European option. Mathematics 2024, 12, 2770. [Google Scholar] [CrossRef]
- Jiang, L. Mathematical Models and Methods of Option Pricing (Chinese Edition); Higher Education Press: Beijing, China, 2008. [Google Scholar]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).