
Newton–Raphson Emulation Network for Highly Efficient Computation of Numerous Implied Volatilities

Geon Lee 1, Tae-Kyoung Kim 2, Hyun-Gyoon Kim 3 and Jeonggyu Huh 4,*

1 Department of Mathematics, Chonnam National University, Gwangju 61186, Republic of Korea
2 Asset Management Department, KB Kookmin Bank, Seoul 07328, Republic of Korea
3 Department of Mathematics, Yonsei University, Seoul 03722, Republic of Korea
4 Department of Statistics, Chonnam National University, Gwangju 61186, Republic of Korea
* Author to whom correspondence should be addressed.
J. Risk Financial Manag. 2022, 15(12), 616; https://doi.org/10.3390/jrfm15120616
Submission received: 1 November 2022 / Revised: 12 December 2022 / Accepted: 15 December 2022 / Published: 18 December 2022
(This article belongs to the Special Issue Neural Networks for Financial Derivatives)

Abstract

In finance, implied volatility is an important indicator that reflects the market situation immediately. Many practitioners estimate volatility with iterative methods, such as the Newton–Raphson (NR) method. However, when numerous implied volatilities must be computed frequently, iterative methods quickly reach their processing-speed limit. Therefore, we emulate the NR method as a network by using PyTorch, a well-known deep learning package, and further optimize the network by using TensorRT, a package for optimizing deep learning models. Comparing the optimized emulation method with benchmarks implemented in two popular Python packages, we demonstrate that the emulation network is up to 1000 times faster than the benchmark functions.

1. Introduction

Volatility is the degree of variability in the dynamics of an underlying asset; it helps investors predict future market variability and is usually divided into historical and implied volatility. Because historical volatility is obtained from information over a specific period in the past, this type of volatility lags behind the market situation. Unlike historical volatility, implied volatility contains only current market information, not past market information (Gatheral 2011). When a sudden shock, such as a financial crisis, occurs, implied volatility is particularly important in predicting future volatility. See Hull (2003) and Wilmott (2013) for a detailed explanation of volatility.
When using implied volatility for various purposes, such as estimating the parameters of an option pricing model, it is often necessary to convert a large number of option prices into implied volatilities in real time. However, iterative methods, such as the bisection and Newton–Raphson (NR) methods, which are typically used to obtain implied volatilities, are unsuitable for calculating numerous implied volatilities because of their excessive computation. Therefore, many studies (Brenner and Subrahmanyam 1988; Chance 1996; Corrado and Miller 1996; Jäckel 2006; Li 2005; Mininni et al. 2021; Orlando and Taglialatela 2017; Stefanica and Radoičić 2017) have proposed formulas to approximate implied volatility. For example, Brenner and Subrahmanyam (1988), Chance (1996), and Li (2005) employed Taylor expansions, Corrado and Miller (1996) utilized a quadratic approximation, and Mininni et al. (2021) used hyperbolic tangent functions to approximate the implied volatility.
However, improving the accuracy of such approximation formulas by purely mathematical means has reached a limit. Thus, in line with various studies (Berg and Nyström 2019; Chen et al. 2018; Li et al. 2020; Raissi and Karniadakis 2018; Raissi et al. 2019; Ramuhalli et al. 2005) that have supplemented numerical schemes, such as the finite element method, with neural networks, deep learning methods (Kim et al. 2022; Liu et al. 2019) have been introduced to improve the accuracy of estimating implied volatility. As in many existing studies on estimating implied volatility (e.g., Jäckel 2006, 2015; Kim et al. 2022), an iteration procedure is added after the network approximation to attain higher accuracy. However, this additional iteration procedure significantly burdens the whole estimation process. In the experiments performed by Kim et al. (2022), for example, the iteration procedure takes considerably more time than the network approximation of implied volatility.
Thus, in this study, we are concerned with further reducing the estimation time by accelerating the iteration procedure. We develop a graphics processing unit (GPU) acceleration scheme for the NR method, dramatically reducing the computational time for estimating implied volatilities. To this end, we apply the so-called neural emulation technique, which implements an algorithm as a neural network with zero or very few parameters. This technique enables well-known deep learning packages, such as TensorFlow and PyTorch, to accelerate a scientific procedure, because these popular packages make it straightforward to implement large-scale parallel computation on GPUs. Additionally, this approach allows a neural network optimization engine, TensorRT, to further maximize inference performance. We refer to the network emulating the NR method as the NR emulation network; the implication of this study is that the iteration process can be accelerated through the NR emulation network.
To verify the effectiveness of this study, the presented NR emulation network was compared with benchmarks implemented in two widely used packages in terms of estimation accuracy and speed. The test results reveal that the NR emulation network is up to 1000 times faster than the benchmarks of the two well-known packages while achieving similar accuracy. In other words, the proposed NR emulation network is stable and efficient enough to be used for estimating numerous implied volatilities in practice.
The next section provides background on implied volatility and the NR method. Section 3 fully describes the NR emulation network. Section 4 compares the NR emulation network with the benchmarks in terms of accuracy and computation time. The last section concludes the work.

2. Background

2.1. Implied Volatility

An option is a contract that conveys the right to buy (call option) or sell (put option) an asset at a predetermined strike price on a maturity date. Options can be divided into several types depending on the exercise method; if the option can be exercised only on the expiration date of the contract, it is a European-style option. The Black–Scholes model (Black and Scholes 1973) is generally used to evaluate European options.
In the Black–Scholes model, the option pricing formula is given by

$c_{call}(S_t, t; r, \sigma, K, T) = S_t N(d_1) - K e^{-r(T-t)} N(d_2)$,
$c_{put}(S_t, t; r, \sigma, K, T) = K e^{-r(T-t)} N(-d_2) - S_t N(-d_1)$,   (1)

where $S_t$ is the stock price at time $t$, $r$ denotes the risk-free rate, $\sigma$ represents the volatility of $S_t$, $K$ and $T$ are the strike price and expiration time of the option, respectively, $d_1 = \frac{1}{\sigma\sqrt{T-t}}\left\{\ln\frac{S_t}{K} + \left(r + \frac{1}{2}\sigma^2\right)(T-t)\right\}$, $d_2 = d_1 - \sigma\sqrt{T-t}$, $N(\cdot)$ denotes the cumulative distribution function of the standard normal distribution, and $c_{call}$ and $c_{put}$ indicate the prices of the call and put options, respectively. Among the variables that influence the option price $c$, all but the volatility $\sigma$, namely $S_t$, $t$, $r$, $K$, and $T$, are provided by market information and the option specification, whereas $\sigma$ must be estimated from market data. However, in many cases, the market price $c_{mkt}$ of the option is a known quote because most options are exchange-traded products, and the corresponding $\sigma$ is calculated in reverse from the price $c_{mkt}$. The value of $\sigma$ computed in this way is called the implied volatility $\sigma_{impl}$.
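For concreteness, here is a minimal NumPy/SciPy sketch of the pricing Formula (1); the function name and interface are ours for illustration, not from the paper's code.

import numpy as np
from scipy.stats import norm

def bs_price(S, t, r, sigma, K, T, call=True):
    """Black-Scholes price of a European call or put, following Equation (1)."""
    tau = T - t
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    d2 = d1 - sigma * np.sqrt(tau)
    if call:
        return S * norm.cdf(d1) - K * np.exp(-r * tau) * norm.cdf(d2)
    return K * np.exp(-r * tau) * norm.cdf(-d2) - S * norm.cdf(-d1)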
In other words, for given $S_t$, $r$, $t$, $K$, $T$, and $c_{mkt}$, the implied volatility $\sigma_{impl}$ is defined for each option as follows:

$c_{mkt} = h_{r,k,\tau}(\sigma_{impl}) := c(S_t, t; r, \sigma_{impl}, K, T)$,   (2)

where $k = S_t/K$ and $\tau = T - t$. Because $h_{r,k,\tau}(\cdot)$ is monotonically increasing, $\sigma_{impl}$ uniquely exists as $h_{r,k,\tau}^{-1}(c_{mkt})$ if $c_{mkt}$ is within an appropriate range. In addition, $\sigma_{impl}$ is often considered an alternative indicator to $c_{mkt}$ because $\sigma_{impl}$ changes more stably than $c_{mkt}$.

2.2. Newton–Raphson Iterative Method

The nonlinear Equation (2) must be solved with a numerical scheme to determine $\sigma_{impl}$ because $h_{r,k,\tau}^{-1}$ cannot be found explicitly. An iterative method, such as the bisection or secant method, is commonly used to find a solution to such a nonlinear equation. In particular, the NR method, an algorithm with a fast convergence rate, is the most widely used for estimating $\sigma_{impl}$.
According to the NR method, the implied volatility $\sigma_{impl}$ can be obtained through a series of the following update steps:

$\sigma_{n+1} = \sigma_n - \dfrac{h_{r,k,\tau}(\sigma_n) - c_{mkt}}{h'_{r,k,\tau}(\sigma_n)}$.   (3)
If the initial value $\sigma_0$ is given within the convergence interval, the NR method converges rapidly to $\sigma_{impl}$ at a quadratic rate. However, there is a risk of divergence if $\sigma_0$ is not within the convergence interval. Fortunately, the convergence of the NR method is guaranteed if $\sigma_0$ is set to $\sigma_c$, as follows (refer to Higham 2004):

$\sigma_c = \sqrt{\dfrac{2}{\tau}\left|\ln k + r\tau\right|}$,   (4)

where $\sigma_c$ is the unique inflection point of $h_{r,k,\tau}$, at which the option vomma is 0. The first and second derivatives $\partial c/\partial\sigma$ and $\partial^2 c/\partial\sigma^2$ of the option price $c$ with respect to $\sigma$ are called vega and vomma, respectively.
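As a hedged illustration (not the authors' released code), the scheme of Equations (3) and (4) can be written in a few lines of NumPy; bs_price is the function sketched in Section 2.1.

import numpy as np
from scipy.stats import norm

def implied_vol_nr(c_mkt, S, K, tau, r, n_iter=8, call=True):
    """Newton-Raphson iteration (3) started at the inflection point sigma_c of (4)."""
    k = S / K
    sigma = np.sqrt(2.0 / tau * np.abs(np.log(k) + r * tau))   # sigma_c, Equation (4)
    for _ in range(n_iter):
        d1 = (np.log(k) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
        vega = S * norm.pdf(d1) * np.sqrt(tau)                 # h'_{r,k,tau}(sigma)
        price = bs_price(S, 0.0, r, sigma, K, tau, call)       # h_{r,k,tau}(sigma)
        sigma = sigma - (price - c_mkt) / vega                 # update step (3)
    return sigma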

3. Newton–Raphson Emulation Network

This section describes the proposed NR emulation network, which emulates the NR method. The emulation network enables us to obtain numerous implied volatilities in real time through GPU parallel computing and through optimization of the network's computation graph.
The NR update (NRU) layer depicted in Figure 1 is designed to emulate the update step (3) of the NR method, where $h_{r,k,\tau}$ is defined in (2). Therefore, when the input $\sigma_n$ passes through the NRU layer, one step of the NR method is applied to produce $\sigma_{n+1}$, which is expected to be closer to $\sigma_{impl}$ than $\sigma_n$. Because $h_{r,k,\tau}$ and $h'_{r,k,\tau}$ depend on the risk-free rate $r$, the ratio $k$ of the stock price to the strike price, and the time to maturity $\tau$, the NRU layer also depends on $r$, $k$, and $\tau$, as well as on $c_{mkt}$. This dependence can be viewed as the NRU layer being conditioned on $r$, $k$, $\tau$, and $c_{mkt}$, similar to a conditional generative adversarial network (Mirza and Osindero 2014).
The NR emulation network is created by stacking NRU layers, as depicted in Figure 2, which corresponds to repeating the update steps of the NR method. As the input $\sigma_0$ for the network, $\sigma_c$ in Equation (4) is chosen. This choice ensures that the output $\sigma_{pred}$ is sufficiently close to $\sigma_{impl}$ if the emulation network is deep enough (except in cases where a very small $\sigma_{impl}$ makes $\sigma_{pred}$ diverge because of the limitations of the floating-point number system). Passing through the deep network corresponds to performing many update steps of the NR method. In the experiments in the next section, we empirically demonstrate that the minimum depth of the NR emulation network should be eight to guarantee convergence. In other words, when $\sigma_0 = \sigma_c$, there should be at least eight NRU layers in the network so that $|\sigma_{pred} - \sigma_{impl}| < \epsilon$ for the machine epsilon $\epsilon$ ($\approx 10^{-6}$) of the single-precision floating-point system. A sketch of such a layer and network follows.
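The following PyTorch sketch shows one way the NRU layer and the stacked network could look. The class names are ours, prices are normalized by the strike (so $c_{mkt}$ is quoted per unit strike), and this is an illustration under those assumptions rather than the authors' released code.

import math
import torch

def norm_cdf(x):
    # Standard normal CDF via the error function (differentiable in PyTorch)
    return 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return torch.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

class NRULayer(torch.nn.Module):
    """One parameter-free NR update step (3), conditioned on r, k, tau, c_mkt."""
    def forward(self, sigma, r, k, tau, c_mkt):
        st = torch.sqrt(tau)
        d1 = (torch.log(k) + (r + 0.5 * sigma**2) * tau) / (sigma * st)
        d2 = d1 - sigma * st
        h = k * norm_cdf(d1) - torch.exp(-r * tau) * norm_cdf(d2)   # call price per unit strike
        vega = k * norm_pdf(d1) * st                                # h'_{r,k,tau}(sigma)
        return sigma - (h - c_mkt) / vega

class NREmulationNet(torch.nn.Module):
    """NRU layers stacked to the empirically sufficient depth of eight."""
    def __init__(self, depth=8):
        super().__init__()
        self.layers = torch.nn.ModuleList([NRULayer() for _ in range(depth)])

    def forward(self, r, k, tau, c_mkt):
        sigma = torch.sqrt(2.0 / tau * torch.abs(torch.log(k) + r * tau))  # sigma_c in (4)
        for layer in self.layers:
            sigma = layer(sigma, r, k, tau, c_mkt)
        return sigma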
To exploit powerful parallel computing, we implement the NR emulation network with PyTorch, a well-known deep learning framework, and run it on the GPU. This approach also allows the network to be optimized with TensorRT to accelerate its inference performance. TensorRT is one of the deep learning tools provided by NVIDIA; it can be used to optimize the structure of a network while converting a dynamic PyTorch graph into a static graph (https://developer.nvidia.com/tensorrt, accessed on 1 November 2022). Although neural networks are usually used to identify patterns inherent in data, no such data-learning stage exists in this study.
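One plausible conversion route uses the torch_tensorrt package; the calls below reflect our understanding of that package's API and may differ from the authors' exact setup.

import torch
import torch_tensorrt  # NVIDIA's Torch-TensorRT bridge

model = NREmulationNet(depth=8).eval().cuda()
n = 1_000_000
# Shape-only example inputs for tracing: r, k, tau, c_mkt
example = tuple(torch.full((n,), v, device="cuda") for v in (0.0, 1.1, 1.0, 0.1))
trt_model = torch_tensorrt.compile(
    torch.jit.trace(model, example),                  # freeze to a static graph
    inputs=[torch_tensorrt.Input((n,)) for _ in example],
    enabled_precisions={torch.float32},               # single precision, as in the tests
)
sigma_pred = trt_model(*example)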
In the next section, we experimentally reveal how accurately and quickly the NR emulation network derives the implied volatility, and we conclude that the market prices $c_{mkt}$ of numerous options can be converted into implied volatilities $\sigma_{impl}$ in real time.

4. Numerical Tests

4.1. Test Data Description

A testing dataset with one million data points was prepared by generating virtual option prices $c_{mkt}$ with the Black–Scholes Formula (1) and pairing them with the corresponding implied volatilities $\sigma_{impl}$. The variables $\sigma$, $\tau$, and $k$ involved in generating $c_{mkt}$ are randomly selected within the ranges in Table 1 (for convenience, the risk-free rate $r$ is fixed at 0 to offset its effect). The variables $\sigma$, $\tau$, and $k$ are the volatility parameter of the Black–Scholes model, the time to maturity $T - t$, and the ratio $S_t/K$ of the stock price to the exercise price, respectively.
We set the variable ranges to be as realistic as possible by reflecting the real market. Most options in the real market have a time to maturity $\tau$ of less than two years, and the volatility $\sigma$ typically neither falls below 1% nor exceeds 50%. Moreover, the ratio $k$ is set so that the strike price lies within the 95% confidence interval of the distribution of the stock price $S_\tau$ at time $\tau$; this distribution follows from the Black–Scholes assumption that $\ln S_\tau \sim N(-\frac{\sigma^2}{2}\tau, \sigma^2\tau)$ when $S_0 = 1$ and $r = 0$. A data-generation sketch follows.
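The sketch below follows our reading of Table 1, with prices normalized by the stock price $S_t = 1$ and $r = 0$; the seed and variable names are ours.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)                     # arbitrary seed
L = 1_000_000
sigma = rng.uniform(0.01, 0.50, L)                 # exact implied volatilities
tau = rng.uniform(0.01, 2.00, L)                   # times to maturity
mean = -0.5 * sigma**2 * tau                       # mean of ln S_tau
width = 2.0 * sigma * np.sqrt(tau)                 # ~95% half-width
ln_k = rng.uniform(mean - width, mean + width)     # ln k, per Table 1
k = np.exp(ln_k)

# Virtual market prices from Formula (1) with S_t = 1 and r = 0
d1 = (ln_k + 0.5 * sigma**2 * tau) / (sigma * np.sqrt(tau))
d2 = d1 - sigma * np.sqrt(tau)
c_mkt = norm.cdf(d1) - (1.0 / k) * norm.cdf(d2)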

4.2. Test Results

In this section, we analyze the results of various tests. As benchmarks, we choose the NR method from the Python package SciPy (https://scipy.org/) (accessed on 1 November 2022) and the implied volatility estimation function from the recently released Python package py_vollib_vectorized (https://github.com/marcdemers/py_vollib_vectorized) (accessed on 15 November 2022). We denote the methods of SciPy and py_vollib_vectorized as SciPy-NR and Vectorized, respectively.
The SciPy-NR method is set to estimate the implied volatility through eight iterations, so the emulation network is also set to perform the estimation through eight NRU layers. Eight is the minimum number of iterations for both methods to reduce the error to near the machine epsilon $\epsilon$ ($\approx 10^{-6}$) of the single-precision floating-point number system. A configuration sketch is given below.
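One way such a benchmark can be configured with SciPy's scipy.optimize.newton, which accepts array-valued initial guesses and runs vectorized (reusing the arrays from the data-generation sketch; the authors' exact settings may differ):

import numpy as np
from scipy.optimize import newton
from scipy.stats import norm

sqrt_tau = np.sqrt(tau)

def f(s):
    # h_{r,k,tau}(s) - c_mkt with r = 0 and prices per unit stock price
    d1 = (ln_k + 0.5 * s**2 * tau) / (s * sqrt_tau)
    return norm.cdf(d1) - (1.0 / k) * norm.cdf(d1 - s * sqrt_tau) - c_mkt

def fprime(s):
    d1 = (ln_k + 0.5 * s**2 * tau) / (s * sqrt_tau)
    return norm.pdf(d1) * sqrt_tau                 # vega

sigma0 = np.sqrt(2.0 / tau * np.abs(ln_k))         # sigma_c with r = 0
sigma_scipy = newton(f, sigma0, fprime=fprime, maxiter=8, tol=1e-8)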
The Python package py_vollib_vectorized, released in 2021, is the most recent of the packages for estimating implied volatility. The package is theoretically based on Jäckel (2015) and runs in parallel on the CPU for speed.
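A usage sketch for the Vectorized benchmark follows; the entry point and keyword names below are our best understanding of the package's interface, so consult its documentation for the authoritative signature.

import py_vollib_vectorized  # patches py_vollib with vectorized routines

iv = py_vollib_vectorized.vectorized_implied_volatility(
    price=c_mkt, S=1.0, K=1.0 / k, t=tau, r=0.0, flag="c",
    q=0.0, model="black_scholes_merton", return_as="numpy",
)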
As shown in Table 2, the NR emulation network runs on the GPU to take full advantage of parallel computing. However, apart from expensive Tesla GPUs, ordinary GPUs specialize in single-precision floating-point arithmetic rather than double precision. Because we do not have a Tesla GPU, we run the tests with the single-precision floating-point number system. Therefore, for a fair comparison, the benchmark methods are also executed with single-precision floating-point numbers.
Table 3 compares the accuracy of each method by using the mean absolute error (MAE), mean square error (MSE), and mean relative error (MRE) for inferring the implied volatility. The definitions of the MAE, MSE, and MRE are as follows:

$\mathrm{MAE} = \frac{1}{L}\sum_{i=1}^{L}\left|\sigma_{i,pred} - \sigma_{i,impl}\right|$, $\quad \mathrm{MSE} = \frac{1}{L}\sum_{i=1}^{L}\left(\sigma_{i,pred} - \sigma_{i,impl}\right)^2$, $\quad \mathrm{MRE} = \frac{1}{L}\sum_{i=1}^{L}\frac{\left|\sigma_{i,pred} - \sigma_{i,impl}\right|}{\sigma_{i,impl}}$,

where $L = 1{,}000{,}000$ and $\sigma_{i,pred}$ denotes the value derived by each method to predict $\sigma_{i,impl}$. All methods achieve the maximum possible accuracy in the single-precision floating-point system, as the MAE and MRE values are below $\epsilon$ and the MSE value is below $\epsilon^2$. The SciPy-NR and Vectorized methods tend to infer $\sigma_{impl}$ about 10 times more accurately than the emulation network, implying that the CPU may achieve higher precision than the GPU even when both processing units use similar single-precision floating-point systems.
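These metrics are straightforward to compute; a NumPy sketch, where sigma_pred and sigma_impl are placeholder arrays of predictions and exact values:

import numpy as np

err = sigma_pred - sigma_impl
mae = np.mean(np.abs(err))
mse = np.mean(err**2)
mre = np.mean(np.abs(err) / sigma_impl)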
Table 4 presents the computation time consumed by each method. The NR emulation network has a very short computation time compared to the SciPy-NR and Vectorized methods. Moreover, as the number of implied volatility estimates increases, the emulation network increasingly outpaces the SciPy-NR and Vectorized methods in terms of processing speed. When the number of implied volatility estimates reaches one million, the running time of the network is about 1000 times shorter than that of the SciPy-NR method and about 700 times shorter than that of Vectorized. Each computation time is measured over 100 repetitions, and the average and standard deviation of the measurements are reported together. A sketch of such a measurement harness is given below.
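When timing the GPU path, CUDA kernels launch asynchronously, so the device must be synchronized around the measured region; a sketch of such a harness (we do not know the authors' exact one):

import time
import numpy as np
import torch

def time_method(fn, n_repeats=100):
    """Average wall-clock time of fn() in milliseconds over n_repeats runs."""
    times = []
    for _ in range(n_repeats):
        if torch.cuda.is_available():
            torch.cuda.synchronize()               # flush pending GPU work
        start = time.perf_counter()
        fn()
        if torch.cuda.is_available():
            torch.cuda.synchronize()               # wait for fn's kernels to finish
        times.append((time.perf_counter() - start) * 1e3)
    return np.mean(times), np.std(times)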
Figure 3 depicts how the MSE changes each time the estimate passes through an NRU layer of the NR emulation network. The MSE decreases by a factor of about $10^{-1}$ per layer from the first to the third NRU layer and by a factor of about $10^{-2}$ per layer from the fourth to the sixth. In contrast, the seventh and eighth NRU layers reduce the MSE only slightly because the sixth layer has already virtually achieved the maximum possible accuracy of the single-precision floating-point number system.
Lastly, we demonstrate how the inference value $\sigma_{pred}$ changes while passing through the NRU layers. Table 5 presents two specific cases: (1) $\sigma = 0.3$, $\tau = 1$, and $k = 1.5$ and (2) $\sigma = 0.3$, $\tau = 1$, and $k = 1.3$. The emulation network produces virtually exact outputs at the seventh and fourth layers for $k = 1.5$ and $k = 1.3$, respectively; the outputs are indistinguishable from the exact implied volatility $\sigma_{impl}$ in the single-precision floating-point system. These results confirm that the number of NRU layers required to achieve an accurate implied volatility differs from option to option.

5. Conclusions

Implied volatility is a critical indicator that reflects expectations about future volatility and can be obtained by solving a nonlinear equation with the NR method. However, it is often necessary to repeatedly estimate numerous implied volatilities, and iterative methods then become impractical because of the heavy computational burden. Therefore, the NR emulation network is proposed in this study to resolve this challenge. To develop the network, we implemented the NR method as a PyTorch network and optimized the network with TensorRT. As a result, the emulation network is up to 1000 times faster than the well-known benchmarks, showing that the proposed method is stable and efficient enough to be used for computing numerous implied volatilities in practice.
The purpose of this work is achieved by emulating and optimizing the NR method without resorting to a complex mathematical approach, which is a distinctive contribution to the literature compared with existing results based on complicated techniques. Moreover, all code is available online so that the NR emulation network can be used directly (https://github.com/thix-is/Newton-Raphson-emulation, accessed on 11 November 2022). Our results suggest that, given recent progress in computing technology, this approach can help solve other difficult problems in computational finance, such as model parameter calibration. Follow-up studies are therefore required to address these problems by using the neural emulation technique.

Author Contributions

Conceptualization, G.L. and J.H.; methodology, G.L.; software, G.L. and T.-K.K.; validation, H.-G.K.; formal analysis, G.L. and H.-G.K.; investigation, G.L.; resources, G.L. and H.-G.K.; data curation, G.L. and T.-K.K.; writing—original draft preparation, G.L.; writing—review and editing, J.H.; visualization, G.L.; supervision, J.H.; project administration, J.H.; funding acquisition, J.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the BK21 Fostering Outstanding Universities for Research (No. 5120200913674) funded by the Ministry of Education (Korea) and the National Research Foundation of Korea. Jeonggyu Huh received financial support from the National Research Foundation of Korea (Grant No. NRF-2022R1F1A1063371). This work was supported by the artificial intelligence industrial convergence cluster development project funded by the Ministry of Science and ICT (Korea) and Gwangju Metropolitan City.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Berg, Jens, and Kaj Nyström. 2019. Data-driven discovery of PDEs in complex datasets. Journal of Computational Physics 384: 239–52.
2. Black, Fischer, and Myron Scholes. 1973. The pricing of options and corporate liabilities. Journal of Political Economy 81: 637–54.
3. Brenner, Menachem, and Marti G. Subrahmanyam. 1988. A simple formula to compute the implied standard deviation. Financial Analysts Journal 44: 80–83.
4. Chance, Don M. 1996. A generalized simple formula to compute the implied volatility. Financial Review 31: 859–67.
5. Chen, Ricky T. Q., Yulia Rubanova, Jesse Bettencourt, and David K. Duvenaud. 2018. Neural ordinary differential equations. In Advances in Neural Information Processing Systems. La Jolla. Available online: https://papers.nips.cc/paper/2018/hash/69386f6bb1dfed68692a24c8686939b9-Abstract.html (accessed on 10 November 2022).
6. Corrado, Charles J., and Thomas W. Miller Jr. 1996. A note on a simple, accurate formula to compute implied standard deviations. Journal of Banking & Finance 20: 595–603.
7. Gatheral, Jim. 2011. The Volatility Surface: A Practitioner's Guide. Oxford: John Wiley & Sons.
8. Higham, Desmond J. 2004. An Introduction to Financial Option Valuation: Mathematics, Stochastics and Computation. Cambridge: Cambridge University Press.
9. Hull, John C. 2003. Options, Futures and Other Derivatives. Noida: Pearson Education India.
10. Jäckel, Peter. 2006. By implication. Wilmott 26: 60–66.
11. Jäckel, Peter. 2015. Let's be rational. Wilmott 2015: 40–53.
12. Kim, Tae-Kyoung, Hyun-Gyoon Kim, and Jeonggyu Huh. 2022. Large-scale online learning of implied volatilities. Expert Systems with Applications 203: 117365.
13. Li, Steven. 2005. A new formula for computing implied volatility. Applied Mathematics and Computation 170: 611–25.
14. Li, Zongyi, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. 2020. Fourier neural operator for parametric partial differential equations. arXiv arXiv:2010.08895.
15. Liu, Shuaiqiang, Cornelis W. Oosterlee, and Sander M. Bohte. 2019. Pricing options and computing implied volatilities using neural networks. Risks 7: 16.
16. Mininni, Michele, Giuseppe Orlando, and Giovanni Taglialatela. 2021. Challenges in approximating the Black and Scholes call formula with hyperbolic tangents. Decisions in Economics and Finance 44: 73–100.
17. Mirza, Mehdi, and Simon Osindero. 2014. Conditional generative adversarial nets. arXiv arXiv:1411.1784.
18. Orlando, Giuseppe, and Giovanni Taglialatela. 2017. A review on implied volatility calculation. Journal of Computational and Applied Mathematics 320: 202–20.
19. Raissi, Maziar, and George Em Karniadakis. 2018. Hidden physics models: Machine learning of nonlinear partial differential equations. Journal of Computational Physics 357: 125–41.
20. Raissi, Maziar, Paris Perdikaris, and George E. Karniadakis. 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics 378: 686–707.
21. Ramuhalli, Pradeep, Lalita Udpa, and Satish S. Udpa. 2005. Finite-element neural networks for solving differential equations. IEEE Transactions on Neural Networks 16: 1381–92.
22. Stefanica, Dan, and Radoš Radoičić. 2017. An explicit implied volatility formula. International Journal of Theoretical and Applied Finance 20: 1750048.
23. Wilmott, Paul. 2013. Paul Wilmott on Quantitative Finance. Oxford: John Wiley & Sons.
Figure 1. Newton–Raphson update layer.
Figure 2. Newton–Raphson (NR) emulation network.
Figure 3. The degree of MSE change according to the number of NRU layers passed.
Table 1. Variable ranges involved in generating the virtual test data. $U(a, b)$ is the uniform distribution on $(a, b)$.

Variable:      $\sigma_{impl}$   |  $\tau$   |  $\ln k$
Distribution:  $U(0.01, 0.5)$    |  $U(0.01, 2)$  |  $U\left(-\frac{\sigma^2}{2}\tau - 2\sigma\sqrt{\tau},\ -\frac{\sigma^2}{2}\tau + 2\sigma\sqrt{\tau}\right)$
Table 2. Implementation platform and hardware.

           |  SciPy-NR & Vectorized          |  NR Emulation
Platform   |  SciPy (Python)                 |  PyTorch + TensorRT (Python)
Hardware   |  CPU (Intel Xeon Silver 4216)   |  GPU (NVIDIA GeForce RTX 2080)
Table 3. Implied volatility estimation error.

Error Type  |  SciPy-NR                       |  Vectorized                     |  NR Emulation
MAE         |  $2.800171 \times 10^{-8}$      |  $2.890932 \times 10^{-8}$      |  $2.816055 \times 10^{-7}$
MSE         |  $1.930116 \times 10^{-15}$     |  $1.988556 \times 10^{-15}$     |  $2.949284 \times 10^{-13}$
MRE         |  $2.155739 \times 10^{-7}$      |  $2.184584 \times 10^{-7}$      |  $1.962279 \times 10^{-6}$
Table 4. Computation times (in milliseconds) for estimating the implied volatility. Each value is the average over 100 repetitions, with the corresponding standard deviation in parentheses.

# of Implied Volatility Estimates  |  SciPy-NR            |  Vectorized         |  NR Emulation
10,000                             |  14.71 (3.0527)      |  7.22 (0.5959)      |  0.40 (0.0182)
100,000                            |  99.07 (4.7911)      |  72.04 (6.5210)     |  0.44 (0.0100)
1,000,000                          |  1212.64 (11.2775)   |  754.19 (31.7333)   |  1.50 (0.0065)
Table 5. Change in the predicted value of the NR emulation network according to the number of NRU layers passed ($\sigma = 0.3$, $\tau = 1$).

# of NRU Layers Passed  |  $k = 1.5$         |  $k = 1.3$
0                       |  0.90051656961441  |  0.72438144683838
1                       |  0.37598699331284  |  0.32452529668808
2                       |  0.30990260839462  |  0.30062055587769
3                       |  0.30027109384537  |  0.30000048875809
4                       |  0.30000036954880  |  0.30000001192093
5                       |  0.30000007152557  |  0.30000001192093
6                       |  0.30000016093254  |  0.30000001192093
7                       |  0.30000001192093  |  0.30000001192093
8                       |  0.30000001192093  |  0.30000001192093
