Article

A Preconditioned Policy–Krylov Subspace Method for Fractional Partial Integro-Differential HJB Equations in Finance

1 School of Economics, Guangdong University of Technology, Guangzhou 510520, China
2 Industrial Big Data Strategic Decision Laboratory, Guangdong University of Technology, Guangzhou 510520, China
3 Department of Mathematics, University of Macau, Macau
* Author to whom correspondence should be addressed.
Fractal Fract. 2024, 8(6), 316; https://doi.org/10.3390/fractalfract8060316
Submission received: 9 April 2024 / Revised: 19 May 2024 / Accepted: 20 May 2024 / Published: 27 May 2024
(This article belongs to the Topic Advances in Nonlinear Dynamics: Methods and Applications)

Abstract: To better simulate the prices of underlying assets and improve the accuracy of pricing financial derivatives, an increasing number of new models are being proposed. Among them, the Lévy process with jumps has received increasing attention because of its capacity to model sudden movements in asset prices. This paper explores the Hamilton–Jacobi–Bellman (HJB) equation with a fractional derivative and an integro-differential operator, which arises in the valuation of American options and stock loans based on the Lévy-$\alpha$-stable process with jumps model. We design a fast solution strategy that combines the policy iteration method, the Krylov subspace method, and a banded preconditioner. To solve the resulting HJB equation, a finite difference method including an upwind scheme, the shifted Grünwald approximation, and the trapezoidal method is developed, with stability and convergence analysis. Then, an algorithmic framework involving the policy iteration method and the Krylov subspace method is employed. To improve the performance of this solver, a banded preconditioner is proposed, with an analysis of the condition number of the preconditioned matrix. Finally, two examples, sugar option pricing and stock loan valuation, are provided to illustrate the effectiveness of the considered model and the efficiency of the proposed preconditioned policy–Krylov subspace method.

1. Introduction

The option-pricing problem has always been a hot research topic in finance, as options are important tools for investment and hedging risk. Among a variety of option products, American options have gained popularity in the market because they can be exercised before maturity. This feature makes them the predominant type of options sold in the market. Concurrently, stock loans have become commonplace in the current financial market, and determining how much money can be loaned against the current value of stocks has also emerged as a significant area of research. Given that both of these financial instruments can be exercised (or repaid) at any time, their prices (or loan amounts) can be determined by solving the following linear complementarity problem (LCP):
$$\mathcal{L} U(x,t) \ge 0, \qquad U(x,t) \ge U^{*}(x,t), \qquad \mathcal{L} U(x,t)\left[ U(x,t) - U^{*}(x,t) \right] = 0,$$
where $\mathcal{L}$ is a linear operator containing partial derivatives, $U(x,t)$ represents the value, and $U^{*}(x,t)$ signifies the constraint.
To solve the LCP (1), it is necessary to understand the definition of the operator L , which depends on the assumed underlying asset price behavior. According to the well-known Black–Scholes model [1], the underlying asset price is modeled as follows:
$$dS_t = \nu S_t\,dt + \sigma S_t\,dW_t,$$
where $S_t$ is the asset's price, $W_t$ represents the standard Brownian motion, $\nu$ is the drift rate of the asset's return, and:
$$\mathcal{L} U(S,t) := -\frac{\partial U(S,t)}{\partial t} - \frac{1}{2}\sigma^{2} S^{2} \frac{\partial^{2} U(S,t)}{\partial S^{2}} - r S \frac{\partial U(S,t)}{\partial S} + r U(S,t),$$
with $r$ being the interest rate, $\sigma$ being the volatility, and $S$ being the asset's price. However, this model cannot account for many empirical observations of asset prices, such as sudden price movements. Hence, new models have been proposed to handle these issues, including stochastic volatility models [2,3], jump diffusion models [4], self-exciting jump models [5], Hawkes jump diffusion models [6], the mixed fractional Brownian motion model [7], the two-factor non-affine stochastic volatility model [8], and the sub-fractional Brownian motion model [9,10]. Among these is the model based on the Lévy process, notable for its ability to capture price jumps and to be transformed into a fractional diffusion equation [11]. With different density functions, different variants have been proposed, such as the KoBoL model [12], the CGMY model [13], and the FMLS model [14]. The Lévy process with jumps has also been proposed to describe the price of underlying assets more accurately [15,16]. In this paper, this stochastic process is used to model the underlying assets, defined by
$$dx_t = (r - \nu - \tilde{\lambda}\xi)\,dt + \sigma\,dL_t^{\alpha,1} + d\left( \sum_{i=1}^{N_t} Y_i \right),$$
where $x_t = \ln S_t$ and $t$ denotes the current time, with $\nu = -\frac{1}{2}\sigma^{\alpha}\sec\bigl(\frac{\alpha\pi}{2}\bigr)$ being the convexity adjustment. The variable $L_t^{\alpha,1}$ represents the Lévy-$\alpha$-stable process with maximum skewness, where the tail index $\alpha$ falls within the range $(1,2)$. $N_t$ denotes a Poisson process characterized by the jump intensity $\tilde{\lambda} \ge 0$. $\{Y_i,\ i = 1, 2, \ldots\}$ is a sequence of independent and identically distributed random variables, and $\xi = e^{u_J + \sigma_J^2/2} - 1$, where the parameters $u_J$ and $\sigma_J$ represent the expectation and standard deviation of the jumps, respectively. Then, the operator $\mathcal{L} U(x,t)$ includes both a fractional derivative and an integral operator (for more details, refer to the following section). By solving the LCP in (1) with this $\mathcal{L} U(x,t)$ under specific boundary and initial conditions, we can determine the price of American options or the value of stock loans.
To address the LCP arising from the valuation of American options and stock loans, various numerical algorithms have been developed. These can be categorized into five primary strategies. The first focuses on identifying the optimal execution boundary to resolve the challenges, as demonstrated by the approach presented in [17]. The second strategy transforms the problem into a linear framework through either semi-implicit or implicit–explicit schemes, such as the L-stable method [18], the linearly implicit predictor–corrector scheme [19], and the IMEX BDF method [20]. The third approach converts the problem into a nonlinear equation, utilizing iterative methods for solution acquisition, including the fixed-point method [21], the preconditioned penalty method [22], and the modulus-based matrix splitting iteration method [23]. The fourth approach involves devising iterative methods for direct LCP resolution, exemplified by the projected SOR method [24] and the projected algebraic multigrid method [25]. The last category of algorithms consists of the recently popular deep learning algorithms, such as the neural network method [26] and the physics-informed neural network method [27]. For the LCP discussed in this paper, a Laplace transform method is introduced for rapid resolution by avoiding time marching [15]. Moreover, a fast solution strategy that combines Newton’s method with the preconditioned conjugate gradient normal residual method is proposed in [16], benefiting from fast Fourier transformation acceleration and achieving an operational cost of O ( N log N ) per inner iteration. This paper introduces an alternative approach, translating the LCP into a Hamilton–Jacobi–Bellman (HJB) equation, and devising a fast algorithm for its resolution, complete with theoretical assurances.
This study aims to devise a fast algorithm for solving the HJB equation, which includes a fractional derivative and an integral operator derived from the LCP based on the Lévy- α -stable process with jumps model. To tackle this equation, a nonlinear finite difference method is developed, accompanied by stability and convergence analyses. Subsequently, the policy iteration method and the Krylov subspace method are applied to solve the resultant finite difference scheme. A banded preconditioner is also proposed to enhance the convergence speed of the internal iterative method, supported by theoretical analysis. These steps enable the development of a preconditioned policy–Krylov subspace method for solving the fractional partial integro-differential HJB equations in finance, ensuring an efficient and theoretically sound solution.
The rest of this article is organized as follows: The description of the equation considered in this paper is illustrated in Section 2. A nonlinear finite difference scheme is proposed in Section 3 to discretize the HJB equation with the unconditional stability guarantee and first-order convergence rate. In Section 4, a fast algorithm framework incorporating the policy iteration method and the Krylov subspace method is introduced. In Section 5, a banded preconditioner is proposed with theoretical analysis about the condition number of the preconditioned matrix. Numerical experiments are given in Section 6, and conclusions are drawn in Section 7.

2. Description of the Equation

As described in Section 1, the prices of American options and stock loans can be obtained by solving the LCP in (1). Referring to [28], solving the LCP can be achieved by converting it into an HJB equation. The pricing model based on the Lévy-$\alpha$-stable process with jumps considered in this paper can then be treated by solving the following HJB equation:
$$\begin{cases} \min\{\mathcal{L} U(x,t),\; U(x,t) - U^{*}(x,t)\} = 0, & (x,t) \in (x_L, x_R) \times [0,T),\\ U(x,T) = \Psi(x,T), & x \in (x_L, x_R),\\ U(x_L,t) = \Psi(x_L,t), \quad U(x_R,t) = \Psi(x_R,t), & t \in [0,T], \end{cases}$$
where $x = \ln S$, and $S$ is the asset's price. In the above equation, $\mathcal{L} U(x,t)$ is defined by
$$\mathcal{L} U(x,t) := -\frac{\partial U(x,t)}{\partial t} - (r - \nu - \tilde{\lambda}\xi)\frac{\partial U(x,t)}{\partial x} - \nu\, {}_{-\infty}D_x^{\alpha} U(x,t) + (r + \tilde{\lambda}) U(x,t) - \tilde{\lambda} \int_{-\infty}^{+\infty} U(x+y,t) f(y)\,dy,$$
where ${}_{-\infty}D_x^{\alpha} U(x,t)$ is the left Riemann–Liouville fractional derivative [29], and the definitions of the parameters $r$, $\nu$, $\tilde{\lambda}$, and $\xi$ were given in Section 1. The boundary conditions and initial condition are determined by the function $\Psi(x,t)$, which has different meanings in different problems. This paper considers the problems of pricing American options and stock loans; hence, the definition of $\Psi(x,t)$ is given as follows:
$$\Psi(x,t) = \begin{cases} \max(0,\, e^{x} - K), & \text{for American option pricing},\\ \max(0,\, e^{x} - K e^{\gamma t}), & \text{for stock loan pricing}, \end{cases}$$
for $x \in (-\infty, \infty)$ and $t \in [0,T]$. For American option pricing, $K$ is the strike price. For stock loan pricing, $K$ is the principal value, and $\gamma$ is the interest rate of the loan [30]. Additionally, $U(z,t) = \Psi(z,t)$ for $z \in (-\infty, x_L) \cup (x_R, \infty)$ in (3).
The main difficulty in solving the HJB Equation (2), beyond its nonlinearity, lies in handling the Riemann–Liouville fractional derivative and the integral operator. First is the Riemann–Liouville fractional derivative, which is defined as follows:
$${}_{-\infty}D_x^{\alpha} U(x,t) = \frac{1}{\Gamma(2-\alpha)} \frac{\partial^2}{\partial x^2} \int_{-\infty}^{x} \frac{U(\zeta,t)}{(x-\zeta)^{\alpha-1}}\, d\zeta.$$
Because it is a global operator, using the shifted Grünwald approximation [29] will result in a dense coefficient matrix after discretization, thus making the solution challenging.
On the other hand, the integro-differential part $\int_{-\infty}^{+\infty} U(x+y,t) f(y)\,dy$ also poses computational difficulties. If the trapezoidal method is used for discretization [15], it also results in a dense coefficient matrix. In summary, when designing a fast algorithm to solve the HJB equation, it is necessary to address the computational challenges posed by both the fractional derivative and the integro-differential operator. In addition, differing from Refs. [15,16], the probability density function $f(z)$ of $Y_i$ is described by the following Gaussian distribution formula [31]:
$$f(z) = \frac{e^{-(z-u_J)^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}.$$

3. Finite Difference Method with Theoretical Analysis

As mentioned above, since L U ( x , t ) in the HJB Equation (2) involves both a fractional derivative and an integral operator, finding the analytical solution to the HJB equation is challenging. Consequently, numerical methods become the primary approach for solving the aforementioned HJB equation. In this section, a finite difference method is developed to discretize the equation, which includes the shifted Grünwald approximation, an upwind scheme and the trapezoidal formulas.

3.1. Finite Difference Method

Let $N$ and $M$ be positive integers. Divide the spatial interval $[x_L, x_R]$ and the temporal interval $[0,T]$ into $N+1$ and $M$ sub-intervals, respectively. That is,
$$x_i = x_L + i h, \quad i = 0, 1, \ldots, N+1, \qquad t_m = T + m\tau, \quad m = 0, 1, \ldots, M,$$
where $h = \frac{x_R - x_L}{N+1}$ and $\tau = -T/M$, so that the time steps march backward from the maturity $T$. With the above finite difference mesh, the left fractional derivative ${}_{-\infty}D_x^{\alpha} U(x,t)$ can be discretized using the shifted Grünwald approximation [29], expressed as
$${}_{-\infty}D_x^{\alpha} U(x_i, t_m) = \frac{1}{h^{\alpha}} \sum_{k=0}^{i+1} g_k^{(\alpha)} U(x_{i-k+1}, t_m) + O(h),$$
for all $1 < \alpha < 2$. The sequence $\{g_k^{(\alpha)}\}$ is defined by
$$g_0^{(\alpha)} = 1, \qquad g_k^{(\alpha)} = \frac{(-1)^k}{k!}\,\alpha(\alpha-1)\cdots(\alpha-k+1), \quad k = 1, 2, 3, \ldots$$
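Numerically, the weights $g_k^{(\alpha)}$ are generated without factorials through the recurrence $g_k^{(\alpha)} = \bigl(1 - \frac{\alpha+1}{k}\bigr) g_{k-1}^{(\alpha)}$, which follows directly from the definition above. A short illustrative Python sketch (not part of the paper's implementation):

```python
def grunwald_weights(alpha, n):
    """Return [g_0, ..., g_n] for the shifted Grunwald approximation,
    using g_0 = 1 and the recurrence g_k = (1 - (alpha + 1)/k) * g_{k-1}."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append((1.0 - (alpha + 1.0) / k) * g[-1])
    return g

# Example with tail index alpha = 1.5 (a hypothetical value in (1, 2)).
g = grunwald_weights(1.5, 6)
```

The generated sequence can be checked against the properties collected later in Proposition 1: $g_0^{(\alpha)} = 1$, $g_1^{(\alpha)} = -\alpha$, the remaining weights are positive and decreasing, and the partial sums are negative.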
For the advection term, the upwind scheme [32] is applied, as given by
$$\frac{\partial U(x_i, t_m)}{\partial x} = \begin{cases} \dfrac{U(x_i + h, t_m) - U(x_i, t_m)}{h} + O(h), & \text{if } r - \nu - \tilde{\lambda}\xi \ge 0,\\[2mm] \dfrac{U(x_i, t_m) - U(x_i - h, t_m)}{h} + O(h), & \text{if } r - \nu - \tilde{\lambda}\xi < 0. \end{cases}$$
In addition, the integral part in (3) can be approximated by the trapezoidal formula. To simplify the notation, we denote $\Omega(\hat{u}, \hat{\sigma}, x)$ as
$$\Omega(\hat{u}, \hat{\sigma}, x) = 1 - F(x) = \int_{x}^{+\infty} \frac{e^{-(z-\hat{u})^2/(2\hat{\sigma}^2)}}{\sqrt{2\pi}\,\hat{\sigma}}\,dz,$$
where $F(x) = \int_{-\infty}^{x} \frac{e^{-(z-\hat{u})^2/(2\hat{\sigma}^2)}}{\sqrt{2\pi}\,\hat{\sigma}}\,dz$ denotes the cumulative distribution function of the normal distribution with expectation $\hat{u}$ and standard deviation $\hat{\sigma}$, evaluated at $x$.
To simplify the analysis, we assume that the truncated domain is sufficiently large; i.e., the left and right boundary conditions for the American call option are $0$ and $e^{x} - K$, respectively. As mentioned before, the value outside the domain satisfies $U(z,t) = \Psi(z,t)$ for $z \in (-\infty, x_L) \cup (x_R, \infty)$. Similar to the approach in [33], with the notation given in (7), when $x = x_i$ we have
$$\begin{aligned} \int_{-\infty}^{+\infty} U(x_i + z, t_m) f(z)\,dz &= \int_{-\infty}^{x_0 - x_i} U(x_i + z, t_m) f(z)\,dz + \int_{x_0 - x_i}^{x_{N+1} - x_i} U(x_i + z, t_m) f(z)\,dz + \int_{x_{N+1} - x_i}^{+\infty} U(x_i + z, t_m) f(z)\,dz\\ &= \int_{-\infty}^{x_0 - x_i} U(x_i + z, t_m) f(z)\,dz + \sum_{j=0}^{N} \int_{x_j - x_i}^{x_{j+1} - x_i} U(x_i + z, t_m) f(z)\,dz + \int_{x_{N+1} - x_i}^{+\infty} U(x_i + z, t_m) f(z)\,dz\\ &= \sum_{j=0}^{N} \frac{1}{2}\bigl[U(x_j, t_m) + U(x_{j+1}, t_m)\bigr] \int_{(j-i)h}^{(j-i+1)h} f(z)\,dz + \eta_i + O(h^2)\\ &= \sum_{j=0}^{N} \rho_{j-i}\bigl[U(x_j, t_m) + U(x_{j+1}, t_m)\bigr] + \eta_i + O(h^2), \end{aligned}$$
where the first integral vanishes because $U$ is zero to the left of the domain for the American call, $\eta_i$ collects the right-tail integral, and $\rho_j$ is defined as follows:
$$\rho_j = \frac{1}{2}\int_{jh}^{(j+1)h} f(z)\,dz = \frac{1}{2}\int_{jh}^{(j+1)h} \frac{e^{-(z-u_J)^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}\,dz = \frac{1}{2}\Omega(u_J, \sigma_J, jh) - \frac{1}{2}\Omega(u_J, \sigma_J, (j+1)h).$$
Additionally, η i varies based on the financial product in question. For the American option pricing, η i is defined as follows:
$$\begin{aligned} \eta_i &= \int_{x_{N+1} - x_i}^{+\infty} (e^{x_i + z} - K) f(z)\,dz = \int_{x_{N+1} - x_i}^{+\infty} (e^{x_i + z} - K)\, \frac{e^{-(z-u_J)^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}\,dz\\ &= e^{\frac{\sigma_J^2 + 2u_J + 2x_i}{2}} \int_{x_{N+1} - x_i}^{+\infty} \frac{e^{-(z-(u_J+\sigma_J^2))^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}\,dz - K \int_{x_{N+1} - x_i}^{+\infty} \frac{e^{-(z-u_J)^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}\,dz\\ &= e^{\frac{\sigma_J^2 + 2u_J + 2x_i}{2}}\, \Omega(u_J + \sigma_J^2, \sigma_J, x_{N+1} - x_i) - K\, \Omega(u_J, \sigma_J, x_{N+1} - x_i). \end{aligned}$$
Similarly, by assuming that the truncated domain for the stock loan is sufficiently large, the left and right boundary conditions are set to $0$ and $e^{x} - K e^{\gamma t}$, respectively. Thus, the integral $\int_{-\infty}^{+\infty} U(x_i + z, t_m) f(z)\,dz$ used in stock loan pricing can be discretized according to (8), and $\eta_i$ is given by
$$\eta_i = e^{\frac{\sigma_J^2 + 2u_J + 2x_i}{2}}\, \Omega(u_J + \sigma_J^2, \sigma_J, x_{N+1} - x_i) - K e^{\gamma t}\, \Omega(u_J, \sigma_J, x_{N+1} - x_i).$$
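Both $\Omega$ and the closed form of $\eta_i$ reduce to normal tail probabilities, which can be computed with the complementary error function. The following Python sketch (with hypothetical parameter values, not those of the paper's experiments) cross-checks the closed form of $\eta_i$ for the American option against a direct quadrature of its defining integral:

```python
import math

def Omega(u, sigma, x):
    """Normal tail probability: 1 - F(x) for a Normal(u, sigma^2) variable."""
    return 0.5 * math.erfc((x - u) / (sigma * math.sqrt(2.0)))

def rho(j, h, uJ, sJ):
    """rho_j = (1/2) * integral of the jump density over (j*h, (j+1)*h)."""
    return 0.5 * (Omega(uJ, sJ, j * h) - Omega(uJ, sJ, (j + 1) * h))

def eta_closed(xi, lower, K, uJ, sJ):
    """Closed form of eta_i, with lower = x_{N+1} - x_i."""
    return (math.exp((sJ ** 2 + 2.0 * uJ + 2.0 * xi) / 2.0)
            * Omega(uJ + sJ ** 2, sJ, lower)
            - K * Omega(uJ, sJ, lower))

def eta_quad(xi, lower, K, uJ, sJ, n=200000, span=10.0):
    """Midpoint-rule quadrature of the integral of (e^{xi+z} - K) f(z)
    over (lower, lower + span); the density is negligible beyond that."""
    dz = span / n
    total = 0.0
    for k in range(n):
        z = lower + (k + 0.5) * dz
        f = math.exp(-(z - uJ) ** 2 / (2.0 * sJ ** 2)) / (math.sqrt(2.0 * math.pi) * sJ)
        total += (math.exp(xi + z) - K) * f * dz
    return total

# Hypothetical parameters for illustration only.
uJ, sJ, K, xi, lower = 0.1, 0.3, 1.2, 0.5, 0.8
```

Since $2\rho_j$ is the probability mass of one mesh interval, the values $2\rho_j$ summed over a range covering essentially all of the density should be close to 1.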
Also, denote $c_1 = r - \nu - \tilde{\lambda}\xi$, $c_2 = \nu$, $c_3 = r + \tilde{\lambda}$, and $c_4 = \tilde{\lambda}$; combining (4), (6) and (8), the HJB Equation (2) can be discretized as
$$\min\{\tilde{\mathcal{L}} u_i^{m+1},\; u_i^{m+1} - u_i^{m+1,*}\} = 0,$$
where $u_i^m \approx U(x_i, t_m)$, $u_i^{m,*} = \Psi(x_i, t_m)$,
$$\tilde{\mathcal{L}} u_i^{m+1} = -\frac{u_i^{m+1} - u_i^{m}}{\tau} - c_1 \frac{u_{i+\theta}^{m+1} - u_{i-(1-\theta)}^{m+1}}{h} - \frac{c_2}{h^{\alpha}} \sum_{k=0}^{i+1} g_k^{(\alpha)} u_{i-k+1}^{m+1} + c_3 u_i^{m+1} - c_4 \left( \sum_{j=0}^{N} \rho_{j-i} \bigl[ u_j^{m+1} + u_{j+1}^{m+1} \bigr] + \eta_i \right),$$
and
$$\theta = \begin{cases} 1, & \text{if } c_1 \ge 0,\\ 0, & \text{if } c_1 < 0. \end{cases}$$

3.2. Matrix Form

To simplify the notation of the matrix form, we use $T_N$ to denote an $N$-by-$N$ Toeplitz matrix, which is defined by
$$T_N(t_{N-1}, \ldots, t_1;\; t_0;\; t_{-1}, \ldots, t_{1-N}) = \begin{pmatrix} t_0 & t_{-1} & \cdots & t_{2-N} & t_{1-N}\\ t_1 & t_0 & t_{-1} & \ddots & t_{2-N}\\ \vdots & t_1 & t_0 & \ddots & \vdots\\ t_{N-2} & \ddots & \ddots & \ddots & t_{-1}\\ t_{N-1} & t_{N-2} & \cdots & t_1 & t_0 \end{pmatrix}.$$
Define the vectors $\mathbf{u}^m = [u_1^m, u_2^m, \ldots, u_N^m]^T$, $\mathbf{u}^{m,*} = [u_1^{m,*}, u_2^{m,*}, \ldots, u_N^{m,*}]^T$, and $\mathbf{f}^m = [f_1^m, f_2^m, \ldots, f_N^m]^T$. Since $\tau < 0$, the condition $\tilde{\mathcal{L}} u_i^{m+1} \ge 0$ is equivalent to $-\tau \tilde{\mathcal{L}} u_i^{m+1} \ge 0$. Then, the matrix form of the numerical scheme (11), with the minimum taken componentwise, can be written as
$$\min\{ W \mathbf{u}^{m+1} - \mathbf{u}^{m} + \tau \mathbf{f}^{m+1},\; \mathbf{u}^{m+1} - \mathbf{u}^{m+1,*} \} = 0,$$
where
$$W = (1 - \tau c_3) I_N + \frac{\tau c_1}{h} D + \frac{\tau c_2}{h^{\alpha}} G + \tau c_4 S,$$
and $I_N$ is the $N \times N$ identity matrix; the other matrices in (14) can be expressed as
$$\begin{aligned} D &= T_N(0, \ldots, 0, \theta - 1;\; 1 - 2\theta;\; \theta, 0, \ldots, 0),\\ G &= T_N\bigl(g_N^{(\alpha)}, g_{N-1}^{(\alpha)}, \ldots, g_2^{(\alpha)};\; g_1^{(\alpha)};\; g_0^{(\alpha)}, 0, \ldots, 0\bigr),\\ S &= T_N(s_{1-N}, s_{2-N}, \ldots, s_{-1};\; s_0;\; s_1, s_2, \ldots, s_{N-1}), \end{aligned}$$
where
$$s_j = \frac{1}{2}\int_{(j-1)h}^{(j+1)h} f(z)\,dz = \frac{1}{2}\int_{(j-1)h}^{(j+1)h} \frac{e^{-(z-u_J)^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}\,dz = \frac{1}{2}\Omega(u_J, \sigma_J, (j-1)h) - \frac{1}{2}\Omega(u_J, \sigma_J, (j+1)h).$$
To simplify the notation, we assume that the truncated domain is large; then the specific form of the entry $f_i^{m+1}$ is given as follows:
$$f_i^{m+1} = \begin{cases} c_4\bigl(\eta_i + \rho_{N-i}\, u_{N+1}^{m+1}\bigr), & \text{if } i = 1, 2, \ldots, N-1,\\[1mm] c_4\bigl(\eta_N + \rho_0\, u_{N+1}^{m+1}\bigr) + \dfrac{c_2}{h^{\alpha}}\, u_{N+1}^{m+1} + \dfrac{c_1}{h}\, u_{N+1}^{m+1}, & \text{if } i = N \text{ and } c_1 \ge 0,\\[1mm] c_4\bigl(\eta_N + \rho_0\, u_{N+1}^{m+1}\bigr) + \dfrac{c_2}{h^{\alpha}}\, u_{N+1}^{m+1}, & \text{if } i = N \text{ and } c_1 < 0, \end{cases}$$
where $u_{N+1}^{m+1} = \Psi(x_{N+1}, t_{m+1})$ is the known right boundary value.
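To make the structure of $W$ concrete, the following Python sketch assembles the matrix entry by entry for hypothetical parameter values and checks numerically that it is an $M$-matrix obeying the row-dominance bound $|w_{i,i}| - \sum_{j \ne i} |w_{i,j}| \ge 1 + |\tau| r$ established later in Lemma 2 (illustrative code, not the paper's implementation):

```python
import math

def Omega(u, sigma, x):
    # Normal tail probability 1 - F(x).
    return 0.5 * math.erfc((x - u) / (sigma * math.sqrt(2.0)))

def build_W(N, h, tau, alpha, r, sigma, lam, uJ, sJ):
    """Assemble W = (1 - tau*c3) I + (tau*c1/h) D + (tau*c2/h^alpha) G + tau*c4 S,
    with tau < 0 (backward time step) and the Toeplitz structures of D, G, S."""
    nu = -0.5 * sigma ** alpha / math.cos(alpha * math.pi / 2.0)  # convexity adjustment
    xi = math.exp(uJ + sJ ** 2 / 2.0) - 1.0
    c1, c2, c3, c4 = r - nu - lam * xi, nu, r + lam, lam
    theta = 1 if c1 >= 0 else 0
    g = [1.0]                                   # Grunwald weights g_0..g_N
    for k in range(1, N + 1):
        g.append((1.0 - (alpha + 1.0) / k) * g[-1])
    def s(j):                                   # s_j from the jump density
        return 0.5 * (Omega(uJ, sJ, (j - 1) * h) - Omega(uJ, sJ, (j + 1) * h))
    W = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            w = tau * c4 * s(j - i)                      # integral (jump) part
            if i == j:
                w += 1.0 - tau * c3                      # identity/reaction part
                w += tau * c1 / h * (1.0 - 2.0 * theta)  # upwind diagonal
            if j == i + 1:
                w += tau * c1 / h * theta                # upwind super-diagonal
            if j == i - 1:
                w += tau * c1 / h * (theta - 1.0)        # upwind sub-diagonal
            if i - j + 1 >= 0:
                w += tau * c2 / h ** alpha * g[i - j + 1]  # fractional part
            W[i][j] = w
    return W

N, h, tau, r = 40, 0.05, -0.01, 0.05
W = build_W(N, h, tau, alpha=1.5, r=r, sigma=0.25, lam=0.3, uJ=0.1, sJ=0.4)
```

The diagonal should be positive, every off-diagonal entry non-positive, and each row strictly dominant with margin at least $1 + |\tau| r$.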

3.3. Stability and Convergence Analysis

Although the numerical scheme has been developed, theorems about its stability and convergence are still needed to ensure that the numerical solution can be obtained correctly. For the theoretical analysis, the following proposition is introduced.
Proposition 1
([34]). The sequence $\{g_k^{(\alpha)}\}$ defined in (5) has the following properties:
$$g_0^{(\alpha)} = 1, \quad g_1^{(\alpha)} = -\alpha < 0, \quad g_2^{(\alpha)} > g_3^{(\alpha)} > g_4^{(\alpha)} > \cdots > 0, \quad \sum_{k=0}^{\infty} g_k^{(\alpha)} = 0, \quad \sum_{k=0}^{n} g_k^{(\alpha)} < 0 \ \ \text{for } n \ge 1.$$
The above proposition provides the properties of the shifted Grünwald approximation. Additionally, one more lemma is needed to analyze the properties of the integral operator, which is given as follows:
Lemma 1.
With the density function $f(z)$, the entries of the matrix $S$ satisfy
$$0 < \sum_{j=1-N}^{N-1} s_j < 1.$$
Proof. 
By the definition of $s_j$, we have
$$\sum_{j=1-N}^{N-1} s_j = \frac{1}{2} \sum_{j=1-N}^{N-1} \left( \int_{(j-1)h}^{jh} \frac{e^{-(z-u_J)^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}\,dz + \int_{jh}^{(j+1)h} \frac{e^{-(z-u_J)^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}\,dz \right) = \frac{1}{2} \left( \int_{-Nh}^{(N-1)h} \frac{e^{-(z-u_J)^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}\,dz + \int_{(1-N)h}^{Nh} \frac{e^{-(z-u_J)^2/(2\sigma_J^2)}}{\sqrt{2\pi}\,\sigma_J}\,dz \right) < 1.$$
On the other hand, $2 s_j$ is equal to the probability that a variable $X$ following the normal distribution lies in the range $((j-1)h, (j+1)h)$, that is, $\Pr\bigl((j-1)h < X < (j+1)h\bigr)$. Therefore, $s_j$ is strictly greater than 0 for all $j$.    □
In the following analysis, the properties of the $M$-matrix play an important role. Therefore, the definition of the $M$-matrix is provided first. A matrix is called an $M$-matrix if it is a diagonally dominant matrix with all entries on the main diagonal being positive and all off-diagonal entries being non-positive [35]. To prove the stability and convergence of the numerical scheme, combining Proposition 1 and Lemma 1, the properties of the matrix $W$ are analyzed.
Lemma 2.
For $1 < \alpha < 2$, the matrix $W = [w_{i,j}]_{i,j=1}^{N}$ is an $M$-matrix, and satisfies
$$|w_{i,i}| - \sum_{j=1, j \ne i}^{N} |w_{i,j}| \ge 1 + |\tau| r.$$
Proof. 
By the definitions of $D$ and $G$, together with Proposition 1, it is straightforward to verify that the matrices $\frac{\tau c_1}{h} D$ and $\frac{\tau c_2}{h^{\alpha}} G$ are weakly diagonally dominant matrices with positive diagonals and non-positive off-diagonals. Let us denote $\tilde{W} = (1 - \tau c_3) I_N + \tau c_4 S$. Then, we obtain
$$|\tilde{w}_{i,i}| - \sum_{j=1, j \ne i}^{N} |\tilde{w}_{i,j}| \ge 1 - \tau c_3 + \tau c_4 s_0 - |\tau| c_4 \sum_{j=1-N, j \ne 0}^{N-1} |s_j| = 1 + |\tau| r + |\tau| c_4 - |\tau| c_4 \sum_{j=1-N}^{N-1} |s_j| \ge 1 + |\tau| r,$$
and
$$\tilde{w}_{i,j} = \begin{cases} 1 + |\tau| r + |\tau| c_4 - |\tau| c_4 s_0, & \text{for } i = j,\\ \tau c_4\, s_{j-i}, & \text{for } i \ne j. \end{cases}$$
Based on the entries of $\tilde{W}$ in (15), and given that $\tau c_4 s_{j-i} < 0$ and $|\tau| c_4 > |\tau| c_4 s_0$, $\tilde{W}$ is a diagonally dominant matrix with positive diagonals and negative off-diagonals. Since the sum of the matrices $\tilde{W}$, $\frac{\tau c_1}{h} D$, and $\frac{\tau c_2}{h^{\alpha}} G$ constitutes the matrix $W$, we know that $W$ is an $M$-matrix and satisfies $|w_{i,i}| - \sum_{j=1, j \ne i}^{N} |w_{i,j}| \ge 1 + |\tau| r$.    □
To prove that the numerical scheme is stable, we assume that $u_i^m$ and $\tilde{u}_i^m$ are both numerical solutions of the scheme (13). Denote $E_i^m = u_i^m - \tilde{u}_i^m$ and the vector $\mathbf{E}^m = [E_1^m, E_2^m, \ldots, E_N^m]^T$; then we have the following theorem.
Theorem 1.
The proposed nonlinear scheme (13) is unconditionally stable; that is,
$$\| \mathbf{E}^{m} \|_{\infty} \le \| \mathbf{E}^{0} \|_{\infty},$$
where $m = 1, 2, \ldots, M$.
Proof. 
Assume that the $i_0$-th entry of the vector $\mathbf{E}^m$ is such that $\|\mathbf{E}^m\|_{\infty} = |E_{i_0}^m| = \max_{1 \le i \le N} |E_i^m|$. We can then proceed with the proof as follows: since $u_i^m$ and $\tilde{u}_i^m$ are both numerical solutions of the scheme, we are given the following two equations:
$$\min\{ W \mathbf{u}^{m} - \mathbf{u}^{m-1} + \tau \mathbf{f}^{m},\; \mathbf{u}^{m} - \mathbf{u}^{m,*} \} = 0, \qquad \min\{ W \tilde{\mathbf{u}}^{m} - \tilde{\mathbf{u}}^{m-1} + \tau \mathbf{f}^{m},\; \tilde{\mathbf{u}}^{m} - \mathbf{u}^{m,*} \} = 0.$$
By assuming the existence of the solution, the proof can be discussed in four cases.
Case 1:
Assume that in the $i_0$-th row of the vectors $\mathbf{u}^m$ and $\tilde{\mathbf{u}}^m$, the two solutions satisfy $\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m = 0$ with $u_{i_0}^m \ge u_{i_0}^{m,*}$, and $\sum_{j=1}^{N} w_{i_0,j} \tilde{u}_j^m - \tilde{u}_{i_0}^{m-1} + \tau f_{i_0}^m = 0$ with $\tilde{u}_{i_0}^m \ge u_{i_0}^{m,*}$. Then we have the following equation:
$$0 = \sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m = \sum_{j=1}^{N} w_{i_0,j} \tilde{u}_j^m - \tilde{u}_{i_0}^{m-1} + \tau f_{i_0}^m + \sum_{j=1}^{N} w_{i_0,j} E_j^m - E_{i_0}^{m-1} = \sum_{j=1}^{N} w_{i_0,j} E_j^m - E_{i_0}^{m-1}.$$
With Lemma 2, this yields
$$\|\mathbf{E}^m\|_{\infty} = |E_{i_0}^m| \le \Bigl| \sum_{j=1}^{N} w_{i_0,j} E_j^m \Bigr| = |E_{i_0}^{m-1}| \le \|\mathbf{E}^{m-1}\|_{\infty}.$$
Case 2:
In the $i_0$-th row of the vectors $\mathbf{u}^m$ and $\tilde{\mathbf{u}}^m$, the two solutions satisfy $\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m \ge 0$ with $u_{i_0}^m = u_{i_0}^{m,*}$, and $\sum_{j=1}^{N} w_{i_0,j} \tilde{u}_j^m - \tilde{u}_{i_0}^{m-1} + \tau f_{i_0}^m = 0$ with $\tilde{u}_{i_0}^m \ge u_{i_0}^{m,*}$. Then, we have
$$E_{i_0}^m = u_{i_0}^m - \tilde{u}_{i_0}^m = u_{i_0}^{m,*} - \tilde{u}_{i_0}^m \le 0.$$
On the other hand, this leads to
$$\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m = \sum_{j=1}^{N} w_{i_0,j} \tilde{u}_j^m - \tilde{u}_{i_0}^{m-1} + \tau f_{i_0}^m + \sum_{j=1}^{N} w_{i_0,j} E_j^m - E_{i_0}^{m-1} = \sum_{j=1}^{N} w_{i_0,j} E_j^m - E_{i_0}^{m-1} \ge 0.$$
With Lemma 2, we have
$$E_{i_0}^{m-1} \le \sum_{j=1}^{N} w_{i_0,j} E_j^m \le E_{i_0}^m \le 0.$$
Thus,
$$\|\mathbf{E}^m\|_{\infty} \le \|\mathbf{E}^{m-1}\|_{\infty}.$$
Case 3:
In the $i_0$-th row of the vectors $\mathbf{u}^m$ and $\tilde{\mathbf{u}}^m$, the two solutions satisfy $\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m = 0$ with $u_{i_0}^m \ge u_{i_0}^{m,*}$, and $\sum_{j=1}^{N} w_{i_0,j} \tilde{u}_j^m - \tilde{u}_{i_0}^{m-1} + \tau f_{i_0}^m \ge 0$ with $\tilde{u}_{i_0}^m = u_{i_0}^{m,*}$. Then, we have
$$E_{i_0}^m = u_{i_0}^m - \tilde{u}_{i_0}^m = u_{i_0}^m - u_{i_0}^{m,*} \ge 0,$$
and
$$\sum_{j=1}^{N} w_{i_0,j} \tilde{u}_j^m - \tilde{u}_{i_0}^{m-1} + \tau f_{i_0}^m = \sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m - \sum_{j=1}^{N} w_{i_0,j} E_j^m + E_{i_0}^{m-1} = -\sum_{j=1}^{N} w_{i_0,j} E_j^m + E_{i_0}^{m-1} \ge 0.$$
Then,
$$0 \le E_{i_0}^m \le \sum_{j=1}^{N} w_{i_0,j} E_j^m \le E_{i_0}^{m-1}.$$
This leads to
$$\|\mathbf{E}^m\|_{\infty} \le \|\mathbf{E}^{m-1}\|_{\infty}.$$
Case 4:
In the $i_0$-th row of the vectors $\mathbf{u}^m$ and $\tilde{\mathbf{u}}^m$, the two solutions satisfy $\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m \ge 0$ with $u_{i_0}^m = u_{i_0}^{m,*}$, and $\sum_{j=1}^{N} w_{i_0,j} \tilde{u}_j^m - \tilde{u}_{i_0}^{m-1} + \tau f_{i_0}^m \ge 0$ with $\tilde{u}_{i_0}^m = u_{i_0}^{m,*}$. The following inequality can be obtained:
$$\|\mathbf{E}^m\|_{\infty} = |E_{i_0}^m| = |u_{i_0}^m - \tilde{u}_{i_0}^m| = 0 \le \|\mathbf{E}^{m-1}\|_{\infty}.$$
By combining the above four cases, we know that the proposed scheme (13) is unconditionally stable; that is,
$$\|\mathbf{E}^m\|_{\infty} \le \|\mathbf{E}^0\|_{\infty}.$$
   □
Theorem 2.
The proposed nonlinear scheme (13) is convergent with first order, meaning that
$$|u_i^m - U(x_i, t_m)| \le C (h + |\tau|),$$
where $i = 1, 2, \ldots, N$, $m = 1, 2, \ldots, M$, and $C$ is a positive constant.
Proof. 
Similar to the proof of Theorem 1, let $R_i^m$ denote the local truncation error of the scheme, which, by the accuracy of the discretization, satisfies $|R_i^m| \le \tilde{C}(h + |\tau|)$ for some constant $\tilde{C} > 0$. The solutions $u_i^m$ and $U(x_i, t_m)$ satisfy the following two equations:
$$\min\Bigl\{ \sum_{j=1}^{N} w_{i,j} u_j^m - u_i^{m-1} + \tau f_i^m + \tau R_i^m,\; u_i^m - u_i^{m,*} \Bigr\} = 0, \qquad \min\Bigl\{ \sum_{j=1}^{N} w_{i,j} U(x_j, t_m) - U(x_i, t_{m-1}) + \tau f_i^m,\; U(x_i, t_m) - u_i^{m,*} \Bigr\} = 0,$$
respectively. Denote $e_i^m = u_i^m - U(x_i, t_m)$ and $R_{\max} = \max_{i = 1, \ldots, N;\, m = 1, \ldots, M} |R_i^m|$; then we can start the analysis. Similarly to the proof of Theorem 1, by assuming that $\|\mathbf{e}^m\|_{\infty} = |e_{i_0}^m| = \max_{1 \le i \le N} |e_i^m|$, the proof can be completed by dividing it into four cases.
Case 1:
The two solutions satisfy $\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m + \tau R_{i_0}^m = 0$ with $u_{i_0}^m \ge u_{i_0}^{m,*}$, and $\sum_{j=1}^{N} w_{i_0,j} U(x_j, t_m) - U(x_{i_0}, t_{m-1}) + \tau f_{i_0}^m = 0$ with $U(x_{i_0}, t_m) \ge u_{i_0}^{m,*}$. Then we have the following equation:
$$0 = \sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m + \tau R_{i_0}^m = \sum_{j=1}^{N} w_{i_0,j} U(x_j, t_m) - U(x_{i_0}, t_{m-1}) + \tau f_{i_0}^m + \tau R_{i_0}^m + \sum_{j=1}^{N} w_{i_0,j} e_j^m - e_{i_0}^{m-1} = \tau R_{i_0}^m + \sum_{j=1}^{N} w_{i_0,j} e_j^m - e_{i_0}^{m-1}.$$
With Lemma 2, we have
$$\|\mathbf{e}^m\|_{\infty} = |e_{i_0}^m| \le \Bigl| \sum_{j=1}^{N} w_{i_0,j} e_j^m \Bigr| = |e_{i_0}^{m-1} - \tau R_{i_0}^m| \le \|\mathbf{e}^{m-1}\|_{\infty} + |\tau| R_{\max}.$$
Case 2:
The two solutions satisfy $\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m + \tau R_{i_0}^m \ge 0$ with $u_{i_0}^m = u_{i_0}^{m,*}$, and $\sum_{j=1}^{N} w_{i_0,j} U(x_j, t_m) - U(x_{i_0}, t_{m-1}) + \tau f_{i_0}^m = 0$ with $U(x_{i_0}, t_m) \ge u_{i_0}^{m,*}$. Then,
$$e_{i_0}^m = u_{i_0}^m - U(x_{i_0}, t_m) = u_{i_0}^{m,*} - U(x_{i_0}, t_m) \le 0.$$
On the other hand, this leads to
$$\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m + \tau R_{i_0}^m = \sum_{j=1}^{N} w_{i_0,j} U(x_j, t_m) - U(x_{i_0}, t_{m-1}) + \tau f_{i_0}^m + \tau R_{i_0}^m + \sum_{j=1}^{N} w_{i_0,j} e_j^m - e_{i_0}^{m-1} = \sum_{j=1}^{N} w_{i_0,j} e_j^m - e_{i_0}^{m-1} + \tau R_{i_0}^m \ge 0.$$
With Lemma 2, the following inequality can be obtained:
$$e_{i_0}^{m-1} - \tau R_{i_0}^m \le \sum_{j=1}^{N} w_{i_0,j} e_j^m \le e_{i_0}^m \le 0.$$
Thus,
$$\|\mathbf{e}^m\|_{\infty} \le \|\mathbf{e}^{m-1}\|_{\infty} + |\tau| R_{\max}.$$
Case 3:
The two solutions satisfy $\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m + \tau R_{i_0}^m = 0$ with $u_{i_0}^m \ge u_{i_0}^{m,*}$, and $\sum_{j=1}^{N} w_{i_0,j} U(x_j, t_m) - U(x_{i_0}, t_{m-1}) + \tau f_{i_0}^m \ge 0$ with $U(x_{i_0}, t_m) = u_{i_0}^{m,*}$. Then,
$$e_{i_0}^m = u_{i_0}^m - U(x_{i_0}, t_m) = u_{i_0}^m - u_{i_0}^{m,*} \ge 0,$$
and
$$\sum_{j=1}^{N} w_{i_0,j} U(x_j, t_m) - U(x_{i_0}, t_{m-1}) + \tau f_{i_0}^m = \sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m - \sum_{j=1}^{N} w_{i_0,j} e_j^m + e_{i_0}^{m-1} = -\sum_{j=1}^{N} w_{i_0,j} e_j^m + e_{i_0}^{m-1} - \tau R_{i_0}^m \ge 0.$$
Then,
$$0 \le e_{i_0}^m \le \sum_{j=1}^{N} w_{i_0,j} e_j^m \le e_{i_0}^{m-1} - \tau R_{i_0}^m.$$
This leads to
$$\|\mathbf{e}^m\|_{\infty} \le \|\mathbf{e}^{m-1}\|_{\infty} + |\tau| R_{\max}.$$
Case 4:
The two solutions satisfy $\sum_{j=1}^{N} w_{i_0,j} u_j^m - u_{i_0}^{m-1} + \tau f_{i_0}^m + \tau R_{i_0}^m \ge 0$ with $u_{i_0}^m = u_{i_0}^{m,*}$, and $\sum_{j=1}^{N} w_{i_0,j} U(x_j, t_m) - U(x_{i_0}, t_{m-1}) + \tau f_{i_0}^m \ge 0$ with $U(x_{i_0}, t_m) = u_{i_0}^{m,*}$. The following inequality can be obtained:
$$\|\mathbf{e}^m\|_{\infty} = |e_{i_0}^m| = |u_{i_0}^m - U(x_{i_0}, t_m)| = 0 \le \|\mathbf{e}^{m-1}\|_{\infty} + |\tau| R_{\max}.$$
By combining the above four cases, and noting that $\mathbf{e}^0 = 0$ and $m |\tau| \le T$, we have
$$\|\mathbf{e}^m\|_{\infty} \le \|\mathbf{e}^0\|_{\infty} + m |\tau| R_{\max} \le T R_{\max} \le C (h + |\tau|).$$
   □
With Theorems 1 and 2, the proposed numerical scheme has been shown to be stable and convergent. However, given its nature as a nonlinear finite difference scheme, a numerical algorithm is required to address this problem. Consequently, a preconditioned policy–Krylov subspace method is developed in the subsequent section.

4. Fast Policy–Krylov Subspace Iterative Method

Inspired by [28,36], this paper employs the policy iteration method to solve the discrete HJB equation. Within each policy iteration, a linear system must be solved. The coefficient matrix W being a Toeplitz matrix allows for the consideration of the Krylov subspace iterative method, accelerated with the fast Fourier transformation (FFT), as the inner iterative method for solving this linear system.

4.1. Policy Iteration Method

The component-wise form based on the HJB Equation (13) can be written as
$$\min_{\phi \in \{0,1\}} \bigl\{ \phi \, (W \mathbf{u}^{m+1} - \mathbf{u}^{m} + \tau \mathbf{f}^{m+1})_i + (1 - \phi)\, (\mathbf{u}^{m+1} - \mathbf{u}^{m+1,*})_i \bigr\} = 0,$$
where $(\mathbf{v})_i$ represents the $i$-th entry of any given vector $\mathbf{v}$ for $1 \le i \le N$, and $\phi$ is a control parameter that takes the value 0 or 1. The sequence $\{\mathbf{u}^{m+1,k}\}$, $k = 0, 1, 2, \ldots$, will converge to the solution $\mathbf{u}^{m+1}$. The framework of the policy iteration method [28], starting with the initial value $\mathbf{u}^{m+1,0}$, is presented in Algorithm 1. Similarly, let $(V)_i$ represent the $i$-th row of any given matrix $V$ for $1 \le i \le N$.
Algorithm 1 Policy iteration method
Let $\mathbf{u}^{m+1,k}$ be the $k$-th iterate of the policy iteration method for computing the solution $\mathbf{u}^{m+1}$, and let $\boldsymbol{\phi}^{m+1,k} \in \mathbb{R}^{N}$, $M^{m+1,k} \in \mathbb{R}^{N \times N}$, and $\mathbf{b}^{m+1,k} \in \mathbb{R}^{N}$. Find $\mathbf{u}^{m+1,k+1} \in \mathbb{R}^{N}$ such that
$$M^{m+1,k} \mathbf{u}^{m+1,k+1} = \mathbf{b}^{m+1,k},$$
where, for $1 \le i \le N$,
$$(M^{m+1,k})_i = (\boldsymbol{\phi}^{m+1,k})_i (W)_i + \bigl(1 - (\boldsymbol{\phi}^{m+1,k})_i\bigr) (I_N)_i, \qquad (\mathbf{b}^{m+1,k})_i = (\boldsymbol{\phi}^{m+1,k})_i (\mathbf{u}^{m} - \tau \mathbf{f}^{m+1})_i + \bigl(1 - (\boldsymbol{\phi}^{m+1,k})_i\bigr) (\mathbf{u}^{m+1,*})_i,$$
and
$$(\boldsymbol{\phi}^{m+1,k})_i = \arg\min_{\phi \in \{0,1\}} \bigl\{ \phi\, (W \mathbf{u}^{m+1,k} - \mathbf{u}^{m} + \tau \mathbf{f}^{m+1})_i + (1 - \phi)\, (\mathbf{u}^{m+1,k} - \mathbf{u}^{m+1,*})_i \bigr\}.$$
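Algorithm 1 can be illustrated on a generic obstacle problem $\min\{A\mathbf{u} - \mathbf{b},\; \mathbf{u} - \mathbf{u}^{*}\} = 0$ with a small M-matrix $A$. The toy Python sketch below (not the paper's pricing setup; a dense direct solver stands in for the Krylov method) selects each row's policy by the componentwise argmin, solves the resulting linear system, and stops once the policy no longer changes:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting (dense, for the small demo)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def policy_iteration(A, b, ustar, max_iter=50):
    """Howard-type policy iteration for min(A u - b, u - ustar) = 0."""
    n = len(b)
    u = ustar[:]              # initial guess: the obstacle itself
    phi = None
    for _ in range(max_iter):
        Au = [sum(A[i][j] * u[j] for j in range(n)) for i in range(n)]
        # Componentwise argmin over phi_i in {0, 1}.
        new_phi = [1 if Au[i] - b[i] <= u[i] - ustar[i] else 0 for i in range(n)]
        if new_phi == phi:
            break             # policy stable: u solves the obstacle problem
        phi = new_phi
        # Row i comes from A if phi_i = 1, from the identity otherwise.
        M = [[A[i][j] if phi[i] else float(i == j) for j in range(n)]
             for i in range(n)]
        rhs = [b[i] if phi[i] else ustar[i] for i in range(n)]
        u = solve(M, rhs)
    return u

# Toy M-matrix (tridiagonal, strictly diagonally dominant) and data.
n = 8
A = [[2.5 if i == j else (-1.0 if abs(i - j) == 1 else 0.0) for j in range(n)]
     for i in range(n)]
b = [0.2 * i for i in range(n)]
ustar = [1.0] * n
u = policy_iteration(A, b, ustar)
```

At termination, rows with $\phi_i = 1$ satisfy $(A\mathbf{u} - \mathbf{b})_i = 0$ with $u_i \ge u_i^{*}$, and rows with $\phi_i = 0$ satisfy $u_i = u_i^{*}$ with $(A\mathbf{u} - \mathbf{b})_i \ge 0$, which is exactly the discrete complementarity condition.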
After introducing the structure of the policy iteration method, an additional lemma is needed to prove that the exact solution can be obtained in a finite number of steps.
Lemma 3.
The coefficient matrix $M^{m,k}$ is invertible and satisfies
$$\| (M^{m,k})^{-1} \|_{\infty} \le 1.$$
Proof. 
From Lemma 2, we know that the matrix $W$ is an $M$-matrix with $|w_{i,i}| - \sum_{j=1, j \ne i}^{N} |w_{i,j}| \ge 1 + |\tau| r$. The identity matrix $I_N$ satisfies $|i_{i,i}| - \sum_{j=1, j \ne i}^{N} |i_{i,j}| = 1$. The coefficient matrix $M^{m,k}$ is composed by selecting rows from the identity matrix $I_N$ and the matrix $W$. Since both $I_N$ and $W$ are $M$-matrices, $M^{m,k}$ is also an $M$-matrix and satisfies
$$|m_{i,i}| - \sum_{j=1, j \ne i}^{N} |m_{i,j}| \ge 1.$$
Referring to Theorem 1 in [37], it can then be proven that $\| (M^{m,k})^{-1} \|_{\infty} \le 1$. □
Based on the properties outlined in Lemma 3 and referring to [28], the policy iteration method can be shown to converge to the solution after a finite number of steps.

4.2. Fast Krylov Subspace Method

With the policy iteration method, the discrete HJB equation can be solved. However, a linear system exists within each policy iteration that needs to be solved. With Algorithm 1, the coefficient matrix can be expressed as follows:
$$M^{m,k} = \Phi^{m,k} W + (I_N - \Phi^{m,k}),$$
where $\Phi^{m,k} = \operatorname{diag}(\phi_1^{m,k}, \phi_2^{m,k}, \ldots, \phi_N^{m,k})$ is a diagonal matrix whose $i$-th diagonal entry is the control parameter $\phi_i^{m,k}$. Similar to the case in [38], the fast Krylov subspace method can be used to solve this linear system, given the Toeplitz structure of the matrix $W$. Hence, in this paper, the fast policy–Krylov subspace iterative method is utilized to solve the discrete HJB equation. It is noteworthy that the operational cost of each iteration of the Krylov subspace method is only $O(N \log N)$, where $N$ is the matrix size.
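The $O(N \log N)$ cost per iteration rests on the fact that a Toeplitz matrix–vector product can be computed by embedding the Toeplitz matrix into a circulant matrix of twice the size, which the FFT diagonalizes. A self-contained Python sketch (using a textbook radix-2 FFT in place of a library routine; the matrix entries are arbitrary illustrative values):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    sign = 1j if invert else -1j
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2.0 * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def toeplitz_matvec(col, row, x):
    """y = T x, where T has first column `col` and first row `row` (row[0] == col[0]).
    T is embedded into a 2N x 2N circulant, diagonalized by the FFT."""
    n = len(x)
    c = list(col) + [0.0] + list(reversed(row[1:]))   # first column of the circulant
    xp = list(x) + [0.0] * n                          # zero-padded input
    fc = fft([complex(v) for v in c])
    fx = fft([complex(v) for v in xp])
    y = fft([fc[i] * fx[i] for i in range(2 * n)], invert=True)
    return [y[i].real / (2 * n) for i in range(n)]

# Check against the naive O(N^2) product for a small example.
n = 8
col = [1.0, 0.5, 0.25, 0.1, 0.05, 0.02, 0.01, 0.005]        # t_0, t_1, ..., t_{n-1}
row = [1.0, -0.3, -0.2, -0.1, -0.05, -0.02, -0.01, -0.005]  # t_0, t_{-1}, ...
x = [float(i + 1) for i in range(n)]
y_fast = toeplitz_matvec(col, row, x)
y_naive = [sum((col[i - j] if i >= j else row[j - i]) * x[j] for j in range(n))
           for i in range(n)]
```

The same embedding is what allows each Krylov iteration with the Toeplitz matrix $W$ to run in $O(N \log N)$ time instead of $O(N^2)$.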

5. Preconditioning Technique

Although the preconditioning method proposed in [38] could be utilized to enhance the efficiency of the Krylov subspace method discussed in this paper, faster and more stable algorithms remain desirable. Consequently, drawing on the preconditioning approach of Ref. [39], we design a new banded preconditioner in this section.

5.1. Banded Preconditioner

Define the matrix $P_l^{m,k}$ as a banded matrix that approximates the coefficient matrix $M^{m,k}$ with $2l - 1$ bands. In this paper, we consider the situation where $l \ge 2$. The composition of the matrix $P_l^{m,k}$ is structured as
$$P_l^{m,k} = \Phi^{m,k} \Bigl[ (1 - \tau c_3) I_N + \frac{\tau c_1}{h} D + \frac{\tau c_2}{h^{\alpha}} (G_T + G_D) + \tau c_4 (S_T + S_D) \Bigr] + (I_N - \Phi^{m,k}),$$
where
$$\begin{aligned} G_T &= T_N\bigl(0, \ldots, 0, g_l^{(\alpha)}, g_{l-1}^{(\alpha)}, \ldots, g_2^{(\alpha)};\; g_1^{(\alpha)};\; g_0^{(\alpha)}, 0, \ldots, 0\bigr),\\ G_D &= \operatorname{diag}\Bigl(\underbrace{0, \ldots, 0}_{l},\; g_{l+1}^{(\alpha)},\; g_{l+1}^{(\alpha)} + g_{l+2}^{(\alpha)},\; \ldots,\; \sum_{j=l+1}^{N} g_j^{(\alpha)}\Bigr),\\ S_T &= T_N(0, \ldots, 0, s_{1-l}, s_{2-l}, \ldots, s_{-1};\; s_0;\; s_1, \ldots, s_{l-2}, s_{l-1}, 0, \ldots, 0),\\ S_D &= \operatorname{diag}\Bigl(\underbrace{0, \ldots, 0}_{l},\; s_{-l},\; s_{-l-1} + s_{-l},\; \ldots,\; \sum_{j=1-N}^{-l} s_j\Bigr) + \operatorname{diag}\Bigl(\sum_{j=l}^{N-1} s_j,\; \sum_{j=l}^{N-2} s_j,\; \ldots,\; s_l,\; \underbrace{0, \ldots, 0}_{l}\Bigr). \end{aligned}$$
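The diagonal matrices $G_D$ and $S_D$ lump each row's truncated tail entries onto the diagonal, so that $G_T + G_D$ and $S_T + S_D$ preserve the row sums of $G$ and $S$ exactly. The following Python sketch verifies this for the $S$ part with hypothetical parameters (the Toeplitz convention entry $(i,j) = s_{j-i}$ is assumed):

```python
import math

def Omega(u, sigma, x):
    # Normal tail probability 1 - F(x).
    return 0.5 * math.erfc((x - u) / (sigma * math.sqrt(2.0)))

def s_entry(j, h, uJ, sJ):
    return 0.5 * (Omega(uJ, sJ, (j - 1) * h) - Omega(uJ, sJ, (j + 1) * h))

def build_S(N, h, uJ, sJ):
    return [[s_entry(j - i, h, uJ, sJ) for j in range(N)] for i in range(N)]

def build_banded_S(N, l, h, uJ, sJ):
    """S_T + S_D: keep entries with |j - i| <= l - 1; each truncated tail
    entry of a row is added to that row's diagonal instead."""
    P = [[0.0] * N for _ in range(N)]
    for i in range(N):
        for j in range(N):
            v = s_entry(j - i, h, uJ, sJ)
            if abs(j - i) <= l - 1:
                P[i][j] += v          # banded part S_T
            else:
                P[i][i] += v          # lumped onto the diagonal: S_D
    return P

N, l, h, uJ, sJ = 30, 4, 0.1, 0.05, 0.4
S = build_S(N, h, uJ, sJ)
P = build_banded_S(N, l, h, uJ, sJ)
```

Because row sums are preserved, the banded approximation inherits the diagonal-dominance structure of the full matrix, which is what makes the invertibility and conditioning analysis below go through.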
After proposing the banded preconditioner, it becomes necessary to further analyze its properties, such as the invertibility of the preconditioning matrix and the condition number of the preconditioned matrix. To simplify the subsequent analysis, we drop the indices $m$ and $k$; that is, we write $P_l^{-1} M$ for $(P_l^{m,k})^{-1} M^{m,k}$.

5.2. Properties of the Preconditioned Matrix

To ensure the proposed preconditioner is feasible, a theorem is required to prove that P l is invertible. The theorem is stated as follows:
Theorem 3.
The proposed preconditioner $P_l$ is invertible and satisfies
$$\left\| P_l^{-1} \right\| \le 1.$$
Proof. 
Based on the structure of the matrix $P_l$ as described in (20), and a proof similar to that of Lemma 3, it can easily be demonstrated that $P_l$ is an $M$-matrix and satisfies $\| P_l^{-1} \| \le 1$. □
With the above theorem, we can ascertain that the preconditioner $P_l$ is feasible. Since $P_l$ is a banded matrix, the computational cost of its LU factorization is $\mathcal{O}(l^2 N)$, and the cost of the matrix–vector product $P_l^{-1}\mathbf{v}$ is $\mathcal{O}(lN)$ for any given vector $\mathbf{v}$. With the proposed preconditioning technique, the operational cost of the Krylov subspace iterative method therefore remains $\mathcal{O}(N \log N)$ per iteration when $l \ll N$. However, the total cost of the preconditioned Krylov subspace method still depends on the number of iterations. In order to prove that the proposed preconditioner improves the convergence rate of the Krylov subspace method, the following analysis examines the condition number of the preconditioned matrix, defined as follows:
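The $\mathcal{O}(l^2 N)$ factorization and $\mathcal{O}(lN)$ solve costs are exactly what LAPACK's banded LU provides. A sketch using SciPy's `solve_banded` (an illustration of the cost argument; `P` is passed densely here only for readability):

```python
import numpy as np
from scipy.linalg import solve_banded

def apply_banded_inverse(P, l, v):
    """Solve P y = v for a matrix with l - 1 sub- and super-diagonals
    (2l - 1 bands in total) via LAPACK's banded LU: O(l^2 N) for the
    factorization and O(l N) per right-hand side.
    `ab` is the LAPACK band storage ab[u + i - j, j] = P[i, j] with u = l - 1.
    """
    n = len(v)
    m = l - 1                      # number of sub/super-diagonals
    ab = np.zeros((2 * m + 1, n))
    for k in range(-m, m + 1):     # copy diagonal k into band storage
        if k >= 0:
            ab[m - k, k:] = np.diag(P, k)
        else:
            ab[m - k, :n + k] = np.diag(P, k)
    return solve_banded((m, m), ab, v)
```

In a production solver the band-storage array would be built directly from the Toeplitz coefficients and factored once per policy iteration.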
$$\kappa(P_l^{-1} M) = \left\| P_l^{-1} M \right\| \left\| (P_l^{-1} M)^{-1} \right\|.$$
If the coefficient matrix is ill-conditioned, that is, if its condition number is large, the fast Krylov subspace method converges slowly. Hence, in the following part, we prove that the condition number $\kappa(P_l^{-1} M)$ has an upper bound, so that a relatively fast convergence rate of the preconditioned Krylov subspace method can be expected. The following lemmas are provided to support the analysis:
Lemma 4
([40]). For $\alpha \in (1,2)$, we have
$$\sum_{j=l}^{\infty} g_j^{(\alpha)} \le \bar{C}\, l^{-\alpha}, \quad \text{for } l \ge 2,$$
with $\bar{C}$ being a positive constant.
Lemma 5.
If $u_J = 0$ and $\sigma_J = \sqrt{Ch}$, the entries $s_j$ satisfy
$$\sum_{j=l}^{N-1} s_{-j} + \sum_{j=l}^{N-1} s_j \le \frac{C}{(l-1)^2 h},$$
where $C$ is a positive constant. In the more general case, when $\sigma_J$ is not specified, one obtains:
$$\sum_{j=l}^{N-1} s_{-j} + \sum_{j=l}^{N-1} s_j \le \frac{2\sigma_J}{(l-1) h \sqrt{2\pi}}.$$
Proof. 
When $u_J = 0$ and $\sigma_J = \sqrt{Ch}$, with Chebyshev's inequality, we have
$$
\begin{aligned}
\sum_{j=l}^{N-1} s_{-j} + \sum_{j=l}^{N-1} s_j
&= \frac{1}{2\sqrt{2\pi}\,\sigma_J}\int_{lh}^{Nh} e^{-\frac{z^2}{2\sigma_J^2}}\,dz
 + \frac{1}{2\sqrt{2\pi}\,\sigma_J}\int_{(l-1)h}^{(N-1)h} e^{-\frac{z^2}{2\sigma_J^2}}\,dz \\
&\quad + \frac{1}{2\sqrt{2\pi}\,\sigma_J}\int_{-Nh}^{-lh} e^{-\frac{z^2}{2\sigma_J^2}}\,dz
 + \frac{1}{2\sqrt{2\pi}\,\sigma_J}\int_{(1-N)h}^{(1-l)h} e^{-\frac{z^2}{2\sigma_J^2}}\,dz \\
&\le \frac{1}{\sqrt{2\pi}\,\sigma_J}\int_{(l-1)h}^{+\infty} e^{-\frac{z^2}{2\sigma_J^2}}\,dz
 + \frac{1}{\sqrt{2\pi}\,\sigma_J}\int_{-\infty}^{(1-l)h} e^{-\frac{z^2}{2\sigma_J^2}}\,dz \\
&= P\big(|z| \ge (l-1)h\big) \le \frac{\sigma_J^2}{(l-1)^2 h^2} = \frac{C}{(l-1)^2 h}.
\end{aligned}
$$
Next, we proceed to prove the scenario where $\sigma_J$ is unspecified. Based on the properties of the Q-function, we derive the following inequality:
$$\sum_{j=l}^{N-1} s_{-j} + \sum_{j=l}^{N-1} s_j \le \frac{2}{\sqrt{2\pi}\,\sigma_J}\int_{(l-1)h}^{+\infty} e^{-\frac{z^2}{2\sigma_J^2}}\,dz = 2\,Q\!\left(\frac{(l-1)h}{\sigma_J}\right) \le \frac{2\sigma_J}{(l-1)h\sqrt{2\pi}}\, e^{-\frac{(l-1)^2 h^2}{2\sigma_J^2}} \le \frac{2\sigma_J}{(l-1)h\sqrt{2\pi}}. \qquad \Box$$
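The chain of inequalities above relies on the standard Gaussian tail bound $Q(t) \le \frac{1}{t\sqrt{2\pi}} e^{-t^2/2}$ for $t > 0$. A quick numerical spot-check of the bound in the form used here (a sanity check only, not part of the proof):

```python
import math

def Q(t):
    """Gaussian tail probability Q(t) = P(Z > t) for standard normal Z."""
    return 0.5 * math.erfc(t / math.sqrt(2.0))

def tail_bound(x, sigma):
    """Intermediate bound on 2*Q(x/sigma) from the display above,
    with x playing the role of (l-1)*h:
    2*sigma/(x*sqrt(2*pi)) * exp(-x^2/(2*sigma^2))."""
    return 2.0 * sigma / (x * math.sqrt(2.0 * math.pi)) * math.exp(-x * x / (2.0 * sigma * sigma))
```

The bound tightens rapidly as $(l-1)h/\sigma_J$ grows, which is why even a small bandwidth $l$ captures almost all of the jump kernel's mass.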
To analyze the condition number of the preconditioned matrix, an additional assumption is needed: there exists a positive constant $\tilde{C}$ such that
$$\frac{|\tau|}{h^{\alpha}} \le \tilde{C}.$$
With the above assumption, it will be shown that the condition number of the preconditioned matrix $P_l^{-1} M$ has an upper bound. Thus, a relatively fast convergence rate of the preconditioned Krylov subspace method can be expected.
Theorem 4.
If $u_J = 0$ and $\sigma_J = \sqrt{Ch}$, the preconditioned matrix satisfies
$$\kappa(P_l^{-1} M) \le \left[1 + \frac{2 c_2 \bar{C} \tilde{C}}{(l+1)^{\alpha}} + \frac{2 c_4 C \tilde{C}}{(l-1)^2}\right]^2.$$
Further, if $u_J = 0$, the condition number of the preconditioned matrix satisfies
$$\kappa(P_l^{-1} M) \le \left[1 + \frac{2 c_2 \bar{C} \tilde{C}}{(l+1)^{\alpha}} + \frac{4 \sigma_J c_4 \tilde{C}}{(l-1)\sqrt{2\pi}}\right]^2.$$
Proof. 
It is straightforward that
$$P_l^{-1} M = I_N + P_l^{-1} M_R,$$
where
$$
\begin{aligned}
M_R &= M - P_l = \Phi\left[\frac{\tau c_2}{h^{\alpha}}\,(G - G_T - G_D) + \tau c_4\,(S - S_T - S_D)\right] \\
&= \Phi\left[\frac{\tau c_2}{h^{\alpha}}\, T_N\big(g_N^{(\alpha)}, \ldots, g_{l+2}^{(\alpha)}, g_{l+1}^{(\alpha)}, 0, \ldots, 0;\ 0;\ 0, 0, \ldots, 0\big) - \frac{\tau c_2}{h^{\alpha}}\, G_D\right] \\
&\quad + \Phi\left[\tau c_4\, T_N\big(s_{1-N}, s_{2-N}, \ldots, s_{-l}, 0, \ldots, 0;\ 0;\ 0, \ldots, 0, s_l, s_{l+1}, \ldots, s_{N-1}\big) - \tau c_4\, S_D\right].
\end{aligned}
$$
Hence,
$$\|M_R\| \le \frac{2 |\tau| c_2}{h^{\alpha}} \sum_{j=l+1}^{N} g_j^{(\alpha)} + |\tau| c_4 \sum_{j=l}^{N-1} s_{-j} + |\tau| c_4 \sum_{j=l}^{N-1} s_j + |\tau| c_4 \|S_D\|.$$
With Lemmas 4 and 5, when $u_J = 0$ and $\sigma_J = \sqrt{Ch}$, we have
$$\|M_R\| \le \frac{2 c_2 \bar{C} \tilde{C}}{(l+1)^{\alpha}} + \frac{2 c_4 C \tilde{C}}{(l-1)^2}.$$
Thus, with Theorem 3, we have
$$\|P_l^{-1} M\| = \|I_N + P_l^{-1} M_R\| \le 1 + \|P_l^{-1}\| \|M_R\| \le 1 + \frac{2 c_2 \bar{C} \tilde{C}}{(l+1)^{\alpha}} + \frac{2 c_4 C \tilde{C}}{(l-1)^2}.$$
Similarly,
$$\|(P_l^{-1} M)^{-1}\| = \|M^{-1} P_l\| = \|I_N + M^{-1}(P_l - M)\| \le 1 + \|M^{-1}\| \|M_R\| \le 1 + \frac{2 c_2 \bar{C} \tilde{C}}{(l+1)^{\alpha}} + \frac{2 c_4 C \tilde{C}}{(l-1)^2}.$$
Therefore,
$$\kappa(P_l^{-1} M) = \|P_l^{-1} M\|\, \|(P_l^{-1} M)^{-1}\| \le \left[1 + \frac{2 c_2 \bar{C} \tilde{C}}{(l+1)^{\alpha}} + \frac{2 c_4 C \tilde{C}}{(l-1)^2}\right]^2.$$
On the other hand, by Lemma 5, if $u_J = 0$, then
$$\|M_R\| \le \frac{2 c_2 \bar{C} \tilde{C}}{(l+1)^{\alpha}} + \frac{4 \sigma_J c_4 \tilde{C}}{(l-1)\sqrt{2\pi}},$$
and
$$\kappa(P_l^{-1} M) = \|P_l^{-1} M\|\, \|(P_l^{-1} M)^{-1}\| \le \left[1 + \frac{2 c_2 \bar{C} \tilde{C}}{(l+1)^{\alpha}} + \frac{4 \sigma_J c_4 \tilde{C}}{(l-1)\sqrt{2\pi}}\right]^2. \qquad \Box$$
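The qualitative behavior predicted by Theorem 4 can be observed on a small toy analogue of $M$: a shifted-Grünwald Toeplitz matrix with the same banding-plus-compensation preconditioner. The following sketch uses hypothetical scalings and the 2-norm condition number, purely as an illustration, not the paper's actual matrices:

```python
import numpy as np

def grunwald_coeffs(alpha, n):
    """Grunwald weights g_j = (-1)^j * binom(alpha, j) via the recurrence
    g_0 = 1, g_j = (1 - (alpha + 1)/j) * g_{j-1}."""
    g = np.empty(n)
    g[0] = 1.0
    for j in range(1, n):
        g[j] = (1.0 - (alpha + 1.0) / j) * g[j - 1]
    return g

def toy_matrices(alpha, N, l, c=0.5):
    """Toy analogues of M and P_l: M = I - c*T with T the shifted-Grunwald
    Toeplitz matrix T[i, j] = g_{i-j+1}; P_l keeps the bands |i - j| < l
    of T and folds each row's truncated tail into the diagonal."""
    g = grunwald_coeffs(alpha, N + 1)
    T = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            k = i - j + 1
            if 0 <= k <= N:
                T[i, j] = g[k]
    B = np.zeros((N, N))
    for i in range(N):
        tail = 0.0
        for j in range(N):
            if abs(i - j) < l:
                B[i, j] = T[i, j]
            else:
                tail += T[i, j]
        B[i, i] += tail
    return np.eye(N) - c * T, np.eye(N) - c * B
```

Computing `np.linalg.cond` of $P_l^{-1}M$ for increasing $l$ shows the condition number dropping toward 1, in line with the $(l+1)^{-\alpha}$ and $(l-1)^{-2}$ decay of the bound.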

6. Numerical Experiments

In this section, two examples are given to demonstrate the accuracy of the considered model and the effectiveness of the proposed preconditioned iterative method. All numerical experiments are carried out in Matlab 2023a on a machine with an Intel(R) Core(TM) i9-10900K CPU at 3.70 GHz and 64 GB of RAM. The stopping criterion for the policy iteration method is set to $10^{-9}$, while the stopping criterion for the Krylov subspace method is $10^{-12}$. The generalized minimal residual (GMRES) method [41] is chosen to represent the Krylov subspace method. The initial guess for the GMRES method is the zero vector, whereas the initial guess for the policy iteration method is the numerical solution from the previous time step. The GMRES method is restarted every 20 iterations. To simplify the notation in the following section, let $\tilde{N} = N + 1$.
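The solver settings above (restart length 20, tight residual tolerance, zero initial guess) can be illustrated with a minimal restarted GMRES in NumPy. This is a textbook sketch of the Saad–Schultz algorithm [41], not the Matlab implementation used in the experiments:

```python
import numpy as np

def gmres_restarted(matvec, b, x0, restart=20, tol=1e-12, maxcycles=50):
    """Minimal restarted GMRES. `matvec` applies the (preconditioned)
    coefficient matrix; the Arnoldi process is restarted every `restart`
    steps, as in the experiments."""
    x = x0.copy()
    for _ in range(maxcycles):
        r = b - matvec(x)
        beta = np.linalg.norm(r)
        if beta <= tol * np.linalg.norm(b):
            return x
        m = restart
        # Arnoldi: build orthonormal Krylov basis V and Hessenberg matrix H.
        V = np.zeros((len(b), m + 1))
        H = np.zeros((m + 1, m))
        V[:, 0] = r / beta
        for j in range(m):
            w = matvec(V[:, j])
            for i in range(j + 1):
                H[i, j] = V[:, i] @ w
                w -= H[i, j] * V[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:       # lucky breakdown
                m = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # Minimize ||beta*e1 - H y|| and update the iterate.
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:m + 1, :m], e1, rcond=None)
        x = x + V[:, :m] @ y
    return x
```

In the experiments, `matvec` would be the FFT-based Toeplitz product, composed with the banded (or circulant) preconditioner solve.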

6.1. American Call Option

In the first example, we consider an American option pricing problem. To test the accuracy of the considered model, i.e., the FMLSJ model, for option pricing in the real financial market, it is used to price the sugar call option contract SR403 on 23 November 2023 on the Zhengzhou Commodity Exchange. Additionally, the Black–Scholes (BS) model and the finite moment log stable (FMLS) model are used as comparative models for pricing this option. In this test, $r = 2.561\%$, derived from the annual Shibor rate on the trading day, $T = 56/179$, $x_l = \ln(1)$, $x_r = \ln(20{,}000)$, the strike prices are $K = [6500, 6600, 6700, 6800, 6900, 7000, 7100]$, and the option prices in the market are $C = [300.5, 219.5, 167, 117, 76.5, 49, 33.5]$. Note that the right preconditioner is used in these numerical experiments.
Let $\Theta$ denote the set of model parameters, and let $V_{model,i}$ and $V_{market,i}$ be the $i$-th option values from the model and the market, respectively. The particle swarm optimization algorithm is then used to obtain the model's parameters, with the objective function being
$$\arg\min_{\Theta}\ \max_{i}\ \frac{\left| V_{model,i}(\Theta) - V_{market,i} \right|}{V_{market,i}}.$$
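For concreteness, the worst-case relative-error objective can be written as a plain function of the two price vectors; the PSO search itself, and the pricing solve that produces $V_{model,i}(\Theta)$, are omitted here (illustration only):

```python
import numpy as np

def calibration_objective(model_prices, market_prices):
    """Worst-case relative pricing error used as the calibration objective:
    max_i |V_model,i - V_market,i| / V_market,i.
    `model_prices` would come from solving the pricing problem for a
    candidate parameter set Theta; here they are plain arrays."""
    model_prices = np.asarray(model_prices, dtype=float)
    market_prices = np.asarray(market_prices, dtype=float)
    return np.max(np.abs(model_prices - market_prices) / market_prices)
```

`calibration_objective(model_prices(Theta), C)` is the quantity the particle swarm minimizes over $\Theta$.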
The specific values of these parameters and the mean square error (MSE) from different model pricing are shown in Table 1.
From Table 1, it is evident that the MSE of the FMLSJ model is the smallest, being only about one-third of the error compared to the other two models. To further distinguish the performance of different models in terms of pricing errors, Figure 1 is provided.
Figure 1a offers a comparison of the three models against the actual option prices. From the figure, it can be observed that the prices calculated via the FMLS model and the BS model are relatively similar, which explains why their MSEs are close to each other. In contrast, the FMLSJ model’s prices are closer to the actual prices. Figure 1b presents the relative errors under different option prices. This figure also demonstrates that the errors of the FMLS model and the BS model are very similar. Meanwhile, the relative errors of the FMLSJ model are all below 7 % , indicating superior performance compared to the first two models.
In the second numerical test of this example, the parameters listed in Table 1, obtained from the option contract SR403, are used to assess the performance of the proposed fast solution strategy. The other parameters are set as follows: $r = 2.561\%$, $T = 1$, $x_{\max} = \ln(100)$, $x_{\min} = \ln(0.1)$, $K = 50$. Additionally, to evaluate the effectiveness of the banded preconditioner used in this paper, the preconditioner based on the circulant approximation from [38] is employed for comparison. Specifically, this preconditioner is defined as
$$P_s = \Phi\left[(1 - \tau c_3) I_N + \frac{\tau c_1}{h}\, D + \frac{\tau c_2}{h^{\alpha}}\, s(G) + \tau c_4\, s(S)\right] + (I_N - \Phi) I_N,$$
where $s(\cdot)$ denotes Strang's circulant approximation.
For simplicity, the following text will use the notation GMRES to refer to the GMRES algorithm without preconditioning, SGMRES to denote the GMRES algorithm with the circulant preconditioner (21), and BGMRES(l) to indicate the GMRES method preconditioned by the banded preconditioner with $2l-1$ bands. Additionally, in the following tables, "Error" represents the infinity norm of the difference between the numerical solution and the exact solution, "Rate" represents the convergence rate, "Iter-Out" indicates the average number of policy iterations per time level, and "Iter-In" refers to the average number of Krylov subspace iterations per policy iteration. Note that the numerical solution on a fine grid with $\tilde{N} = 2^{16}$ and $M = 2^{12}$ is regarded as the exact solution since the analytical solution is unknown.
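Since the grids in the tables are refined by a factor of two between consecutive rows, the reported "Rate" column can be reproduced from successive errors as $\log_2(e_{2h}/e_h)$. A small helper capturing this convention (our reconstruction, which matches the Table 2 values up to rounding):

```python
import math

def convergence_rates(errors):
    """Observed convergence order: rate_k = log2(e_{k-1} / e_k), assuming
    the spatial and temporal grids are both refined by a factor of two
    between consecutive rows of the table."""
    return [math.log2(errors[k - 1] / errors[k]) for k in range(1, len(errors))]
```

Rates clustering around 1 confirm the first-order convergence established by the scheme's analysis.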
From Table 2, it is evident that since the outer iterations all employ the policy iteration method, the number of outer iterations remains the same across solvers, as do the errors and the convergence order. Meanwhile, Table 3 demonstrates that the banded preconditioning method performs better than circulant preconditioning. Additionally, when comparing different values of $l$, BGMRES(4) achieves good results with the lowest CPU time, whereas $l = 7$ and $l = 13$ do not further reduce the number of iterations in this example and lead to higher CPU times because applying the inverse of a wider banded preconditioner is more expensive. Therefore, $l = 4$ is sufficient to achieve good results, both in terms of the number of iterations and CPU time.
Additionally, to verify the effectiveness of Theorem 4, the condition numbers of the preconditioned matrices for different $l$ are presented. However, it should be noted that the coefficient matrix in the policy iteration method changes with each time step and iteration. To facilitate verification, the parameters of the model from Table 2 are utilized, with $\tilde{N} = 2^{14}$, $M = 2^{7}$, and $\Phi = \mathrm{diag}(\phi_1, \phi_2, \ldots, \phi_N)$, where
$$\phi_i = \begin{cases} 0, & \text{for } i \le \dfrac{\tilde{N}-1}{2}, \\[4pt] 1, & \text{for } i > \dfrac{\tilde{N}-1}{2}. \end{cases}$$
Then, the condition number of the coefficient matrix M is 365.1499 . The condition numbers of the preconditioned coefficient matrix for various l values are depicted in Figure 2.
Consistent with Theorem 4, Figure 2 shows that as $l$ increases, the condition number gradually decreases and approaches 1. Given that the original condition number reaches 365.1499, the banded preconditioner effectively reduces the condition number of the coefficient matrix; already at $l = 2$, the condition number is very close to 1.

6.2. Stock Loan

In this part, we consider a stock loan problem to test the performance of the proposed preconditioned policy–Krylov subspace method. The parameters are set as follows: $\sigma = 0.2$, $\tilde{\lambda} = 0.01$, $u_J = 0.8$, $\sigma_J = 0.01$, $\gamma = 0.1$, $r = 0.05$, $T = 1$. Experience from Example 1 indicates that a small value of $l$ already achieves satisfactory results, so in this example, $l$ is uniformly set to 4. Additionally, the value of $\alpha$ is set to 1.2, 1.5, and 1.8, respectively, to assess the algorithm's performance for different fractional orders. Similar to Example 1, the numerical solution on a fine grid with $\tilde{N} = 2^{15}$ and $M = 2^{11}$ is regarded as the exact solution.
For $\alpha = 1.2$, $1.5$, and $1.8$, the comparison results of the different iterative methods are presented in Table 4, Table 5 and Table 6, respectively. The errors in these tables are derived from the results of the BGMRES(4) method. It is observed that the convergence rate is first order in all cases. Compared with the results of the iterative method without preconditioning, it is evident that as $\alpha$ increases, more iterations are required. However, both the circulant preconditioner and the proposed banded preconditioner reduce the number of iterations, with the banded preconditioning technique demonstrating significantly better performance in terms of both the number of iterations and CPU time.
To further illustrate the effectiveness of the model and the impact of its parameters, Figure 3 is presented. The same parameters from this subsection are used, specified as follows: $u_J = 0.8$, $\sigma_J = 0.01$, $\gamma = 0.1$, $r = 0.05$, $T = 1$. Any missing parameters are provided in the figure captions. Additionally, all models are solved using the BGMRES(4) method with $\tilde{N} = 2^{12}$ and $M = 2^{10}$. The figures demonstrate that the stock loan prices, similar to American call options, lie above the pay-off function. Figure 3a shows that as $\alpha$ increases, the curve approaches the pay-off function, in line with the results of Ref. [42]. Figure 3b illustrates that as the volatility $\sigma$ increases, the price of the stock loan becomes higher. Figure 3c indicates that as the jump frequency $\tilde{\lambda}$ increases, price fluctuations become more frequent, thereby enhancing the value of the stock loan. These figures confirm that the solutions obtained from the considered model meet our expectations for the stock loan valuation curve.

7. Conclusions

In this paper, we consider a fractional partial integro-differential HJB equation arising from American option pricing and stock loan pricing under the Lévy-$\alpha$-stable process with jumps model, and propose a preconditioned policy–Krylov subspace method to solve it. A finite difference scheme is developed to discretize the HJB equation, with stability and first-order convergence analysis. The coefficient matrix generated from the fractional derivative and the integro-differential operator is an $M$-matrix and possesses Toeplitz structure, leading us to use the policy iteration method and the fast Krylov subspace method as the outer and inner iterative methods, respectively. To accelerate the convergence rate of the Krylov subspace method, a banded preconditioner is proposed, and it is proven that the condition number of the preconditioned matrix has an upper bound determined by the bandwidth. Finally, numerical examples of an American option pricing problem and a stock loan valuation problem demonstrate the effectiveness of the proposed fast solution strategy.

Author Contributions

Writing—review & editing, X.C., Y.S. and S.-L.L.; Supervision, S.-L.L.; Investigation, X.C., Y.S. and S.-L.L.; Funding acquisition, X.C. and S.-L.L.; Methodology, X.C. and S.-L.L.; Software, X.C. and X.-X.G.; Writing—original draft, X.-X.G.; Project administration, S.-L.L. All authors have read and agreed to the published version of the manuscript.

Funding

The first author is supported by Guangdong Basic and Applied Basic Foundation (Grant No. 2020A1515110991) and the National Natural Science Foundation of China (Grant No. 12101137). The third author is supported by Guangdong Basic and Applied Basic Foundation (Grant No. 2022A1515011125) and the National Natural Science Foundation of China (Grant No. 72271064). The corresponding author is supported by the research grants MYRG2022-00262-FST and MYRG-GRG2023-00181-FST-UMDF from University of Macau.

Data Availability Statement

Data are contained within the article.

Acknowledgments

The authors would like to thank the reviewers for their valuable comments, which greatly improved the quality of this paper.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Black, F.; Scholes, M. The pricing of options and corporate liabilities. J. Polit. Econ. 1973, 81, 637–654. [Google Scholar] [CrossRef]
  2. Heston, S.L. A closed-form solution for options with stochastic volatility with applications to bond and currency options. Rev. Financ. Stud. 1993, 6, 327–343. [Google Scholar] [CrossRef]
  3. Stein, E.M.; Stein, J.C. Stock price distributions with stochastic volatility: An analytic approach. Rev. Financ. Stud. 1991, 4, 727–752. [Google Scholar] [CrossRef]
  4. Kou, S.G. A jump-diffusion model for option pricing. Manage. Sci. 2002, 48, 1086–1101. [Google Scholar] [CrossRef]
  5. Fulop, A.; Li, J.; Yu, J. Self-exciting jumps, learning, and asset pricing implications. Rev. Financ. Stud. 2015, 28, 876–912. [Google Scholar] [CrossRef]
  6. Hawkes, A.G. Hawkes jump-diffusions and finance: A brief history and review. Eur. J. Financ. 2022, 28, 627–641. [Google Scholar] [CrossRef]
  7. Wang, Y.; Wang, J. Pricing of American Carbon Emission Derivatives and Numerical Method under the Mixed Fractional Brownian Motion. Discrete Dyn. Nat. Soc. 2021, 2021, 6612284. [Google Scholar] [CrossRef]
  8. Huang, S.; He, X. Analytical approximation of European option prices under a new two-factor non-affine stochastic volatility model. AIMS. Math. 2022, 8, 4875–4891. [Google Scholar] [CrossRef]
  9. Bian, L.; Li, Z. Fuzzy simulation of European option pricing using sub-fractional Brownian motion. Chaos Solitons Fractals 2021, 153, 111442. [Google Scholar] [CrossRef]
  10. Guo, Z.; Liu, Y.; Dai, L. European option pricing under sub-fractional Brownian motion regime in discrete time. Fractal Fract. 2024, 8, 13. [Google Scholar] [CrossRef]
  11. Cartea, A.; del Castillo-Negrete, D. Fractional diffusion models of option prices in markets with jumps. Phys. A 2007, 374, 749–763. [Google Scholar] [CrossRef]
  12. Boyarchenko, S.; Levendorskii, S. Non-Gaussian Merton-Black–Scholes Theory; World Scientific: Singapore, 2002. [Google Scholar]
  13. Carr, P.; Geman, H.; Madan, D.B.; Yor, M. The fine structure of asset returns: An empirical investigation. J. Bus. 2002, 75, 305–333. [Google Scholar] [CrossRef]
  14. Carr, P.; Wu, L. The finite moment log stable process and option pricing. J. Financ. 2003, 58, 753–777. [Google Scholar] [CrossRef]
  15. Zhou, Z.; Ma, J.; Gao, X. Convergence of iterative laplace transform methods for a system of fractional PDEs and PIDEs arising in option pricing. East Asian Appl. Math. 2018, 8, 782–808. [Google Scholar] [CrossRef]
  16. Fan, C.; Chen, W.; Feng, B. Pricing stock loans under the Lévy-α-stable process with jumps. Netw. Heterog. Media 2023, 18, 191–211. [Google Scholar] [CrossRef]
  17. Boyarchenko, S.; Levendorskii, S. American options in regime-switching models. SIAM J. Control Optim. 2009, 48, 1353–1376. [Google Scholar] [CrossRef]
  18. Yousuf, M.; Khaliq, A.; Liu, R. Pricing American options under multi-state regime switching with an efficient L-stable method. Int. J. Comput. Math. 2015, 92, 2530–2550. [Google Scholar] [CrossRef]
  19. Khaliq, A.; Voss, D.; Kazmi, S. A linearly implicit predictor–corrector scheme for pricing American options using a penalty method approach. J. Bank Financ. 2006, 30, 489–502. [Google Scholar] [CrossRef]
  20. Wang, W.; Chen, Y.; Fang, H. On the variable two-step IMEX BDF method for parabolic integro-differential equations with nonsmooth initial data arising in finance. SIAM J. Numer. Anal. 2019, 57, 1289–1317. [Google Scholar] [CrossRef]
  21. Shi, X.; Yang, L.; Huang, Z. A fixed point method for the linear complementarity problem arising from American option pricing. Acta Math. Appl. Sin. Engl. Ser. 2016, 32, 921–932. [Google Scholar] [CrossRef]
  22. Lei, S.; Wang, W.; Chen, X.; Ding, D. A fast preconditioned penalty method for American options pricing under regime-switching tempered fractional diffusion models. J. Sci. Comput. 2018, 75, 1633–1655. [Google Scholar] [CrossRef]
  23. Bai, Z. Modulus-based matrix splitting iteration methods for linear complementarity problems. Numer. Linear Algebra Appl. 2010, 17, 917–933. [Google Scholar] [CrossRef]
  24. Saigal, R. On the Convergence Rate of Algorithms for Solving Equations That Are Based on Methods of Complementary Pivoting. Math. Oper. Res. 1977, 2, 108–124. [Google Scholar] [CrossRef]
  25. Toivanen, J.; Oosterlee, C.W. A projected algebraic multigrid method for linear complementarity problems. Numer. Math. Theor. Meth. Appl. 2012, 5, 85–98. [Google Scholar] [CrossRef]
  26. Chen, Y.; Wan, J.W.L. Deep neural network framework based on backward stochastic differential equations for pricing and hedging American options in high dimensions. Quant. Financ. 2021, 21, 45–67. [Google Scholar] [CrossRef]
  27. Gatta, F.; Cola, V.S.D.; Giampaolo, F.; Piccialli, F.; Cuomo, S. Meshless methods for American option pricing through Physics-Informed Neural Networks. Eng. Anal. Boundary Elem. 2023, 151, 68–82. [Google Scholar] [CrossRef]
  28. Reisinger, C.; Witte, J.H. On the use of policy iteration as an easy way of pricing American options. SIAM J. Financ. Math. 2012, 3, 459–478. [Google Scholar] [CrossRef]
  29. Podlubny, I. Fractional Differential Equations; Academic Press: New York, NY, USA, 1999. [Google Scholar]
  30. Pascucci, A.; Suarez-Taboada, M.; Vazquez, C. Mathematical analysis and numerical methods for a PDE model of a stock loan pricing problem. J. Math. Anal. Appl. 2013, 403, 38–53. [Google Scholar] [CrossRef]
  31. Pang, H.; Zhang, Y.; Vong, S.; Jin, X. Circulant preconditioners for pricing options. Linear Algebra Appl. 2011, 434, 2325–2342. [Google Scholar] [CrossRef]
  32. Lesmana, D.C.; Wang, S. An upwind finite difference method for a nonlinear Black–Scholes equation governing European option valuation under transaction costs. Appl. Math. Comput. 2013, 219, 8811–8828. [Google Scholar] [CrossRef]
  33. Zhou, Z.; Ma, J.; Sun, H. Fast Laplace transform methods for free-boundary problems of fractional diffusion equations. J. Sci. Comput. 2018, 74, 49–69. [Google Scholar] [CrossRef]
  34. Meerschaert, M.M.; Tadjeran, C. Finite difference approximations for fractional advection–dispersion flow equations. Comput. Appl. Math. 2004, 172, 65–77. [Google Scholar] [CrossRef]
  35. Plemmons, R.J. M-matrix characterizations. I—nonsingular M-matrices. Linear Algebra Appl. 1977, 18, 175–188. [Google Scholar] [CrossRef]
  36. Huang, Y.; Forsyth, P.A.; Labahn, G. Methods for pricing American options under regime switching. SIAM J. Sci. Comput. 2011, 33, 2144–2168. [Google Scholar] [CrossRef]
  37. Varah, J.M. A lower bound for the smallest singular value of a matrix. Linear Algebra Appl. 1975, 11, 3–5. [Google Scholar] [CrossRef]
  38. Chen, X.; Wang, W.; Ding, D.; Lei, S. A fast preconditioned policy iteration method for solving the tempered fractional HJB equation governing American options valuation. Comput. Appl. Math. 2017, 73, 1932–1944. [Google Scholar] [CrossRef]
  39. Wang, Q.; She, Z.; Lao, C.; Lin, F. Fractional centered difference schemes and banded preconditioners for nonlinear Riesz space variable-order fractional diffusion equations. Numer. Algorithms 2024, 95, 859–895. [Google Scholar] [CrossRef]
  40. She, Z.; Lao, C.; Yang, H.; Lin, F. Banded Preconditioners for Riesz Space Fractional Diffusion Equations. J. Sci. Comput. 2021, 86, 31. [Google Scholar] [CrossRef]
  41. Saad, Y.; Schultz, M. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM J. Sci. Statist. Comput. 1986, 7, 856–869. [Google Scholar] [CrossRef]
  42. Aguilar, J.; Korbel, J. Simple formulas for pricing and hedging European options in the finite moment log-stable model. Risks 2019, 7, 36. [Google Scholar] [CrossRef]
Figure 1. Comparisons between three models. (a) Option price comparison. (b) Pricing errors of three models.
Figure 2. Condition number of the preconditioned matrix.
Figure 3. Comparative analysis of stock loan values. (a) Comparison of different α with σ = 0.2 and λ ˜ = 0.01 . (b) Comparison of different σ with α = 1.5 and λ ˜ = 0.01 . (c) Comparison of different λ ˜ with σ = 0.2 and α = 1.5 .
Table 1. Parameters and MSE from different models.
| Model | $\alpha$ | $\sigma$ | $\mu_J$ | $\sigma_J$ | $\tilde{\lambda}$ | MSE |
|-------|----------|----------|---------|------------|-------------------|----------|
| BS | - | 0.0806 | - | - | - | 163.6608 |
| FMLS | 1.9990 | 0.0777 | - | - | - | 168.0140 |
| FMLSJ | 1.9990 | 0.0645 | 0.5523 | 0.2585 | 0.0132 | 56.4139 |
Table 2. Comparisons of errors and outer iteration numbers.
| $\tilde{N}$ | $M$ | GMRES Error | Rate | Iter-Out | SGMRES Error | Rate | Iter-Out | BGMRES(4) Error | Rate | Iter-Out |
|---|---|---|---|---|---|---|---|---|---|---|
| $2^{10}$ | $2^{5}$ | $1.1432\times10^{-2}$ | - | 2.0 | $1.1432\times10^{-2}$ | - | 2.0 | $1.1432\times10^{-2}$ | - | 2.0 |
| $2^{11}$ | $2^{6}$ | $5.6617\times10^{-3}$ | 1.0138 | 2.0 | $5.6617\times10^{-3}$ | 1.0138 | 2.0 | $5.6617\times10^{-3}$ | 1.0138 | 2.0 |
| $2^{12}$ | $2^{7}$ | $2.6366\times10^{-3}$ | 1.1026 | 2.0 | $2.6366\times10^{-3}$ | 1.1026 | 2.0 | $2.6366\times10^{-3}$ | 1.1026 | 2.0 |
| $2^{13}$ | $2^{8}$ | $1.2406\times10^{-3}$ | 1.0876 | 2.0 | $1.2406\times10^{-3}$ | 1.0876 | 2.0 | $1.2406\times10^{-3}$ | 1.0876 | 2.0 |
| $2^{14}$ | $2^{9}$ | $5.3396\times10^{-4}$ | 1.2163 | 2.0 | $5.3396\times10^{-4}$ | 1.2163 | 2.0 | $5.3396\times10^{-4}$ | 1.2163 | 2.0 |
Table 3. Comparisons between different linear solvers.
| Method | $\tilde{N}=2^{11}$, $M=2^{6}$: Iter-In | Time (s) | $\tilde{N}=2^{12}$, $M=2^{7}$: Iter-In | Time (s) | $\tilde{N}=2^{13}$, $M=2^{8}$: Iter-In | Time (s) | $\tilde{N}=2^{14}$, $M=2^{9}$: Iter-In | Time (s) |
|---|---|---|---|---|---|---|---|---|
| GMRES | 49.0 | 1.89 | 70.0 | 6.67 | 100.0 | 70.15 | 146.0 | 416.42 |
| SGMRES | 18.6 | 1.35 | 30.4 | 4.05 | 45.8 | 51.31 | 68.0 | 316.21 |
| BGMRES(2) | 4.0 | 0.32 | 4.0 | 0.58 | 4.0 | 4.12 | 4.0 | 17.29 |
| BGMRES(4) | 4.0 | 0.28 | 3.0 | 0.52 | 3.0 | 3.61 | 3.0 | 15.22 |
| BGMRES(7) | 4.0 | 0.30 | 3.0 | 0.64 | 3.0 | 4.18 | 3.0 | 18.10 |
| BGMRES(13) | 4.0 | 0.37 | 3.0 | 0.87 | 3.0 | 5.92 | 3.0 | 23.64 |
Table 4. Algorithm comparison for α = 1.2 .
| Method | $\tilde{N}$ | $M$ | Error | Rate | Iter-Out | Iter-In | Time (s) |
|---|---|---|---|---|---|---|---|
| GMRES | $2^{10}$ | $2^{6}$ | $1.9624\times10^{-1}$ | 0.9531 | 2.0 | 40.9 | 0.62 |
| | $2^{11}$ | $2^{7}$ | $9.7790\times10^{-2}$ | 1.0049 | 2.0 | 45.0 | 3.51 |
| | $2^{12}$ | $2^{8}$ | $4.6303\times10^{-2}$ | 1.0786 | 2.0 | 49.0 | 9.54 |
| | $2^{13}$ | $2^{9}$ | $2.0009\times10^{-2}$ | 1.2104 | 2.0 | 54.0 | 69.94 |
| SGMRES | $2^{10}$ | $2^{6}$ | $1.9624\times10^{-1}$ | 0.9531 | 2.0 | 13.8 | 0.29 |
| | $2^{11}$ | $2^{7}$ | $9.7790\times10^{-2}$ | 1.0049 | 2.0 | 14.6 | 1.88 |
| | $2^{12}$ | $2^{8}$ | $4.6303\times10^{-2}$ | 1.0786 | 2.0 | 16.2 | 3.93 |
| | $2^{13}$ | $2^{9}$ | $2.0009\times10^{-2}$ | 1.2104 | 2.0 | 17.6 | 39.81 |
| BGMRES(4) | $2^{10}$ | $2^{6}$ | $1.9624\times10^{-1}$ | 0.9531 | 2.0 | 7.0 | 0.15 |
| | $2^{11}$ | $2^{7}$ | $9.7790\times10^{-2}$ | 1.0049 | 2.0 | 7.0 | 0.74 |
| | $2^{12}$ | $2^{8}$ | $4.6303\times10^{-2}$ | 1.0786 | 2.0 | 7.0 | 1.68 |
| | $2^{13}$ | $2^{9}$ | $2.0009\times10^{-2}$ | 1.2104 | 2.0 | 7.0 | 11.83 |
Table 5. Algorithm comparison for α = 1.5 .
| Method | $\tilde{N}$ | $M$ | Error | Rate | Iter-Out | Iter-In | Time (s) |
|---|---|---|---|---|---|---|---|
| GMRES | $2^{10}$ | $2^{6}$ | $4.0928\times10^{-2}$ | 1.0024 | 2.0 | 44.9 | 0.67 |
| | $2^{11}$ | $2^{7}$ | $2.0015\times10^{-2}$ | 1.0320 | 2.0 | 55.0 | 4.25 |
| | $2^{12}$ | $2^{8}$ | $9.3484\times10^{-3}$ | 1.0983 | 2.0 | 68.0 | 13.29 |
| | $2^{13}$ | $2^{9}$ | $4.0197\times10^{-3}$ | 1.2176 | 2.0 | 84.0 | 119.81 |
| SGMRES | $2^{10}$ | $2^{6}$ | $4.0928\times10^{-2}$ | 1.0024 | 2.0 | 14.7 | 0.32 |
| | $2^{11}$ | $2^{7}$ | $2.0015\times10^{-2}$ | 1.0320 | 2.0 | 17.8 | 2.27 |
| | $2^{12}$ | $2^{8}$ | $9.3484\times10^{-3}$ | 1.0983 | 2.0 | 22.0 | 5.90 |
| | $2^{13}$ | $2^{9}$ | $4.0197\times10^{-3}$ | 1.2176 | 2.0 | 27.4 | 65.05 |
| BGMRES(4) | $2^{10}$ | $2^{6}$ | $4.0928\times10^{-2}$ | 1.0024 | 2.0 | 7.2 | 0.16 |
| | $2^{11}$ | $2^{7}$ | $2.0015\times10^{-2}$ | 1.0320 | 2.0 | 8.0 | 0.85 |
| | $2^{12}$ | $2^{8}$ | $9.3484\times10^{-3}$ | 1.0983 | 2.0 | 8.0 | 1.95 |
| | $2^{13}$ | $2^{9}$ | $4.0197\times10^{-3}$ | 1.2176 | 2.0 | 8.7 | 15.35 |
Table 6. Algorithm comparison for α = 1.8 .
| Method | $\tilde{N}$ | $M$ | Error | Rate | Iter-Out | Iter-In | Time (s) |
|---|---|---|---|---|---|---|---|
| GMRES | $2^{10}$ | $2^{6}$ | $1.2786\times10^{-2}$ | 1.0394 | 2.3 | 53.0 | 0.88 |
| | $2^{11}$ | $2^{7}$ | $6.1931\times10^{-3}$ | 1.0459 | 2.2 | 68.8 | 5.98 |
| | $2^{12}$ | $2^{8}$ | $3.1333\times10^{-3}$ | 0.9830 | 2.2 | 89.5 | 19.49 |
| | $2^{13}$ | $2^{9}$ | $1.4819\times10^{-3}$ | 1.0802 | 2.2 | 117.8 | 165.29 |
| SGMRES | $2^{10}$ | $2^{6}$ | $1.2786\times10^{-2}$ | 1.0394 | 2.3 | 21.2 | 0.64 |
| | $2^{11}$ | $2^{7}$ | $6.1931\times10^{-3}$ | 1.0459 | 2.2 | 29.6 | 4.08 |
| | $2^{12}$ | $2^{8}$ | $3.1333\times10^{-3}$ | 0.9830 | 2.2 | 39.2 | 11.21 |
| | $2^{13}$ | $2^{9}$ | $1.4819\times10^{-3}$ | 1.0802 | 2.2 | 52.9 | 125.92 |
| BGMRES(4) | $2^{10}$ | $2^{6}$ | $1.2786\times10^{-2}$ | 1.0394 | 2.3 | 7.0 | 0.18 |
| | $2^{11}$ | $2^{7}$ | $6.1931\times10^{-3}$ | 1.0459 | 2.2 | 8.0 | 0.94 |
| | $2^{12}$ | $2^{8}$ | $3.1333\times10^{-3}$ | 0.9830 | 2.2 | 8.0 | 2.10 |
| | $2^{13}$ | $2^{9}$ | $1.4819\times10^{-3}$ | 1.0802 | 2.2 | 9.0 | 15.82 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
