Article

High-Performance Simulation of Generalized Tempered Stable Random Variates: Exact and Numerical Methods for Heavy-Tailed Data

by Aubain Nzokem 1,*,† and Daniel Maposa 2,*,†
1 Department of Mathematics & Statistics, York University, Toronto, ON M3J 1P3, Canada
2 Department of Statistics and Operations Research, University of Limpopo, Sovenga 0727, South Africa
* Authors to whom correspondence should be addressed.
† These authors contributed equally to this work.
Math. Comput. Appl. 2025, 30(5), 106; https://doi.org/10.3390/mca30050106
Submission received: 8 August 2025 / Revised: 25 September 2025 / Accepted: 26 September 2025 / Published: 28 September 2025
(This article belongs to the Special Issue Statistical Inference in Linear Models, 2nd Edition)

Abstract

The Generalized Tempered Stable (GTS) distribution extends classical stable laws through exponential tempering, preserving the power-law behavior while ensuring finite moments. This makes it especially suitable for modeling heavy-tailed financial data. However, the lack of closed-form densities poses significant challenges for simulation. This study provides a comprehensive and systematic comparison of GTS simulation methods, including rejection-based algorithms, series representations, and an enhanced Fast Fractional Fourier Transform (FRFT)-based inversion method. Through extensive numerical experiments on major financial assets (Bitcoin, Ethereum, the S&P 500, and the SPY ETF), this study demonstrates that the FRFT method outperforms others in terms of accuracy and ability to capture tail behavior, as validated by goodness-of-fit tests. Our results provide practitioners with robust and efficient simulation tools for applications in risk management, derivative pricing, and statistical modeling.

1. Introduction

In many areas of applied statistics and data analysis—particularly in finance, insurance, and environmental modeling—empirical data often exhibit heavy tails, skewness, and discontinuous jumps that deviate significantly from Gaussian assumptions [1,2,3]. Classical models based on normal distributions frequently fail to capture these features, especially when modeling extreme events or tail risks [4].
The Generalized Tempered Stable (GTS) distribution has emerged as a flexible and powerful framework for modeling such phenomena. By introducing exponential tempering to the Lévy measure, it extends classical stable distributions while preserving their heavy-tailed behavior and ensuring finite moments [5,6]. These properties make GTS distributions particularly valuable for modeling Lévy processes with jump activity in financial applications [7]. Moreover, the GTS family encompasses several notable special cases [6,8,9,10,11], further highlighting its flexibility and relevance across various applications.
Despite their theoretical appeal, simulating GTS random variates remains a challenging task. The absence of closed-form probability densities precludes the use of standard techniques that rely on explicit cumulative distribution evaluations [12]. In practice, simulation from the GTS distribution is essential for conducting Monte Carlo experiments, performing risk assessments, fitting models to data, and generating synthetic datasets for testing algorithms [13].
To address this challenge, several approaches have been developed, including rejection-based algorithms (e.g., Standard Stable Rejection, Double Rejection, and Two-Dimensional Single Rejection) [12,14,15,16,17], series representations (e.g., Shot Noise and Inverse Tail Integral methods) [5,18,19], and inversion-based techniques leveraging the characteristic function [1,20]. While each has advantages, they also face limitations in efficiency, accuracy, or robustness, particularly when applied to financial assets with very small stability indices or extreme tempering parameters ($\beta_{\pm} \to 0$, $\lambda_{\pm} \to +\infty$). Moreover, the literature lacks a systematic empirical comparison of these methods under realistic, data-driven conditions.
This paper aims to fill that gap by providing a comprehensive review, implementation, and empirical evaluation of simulation methods for GTS random variates. We categorize the algorithms into exact methods (rejection-based) and numerical approaches (series representations and inversion techniques). Our key contributions are as follows:
1. Systematic Empirical Benchmarking: We perform an extensive numerical study using high-frequency daily return data from major financial assets (Bitcoin, Ethereum, S&P 500, SPY ETF), whose parameters often push simulation methods to their limits. This provides a realistic benchmark absent in purely theoretical comparisons.
2. Implementation and Analysis of Exact Methods: We implement and evaluate key rejection-based algorithms (Standard Stable Rejection, Double Rejection, Two-Dimensional Single Rejection), providing a clear analysis of their computational complexity and identifying their breaking points, particularly for equity indices with extremely low stability index values.
3. Implementation and Analysis of Series Representations: We investigate the inverse Lévy measure and Shot Noise series representation methods, demonstrating their theoretical elegance but also their practical limitations and sensitivity to parameter values.
4. An Enhanced FRFT-Based Inversion Framework: We propose and implement an advanced numerical inversion method that leverages the characteristic function of the GTS distribution, combining a Fast Fractional Fourier Transform (FRFT) with Newton–Cotes quadrature [21]. This method achieves high accuracy and robustness across parameter regimes, addressing key shortcomings of rejection and series methods.
5. Rigorous Validation: Beyond visual Q-Q plot analysis, we use statistical goodness-of-fit tests (Kolmogorov–Smirnov and Anderson–Darling) to assess the fidelity of each method, with a particular focus on tail behavior.
Our results show that the enhanced FRFT-based inversion method consistently outperforms existing techniques in both accuracy and robustness. This establishes it as a practical and reliable tool for applications in risk management, derivative pricing, and statistical modeling where accurate reproduction of heavy-tailed dynamics is essential.
The remainder of this paper is structured as follows: Section 2 reviews the mathematical foundations of GTS distributions. Section 3, Section 4, Section 5, Section 6 and Section 7 present the simulation algorithms. Section 8.2 reports empirical results and comparisons. Section 9 concludes with practical recommendations and directions for future research.

2. Generalized Tempered Stable (GTS) Distribution

The Generalized Tempered Stable (GTS) distribution is a family of infinitely divisible distributions that generalizes the classical stable laws by introducing exponential tempering into the Lévy measure. This modification preserves the power-law behavior in the central part of the distribution while damping the tails, ensuring the existence of moments and improving the tractability of the model in applications.
Formally, a random variable $X \sim GTS(\beta_{+},\beta_{-},\alpha_{+},\alpha_{-},\lambda_{+},\lambda_{-})$ has a Lévy measure $V(dx)$ given by
$$V(dx) = \left(\frac{\alpha_{+}e^{-\lambda_{+}x}}{x^{1+\beta_{+}}}\mathbf{1}_{\{x>0\}} + \frac{\alpha_{-}e^{-\lambda_{-}|x|}}{|x|^{1+\beta_{-}}}\mathbf{1}_{\{x<0\}}\right)dx, \tag{1}$$
where
  • $\beta_{+}, \beta_{-} \in (0,1)$ are stability index parameters, controlling the heaviness of the tails on the positive and negative axes, respectively;
  • $\alpha_{+}, \alpha_{-} > 0$ are scale parameters, determining the overall intensity of jumps;
  • $\lambda_{+}, \lambda_{-} > 0$ are tempering parameters, governing the exponential decay of large jumps in either direction.
Remark 1.
The Lévy measure $V(dx)$ is a Borel measure on $\mathbb{R}\setminus\{0\}$ that satisfies the following conditions:
  • No jump at zero: $V(\{0\}) = 0$;
  • Integrability of small jumps: $\int_{\mathbb{R}}\left(1 \wedge |x|^{2}\right)V(dx) < \infty$.
The importance of the Lévy measure is shown by the Lévy–Khintchine representation, which states that the characteristics of any Lévy process are uniquely defined by a triplet ( a , σ , V ) , consisting of
  • A drift term (a);
  • A diffusion or variance coefficient (σ), which controls the continuous, Gaussian-motion component;
  • The Lévy measure (V), which precisely quantifies the frequency and size of the jumps.
Thus, the triplet ( a , σ , V ) offers a unique and exhaustive signature for any Lévy process.
The jump activity of the GTS distribution can be studied from the integral (2) of the Lévy measure (1):
$$\int_{-\infty}^{+\infty} V(dx) = +\infty \quad \text{if } 0 \leq \beta_{+} < 1 \text{ and } 0 \leq \beta_{-} < 1. \tag{2}$$
As shown in Equation (2), for $\beta_{+},\beta_{-} \in (0,1)$, the Lévy density $V(dx)$ is not integrable: it diverges too rapidly as x goes to zero, owing to a large number of very small jumps. The GTS distribution therefore describes an infinite activity process, with an infinite number of jumps within any given time interval.
In addition to its infinite activity, the variation of the process can be studied by solving the following integral [22]:
$$\int_{-1}^{1} |x|\,V(dx) < +\infty \quad \text{if } 0 \leq \beta_{+} < 1 \text{ and } 0 \leq \beta_{-} < 1. \tag{3}$$
Refer to [11] for further development of Equation (3).
As shown in Equation (3), the GTS distribution is a finite variation process, and it generates a type B Lévy process [23], which is a purely non-Gaussian infinite activity Lévy process of finite variation whose sample paths have an infinite number of small jumps and a finite number of large jumps in any finite time interval. In particular, being of bounded variation shows that $X \sim GTS(\beta_{+},\beta_{-},\alpha_{+},\alpha_{-},\lambda_{+},\lambda_{-})$ can be written as the difference of two independent subordinators [22,24]:
$$X = X_{+} - X_{-} \quad\text{with}\quad X_{+} \sim TS(\beta_{+},\alpha_{+},\lambda_{+}),\; X_{-} \sim TS(\beta_{-},\alpha_{-},\lambda_{-}),$$
where $X_{+} \sim TS(\beta_{+},\alpha_{+},\lambda_{+})$ and $X_{-} \sim TS(\beta_{-},\alpha_{-},\lambda_{-})$ are subordinators.
By adding a drift parameter, we have the following expression:
$$Y = \mu + X \sim GTS(\mu,\beta_{+},\beta_{-},\alpha_{+},\alpha_{-},\lambda_{+},\lambda_{-}).$$
Theorem 1.
Consider a variable $Y \sim GTS(\mu,\beta_{+},\beta_{-},\alpha_{+},\alpha_{-},\lambda_{+},\lambda_{-})$. The characteristic exponent can be written as
$$\psi(\xi) = \mu\xi i + \alpha_{+}\Gamma(-\beta_{+})\left((\lambda_{+}-i\xi)^{\beta_{+}} - \lambda_{+}^{\beta_{+}}\right) + \alpha_{-}\Gamma(-\beta_{-})\left((\lambda_{-}+i\xi)^{\beta_{-}} - \lambda_{-}^{\beta_{-}}\right).$$
See [25,26,27] for Theorem 1’s proof.
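As a quick sanity check on Theorem 1, the characteristic exponent can be coded directly and differentiated numerically at the origin: the mean of Y should equal $\mu + \alpha_{+}\Gamma(1-\beta_{+})\lambda_{+}^{\beta_{+}-1} - \alpha_{-}\Gamma(1-\beta_{-})\lambda_{-}^{\beta_{-}-1}$. The sketch below is our own illustration and uses arbitrary parameter values, not the fitted values from Tables 1 and 2.

```python
import math

def gts_char_exponent(xi, mu, bp, bm, ap, am, lp, lm):
    """Characteristic exponent psi(xi) of GTS(mu, b+, b-, a+, a-, l+, l-)."""
    term_p = ap * math.gamma(-bp) * ((lp - 1j * xi) ** bp - lp ** bp)
    term_m = am * math.gamma(-bm) * ((lm + 1j * xi) ** bm - lm ** bm)
    return 1j * mu * xi + term_p + term_m

# Illustrative parameters (not the fitted values from the tables)
mu, bp, bm, ap, am, lp, lm = -0.1, 0.5, 0.4, 1.2, 1.0, 2.0, 1.5

# Mean of Y = psi'(0)/i via a central difference
h = 1e-6
numeric_mean = ((gts_char_exponent(h, mu, bp, bm, ap, am, lp, lm)
                 - gts_char_exponent(-h, mu, bp, bm, ap, am, lp, lm))
                / (2 * h * 1j)).real
analytic_mean = (mu + ap * math.gamma(1 - bp) * lp ** (bp - 1)
                 - am * math.gamma(1 - bm) * lm ** (bm - 1))
```

The two values agree to numerical-differentiation accuracy, confirming the sign conventions of the exponent.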

Maximum Likelihood GTS Parameter Estimation for Four Key Financial Assets

The GTS parameter estimation results for Bitcoin and Ethereum are presented in Table 1, while those for the S&P 500 index and the SPY ETF are shown in Table 2. The values in brackets represent the asymptotic standard errors, computed using the inverse of the Hessian matrix.
For all financial assets considered, the maximum likelihood estimates indicate that μ is negative, while the other parameters are positive, as expected in the literature. However, the relatively large standard errors suggest that μ is not statistically significant at the 5% level.
As shown in Table 1 and Table 2, except for the index of stability parameters ($\beta_{+}$, $\beta_{-}$), the tempering parameters ($\lambda_{+}$, $\lambda_{-}$) and the scale parameters ($\alpha_{+}$, $\alpha_{-}$) are all statistically significant at the 5% level.
Remark 2.
A comprehensive methodology for fitting the seven-parameter Generalized Tempered Stable (GTS) distribution—along with parameter estimation results—is presented in [26]. The data source and some empirical findings are summarized as follows:
  • Data Sources:
    Historical price data for Bitcoin (BTC) and Ethereum (ETH) were collected from CoinMarketCap. The time span covers 28 April 2013 to 4 July 2024 for BTC, and 7 August 2015 to 4 July 2024 for ETH.
    Historical price data for the S&P 500 Index and the SPY ETF were obtained from Yahoo Finance, covering the period from 4 January 2010 to 22 July 2024. The prices were adjusted for stock splits and dividends.
  • GTS parameter estimation: The methodology for estimating the seven parameters using the Enhanced Fast Fractional Fourier Transform (FRFT) has been extensively developed in [26,28,29,30,31], providing a comprehensive approach for fitting the rich class of GTS distributions.
  • Model Comparison: Based on log-likelihood, Akaike Information Criterion (AIC), and Bayesian Information Criterion (BIC), the seven-parameter GTS distribution outperforms the classical two-parameter normal distribution (Geometric Brownian Motion, GBM).
  • Goodness-of-Fit Tests: The Kolmogorov–Smirnov, Anderson–Darling, and Pearson chi-squared tests confirm the superior fit of the GTS model, especially in capturing heavy tails and asymmetries in return distributions.
  • Benchmarking Against Alternative Models: The GTS distribution demonstrates improved empirical performance over
    The KoBoL distribution ($\beta = \beta_{+} = \beta_{-}$);
    The Carr–Geman–Madan–Yor (CGMY) model ($\lambda = \lambda_{+} = \lambda_{-}$ and $\beta = \beta_{+} = \beta_{-}$);
    The Bilateral Gamma distribution ($\beta_{+} = \beta_{-} = 0$).

3. β-Stable Distributions and Simulation Algorithms

3.1. β -Stable Distributions: Review

We consider the class of all stable distributions with four parameters $(\beta,\delta,\sigma,\mu)$, denoted by $S_{\beta}(\sigma,\delta,\mu)$. $X \sim S_{\beta}(\sigma,\delta,\mu)$ means that X is a stable random variable (r.v.) with parameters $(\beta,\delta,\sigma,\mu)$. $S_{\beta}(\sigma,\delta,\mu)$ is the notation used in [2]. In the literature, various notational conventions are commonly used; for instance, the parameterization $S(\beta,\delta,\sigma,\mu)$ is employed in [4]. A random variable X is said to have a stable distribution, $X \sim S_{\beta}(\sigma,\delta,\mu)$, if and only if the logarithm of its characteristic function $\psi(\xi)$ has the following canonical form [32,33,34]:
$$\operatorname{Log}(\psi(\xi)) = \begin{cases} i\mu\xi - \sigma^{\beta}|\xi|^{\beta}\left(1 - i\,\delta\,\operatorname{sign}(\xi)\tan\left(\frac{\beta\pi}{2}\right)\right) & \text{if } \beta \neq 1 \\[1mm] i\mu\xi - \sigma|\xi|\left(1 + i\,\frac{2}{\pi}\,\delta\,\operatorname{sign}(\xi)\log(|\xi|)\right) & \text{if } \beta = 1, \end{cases} \tag{7}$$
where
$$\operatorname{sign}(\xi) = \begin{cases} 1 & \text{if } \xi > 0 \\ 0 & \text{if } \xi = 0 \\ -1 & \text{if } \xi < 0. \end{cases}$$
Corollary 1.
Let $X \sim S_{\beta}(\sigma,\delta,\mu)$; there exists $Y \sim S_{\beta}(1,\delta,0)$ such that X can be expressed as follows:
$$X = \begin{cases} \sigma Y + \mu & \text{if } \beta \neq 1 \\ \sigma Y + \mu + \frac{2}{\pi}\,\delta\,\sigma\operatorname{Log}(\sigma) & \text{if } \beta = 1. \end{cases}$$
The proof of Corollary 1 follows from Proposition 1.2.2 and Proposition 1.2.3 in [2].
Remark 3.
$S_{\beta}(1,\delta,0)$ is called the class of standard stable distributions. Four parameters characterize the class $S_{\beta}(\sigma,\delta,\mu)$ and can be described as follows:
  • The skewness parameter ($-1 < \delta < 1$): The distribution is considered positively (negatively) skewed if $0 < \delta < 1$ ($-1 < \delta < 0$).
  • The stability parameter or index ($0 < \beta \leq 2$): It affects the shape of the distribution and its tails. The class $S_{\beta}(\sigma,\delta,\mu)$ lacks a closed-form probability density function, except for the Gaussian distribution ($\beta = 2$, $\delta = 0$), the Cauchy distribution ($\beta = 1$, $\delta = 0$), and the Lévy distribution ($\beta = \frac{1}{2}$, $\delta = 1$) [2,35].
  • The scale or dispersion parameter ($0 < \sigma < +\infty$): This is not the standard deviation of non-Gaussian stable distributions, as the variance is infinite when $\beta \in (0,2)$.
  • The location or drift parameter ($-\infty < \mu < +\infty$): This is not the mean but has a drifting effect on the distribution.
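The canonical form (7), together with the closed-form members of the family listed above, can be checked in a few lines. The following sketch is our own illustration: for β = 2, δ = 0 the log-characteristic function reduces to $-\sigma^{2}\xi^{2}$ (a normal law with variance $2\sigma^{2}$), and for β = 1, δ = 0 it reduces to $-\sigma|\xi|$ (the Cauchy law).

```python
import math

def stable_log_cf(xi, beta, delta, sigma, mu):
    """Log-characteristic function of S_beta(sigma, delta, mu), canonical form (7)."""
    if xi == 0:
        return 0j
    sgn = 1.0 if xi > 0 else -1.0
    if beta != 1:
        scale = (sigma ** beta) * abs(xi) ** beta
        return complex(-scale,
                       mu * xi + scale * delta * sgn * math.tan(math.pi * beta / 2))
    scale = sigma * abs(xi)
    return complex(-scale,
                   mu * xi - scale * (2 / math.pi) * delta * sgn * math.log(abs(xi)))
```

Note that for β = 2 the tangent factor vanishes analytically (tan(π) = 0), so the law is symmetric regardless of δ, consistent with the Gaussian case above.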

3.2. Sampling from β -Stable Distributions: Review

Remark 4.
The discontinuity in the characteristic function of canonical representation (7) with respect to its parameters causes numerical instabilities in simulations. Thus, smoother alternatives, such as parametrization (B) and parametrization (M) in [33,36], are preferred. The canonical representation (7) is called parametrization (A) in [33].
Theorem 2
(Transition from parametrization (B) to parametrization (A)). Let $X_{B}(\beta,\delta_{B}) \sim S_{\beta}(1,\delta,0)$ and $X_{A}(\beta,\delta) \sim S_{\beta}(1,\delta,0)$, where $X_{B}$ and $X_{A}$ follow parametrizations (B) and (A), respectively. We have the following relationship:
$$X_{B}(\beta,\delta_{B}) = \cos\left(\delta_{B}\phi(\beta)\right)^{\frac{1}{\beta}}X_{A}(\beta,\delta), \qquad X_{B}(1,\delta_{B}) = \delta_{B}\log\left(\frac{\pi}{2}\right) + \frac{\pi}{2}X_{A}(1,\delta),$$
where
$$\delta_{B}\,\phi(\beta) = \arctan\left(\delta\tan\left(\frac{\pi\beta}{2}\right)\right), \qquad \phi(\beta) = \frac{\pi\beta}{2}, \qquad 0 < \beta < 1.$$
Refer to [32] for the proof of Theorem 2.
Theorem 3
(Representation of stable laws by integrals). The distribution function of a standard stable distribution $S_{\beta}(1,\delta,0)$ can be written as follows:
1. If $\beta \neq 1$ and $x > 0$, then for any $|\delta| \leq 1$,
$$G(x,\beta,\delta) = \begin{cases} \dfrac{1}{2}\left(1-\dfrac{2\theta_{0}}{\pi}\right) + \dfrac{1}{\pi}\displaystyle\int_{-\theta_{0}}^{\frac{\pi}{2}} \exp\left\{-x^{\frac{\beta}{\beta-1}}\,V_{\beta}(\phi,\delta)\right\}d\phi & \text{if } \beta < 1 \\[3mm] 1 - \dfrac{1}{\pi}\displaystyle\int_{-\theta_{0}}^{\frac{\pi}{2}} \exp\left\{-x^{\frac{\beta}{\beta-1}}\,V_{\beta}(\phi,\delta)\right\}d\phi & \text{if } \beta > 1 \\[3mm] \left(\dfrac{\pi}{2}-\theta_{0}\right)\dfrac{1}{\pi} & \text{if } x = 0, \end{cases}$$
where
$$V_{\beta}(\phi,\delta) = B_{\beta,\delta}\left(\frac{\sin\left(\beta(\phi+\theta_{0})\right)}{\cos(\phi)}\right)^{\frac{\beta}{1-\beta}}\frac{\cos\left((\beta-1)\phi+\beta\theta_{0}\right)}{\cos(\phi)}, \tag{13}$$
where
$$\theta_{0} = \frac{1}{\beta}\arctan\left(\delta\tan\left(\frac{\pi\beta}{2}\right)\right), \qquad B_{\beta,\delta} = \frac{1}{\cos^{\frac{1}{1-\beta}}(\beta\theta_{0})}.$$
2. If $\beta = 1$ and $x > 0$, then for any $|\delta| \leq 1$,
$$G(x,1,\delta) = \begin{cases} \dfrac{1}{\pi}\displaystyle\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \exp\left\{-e^{-\frac{x}{\delta}}\,V_{1}(\phi,\delta)\right\}d\phi & \text{if } \delta > 0 \\[3mm] \dfrac{1}{2} + \dfrac{1}{\pi}\arctan(x) & \text{if } \delta = 0 \\[3mm] 1 - G(-x,1,-\delta) & \text{if } \delta < 0, \end{cases}$$
where
$$V_{1}(\phi,\delta) = \frac{\frac{\pi}{2}+\delta\phi}{\cos(\phi)}\,\exp\left\{\left(\phi+\frac{\pi}{2\delta}\right)\tan(\phi)\right\}.$$
For the proof of Theorem 3, refer to [4,33,36,37].
Theorem 4 (Sampling the standard stable distribution $S_{\beta}(1,\delta,0)$). Let U and W be independent, with U uniformly distributed on (0,1) and W exponentially distributed with mean 1.
Then, Z can be simulated by the representation
$$Z = \begin{cases} C_{\beta,\delta}\,\dfrac{\sin\left(\beta\left(\pi(U-\frac{1}{2})+\theta_{0}\right)\right)}{\cos^{\frac{1}{\beta}}\left(\pi(U-\frac{1}{2})\right)}\left(\dfrac{\cos\left((\beta-1)\pi(U-\frac{1}{2})+\beta\theta_{0}\right)}{W}\right)^{\frac{1-\beta}{\beta}} & \text{if } \beta \neq 1 \\[4mm] \left(1+2\delta\left(U-\tfrac{1}{2}\right)\right)\tan\left(\pi(U-\tfrac{1}{2})\right) - \dfrac{2}{\pi}\,\delta\,\operatorname{Log}\left(\dfrac{W\cos\left(\pi(U-\frac{1}{2})\right)}{1+2\delta(U-\frac{1}{2})}\right) & \text{if } \beta = 1, \end{cases} \tag{16}$$
where
$$C_{\beta,\delta} = \frac{1}{\cos^{\frac{1}{\beta}}(\beta\theta_{0})}, \qquad \theta_{0} = \frac{1}{\beta}\arctan\left(\delta\tan\left(\frac{\pi\beta}{2}\right)\right), \qquad U \sim U(0,1),\; W \sim Exp(1).$$
For the proof of Theorem 4, refer to [4,37].
Equation (16) is referred to as the Chambers–Mallows–Stuck (CMS) method for generating standard stable variables $S_{\beta}(1,\delta,0)$.
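A direct transcription of (16) into code is straightforward. The sketch below is our own illustration of the CMS method; it draws from $S_{\beta}(1,\delta,0)$ and checks empirically that β = 2, δ = 0 produces a normal law with variance 2, and that β < 1, δ = 1 produces only positive draws (totally positively skewed).

```python
import math
import random

def sample_standard_stable(beta, delta, rng):
    """One draw from S_beta(1, delta, 0) via the CMS representation (16)."""
    u = rng.random()
    w = rng.expovariate(1.0)
    v = math.pi * (u - 0.5)          # uniform angle on (-pi/2, pi/2)
    if beta != 1:
        theta0 = math.atan(delta * math.tan(math.pi * beta / 2)) / beta
        c = math.cos(beta * theta0) ** (-1.0 / beta)
        return (c * math.sin(beta * (v + theta0)) / math.cos(v) ** (1.0 / beta)
                * (math.cos((beta - 1) * v + beta * theta0) / w) ** ((1 - beta) / beta))
    a = 1 + 2 * delta * (u - 0.5)    # beta = 1 branch of (16)
    return a * math.tan(v) - (2 / math.pi) * delta * math.log(w * math.cos(v) / a)

rng = random.Random(42)
# beta = 2, delta = 0 should give N(0, 2)
gauss = [sample_standard_stable(2.0, 0.0, rng) for _ in range(200_000)]
mean = sum(gauss) / len(gauss)
var = sum((x - mean) ** 2 for x in gauss) / len(gauss)
# beta < 1, delta = 1 is totally positively skewed: all draws positive
min_pos = min(sample_standard_stable(0.5, 1.0, rng) for _ in range(5_000))
```

The sample mean and variance of the β = 2 draws are close to 0 and 2, in line with the first special case discussed below.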
Remark 5.
Certain special cases of (16) are worth noting.
  • Standard stable distribution $S_{\beta}(1,\delta,0)$ with $\beta = 2$, $\delta = 0$: we have
    $$Z = 2\sqrt{W}\,\sin\left(\pi\left(U-\tfrac{1}{2}\right)\right) \sim N(0,2),$$
    which is the Box–Muller algorithm [38] for generating a normal random variable with mean 0 and variance 2.
  • Standard stable distribution $S_{\beta}(1,\delta,0)$ with $\beta = 1$: we have
    $$Z = \sin\left(\tfrac{\pi}{2}\delta\right) + \cos\left(\tfrac{\pi}{2}\delta\right)\tan\left(\pi\left(U-\tfrac{1}{2}\right)\right).$$
    For $\delta = 0$, we have the algorithm [39] for generating the Cauchy distribution $S_{1}(1,0,0)$.
    The Cauchy distribution function [33] can be written as follows:
    $$G(x,1,\delta) = \frac{1}{2} + \frac{1}{\pi}\arctan\left(\frac{x-\sin\left(\frac{\pi}{2}\delta\right)}{\cos\left(\frac{\pi}{2}\delta\right)}\right).$$
  • Standard stable distribution $S_{\beta}(1,\delta,0)$ with $\beta = \tfrac{1}{2}$, $\delta = 1$: we have $\theta_{0} = \tfrac{\pi}{2}$ and
    $$Z = \frac{1}{\left((2W)^{\frac{1}{2}}\cos\left(\frac{\pi}{2}U\right)\right)^{2}},$$
    where
    $$V = (2W)^{\frac{1}{2}}\cos(\pi U) \sim N(0,1), \qquad Z \overset{d}{=} \frac{1}{V^{2}}.$$
    This is the well-known relationship between the standard Lévy distribution and the standard normal distribution.
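The last identity can be checked by simulation: if V is standard normal, then $Z = 1/V^{2}$ should follow the standard Lévy law, whose CDF is $F(z) = \operatorname{erfc}\left(1/\sqrt{2z}\right)$. A small Monte Carlo sketch (our own check):

```python
import math
import random

rng = random.Random(7)
n = 100_000
# Z = 1 / V^2 with V ~ N(0,1) should follow the standard Levy law,
# whose CDF is F(z) = erfc(1 / sqrt(2 z)).
emp = sum(1.0 / rng.gauss(0.0, 1.0) ** 2 <= 1.0 for _ in range(n)) / n
theo = math.erfc(1.0 / math.sqrt(2.0))   # P(Z <= 1) = P(|V| >= 1)
```

The empirical probability at z = 1 matches the theoretical value of about 0.317.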

4. Generalized Tempered Stable (GTS) and Exponentially Tilted Stable Distributions

The Generalized Tempered Stable (GTS) distribution, denoted as $GTS(\beta_{+},\beta_{-},\alpha_{+},\alpha_{-},\lambda_{+},\lambda_{-})$, is a finite variation process when $0 < \beta_{+} < 1$ and $0 < \beta_{-} < 1$. Any random variable X following such a GTS distribution can be represented as the difference of two independent subordinators:
$$X = X_{+} - X_{-},$$
where $X_{+} \sim TS(\beta_{+},\alpha_{+},\lambda_{+})$ and $X_{-} \sim TS(\beta_{-},\alpha_{-},\lambda_{-})$ are subordinators on the positive half-line and belong to the class of Tempered Stable distributions, denoted $TS(\beta,\alpha,\lambda)$.
Remark 6.
A subordinator is a one-dimensional Lévy process $\{T_{t}\}_{t \geq 0}$ such that $t \mapsto T_{t}$ is non-decreasing.
Lemma 1
(Characteristic Exponents of $X_{+}$ and $X_{-}$). The characteristic exponents of the subordinators $X_{+}$ and $X_{-}$ are
$$\psi_{+}(\xi) = \operatorname{Log} E\left[e^{i\xi X_{+}}\right] = \alpha_{+}\Gamma(-\beta_{+})\left((\lambda_{+}-i\xi)^{\beta_{+}}-\lambda_{+}^{\beta_{+}}\right), \qquad \psi_{-}(\xi) = \operatorname{Log} E\left[e^{-i\xi X_{-}}\right] = \alpha_{-}\Gamma(-\beta_{-})\left((\lambda_{-}+i\xi)^{\beta_{-}}-\lambda_{-}^{\beta_{-}}\right).$$
Proof. 
The Lévy–Khintchine representation [40] for non-negative Lévy processes is applied to $X_{+}$:
$$\begin{aligned}\psi_{+}(\xi) = \operatorname{Log} E\left[e^{iX_{+}\xi}\right] &= \int_{0}^{+\infty}\left(e^{iy\xi}-1\right)\frac{\alpha_{+}e^{-\lambda_{+}y}}{y^{1+\beta_{+}}}\,dy = \alpha_{+}\lambda_{+}^{\beta_{+}}\Gamma(-\beta_{+})\sum_{k=1}^{+\infty}\frac{\Gamma(k-\beta_{+})}{\Gamma(-\beta_{+})\,k!}\left(\frac{i\xi}{\lambda_{+}}\right)^{k}\\ &= \alpha_{+}\lambda_{+}^{\beta_{+}}\Gamma(-\beta_{+})\sum_{k=1}^{+\infty}\binom{\beta_{+}}{k}\left(-\frac{i\xi}{\lambda_{+}}\right)^{k} = \alpha_{+}\Gamma(-\beta_{+})\left((\lambda_{+}-i\xi)^{\beta_{+}}-\lambda_{+}^{\beta_{+}}\right).\end{aligned} \tag{18}$$
Additional information can be found in [25,26,27].    □
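The series step in the proof relies on the generalized binomial expansion $\sum_{k\geq1}\binom{\beta}{k}(-i\xi/\lambda)^{k} = (1-i\xi/\lambda)^{\beta}-1$, valid for $|\xi/\lambda| < 1$. A short numerical check with illustrative values (our own sketch):

```python
def gbinom(beta, k):
    """Generalized binomial coefficient: beta (beta-1) ... (beta-k+1) / k!."""
    out = 1.0
    for j in range(k):
        out *= (beta - j) / (j + 1)
    return out

beta, lam, xi = 0.6, 1.0, 0.3
z = -1j * xi / lam                    # series variable, |z| < 1
partial = sum(gbinom(beta, k) * z ** k for k in range(1, 200))
closed = (1 + z) ** beta - 1          # (1 - i xi / lam)^beta - 1
```

The partial sum agrees with the closed form to machine precision, supporting the term-by-term rearrangement in the proof.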

4.1. Exponentially Tilted Unilateral Stable Variable

A standard stable variable X with stability parameter $\beta \in (0,1)$ belongs to the class $S_{\beta}(1,1,0)$. Its corresponding Lévy measure is given by
$$V_{+}(dx) = \frac{\alpha}{x^{1+\beta}}\mathbf{1}_{\{x>0\}}\,dx. \tag{19}$$
We consider $f(\xi)$, the probability density function (PDF) of the standard stable variable X. For $\lambda > 0$, we introduce the exponentially tilted random variable $X_{\lambda}$ [12], defined by the tilted density function $f_{\lambda}(x)$:
$$f_{\lambda}(\xi) = \frac{e^{-\lambda\xi}}{E\left(e^{-\lambda X}\right)}\,f(\xi).$$
Theorem 5
(Characteristic exponent of $X_{\lambda}$). For the exponentially tilted random variable $X_{\lambda}$, the characteristic exponent satisfies
$$\psi(\xi) = \operatorname{Log} E\left[e^{i\xi X_{\lambda}}\right] = \alpha\Gamma(-\beta)\left((\lambda-i\xi)^{\beta}-\lambda^{\beta}\right).$$
Moreover, we have the following expressions for the PDF of $X_{\lambda}$ and its law:
$$f_{\lambda}(\xi) = e^{-\lambda\xi-\alpha\Gamma(-\beta)\lambda^{\beta}}\,f(\xi), \qquad X_{\lambda} \overset{d}{=} X_{+}. \tag{21}$$
Proof. 
X is the standard stable variable corresponding to $\lambda_{+} = 0$ and the Lévy density in Equation (19). Equation (18) becomes
$$\operatorname{Log}\left(E\left[e^{i\xi X}\right]\right) = \int_{0}^{+\infty}\left(e^{ix\xi}-1\right)\frac{\alpha}{x^{1+\beta}}\,dx = \alpha\Gamma(-\beta)(-i\xi)^{\beta}.$$
Applying the moment-generating function by taking $\xi = i\lambda$,
$$E\left(e^{-\lambda X}\right) = e^{\alpha\Gamma(-\beta)\lambda^{\beta}}.$$
By substitution, we have the PDF of X λ (21).
The last relation, $X_{\lambda} \overset{d}{=} X_{+}$, can be shown as follows:
$$E\left[e^{i\xi X_{\lambda}}\right] = \int_{0}^{+\infty}e^{i\xi u}f_{\lambda}(u)\,du = e^{-\alpha\Gamma(-\beta)\lambda^{\beta}}\int_{0}^{+\infty}e^{i(\xi+i\lambda)u}f(u)\,du = e^{-\alpha\Gamma(-\beta)\lambda^{\beta}}E\left[e^{i(\xi+i\lambda)X}\right] = e^{\alpha\Gamma(-\beta)\left((\lambda-i\xi)^{\beta}-\lambda^{\beta}\right)}. \tag{24}$$
Applying the logarithmic function to Equation (24), we obtain the characteristic exponent described in Equation (17).
Therefore, we have
$$X_{\lambda} \overset{d}{=} X_{+} \quad\text{and}\quad X_{\lambda} \sim TS(\beta,\alpha,\lambda);$$
that is, $X_{\lambda}$ also follows a Tempered Stable distribution $TS(\beta,\alpha,\lambda)$.    □
Remark 7.
In the literature, other papers [12,14,15] dealing with the exponentially tilted stable variable $X_{\lambda}$ use alternative parameterizations of the characteristic function, which can be recovered quickly by a transformation of parameters.
  • Alternative parameterization of the characteristic function:
    $$\psi(\xi) = \operatorname{Log} E\left(e^{i\xi X_{\lambda}}\right) = \alpha\Gamma(-\beta)\left((\lambda-i\xi)^{\beta}-\lambda^{\beta}\right) = \left((-\alpha\Gamma(-\beta))^{\frac{1}{\beta}}\lambda\right)^{\beta}-\left((-\alpha\Gamma(-\beta))^{\frac{1}{\beta}}\lambda - i\,(-\alpha\Gamma(-\beta))^{\frac{1}{\beta}}\xi\right)^{\beta}.$$
  • Transformation of parameters: we have a new parameter θ and a transformed variable:
    $$\theta = \left(-\alpha\Gamma(-\beta)\right)^{\frac{1}{\beta}}, \qquad X_{\lambda} \overset{d}{=} \theta\,X_{\theta\lambda}.$$
  • Setting θ = 1: without loss of generality, we set $\theta = 1$, and the Laplace transform of $X_{\lambda}$ becomes
    $$E\left(e^{-\xi X_{\lambda}}\right) = e^{\lambda^{\beta}-(\lambda+\xi)^{\beta}}.$$
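The parameter transformation can be verified numerically; note that $\Gamma(-\beta) < 0$ for $\beta \in (0,1)$, so $\theta = (-\alpha\Gamma(-\beta))^{1/\beta}$ is real and positive. A sketch with illustrative values (our own check, assuming this sign convention for θ):

```python
import math

beta, alpha, lam = 0.7, 1.3, 2.0
theta = (-alpha * math.gamma(-beta)) ** (1.0 / beta)  # Gamma(-beta) < 0 on (0,1)

def psi(xi):
    """Characteristic exponent in the (alpha, beta, lambda) parameterization."""
    return alpha * math.gamma(-beta) * ((lam - 1j * xi) ** beta - lam ** beta)

def psi_scaled(xi):
    """Same exponent written as (theta*lam)^beta - (theta*lam - i*theta*xi)^beta."""
    return (theta * lam) ** beta - (theta * lam - 1j * theta * xi) ** beta
```

Both forms agree pointwise because multiplying a complex number with positive real part by θ > 0 leaves the principal branch of the power unchanged.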

4.2. The Tempered Stable Distribution $TS(\beta,\alpha,\lambda)$

We now consider Theorem 5 with $\theta = \left(-\alpha\Gamma(-\beta)\right)^{\frac{1}{\beta}} = 1$. As previously shown in Equation (20), the Laplace transform of the exponentially tilted stable variable $X_{\lambda}$ is given by
$$E\left(e^{-\xi X_{\lambda}}\right) = e^{\lambda^{\beta}-(\lambda+\xi)^{\beta}}.$$
The probability density function (PDF) of $X_{\lambda}$ can be written as
$$f_{\lambda}(\xi) = e^{-\lambda\xi+\lambda^{\beta}}\,f(\xi),$$
where $f(\xi)$ is the PDF of the stable variable $X \sim S_{\beta}(1,1,0)$ with stability parameter $\beta \in (0,1)$. The analytical expression for $f(\xi)$ is given by Zolotarev's integral representation [36]:
$$f(\xi) = \frac{1}{\pi}\,\frac{\beta}{1-\beta}\int_{0}^{\pi}B(u)^{\frac{1}{1-\beta}}\,\xi^{-\frac{1}{1-\beta}}\,e^{-B(u)^{\frac{1}{1-\beta}}\,\xi^{-\frac{\beta}{1-\beta}}}\,du,$$
where $B(u)$ is defined as follows:
$$B(u) = \frac{\sin^{\beta}(\beta u)\,\sin^{1-\beta}\left((1-\beta)u\right)}{\sin(u)}.$$
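B(u) is well behaved at the left endpoint: as $u \to 0$, $B(u) \to \beta^{\beta}(1-\beta)^{1-\beta}$, while $B(u) \to \infty$ as $u \to \pi$. A quick numerical confirmation of this limit (our own check):

```python
import math

def B(u, beta):
    """Zolotarev's auxiliary function B(u)."""
    return (math.sin(beta * u) ** beta
            * math.sin((1 - beta) * u) ** (1 - beta)
            / math.sin(u))

beta = 0.7
small_u_limit = beta ** beta * (1 - beta) ** (1 - beta)  # limit of B(u) as u -> 0
```

Near u = 0 the three sines are all approximately linear, which gives the stated limit after the powers cancel the common factor of u.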
Remark 8.
$B(u)$ is obtained from $V_{\beta}(\phi,\delta)$ (13) by removing the transitional parameter $B_{\beta,\delta} = \cos\left(\delta_{B}\phi(\beta)\right)^{-\frac{1}{1-\beta}}$ and transforming the interval $\left(-\frac{\pi}{2},\frac{\pi}{2}\right)$ to $(0,\pi)$ using the change of variable $\phi = u - \frac{\pi}{2}$.
Substituting $f(\xi)$ into the expression for $f_{\lambda}(\xi)$, we obtain
$$f_{\lambda}(\xi) = e^{-\lambda\xi+\lambda^{\beta}}\,f(\xi) = \int_{0}^{\pi}\frac{\beta\,e^{\lambda^{\beta}}}{\pi(1-\beta)}\,B(u)^{\frac{1}{1-\beta}}\,\xi^{-\frac{1}{1-\beta}}\,e^{-\lambda\xi-B(u)^{\frac{1}{1-\beta}}\,\xi^{-\frac{\beta}{1-\beta}}}\,du.$$
We define a bivariate density function $f(\xi,u)$ over the domain $[0,\infty)\times[0,\pi]$ as
$$f(\xi,u) = \frac{\beta\,e^{\lambda^{\beta}}}{\pi(1-\beta)}\,B(u)^{\frac{1}{1-\beta}}\,\xi^{-\frac{1}{1-\beta}}\,e^{-\lambda\xi-B(u)^{\frac{1}{1-\beta}}\,\xi^{-\frac{\beta}{1-\beta}}}. \tag{28}$$
$f(\xi,u)$ plays a crucial role in constructing the Two-Dimensional Single Rejection algorithm [14] for simulating the exponentially tilted stable variable $X_{\lambda}$.
To further simplify the integral, we introduce the change of variable $y = \xi^{-\frac{\beta}{1-\beta}}$:
$$\int_{0}^{\infty}f(\xi,u)\,d\xi = \int_{0}^{\infty}\frac{e^{\lambda^{\beta}}\,B(u)^{\frac{1}{1-\beta}}}{\pi}\,e^{-\lambda y^{-\frac{1-\beta}{\beta}}-B(u)^{\frac{1}{1-\beta}}\,y}\,dy. \tag{29}$$
From the integrand in Equation (29), we define another bivariate density function $h(y,u)$ over $[0,\infty)\times[0,\pi]$:
$$h(y,u) = \frac{e^{\lambda^{\beta}}\,B(u)^{\frac{1}{1-\beta}}}{\pi}\,e^{-\lambda y^{-\frac{1-\beta}{\beta}}-B(u)^{\frac{1}{1-\beta}}\,y}. \tag{30}$$
$h(y,u)$ plays a crucial role in constructing the Double Rejection algorithm [12] to sample $X_{\lambda} = Y^{-\frac{1-\beta}{\beta}}$.
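The change of variable $y = \xi^{-\beta/(1-\beta)}$ can be checked numerically: at a fixed angle u, the ξ-integral of (28) must equal the y-integral of the integrand in (29). A sketch with illustrative values β = 0.7, λ = 1, using a trapezoidal rule on a logarithmic grid (our own check):

```python
import math

beta, lam, u = 0.7, 1.0, 1.0

# B(u)^{1/(1-beta)} at the fixed angle u
a = (math.sin(beta * u) ** beta * math.sin((1 - beta) * u) ** (1 - beta)
     / math.sin(u)) ** (1.0 / (1.0 - beta))

def f_xi(x):
    """Integrand of (28) in xi, at fixed u."""
    return (beta * math.exp(lam ** beta) / (math.pi * (1 - beta)) * a
            * x ** (-1.0 / (1 - beta))
            * math.exp(-lam * x - a * x ** (-beta / (1 - beta))))

def h_y(y):
    """Integrand of (30) in y, at fixed u."""
    return (math.exp(lam ** beta) * a / math.pi
            * math.exp(-lam * y ** (-(1 - beta) / beta) - a * y))

def integrate_log(g, t_lo=-12.0, t_hi=6.0, n=4000):
    """Trapezoidal rule in t = log x:  int g(x) dx = int g(e^t) e^t dt."""
    dt = (t_hi - t_lo) / n
    total = 0.0
    for i in range(n + 1):
        t = t_lo + i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * g(math.exp(t)) * math.exp(t)
    return total * dt

lhs = integrate_log(f_xi)   # integral of f(xi, u) over xi
rhs = integrate_log(h_y)    # integral of h(y, u) over y
```

The two quadratures coincide to within discretization error, confirming the Jacobian bookkeeping behind (29) and (30).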

5. Exact Sampling Method for Simulating $X_{\lambda}$

5.1. Standard Stable Rejection (SSR) Method

The Acceptance–Rejection Sampling technique [13] generates samples from a target distribution by first generating candidates from a more convenient distribution and then rejecting a random subset of the generated candidates. Let Y be a stable random variable with distribution $S_{\beta}(1,\delta,0)$. The Chambers–Mallows–Stuck (CMS) method (16) provides an efficient approach for simulating stable random variables [41]. To simulate an exponentially tilted random variable (or tempered stable variable) from Y, each candidate Y is accepted with probability $e^{-\lambda Y}$. If Y is rejected, independent trials follow until acceptance. It is well known [42] that the number of trials N before acceptance is geometrically distributed, and its mean is given by
$$E[N] = \frac{1}{E\left[e^{-\lambda Y}\right]} = \exp\left((\theta\lambda)^{\beta}\right), \qquad \theta = \left(-\alpha\Gamma(-\beta)\right)^{\frac{1}{\beta}}.$$
Hence, a smaller E [ N ] implies fewer trials on average, thus increasing the efficiency of the SSR technique.
Figure 1a,b illustrate how E [ N ] behaves under empirical return distributions:
  • For negative Bitcoin returns (black curve) and positive S&P 500 returns (red curve), E [ N ] grows slowly.
  • However, for negative S&P 500 returns (also shown in black), E [ N ] increases exponentially, leading to a prohibitively large number of trials.
The rapid increase in the expected number of trials results in a poor acceptance rate for large values of λ or for small values of β . The expected number of trials ( E [ N ] ) is referred to as the “expected complexity” in the literature [12,14,16,43]. The high expected complexity results in a high rejection rate, rendering the SSR algorithm inefficient and limiting its practical application.
The empirical values of E [ N ] for financial assets, including the S&P 500 index, SPY ETF, Bitcoin, and Ethereum, were computed and summarized in Table 3.
The transformed parameters θ and θλ exhibit significantly larger values for the S&P 500 index and the SPY ETF, with expected complexities of 629.582 and $6.8\times10^{11}$, respectively. These high values make the SSR algorithm inapplicable to these financial assets: the S&P 500 index and the SPY ETF.
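The blow-up is easy to quantify: since $\ln E[N] = (\theta\lambda)^{\beta}$, the reported expected complexity of $6.8\times10^{11}$ corresponds to $(\theta\lambda)^{\beta} = \ln(6.8\times10^{11}) \approx 27.2$, so even moderate growth in θλ is catastrophic. A small sketch with hypothetical θλ magnitudes (not the fitted asset parameters):

```python
import math

def expected_trials(theta_lam, beta):
    """SSR expected complexity: E[N] = exp((theta * lambda)^beta)."""
    return math.exp(theta_lam ** beta)

beta = 0.5
growth = [expected_trials(tl, beta) for tl in (1.0, 10.0, 100.0, 1000.0)]
# (theta * lambda)^beta implied by the reported complexity 6.8e11
implied_exponent = math.log(6.8e11)
```

The doubly exponential growth in θλ is what renders SSR unusable for the equity-index parameter estimates.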
Algorithm 1 outlines the SSR sampling approach [16,43,44], commonly used for sampling from exponentially tilted stable distributions, also known as generalized gamma distributions [41].
Algorithm 1 Standard Stable Rejection (SSR) sampling [14].
1: loop
2:    Draw $X \sim S_{\beta}(1,\delta,0)$    ▹ via Chambers–Mallows–Stuck (CMS) method (see (16))
3:    Draw $U \sim$ Uniform(0,1)    ▹ U and X independent
4:    if $U < e^{-\lambda X}$ then
5:        return $X_{\lambda} = X$    ▹ Accept: exponential tilting of the stable law
6:    else
7:        Reject X, restart Stage 1
8:    end if
9: end loop
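A compact implementation of Algorithm 1 is sketched below (our own illustration). For the one-sided stable draw we use Kanter's representation under the θ = 1 normalization, i.e. $E[e^{-sX}] = e^{-s^{\beta}}$, which equals the CMS draw of (16) with δ = 1 up to the deterministic scale $\cos(\pi\beta/2)^{1/\beta}$. The accepted output is validated against the Laplace transform $E[e^{-sX_{\lambda}}] = e^{\lambda^{\beta}-(\lambda+s)^{\beta}}$.

```python
import math
import random

def sample_one_sided_stable(beta, rng):
    """One-sided stable draw with Laplace transform E[e^{-sX}] = e^{-s^beta}
    (Kanter's representation, theta = 1 normalization)."""
    u = rng.uniform(0.0, math.pi)
    w = rng.expovariate(1.0)
    b = (math.sin(beta * u) ** beta * math.sin((1 - beta) * u) ** (1 - beta)
         / math.sin(u))
    return (b / w ** (1 - beta)) ** (1.0 / beta)

def ssr_sample(beta, lam, rng):
    """Algorithm 1: keep drawing stable candidates, accept with prob e^{-lam*X}."""
    while True:
        x = sample_one_sided_stable(beta, rng)
        if rng.random() < math.exp(-lam * x):
            return x

rng = random.Random(1)
beta, lam, n = 0.6, 1.0, 20_000
xs = [ssr_sample(beta, lam, rng) for _ in range(n)]

# Validate against the Laplace transform of the tilted law at s = 1
est = sum(math.exp(-x) for x in xs) / n
target = math.exp(lam ** beta - (lam + 1.0) ** beta)
```

For these parameters the expected complexity is only $e^{\lambda^{\beta}} = e \approx 2.7$ trials per draw; the Monte Carlo Laplace transform matches the closed form.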
To evaluate the performance of Algorithm 1, we generated a sample of 80,000 data points to mimic the two financial assets: Bitcoin and Ethereum. As shown in Figure 2b and Figure 3b, the Q-Q plots maintain smooth linear patterns, confirming the alignment of the empirical and theoretical distributions.
Remark 9.
The theoretical quantiles (represented by the red line in the Q-Q plot) and the observed quantiles were computed by the Enhanced Fast Fractional Fourier Transform (FRFT) scheme. The Enhanced Fast FRFT scheme improves the accuracy of the one-dimensional Fast FRFT by leveraging closed Newton–Cotes quadrature rules [45,46]. For more details on the methodology and its applications, refer to [21,47,48].
The Standard Stable Rejection (SSR) method is theoretically appealing but becomes impractical for large tempering parameters ($\lambda \to +\infty$) or a small stability index parameter ($\beta \to 0$) due to unbounded computational complexity. While useful for small-scale simulations with moderate parameters, its limitations make it unsuitable for financial modeling, where extreme parameter values are common.
The Fast Rejection Algorithm [16,43] significantly reduces the expected complexity of SSR sampling, down to logarithmic expected complexity. However, like SSR, its expected runtime is still unbounded, meaning performance can degrade for extreme values of β. By contrast, both the Double Rejection and Two-Dimensional Single Rejection algorithms ensure a bounded expected complexity for all $\beta \in (0,1)$ and $\lambda > 0$.

5.2. Double Rejection Method

The joint density function $h(x,u)$ in (30) serves as the target distribution for sampling the marginal random variable $X_{\lambda}$.
The marginal density of $X_{\lambda}$ is given by the integral in (32), which does not have a closed-form expression:
$$f(x) = \int_{0}^{\pi}h(x,u)\,du = \int_{0}^{\pi}\frac{e^{\lambda^{\beta}}\,B(u)^{\frac{1}{1-\beta}}}{\pi}\,e^{-\lambda x^{-\frac{1-\beta}{\beta}}-B(u)^{\frac{1}{1-\beta}}\,x}\,du. \tag{32}$$
Similarly, the marginal density function of U, given by (33), lacks a closed-form solution:
$$h(u) = \int_{0}^{\infty}h(x,u)\,dx = \frac{e^{\lambda^{\beta}}\,B(u)^{\frac{1}{1-\beta}}}{\pi}\int_{0}^{\infty}e^{-\lambda x^{-\frac{1-\beta}{\beta}}-B(u)^{\frac{1}{1-\beta}}\,x}\,dx. \tag{33}$$
To generate samples of $X_{\lambda}$, one approach is to first sample U and then generate X from the conditional density $h(x \mid U)$. However, the lack of a closed-form expression for $h(u)$ complicates this process. The Double Rejection method addresses this challenge by selecting an appropriate bivariate density function $g(x,u)$ and a univariate density function $k(u)$ such that
$$h(x,u) \leq g(x,u), \qquad g^{*}(u) = \int_{0}^{\infty}g(x,u)\,dx, \qquad g^{*}(u) \leq k(u).$$
These functions are chosen to facilitate efficient sampling of $X_{\lambda}$. The first-level rejection method is used to sample U from the marginal density $g^{*}(u)$, while the second-level rejection method generates the bivariate random variable (X,U) with density function $h(x,u)$. The Double Rejection method, introduced by [12], uses the following bivariate and univariate density functions:
$$g(y,u) = \frac{A(u)\,e^{\lambda^{\beta}}}{\pi}\begin{cases} e^{-A(u)^{\frac{1}{\beta}}m-\frac{1}{2}\left(\frac{y-m}{\gamma}\right)^{2}} & y \leq m \\[1mm] e^{-A(u)^{\frac{1}{\beta}}m} & m \leq y \leq m+\delta \\[1mm] e^{-A(u)^{\frac{1}{\beta}}m-\lambda(y-m-\delta)} & m+\delta \leq y, \end{cases} \qquad k(u) = \begin{cases} w_{1}f_{1}(u)+w_{2}f_{2}(u) & \gamma \leq 1 \\ w_{3}f_{3}(u)+w_{2}f_{2}(u) & \gamma \geq 1, \end{cases}$$
where the parameters and the weight coefficients are defined as follows:
  • $A(u) = B(u)^{\frac{1}{1-\beta}}$;  $m = \left(\frac{(1-\beta)\lambda}{\beta A(u)}\right)^{\beta}$;  $\gamma = \beta(1-\beta)\lambda^{\beta}$;  $\delta = \frac{m\beta}{A(u)}$.
  • $w_{1} = \xi\,\frac{\pi}{2}\,\sigma$, where $f_{1}(u)$ is the normal density with $\mu = 0$ and $\sigma = \sqrt{\frac{2}{\pi\gamma}}$.
  • $w_{2} = 2\psi\sqrt{\pi}$, where $f_{2}(u)$ is the beta density with $a = 1$ and $b = \frac{1}{2}$.
  • $w_{3} = \xi\pi$, where $f_{3}(u)$ is the uniform density over $(0,\pi)$.
The following theorem from [12] establishes that the expected complexity of the Double Rejection method is uniformly bounded for all values of β ( 0 , 1 ) and λ > 0 .
Theorem 6.
Let $R(\beta,\lambda)$ be the expected number of iterations required to generate a random variable $X_{\lambda}$ using the Double Rejection method. Then,
$$\sup_{\beta\in(0,1),\;\lambda>0} R(\beta,\lambda) \leq 8.11328125.$$
The proof of Theorem 6 can be found in [12].
The GTS parameter estimates presented in Table 1 and Table 2 were used to simulate the daily returns of each financial asset. For each asset, empirical quantiles were computed from the observed daily return samples. The Q-Q plots shown in Figure 4a,b and Figure 5a,b compare these sample quantiles with the theoretical quantiles derived from the Generalized Tempered Stable (GTS) distribution. The plots display a smooth non-linear pattern in the central region and noticeable discrepancies in the tails.
The graphical results suggest that the empirical return distributions exhibit heavier tails than those predicted by the GTS model, indicating limitations in the Double Rejection method’s ability to accurately capture the GTS model’s extreme values.
Despite its theoretically established complexity bounds, the Double Rejection technique proves inadequate for accurate GTS simulations, as it fails to capture the heavy-tailed characteristics of the distribution.
Algorithm 2 Double Rejection sampling.
Require: Marginal density g*(u), proposal density k(u), joint density h(x, u), and proposal g(x|u).
Ensure: Sample X ∼ h(x, u) marginalized over u.
1: Input: Parameters α, β, λ
2: loop
3:    Draw U ∼ k(u)                              ▹ Stage 1: sample U
4:    Draw W ∼ Unif(0, 1)                        ▹ U and W independent
5:    if W < g*(U)/k(U) then
6:        Accept U; set V ← W · k(U)/g*(U)       ▹ V ∼ Unif(0, 1) given acceptance
7:    else
8:        Reject U, retry Stage 1
9:    end if
10:   Draw X ∼ g(x|U)                            ▹ Stage 2: sample X
11:   if V < h(X, U)/g(X|U) then
12:       return $X_\lambda = X^{-\frac{1-\beta}{\beta}}$
13:   else
14:       Reject (U, X), restart Stage 1
15:   end if
16: end loop
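The two-stage control flow of Algorithm 2 can be sketched in Python. The densities below (a stage-1 target g*(u) = 2u under the dominating curve k(u) = 2, and a stage-2 conditional target 2(1 − x) under a uniform proposal with envelope 2) are illustrative stand-ins, not the Zolotarev-type densities of the Double Rejection method; the point is how the stage-1 uniform W is recycled into V for the stage-2 test.

```python
import random

def g_star(u):       # stage-1 target: density 2u on (0, 1) -- illustrative stand-in
    return 2.0 * u

def k(u):            # dominating curve for g*: constant 2 on (0, 1)
    return 2.0

def h_cond(x, u):    # stage-2 conditional target: density 2(1 - x) on (0, 1)
    return 2.0 * (1.0 - x)

def double_rejection(rng):
    """Two-stage (double) rejection: accept U first, then X given U."""
    while True:
        # Stage 1: sample U under the dominating curve k(u)
        while True:
            U = rng.random()                  # proposal draw under k
            W = rng.random()                  # U and W independent
            if W < g_star(U) / k(U):          # accept U
                V = W * k(U) / g_star(U)      # recycled uniform: V ~ Unif(0,1)
                break
        # Stage 2: sample X | U from the proposal g(x|u) = Unif(0, 1)
        X = rng.random()
        if V < h_cond(X, U) / 2.0:            # envelope constant 2 over uniform
            return X, U                       # else: restart Stage 1

rng = random.Random(42)
draws = [double_rejection(rng) for _ in range(20000)]
mean_x = sum(d[0] for d in draws) / len(draws)   # target mean E[X] = 1/3
mean_u = sum(d[1] for d in draws) / len(draws)   # target mean E[U] = 2/3
```

With these toy densities, an accepted U has density 2u (mean 2/3) and an accepted X has density 2(1 − x) (mean 1/3), which gives a quick sanity check on the two-stage logic.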

5.3. Two-Dimensional Single Rejection Method

The joint density function f ( x , u ) in (28) serves as the target distribution for sampling the bivariate variable ( X , U ) . The Two-Dimensional Single Rejection method [14] uses a proposal bivariate density function g ( x , u ) , along with a constant C ( β , λ ) such that
$f(x, u) \le C(\beta, \lambda)\, g(x, u).$
This method generates samples ( X , U ) from the target distribution f ( x , u ) by first generating candidates from the proposal density function g ( x , u ) and then rejecting a subset of these candidates based on the relationship in (36).
Algorithm 3 outlines the Two-Dimensional Single Rejection sampling approach.
Algorithm 3 Two-Dimensional Single Rejection sampling [14].
1: Input: Parameters α, β, λ
2: Set $C(\beta, \lambda) = \sup_{x,u} \frac{f(x,u)}{g(x,u)}$         ▹ Envelope constant ensuring f ≤ C g
3: loop
4:    Sample (X, U) ∼ g(x, u)               ▹ Draw a candidate from the proposal distribution
5:    Draw W ∼ Uniform(0, 1)                ▹ Independent of (X, U)
6:    if $W < \frac{f(X, U)}{C(\beta, \lambda)\, g(X, U)}$ then
7:        Accept (X, U)                     ▹ Accepted proposal under joint envelope
8:        return $X_\lambda = K(X)$          ▹ $K(X) = X/\lambda$ or $K(X) = B^{\frac{1}{\beta}}(u)\, X^{-\frac{1-\beta}{\beta}}$
9:    else
10:       Reject (X, U), return to Step 3
11:   end if
12: end loop
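The one-shot accept/reject test of Algorithm 3 reduces to a few lines once f, g, and the envelope constant are fixed. The joint target f(x, u) = x + u on (0, 1)² with a uniform proposal and C = 2 is an illustrative stand-in for the gamma-based proposals of Table 4.

```python
import random

def f_joint(x, u):           # illustrative joint target on (0,1)^2; integrates to 1
    return x + u

def g_joint(x, u):           # proposal: uniform density on (0,1)^2
    return 1.0

C = 2.0                      # envelope constant: sup f/g = 2

def single_rejection_2d(rng):
    """Accept or reject the pair (X, U) in one shot under the joint envelope."""
    while True:
        X, U = rng.random(), rng.random()        # (X, U) ~ g
        W = rng.random()                         # independent of (X, U)
        if W < f_joint(X, U) / (C * g_joint(X, U)):
            return X, U

rng = random.Random(7)
pairs = [single_rejection_2d(rng) for _ in range(20000)]
mean_x = sum(p[0] for p in pairs) / len(pairs)   # E[X] = 7/12 under f
```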
For a given ( β , λ ) , the two types of proposal bivariate density function, denoted as g ( x , u ) , are considered by the Two-Dimensional Single Rejection method:
  • First proposal bivariate density function: a product of a gamma density and a uniform density. This approach has expected complexities $C_1(\beta, \lambda)$ and $C_2(\beta, \lambda)$ and yields the target random variable $X_\lambda = X/\lambda$.
  • Second proposal bivariate density function: a product of a gamma density and a normal density. The expected complexities in this case are $C_3(\beta, \lambda)$ and $C_4(\beta, \lambda)$, and the target random variable is $X_\lambda = B^{\frac{1}{\beta}}(u)\, X^{-\frac{1-\beta}{\beta}}$.
Table 4 provides additional details on the implementation of the Two-Dimensional Single Rejection algorithm (Algorithm 3).
The expected complexities C 1 ( β , λ ) , C 2 ( β , λ ) , C 3 ( β , λ ) , and C 4 ( β , λ ) are further examined using empirical values based on parameters derived from financial assets, including the S&P 500 index, SPY ETF, Bitcoin, and Ethereum. As shown in Table 5, C 1 ( β , λ ) and C 3 ( β , λ ) demonstrate greater stability and are less affected by large parameter values, such as θ λ = 1.3 × 10 9 for the S&P 500 index and θ λ = 4.2 × 10 64 for the SPY ETF.
The overall complexity ( C ( β , λ ) ) of the Two-Dimensional Single Rejection is defined as follows:
$C(\beta, \lambda) = \min_{1 \le k \le 4} C_k(\beta, \lambda), \qquad (\beta, \lambda) \in (0, 1) \times (0, \infty).$
Theorem 7.
The complexity C ( β , λ ) in Equation (37) for Algorithm 3 is uniformly bounded, satisfying the following relationship:
$\sup_{\beta \in (0,1),\, \lambda > 0} C(\beta, \lambda) \le 4.2154.$
Consult [14] for the proof of Theorem 7.
As shown in Table 5, the overall complexity C(β, λ) remains stable and below the upper bound of 4.2154. However, the individual expected complexities C₃(β, λ) and C₄(β, λ) do not satisfy the bound in (38) for the S&P 500 index and SPY ETF data.
To evaluate the performance of Algorithm 3, we generated a sample of 8000 daily returns for each financial asset. The empirical distributions were compared against the theoretical distributions using Q-Q plots. The results of this distributional analysis are presented in Figure 6a,b and Figure 7a,b.
The Q-Q plots display a smooth linear pattern, indicating strong alignment between the empirical distributions and the theoretical GTS distribution.
The Two-Dimensional Single Rejection method outperforms other rejection-based approaches by offering greater stability across parameter ranges. It effectively captures both the central and tail behavior of the GTS distribution, making it more accurate and reliable for simulations.
Remark 10.
In a Q-Q plot, the quantiles of the observed distribution are plotted against those of the theoretical distribution. When the two distributions are similar, their quantiles will be nearly equal, and the points will closely follow the line X = Y . Any deviation from this line reveals differences between the distributions. The following scenarios, commonly observed in the literature [49,50,51,52,53], are worth noting:
1. 
Q-Q Plots and Skewed Distributions: A left-skewed distribution typically results in a concave downward Q-Q plot, while a right-skewed distribution shows a U-shaped or “humped” pattern. A symmetric distribution, on the other hand, will usually produce a symmetric and linear Q-Q plot around the center of the data.
2. 
Q-Q Plots and Short-Tailed Distributions: Short-tailed distributions may exhibit an S-shaped curve in the Q-Q plot. More specifically, the deviation from the straight line appears in the opposite direction at the tails compared to long-tailed distributions (above the line in the lower tail and below the line in the upper tail).
3. 
Q-Q Plots and Long-Tailed Distributions: Long-tailed distributions typically show deviations from the straight line at both ends of the Q-Q plot, with the lower tail curving downward and the upper tail curving upward.
4. 
S-shaped Curves in Q-Q Plots: An S-shaped curve can indicate several potential issues, such as longer or shorter tails than the theoretical distribution, or systematic differences between the distributions being compared.
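Scenario 3 can be verified numerically without plotting: compare the quantiles of a heavier-tailed law against normal quantiles and check the direction of the tail deviations. The standardized Laplace distribution below is an illustrative choice of long-tailed distribution, not one of the fitted models.

```python
import math
from statistics import NormalDist

def laplace_quantile(p, scale=1.0 / math.sqrt(2.0)):
    """Quantile of a Laplace law scaled to unit variance (b = 1/sqrt(2))."""
    if p < 0.5:
        return scale * math.log(2.0 * p)
    return -scale * math.log(2.0 * (1.0 - p))

std_normal = NormalDist()
probs = [0.001, 0.01, 0.25, 0.5, 0.75, 0.99, 0.999]
# Q-Q pairs: (theoretical normal quantile, "observed" heavier-tailed quantile).
# Per scenario 3, the lower-tail point sits below the line X = Y and the
# upper-tail point sits above it.
qq = [(std_normal.inv_cdf(p), laplace_quantile(p)) for p in probs]
```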

6. Series Representation Methods for Simulating GTS Lévy Processes

We develop a simulation framework for Generalized Tempered Stable (GTS) Lévy processes based on almost surely convergent series representations. These representations, originally proposed by Rosiński [19] and later adapted to Tempered Stable distributions in [54], offer theoretically sound methods for generating sample paths of Lévy processes. In this context, we focus on two principal constructions: the inverse Lévy measure series representation and the Shot Noise series representation. These representations will serve as the basis for simulating GTS random variates.
Let X ( t ) , t [ 0 ,   1 ] be a d-dimensional Lévy process with stationary, independent increments, stochastically continuous, and starting at zero. Its characteristic function is
E [ e i ξ X ( t ) ] = e t ϕ ( ξ ) ,
where
$\phi(\xi) = i\xi\mu + \int_{\mathbb{R}^d \setminus \{0\}} \left( e^{i\xi y} - 1 - i\xi y\, I(|y| \le 1) \right) \nu(dy),$
with
  • ϕ ( ξ ) : characteristic exponent;
  • μ R d : drift vector;
  • ν: Lévy measure, defining the intensity and distribution of jumps in the process, and satisfying $\int \left( |y|^2 \wedge 1 \right) \nu(dy) < \infty$;
  • ( μ , 0 , ν ) : characteristic triplet of X.
According to the Lévy–Itô decomposition theorem [19,55], X ( t ) admits the following decomposition:
$X(t) = t\mu + \int_{|x| \le 1} x \left[ N([0,t], dx) - t\,\nu(dx) \right] + \int_{|x| > 1} x\, N([0,t], dx),$
where N is a Poisson random measure on ( 0 , ) × ( R d \ { 0 } ) . Equivalently,
N = i = 1 δ ( U i , J i ) ,
with U i Unif [ 0 , 1 ] i.i.d. and J i R d as the associated jump sizes, independent from the U i . For further details, please refer to [19,56,57].
For n N , we define
$X_n(t) = t\mu + \int_{\frac{1}{n} \le |x| \le 1} x \left( N([0,t], dx) - t\,\nu(dx) \right) + \int_{|x| > 1} x\, N([0,t], dx).$
Substituting (42) into (43), we obtain
$X_n(t) = t\mu - t\, b_n + \sum_{i=1}^{k_n} J_i\, \mathbf{1}(U_i \le t),$
where
$k_n = \#\left\{ i : \tfrac{1}{n} \le |J_i| \right\}, \qquad b_n = \int_{\frac{1}{n} \le |x| \le 1} x\, \nu(dx).$
As n , this converges almost surely to
$X(t) = \lim_{n \to \infty} X_n(t) = t\mu + \sum_{i=1}^{\infty} \left( J_i\, \mathbf{1}(U_i \le t) - t\, c_i \right), \qquad c_i = b_i - b_{i-1},$
and we have
$X(t) = t\mu + \sum_{i=1}^{\infty} \left( J_i\, \mathbf{1}(U_i \le t) - t\, c_i \right).$
In the special case of GTS distribution where d = 1 , X G T S ( β + , β , α + , α , λ + , λ ) and the characteristic exponent becomes
$\phi(\xi) = i\xi\mu + \int_{\mathbb{R} \setminus \{0\}} \left( e^{i\xi y} - 1 \right) \nu(dy).$
The Lévy–Itô decomposition Equation (41) becomes Equation (48):
$X(t) = t\mu + \int_{|x| > 0} x\, N([0,t], dx) = t\mu + \int_{x > 0} x\, N([0,t], dx) + \int_{x < 0} x\, N([0,t], dx).$
Following the procedure developed previously (from Equation (41) to Equation (45)), Equation (46) becomes Equation (49):
$X(t) = t\mu + \sum_{i=1}^{\infty} J_i^{+}\, \mathbf{1}(U_i^{+} \le t) - \sum_{i=1}^{\infty} J_i^{-}\, \mathbf{1}(U_i^{-} \le t),$
with U i + Unif [ 0 , 1 ] i.i.d. and J i + R + as the associated jump sizes, independent from the U i + . Similarly on the left side, U i Unif [ 0 , 1 ] i.i.d. and J i R + are the associated jump sizes, independent from the U i .

6.1. Sampling GTS Distribution via the Inverse Lévy Measure Series Representation

Let X + T S ( β , α , λ ) , where T S ( β , α , λ ) denotes a Tempered Stable distribution. The Lévy measure of X + is concentrated on ( 0 , ) and the Tail Integral function is defined as follows:
$W^{+}(y) = \alpha \int_{y}^{\infty} x^{-1-\beta}\, e^{-\lambda x}\, dx.$
Using integration by parts, Equation (50) becomes
$W^{+}(y) = \frac{\alpha}{\beta}\, y^{-\beta} e^{-\lambda y} - \frac{\alpha \lambda^{\beta}}{\beta}\, \Gamma(1-\beta, \lambda y),$
where $\Gamma(a, y) = \int_{y}^{\infty} x^{a-1} e^{-x}\, dx$ is the upper incomplete gamma function.
The generalized inverse of the Tail Integral function $W^{+}$ is defined as follows:
$(W^{+})^{-1}(\Gamma) = \inf\left\{ y > 0 : W^{+}(y) < \Gamma \right\}.$
The series representation generated by the inverse Lévy measure method of X + has the following expression:
$X^{+}(t) = \sum_{i=1}^{\infty} (W^{+})^{-1}(\Gamma_i^{+})\, \mathbf{1}(U_i^{+} \le t), \qquad t \in [0, 1],$
$X^{+}(1) = \sum_{i=1}^{\infty} (W^{+})^{-1}(\Gamma_i^{+}),$
where
  • $\{\Gamma_j - \Gamma_{j-1}\} = E_j$ are i.i.d. exponential random variables with mean 1, and we set $\Gamma_0 = 0$;
  • { U j } are i.i.d. uniform random variables on [ 0 ,   1 ] ;
  • All the random elements { U j } , { Γ j } are mutually independent.
Algorithm 4 outlines the simulation of X + on [0, 1] using the series representation generated by the inverse Lévy measure method.
Algorithm 4 Series representation using the inverse Lévy measure method.
1: Input: Parameters α, β, λ and time horizon t = 1
2: loop
3:    Initialize: Γ ← 0, sum ← 0, y ← 1; set threshold ε = 10⁻⁵
4:    while y > ε do
5:        Draw v₁ ∼ Unif(0, 1)
6:        Γ ← Γ − ln v₁                     ▹ next Poisson arrival time
7:        Solve Γ = W⁺(y) numerically for y = (W⁺)⁻¹(Γ)
8:        Draw u ∼ Unif(0, 1)
9:        if u < t then
10:           sum ← sum + y
11:       end if
12:   end while
13:   return sum
14: end loop
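Algorithm 4 can be sketched for the one-sided case β = 1/2, where the tail integral (51) has a closed form through Γ(1/2, z) = √π·erfc(√z), so the numerical step reduces to a bisection inversion of the decreasing function W⁺. The parameters α = λ = 1 and the coarser truncation level ε are illustrative choices; E[X⁺(1)] = αλ^{β−1}Γ(1 − β) = √π gives a check on the simulated mean.

```python
import math
import random

ALPHA, BETA, LAM = 1.0, 0.5, 1.0   # illustrative TS(beta, alpha, lambda) parameters

def tail_W(y):
    """W+(y) in closed form for beta = 1/2, using Gamma(1/2, z) = sqrt(pi)*erfc(sqrt(z))."""
    upper_gamma = math.sqrt(math.pi) * math.erfc(math.sqrt(LAM * y))
    return (ALPHA / BETA) * y ** (-BETA) * math.exp(-LAM * y) \
        - (ALPHA * LAM ** BETA / BETA) * upper_gamma

def inverse_tail_W(g, lo=1e-14, hi=1e6):
    """Bisection solve of W+(y) = g; W+ is strictly decreasing on (0, inf)."""
    for _ in range(40):
        mid = math.sqrt(lo * hi)               # geometric bisection: y spans decades
        if tail_W(mid) > g:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

def sample_ts_path(rng, t=1.0, eps=1e-3):
    """Inverse Levy measure series (53): jumps y_i = (W+)^{-1}(Gamma_i), kept if U_i < t."""
    gamma, total, y = 0.0, 0.0, 1.0
    while y > eps:                             # eps coarser than the paper's 1e-5, for speed
        gamma -= math.log(rng.random())        # next Poisson arrival time Gamma_i
        y = inverse_tail_W(gamma)
        if rng.random() < t:
            total += y
    return total

rng = random.Random(1)
samples = [sample_ts_path(rng) for _ in range(250)]
mean_est = sum(samples) / len(samples)         # close to sqrt(pi) ~ 1.7725 (minus truncation bias)
```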
Daily return samples for Bitcoin, Ethereum, the S&P 500, and the SPY ETF were generated using the series representation in Equation (53). For each asset, empirical quantiles were computed and compared to theoretical quantiles through Q-Q plots.
As illustrated in Figure 8a,b and Figure 9a,b, the Q-Q plots exhibit smooth linear patterns, indicating strong agreement between the empirical and theoretical distributions for each asset.
The inverse Lévy measure method achieves exact simulation by inverting W + ( y ) , offering a precise series representation, strong empirical performance, and better tail-fitting than rejection-based techniques. However, because it relies on iterative numerical inversion, it is computationally intensive, making it practical only for moderate-scale simulations rather than large-scale applications.

6.2. Sampling GTS Distribution via the Shot Noise Series Representation

The inverse of the tail of the Lévy measure for a Tempered Stable distribution lacks a closed-form expression, as shown in Equation (50). As a result, the inverse Lévy measure method becomes practically intractable. To address this difficulty, Rosiński [5,18,19,54] proposes a series representation based on the generalized Shot Noise framework, which is more revealing about the structure of the Tempered Stable distribution:
Theorem 8
(Theorem 5.1 [18]). Let X ( t ) , t [ 0 ,   1 ] , be a tempered stable Lévy process in R d with L ( X ( 1 ) ) TS ( α , ν ; 0 ) ; if β ( 0 ,   1 ) , or ν is symmetric and β ( 0 ,   2 ) , then
$X(t) \overset{d}{=} \sum_{j=1}^{\infty} \min\left\{ \left( \frac{\beta\, \Gamma_j}{m(\nu)^{\beta}} \right)^{-\frac{1}{\beta}} \frac{V_j}{|V_j|},\; E_j\, U_j^{\frac{1}{\beta}}\, V_j \right\} \mathbf{1}_{\{T_j \le t\}}, \qquad t \in [0, 1],$
where the equality holds in the sense of finite-dimensional distributions, and the infinite series converges almost surely, uniformly in t [ 0 ,   1 ] . Here,
  • { T j } , { U j } are i.i.d. uniform random variables on [ 0 ,   1 ] .
  • $\{\Gamma_j - \Gamma_{j-1}\}$ and $\{E_j\}$ are i.i.d. exponential random variables with mean 1, and we set $\Gamma_0 = 0$.
  • { V j } are i.i.d. random vectors in R d with common distribution ν 1 , defined via the Lévy measure ν by
    $\nu_1(dx) = \frac{1}{m(\nu)^{\beta}}\, x^{\beta}\, \nu(dx), \qquad m(\nu) = \left( \int_{\{x > 0\}} x^{\beta}\, \nu(dx) \right)^{\frac{1}{\beta}}.$
  • All the random elements { T j } , { U j } , { Γ j } , { E j } , and { V j } are mutually independent.
Refer to [5] for the proof of Theorem 8.
For the one-sided case, let $X^{+} \sim TS(\beta, \alpha, \lambda)$ denote a Tempered Stable distribution. The Lévy measure of $X^{+}$ is concentrated on $(0, \infty)$ with dimension d = 1. Using the notation from [5], we have $X^{+} \sim TS_{\beta}^{0}(\alpha \lambda^{\beta} \delta_{\lambda^{-1}}, 0)$ for a time horizon t = 1:
$\nu = \alpha \lambda^{\beta}\, \delta_{\lambda^{-1}}, \qquad m(\nu) = \left( \int_{\{x > 0\}} x^{\beta}\, \nu(dx) \right)^{\frac{1}{\beta}} = \alpha^{\frac{1}{\beta}}, \qquad \nu_1(dx) = \lambda^{\beta} x^{\beta}\, \delta_{\lambda^{-1}}(dx).$
The common distribution ν 1 is related to the Dirac measure δ λ 1 , implying that V j = λ 1 is deterministic.
The sampling of daily returns for Bitcoin, Ethereum, the S&P 500, and the SPY ETF was performed using Equation (57), which corresponds to a version of Equation (56) with time horizon t = 1 :
$X^{+}(t) = \sum_{j=1}^{\infty} \min\left\{ \left( \frac{\alpha}{\beta\, \Gamma_j} \right)^{\frac{1}{\beta}},\; E_j\, U_j^{\frac{1}{\beta}}\, \lambda^{-1} \right\} \mathbf{1}_{\{T_j \le t\}}, \qquad t \in [0, 1],$
$X^{+}(1) = \sum_{j=1}^{\infty} \min\left\{ \left( \frac{\alpha}{\beta\, \Gamma_j} \right)^{\frac{1}{\beta}},\; E_j\, U_j^{\frac{1}{\beta}}\, \lambda^{-1} \right\}.$
Algorithm 5 describes how to simulate X + ( 1 ) using the Shot Noise series representation.
Algorithm 5 Shot Noise representation for X + ( 1 ) .
1: Input: Parameters α, β, λ and time horizon t = 1
2: loop
3:    Initialize: Γ ← 0, sum ← 0, w ← 1; set threshold ε = 10⁻⁵
4:    while w > ε do
5:        Draw v ∼ Unif(0, 1); update Γ ← Γ − ln v
6:        Compute w₁ ← (α/(βΓ))^{1/β}
7:        Draw v ∼ Unif(0, 1), u ∼ Unif(0, 1); set E ← −ln v
8:        Compute w₂ ← E u^{1/β} λ⁻¹; let w ← min(w₁, w₂)
9:        Draw T ∼ Unif(0, 1)                  ▹ jump time T_j
10:       if T < t then
11:           sum ← sum + w
12:       end if
13:   end while
14:   return sum
15: end loop
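Equation (57) translates into a short sampler. One deliberate tweak relative to Algorithm 5: the loop stops when the deterministic envelope term (α/(βΓ_j))^{1/β} falls below ε, so a single small random term E_j U_j^{1/β} λ⁻¹ cannot truncate the series prematurely. Parameters α = λ = 1 and β = 1/2 are illustrative, with E[X⁺(1)] = αλ^{β−1}Γ(1 − β) = √π again providing a check.

```python
import math
import random

ALPHA, BETA, LAM = 1.0, 0.5, 1.0     # illustrative TS(beta, alpha, lambda) parameters

def shot_noise_sample(rng, eps=1e-5):
    """Shot Noise series (57) at t = 1: sum of min(w1_j, w2_j) over Poisson arrivals.

    At t = 1 the indicator 1{T_j <= 1} is always 1, so no jump-time draw is needed.
    """
    gamma, total = 0.0, 0.0
    while True:
        gamma -= math.log(rng.random())                 # arrival time Gamma_j
        w1 = (ALPHA / (BETA * gamma)) ** (1.0 / BETA)   # deterministic, decreasing envelope
        if w1 < eps:                                    # envelope below threshold: stop
            return total
        e = -math.log(rng.random())                     # E_j ~ Exp(1)
        u = rng.random()                                # U_j ~ Unif(0, 1)
        w2 = e * u ** (1.0 / BETA) / LAM
        total += min(w1, w2)

rng = random.Random(3)
samples = [shot_noise_sample(rng) for _ in range(300)]
mean_est = sum(samples) / len(samples)                  # close to sqrt(pi) ~ 1.7725
```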
For each financial asset, empirical quantiles were computed and compared to theoretical quantiles through Q-Q plots, as illustrated in Figure 10 and Figure 11.
Figure 10a,b exhibit smooth linear patterns, indicating strong agreement between the empirical and theoretical GTS distributions. However, this pattern does not hold for the S&P 500 and SPY ETF daily returns, as shown in Figure 11a,b, where discrepancies between empirical and theoretical quantiles are notably larger in the left tails of the Q-Q plots.
Although the Shot Noise representation delivers theoretically sound convergence, its practical utility is constrained by two significant challenges: (1) dependence on careful truncation of the series expansion and (2) high sensitivity to the critical parameters (β, λ). The empirical results demonstrate satisfactory performance for cryptocurrencies such as Bitcoin and Ethereum, but the method proves unreliable for equities whose index parameter is very close to zero (β ≈ 0), such as the S&P 500 (β = 0.0886) and the SPY ETF (β = 0.0222). These inconsistencies, often leading to computational failures, reduce its practicality in financial modeling [58,59,60].

7. FRFT-Based Inverse Transform Sampling Method

The characteristic function (CF)-based inverse sampling method provides a powerful alternative to series representations and rejection sampling, especially when the CF is known but the probability density function (PDF) is unknown [1,5,6,7,8,9,10,11]. Inverse sampling via characteristic functions relies on numerical inversion of the CF to recover the distribution function, enabling sampling through interpolation or numerical root-finding. This idea can be traced back to foundational work on the Gil-Pelaez inversion formula [61], which expresses the cumulative distribution function (CDF) F ( x ) directly in terms of the CF ϕ ( t ) :
$F(x) = \frac{1}{2} - \frac{1}{\pi} \int_{0}^{\infty} \frac{\operatorname{Im}\left[ e^{-itx}\, \phi(t) \right]}{t}\, dt,$
where Im is the imaginary part of a complex number.
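The inversion formula can be exercised on a case where the answer is known in closed form. The sketch below recovers the standard normal CDF from its characteristic function φ(t) = e^{−t²/2} with a plain midpoint rule; the step size and truncation point are illustrative choices, not tuned values.

```python
import cmath
import math

def phi_normal(t):
    """Characteristic function of the standard normal: exp(-t^2 / 2)."""
    return cmath.exp(-0.5 * t * t)

def gil_pelaez_cdf(x, phi=phi_normal, step=0.005, t_max=12.0):
    """Gil-Pelaez: F(x) = 1/2 - (1/pi) * int_0^inf Im(e^{-itx} phi(t)) / t dt.

    Midpoint rule on (0, t_max]; the integrand is finite at t -> 0
    (it tends to -x for a zero-mean distribution).
    """
    total = 0.0
    t = 0.5 * step
    while t < t_max:
        total += (cmath.exp(-1j * t * x) * phi(t)).imag / t
        t += step
    return 0.5 - total * step / math.pi
```

Against the exact normal CDF, Φ(1) ≈ 0.841345, the quadrature above is accurate to well under 10⁻³ with these settings.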
Modern computational techniques enable efficient numerical inversion of the CF to approximate the CDF or PDF. Subsequently, inverse transform sampling can be carried out by interpolating the approximate CDF and inverting it numerically [62,63]. In this section, we construct a practical and flexible inverse sampling framework based on the following key components:
  • Numerical inversion of the characteristic function using the Enhanced Fast Fractional Fourier Transform (FRFT) algorithm [21];
  • Construction of a high-resolution approximate GTS cumulative distribution function (CDF) on a discrete grid;
  • Efficient inversion of the CDF using interpolation based on a fourth-degree polynomial approximation;
  • Validation through simulation of the daily returns for Bitcoin, Ethereum, the S&P 500 index, and the SPY ETF.
We recall the characteristic exponent of the GTS ( μ , β + , β , α + , α , λ + , λ ) distribution in Theorem 1:
$\psi(\xi) = \mu \xi i + \alpha_{+} \Gamma(-\beta_{+}) \left[ (\lambda_{+} - i\xi)^{\beta_{+}} - \lambda_{+}^{\beta_{+}} \right] + \alpha_{-} \Gamma(-\beta_{-}) \left[ (\lambda_{-} + i\xi)^{\beta_{-}} - \lambda_{-}^{\beta_{-}} \right].$
The Fourier Transform ( F [ f ] ) and the density function (f) of the GTS process Y can be written as follows:
F [ f ] ( ξ ) = e Ψ ( ξ ) ,
and
$f(y) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} e^{-iyx}\, \mathcal{F}[f](x)\, dx.$
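The characteristic exponent and Fourier transform above translate directly into complex arithmetic. The parameter values below are illustrative placeholders, not the fitted values of Tables 1 and 2; the checks confirm the basic characteristic-function properties F[f](0) = 1, |F[f](ξ)| ≤ 1, and conjugate symmetry.

```python
import cmath
import math

def gts_char_exponent(xi, mu, bp, bm, ap, am, lp, lm):
    """psi(xi) for GTS(mu, beta+, beta-, alpha+, alpha-, lambda+, lambda-), Eq. (58).

    Complex powers use the principal branch, which is safe here because
    lambda+ - i*xi and lambda- + i*xi have positive real part.
    """
    term_plus = ap * math.gamma(-bp) * ((lp - 1j * xi) ** bp - lp ** bp)
    term_minus = am * math.gamma(-bm) * ((lm + 1j * xi) ** bm - lm ** bm)
    return 1j * mu * xi + term_plus + term_minus

def gts_cf(xi, params):
    """Fourier transform of the density: F[f](xi) = exp(psi(xi))."""
    return cmath.exp(gts_char_exponent(xi, *params))

# Illustrative (mu, beta+, beta-, alpha+, alpha-, lambda+, lambda-), not fitted values.
params = (0.0, 0.5, 0.4, 1.2, 1.0, 2.0, 1.8)
```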

7.1. Fast FRFT and Composite Newton–Cotes Quadrature Rules

The conventional Fast Fourier Transform (FFT) algorithm is widely used to compute discrete convolutions and Discrete Fourier Transforms (DFTs) of sparse sequences, and to perform high-resolution trigonometric interpolation [64,65].
We assume that $\mathcal{F}[f](y)$ in (58) is zero outside the interval $[-\frac{a}{2}, \frac{a}{2}]$; $\beta = \frac{a}{M}$ is the step size of the M input values of $\mathcal{F}[f](y)$, defined by $y_j = (j - \frac{M}{2})\beta$ for $0 \le j < M$. Similarly, $\gamma$ is the step size of the M output values of $f(x_k)$, defined by $x_k = (k - \frac{M}{2})\gamma$ with $0 \le k < M$.
By choosing the step size $\beta$ on the input side and the step size $\gamma$ on the output side, we fix the FRFT parameter $\delta = \frac{\beta\gamma}{2\pi}$ and compute the density function f (59) at $x_{k+s}$ [27].
We have  
$\tilde{f}(x_{k+s}) = \frac{\gamma}{2\pi}\, e^{\pi i (k + s - \frac{M}{2}) M \delta}\, G_{k+s}\left( \mathcal{F}[f](y_j)\, e^{\pi i j M \delta},\, \delta \right), \qquad 0 \le s < 1,$
where G k + s ( x , δ ) is the FRFT setup on the M-long sequence x = ( x 1 , x 2 , , x M ) .
Numerical integration, also called the Direct Integration Method, is another way to evaluate the inverse Fourier integral in (59). One sophisticated procedure is the Newton–Cotes rule, in which the integrand is approximated by interpolating polynomials, usually in Lagrange form. See [28,47,48] for further development.
We assume $m = Qn$; $\beta = \frac{a}{m}$ is the step size of the m input values $\mathcal{F}[f](y_{j+Qp})$, defined by $y_{j+Qp} = (Qp + j - \frac{m}{2})\beta$ for $0 \le p < n$ and $0 \le j < Q$. Similarly, the output values $\hat{f}(x_{Ql+k+s})$ are defined at $x_{Ql+k+s} = (Ql + k + s - \frac{m}{2})\gamma$ with $0 \le l < n$, $0 \le k < Q$, and $0 \le s \le 1$, as follows:
$\hat{f}(x_{Ql+k+s}) = \beta \sum_{p=0}^{n-1} \sum_{j=0}^{Q} W_j\, e^{-i\, y_{j+Qp}\, x_{Ql+k+s}}\, \mathcal{F}[f](y_{j+Qp}),$
where $\{W_j\}_{0 \le j \le Q}$ are the quadrature weights. See [30,45,46] for further details on the weight computation.
For Q = 12, the implementation of the composite Newton–Cotes rule provides high accuracy. The error analysis in [46] shows that the global error is $O(h^{15})$.
The Q-point rule composite Newton–Cotes quadrature (61) is integrated into the FRFT algorithm (60) to produce the following FRFT of QN-long weighted sequence:
$\tilde{f}(x_{Ql+f+s}) = \frac{\beta}{2\pi}\, e^{\pi i \delta M (Ql + f + s - \frac{M}{2})}\, G_{Ql+f+s}\left( w_j\, \mathcal{F}[f](y_{j+Qp})\, e^{\pi i (j + Qp) M \delta},\, \delta \right).$
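The building block G_{k+s}(x, δ) used above is, by definition, the fractional transform Σ_j x_j e^{−2πi j (k+s) δ}. The direct O(M²) evaluation below fixes this convention for illustration; a fast implementation evaluates the same sums with three FFTs via the Bailey–Swarztrauber (Bluestein) factorization. The sign of the exponent is an assumption here and should be flipped if a given FRFT library uses the opposite convention.

```python
import cmath

def frft_direct(x, delta, k):
    """G_k(x, delta) = sum_j x_j * exp(-2*pi*i * j * k * delta); k may be fractional.

    Direct O(M^2)-style evaluation of a single output index, for reference only;
    the fast FRFT computes all outputs at once via FFT-based convolution.
    """
    return sum(
        xj * cmath.exp(-2j * cmath.pi * j * k * delta)
        for j, xj in enumerate(x)
    )

# For delta = 1/M the transform reduces to the ordinary DFT of the sequence.
M = 8
ones = [1.0] * M
# Closed-form check for a constant sequence: a finite geometric sum.
w = cmath.exp(-2j * cmath.pi * 0.04 * 2.5)   # k * delta = 0.1 (fractional k = 2.5)
geo = (1 - w ** M) / (1 - w)
```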

7.2. Enhanced Fast FRFT Scheme: Composite of Fast FRFT

The Enhanced Fast FRFT algorithm improves the accuracy of the one-dimensional Fractional Fourier Transform (FRFT) by leveraging closed Newton–Cotes quadrature rules. Using the weights derived from the composite Newton–Cotes rules of order QN, we demonstrate that the FRFT of a QN-long weighted sequence can be expressed as two composites of FRFTs [21].

7.2.1. Composite of FRFTs: FRFT of Q-Long Weighted Sequence and FRFT of N-Long Sequence

We assume that $\mathcal{F}[f](x)$ is zero outside the interval $[-\frac{a}{2}, \frac{a}{2}]$, $M = QN$ with $Q = 12$, and $\beta = \frac{a}{M}$ is the step size of the M input values $\mathcal{F}[f](y)$, defined by $y_{j+Qp} = (Qp + j - \frac{M}{2})\beta$ for $0 \le p < N$ and $0 \le j < Q$. Similarly, the output values of $f(x)$ are defined at $x_{Ql+f+s} = (Ql + f + s - \frac{M}{2})\gamma$ for $0 \le l < N$, $0 \le f < Q$, and $0 \le s \le 1$.
f ˜ Q N ( x Q l + f + s ) is the approximation of f ˜ ( x Q l + f + s ) and Equation (62) becomes
$\tilde{f}_{QN}(x_{Ql+f+s}) = \frac{\beta}{2\pi}\, e^{\pi i \delta M (Ql + f + s - \frac{M}{2})}\, G_{f+s}\left( G_{l+f+s}^{Q}(\xi_p, \alpha_1)\, w_j\, e^{2\pi i \delta (Ql - \frac{M}{2}) j},\, \alpha_2 \right),$
where $\xi_p = e^{\pi i M p Q \delta}\, \mathcal{F}[f](y_{j+Qp})$.
By comparing Equations (62) and (63), we come to the following conclusion:
$G_{Ql+f+s}\left( w_j\, \mathcal{F}[f](y_{j+Qp})\, e^{\pi i (j + Qp) M \delta},\, \delta \right) = G_{f+s}\left( G_{l+f+s}^{Q}(\xi_p, \alpha_1)\, w_j\, e^{2\pi i \delta (Ql - \frac{M}{2}) j},\, \alpha_2 \right).$

7.2.2. Composite of FRFTs: FRFT of N-Long Sequence and FRFT of Q-Long Weighted Sequence

$\tilde{f}_{NQ}(x_{Ql+f+s})$ is the approximation of $f(x_{Ql+f+s})$, and Equation (62) becomes
$\tilde{f}_{NQ}(x_{Ql+f+s}) = \frac{\beta}{2\pi}\, e^{\pi i \delta M (Ql + f + s - \frac{M}{2})}\, G_{l+f+s}^{Q}\left( G_{f+s}(z_j, \delta)\, e^{\pi i \delta M Q p},\, \delta Q^2 \right),$
where $\xi_p = G_{f+s}(z_j, \alpha_2)\, e^{\pi i \delta M Q p}$.
Additional methodological details can be found in [21].
Similarly, by comparing Equations (62) and (65), we come to the following conclusion:
$G_{Ql+f+s}\left( w_j\, \mathcal{F}[f](y_{j+Qp})\, e^{\pi i (j + Qp) M \delta},\, \delta \right) = G_{l+f+s}^{Q}\left( G_{f+s}(z_j, \delta)\, e^{\pi i \delta M Q p},\, \delta Q^2 \right).$
The numerical computation of (63) and (65) in Figure 12 shows that the composite FRFTs in (64) and (66) are equal not only algebraically but also numerically.

7.3. Simulation via the Characteristic Function

The GTS distribution lacks a closed-form probability density function, which makes direct sampling challenging. However, the Fourier Transform of the density function, F[f] in (58), is known in closed form, and the relationship in (69) provides the Fourier Transform of the cumulative distribution function, F[F]. The GTS distribution function F(x) in (70) is then computed from the inverse of the Fourier Transform of the cumulative distribution (F[F]):
$Y \sim GTS(\mu, \beta_{+}, \beta_{-}, \alpha_{+}, \alpha_{-}, \lambda_{+}, \lambda_{-}),$
$F(x) = \int_{-\infty}^{x} f(t)\, dt, \qquad f \text{ is the density function of } Y,$
$\mathcal{F}[F](x) = \frac{\mathcal{F}[f](x)}{ix} + \pi\, \mathcal{F}[f](0)\, \delta(x),$
$F(x) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \frac{\mathcal{F}[f](y)}{iy}\, e^{-ixy}\, dy + \frac{1}{2}.$
See Appendix A in [27] for the proof of (70).
Theorem 9.
Let a cumulative probability function F(x) be at least four times continuously differentiable, and let $\{F_j\}_{1 \le j \le m}$ be a sample of F(x) on a sequence of evenly spaced input values $\{x_j\}_{1 \le j \le m}$, with $F(x_j) = F_j$. Consider the $\alpha$-th quantile $x_\alpha$, defined by $F(x_\alpha) = \alpha$, with $x_i < x_\alpha < x_{i+1}$ and $F_i < F(x_\alpha) < F_{i+1}$. Then there exist a unique value $y \in (0, 1)$ and coefficients $b_0, b_1, b_2, b_3, b_4$ such that y is a solution of the degree-4 polynomial Equation (71):
b 0 + b 1 y + b 2 y 2 + b 3 y 3 + b 4 y 4 = 0 .
The $\alpha$-th quantile $x_\alpha$ can then be written as follows:
$x_\alpha = x_i + y\, (x_{i+1} - x_i).$
Refer to [25] for the proof of Theorem 9.
Given the sample F j 1 j m of a cumulative probability function F ( x ) , Algorithm 6 outlines the inverse transform sampling method.
Algorithm 6 Inverse sampling for a discrete distribution.
1: Input: Parameters α₊, β₊, λ₊, α₋, β₋, λ₋, and time horizon t = 1
2: loop
3:    Draw u ∼ Unif(0, 1)
4:    if u = F_i for some i then
5:        return x = x_i                     ▹ where F_i = F(x_i)
6:    else
7:        Find i such that F_i < u < F_{i+1}
8:        Compute coefficients:
$b_0 = -(u - F_i), \quad b_1 = \frac{F_{i+1} - F_{i-1}}{2}, \quad b_2 = \frac{F_{i-1} - 2F_i + F_{i+1}}{2},$
$b_3 = \frac{-F_{i-2} + 2F_{i-1} - 2F_{i+1} + F_{i+2}}{12}, \quad b_4 = \frac{F_{i-2} - 4F_{i-1} + 6F_i - 4F_{i+1} + F_{i+2}}{24}$
9:        Solve for $y \in (0, 1)$: $b_0 + b_1 y + b_2 y^2 + b_3 y^3 + b_4 y^4 = 0$
10:       return $x = x_i + y \cdot (x_{i+1} - x_i)$
11:   end if
12: end loop
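Algorithm 6 can be sketched end-to-end against a grid of CDF values. The logistic CDF below is a stand-in for the FRFT-computed GTS CDF, chosen because its exact quantile log(u/(1 − u)) makes the interpolation error easy to check; the quartic in y is solved by bisection on (0, 1), which suffices for the unique root guaranteed by Theorem 9. Grid bounds and step are illustrative.

```python
import math
from bisect import bisect_right

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

# Discrete CDF table standing in for the FRFT-computed GTS CDF.
H = 0.05
XS = [-8.0 + H * j for j in range(int(16.0 / H) + 1)]
FS = [logistic_cdf(x) for x in XS]

def inverse_sample(u):
    """Quartic-interpolated inverse transform: find x with F(x) ~ u (Algorithm 6)."""
    i = bisect_right(FS, u) - 1                 # F_i <= u < F_{i+1}
    b0 = FS[i] - u                              # = -(u - F_i)
    b1 = (FS[i + 1] - FS[i - 1]) / 2.0
    b2 = (FS[i - 1] - 2.0 * FS[i] + FS[i + 1]) / 2.0
    b3 = (-FS[i - 2] + 2.0 * FS[i - 1] - 2.0 * FS[i + 1] + FS[i + 2]) / 12.0
    b4 = (FS[i - 2] - 4.0 * FS[i - 1] + 6.0 * FS[i] - 4.0 * FS[i + 1] + FS[i + 2]) / 24.0
    poly = lambda y: b0 + y * (b1 + y * (b2 + y * (b3 + y * b4)))
    lo, hi = 0.0, 1.0                           # unique root of the quartic in (0, 1)
    for _ in range(60):                         # bisection: poly(0) <= 0 < poly(1)
        mid = 0.5 * (lo + hi)
        if poly(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    y = 0.5 * (lo + hi)
    return XS[i] + y * (XS[i + 1] - XS[i])
```

With a grid step of 0.05, the recovered quantiles agree with the exact logistic quantiles to far better than the step size, illustrating the benefit of the fourth-degree interpolation over plain linear inversion.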
The sampling of daily returns for Bitcoin, Ethereum, the S&P 500 index, and the SPY ETF was performed using the inverse sampling Algorithm 6. Empirical quantiles were computed to construct the Q–Q plots as shown in Figure 13b and Figure 14b.
Graphically, Figure 13a,b and Figure 14a,b display nearly perfect straight-line relationships: the plotted quantile points fall on the 45° diagonal. This tight alignment indicates that the simulated samples match the assumed GTS distributions for each asset, reproducing both the heavy tails and the skewness of the fitted models.
The Enhanced FRFT-based inversion sampling method leverages the Fast Fractional Fourier Transform (FRFT) combined with high-order Newton–Cotes quadrature rules and uses the characteristic function to generate highly accurate density estimates for Generalized Tempered Stable (GTS) distributions. This approach achieves the following:
  • Superior numerical stability and improved tail accuracy, especially in the presence of heavy-tailed and asymmetric behavior;
  • Elimination of truncation and rejection sampling, thereby avoiding common sources of bias and computational inefficiencies inherent in alternative approaches;
  • Consistent outperformance relative to existing methods in terms of both theoretical robustness and computational efficiency.
Empirical evaluations confirm the method’s robustness across a range of heavy-tailed and asymmetric distributions, making it particularly well-suited for applications in financial modeling and risk analysis.

8. Goodness-of-Fit Analysis

In addition to the graphical assessment provided by the Q–Q plot, we reinforce our analysis through formal statistical validation using the Kolmogorov–Smirnov (K–S) and Anderson–Darling goodness-of-fit tests. These tests quantitatively evaluate the alignment between the empirical return data and the GTS distribution. By computing test statistics and corresponding p-values, they help determine whether the observed deviations are sufficiently small to consider the GTS model a plausible generator of the return data.

8.1. Kolmogorov–Smirnov (KS) Test

Given a sample of daily returns { y 1 , y 2 , , y m } of size m, and the corresponding empirical cumulative distribution function F m ( x ) for each financial asset, the Kolmogorov–Smirnov (KS) test is performed to assess the goodness-of-fit. The null hypothesis H 0 assumes that the sample { y 1 , y 2 , , y m } is drawn from the GTS distribution with cumulative distribution function F ( x ) . To carry out the test, the theoretical distribution function F ( x ) must be computed explicitly. The two-sided KS goodness-of-fit statistic D m is defined as
$D_m = \sup_{x} \left| F(x) - F_m(x) \right|.$
The distribution of Kolmogorov’s goodness-of-fit statistic, D m , has been extensively studied in the statistical literature. It has been established by Massey [66] that the distribution of D m is independent of the underlying theoretical cumulative distribution function F ( x ) , provided the null hypothesis H 0 holds, that is, the data sample { y 1 , y 2 , , y m } is drawn from a continuous distribution. Extensions to discrete, mixed, and discontinuous distributions have also been addressed in more recent work [67].
Under the null hypothesis H 0 , it was first shown by Kolmogorov [68] and later refined by Smirnov [69], that the asymptotic distribution of the scaled test statistic m D m converges to the so-called Kolmogorov distribution as the sample size m . The cumulative distribution function (CDF) of this limiting distribution is given by the series expansion,
$\lim_{m \to \infty} \Pr\left( \sqrt{m}\, D_m \le x \right) = 1 - 2 \sum_{k=1}^{\infty} (-1)^{k-1} e^{-2 k^2 x^2} = \frac{\sqrt{2\pi}}{x} \sum_{k=1}^{\infty} e^{-\frac{(2k-1)^2 \pi^2}{8 x^2}},$
where the first expression is the classical Kolmogorov series [68], and the second form is derived via the transformation of Jacobi theta functions [70].
As illustrated in Figure 15, the distribution of the asymptotic Kolmogorov statistic m D m is positively skewed. Marsaglia et al. [70] further analyzed its moments, showing that the mean and standard deviation of this distribution are given by
$\mu = \sqrt{\frac{\pi}{2}}\, \log(2) \approx 0.8687, \qquad \sigma = \sqrt{\frac{\pi^2}{12} - \mu^2} \approx 0.2603.$
At the 5% significance level, the critical value d = 1.3581 is such that the shaded region under the probability density function to its right has area 0.05.
The KS test is one of the most widely used goodness-of-fit tests based on the empirical distribution function (EDF). It provides a non-parametric method for testing whether a sample comes from a specified continuous distribution. Comprehensive comparisons and applications of EDF-based statistics, including the KS test, can be found in [71,72,73].
To evaluate the test, the p-value associated with the observed KS statistic, D ^ m , is calculated using the asymptotic distribution defined in Equation (74). More precisely, the p-value can be expressed as follows:
$p\text{-value} = \Pr\left( D_m > \hat{D}_m \mid H_0 \right) = 1 - \Pr\left( \sqrt{m}\, D_m \le \sqrt{m}\, \hat{D}_m \right).$
Here, the p-value represents the probability of observing a test statistic as extreme as, or more extreme than, D ^ m , under the assumption that the null hypothesis H 0 is true. In the context of goodness-of-fit testing, a small p-value (less than 5%) indicates that the empirical distribution F m ( x ) deviates significantly from the theoretical distribution F ( x ) , casting doubt on the validity of H 0 .
The value D ^ m represents the observed realization of the KS test statistic D m , computed from the empirical sample { y 1 , y 2 , , y m } . Following the method described in [74], the estimator D ^ m is defined as
$\hat{D}_m = \max\left\{ \sup_{0 \le j \le P} \left| F(x_j) - F_m(x_j) \right|,\; \sup_{1 \le j \le P} \left| F(x_j) - F_m(x_{j-1}) \right| \right\}.$
To illustrate the computation, consider the SSR method to sample a Bitcoin (BTC) daily return. KS statistics and the p-values are provided as follows:
$m = 80{,}000, \qquad \hat{D}_m = 0.0028, \qquad p\text{-value} = \Pr\left( \sqrt{m}\, D_m > 0.7842 \mid H_0 \right) = 57\%.$
The p-value of 57% suggests that the null hypothesis H 0 , that the data follow the theoretical GTS distribution, cannot be rejected at conventional significance levels.
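The computation above takes only a few lines: the Kolmogorov survival function from (74) supplies the p-value, and the statistic follows (77) for a sorted sample against a hypothesized CDF. The three-point sample below is a toy illustration, not asset data.

```python
import math

def kolmogorov_sf(x, terms=100):
    """P(sqrt(m) * D_m > x) under H0: 2 * sum_{k>=1} (-1)^{k-1} exp(-2 k^2 x^2)."""
    return 2.0 * sum(
        (-1) ** (k - 1) * math.exp(-2.0 * k * k * x * x) for k in range(1, terms + 1)
    )

def ks_statistic(sample, cdf):
    """Two-sided D_m: max over the sorted sample of |F - j/m| and |F - (j-1)/m|."""
    xs = sorted(sample)
    m = len(xs)
    return max(
        max(abs(cdf(x) - (j + 1) / m), abs(cdf(x) - j / m))
        for j, x in enumerate(xs)
    )

# Worked example from the text: sqrt(m) * D_m ~ 0.7842 gives a p-value near 57%.
p_value = kolmogorov_sf(0.7842)
```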
Similar calculations were carried out for each financial asset and each sampling technique. The resulting KS statistics $\hat{D}_m$, $\sqrt{m}\, D_m$ and their corresponding p-values are summarized in Table 6. The KS test results show that the FRFT-based inversion method performed the best, with high p-values (63–99%) across all assets (Bitcoin, Ethereum, S&P 500, SPY ETF), indicating an excellent distributional fit. The Two-Dimensional Single Rejection method showed moderate performance, with p-values ranging from 41% to 72%, while the Inverse Tail Integral approach exhibits greater variability, with p-values ranging from 17% to 72%. In contrast, both the Double Rejection and Shot Noise Representation methods perform poorly, with maximum p-values of just 0.008 and 0.455, respectively, failing to adequately capture the distributional properties. The Standard Stable Rejection method, while acceptable for cryptocurrencies (with p-values ranging from 0.411 to 0.570), proves computationally impractical for equity assets. These findings suggest that Fourier-based methods outperform traditional rejection sampling approaches in both statistical accuracy and computational robustness for Tempered Stable distributions.

8.2. Anderson–Darling Test

The Anderson–Darling (AD) test [71,75,76] is a widely used goodness-of-fit test that assesses whether the sample distribution aligns with a specified theoretical distribution. This test belongs to the family of quadratic EDF statistics [72,73], which are designed to quantify the discrepancy between the EDF and the hypothesized CDF. It is mathematically defined as
$$m\int_{-\infty}^{+\infty}\left(F_m(x)-F(x)\right)^2 w(x)\,dF(x),$$
where m is the sample size, $w(x)$ is the weighting function, and $F_m(x)$ is the empirical distribution function of the sample.
When the weighting function is $w(x)=1$, the statistic in Equation (79) becomes the Cramér–von Mises statistic [77]. In contrast, the Anderson–Darling statistic [78] is derived by choosing the weighting function $w(x)=\frac{1}{F(x)\left(1-F(x)\right)}$, where $F(x)$ is the hypothesized cumulative distribution function. The key difference between the two statistics lies in the weighting applied to the data. Specifically, the Anderson–Darling (AD) statistic places more weight on the tails of the distribution, making it more sensitive to deviations in the extreme values than the Cramér–von Mises statistic, which treats the distribution more uniformly across the entire range. The AD statistic is mathematically defined as
$$A_m^2=m\int_{-\infty}^{+\infty}\frac{\left(F_m(x)-F(x)\right)^2}{F(x)\left(1-F(x)\right)}\,dF(x).$$
It can be shown that the asymptotic distribution of the AD statistic, $A_m^2$, is independent of the theoretical distribution under the null hypothesis. The asymptotic distribution [79,80,81] is defined as follows:
$$G(x)=\lim_{m\to\infty}\Pr\!\left(A_m^2<x\right)=\sum_{j=0}^{\infty}a_j\,(x\,b_j)^{-\frac{1}{2}}\exp\!\left(-\frac{b_j}{x}\right)\int_0^{\infty}f_j(y)\,\exp\!\left(-y^2\right)dy,$$
$$f_j(y)=\exp\!\left(\frac{1}{8}\,\frac{x\,b_j}{y^2x+b_j}\right),\qquad a_j=(-1)^j\binom{-\tfrac{1}{2}}{j}\sqrt{2\pi}\,(4j+1)=\frac{\sqrt{2}\,(4j+1)\,\Gamma\!\left(j+\tfrac{1}{2}\right)}{j!},\qquad b_j=\frac{(4j+1)^2\pi^2}{8}.$$
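Numerically, $G(x)$ can be evaluated term by term, with the inner integral computed by quadrature. The sketch below (illustrative Python, not the authors' MATLAB implementation; function names are ours) uses the equivalent classical form of the series, in which the integral runs over $w=y\sqrt{x/b_j}$ and the coefficient $(-1)^j\binom{-1/2}{j}=\Gamma(j+\tfrac12)/(\sqrt{\pi}\,j!)$ is positive; at the 5% threshold $x=2.4941$ quoted below it should return a value close to 0.95.

```python
import math

def _integral(f, a, b, n=2000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def ad_cdf(z, terms=5):
    """Asymptotic CDF G(z) = lim_m Pr(A_m^2 < z) of the Anderson-Darling statistic."""
    total = 0.0
    for j in range(terms):
        # (-1)^j * C(-1/2, j) = Gamma(j + 1/2) / (sqrt(pi) * j!)  -- positive
        coef = math.gamma(j + 0.5) * (4 * j + 1) / (math.sqrt(math.pi) * math.factorial(j))
        b = (4 * j + 1) ** 2 * math.pi ** 2 / 8.0
        # integrand decays like exp(-b w^2 / z); [0, 12] is ample for z near 2.5
        inner = _integral(lambda w: math.exp(z / (8.0 * (w * w + 1.0)) - b * w * w / z),
                          0.0, 12.0)
        total += coef * math.exp(-b / z) * inner
    return math.sqrt(2.0 * math.pi) / z * total

g = ad_cdf(2.4941)   # the 5% critical threshold quoted in the text
```

The series converges extremely fast: for moderate $z$, the $j=0$ term already carries essentially all of the mass.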
As shown in Figure 16, the asymptotic distribution of the Anderson–Darling statistic, $A_m^2$, is positively skewed, with mean and standard deviation [76,82] given as follows:
$$\mu=1,\qquad \sigma=\sqrt{\tfrac{2}{3}\left(\pi^2-9\right)}\approx 0.761.$$
At a 5% risk level, the associated risk threshold ($d=2.4941$) corresponds to the area of the shaded region under the probability density function. The p-value of the test statistic, $A_m^2$, is defined as follows:
$$p\text{-value}=\Pr\!\left(A_m^2>\hat{A}_m^2\mid H_0\right)=1-G\!\left(\hat{A}_m^2\right).$$
To compute the AD statistic, $A_m^2$, in Equation (80), the sample of daily returns $\{y_1, y_2, \ldots, y_m\}$ of size m is arranged in ascending order:
$$y_{(1)}<y_{(2)}<\cdots<y_{(m)}.$$
The Anderson–Darling statistic [80] is then computed as follows:
$$A_m^2=-m-\frac{1}{m}\sum_{j=1}^{m}\left[(2j-1)\log F\!\left(y_{(j)}\right)+\left(2(m-j)+1\right)\log\!\left(1-F\!\left(y_{(j)}\right)\right)\right].$$
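As a worked illustration of this order-statistic formula (a Python sketch; the paper's actual implementation is MATLAB, and `ad_statistic` is our naming):

```python
import math

def ad_statistic(sample, cdf):
    """Anderson-Darling statistic A_m^2 of `sample` against a theoretical CDF,
    computed from the order statistics: A^2 = -m - (1/m) * sum of the log terms."""
    y = sorted(sample)
    m = len(y)
    s = 0.0
    for j, yj in enumerate(y, start=1):
        Fj = cdf(yj)   # theoretical CDF at the j-th order statistic
        s += (2 * j - 1) * math.log(Fj) + (2 * (m - j) + 1) * math.log(1.0 - Fj)
    return -m - s / m

# Toy check against a Uniform(0,1) null, F(x) = x:
a2 = ad_statistic([0.1, 0.4, 0.7], lambda x: x)
```

Note that the logarithms require $0<F(y_{(j)})<1$, which is why tail accuracy of the fitted GTS CDF matters so much for this test.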
For each financial asset and each sampling method, the AD statistic, as defined in Equation (84), is computed along with the corresponding p-value, as shown in Table 7. The AD test results reveal clear performance differences among the sampling methods. The FRFT-based inversion method leads, with p-values exceeding 94% for Bitcoin, Ethereum, and the S&P 500, and 54% for the SPY ETF, demonstrating the best accuracy in simulating GTS distributions. The Inverse Tail Integral method appears promising for Ethereum (97% p-value) but underperforms elsewhere (p-values between 43% and 49%). The Two-Dimensional Single Rejection method yields modest results (p-values from 55% to 71%), while the Standard Stable Rejection method is effective mainly for cryptocurrencies (74% to 76%). Both the Double Rejection and Shot Noise methods fail, with p-values of 0% for most assets. As with the KS test, the AD results single out the FRFT method as the most reliable for capturing heavy-tailed behavior in financial data.

9. Conclusions

This paper addressed the challenge of simulating random variates from the Generalized Tempered Stable (GTS) distribution, a flexible framework for modeling heavy-tailed data. While GTS models are theoretically powerful, their lack of closed-form densities has limited practical simulation. We provided a comprehensive and systematic comparison of existing simulation techniques, spanning rejection-based algorithms, series representations, and numerical inversion methods.
Our findings demonstrate that classical rejection algorithms, such as the Standard Stable Rejection method, have theoretical value but exhibit poor performance under extreme parameter conditions ($\lambda_{\pm}\to+\infty$, $\beta_{\pm}\to 0$). While enhanced variants like the Double Rejection method demonstrate improved efficiency, they still fall short in accurately capturing tail behavior, a critical requirement for risk-sensitive financial applications. Similarly, series representation techniques, such as the Shot Noise series representation, provide valuable theoretical insight into GTS processes but exhibit numerical instability at extreme parameter values, limiting their practical utility.
The comparative evaluation identifies three particularly effective methods: the Two-Dimensional Single Rejection technique, the inverse Lévy measure series representation, and the Fast Fractional Fourier Transform (FRFT)-based inversion approach. Among these, the enhanced FRFT-based inversion method consistently achieves the highest accuracy in goodness-of-fit tests (Kolmogorov–Smirnov and Anderson–Darling) across all tested assets (Bitcoin, Ethereum, the S&P 500, and the SPY ETF), while maintaining competitive computational efficiency. This positions the FRFT approach as the most reliable and versatile option for GTS simulation, particularly in risk-sensitive financial applications where accurate tail modeling is essential.

Our research contributes to the literature in three main ways. First, it offers a unified and systematic comparison of available GTS simulation methods, clearly articulating their respective strengths and limitations. Second, it introduces an enhanced FRFT-based algorithm that sets a new standard for accurate and efficient GTS simulation. Third, it highlights practical implications for financial modeling, where capturing tail risk with high fidelity is essential.
The development of efficient simulation methods for multivariate GTS distributions remains a crucial challenge. Future research should focus on extending the FRFT approach to higher dimensions while preserving computational efficiency. This advancement would facilitate more realistic modeling of portfolio risk and the dependence structures among financial assets.

Author Contributions

Conceptualization, A.N. and D.M.; methodology, A.N. and D.M.; visualization, A.N. and D.M.; resources, A.N. and D.M.; and writing—original draft and editing, A.N. and D.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

  • The data supporting the findings of this study are available in [26]: Table 1 and Table 2 (GTS Parameter Estimation for four financial assets).
  • Detailed MATLAB R2023b code and implementation specifics for Algorithms 1–6, which underpin the results presented in this paper, are available from the authors via email upon request. The code will be provided promptly to individuals or institutions providing a clear and reasonable justification for their intended use.

Acknowledgments

The authors would like to thank the University of Limpopo for supporting the publication of this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cont, R.; Tankov, P. Financial Modelling with Jump Processes; Chapman & Hall/CRC: Boca Raton, FL, USA, 2004. [Google Scholar]
  2. Samorodnitsky, G.; Taqqu, M.S. Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance; CRC Press: New York, NY, USA, 1994. [Google Scholar]
  3. Nzokem, A.H. Comparing Bitcoin and Ethereum Tail Behavior via QQ Analysis of Cryptocurrency Returns. arXiv 2025, arXiv:2507.01983. [Google Scholar]
  4. Nolan, J.P. Univariate Stable Distributions: Models for Heavy Tailed Data; Springer Series in Operations Research and Financial Engineering: Cham, Switzerland, 2020. [Google Scholar] [CrossRef]
  5. Rosiński, J. Tempering Stable Processes. Stoch. Process. Their Appl. 2007, 117, 677–707. [Google Scholar] [CrossRef]
  6. Küchler, U.; Tappe, S. Tempered Stable Distributions and Processes. Stoch. Process. Their Appl. 2013, 123, 4256–4293. [Google Scholar] [CrossRef]
  7. Carr, P.; Geman, H.; Madan, D.B.; Yor, M. Stochastic Volatility for Lévy Processes. Math. Financ. 2003, 13, 345–382. [Google Scholar] [CrossRef]
  8. Rachev, S.T.; Kim, Y.S.; Bianchi, M.L.; Fabozzi, F.J. Stable and Tempered Stable Distributions. In Financial Models with Lévy Processes and Volatility Clustering; The Frank J. Fabozzi Series; Rachev, S.T., Kim, Y.S., Bianchi, M.L., Fabozzi, F.J., Eds.; John Wiley & Sons: Hoboken, NJ, USA, 2011; Volume 187, pp. 57–85. [Google Scholar] [CrossRef]
  9. Boyarchenko, S.I.; Levendorskiĭ, S.Z. Non-Gaussian Merton-Black-Scholes Theory; Advanced Series on Statistical Science & Applied Probability; World Scientific: Singapore, 2002; Volume 9. [Google Scholar] [CrossRef]
  10. Küchler, U.; Tappe, S. Bilateral Gamma Distributions and Processes in Financial Mathematics. Stoch. Process. Their Appl. 2008, 118, 261–283. [Google Scholar] [CrossRef]
  11. Nzokem, A.H. Self-Decomposable Laws Associated with General Tempered Stable (GTS) Distribution and Their Simulation Applications. arXiv 2024, arXiv:2405.16614. [Google Scholar] [CrossRef]
  12. Devroye, L. Random Variate Generation for Exponentially and Polynomially Tilted Stable Distributions. ACM Trans. Model. Comput. Simul. 2009, 19, 1–20. [Google Scholar] [CrossRef]
  13. Glasserman, P. Generating Random Numbers and Random Variables. In Monte Carlo Methods in Financial Engineering; Springer: New York, NY, USA, 2003; pp. 39–77. [Google Scholar] [CrossRef]
  14. Qu, Y.; Dassios, A.; Zhao, H. Random Variate Generation for Exponential and Gamma Tilted Stable Distributions. ACM Trans. Model. Comput. Simul. 2021, 31, 1–21. [Google Scholar] [CrossRef]
  15. Dassios, A.; Qu, Y.; Zhao, H. Exact simulation for a class of tempered stable and related distributions. ACM Trans. Model. Comput. Simul. (TOMACS) 2018, 28, 1–21. [Google Scholar] [CrossRef]
  16. Hofert, M. Sampling Exponentially Tilted Stable Distributions. ACM Trans. Model. Comput. Simul. 2011, 22, 3. [Google Scholar] [CrossRef]
  17. Nzokem, A.H.; Maposa, D. Exact Simulation for General Tempered Stable Random Variates: Review and Empirical Analysis. In Proceedings of the 2025 International Conference on Artificial Intelligence, Computer, Data Sciences and Applications (ACDSA), Antalya, Turkiye, 7–9 August 2025; IEEE: Piscataway, NJ, USA, 2025; pp. 1–6. [Google Scholar] [CrossRef]
  18. Rosiński, J. Tempered Stable Processes. In Proceedings of the Second MaPhySto Conference on Lévy Processes: Theory and Applications, Aarhus, Denmark, 21–25 January 2002; Volume 22, pp. 215–220. Available online: http://www.maphysto.dk/publications/MPS-misc/2002/22.pdf (accessed on 20 May 2025).
  19. Rosiński, J. Series Representations of Lévy Processes from the Perspective of Point Processes. In Lévy Processes: Theory and Applications; Barndorff-Nielsen, O.E., Resnick, S.I., Mikosch, T., Eds.; Birkhäuser: Boston, MA, USA, 2001; pp. 401–415. [Google Scholar] [CrossRef]
  20. Shephard, N. From Characteristic Function to Distribution Function: A Simple Framework for the Theory. Econom. Theory 1991, 7, 519–529. [Google Scholar] [CrossRef]
  21. Nzokem, A.; Maposa, D.; Seimela, A.M. Enhanced Fast Fractional Fourier Transform (FRFT) Scheme Based on Closed Newton-Cotes Rules. Axioms 2025, 14, 543. [Google Scholar] [CrossRef]
  22. Kyprianou, A.E. Fluctuations of Lévy Processes with Applications: Introductory Lectures; Universitext; Springer: Berlin/Heidelberg, Germany, 2014. [Google Scholar]
  23. Sato, K.I. Basic Results on Lévy Processes. In Lévy Processes: Theory and Applications; Barndorff-Nielsen, O.E., Mikosch, T., Resnick, S.I., Eds.; Birkhäuser: Boston, MA, USA, 2001; pp. 1–37. [Google Scholar] [CrossRef]
  24. Tankov, P. Financial Modeling with Lévy Processes: Lecture Notes. 2010. Available online: https://cel.hal.science/cel-00665021v1 (accessed on 20 May 2025).
  25. Nzokem, A.H.; Maposa, D. Bitcoin versus S&P 500 Index: Return and Risk Analysis. Math. Comput. Appl. 2024, 29, 44. [Google Scholar] [CrossRef]
  26. Nzokem, A.; Maposa, D. Fitting the Seven-Parameter Generalized Tempered Stable Distribution to Financial Data. J. Risk Financ. Manag. 2024, 17, 531. [Google Scholar] [CrossRef]
  27. Nzokem, A.H. Fitting Infinitely Divisible Distribution: Case of Gamma-Variance Model. arXiv 2021, arXiv:2104.07580. [Google Scholar] [CrossRef]
  28. Nzokem, A.H. Five-Parameter Variance-Gamma Process: Lévy versus Probability Density. AIP Conf. Proc. 2024, 3005, 020030. [Google Scholar] [CrossRef]
  29. Nzokem, A.H.; Montshiwa, V.T. Fitting Generalized Tempered Stable Distribution: Fractional Fourier Transform (FRFT) Approach. arXiv 2022, arXiv:2205.00586. [Google Scholar] [CrossRef]
  30. Nzokem, A.H. Gamma Variance Model: Fractional Fourier Transform (FRFT). J. Physics Conf. Ser. 2021, 2090, 012094. [Google Scholar] [CrossRef]
  31. Nzokem, A.H. Pricing European options under stochastic volatility models: Case of five-parameter variance-gamma process. J. Risk Financ. Manag. 2023, 16, 55. [Google Scholar] [CrossRef]
  32. Uchaikin, V.V.; Zolotarev, V.M. Chance and Stability: Stable Distributions and their Applications; Modern Probability and Statistics; De Gruyter: Berlin, Germany, 2011. [Google Scholar]
  33. Zolotarev, V.M. One-Dimensional Stable Distributions; American Mathematical Society: Providence, RI, USA, 1986; Volume 65. [Google Scholar]
  34. Feller, W. An Introduction to Probability Theory and Its Applications, 2nd ed.; John Wiley & Sons: New York, NY, USA, 1971; Volume 2. [Google Scholar]
  35. Kring, S.; Rachev, S.T.; Höchstötter, M.; Fabozzi, F.J. Estimation of α-stable sub-Gaussian distributions for asset returns. In Risk Assessment: Decisions in Banking and Finance; Springer: Berlin/Heidelberg, Germany, 2009; pp. 111–152. [Google Scholar]
  36. Zolotarev, V.M. On the Representation of Stable Laws by Integrals. Tr. Mat. Instituta Im. V. A. Steklova 1964, 71, 46–50. [Google Scholar]
  37. Borak, S.; Härdle, W.; Weron, R. Stable Distributions. In Statistical Tools for Finance and Insurance; Springer: Berlin/Heidelberg, Germany, 2005; pp. 21–44. [Google Scholar] [CrossRef]
  38. Box, G.E.P.; Muller, M.E. A Note on the Generation of Random Normal Deviates. Ann. Math. Stat. 1958, 29, 610–611. [Google Scholar] [CrossRef]
  39. Chambers, J.M.; Mallows, C.L.; Stuck, B.W. A Method for Simulating Stable Random Variables. J. Am. Stat. Assoc. 1976, 71, 340–344. [Google Scholar] [CrossRef]
  40. Barndorff-Nielsen, O.E.; Shephard, N. Financial Volatility, Lévy Processes and Power Variation. 2002. Available online: https://www.olsendata.com/data_products/client_papers/papers/200206-NielsenShephard-FinVolLevyProcessPowerVar.pdf (accessed on 20 May 2025).
  41. Brix, A. Generalized Gamma Measures and Shot-Noise Cox Processes. Adv. Appl. Probab. 1999, 31, 929–953. [Google Scholar] [CrossRef]
  42. Ripley, B.D. Stochastic Simulation; John Wiley & Sons: New York, NY, USA, 2009. [Google Scholar]
  43. Hofert, M. Efficiently Sampling Nested Archimedean Copulas. Comput. Stat. Data Anal. 2011, 55, 57–70. [Google Scholar] [CrossRef]
  44. Cerquetti, A. A Note on Bayesian Nonparametric Priors Derived from Exponentially Tilted Poisson-Kingman Models. Stat. Probab. Lett. 2007, 77, 1705–1711. [Google Scholar] [CrossRef]
  45. Nzokem, A.H. Numerical Solution of a Gamma-Integral Equation Using a Higher Order Composite Newton-Cotes Formulas. J. Phys. Conf. Ser. 2021, 2084, 012019. [Google Scholar] [CrossRef]
  46. Nzokem, A.H. Stochastic and Renewal Methods Applied to Epidemic Models. Ph.D. Thesis, York University, Toronto, ON, Canada, 2020. [Google Scholar]
  47. Nzokem, A.H. European Option Pricing Under Generalized Tempered Stable Process: Empirical Analysis. arXiv 2023, arXiv:2304.06060. [Google Scholar] [CrossRef]
  48. Nzokem, A.H.; Montshiwa, V.T. The Ornstein-Uhlenbeck Process and Variance Gamma Process: Parameter Estimation and Simulations. Thai J. Math. 2023, 160–168. [Google Scholar]
  49. Loy, A.; Follett, L.; Hofmann, H. Variations of Q-Q Plots: The Power of Our Eyes! Am. Stat. 2016, 70, 202–214. [Google Scholar] [CrossRef]
  50. Thode, H.C. Testing for Normality; Statistics, Textbooks and Monographs; CRC Press: New York, NY, USA, 2002; Volume 164. [Google Scholar]
  51. Wang, M.C.; Bushman, B.J. Using the Normal Quantile Plot to Explore Meta-Analytic Data Sets. Psychol. Methods 1998, 3, 46–54. [Google Scholar] [CrossRef]
  52. Wilk, M.B.; Gnanadesikan, R. Probability Plotting Methods for the Analysis of Data. Biometrika 1968, 55, 1–17. [Google Scholar] [CrossRef]
  53. Dodge, Y. Q-Q Plot (Quantile to Quantile Plot). In The Concise Encyclopedia of Statistics; Springer: New York, NY, USA, 2008; pp. 437–439. [Google Scholar] [CrossRef]
  54. Rachev, S.T. Tempered Stable Models in Finance: Theory and Applications. Ph.D. Thesis, University of Bergamo, Bergamo, Italy, 2009. [Google Scholar]
  55. Ivanenko, D.; Knopova, V.; Platonov, D. On Approximation of Some Lévy Processes. Austrian J. Stat. 2025, 54, 177–199. [Google Scholar] [CrossRef]
  56. Kallenberg, O. Foundations of Modern Probability; Springer: New York, NY, USA, 1997; Volume 2. [Google Scholar]
  57. Ferguson, T.S.; Klass, M.J. A Representation of Independent Increment Processes Without Gaussian Components. Ann. Math. Stat. 1972, 43, 1634–1643. [Google Scholar] [CrossRef]
  58. Nzokem, A. Simulation of Generalized Tempered Stable (GTS) Random Variates via Series Representations: A Case Study of Bitcoin and Ethereum. Preprint 2025. [Google Scholar] [CrossRef]
  59. Yuan, S.; Kawai, R. Numerical aspects of shot noise representation of infinitely divisible laws and related processes. Probab. Surv. 2021, 18, 201–271. [Google Scholar] [CrossRef]
  60. Godsill, S.; Kontoyiannis, I.; Tapia Costa, M. Generalised shot-noise representations of stochastic systems driven by non-Gaussian Lévy processes. Adv. Appl. Probab. 2024, 56, 1215–1250. [Google Scholar] [CrossRef]
  61. Gil-Pelaez, J. Note on the Inversion Theorem. Biometrika 1951, 38, 481–482. [Google Scholar] [CrossRef]
  62. Carr, P.; Madan, D.B. Option Valuation Using the Fast Fourier Transform. J. Comput. Financ. 1999, 2, 61–73. [Google Scholar] [CrossRef]
  63. Fang, F.; Oosterlee, C.W. A Novel Pricing Method for European Options Based on Fourier-Cosine Series Expansions. SIAM J. Sci. Comput. 2008, 31, 826–848. [Google Scholar] [CrossRef]
  64. Bailey, D.H.; Swarztrauber, P.N. The Fractional Fourier Transform and Applications. SIAM Rev. 1991, 33, 389–404. [Google Scholar] [CrossRef]
  65. Bailey, D.H.; Swarztrauber, P.N. A Fast Method for the Numerical Evaluation of Continuous Fourier and Laplace Transforms. SIAM J. Sci. Comput. 1994, 15, 1105–1110. [Google Scholar] [CrossRef]
  66. Massey, F.J. The Kolmogorov-Smirnov Test for Goodness of Fit. J. Am. Stat. Assoc. 1951, 46, 68–78. [Google Scholar] [CrossRef]
  67. Dimitrova, D.S.; Kaishev, V.K.; Tan, S. Computing the Kolmogorov-Smirnov Distribution When the Underlying CDF is Purely Discrete, Mixed, or Continuous. J. Stat. Softw. 2020, 95, 1–42. [Google Scholar] [CrossRef]
  68. Kolmogorov, A.N. Sulla Determinazione Empirica di una Legge di Distribuzione. G. Dell’Istituto Ital. Degli Attuari 1933, 4, 83–91. [Google Scholar]
  69. Smirnov, N.V. Table for Estimating the Goodness of Fit of Empirical Distributions. Ann. Math. Stat. 1948, 19, 279–281. [Google Scholar] [CrossRef]
  70. Marsaglia, G.; Tsang, W.W.; Wang, J. Evaluating Kolmogorov’s Distribution. J. Stat. Softw. 2003, 8, 1–4. [Google Scholar] [CrossRef]
  71. D’Agostino, R.B.; Stephens, M.A. Goodness-of-Fit Techniques; Marcel Dekker: New York, NY, USA, 1986. [Google Scholar]
  72. Stephens, M.A. EDF Statistics for Goodness of Fit and Some Comparisons. J. Am. Stat. Assoc. 1974, 69, 730–737. [Google Scholar] [CrossRef]
  73. Shorack, G.R.; Wellner, J.A. Empirical Processes with Applications to Statistics; John Wiley & Sons: New York, NY, USA, 1986. [Google Scholar]
  74. Krysicki, W.; Bartos, J.; Dyczka, W.; Królikowska, K.; Wasilewski, M. Rachunek Prawdopodobieństwa i Statystyka Matematyczna w zadaniach; Wydawnictwo Naukowe PWN: Warszawa, Poland, 1999; Volume 2. [Google Scholar]
  75. Anderson, T.W. Anderson-Darling Test. In The Concise Encyclopedia of Statistics; Dodge, Y., Ed.; Springer: New York, NY, USA, 2008; pp. 12–14. [Google Scholar] [CrossRef]
  76. Anderson, T.W.; Darling, D.A. A Test of Goodness of Fit. J. Am. Stat. Assoc. 1954, 49, 765–769. [Google Scholar] [CrossRef]
  77. von Mises, R. Wahrscheinlichkeit, Statistik und Wahrheit; Julius Springer: Berlin/Heidelberg, Germany, 1931; English translation: Probability, Statistics and Truth, Macmillan, 1957. [Google Scholar]
  78. Anderson, T.W.; Darling, D.A. Asymptotic Theory of Certain “Goodness of Fit” Criteria Based on Stochastic Processes. Ann. Math. Stat. 1952, 23, 193–212. [Google Scholar] [CrossRef]
  79. Durbin, J. Distribution Theory for Tests Based on the Sample Distribution Function; SIAM: Philadelphia, PA, USA, 1973. [Google Scholar]
  80. Lewis, P.A.W. Distribution of the Anderson-Darling Statistic. Ann. Math. Stat. 1961, 32, 1118–1124. [Google Scholar] [CrossRef]
  81. Marsaglia, G.; Marsaglia, J. Evaluating the Anderson-Darling Distribution. J. Stat. Softw. 2004, 9, 1–5. [Google Scholar] [CrossRef]
  82. Anderson, T.W. Anderson-Darling Tests of Goodness-of-Fit. In International Encyclopedia of Statistical Science; Lovric, M., Ed.; Springer: Berlin/Heidelberg, Germany, 2011; pp. 52–54. [Google Scholar] [CrossRef]
Figure 1. Empirical behavior of E[N] for Bitcoin and S&P 500 index.
Figure 2. Bitcoin daily return simulation: Q-Q Plot with Algorithm 1.
Figure 3. Ethereum daily return simulation: Q-Q Plot with Algorithm 1.
Figure 4. Cryptocurrency daily return simulation: Q-Q Plot with Algorithm 2.
Figure 5. Equity daily return simulation: Q-Q Plot with Algorithm 2.
Figure 6. Cryptocurrency daily return simulation: Q-Q Plot with Algorithm 3.
Figure 7. Equity daily return simulation: Q-Q Plot with Algorithm 3.
Figure 8. Cryptocurrency daily return simulation: Q-Q Plot with Algorithm 4.
Figure 9. Equity daily return simulation: Q-Q Plot with Algorithm 4.
Figure 10. Cryptocurrency daily return simulation: Q-Q Plot with Algorithm 5.
Figure 11. Equity daily return simulation: Q-Q Plot with Algorithm 5.
Figure 12. VG* probability density error ($\tilde{f}_{NQ}(x_k)-\tilde{f}_{QN}(x_k)$).
Figure 13. Cryptocurrency daily return simulation: Q-Q Plot with Algorithm 6.
Figure 14. Equity daily return simulation: Q-Q Plot with Algorithm 6.
Figure 15. Asymptotic statistic ($\sqrt{m}\,D_m$) PDF.
Figure 16. Asymptotic Anderson–Darling (AD) statistic ($A_m^2$) PDF.
Table 1. GTS parameter estimation for Bitcoin and Ethereum [26].

Parameter | Bitcoin: Estimate (Std Err) | Pr(Z > |z|) | Ethereum: Estimate (Std Err) | Pr(Z > |z|)
μ   | −0.1216 (0.375) | 7.5 × 10^-01 | −0.4854 (1.008) | 6.3 × 10^-01
β+  | 0.3155 (0.136)  | 2.0 × 10^-02 | 0.3904 (0.164)  | 1.7 × 10^-02
β−  | 0.4066 (0.117)  | 4.9 × 10^-04 | 0.4045 (0.210)  | 5.4 × 10^-02
α+  | 0.7477 (0.047)  | 6.2 × 10^-56 | 0.9582 (0.106)  | 1.1 × 10^-19
α−  | 0.5446 (0.037)  | 4.8 × 10^-48 | 0.8005 (0.110)  | 4.2 × 10^-13
λ+  | 0.2465 (0.036)  | 4.9 × 10^-12 | 0.1667 (0.029)  | 1.1 × 10^-08
λ−  | 0.1748 (0.026)  | 2.2 × 10^-11 | 0.1708 (0.036)  | 2.5 × 10^-06
Table 2. GTS parameter estimation for S&P 500 Index and SPY ETF [26].

Parameter | S&P 500 Index: Estimate (Std Err) | Pr(Z > |z|) | SPY ETF: Estimate (Std Err) | Pr(Z > |z|)
μ   | −0.2494 (0.208) | 2.3 × 10^-01 | −0.2606 (0.135) | 5.3 × 10^-02
β+  | 0.3286 (0.308)  | 2.9 × 10^-01 | 0.3408 (0.189)  | 7.1 × 10^-02
β−  | 0.0886 (0.176)  | 6.1 × 10^-01 | 0.0222 (0.212)  | 9.2 × 10^-01
α+  | 0.7924 (0.350)  | 2.4 × 10^-02 | 0.7877 (0.225)  | 4.6 × 10^-04
α−  | 0.5422 (0.107)  | 3.6 × 10^-07 | 0.5971 (0.141)  | 2.4 × 10^-05
λ+  | 1.2797 (0.348)  | 2.4 × 10^-04 | 1.2885 (0.226)  | 1.2 × 10^-08
λ−  | 0.9371 (0.144)  | 8.0 × 10^-11 | 1.0143 (0.177)  | 9.4 × 10^-09
Table 3. Transformed parameters and GTS parameters.

Financial Assets | Variable | μ | β | α | λ | θ | θλ | E[N]
S&P 500 index | X+ | −0.24941 | 0.32862 | 0.79243 | 1.27974 | 35.9485 | 46.0049 | 33.7557
              | X− |          | 0.08864 | 0.54225 | 0.93713 | 1.4 × 10^9 | 1.3 × 10^9 | 629.582
SPY ETF       | X+ | −0.26064 | 0.34088 | 0.78776 | 1.28856 | 29.2566 | 37.6987 | 31.3803
              | X− |          | 0.02221 | 0.59711 | 1.01435 | 4.1 × 10^64 | 4.2 × 10^64 | 6.8 × 10^11
Bitcoin       | X+ | −0.12157 | 0.31555 | 0.74771 | 0.24653 | 37.4095 | 9.2226 | 7.5071
              | X− |          | 0.40646 | 0.54457 | 0.17477 | 5.6072 | 0.9800 | 2.6961
Ethereum      | X+ | −0.48538 | 0.39044 | 0.95825 | 0.16671 | 26.6355 | 4.4405 | 5.9876
              | X− |          | 0.40448 | 0.80048 | 0.17079 | 14.7212 | 2.5142 | 4.2715
Table 4. Description of the proposal bivariate density functions [14].

g(x, u) | (X, U) | C(β, λ) X^λ
Gamma and Uniform bivariate | X ∼ Γ(βλ^β, 1), U ∼ U(0, π) | C1(β, λ) X^λ
Gamma and Uniform bivariate | X ∼ Γ((1 − β)λ^β + 1, 1), U ∼ U(0, π) | C2(β, λ) X^λ
Gamma and truncated normal bivariate | X ∼ Γ(βλ^β, 1), U ∼ N(0, σ², lb = 0, ub = π) ^1 | C3(β, λ) B^(1/β)(u) X^((1−β)/β)
Gamma and truncated normal bivariate | X ∼ Γ((1 − β)λ^β + 1, 1), U ∼ N(0, σ², lb = 0, ub = π) | C4(β, λ) B^(1/β)(u) X^((1−β)/β)

^1 σ² = 1/(β(1 − β)λ^β); lb (ub) stands for lower bound (upper bound).
Table 5. Expected complexity empirical values.

Financial Assets | Variable | β | θλ | C1(β, θλ) | C2(β, θλ) | C3(β, θλ) | C4(β, θλ) | C(β, θλ)
S&P 500 index | X+ | 0.32862 | 46.0049     | 3.4868 | 3.9905 | 1.5787 | 1.8067 | 1.5787
              | X− | 0.08864 | 1.3 × 10^9  | 2.3536 | 6.1617 | 1.3013 | 3.4068 | 1.3013
SPY ETF       | X+ | 0.34088 | 37.6987     | 3.5339 | 3.9181 | 1.6022 | 1.7764 | 1.6022
              | X− | 0.02221 | 4.2 × 10^64 | 2.2609 | 12.979 | 1.1725 | 6.7309 | 1.1725
Bitcoin       | X+ | 0.31555 | 9.2226      | 3.0452 | 3.1247 | 1.8412 | 1.8893 | 1.8412
              | X− | 0.40646 | 0.9800      | 3.4102 | 2.1960 | 2.7813 | 1.7910 | 1.7910
Ethereum      | X+ | 0.39044 | 4.4405      | 3.3652 | 2.8209 | 2.0570 | 1.7243 | 1.7243
              | X− | 0.40448 | 2.5142      | 3.3829 | 2.5581 | 2.2820 | 1.7256 | 1.7256
Table 6. Kolmogorov–Smirnov test results for different sampling methods.

Financial Assets | Statistic | SSR ^1 | Double Rejection | Single Rejection ^2 | Shot Noise ^3 | Inverse Tail Integral ^4 | Inverse Transform ^5
Bitcoin   | $\hat{D}_m$            | 0.0028 | 0.0750 | 0.0025 | 0.0078 | 0.0031 | 0.0019
          | $\sqrt{m}\,\hat{D}_m$  | 0.7842 | 21.385 | 0.7043 | 2.2045 | 0.8690 | 0.5234
          | p-value                | 0.5700 | 0.0030 | 0.7041 | 0.0001 | 0.4370 | 0.9470
Ethereum  | $\hat{D}_m$            | 0.0031 | 0.0700 | 0.0031 | 0.0030 | 0.0025 | 0.0010
          | $\sqrt{m}\,\hat{D}_m$  | 0.8860 | 19.865 | 0.8860 | 0.8569 | 0.6935 | 0.3980
          | p-value                | 0.4110 | 0.0020 | 0.4110 | 0.4549 | 0.7221 | 0.9970
S&P 500   | $\hat{D}_m$            | –      | 0.0836 | 0.0031 | 0.1745 | 0.0039 | 0.0024
          | $\sqrt{m}\,\hat{D}_m$  | –      | 23.643 | 0.8747 | 49.344 | 1.1083 | 0.6811
          | p-value                | –      | 0.0078 | 0.4286 | 0.0000 | 0.1713 | 0.7424
SPY ETF   | $\hat{D}_m$            | –      | 0.0897 | 0.0025 | 0.3023 | 0.0031 | 0.0026
          | $\sqrt{m}\,\hat{D}_m$  | –      | 25.366 | 0.6953 | 85.498 | 0.8873 | 0.7464
          | p-value                | –      | 0.0000 | 0.7191 | 0.0000 | 0.4106 | 0.6332

^1 Standard Stable Rejection (SSR); ^2 Two-Dimensional Single Rejection; ^3 Shot Noise Representation; ^4 Inverse Tail Integral Representation; ^5 Inverse Transform Sampling Scheme. The first four columns are exact sampling methods; the last two are numerical sampling methods.
Table 7. Anderson–Darling test results for different sampling methods.

Financial Assets | Statistic | SSR ^1 | Double Rejection | Single Rejection ^2 | Shot Noise ^3 | Inverse Tail Integral ^4 | Inverse Transform ^5
Bitcoin   | $\hat{A}_m^2$ | 0.4850 | 496.5 | 0.6049 | 3.299  | 0.8677 | 0.2876
          | p-value       | 0.7622 | 0.0000 | 0.6434 | 0.0194 | 0.4347 | 0.9472
Ethereum  | $\hat{A}_m^2$ | 0.4993 | 428.8 | 0.7051 | 0.9525 | 0.2579 | 0.2464
          | p-value       | 0.7475 | 0.0000 | 0.5545 | 0.3832 | 0.9660 | 0.9722
S&P 500   | $\hat{A}_m^2$ | –      | 621.5 | 0.6816 | 5292   | 0.8244 | 0.2924
          | p-value       | –      | 0.0000 | 0.5744 | 0.0000 | 0.4638 | 0.9438
SPY ETF   | $\hat{A}_m^2$ | –      | 658.4 | 0.5409 | 1522   | 0.7834 | 0.7232
          | p-value       | –      | 0.0000 | 0.7055 | 0.0000 | 0.4932 | 0.5397

^1 Standard Stable Rejection (SSR); ^2 Two-Dimensional Single Rejection; ^3 Shot Noise Representation; ^4 Inverse Tail Integral Representation; ^5 Inverse Transform Sampling Scheme. The first four columns are exact sampling methods; the last two are numerical sampling methods.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
