Article

An Efficient Quasi-Monte Carlo Algorithm for High Dimensional Numerical Integration

1 School of Mathematics and Statistics, Northwestern Polytechnical University, Xi'an 710129, China
2 Department of Mathematics, The University of Tennessee, Knoxville, TN 37996, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Mathematics 2025, 13(21), 3437; https://doi.org/10.3390/math13213437
Submission received: 7 September 2025 / Revised: 20 October 2025 / Accepted: 22 October 2025 / Published: 28 October 2025
(This article belongs to the Section E: Applied Mathematics)

Abstract

In this paper, we develop a fast numerical algorithm, termed MDI-LR, for the efficient implementation of quasi-Monte Carlo (QMC) lattice rules in computing d-dimensional integrals of a given function. The algorithm is based on converting the underlying lattice rule into a tensor-product form through an affine transformation, and further improving computational efficiency by incorporating a multilevel dimension iteration (MDI) strategy. This approach computes the function evaluations at the integration points collectively and iterates along each transformed coordinate direction, allowing substantial reuse of computations. As a result, the algorithm avoids the need to explicitly store integration points or to compute function values at those points independently. Extensive numerical experiments are conducted to evaluate the performance of MDI-LR and to compare it with the straightforward implementation of QMC lattice rules. The results demonstrate that MDI-LR achieves a computational complexity of order $O(N^2d^3)$ or better, where N denotes the number of points in each transformed coordinate direction. Thus, MDI-LR effectively mitigates the curse of dimensionality and revitalizes QMC lattice rules for high dimensional integration.

1. Introduction

Numerical integration is an essential tool and building block in many scientific and engineering fields that require evaluating or estimating integrals of given (explicit or implicit) functions; it becomes very challenging in high dimensions due to the so-called curse of dimensionality (CoD). Such integrals arise, for example, in evaluating stochastic quantities of interest, solving high dimensional partial differential equations, and computing the value function of an option on a basket of securities. The goal of this paper is to develop and test an efficient algorithm based on quasi-Monte Carlo methods for evaluating the d-dimensional integral
$$I_d(f) := \int_{\Omega} f(\mathbf{x})\, d\mathbf{x} \qquad (1)$$
for a given function $f : \Omega := [0,1]^d \to \mathbb{R}$ and $d \gg 1$.
Classical numerical integration methods, such as tensor product and sparse grid methods [1,2] as well as Monte Carlo (MC) methods [3,4], require the evaluation of a function at a set of integration points. The computational complexity of the first two types of methods grows exponentially with the dimension d (i.e., the CoD), which limits their practical usage. MC methods are often the default methods for high dimensional integration problems due to their ability to handle complicated functions and to mitigate the CoD. We recall that the MC method approximates the integral by randomly sampling points within the integration domain and averaging their function values. The classical MC method has the following form:
$$Q_{n,d}(f) = \frac{1}{n}\sum_{i=0}^{n-1} f(\mathbf{x}_i), \qquad (2)$$
where $\{\mathbf{x}_i\}_{i=0}^{n-1}$ denotes independent and uniformly distributed random samples in the integration domain $\Omega$. The expected error of the MC method is proportional to $\sigma(f)/\sqrt{n}$, where $\sigma(f)^2$ stands for the variance of f. If f is square-integrable, then the expected error in (2) has the order $O(n^{-1/2})$ (note that the convergence rate is independent of the dimension d). Evidently, the MC method is simple and easy to implement, making it a popular choice for many applications. However, the MC method is slow to converge [5,6], especially for high dimensional problems, and the accuracy of the approximation depends on the number of random samples. One way to improve the convergence rate of the MC method is to use quasi-Monte Carlo methods [7,8].
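For concreteness, the following is a minimal sketch of the MC estimator (2) in Python; the integrand and sample size are placeholders chosen for illustration.

```python
import numpy as np

def mc_integrate(f, d, n, seed=0):
    """Plain MC estimate of the integral of f over [0,1]^d via (2)."""
    rng = np.random.default_rng(seed)
    x = rng.random((n, d))            # n independent uniform samples in [0,1]^d
    return f(x).mean()                # equal-weight average; error ~ sigma(f)/sqrt(n)

# Example: a Gaussian-type integrand in d = 10 dimensions.
est = mc_integrate(lambda x: np.exp(-0.5 * (x**2).sum(axis=1)), d=10, n=100_000)
```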
Quasi-Monte Carlo (QMC) methods [9,10] employ integration techniques that use point sets with better distribution properties than random sampling. Similar to the MC method, a QMC method also has the general form (2), but, unlike the MC method, the integration points $\{\mathbf{x}_i\}_{i=0}^{n-1} \subset \Omega$ are chosen deterministically and methodically. The deterministic nature of QMC methods can lead to guaranteed error bounds, and the convergence rate can be faster than the $O(n^{-1/2})$ order of the MC method for sufficiently smooth functions. QMC error bounds are typically given in the form of Koksma-Hlawka-type inequalities:
$$|I_d(f) - Q_{n,d}(f)| \le D(\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_{n-1})\, V(f), \qquad (3)$$
where $D(\mathbf{x}_0, \mathbf{x}_1, \ldots, \mathbf{x}_{n-1})$ is a (positive) discrepancy function which measures the non-uniformity of the point set $\{\mathbf{x}_i\}_{i=0}^{n-1}$ and $V(f)$ is a (positive) functional which measures the variability of f. Error bounds of this type separate the dependence on the cubature points from the dependence on the integrand. QMC point sets with discrepancy of order $O(n^{-1}(\log n)^d)$ or better are collectively known as low-discrepancy point sets [11].
One of the most popular QMC methods is the lattice rule, whose integration points are chosen to have a lattice structure, low discrepancy, and better distribution properties than random sampling [12,13,14], resulting in a more accurate method with a faster convergence rate. However, traditional lattice rules still have limitations when applied to high dimensional problems. Good lattice rules almost always involve searching (cf. [15,16]), and the cost of an exhaustive search (for fixed n) grows exponentially with the dimension d. Moreover, as with the MC method, the number of integration points required to achieve a reasonable accuracy also increases exponentially with the dimension d (i.e., the CoD phenomenon), which makes the method computationally infeasible for very high dimensional integration problems.
Recent advances in quasi-Monte Carlo (QMC) methods have substantially extended their applicability and efficiency in high-dimensional numerical integration. In the past few years, researchers have developed various forms of randomized QMC (RQMC), adaptive transformation, and hybridization techniques that integrate ideas from importance sampling, stochastic optimization, and even machine learning-inspired transport mappings. For instance, Guth and Kaarnioja [17] applied QMC techniques to partial differential equations with generalized Gaussian input uncertainty, demonstrating that properly designed low-discrepancy sampling can substantially reduce variance in high-dimensional PDE solvers. Wang and Wang [18] analyzed the convergence rate of QMC with importance sampling in reproducing kernel Hilbert spaces (RKHS), providing new theoretical insights for unbounded functions. Melnikov and Milz [19] explored RQMC in the context of risk-averse stochastic optimization, confirming its superior performance in variance reduction and robustness. In financial modeling, Hok and Kucherenko [20] demonstrated the "unreasonable effectiveness" of RQMC for option pricing and risk analysis, achieving remarkable accuracy improvements over traditional Monte Carlo approaches. Furthermore, Imai and Tan [21] proposed a dimension-reduction-enhanced RQMC method that significantly reduces the effective dimension of high-dimensional integrals. Collectively, these recent studies highlight a clear trend toward the hybridization of adaptive transformations, dimension reduction, and randomized QMC techniques, which resonates with the motivation and design philosophy of the present work. Nevertheless, despite these promising developments, most existing QMC variants still face intrinsic challenges when tackling extremely high-dimensional problems—particularly those with dimensions exceeding one hundred—where convergence rates deteriorate rapidly and computational costs remain prohibitive.
To overcome the limitations of QMC lattice rules, we first propose an improved QMC lattice rule based on a change of variables and reformulate it as a tensor product rule in the transformed coordinates. We then develop an efficient implementation algorithm, called MDI-LR, by adapting the multilevel dimension iteration (MDI) idea first proposed by the authors in [22] to the improved QMC lattice rule. The proposed MDI-LR algorithm optimizes the function evaluations at integration points by clustering them and sharing computations via a symbolic-function-based dimension/coordinate iteration procedure. This algorithm significantly reduces the computational complexity of the QMC lattice rule from exponential growth in the dimension d to a polynomial order $O(N^2d^3)$, where N denotes the number of integration points in each (transformed) coordinate direction. Thus, the MDI-LR algorithm effectively overcomes the CoD and revitalizes QMC lattice rules for high dimensional integration.
The remainder of this paper is organized as follows. In Section 2, we briefly review the rank-one lattice rule and its properties. In Section 3, we introduce a reformulation of this lattice rule and propose a tensor product generalization based on an affine transformation. In Section 4, we introduce our MDI-LR algorithm for efficiently implementing the proposed lattice rule based on a multilevel dimension iteration idea. In Section 5, we present extensive numerical experiments to test the performance of the proposed MDI-LR algorithm and compare it with the original lattice rule and the improved lattice rule under standard implementation. The numerical experiments show that the MDI-LR algorithm is much faster and more efficient in medium and high dimensional cases. In Section 6, we numerically examine the impact of the parameters appearing in the MDI-LR algorithm, including the choice of the generating vector z for the lattice rule. In Section 7, we present a detailed numerical study of the computational complexity of the MDI-LR algorithm, using regression techniques to discover the relationship between CPU time and dimension d. Finally, the paper is concluded with a summary in Section 8.

2. Preliminaries

In this section, we briefly recall some basic materials about quasi-Monte Carlo (QMC) lattice rules for evaluating the integral (1) and their properties. They will set the stage for introducing our fast implementation algorithm in the later sections.

2.1. Quasi-Monte Carlo Lattice Rules

Lattice rules are a class of quasi-Monte Carlo (QMC) methods which were first introduced by Korobov in [12] to approximate (1) for periodic integrands f. A lattice rule is an equal-weight cubature rule whose cubature points are those points of an integration lattice that lie in the half-open unit cube $[0,1)^d$. Every lattice point set includes the origin, and the projections of the lattice points onto each coordinate axis are equally spaced. Essentially, the integral is approximated in each coordinate direction by a rectangle rule (or a trapezoidal rule if the integrand is periodic). The simplest lattice rules are called rank-one lattice rules. They use a lattice point set generated by multiples of a single generating vector, defined as follows.
Definition 1 
(rank-one lattice rule). An n-point rank-one lattice rule in d dimensions, also known as the method of good lattice points, is a QMC method with cubature points
$$\mathbf{x}_i = \left\{\frac{i\mathbf{z}}{n}\right\}, \qquad i = 0, 1, \ldots, n-1, \qquad (4)$$
where $\mathbf{z} \in \mathbb{Z}^d$, known as the generating vector, is a d-dimensional integer vector having no factor in common with n, and the braces operator $\{\cdot\}$ takes the fractional part of the input vector component-wise.
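A minimal sketch of generating the point set (4) is given below; the 2-d Fibonacci generating vector used here anticipates Example 1 and is only an illustrative choice.

```python
import numpy as np

def rank_one_lattice(z, n):
    """Return the n-point rank-one lattice set x_i = {i z / n} as an (n, d) array."""
    z = np.asarray(z, dtype=np.int64)
    i = np.arange(n)[:, None]          # i = 0, 1, ..., n-1
    return (i * z % n) / n             # componentwise fractional part of i*z/n

pts = rank_one_lattice(z=(1, 55), n=89)   # z = (1, F_10), n = F_11
```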
Every lattice rule can be written as a multiple sum involving one or more generating vectors. The minimal number of generating vectors required to generate a lattice rule is known as the rank of the rule. Besides rank-one lattice rules, which have only one generating vector, there are also lattice rules having rank up to d. A function f is said to have an absolutely convergent Fourier series expansion if
$$f(\mathbf{x}) = \sum_{\mathbf{h} \in \mathbb{Z}^d} \hat{f}(\mathbf{h})\, e^{2\pi i\, \mathbf{h}\cdot\mathbf{x}}, \qquad i = \sqrt{-1}, \qquad (5)$$
where the Fourier coefficients are defined by
$$\hat{f}(\mathbf{h}) = \int_{\Omega} f(\mathbf{x})\, e^{-2\pi i\, \mathbf{h}\cdot\mathbf{x}}\, d\mathbf{x}.$$
The following theorems give two characterizations of the error of lattice rules (cf. [13] (Theorem 1) and [10] (Theorem 5.2)).
Theorem 1. 
Let $Q_{n,d}$ denote a lattice rule (not necessarily rank-one) and let $L$ denote the associated integration lattice. If f has an absolutely convergent Fourier series (5), then
$$Q_{n,d}(f) - I_d(f) = \sum_{\mathbf{h} \in L^{\perp} \setminus \{\mathbf{0}\}} \hat{f}(\mathbf{h}), \qquad (6)$$
where $L^{\perp} := \{\mathbf{h} \in \mathbb{Z}^d : \mathbf{h}\cdot\mathbf{x} \in \mathbb{Z}\ \ \forall\, \mathbf{x} \in L\}$ is the dual lattice associated with $L$.
Theorem 2. 
Let $Q_{n,d}$ denote a rank-one lattice rule with generating vector $\mathbf{z}$. If f has an absolutely convergent Fourier series (5), then
$$Q_{n,d}(f) - I_d(f) = \sum_{\substack{\mathbf{h} \in \mathbb{Z}^d \setminus \{\mathbf{0}\} \\ \mathbf{h}\cdot\mathbf{z} \equiv 0\ (\mathrm{mod}\ n)}} \hat{f}(\mathbf{h}). \qquad (7)$$
It follows from (7) that the least upper bound of the error over the class $E_{\alpha}(c)$ of functions whose Fourier coefficients satisfy $|\hat{f}(\mathbf{h})| \le c \prod_{i=1}^{d} \bar{h}_i^{-\alpha}$ is given by
$$|Q_{n,d}(f) - I_d(f)| \le c \sum_{\substack{\mathbf{h} \in \mathbb{Z}^d \setminus \{\mathbf{0}\} \\ \mathbf{h}\cdot\mathbf{z} \equiv 0\ (\mathrm{mod}\ n)}} \frac{1}{\prod_{i=1}^{d} \bar{h}_i^{\alpha}}, \qquad (8)$$
where $\alpha > 1$, $c > 0$, $\mathbf{h} = (h_1, h_2, \ldots, h_d)$, $\bar{h}_i := \max(1, |h_i|)$, and $|h_i|$ denotes the absolute value of the i-th component of $\mathbf{h}$.
Let
$$P_{\alpha,n,d}(\mathbf{z}) := \sum_{\substack{\mathbf{h} \in \mathbb{Z}^d \setminus \{\mathbf{0}\} \\ \mathbf{h}\cdot\mathbf{z} \equiv 0\ (\mathrm{mod}\ n)}} \frac{1}{\prod_{i=1}^{d} \bar{h}_i^{\alpha}}. \qquad (9)$$
For fixed n and α, a good lattice point $\mathbf{z}$ is chosen to make $P_{\alpha,n,d}(\mathbf{z})$ as small as possible. It was proved by Niederreiter in [23,24] (Theorem 2.11) that for a prime n (or a prime power) there exists a lattice point $\mathbf{z}$ such that
$$P_{\alpha,n,d}(\mathbf{z}) = O\!\left(\frac{(\log n)^{\alpha d}}{n^{\alpha}}\right). \qquad (10)$$
This was achieved by proving that $P_{\alpha,n,d}(\mathbf{z})$ (for even α) has the following expansion:
$$P_{\alpha,n,d}(\mathbf{z}) = -1 + \frac{1}{n}\sum_{k=0}^{n-1}\prod_{j=1}^{d}\Biggl[1 + \sum_{h \neq 0} \frac{e^{2\pi i k h z_j/n}}{|h|^{\alpha}}\Biggr] = -1 + \frac{1}{n}\prod_{j=1}^{d}\bigl[1 + 2\zeta(\alpha)\bigr] + \frac{1}{n}\sum_{k=1}^{n-1}\prod_{j=1}^{d}\Biggl[1 + (-1)^{\frac{\alpha}{2}+1}\frac{(2\pi)^{\alpha}}{\alpha!}\, B_{\alpha}\!\left(\left\{\frac{k z_j}{n}\right\}\right)\Biggr], \qquad (11)$$
where
$$\zeta(\alpha) := \sum_{j=1}^{\infty} \frac{1}{j^{\alpha}}, \quad \alpha > 1; \qquad (12)$$
$$B_{\alpha}(\lambda) := (-1)^{\frac{\alpha}{2}+1}\frac{\alpha!}{(2\pi)^{\alpha}} \sum_{h \in \mathbb{Z} \setminus \{0\}} \frac{e^{2\pi i h \lambda}}{|h|^{\alpha}}, \quad \lambda \in [0,1]; \qquad (13)$$
$$|\mathbf{h}| := \sqrt{h_1^2 + \cdots + h_d^2}. \qquad (14)$$
As expected, the performance of a lattice rule depends heavily on the choice of the generating vector z . For large n and d, an exhaustive search to find such a generating vector by minimizing some desired error criterion is practically impossible. Below we list a few common strategies for constructing lattice generating vectors.

2.2. Examples of Good Rank-One Lattice Rules

The first example is the Fibonacci lattice; we refer the reader to [10] for the details.
Example 1 
(Fibonacci lattice). Let $\mathbf{z} = (1, F_k)$ and $n = F_{k+1}$, where $F_k$ and $F_{k+1}$ are consecutive Fibonacci numbers. Then the resulting two-dimensional lattice point set generated by $\mathbf{z}$ is called a Fibonacci lattice.
Fibonacci lattices in 2-d have a certain optimality property, but there is no obvious generalization to higher dimensions that retains the optimality property (cf. [10]).
The second example is the so-called Korobov lattice; we refer the reader to [25,26] for the details.
Example 2 
(Korobov lattice). Let a be an integer satisfying $1 \le a \le n-1$. Suppose a and n are relatively prime (i.e., their greatest common factor (GCF) is 1) and
$$\mathbf{z} = \mathbf{z}(a) := (1, a, a^2, \ldots, a^{d-1}) \bmod n.$$
Then the resulting d-dimensional lattice point set generated by $\mathbf{z}$ is called a Korobov lattice.
It is easy to see that there are (at most) n−1 choices for the Korobov parameter a, which leads to (at most) n−1 choices for the generating vector $\mathbf{z}$. Thus, it is feasible in practice to search through the (at most) n−1 choices and take the one that fulfills a desired error criterion, such as minimizing $P_{\alpha,n,d}(\mathbf{z})$; the expansion (11) allows $P_{\alpha,n,d}(\mathbf{z})$ to be computed in $O(dn^2)$ operations (cf. [14]).
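As an illustration of this search, the sketch below evaluates $P_{2,n,d}(\mathbf{z}(a))$ through the expansion (11): for α = 2 the bracketed factor reduces to $1 + 2\pi^2 B_2(\{k z_j/n\})$ with the Bernoulli polynomial $B_2(x) = x^2 - x + 1/6$. The values of n and d below are placeholders, and the code is an illustrative sketch rather than the authors' implementation.

```python
import numpy as np

def p2_korobov(a, n, d):
    """P_{2,n,d}(z(a)) for the Korobov vector z(a), computed via expansion (11)."""
    z = np.array([pow(a, j, n) for j in range(d)])   # z = (1, a, ..., a^{d-1}) mod n
    k = np.arange(n)[:, None]
    frac = (k * z % n) / n                           # {k z_j / n}
    b2 = frac**2 - frac + 1.0 / 6.0                  # Bernoulli polynomial B_2
    return -1.0 + np.prod(1.0 + 2.0 * np.pi**2 * b2, axis=1).mean()

n, d = 1009, 6                                       # n prime (placeholder values)
best_a = min((a for a in range(1, n) if np.gcd(a, n) == 1),
             key=lambda a: p2_korobov(a, n, d))      # O(d n^2) search overall
```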
The last example is the so-called CBC lattice, which is based on the component-by-component construction (cf. [27]).
Example 3 
(CBC lattice). Let $\mathcal{N}_n := \{z \in \mathbb{Z} : 1 \le z \le n-1 \text{ and } \gcd(z, n) = 1\}$. Let $E^{(sh)}_{n,d}$ denote the shifted error criterion defined in [27]. Then the generating vector $\mathbf{z} = (z_1, z_2, \ldots, z_d)$ of the CBC lattice is defined component-wise as follows:
(i) Set $z_1 = 1$.
(ii) With $z_1$ held fixed, choose $z_2$ from $\mathcal{N}_n$ to minimize $[E^{(sh)}_{n,d}((z_1, z_2))]^2$ in 2-d.
(iii) With $z_1, z_2$ held fixed, choose $z_3$ from $\mathcal{N}_n$ to minimize $[E^{(sh)}_{n,d}((z_1, z_2, z_3))]^2$ in 3-d.
(iv) Repeat the above process until all $\{z_j\}_{j=1}^{d}$ are determined.
It is well known [27] that, with general weights, the cost of the CBC algorithm is prohibitively expensive; thus, in practice, some special structure is always adopted, among which product weights, order-dependent weights, finite-order weights, and POD (product and order-dependent) weights are commonly used. In each of the d steps of the CBC algorithm, the search space $\mathcal{N}_n$ has cardinality at most n−1, so the overall search space of the CBC algorithm is reduced to a size of order $O(dn)$ (cf. [28] (page 11)). Hence, this provides a feasible way of constructing a generating vector $\mathbf{z}$.
Figure 1 shows three two-dimensional lattices with 81 points, whose generating vectors are (1, 2), (1, 4), and (1, 7), respectively. Figure 2 shows three three-dimensional lattices with 81 points, whose generating vectors are (1, 2, 4), (1, 4, 16), and (1, 7, 49), respectively.

3. Reformulation of Lattice Rules

Clearly, the lattice point set of each QMC lattice rule has some pattern or structure, and one main goal of this section is precisely to describe that pattern. We show that a lattice rule almost has a tensor product reformulation when viewed in an appropriately transformed coordinate system obtained via an affine transformation. This discovery allows us to introduce a tensor product rule as an improvement of the original QMC lattice rule. More importantly, the reformulation provides the springboard for developing an efficient and fast implementation algorithm (or solver), called the MDI-LR algorithm, based on the idea of multilevel dimension iteration [22], for evaluating the QMC lattice rule (2).

3.1. Construction of Affine Coordinate Transformations

From Figure 1 and Figure 2, we see that the lattice points lie on lines/planes which are not parallel to the coordinate axes/planes. However, those lines/planes are parallel to each other; this observation suggests that they can be made parallel to the coordinate axes/planes via affine transformations. Below, we prove that this is indeed the case and explicitly construct such an affine transformation for a given QMC lattice rule.
Theorem 3. 
Let $\mathbf{z} = (z_1, z_2, \ldots, z_d) \in \mathbb{Z}^d$ be the generating vector of a rank-one lattice rule. The j-th lattice point associated with $\mathbf{z}$ is $\mathbf{x}_j = \left\{\frac{j\mathbf{z}}{n}\right\} = \left(\left\{\frac{j z_1}{n}\right\}, \ldots, \left\{\frac{j z_d}{n}\right\}\right)$, where the operator $\{\cdot\}$ denotes taking the fractional part component-wise. The collection $\{\mathbf{x}_j\}_{j=0}^{n-1}$ thus forms the n-point set of a rank-one QMC lattice rule in dimension d. Define
$$A = \begin{pmatrix} \frac{1}{z_1} & -\frac{1}{z_2} & 0 & \cdots & 0 & 0 \\ 0 & \frac{1}{z_2} & -\frac{1}{z_3} & \cdots & 0 & 0 \\ \vdots & & \ddots & \ddots & & \vdots \\ 0 & 0 & \cdots & 0 & \frac{1}{z_{d-1}} & -\frac{1}{z_d} \\ 0 & 0 & \cdots & 0 & 0 & 1 \end{pmatrix} \quad \text{and} \quad \mathbf{b} = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ -\left\lfloor \dfrac{n x_d}{z_d} \right\rfloor \cdot \dfrac{z_d}{n} \end{pmatrix}. \qquad (15)$$
Notice that $A \in \mathbb{R}^{d \times d}$ and $\mathbf{b} \in \mathbb{R}^d$. Then the points $\mathbf{y}_j := \mathrm{abs}(A\mathbf{x}_j + \mathbf{b})$, $j = 0, 1, \ldots, n-1$, form a Cartesian grid in the new coordinate system, where $\mathrm{abs}(\mathbf{y})$ denotes the operation of taking the absolute value component-wise in the vector $\mathbf{y}$.
Proof. 
By the definition of $\mathbf{x}_j$, we have $\mathbf{x}_j = \left(\left\{\frac{j z_1}{n}\right\}, \left\{\frac{j z_2}{n}\right\}, \ldots, \left\{\frac{j z_d}{n}\right\}\right)$. A direct computation yields
$$\mathbf{y}_j = \mathrm{abs}(A\mathbf{x}_j + \mathbf{b}) = \mathrm{abs}\begin{pmatrix} \frac{1}{z_1}\left\{\frac{j z_1}{n}\right\} - \frac{1}{z_2}\left\{\frac{j z_2}{n}\right\} \\ \frac{1}{z_2}\left\{\frac{j z_2}{n}\right\} - \frac{1}{z_3}\left\{\frac{j z_3}{n}\right\} \\ \vdots \\ \frac{1}{z_{d-1}}\left\{\frac{j z_{d-1}}{n}\right\} - \frac{1}{z_d}\left\{\frac{j z_d}{n}\right\} \\ \left\{\frac{j z_d}{n}\right\} - \left\lfloor\frac{n}{z_d}\left\{\frac{j z_d}{n}\right\}\right\rfloor \cdot \frac{z_d}{n} \end{pmatrix}. \qquad (16)$$
Recall that $\{x\}$ and $\lfloor x \rfloor$ denote, respectively, the fractional and integer parts of the number x. Because
$$\frac{1}{z_i}\left\{\frac{j z_i}{n}\right\} = \frac{1}{z_i}\left(\frac{j z_i}{n} - \left\lfloor\frac{j z_i}{n}\right\rfloor\right) = \frac{j}{n} - \frac{1}{z_i}\left\lfloor\frac{j z_i}{n}\right\rfloor,$$
we have
$$\frac{1}{z_{i-1}}\left\{\frac{j z_{i-1}}{n}\right\} - \frac{1}{z_i}\left\{\frac{j z_i}{n}\right\} = \frac{1}{z_i}\left\lfloor\frac{j z_i}{n}\right\rfloor - \frac{1}{z_{i-1}}\left\lfloor\frac{j z_{i-1}}{n}\right\rfloor,$$
and therefore
$$\mathbf{y}_j = \mathrm{abs}\begin{pmatrix} \frac{1}{z_2}\left\lfloor\frac{j z_2}{n}\right\rfloor - \frac{1}{z_1}\left\lfloor\frac{j z_1}{n}\right\rfloor \\ \frac{1}{z_3}\left\lfloor\frac{j z_3}{n}\right\rfloor - \frac{1}{z_2}\left\lfloor\frac{j z_2}{n}\right\rfloor \\ \vdots \\ \frac{1}{z_d}\left\lfloor\frac{j z_d}{n}\right\rfloor - \frac{1}{z_{d-1}}\left\lfloor\frac{j z_{d-1}}{n}\right\rfloor \\ \frac{z_d}{n}\left(j - \left\lfloor\frac{n}{z_d}\right\rfloor\left\lfloor\frac{j z_d}{n}\right\rfloor\right) \end{pmatrix}. \qquad (17)$$
It is easy to check that
$$\frac{1}{z_i}\left\lfloor\frac{j z_i}{n}\right\rfloor - \frac{1}{z_{i-1}}\left\lfloor\frac{j z_{i-1}}{n}\right\rfloor = \begin{cases} 0, & 0 \le j < \frac{n}{z_i}, \\[2pt] \frac{1}{z_i}, & \frac{n}{z_i} \le j < \frac{n}{z_{i-1}}, \\[2pt] \frac{1}{z_i} - \frac{1}{z_{i-1}}, & \frac{n}{z_{i-1}} \le j < \frac{2n}{z_i}, \\[2pt] \frac{2}{z_i} - \frac{1}{z_{i-1}}, & \frac{2n}{z_i} \le j < \frac{2n}{z_{i-1}}, \\[2pt] \frac{2}{z_i} - \frac{2}{z_{i-1}}, & \frac{2n}{z_{i-1}} \le j < \frac{3n}{z_i}, \\[2pt] \quad \vdots & \quad \vdots \end{cases} \qquad (18)$$
On the other hand, let
$$\Gamma^1_{n_1} := \left\{ y_{s_1} \,\middle|\, y_{s_1} = \mathrm{abs}\!\left(\frac{1}{z_2}\left\lfloor\frac{i z_2}{n}\right\rfloor - \frac{1}{z_1}\left\lfloor\frac{i z_1}{n}\right\rfloor\right),\ i = 0, 1, \ldots, n-1 \right\}, \quad s_1 = 0, 1, \ldots, n_1 - 1,$$
$$\Gamma^1_{n_2} := \left\{ y_{s_2} \,\middle|\, y_{s_2} = \mathrm{abs}\!\left(\frac{1}{z_3}\left\lfloor\frac{i z_3}{n}\right\rfloor - \frac{1}{z_2}\left\lfloor\frac{i z_2}{n}\right\rfloor\right),\ i = 0, 1, \ldots, n-1 \right\}, \quad s_2 = 0, 1, \ldots, n_2 - 1,$$
$$\vdots$$
$$\Gamma^1_{n_{d-1}} := \left\{ y_{s_{d-1}} \,\middle|\, y_{s_{d-1}} = \mathrm{abs}\!\left(\frac{1}{z_d}\left\lfloor\frac{i z_d}{n}\right\rfloor - \frac{1}{z_{d-1}}\left\lfloor\frac{i z_{d-1}}{n}\right\rfloor\right),\ i = 0, 1, \ldots, n-1 \right\}, \quad s_{d-1} = 0, 1, \ldots, n_{d-1} - 1,$$
$$\Gamma^1_{n_d} := \left\{ y_{s_d} \,\middle|\, y_{s_d} = \mathrm{abs}\!\left(\frac{z_d}{n}\left(i - \left\lfloor\frac{n}{z_d}\right\rfloor\left\lfloor\frac{i z_d}{n}\right\rfloor\right)\right),\ i = 0, 1, \ldots, n-1 \right\}, \quad s_d = 0, 1, \ldots, n_d - 1,$$
$$\Gamma^d_n := \Gamma^1_{n_1} \times \Gamma^1_{n_2} \times \cdots \times \Gamma^1_{n_d},$$
where
$$n_i = \frac{\mathrm{lcm}(z_i, z_{i+1})}{\min(z_i, z_{i+1})}, \quad i = 1, 2, \ldots, d-1; \qquad n_d = \frac{n}{z_d}, \qquad (19)$$
and lcm denotes the least common multiple.
For any $\mathbf{y}_k = (y_{s_1}, y_{s_2}, \ldots, y_{s_d}) \in \Gamma^d_n$, we have $k = s_1 + s_2 n_1 + s_3 n_1 n_2 + \cdots + s_d n_1 n_2 \cdots n_{d-1}$. Since $s_1 = 0, 1, \ldots, n_1 - 1$, ..., $s_d = 0, 1, \ldots, n_d - 1$, it follows that $k = 0, 1, \ldots, n_1 n_2 \cdots n_d - 1$. For $\mathbf{y}_j$ in the set described by (17), $j = 1, 2, \ldots, n$, we obtain
$$\mathbf{y}_j = \mathrm{abs}\begin{pmatrix} \frac{1}{z_2}\left\lfloor\frac{j z_2}{n}\right\rfloor - \frac{1}{z_1}\left\lfloor\frac{j z_1}{n}\right\rfloor \\ \vdots \\ \frac{1}{z_d}\left\lfloor\frac{j z_d}{n}\right\rfloor - \frac{1}{z_{d-1}}\left\lfloor\frac{j z_{d-1}}{n}\right\rfloor \\ \frac{z_d}{n}\left(j - \left\lfloor\frac{n}{z_d}\right\rfloor\left\lfloor\frac{j z_d}{n}\right\rfloor\right) \end{pmatrix}.$$
Let $y_{i_1} := \mathrm{abs}\!\left(\frac{1}{z_2}\left\lfloor\frac{j z_2}{n}\right\rfloor - \frac{1}{z_1}\left\lfloor\frac{j z_1}{n}\right\rfloor\right)$; it follows that there exists an $s_1 \in \{0, 1, \ldots, n_1 - 1\}$ such that $y_{i_1} = y_{s_1} \in \Gamma^1_{n_1}$. In the same way, $y_{i_2} \in \Gamma^1_{n_2}$, ..., $y_{i_d} \in \Gamma^1_{n_d}$. Therefore, we conclude that $\mathbf{y}_j = (y_{i_1}, y_{i_2}, \ldots, y_{i_d}) \in \Gamma^d_n$; that is, the transformed lattice points have the Cartesian product structure.    □
Lemma 1. 
Let $\mathbf{x}_j = \left\{\frac{j\mathbf{z}}{n}\right\}$, $j = 0, 1, \ldots, n-1$, denote the Korobov lattice point set, where $\mathbf{z} = (1, a, a^2, \ldots, a^{d-1})$, $1 \le a \le n-1$, and a is an integer parameter that determines the generating vector of the Korobov lattice. Assume that a and n are relatively prime. Define $\mathbf{y}_j := \mathrm{abs}(A\mathbf{x}_j + \mathbf{b})$, $j = 0, 1, \ldots, n-1$, which satisfies the conclusion of Theorem 3. Moreover, if $a = n^{1/d}$, then the number of points in each coordinate direction of the lattice set $\Gamma^d_n$ is the same, that is, $n_1 = n_2 = \cdots = n_d = a$, where $n_d$ denotes the number of lattice points along the d-th coordinate direction.
Proof. 
From Theorem 3, we have
$$\mathbf{y}_j = \mathrm{abs}\begin{pmatrix} \frac{1}{a}\left\lfloor\frac{j a}{n}\right\rfloor \\ \frac{1}{a^2}\left\lfloor\frac{j a^2}{n}\right\rfloor - \frac{1}{a}\left\lfloor\frac{j a}{n}\right\rfloor \\ \vdots \\ \frac{1}{a^{d-1}}\left\lfloor\frac{j a^{d-1}}{n}\right\rfloor - \frac{1}{a^{d-2}}\left\lfloor\frac{j a^{d-2}}{n}\right\rfloor \\ \frac{a^{d-1}}{n}\left(j - \left\lfloor\frac{n}{a^{d-1}}\right\rfloor\left\lfloor\frac{j a^{d-1}}{n}\right\rfloor\right) \end{pmatrix},$$
and
$$\Gamma^1_{n_1} = \left\{ y_{s_1} \,\middle|\, y_{s_1} = \mathrm{abs}\!\left(\frac{1}{a}\left\lfloor\frac{j a}{n}\right\rfloor\right),\ j = 0, 1, \ldots, n-1 \right\} = \left\{0, \frac{1}{a}, \frac{2}{a}, \ldots, \frac{a-1}{a}\right\},$$
$$\Gamma^1_{n_2} = \left\{ y_{s_2} \,\middle|\, y_{s_2} = \mathrm{abs}\!\left(\frac{1}{a^2}\left\lfloor\frac{j a^2}{n}\right\rfloor - \frac{1}{a}\left\lfloor\frac{j a}{n}\right\rfloor\right),\ j = 0, 1, \ldots, n-1 \right\} = \left\{0, \frac{1}{a^2}, \frac{2}{a^2}, \ldots, \frac{a-1}{a^2}\right\},$$
$$\vdots$$
$$\Gamma^1_{n_{d-1}} = \left\{0, \frac{1}{a^{d-1}}, \frac{2}{a^{d-1}}, \ldots, \frac{a-1}{a^{d-1}}\right\}, \qquad \Gamma^1_{n_d} = \left\{0, \frac{a^{d-1}}{n}, \frac{2 a^{d-1}}{n}, \ldots, \frac{n - a^{d-1}}{n}\right\}.$$
For $\mathbf{y}_j$ in the set described by (21), let $y_{i_1} := \mathrm{abs}\!\left(\frac{1}{a}\left\lfloor\frac{j a}{n}\right\rfloor\right)$; it follows that there exists an $s_1$ such that $y_{i_1} = \frac{s_1}{a} = y_{s_1} \in \Gamma^1_{n_1}$. Similarly, $y_{i_2} = \frac{s_2}{a^2} = y_{s_2} \in \Gamma^1_{n_2}$, ..., $y_{i_d} \in \Gamma^1_{n_d}$. Therefore, we conclude that $\mathbf{y}_j = (y_{i_1}, y_{i_2}, \ldots, y_{i_d}) \in \Gamma^d_n$; thus, the transformed lattice points have the Cartesian product structure. Moreover, if $a = [n^{1/d}]$, then
$$n_i = \frac{\mathrm{lcm}(z_i, z_{i+1})}{\min(z_i, z_{i+1})} = \frac{z_{i+1}}{z_i} = a, \quad i = 1, 2, \ldots, d-1; \qquad n_d = \frac{n}{z_d} = \frac{a^d}{a^{d-1}} = a.$$
Hence $n_1 = n_2 = \cdots = n_{d-1} = n_d = a$. The proof is complete.    □
The left graph of Figure 3 shows a 2-d example of an 81-point rank-one lattice with generating vector (1, 4). The right graph displays the transformed lattice under the affine coordinate transformation $\mathbf{y} = A\mathbf{x} + \mathbf{b}$ from $\mathbb{R}^2$ to itself, where
$$A = \begin{pmatrix} 1 & -\frac{1}{4} \\ 0 & 1 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} 0 \\ -\left\lfloor\frac{81 x_2}{4}\right\rfloor \cdot \frac{4}{81} \end{pmatrix}.$$
Figure 4 demonstrates a specific example in 3-d. The left graph is a 161-point rank-one lattice with generating vector (1, 4, 16). The right graph shows the transformed points under the affine coordinate transformation $\mathbf{y} = A\mathbf{x} + \mathbf{b}$ from $\mathbb{R}^3$ to itself, where
$$A = \begin{pmatrix} 1 & -\frac{1}{4} & 0 \\ 0 & \frac{1}{4} & -\frac{1}{16} \\ 0 & 0 & 1 \end{pmatrix}, \qquad \mathbf{b} = \begin{pmatrix} 0 \\ 0 \\ -\left\lfloor\frac{161 x_3}{16}\right\rfloor \cdot \frac{16}{161} \end{pmatrix}.$$
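The following small sketch checks this construction numerically in the "perfect" Korobov case of Lemma 1 (d = 2, a = 9, n = a² = 81, z = (1, 9)); in this special case the correction vector b vanishes at the lattice points, so y = abs(Ax) already fills an exact 9 × 9 Cartesian grid. This is an illustrative check, not the authors' code.

```python
import numpy as np

d, a = 2, 9
n = a**d
z = np.array([1, a])
x = (np.arange(n)[:, None] * z % n) / n     # rank-one lattice points x_j
A = np.array([[1.0, -1.0 / a],
              [0.0,  1.0]])
y = np.abs(x @ A.T)                         # y_j = abs(A x_j); b = 0 in this case

print(np.unique(np.round(y[:, 0], 12)))     # {0, 1/9, ..., 8/9}
print(np.unique(np.round(y[:, 1], 12)))     # {0, 1/9, ..., 8/9}
print(len(np.unique(np.round(y, 12), axis=0)))   # 81 distinct points: a full grid
```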

3.2. Improved Lattice Rules

From Figure 3 and Figure 4, we see that the transformed lattice point sets do not exactly form tensor product grids because many lines miss one point. By adding those "missing" points, which can be done systematically, we easily make them become tensor product grids in the transformed coordinate system. Since more integration points are added to the QMC lattice rule, the resulting quadrature rule is expected to be more accurate (which is supported by our numerical tests); hence, it is an improvement of the original QMC lattice rule. We also note that those added points would correspond to ghost points in the original coordinates.
Definition 2 
(Improved QMC lattice rule). Let $\mathbf{z} \in \mathbb{Z}^d$ and $\mathbf{x}_i = \left\{\frac{i\mathbf{z}}{n}\right\}$, $i = 0, 1, \ldots, n-1$, be a rank-one lattice point set, and let $\mathbf{y}_i := A\mathbf{x}_i + \mathbf{b}$, $i = 0, 1, \ldots, n-1$, for some $A \in \mathbb{R}^{d \times d}$ and $\mathbf{b} \in \mathbb{R}^d$ (which uniquely determine an affine transformation). Suppose there exist $n'$ ($\ll n$) points such that together the $n + n'$ points form a tensor product grid in the transformed coordinate system; then the QMC lattice rule obtained by using those $n + n'$ sampling points is called an improved QMC lattice rule and is denoted by $\hat{Q}_{n,d}(f)$.
Figure 5 shows an 81-point (i.e., n = 81) 2-d rank-one lattice with generating vector (1, 7) (left), the transformed lattice (middle), and the improved tensor product grid (right). Three points (in red) are added on the top, so $n' = 3$ in this example.
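A sketch of this grid-completion step is given below, under the assumption that the transformed points are supplied as an array y: collect the distinct coordinate values along each axis, take their Cartesian product, and the n′ added points are the grid nodes not already present.

```python
import numpy as np
from itertools import product

def complete_to_grid(y, decimals=12):
    """Complete transformed lattice points y (an (n, d) array) to a tensor grid."""
    yr = np.round(y, decimals)
    axes = [np.unique(yr[:, j]) for j in range(yr.shape[1])]
    grid = np.array(list(product(*axes)))            # full tensor product grid
    have = {tuple(p) for p in yr}
    added = np.array([p for p in grid if tuple(p) not in have])
    return grid, added                               # len(added) = n' extra points
```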

4. The MDI-LR Algorithm

Since an improved rank-one lattice is a tensor product grid in the transformed coordinate system, its corresponding quasi-Monte Carlo (QMC) rule is a tensor product rule with equal weight $w = \frac{1}{n + n'}$. This tensor product structure allows us to apply the multilevel dimension iteration (MDI) approach, which was proposed by the authors in [22], for a fast implementation of the original QMC lattice rule, especially in the high dimensional case. The resulting algorithm will be called the MDI-LR algorithm throughout this paper.

Formulation of the MDI-LR Algorithm

To formulate our MDI-LR algorithm, we first recall the MDI idea/algorithm in simple terms (cf. [22]). For a tensor product rule, we need to compute a multi-summation with variable limits:
$$\sum_{i_1=1}^{n_1}\sum_{i_2=1}^{n_2}\cdots\sum_{i_d=1}^{n_d} f(\xi_{i_1}, \xi_{i_2}, \ldots, \xi_{i_d}),$$
which involves $n_1 n_2 \cdots n_d$ function evaluations of the given function f if one uses the conventional approach of computing the function value at each point independently; this inevitably leads to the curse of dimensionality (CoD). To make the computation feasible in high dimensions, it is imperative to save computational cost by evaluating the summation more efficiently.
The main idea of the MDI approach proposed in [22] is to compute those $n_1 n_2 \cdots n_d$ function values in clusters (not independently) and to compute the summation layer by layer, based on a dimension iteration with the help of symbolic computation. To that end, we write
$$\sum_{i_1=1}^{n_1}\sum_{i_2=1}^{n_2}\cdots\sum_{i_d=1}^{n_d} f(\xi_{i_1}, \xi_{i_2}, \ldots, \xi_{i_d}) = \sum_{i_{m+1}=1}^{n_{m+1}}\cdots\sum_{i_d=1}^{n_d} f_{d-m}(\xi_{i_{m+1}}, \ldots, \xi_{i_d}),$$
where $1 \le m \ll d$ is fixed and
$$f_{d-m}(x_1, \ldots, x_{d-m}) := \sum_{i_1=1}^{n_1}\cdots\sum_{i_m=1}^{n_m} f(\xi_{i_1}, \ldots, \xi_{i_m}, x_1, \ldots, x_{d-m}).$$
The MDI approach recursively generates a sequence of symbolic functions $\{f_{d-m}, f_{d-2m}, \ldots, f_{d-\ell m}\}$; each function has m fewer arguments than its predecessor (because the dimension is reduced by m at each iteration). As already mentioned above, the MDI approach exploits the lattice structure of the tensor product integration point set; instead of evaluating function values at all integration points independently, it evaluates them in clusters and iteratively along m coordinate directions, and the function evaluation at any integration point is not completed until the last step of the algorithm is executed. In some sense, the implementation strategy of the MDI approach is to trade large space complexity for low time complexity. That being said, the price paid by the MDI approach for the speedy evaluation of the multi-summation is that the symbolic functions need to be saved during the iteration process, which often takes up more computer memory.
For example, consider the 2-d function $f(x_1, x_2) = x_1^2 + x_1 x_2 + x_2^2$ and let $n_1 = n_2 = N$. In the standard approach, to compute the function value $f(\xi_{i_1}, \xi_{i_2})$ at an integration point $(\xi_{i_1}, \xi_{i_2})$, one needs to compute three multiplications, $\xi_{i_1}\xi_{i_1} = \xi_{i_1}^2$, $\xi_{i_1}\xi_{i_2}$, and $\xi_{i_2}\xi_{i_2} = \xi_{i_2}^2$, and two additions; computing all $N^2$ function values and summing them thus requires a total of $3N^2$ multiplications and $3N^2 - 1$ additions. On the other hand, the first for-loop of the MDI approach generates $f_1(x) = \sum_{i_2=1}^{N} f(x, \xi_{i_2})$, which requires N evaluations of $\xi_{i_2} x$ (symbolic computations) and N evaluations of $\xi_{i_2}\xi_{i_2}$, as well as $3(N-1)$ additions. The second for-loop computes $\sum_{i_1=1}^{N} f_1(\xi_{i_1})$, which requires N evaluations of $\xi_{i_1}\xi_{i_1}$ and N evaluations of $\xi_{i_1}\bar{\xi}$ (where $\bar{\xi}$ denotes the coefficient accumulated in the first loop), as well as $3N - 1$ additions. After the second for-loop completes, we obtain the summation value. The computational cost of the MDI approach thus consists of a total of $4N$ multiplications and $6N - 4$ additions, which is much cheaper than the standard approach. In fact, the speedup is even more dramatic in higher dimensions.
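The sketch below replays this 2-d example with sympy standing in for the symbolic computation; the nodes $\xi_i$ are placeholders.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
f = x1**2 + x1*x2 + x2**2
N = 5
xi = [sp.Rational(i, N) for i in range(N)]          # N nodes per direction

f1 = sp.expand(sum(f.subs(x2, v) for v in xi))      # first for-loop: f1(x1), symbolic
total = sum(f1.subs(x1, v) for v in xi)             # second for-loop: a plain number

# Agrees with the brute-force double sum over all N^2 points:
assert total == sum(f.subs({x1: u, x2: v}) for u in xi for v in xi)
```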
It is easy to see that the MDI approach cannot be applied directly to the QMC rule (2) because it is not in a multi-summation form. However, we showed in Section 3 that this obstacle can be overcome by a simple affine coordinate transformation (i.e., a change of variables) and by adding a few integration points.
Let $\mathbf{y} = A\mathbf{x} + \mathbf{b}$ denote the affine transformation; then the integral (1) is equivalent to
$$I_d(f) = \frac{1}{|A|}\int_{\hat{\Omega}} f\bigl(A^{-1}(\mathbf{y} - \mathbf{b})\bigr)\, d\mathbf{y} = \frac{1}{|A|}\int_{\hat{\Omega}} g(\mathbf{y})\, d\mathbf{y},$$
where $|A|$ stands for the determinant of $A \in \mathbb{R}^{d \times d}$ and
$$g(\mathbf{y}) := f\bigl(A^{-1}(\mathbf{y} - \mathbf{b})\bigr), \qquad \hat{\Omega} := \{\mathbf{y} \,|\, \mathbf{y} = A\mathbf{x} + \mathbf{b},\ \mathbf{x} \in \Omega\}.$$
Then, our improved QMC rank-one lattice rule for (2) in the $\mathbf{y}$-coordinate system takes the form
$$\hat{Q}_{n,d}(f) = \frac{1}{n + n'}\sum_{i=0}^{n+n'-1} f(\mathbf{x}_i) = \frac{1}{|A|}\sum_{s_1=1}^{n_1}\sum_{s_2=1}^{n_2}\cdots\sum_{s_d=1}^{n_d} g(y_{s_1}, y_{s_2}, \ldots, y_{s_d}). \qquad (27)$$
Define
$$J(g, \Omega) := \sum_{s_1=1}^{n_1}\sum_{s_2=1}^{n_2}\cdots\sum_{s_d=1}^{n_d} g(y_{s_1}, y_{s_2}, \ldots, y_{s_d}).$$
Clearly, this is a multi-summation with variable limits; thus, we can apply the MDI approach to compute it efficiently. Before doing so, we first need to extend the MDI algorithm, Algorithm 2.3 of [22], to the case of variable limits. We name the extended algorithm MDI($d, g, \Omega_d, N_d, m$), which is defined in Algorithm 1 below.
Algorithm 1 MDI($d, g, \Omega, N_d, m$)
Inputs: $d\,(\ge 4)$, $g$, $\Omega$, $m\,(= 1, 2, 3)$, $N_k = (n_1, n_2, \ldots, n_k)$, $k = 1, 2, \ldots, d$.
Output: $J = J(g, \Omega)$.
1: $\Omega_d = \Omega$, $g_d = g$, $\ell = \lfloor d/m \rfloor$.
2: for $k = d : -m : d - \ell m$ (the index is decreased by m at each iteration) do
3:    $\Omega_{k-m} = P_k^{k-m}\,\Omega_k$.
4:    Construct the symbolic function $g_{k-m}$ by (28) below.
5:    MDI($k, g_k, \Omega_k, N_k, m$) := MDI($k-m, g_{k-m}, \Omega_{k-m}, N_{k-m}, m$).
6: end for
7: $J$ = MDI($d - \ell m, g_{d-\ell m}, \Omega_{d-\ell m}, N_{d-\ell m}, m$).
8: return J.
Here $P_k^{k-m}$ denotes the natural embedding from $\mathbb{R}^k$ to $\mathbb{R}^{k-m}$ obtained by deleting the first m components of vectors in $\mathbb{R}^k$, and
$$g_{k-m}(s_1, \ldots, s_{k-m}) = \sum_{i_1, \ldots, i_m = 1}^{n_1, \ldots, n_m} w_{i_1} w_{i_2} \cdots w_{i_m}\, g_k(\xi_{i_1}, \ldots, \xi_{i_m}, s_1, \ldots, s_{k-m}). \qquad (28)$$
Remark 1. 
(a) Algorithm 1 recursively generates a sequence of symbolic functions $\{g_d, g_{d-m}, g_{d-2m}, \ldots, g_{d-\ell m}\}$; each function has m fewer arguments than its predecessor.
(b) Since $m \le 3$, when d = 2, 3, we simply use the underlying low dimensional QMC quadrature rules. As in [22], we name these low dimensional algorithms 2d-MDI($g, \Omega, N_2$) and 3d-MDI($g, \Omega, N_3$) and introduce the following conventions.
If k = 1, set MDI($k, g_k, \Omega_k, n_1, m$) := $J(g_k, \Omega_k)$, which is computed by using the underlying 1-d QMC quadrature rule.
If k = 2, set MDI($k, g_k, \Omega_k, N_k, m$) := 2d-MDI($g_k, \Omega_k, N_k$).
If k = 3, set MDI($k, g_k, \Omega_k, N_k, m$) := 3d-MDI($g_k, \Omega_k, N_k$).
We note that when k = 1, 2, 3, the parameter m becomes a dummy variable and can be given any value.
(c) We also note that the MDI algorithm in [22] has an additional parameter r which selects the 1-d quadrature rule. Such a choice is not needed here because the underlying QMC rule is used as the 1-d quadrature rule.
We are now ready to define our MDI-LR algorithm, MDI-LR($f, \Omega, d, a, n$), which is given in Algorithm 2 below and uses the above MDI algorithm to evaluate $\hat{Q}_{n,d}(f)$ in (27).
Algorithm 2 MDI-LR($f, \Omega, d, a, n$)
Inputs: $f$, $\Omega$, $d$, $a$, $n$.
Output: $\hat{Q}_{n,d}(f) = Q_{n+n',d}(f)$.
1: Initialize $\mathbf{z} = (1, a, a^2, \ldots, a^{d-1})$, $J = 0$, $Q = 0$, $m = 1$.
2: Construct the matrix A and the vector $\mathbf{b}$ by (15).
3: $g(\mathbf{y}) := f(A^{-1}(\mathbf{y} - \mathbf{b}))$.
4: Generate the vector $N_d$ by (19).
5: $\hat{\Omega} := \{\mathbf{y} \,|\, \mathbf{y} = A\mathbf{x} + \mathbf{b},\ \mathbf{x} \in \Omega\}$.
6: $J$ = MDI($d, g, \hat{\Omega}, N_d, m$).
7: $Q = J/|A|$.
8: return $\hat{Q}_{n,d}(f) = Q$.
Note that here we set m = 1; that is, the dimension is reduced by 1 at each dimension iteration. This is because the numerical tests of [22] show that the MDI algorithm is more efficient with m = 1 than with m > 1. Also, the upper limit vector $N_d$ depends on the choice of the underlying QMC rule. In Lemma 1, we showed that when $N = [n^{1/d}]$ and $a = N$, then $n_1 = n_2 = \cdots = n_d = N$; that is, the number of integration points is the same in each (transformed) coordinate direction.
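To make the dimension iteration concrete, the following is a minimal sketch of the core of Algorithm 1 with m = 1 and the limits already fixed, again with sympy standing in for the symbolic computation; the integrand, nodes, and dimension are placeholders, and the equal weights and the factor $1/|A|$ of Algorithm 2 would be applied to the returned sum.

```python
import sympy as sp

def mdi_sum(g_expr, ys, axes):
    """Evaluate the multi-summation J(g, Omega) by eliminating one coordinate
    per iteration (m = 1), carrying a symbolic function of the remaining ones."""
    expr = g_expr
    for y, nodes in zip(ys, axes):
        # One dimension iteration: collapse the y-direction into the expression.
        expr = sp.expand(sum(expr.subs(y, v) for v in nodes))
    return expr        # all coordinates eliminated: a plain number

# Example: d = 4 with n_k = 3 nodes in each transformed direction.
ys = sp.symbols('y1:5')
g = (sum(ys) + 1)**2
axes = [[sp.Rational(s, 3) for s in range(3)] for _ in ys]
J = mdi_sum(g, ys, axes)
```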

5. Numerical Performance Tests

In this section, we present extensive and purposely designed numerical experiments to gauge the performance of the proposed MDI-LR algorithm and to demonstrate its superiority over the standard implementation of the QMC lattice rule (SLR) and the improved lattice rule (Imp-LR) for computing high dimensional integrals. All our numerical experiments are performed in Matlab on a desktop PC with an Intel(R) Xeon(R) Gold 6226R CPU (2.90 GHz) and 32 GB RAM. It should be noted that the MDI-LR algorithm does not require any prior knowledge of the integrand's smoothness, which makes it applicable to a broad class of problems, including those with limited regularity.

5.1. Two and Three-Dimensional Tests

We first test our MDI-LR algorithm on simple 2-d and 3-d examples and compare its performance (in terms of CPU time) with the SLR and Imp-LR methods.
Test 1. Let $\Omega = [0,1]^2$ and consider the following 2-d integrands:
$$f(\mathbf{x}) := \frac{x_2\, e^{x_1 x_2}}{e - 2}; \qquad \hat{f}(\mathbf{x}) := \sin\bigl(2\pi + x_1^2 + x_2^2\bigr).$$
Table 1 and Table 2 present the computational results (errors and CPU times) of the SLR, Imp-LR, and MDI-LR methods for approximating $I_2(f)$ and $I_2(\hat{f})$, respectively. Recall that the Imp-LR is obtained by adding some sampling points on the boundary of the integration domain in the transformed coordinates, and the MDI-LR algorithm provides a fast implementation of the Imp-LR using the MDI approach. From Table 1 and Table 2, we observe that in low dimensions (e.g., d = 2), all three methods require very little CPU time, and the SLR method may even be slightly faster than the Imp-LR and MDI-LR methods. Because the Imp-LR and MDI-LR methods use additional sampling points on the boundary, they achieve slightly higher accuracy than SLR. Moreover, the MDI-LR method is specifically designed to accelerate the Imp-LR for computing high-dimensional integrals. As the dimension increases, the advantage of the MDI-LR becomes significant: it achieves much higher accuracy while maintaining moderate CPU time, resulting in substantially lower computational complexity and greater efficiency than SLR in high dimensions.
Test 2. Let $\Omega = [0,1]^3$ and consider the following 3-d integrands:
$$f(\mathbf{x}) := \frac{e^{x_1 + x_2 + x_3}}{(e-1)^3}; \qquad \hat{f}(\mathbf{x}) := \sin\bigl(2\pi + x_1^2 + x_2^2 + x_3^2\bigr).$$
Table 3 and Table 4 present the simulation results (errors and CPU times) of the SLR, Imp-LR, and MDI-LR methods for computing $I_3(f)$ and $I_3(\hat{f})$ in Test 2. We observe that the SLR method requires less CPU time in both simulations; as in Test 1, the advantage of the MDI-LR method in accelerating the computation does not materialize in low dimensions. Once again, the Imp-LR and MDI-LR methods have higher accuracy than the SLR method because they use additional sampling points on the boundary of the transformed domain.

5.2. High Dimensional Tests

Since the MDI-LR method is designed for computing high dimensional integrals, its performance for $d \gg 1$ is of primary interest; assessing it is the main task of this subsection. First, we test and compare the performance (in terms of CPU time) of the SLR, Imp-LR, and MDI-LR methods for computing high dimensional integrals when the number of lattice points grows as the dimension increases. Then, we also test the performance of the SLR and MDI-LR methods for computing high dimensional integrals when the number of lattice points increases slowly with the dimension d.
Test 3. Let $\Omega = [0,1]^d$ for $2 \le d \le 50$ and consider the following Gaussian integrand:
$$f(\mathbf{x}) = \frac{1}{2\pi} \exp\left(-\frac{1}{2}|\mathbf{x}|^2\right),$$
where $|\mathbf{x}|$ stands for the Euclidean norm of the vector $\mathbf{x} \in \mathbb{R}^d$.
Table 5 shows the relative errors and CPU times of the SLR, Imp-LR, and MDI-LR methods for approximating the Gaussian integral $I_d(f)$. The simulation results indicate that the SLR and Imp-LR methods are more efficient when d < 7, but they struggle to compute the integral when d > 11 because the number of lattice points increases exponentially with the dimension. This is not a problem for the MDI-LR method, which computes this high dimensional integral easily. Moreover, the MDI-LR method improves the accuracy of the original QMC rule significantly by adding some integration points on the boundary of the transformed domain.
Table 6 shows the relative errors and CPU times of the SLR and MDI-LR methods for computing $I_d(f)$ when the number of lattice points increases slowly with the dimension d. As the dimension increases, the CPU time required by the SLR method increases sharply (see Figure 6). When approximating the Gaussian integral in about 30 dimensions with $10^{11}$ lattice points, the SLR method requires 74 h to obtain a result with relatively low accuracy. In contrast, the MDI-LR method takes only about one second to obtain a more accurate value; this demonstrates that the acceleration effect of the MDI-LR method is quite dramatic.
It is well known that obtaining high accuracy approximations in high dimensions is difficult because the number of integration points required is enormous. A natural question is whether the MDI-LR method can handle very high (i.e., $d \approx 1000$) dimensional integration with reasonable accuracy. First, we note that the answer is machine dependent, as expected. Next, we present a test on the computer at our disposal to provide a positive answer to this question.
Test 4. Let $\Omega = [0,1]^d$ and consider the following integrands:
$$f(\mathbf{x}) = \exp\left(\sum_{i=1}^{d} (-1)^{i+1} x_i\right), \qquad \hat{f}(\mathbf{x}) = \prod_{i=1}^{d} \frac{1}{0.9^2 + (x_i - 0.6)^2}.$$
We use the MDI-LR algorithm to compute $I_d(f)$ and $I_d(\hat{f})$ with parameters a = 8, 20 and an increasing sequence of d. The computed results are presented in Table 7. The simulation is stopped at d = 1000 because this is already in the very high dimension regime. These tests demonstrate the efficacy and potential of the MDI-LR method in efficiently computing high dimensional integrals. However, we note that, in terms of efficiency and accuracy, the MDI-LR method underperforms its two companion methods, namely the MDI-TP [22] and MDI-SG [29] methods, as shown in Figure 7. The main reason for the underperformance is that the original lattice rule is unable to provide highly accurate integral approximations, and the MDI-LR is a fast implementation algorithm (i.e., solver) for the modified lattice rule Imp-LR. Nevertheless, the lattice rule has its own advantages, such as allowing flexible numbers of integration points and giving better results for periodic integrands.

6. Influence of Parameters

The original MDI algorithm involves three crucial input parameters: r, m, and N. The parameter r selects the one-dimensional base quadrature rule, m sets the step size of the dimension iteration, and N represents the number of integration points in each coordinate direction. The MDI-LR algorithm is similar to the original MDI algorithm but uses the QMC rank-one lattice rule with generating vector $\mathbf{z}$, so the parameter r is not needed. Here we focus on the Korobov approach for constructing the generating vector $\mathbf{z}$, which is defined as $\mathbf{z} = \mathbf{z}(a) := (1, a, a^2, \ldots, a^{d-1})$. Moreover, the improved tensor product rule (in the transformed coordinate system) has variable upper limits in the summation (cf. (27)); hence, N is now replaced by $N_d$, which is determined by the underlying QMC lattice rule. Furthermore, as explained earlier, we set m = 1 based on our experience in [22]. As a result, the only parameter to select is a. Below, we first test the influence of the Korobov parameter a on the efficiency of the MDI-LR algorithm and then test the dependence of its performance on $N_d$ and d.

6.1. Influence of Parameter a

In this subsection, we investigate the impact of the generating vector $\mathbf{z} = \mathbf{z}(a) := (1, a, a^2, \ldots, a^{d-1})$ on the MDI-LR algorithm. We note that similar methods can be constructed using other choices of $\mathbf{z}$.
Test 5. Let $\Omega = [0,1]^d$ and consider the following integrands:
$$f(\mathbf{x}) = \frac{1}{2\pi} \exp\left(-\frac{1}{2}|\mathbf{x}|^2\right), \qquad \hat{f}(\mathbf{x}) = \cos\left(2\pi + 2\sum_{i=1}^{d} x_i\right), \qquad \tilde{f}(\mathbf{x}) = \prod_{i=1}^{d} \frac{1}{0.9^2 + (x_i - 0.6)^2}.$$
We compare the performance of the MDI-LR algorithm with different Korobov parameters a, holding the other parameters unchanged, when computing $I_d(f)$, $I_d(\hat{f})$, and $I_d(\tilde{f})$.
Figure 8 shows the computed results for d = 5, 10 and a = 4, 6, 8, 10, 12, 14, 16. We observe that the MDI-LR algorithm with different parameters a has different accuracy, and the effect can be significant. These results indicate that the algorithm is most efficient when a = N, where $N = [n^{1/d}]$ and n represents the total number of integration points. When a smaller a is used, fewer integration points need to be evaluated in each coordinate direction in the first d−1 dimension iterations; but since the total number of integration points n is the same, the number of iterations in the remaining direction, and hence the amount of computation, increases dramatically. When a larger a is used, more integration points must be processed in each coordinate direction in the first d−1 dimension iterations. Only when the integration points are equally distributed among the coordinate directions is the efficiency of the MDI-LR algorithm optimized. Figure 9 illustrates this with a total of 100 points in 2-d. When a = 2, only 2 iterations in the $x_1$-direction are needed, but 50 iterations in the $x_2$-direction must be performed; hence, a total of 52 iterations in the two directions are required. On the other hand, when a = 20, a total of 25 iterations in the two directions are required. It is easy to check that the least total, 20 iterations, occurs when a = 10 (see the snippet below). The difference in accuracy is also expected, because different values of a lead to different generating vectors $\mathbf{z}$, which in turn result in different integration points. We note that how to choose a to achieve the highest accuracy has already been well studied in the literature (cf. [10]).
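This trade-off is a one-line computation; the sketch below reproduces the iteration counts quoted above for n = 100 points in 2-d.

```python
n = 100
counts = {a: a + n // a for a in (2, 4, 5, 10, 20, 25, 50)}
print(counts)   # {2: 52, 4: 29, 5: 25, 10: 20, 20: 25, 25: 29, 50: 52} -> best at a = 10
```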

6.2. Influence of the Parameter $N = [n^{1/d}]$

From the previous subsection, we know that the algorithm is most efficient when a = N, where N represents the number of integration points in each direction. This subsection investigates the impact of N on the MDI-LR algorithm. For this purpose, we conduct tests with a = N for d = 5 and d = 10.
Test 6. Let Ω , f, f ^ and f ˜ be the same as in Test 5.
Table 8, Table 9 and Table 10 present a performance comparison of the MDI-LR algorithm with d = 5, 10 and N = 4, 6, 8, 10, 12, 14, 16. We note that the quality of the computed results also depends on the type of integrand. As expected, more integration points must be used to achieve good accuracy for highly oscillatory and fast-growing integrands.

7. Computational Complexity

7.1. The Relationship Between the CPU Time and N

In this subsection, we examine the relationship between the CPU time and the parameters $N = [n^{1/d}]$ and a = N using a regression technique on test data.
Figure 10 and Figure 11 show the CPU time as a function of N obtained by least squares regression, with the fitted functions given in Table 11. All results show that the CPU time grows in proportion to $N^3$.

7.2. The Relationship Between the CPU Time and the Dimension d

In this subsection, we explore the computational complexity (in terms of CPU time as a function of d) using least squares regression on numerical test data.
Test 7. Let $\Omega = [0,1]^d$ and consider the following six integrands:
$$f_1(\mathbf{x}) = \exp\left(\sum_{i=1}^{d} (-1)^{i+1} x_i\right), \qquad f_2(\mathbf{x}) = \prod_{i=1}^{d} \frac{1}{0.9^2 + (x_i - 0.6)^2}, \qquad f_3(\mathbf{x}) = \frac{1}{2\pi} \exp\left(-\frac{1}{2}|\mathbf{x}|^2\right),$$
$$f_4(\mathbf{x}) = \cos\left(2\pi + \sum_{i=1}^{d} 2 x_i\right), \qquad f_5(\mathbf{x}) = \exp\left(\sum_{i=1}^{d} (-1)^{i+1} x_i^2\right), \qquad f_6(\mathbf{x}) = \left(1 + \sum_{i=1}^{d} x_i\right)^{-(d+1)}.$$
Figure 12 displays the CPU time as a function of d obtained by least squares regression, whose analytical expressions are given in Table 12. We note that the parameters of the MDI-LR algorithm only affect the coefficients of the fitted functions, not the powers of the polynomials. These results show that the CPU time required by the proposed MDI-LR algorithm grows at most with polynomial order $O(d^3 N^2)$.
We assess the quality of the fitted curves using the R-square criterion in Matlab, defined by
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2},$$
where $y_i$ is a test data output, $\hat{y}_i$ is the predicted value, and $\bar{y}$ is the mean of the $y_i$. As shown in Table 12, the R-square values of all fitted functions are close to 1, indicating their high accuracy. These results support the observation that the CPU time grows no more than cubically with the dimension d. Combined with the results of Test 6 in Section 6.2, we conclude that the computational cost of the proposed MDI-LR algorithm scales at most polynomially, of order $O(N^2 d^3)$.
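A sketch of this regression procedure is given below; fitting log(CPU time) against log(d) makes the slope an estimate of the polynomial order. The timing data are placeholders, not measurements from Table 12.

```python
import numpy as np

d = np.array([10.0, 20.0, 40.0, 80.0, 160.0])
t = np.array([0.02, 0.17, 1.40, 11.0, 90.0])       # placeholder CPU times (s)

slope, intercept = np.polyfit(np.log(d), np.log(t), 1)
t_hat = np.exp(intercept) * d**slope                # fitted power law c * d^slope
r2 = 1 - np.sum((t - t_hat)**2) / np.sum((t - np.mean(t))**2)
print(f"fitted order ~ d^{slope:.2f}, R-square = {r2:.4f}")
```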

8. Conclusions

In this paper, we develop a fast numerical algorithm, termed MDI-LR, for the efficient implementation of quasi-Monte Carlo (QMC) lattice rules in computing the d-dimensional integral of a given function. To formulate the algorithm, we first employ an affine transformation to map standard rank-one lattice points into a tensor-product-like lattice and then enhance computational reuse and scalability through a multilevel dimension iteration (MDI) approach. This design eliminates the need for explicitly storing integration points or independently evaluating function values, thereby improving both efficiency and scalability.
The main innovations of the proposed method are fourfold. First, the affine-transformed lattice enhances the uniformity of the integration point distribution, particularly near the domain boundary. Second, the transformation allows us to systematically add a small number of supplementary points near the boundary of the integration domain to obtain a perfect tensor-product lattice in the new coordinate system. The tensor-product lattice gives better coverage of the integration domain and results in a slightly more accurate quadrature rule than the original QMC lattice rule with almost the same number of integration points. Third, the transformed QMC lattice (i.e., the tensor-product lattice) can be seamlessly embedded into the multilevel dimension iteration (MDI) acceleration framework, which substantially improves computational efficiency while maintaining the numerical accuracy of the quadrature rule. Finally, the proposed MDI-LR algorithm demonstrates excellent scalability and robustness in very high-dimensional problems (up to thousands of dimensions), surpassing direct implementations of QMC methods in both efficiency and accuracy.
Extensive numerical experiments confirm that the MDI-LR algorithm achieves a computational complexity of approximately $O(d^3N^2)$ or better, effectively mitigating the curse of dimensionality. These results indicate that the MDI-LR algorithm makes QMC lattice rules not only competitive but also practically applicable to large-scale high-dimensional integration problems. Future work will focus on extending the proposed framework to general Monte Carlo methods and applying it to high-dimensional partial differential equations and real-world computational models.

Author Contributions

Conceptualization, X.F.; methodology, H.Z. and X.F.; code and simulation, H.Z.; writing—original draft preparation, H.Z.; writing—revision and editing, X.F. All authors have read and agreed to the published version of the manuscript.

Funding

The work of X.F. was partially supported by the NSF grant: DMS-2309626.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Bungartz, H.-J.; Griebel, M. Sparse grids. Acta Numer. 2004, 13, 147–269.
  2. Gerstner, T.; Griebel, M. Numerical integration using sparse grids. Numer. Algorithms 1998, 18, 209–232.
  3. Caflisch, R.E. Monte Carlo and quasi-Monte Carlo methods. Acta Numer. 1998, 7, 1–49.
  4. Ogata, Y. A Monte Carlo method for high dimensional integration. Numer. Math. 1989, 55, 137–157.
  5. Bratley, P.; Fox, B.; Niederreiter, H. Implementation and tests of low discrepancy sequences. ACM Trans. Model. Comput. Simul. 1992, 2, 195–213.
  6. Niederreiter, H. Low-discrepancy and low-dispersion sequences. J. Number Theory 1988, 30, 51–70.
  7. Faure, H. Good permutations for extreme discrepancy. J. Number Theory 1992, 42, 47–56.
  8. Sobol, I. Uniformly distributed sequences with an additional uniform property. USSR Comput. Math. Math. Phys. 1977, 16, 236–242.
  9. Kuo, F.Y.; Schwab, C.; Sloan, I.H. Quasi-Monte Carlo methods for high-dimensional integration: The standard (weighted Hilbert space) setting and beyond. ANZIAM J. 2011, 53, 1–37.
  10. Dick, J.; Kuo, F.Y.; Sloan, I.H. High-dimensional integration: The quasi-Monte Carlo way. Acta Numer. 2013, 22, 133–288.
  11. Niederreiter, H. Random Number Generation and Quasi-Monte Carlo Methods; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1992.
  12. Korobov, N.M. The approximate computation of multiple integrals. Dokl. Akad. Nauk SSSR 1959, 124, 1207–1210. (In Russian)
  13. Sloan, I.H.; Joe, S. Lattice Methods for Multiple Integration; Oxford University Press: Oxford, UK, 1994.
  14. Wang, X.; Sloan, I.H.; Dick, J. On Korobov lattice rules in weighted spaces. SIAM J. Numer. Anal. 2004, 42, 1760–1779.
  15. Krommer, A.R.; Ueberhuber, C.W. Computational Integration; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1998.
  16. Haber, S. Experiments on optimal coefficients. In Applications of Number Theory to Numerical Analysis; Academic Press: New York, NY, USA, 1972; pp. 11–37.
  17. Guth, P.A.; Kaarnioja, V. Quasi-Monte Carlo for partial differential equations with generalized Gaussian input uncertainty. SIAM J. Numer. Anal. 2025, 63, 1666–1690.
  18. Wang, H.; Wang, X. On the convergence rate of quasi-Monte Carlo method with importance sampling for unbounded functions in RKHS. Appl. Math. Lett. 2025, 160, 109352.
  19. Melnikov, O.; Milz, J. Randomized quasi-Monte Carlo methods for risk-averse stochastic optimization. J. Optim. Theory Appl. 2025, 206, 14.
  20. Hok, J.; Kucherenko, S. The unreasonable effectiveness of randomized quasi-Monte Carlo in option pricing and risk analysis. 2025. Available online: https://www.broda.co.uk/Slides/Risk_RQMC_PCA_Presentation.pdf (accessed on 28 May 2025).
  21. Imai, J.; Tan, K.S. Dimension reduction for quasi-Monte Carlo methods via quadratic regression. Math. Comput. Simul. 2025, 227, 371–390.
  22. Feng, X.; Zhong, H. A fast multilevel dimension iteration algorithm for high dimensional numerical integration. Ann. Math. Sci. Appl. 2023, 8, 427–460.
  23. Niederreiter, H. Existence of good lattice points in the sense of Hlawka. Monatsh. Math. 1978, 86, 203–219.
  24. Niederreiter, H. Quasi-Monte Carlo methods and pseudo-random numbers. Bull. Am. Math. Soc. 1978, 84, 957–1041.
  25. Niederreiter, H.; Winterhof, A. Applied Number Theory; Springer: Cham, Switzerland, 2015.
  26. Korobov, N.M. Properties and calculation of optimal coefficients. Dokl. Akad. Nauk 1960, 132, 1009–1012.
  27. Sloan, I.H.; Reztsov, A. Component-by-component construction of good lattice rules. Math. Comput. 2002, 71, 263–273.
  28. Kritzer, P.; Niederreiter, H.; Pillichshammer, F. Ian Sloan and lattice rules. In Contemporary Computational Mathematics—A Celebration of the 80th Birthday of Ian Sloan; Springer: Cham, Switzerland, 2018; pp. 741–769.
  29. Zhong, H.; Feng, X. An efficient and fast sparse grid algorithm for high dimensional numerical integration. Mathematics 2023, 11, 4191.
Figure 1. Two-dimensional lattice with 81 points. (a) The corresponding generating vector is ( 1 ,   2 ) . (b) The corresponding generating vector is ( 1 ,   4 ) . (c) The corresponding generating vector is ( 1 ,   7 ) . We observe that the distribution of the 81 lattice points strongly depends on the generating vector. The colors in (ac) have no particular meaning, they are used to help viewing those points.
Figure 1. Two-dimensional lattice with 81 points. (a) The corresponding generating vector is ( 1 ,   2 ) . (b) The corresponding generating vector is ( 1 ,   4 ) . (c) The corresponding generating vector is ( 1 ,   7 ) . We observe that the distribution of the 81 lattice points strongly depends on the generating vector. The colors in (ac) have no particular meaning, they are used to help viewing those points.
Mathematics 13 03437 g001
Figure 2. Three-dimensional lattice with 81 points. (a) The corresponding generating vector is ( 1 ,   2   , 4 ) . (b) The corresponding generating vector is ( 1 ,   4 , 16 ) . (c) The corresponding generating vector is ( 1 ,   7 ,   49 ) . Again, we observe that the distribution of the 81 lattice points strongly depends on the generating vector. The colors in (ac) have no particular meaning, they are used to help viewing those points.
Figure 2. Three-dimensional lattice with 81 points. (a) The corresponding generating vector is ( 1 ,   2   , 4 ) . (b) The corresponding generating vector is ( 1 ,   4 , 16 ) . (c) The corresponding generating vector is ( 1 ,   7 ,   49 ) . Again, we observe that the distribution of the 81 lattice points strongly depends on the generating vector. The colors in (ac) have no particular meaning, they are used to help viewing those points.
Mathematics 13 03437 g002
Figure 3. (Left): 81-point lattice with the generating vector (1, 4). (Right): transformed lattice after affine coordinate transformation.
Figure 4. (Left): 161-point rank-one lattice with generating vector (1, 4, 16). (Right): transformed lattice after coordinate transformation.
Figure 5. (Left): 81-point lattice with the generating vector (1, 7). (Middle): transformed lattice. (Right): improved tensor-product grid after adding three points (shown in red).
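Figure 5 illustrates the completion step used by the improved lattice rule (Imp-LR): after the affine transformation, the point set falls three points short of a full tensor-product grid, and those missing points are added. One plausible way to carry out such a completion, sketched below purely as an illustration (this helper is our assumption, not the paper's algorithm), is to collect the distinct coordinate values appearing in each direction and form their Cartesian product.

```python
import numpy as np
from itertools import product

def complete_to_tensor_grid(points, decimals=12):
    """Illustrative completion step: collect the distinct coordinate values
    seen in each direction and return the full tensor-product grid that they
    span. Applied to the transformed lattice of Figure 5, this would add the
    three missing grid points."""
    axes = [np.unique(points[:, j].round(decimals)) for j in range(points.shape[1])]
    return np.array(list(product(*axes)))
```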
Figure 6. CPU time comparison of SLR and MDI-LR simulations: (a) the number of lattice points increases with the dimension; (b) the number of lattice points increases slowly with the dimension.
Figure 7. Comparison of MDI-LR, MDI-SG, and MDI-TP methods.
Figure 8. Performance comparison of algorithm MDI-LR with $n = 1 + 10^d$ and $a = 4, 6, 8, 10, 12, 14, 16$ for computing $I_d(f)$, $I_d(\hat{f})$, and $I_d(\tilde{f})$. (a) $d = 5$, CPU time comparison. (b) $d = 10$, CPU time comparison. (c) $d = 5$, comparison of relative errors. (d) $d = 10$, comparison of relative errors.
Figure 9. Distribution of 100 integration points in the transformed coordinate system when $a = 2, 10, 20$, respectively.
Figure 10. The relationship between the CPU time and parameter N when $d = 5$: (a) $I_d(f)$; (b) $I_d(\hat{f})$; (c) $I_d(\tilde{f})$.
Figure 11. The relationship between the CPU time and parameter N when $d = 10$: (a) $I_d(f)$; (b) $I_d(\hat{f})$; (c) $I_d(\tilde{f})$.
Figure 12. The relationship between the CPU time and dimension d.
Table 1. Relative errors and CPU times of SLR, Improved LR, and MDI-LR simulations with $N = [n^{1/d}]$, $a = N$, $n_1 = n_2 = N$ for approximating $I_2(f)$.

| Total Nodes (n) | SLR Relative Error | SLR CPU Time (s) | Imp-LR Relative Error | Imp-LR CPU Time (s) | MDI-LR Relative Error | MDI-LR CPU Time (s) |
|---|---|---|---|---|---|---|
| 101 | 1.332 × 10^-2 | 0.0422 | 1.218 × 10^-3 | 0.0423 | 1.218 × 10^-3 | 0.0877 |
| 501 | 5.169 × 10^-3 | 0.0567 | 2.520 × 10^-4 | 0.0547 | 2.520 × 10^-4 | 0.3230 |
| 1001 | 4.051 × 10^-3 | 0.0610 | 1.269 × 10^-4 | 0.0657 | 1.269 × 10^-4 | 0.5147 |
| 5001 | 2.570 × 10^-3 | 0.0755 | 2.489 × 10^-5 | 0.0754 | 2.489 × 10^-5 | 1.6242 |
| 10,001 | 2.094 × 10^-4 | 0.0922 | 1.220 × 10^-5 | 0.0921 | 1.220 × 10^-5 | 3.9471 |
| 40,001 | 7.294 × 10^-5 | 0.1782 | 3.050 × 10^-6 | 0.1787 | 3.050 × 10^-6 | 7.0408 |
Table 2. Relative errors and CPU times of SLR, Improved LR, and MDI-LR simulations with $N = [n^{1/d}]$, $a = N$, $n_1 = n_2 = N$ for approximating $I_2(\hat{f})$.

| Total Nodes (n) | SLR Relative Error | SLR CPU Time (s) | Imp-LR Relative Error | Imp-LR CPU Time (s) | MDI-LR Relative Error | MDI-LR CPU Time (s) |
|---|---|---|---|---|---|---|
| 101 | 1.163 × 10^-2 | 0.0415 | 1.072 × 10^-3 | 0.0410 | 1.072 × 10^-3 | 0.0980 |
| 501 | 6.794 × 10^-3 | 0.0539 | 1.399 × 10^-4 | 0.0546 | 1.399 × 10^-4 | 0.3498 |
| 1001 | 3.814 × 10^-3 | 0.0647 | 7.040 × 10^-5 | 0.0653 | 7.040 × 10^-5 | 0.5028 |
| 5001 | 1.858 × 10^-3 | 0.0723 | 1.341 × 10^-5 | 0.0733 | 1.341 × 10^-5 | 1.7212 |
| 10,001 | 1.175 × 10^-4 | 0.0965 | 6.759 × 10^-6 | 0.0945 | 6.759 × 10^-6 | 3.4528 |
| 40,001 | 2.937 × 10^-5 | 0.1386 | 1.689 × 10^-6 | 0.1399 | 1.689 × 10^-6 | 6.1104 |
Table 3. Relative errors and CPU times of SLR, Improved LR, and MDI-LR simulations with $N = [n^{1/d}]$, $a = N$, $n_1 = n_2 = n_3 = N$ for computing $I_3(f)$.

| Total Nodes (n) | SLR Relative Error | SLR CPU Time (s) | Imp-LR Relative Error | Imp-LR CPU Time (s) | MDI-LR Relative Error | MDI-LR CPU Time (s) |
|---|---|---|---|---|---|---|
| 101 | 3.426 × 10^-3 | 0.0574 | 4.985 × 10^-3 | 0.0588 | 4.985 × 10^-3 | 0.0877 |
| 1001 | 6.276 × 10^-3 | 0.0634 | 1.249 × 10^-3 | 0.0654 | 1.249 × 10^-3 | 0.2684 |
| 10,001 | 9.920 × 10^-4 | 0.0833 | 3.124 × 10^-4 | 0.0877 | 3.124 × 10^-4 | 0.6322 |
| 100,001 | 5.717 × 10^-4 | 0.1500 | 5.907 × 10^-5 | 0.1499 | 5.907 × 10^-5 | 2.5866 |
| 1,000,001 | 1.369 × 10^-5 | 1.0589 | 1.249 × 10^-5 | 1.0587 | 1.249 × 10^-5 | 14.737 |
| 10,000,001 | 8.441 × 10^-6 | 9.8969 | 3.124 × 10^-6 | 10.280 | 3.124 × 10^-6 | 91.897 |
Table 4. Relative errors and CPU times of SLR, Improved LR, and MDI-LR simulations with $N = [n^{1/d}]$, $a = N$, $n_1 = n_2 = n_3 = N$ for computing $I_3(\hat{f})$.

| Total Nodes (n) | SLR Relative Error | SLR CPU Time (s) | Imp-LR Relative Error | Imp-LR CPU Time (s) | MDI-LR Relative Error | MDI-LR CPU Time (s) |
|---|---|---|---|---|---|---|
| 101 | 1.866 × 10^-2 | 0.0580 | 1.008 × 10^-3 | 0.0554 | 1.008 × 10^-3 | 0.1366 |
| 1001 | 9.746 × 10^-3 | 0.0628 | 2.739 × 10^-4 | 0.0649 | 2.739 × 10^-4 | 0.3804 |
| 10,001 | 1.001 × 10^-3 | 0.0820 | 6.337 × 10^-5 | 0.0828 | 6.337 × 10^-5 | 1.1032 |
| 100,001 | 7.063 × 10^-4 | 0.1443 | 1.326 × 10^-5 | 0.1557 | 1.326 × 10^-5 | 4.8794 |
| 1,000,001 | 2.211 × 10^-5 | 1.1163 | 2.810 × 10^-6 | 1.2104 | 2.810 × 10^-6 | 20.305 |
| 10,000,001 | 1.650 × 10^-5 | 10.207 | 7.026 × 10^-7 | 10.427 | 7.026 × 10^-7 | 101.22 |
Table 5. Relative errors and CPU times of SLR, Improved LR, and MDI-LR simulations with $N = [n^{1/d}]$, $a = N$, $n_1 = \cdots = n_d = N$ for computing $I_d(f)$. SLR uses $1 + 10^d$ total nodes; Imp-LR and MDI-LR use $1.1 \times 10^d$ total nodes.

| Dimension (d) | SLR Relative Error | SLR CPU Time (s) | Imp-LR Relative Error | Imp-LR CPU Time (s) | MDI-LR Relative Error | MDI-LR CPU Time (s) |
|---|---|---|---|---|---|---|
| 2 | 4.802 × 10^-3 | 0.0622 | 5.398 × 10^-4 | 0.0637 | 5.398 × 10^-4 | 0.1335 |
| 4 | 3.796 × 10^-3 | 0.1068 | 1.131 × 10^-3 | 0.1206 | 1.131 × 10^-3 | 0.5780 |
| 6 | 7.780 × 10^-3 | 1.2450 | 1.723 × 10^-3 | 1.2745 | 1.723 × 10^-3 | 1.2890 |
| 8 | 1.189 × 10^-2 | 124.91 | 2.315 × 10^-3 | 126.85 | 2.315 × 10^-3 | 1.4083 |
| 10 | 1.602 × 10^-2 | 13,084 | 2.908 × 10^-3 | 13,255 | 2.908 × 10^-3 | 3.1418 |
| 11 | 1.809 × 10^-2 | 132,927 | 3.204 × 10^-3 | 141,665 | 3.204 × 10^-3 | 3.8265 |
| 12 | failed | failed | failed | failed | 3.501 × 10^-3 | 4.5919 |
Table 6. Relative errors and CPU times of SLR and MDI-LR simulations with the same number of integration points for computing $I_d(f)$.

| Dimension (d) | Total Nodes (n) | a Value | SLR Relative Error | SLR CPU Time (s) | MDI-LR Relative Error | MDI-LR CPU Time (s) |
|---|---|---|---|---|---|---|
| 2 | 1 + 10^3 | 31 | 4.8020 × 10^-4 | 0.0369 | 6.1474 × 10^-5 | 0.432905 |
| 6 | 1 + 10^6 | 10 | 7.7798 × 10^-3 | 1.2450 | 1.7745 × 10^-3 | 0.790102 |
| 10 | 1 + 10^6 | 4 | 5.3673 × 10^-2 | 1.2453 | 1.8683 × 10^-2 | 0.582487 |
| 14 | 1 + 10^8 | 4 | 7.9282 × 10^-2 | 144.759 | 2.6253 × 10^-2 | 0.536131 |
| 18 | 1 + 10^9 | 3 | 1.5827 × 10^-1 | 1649.59 | 6.1158 × 10^-2 | 0.774606 |
| 22 | 1 + 10^10 | 3 | 2.0007 × 10^-1 | 18,694.04 | 7.5249 × 10^-2 | 0.702708 |
| 26 | 1 + 10^11 | 3 | 2.4341 × 10^-1 | 217,381.41 | 8.9527 × 10^-2 | 0.866122 |
| 30 | 1 + 10^11 | 3 | 2.9009 × 10^-1 | 269,850.87 | 1.0399 × 10^-1 | 1.045107 |
Table 7. Computed results for $I_d(f)$ and $I_d(\hat{f})$ by algorithm MDI-LR. $I_d(f)$ uses $1 \times 8^d$ total nodes; $I_d(\hat{f})$ uses $1 \times 20^d$ total nodes.

| Dimension (d) | a Value | Relative Error ($I_d(f)$) | CPU Time (s) | a Value | Relative Error ($I_d(\hat{f})$) | CPU Time (s) |
|---|---|---|---|---|---|---|
| 10 | 8 | 6.4884 × 10^-3 | 0.4329063 | 20 | 1.6107 × 10^-3 | 0.9851172 |
| 100 | 8 | 6.3022 × 10^-2 | 71.253076 | 20 | 1.6225 × 10^-2 | 11.1203255 |
| 300 | 8 | 1.7740 × 10^-1 | 1856.91018 | 20 | 4.9469 × 10^-2 | 37.0903112 |
| 500 | 8 | 2.7781 × 10^-1 | 8076.92429 | 20 | 8.3801 × 10^-2 | 65.9497657 |
| 700 | 8 | 3.6597 × 10^-1 | 20,969.96162 | 20 | 1.1925 × 10^-1 | 108.989057 |
| 900 | 8 | 4.4337 × 10^-1 | 47,870.50843 | 20 | 1.5587 × 10^-1 | 157.487672 |
| 1000 | 8 | 4.7845 × 10^-1 | 69,991.88017 | 20 | 1.7462 × 10^-1 | 189.132615 |
Table 8. Performance comparison of algorithm MDI-LR with $d = 5, 10$, $a = N$, and $N = 4, 6, 8, 10, 12, 14, 16$ for computing $I_d(f)$.

| N (n) | Korobov Parameter (a) | Relative Error (d = 5) | CPU Time (s) (d = 5) | Relative Error (d = 10) | CPU Time (s) (d = 10) |
|---|---|---|---|---|---|
| 4 (1 + 4^d) | 4 | 8.6248 × 10^-3 | 0.1456465 | 1.8003 × 10^-2 | 0.3336161 |
| 6 (1 + 6^d) | 6 | 3.8967 × 10^-3 | 0.1911801 | 8.0284 × 10^-3 | 0.5690320 |
| 8 (1 + 8^d) | 8 | 2.2145 × 10^-3 | 0.3373442 | 4.5314 × 10^-3 | 0.9552591 |
| 10 (1 + 10^d) | 10 | 1.4271 × 10^-3 | 0.3884146 | 2.9078 × 10^-3 | 1.9385378 |
| 12 (1 + 12^d) | 12 | 9.9601 × 10^-4 | 0.6545521 | 2.0234 × 10^-3 | 3.5639475 |
| 14 (1 + 14^d) | 14 | 7.3448 × 10^-4 | 0.7224777 | 1.4889 × 10^-3 | 6.0036393 |
| 16 (1 + 16^d) | 16 | 5.6396 × 10^-4 | 1.0909097 | 1.1414 × 10^-3 | 8.4313528 |
Table 9. Performance comparison of algorithm MDI-LR with $d = 5, 10$, $a = N$, and $N = 4, 6, 8, 10, 12, 14, 16$ for computing $I_d(\hat{f})$.

| N (n) | Korobov Parameter (a) | Relative Error (d = 5) | CPU Time (s) (d = 5) | Relative Error (d = 10) | CPU Time (s) (d = 10) |
|---|---|---|---|---|---|
| 4 (1 + 4^d) | 4 | 4.9621 × 10^-2 | 0.1323887 | 1.0585 × 10^-1 | 0.2775234 |
| 6 (1 + 6^d) | 6 | 2.2181 × 10^-2 | 0.1955847 | 4.6141 × 10^-2 | 0.4267706 |
| 8 (1 + 8^d) | 8 | 1.2558 × 10^-2 | 0.2689113 | 2.5836 × 10^-2 | 0.5697773 |
| 10 (1 + 10^d) | 10 | 8.0791 × 10^-3 | 0.3227299 | 1.6517 × 10^-2 | 0.7828456 |
| 12 (1 + 12^d) | 12 | 5.6328 × 10^-3 | 0.4056192 | 1.1470 × 10^-2 | 0.9228344 |
| 14 (1 + 14^d) | 14 | 4.1513 × 10^-3 | 0.4940739 | 8.4305 × 10^-3 | 1.0968489 |
| 16 (1 + 16^d) | 16 | 3.1863 × 10^-3 | 0.6079693 | 6.4576 × 10^-3 | 1.2933549 |
Table 10. Performance comparison of algorithm MDI-LR with $d = 5, 10$, $a = N$, and $N = 4, 6, 8, 10, 12, 14, 16$ for computing $I_d(\tilde{f})$.

| N (n) | Korobov Parameter (a) | Relative Error (d = 5) | CPU Time (s) (d = 5) | Relative Error (d = 10) | CPU Time (s) (d = 10) |
|---|---|---|---|---|---|
| 4 (1 + 4^d) | 4 | 1.9003 × 10^-2 | 0.1254485 | 3.9895 × 10^-2 | 0.2460844 |
| 6 (1 + 6^d) | 6 | 8.5331 × 10^-3 | 0.1802281 | 1.7625 × 10^-2 | 0.3613987 |
| 8 (1 + 8^d) | 8 | 4.8390 × 10^-3 | 0.2114595 | 9.9155 × 10^-3 | 0.4414383 |
| 10 (1 + 10^d) | 10 | 3.1153 × 10^-3 | 0.2748469 | 6.3531 × 10^-3 | 0.4892808 |
| 12 (1 + 12^d) | 12 | 2.1729 × 10^-3 | 0.3092816 | 4.4172 × 10^-3 | 0.5859328 |
| 14 (1 + 14^d) | 14 | 1.6018 × 10^-3 | 0.3602077 | 3.2488 × 10^-3 | 0.6783681 |
| 16 (1 + 16^d) | 16 | 1.2296 × 10^-3 | 0.4157161 | 2.4897 × 10^-3 | 0.7819849 |
Table 11. Relationship between the CPU time and parameter N.

| Integrand | a | m | d | Fitting Function | R-Square |
|---|---|---|---|---|---|
| $f(x)$ | N | 1 | 5 | $h_1(N) = 0.007569\,N^{1.772}$ | 0.9687 |
| $\hat{f}(x)$ | N | 1 | 5 | $h_2(N) = 0.02326\,N^{1.165}$ | 0.9920 |
| $\tilde{f}(x)$ | N | 1 | 5 | $h_3(N) = 0.03592\,N^{0.8767}$ | 0.9946 |
| $f(x)$ | N | 1 | 10 | $h_4(N) = 0.002136\,N^{2.992}$ | 0.9968 |
| $\hat{f}(x)$ | N | 1 | 10 | $h_5(N) = 0.05679\,N^{1.125}$ | 0.9984 |
| $\tilde{f}(x)$ | N | 1 | 10 | $h_6(N) = 0.07872\,N^{0.8184}$ | 0.9901 |
Table 12. The relationship between the CPU time and the dimension d.

| Integrand | a | N | m | Fitting Function | R-Square |
|---|---|---|---|---|---|
| $f_1$ | 8 | 8 | 1 | $g_1 = 1.057 \times 10^{-6}\,N^2 d^3$ | 0.9973 |
| $f_1$ | 10 | 10 | 1 | $g_2 = 1.192 \times 10^{-6}\,N^2 d^3$ | 0.9995 |
| $f_1$ | 20 | 20 | 1 | $g_3 = 1.433 \times 10^{-6}\,N^2 d^3$ | 0.9978 |
| $f_2$ | 10 | 10 | 1 | $g_4 = 0.0001774\,N d^{1.611}$ | 0.9983 |
| $f_2$ | 14 | 14 | 1 | $g_5 = 0.003028\,N d^{1.147}$ | 0.9987 |
| $f_2$ | 20 | 20 | 1 | $g_6 = 0.000539\,N d^{1.41}$ | 0.9964 |
| $f_3$ | 8 | 8 | 1 | $g_7 = 7.334 \times 10^{-6}\,N^2 d^3$ | 0.9983 |
| $f_3$ | 10 | 10 | 1 | $g_8 = 9.321 \times 10^{-6}\,N^2 d^3$ | 0.9986 |
| $f_3$ | 14 | 14 | 1 | $g_9 = 1.339 \times 10^{-5}\,N^2 d^3$ | 0.9972 |
| $f_4$ | 10 | 10 | 1 | $g_{10} = 1.164 \times 10^{-6}\,N^2 d^3$ | 0.9988 |
| $f_4$ | 20 | 20 | 1 | $g_{11} = 1.319 \times 10^{-6}\,N^2 d^3$ | 0.9974 |
| $f_5$ | 10 | 10 | 1 | $g_{12} = 6.479 \times 10^{-5}\,N^2 d^{2.557}$ | 0.9996 |
| $f_5$ | 14 | 14 | 1 | $g_{13} = 1.164 \times 10^{-5}\,N^2 d^3$ | 0.9993 |
| $f_6$ | 10 | 10 | 1 | $g_{14} = 1.556 \times 10^{-6}\,N^2 d^3$ | 0.9983 |
| $f_6$ | 20 | 20 | 1 | $g_{15} = 8.328 \times 10^{-6}\,N^2 d^{2.431}$ | 0.9998 |
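The fitting functions and R-square values in Tables 11 and 12 are power-law fits of the measured CPU times, which reduce to linear least squares after taking logarithms. The sketch below is our reconstruction, not the authors' script: it fits $t \approx c\,N^p$ to the $d = 5$ CPU times of MDI-LR from Table 8. Since the published constants may come from a fit in linear rather than logarithmic coordinates, the resulting exponent can differ slightly from the tabulated $h_1(N)$.

```python
import numpy as np

# d = 5 CPU times (s) of algorithm MDI-LR from Table 8, indexed by N
N = np.array([4, 6, 8, 10, 12, 14, 16], dtype=float)
t = np.array([0.1456465, 0.1911801, 0.3373442, 0.3884146,
              0.6545521, 0.7224777, 1.0909097])

# Power-law model t ~ c * N^p becomes linear after taking logs:
# log t = log c + p * log N, solved by ordinary least squares.
p, log_c = np.polyfit(np.log(N), np.log(t), 1)
c = np.exp(log_c)

# Coefficient of determination (R-square) of the log-log fit
residuals = np.log(t) - (log_c + p * np.log(N))
r_square = 1.0 - residuals.var() / np.log(t).var()
print(f"t ~ {c:.4g} * N^{p:.3f}, R^2 = {r_square:.4f}")
```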