Article

Challenging the Curse of Dimensionality in Multidimensional Numerical Integration by Using a Low-Rank Tensor-Train Format

by Boian Alexandrov 1, Gianmarco Manzini 1,*, Erik W. Skau 1, Phan Minh Duc Truong 1 and Radoslav G. Vuchov 2
1 Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
2 Computational Mathematics Department, Sandia National Laboratories, Albuquerque, NM 87185, USA
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(3), 534; https://doi.org/10.3390/math11030534
Submission received: 25 October 2022 / Revised: 13 January 2023 / Accepted: 16 January 2023 / Published: 19 January 2023
(This article belongs to the Special Issue Advanced Numerical Analysis and Scientific Computing)

Abstract:
Numerical integration is a basic step in the implementation of more complex numerical algorithms suitable, for example, for solving ordinary and partial differential equations. The straightforward extension of a one-dimensional integration rule to a multidimensional grid by the tensor product of the spatial directions is deemed to be practically infeasible beyond a relatively small number of dimensions, e.g., three or four. In fact, the computational burden in terms of storage and floating point operations scales exponentially with the number of dimensions. This phenomenon is known as the curse of dimensionality, and it motivated the development of alternative methods such as the Monte Carlo method. The tensor product approach can, however, be very effective for high-dimensional numerical integration if we can resort to an accurate low-rank tensor-train representation of the integrand function. In this work, we discuss this approach and present numerical evidence showing that it is very competitive with the Monte Carlo method in terms of accuracy and computational cost up to several hundred dimensions, provided the integrand function is regular enough and a sufficiently accurate low-rank approximation is available.

1. Introduction

The ability to calculate very high-dimensional integrals is crucial, for example, in quantum mechanics, statistical physics, inverse problems, uncertainty quantification, probability theory, and many other fields in science and engineering (see [1,2,3]).
To calculate such integrals accurately, we need to overcome the curse of dimensionality, i.e., the exponential scaling of the computational complexity, in terms of storage and floating point operations, with the number of dimensions. In the case of numerical integration, we need one function evaluation at each integration node and one floating point operation to accumulate the result, i.e., a multiplication by the quadrature weight and an addition. If a one-dimensional quadrature formula is composed of m nodes and weights, the corresponding tensor product rule in d dimensions is composed of $m^d$ nodes and weights. This exponential scaling in the number of dimensions makes the straightforward approach practically impossible except for very small values of d.
The situation is even worse if we try to decompose the integration domain into subdomains (cells) to improve the resolution or to adapt the cubature rule when the integrand function has low regularity. Partitioning each direction into n sub-intervals leads to a tensor grid with $n^d$ cells and, eventually, to $n^d m^d$ function evaluations and floating point operations. Even if we take only one integration node per cell by setting $m = 1$, as in the midpoint rule, such an approach to numerical integration still remains impractical. A "pedantic" example illustrates why. Setting $d = n = 100$ leads to $10^{200}$ function evaluations and as many floating point operations. Assuming, very optimistically, that a function evaluation costs only one floating point operation, this approach requires about $2\times 10^{188}$ seconds on a teraflop machine, which virtually performs $10^{12}$ floating point operations per second. This time is about $4.58\times 10^{170}$ times the age of the Universe, which has been recently estimated as $4.17\times 10^{17}$ seconds (see Planck Collaboration (2020) [4]).
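The arithmetic behind this estimate is easy to reproduce. The short script below is only a back-of-envelope check of the figures quoted above (the $10^{12}$ flop/s rate and the cost model of one evaluation plus one saxpy per node are the assumptions stated in the text, not measurements):

```python
# Back-of-envelope cost of a full tensor-product midpoint rule with
# d = n = 100, assuming one flop per function evaluation plus one
# saxpy per node, on a machine sustaining 1e12 flop/s (all figures
# are the assumptions stated in the text, not measurements).
d, n, m = 100, 100, 1
nodes = float(n * m) ** d          # (n*m)^d = 1e200 integration nodes
total_flops = 2.0 * nodes          # evaluate f, then multiply-add
seconds = total_flops / 1e12
print(f"nodes = {nodes:.1e}, wall time = {seconds:.1e} s")
```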
A non-exhaustive list of methods for multidimensional numerical integration includes the Monte Carlo method and its variants, such as the quasi-Monte Carlo method [5,6] and the multi-level Monte Carlo (MLMC) method [7]; the sparse grid method [8,9] and the combination technique based on sparse grids [10]; and grid-based approaches such as the lattice method [11,12]. It is also worth mentioning that several software libraries are available in the public domain to address this issue, such as The Sparse Grids Matlab Kit [13], a collection of Matlab functions for high-dimensional quadrature and interpolation based on the combination technique version of sparse grids, and PyApprox [14], which implements methods such as low-rank tensor decomposition, Gaussian processes, and polynomial chaos expansions. A detailed presentation of these methods or a systematic review of the huge literature available on numerical integration is beyond the scope of this paper, and we refer the interested reader to a few of the many textbooks and review papers dealing with these topics, see, e.g., [15,16,17,18] and the references therein.
A breakthrough for integrating a multivariate function on a multidimensional grid is offered by the compressive nature of low-rank tensor formats, see [19,20] for a general introduction to tensor representations and a review. The most commonly used tensor formats include the tensor train (TT) and tensor-train cross approximation [21,22], the quantized tensor train (QTT) [23], the canonical polyadic decomposition (CPD) [24], the Tucker decomposition [25], and the hierarchical Tucker decomposition [26]. TT-based integration has been applied in many fields, such as, just to mention a few, quantum physics and chemistry [27,28], signal processing [29], stochastics, and uncertainty quantification [30]. It is worth mentioning that recent applications of low-rank tensor formats can also be found for solving high-dimensional parabolic PDEs [31] and large ODEs with conservation laws [32]. Using such formats makes it possible to efficiently handle the function evaluations at the integration nodes and the saxpy floating point operations that accumulate the numerical value of the integral (the acronym "saxpy" stands for "scalar alpha x plus y", i.e., multiplication by a scalar number and addition). Numerical integration of high-dimensional singular functions using the tensor-train format on parallel machines is also investigated in references [33,34].
In this work, we mainly focus on the TT format, which we discuss in the next sections, and its application to composite numerical integration. Our major goal is to show that if it is possible to use an accurate low-rank tensor-train format to represent the integrand as a grid function, we can design very accurate and very efficient multivariate cubature formulas from univariate quadrature formulas in an almost straightforward way. To this end, we focus on cubatures built from the Newton-Cotes (trapezoidal and Simpson), Clenshaw-Curtis, and Gauss-Legendre formulas. However, the strategy is very general and can be adopted for all univariate integration formulas. For the sake of comparison, we consider the Monte Carlo method and its publicly available implementation provided by the library GAIL [35].
The outline of the paper is as follows. In Section 2, we discuss the problem of multidimensional numerical integration and the composite approach, where the computational domain is split into a grid of regularly equispaced cells and a univariate quadrature rule is applied inside each cell and in any direction. In Section 3, we review some basic concepts about the tensor-train format. In Section 4, we investigate the performance of our method for the integration of a set of representative functions and assess its performance. In Section 5, we offer our final remarks and discuss possible future work.

2. Multidimensional Numerical Integration

In this section, we introduce the notation of the paper and briefly review the construction of composite cubature formulas through the tensor product of univariate high-order accurate quadrature rules. First, we split the multidimensional domain into the union of hypercells, which are the tensor product of univariate partitions along the space directions. Then, inside each cell, we build the multidimensional cubature as the tensor product of univariate quadrature rules, and we characterize the accuracy of such a numerical approximation by considering the contribution of each direction to the total integration error.

2.1. Tensor Product Construction of Composite Cubature Formulas

Let $f:\Omega\to\mathbb{R}$ be a real-valued function defined on the multidimensional hypercube $\Omega = \bigtimes_{\ell=1}^{d}\Omega^{(\ell)} = [0,1]^d$. We want to design a composite integration algorithm that numerically approximates the multidimensional integral

$$I(f) = \int_{\Omega} f(x_1,\ldots,x_d)\,dx_1\cdots dx_d, \qquad (1)$$

where $x_\ell$ is the position variable along the $\ell$-th direction, and $\ell = 1,\ldots,d$ is the integer index running over all the space directions. To this end, we introduce a family of multidimensional grids $\{\Omega_{\underline{n}}\}$. We uniquely identify each member $\Omega_{\underline{n}}$ of such a family by the array of strictly positive integers $\underline{n} = (n_1,\ldots,n_d)$. Each grid $\Omega_{\underline{n}}$ is the tensor product of d independent partitionings $\Omega^{(\ell)}_{n_\ell}$, $\ell = 1,\ldots,d$, of $[0,1]$ into $n_\ell - 1$ equally spaced subintervals, so that $\Omega_{\underline{n}} = \bigtimes_{\ell=1}^{d}\Omega^{(\ell)}_{n_\ell}$. We let $h_\ell = 1/(n_\ell - 1)$ be the space size in the $\ell$-th direction, i.e., the distance between two consecutive nodes of $\Omega^{(\ell)}_{n_\ell}$. We assume that $h_\ell$ tends to zero uniformly in $\ell$, which implies that $n_\ell$ tends to $\infty$ uniformly in $\ell$. We label the grid nodes of $\Omega_{\underline{n}}$ by using the multi-index $\underline{i} = (i_1,\ldots,i_d)$, where $i_\ell = 1,\ldots,n_\ell$. We let $\mathcal{I}_{\underline{n}}$ denote the set of all admissible multi-indices in accordance with the given $\underline{n}$, i.e., all possible d-uple combinations of positive integers $i_\ell$ between 1 and $n_\ell$ for the multidimensional mesh $\Omega_{\underline{n}}$. The multidimensional grid nodes are given by

$$x_{\underline{i}} = x_{(i_1,\ldots,i_d)} = \big(x^{(1)}_{i_1},\ldots,x^{(d)}_{i_d}\big),$$

where $x^{(\ell)}_{i_\ell} = h_\ell(i_\ell - 1)$ is the position of the $i_\ell$-th node along the $\ell$-th direction, so that, for all $\ell$, it holds that $x^{(\ell)}_{1} = 0$ when $i_\ell = 1$ and $x^{(\ell)}_{n_\ell} = 1$ when $i_\ell = n_\ell$. In view of this definition, every grid $\Omega_{\underline{n}}$ decomposes $\Omega$ into a set of $N_{\underline{n}} = (n_1-1)\cdots(n_d-1)$ smaller hypercubes, the cells, that are isomorphic to $[0,h_1]\times\cdots\times[0,h_d]$. We denote the cell corresponding to the multi-index $\underline{i}$ by $C_{\underline{i}}$, so that the "$\underline{i}$-th" cell is formally given by

$$C_{\underline{i}} = \big[x^{(1)}_{i_1},\,x^{(1)}_{i_1+1}\big]\times\cdots\times\big[x^{(d)}_{i_d},\,x^{(d)}_{i_d+1}\big].$$

Every mesh $\Omega_{\underline{n}} = \{C_{\underline{i}}\}_{\underline{i}\in\mathcal{I}_{\underline{n}}}$ forms a finite cover of $\Omega$, i.e., $\overline{\Omega} = \cup_{\underline{i}\in\mathcal{I}_{\underline{n}}} C_{\underline{i}}$, and the mesh cells are nonoverlapping in the sense that the d-dimensional Lebesgue measure (hypervolume) of the intersection of any possible pair of distinct cells is zero, i.e., $|C_{\underline{i}}\cap C_{\underline{j}}| = 0$ for all $\underline{i},\underline{j}\in\mathcal{I}_{\underline{n}}$, $\underline{i}\ne\underline{j}$. Therefore, we can reformulate $I(f)$ in (1) as the sum of cell subintegrals,

$$I(f) = \sum_{\underline{i}\in\mathcal{I}_{\underline{n}}} I_{C_{\underline{i}}}(f) \qquad\text{where}\qquad I_{C_{\underline{i}}}(f) = \int_{C_{\underline{i}}} f(x_1,\ldots,x_d)\,dx_1\cdots dx_d,$$
and devise a computer algorithm for the composite numerical integration over $\Omega_{\underline{n}}$ by assembling local multidimensional cubature rules that approximate all the integrals $I_{C_{\underline{i}}}(f)$. In the literature, the word composite (or, sometimes, compound) normally refers to the application of one-dimensional Newton-Cotes formulas to the subintegrals obtained after splitting a univariate integration domain into a finite number of nonoverlapping subdomains. Hereafter, we shall use this word to refer to a generic integration algorithm on a multidimensional tensor grid. We will consider composite formulas obtained from the Newton-Cotes as well as the Gauss-Legendre and Clenshaw-Curtis quadratures. For the sake of exposition, we assume that the quadrature rule is the same in all univariate subintervals $[x^{(\ell)}_{i_\ell}, x^{(\ell)}_{i_\ell+1}]$. Different integration formulas could be considered in each subinterval at the price of a more complex description of the algorithm.
Let $\mathcal{Q}^m = \{(\xi_q,\eta_q)\}_{q=1}^{m}$ be an m-point quadrature rule with nodes $\xi_q$ and weights $\eta_q$ for the approximation of a line integral over the reference segment $[-1,1]$. We do not require (nor exclude) that the extremal points $\pm 1$ are part of the integration nodes. We map the reference domain $[-1,1]$ onto all the d one-dimensional intervals $[x^{(\ell)}_{i_\ell}, x^{(\ell)}_{i_\ell+1}]$ forming a given cell $C_{\underline{i}}$, identified by the multi-index $\underline{i}$. Accordingly, the $q_\ell$-th reference node with coordinate $\xi_{q_\ell}\in[-1,1]$ is mapped onto the node with $\ell$-th coordinate equal to $x^{(\ell)}_{i_\ell,q_\ell} = x^{(\ell)}_{i_\ell} + \big(x^{(\ell)}_{i_\ell+1} - x^{(\ell)}_{i_\ell}\big)(\xi_{q_\ell}+1)/2$, and the corresponding weight becomes $w^{(\ell)}_{q_\ell} = \eta_{q_\ell}\big(x^{(\ell)}_{i_\ell+1} - x^{(\ell)}_{i_\ell}\big)/2$. The tensor product of the d directions provides a local subgrid of quadrature nodes inside $C_{\underline{i}}$. We denote these subgrid nodes by using the multi-index $\underline{q} = (q_1,\ldots,q_d)$, so that

$$x_{\underline{i},\underline{q}} = \big(x^{(1)}_{i_1,q_1},\ldots,x^{(d)}_{i_d,q_d}\big).$$

The multi-index $\underline{i}$ identifies the $\underline{i}$-th cell, i.e., $C_{\underline{i}}$, and the multi-index $\underline{q} = (q_1,\ldots,q_d)$ identifies the quadrature node inside $C_{\underline{i}}$. In fact, the index $q_\ell = 1,\ldots,m$ labels the $q_\ell$-th node of the m-node univariate quadrature rule along the $\ell$-th space direction, denoted by the superscript $(\ell)$ in $x^{(\ell)}_{i_\ell,q_\ell}$. We denote the set of all possible values of $\underline{q}$ by $\mathcal{Q}^m_d$. The nodes $x_{\underline{i},\underline{q}}$ are all located in the interior of $C_{\underline{i}}$ if the corresponding reference nodes are inside $(-1,1)$, as in the case of the Gauss-Legendre integration rules. Figure 1 illustrates the notation for the case with $d = 2$.

The integration weight associated with the $\underline{q}$-th node of the $\underline{i}$-th cell is the product of the one-dimensional weights identified inside $C_{\underline{i}}$ by $\underline{q} = (q_1,\ldots,q_d)$, where $q_\ell = 1,\ldots,m$ for $\ell = 1,\ldots,d$. We have that

$$w_{\underline{q}} = w^{(1)}_{q_1}\cdots w^{(d)}_{q_d}, \qquad (3)$$

where $w^{(\ell)}_{q_\ell}$ is the weight associated with the $q_\ell$-th node along the (univariate) $\ell$-th direction of cell $C_{\underline{i}}$. In Equation (3), we do not need to label the weights with the cell index $\underline{i}$ since we assumed that the integration formula is the same in every cell, and so are the weights.
Finally, our composite cubature rule on the multidimensional mesh Ω n ̲ is obtained by applying the local cubature rule to all cells C i ̲ of grid Ω n ̲ and adding the results, so that
$$Q^m_{\underline{n}}(f) := \sum_{\underline{i}\in\mathcal{I}_{\underline{n}}}\ \sum_{\underline{q}\in\mathcal{Q}^m_d} w_{\underline{q}}\, f(x_{\underline{i},\underline{q}}) = \sum_{\underline{i}\in\mathcal{I}_{\underline{n}}}\ \sum_{q_1=1}^{m}\cdots\sum_{q_d=1}^{m} w^{(1)}_{q_1}\cdots w^{(d)}_{q_d}\, f\big(x^{(1)}_{i_1,q_1},\ldots,x^{(d)}_{i_d,q_d}\big), \qquad (4)$$
where we take the summation over all the possible instances of the multi-indices i ̲ and q ̲ .
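For small d, formula (4) can be assembled directly. The following sketch is a minimal NumPy illustration of this tensor-product construction (all function and parameter names are ours); it forms the full set of nodes explicitly, which is precisely what becomes infeasible as d grows:

```python
import numpy as np
from itertools import product

def composite_gauss_1d(N, m):
    """Composite m-point Gauss-Legendre nodes/weights on [0,1] with N equal cells."""
    xi, eta = np.polynomial.legendre.leggauss(m)       # reference rule on [-1, 1]
    h = 1.0 / N
    nodes = (h * np.arange(N)[:, None] + 0.5 * h * (xi + 1.0)).ravel()
    weights = np.tile(0.5 * h * eta, N)
    return nodes, weights

def tensor_product_cubature(f, d, N, m):
    """Evaluate the composite rule (4) by an explicit tensor product (small d only)."""
    x1, w1 = composite_gauss_1d(N, m)
    total = 0.0
    for idx in product(range(x1.size), repeat=d):       # all multi-indices (i, q)
        total += np.prod(w1[list(idx)]) * f(x1[list(idx)])
    return total

# Toy check: f(x) = prod_l x_l on [0,1]^3 has exact integral (1/2)^3 = 0.125.
print(tensor_product_cubature(lambda x: np.prod(x), d=3, N=4, m=2))
```

With the 2-point Gauss-Legendre rule the toy integrand, a product of linear factors, is integrated exactly up to round-off.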

2.2. Accuracy of Multidimensional Cubature Formulas on the Reference Cell

The numerical integration error that characterizes the composite numerical integration formula with $\underline{n}$ partitionings of $\Omega$ reads as:

$$E_{\underline{n}}(f) := I(f) - Q^m_{\underline{n}}(f).$$
An estimate of such an error is fundamental to assess the accuracy of our cubature formulas. We can estimate the accuracy of a multidimensional numerical integration rule if we know the accuracy of the univariate quadrature rule used in each space direction of the tensor product construction. Let $\hat\Omega = [-1,1]^d$ be the multidimensional reference cell, and $\xi = (\xi_1,\ldots,\xi_d)^T\in\mathbb{R}^d$, with $-1\le\xi_\ell\le 1$ for $\ell = 1,\ldots,d$, the multidimensional position vector in $\hat\Omega$. For all integers $\ell = 2,\ldots,d$, we consider the notation $\xi^{(1,\ell-1)} = (\xi_1,\ldots,\xi_{\ell-1})$ and $\xi^{(\ell,d)} = (\xi_\ell,\ldots,\xi_d)$, with the convention that $\xi^{(1,1)} = (\xi_1)$ and $\xi^{(d,d)} = (\xi_d)$. Using this notation, we introduce the partition $\xi = (\xi_1,\ldots,\xi_{\ell-1},\xi_\ell,\ldots,\xi_d) = (\xi^{(1,\ell-1)},\xi^{(\ell,d)})$. Similarly, we introduce the notation $\underline{i}^{(1,\ell-1)} = (i_1,\ldots,i_{\ell-1})$ and $\underline{i}^{(\ell,d)} = (i_\ell,\ldots,i_d)$. Again, for consistency, we say that $\underline{i}^{(1,1)} = (i_1)$, $\underline{i}^{(d,d)} = (i_d)$, and we split the d-dimensional multi-index $\underline{i}$ as $\underline{i} = (i_1,\ldots,i_{\ell-1},i_\ell,\ldots,i_d) = (\underline{i}^{(1,\ell-1)},\underline{i}^{(\ell,d)})$. Next, we consider the reduced domain of integration $\hat\Omega^{(\ell,d)} = [-1,1]^{d-(\ell-1)}\subset\mathbb{R}^{d-(\ell-1)}$, which contains all the vectors $\xi^{(\ell,d)}$ such that $(\mathbf{0},\xi^{(\ell,d)})\in\hat\Omega$, and the reduced set of integration indices $\mathcal{I}^{\ell,d}$ that contains all the $(d-(\ell-1))$-sized multi-indices of the form $\underline{i}^{(\ell,d)} = (i_\ell,\ldots,i_d)$. Finally, we set $dV(\xi^{(1,\ell-1)}) = d\xi_1\cdots d\xi_{\ell-1}$, and introduce the partial numerical integration formula for the function $\hat f:\hat\Omega\to\mathbb{R}$:

$$Q^{(\ell,d)}(\hat f)(\xi^{(1,\ell-1)}) = \sum_{q_\ell=1}^{m}\cdots\sum_{q_d=1}^{m} \eta^{(\ell)}_{q_\ell}\cdots\eta^{(d)}_{q_d}\, \hat f\big(\xi^{(1,\ell-1)},\xi^{(\ell)}_{q_\ell},\ldots,\xi^{(d)}_{q_d}\big),$$

where we apply the quadrature formula in all directions from $\ell$ to d. Finally, we define the partial cubature rule that integrates $\hat f$ exactly on the first $\ell-1$ directions and numerically along the remaining ones, from $\ell$ to d, so that for $1 < \ell \le d$ we write that

$$I_{\ell-1}(\hat f) := \int_{\hat\Omega^{(1,\ell-1)}} Q^{(\ell,d)}(\hat f)(\xi^{(1,\ell-1)})\, dV(\xi^{(1,\ell-1)}).$$

We complete this definition by setting $I_0(\hat f) = Q^m_{\underline{n}}(\hat f)$ and $I_d(\hat f) = I(\hat f)$. To ease the notation, hereafter we will write $I_\ell$ instead of $I_\ell(\hat f)$. The integration error on $\hat\Omega$ is the telescopic sum

$$\hat E(\hat f) := I_d - I_0 = \sum_{\ell=1}^{d}\big(I_\ell - I_{\ell-1}\big) \le \sum_{\ell=1}^{d}\big|I_\ell - I_{\ell-1}\big|.$$

Since it holds that $\xi^{(1,\ell)} = (\xi^{(1,\ell-1)},\xi_\ell)$ and $dV(\xi^{(1,\ell)}) = dV(\xi^{(1,\ell-1)})\,d\xi_\ell$, we split the integral over $\hat\Omega^{(1,\ell)} = \hat\Omega^{(1,\ell-1)}\times[-1,1]$ as follows

$$\int_{\hat\Omega^{(1,\ell)}} dV(\xi^{(1,\ell)}) = \int_{\hat\Omega^{(1,\ell-1)}\times[-1,1]} dV(\xi^{(1,\ell-1)})\, d\xi_\ell = \int_{\hat\Omega^{(1,\ell-1)}}\left(\int_{\xi_\ell=-1}^{\xi_\ell=1} d\xi_\ell\right) dV(\xi^{(1,\ell-1)}).$$
Using this splitting, we find that
$$\begin{aligned}
I_\ell - I_{\ell-1} ={}& \int_{\hat\Omega^{(1,\ell)}} Q^{(\ell+1,d)}(\hat f)(\xi^{(1,\ell)})\, dV(\xi^{(1,\ell)}) - \int_{\hat\Omega^{(1,\ell-1)}} Q^{(\ell,d)}(\hat f)(\xi^{(1,\ell-1)})\, dV(\xi^{(1,\ell-1)}) \\
={}& \sum_{q_{\ell+1}=1}^{m}\cdots\sum_{q_d=1}^{m} \eta^{(\ell+1)}_{q_{\ell+1}}\cdots\eta^{(d)}_{q_d} \int_{\hat\Omega^{(1,\ell)}} \hat f\big(\xi^{(1,\ell)},\xi^{(\ell+1)}_{q_{\ell+1}},\ldots,\xi^{(d)}_{q_d}\big)\, dV(\xi^{(1,\ell)}) \\
&- \sum_{q_{\ell}=1}^{m}\cdots\sum_{q_d=1}^{m} \eta^{(\ell)}_{q_{\ell}}\cdots\eta^{(d)}_{q_d} \int_{\hat\Omega^{(1,\ell-1)}} \hat f\big(\xi^{(1,\ell-1)},\xi^{(\ell)}_{q_{\ell}},\ldots,\xi^{(d)}_{q_d}\big)\, dV(\xi^{(1,\ell-1)}) \\
={}& \sum_{q_{\ell+1}=1}^{m}\cdots\sum_{q_d=1}^{m} \eta^{(\ell+1)}_{q_{\ell+1}}\cdots\eta^{(d)}_{q_d} \int_{\hat\Omega^{(1,\ell-1)}} \bigg[ \int_{[-1,1]} \hat f\big(\xi^{(1,\ell-1)},\xi_\ell,\xi^{(\ell+1)}_{q_{\ell+1}},\ldots,\xi^{(d)}_{q_d}\big)\, d\xi_\ell \\
&\qquad\qquad - \sum_{q_\ell=1}^{m} \eta^{(\ell)}_{q_\ell}\, \hat f\big(\xi^{(1,\ell-1)},\xi^{(\ell)}_{q_\ell},\xi^{(\ell+1)}_{q_{\ell+1}},\ldots,\xi^{(d)}_{q_d}\big) \bigg]\, dV(\xi^{(1,\ell-1)}).
\end{aligned}$$

The univariate quadrature rule $\{(\xi_q,\eta_q)\}_{q=1}^{m}$ is applied in the $\ell$-th direction to the integral inside the brackets in the last step. Let $\hat E_\ell(\hat f)$ be an upper bound on the integration error along direction $\ell$, clearly dependent on the regularity of $\hat f$, so that

$$\left|\, \int_{[-1,1]} \hat f\big(\xi^{(1,\ell-1)},\xi_\ell,\xi^{(\ell+1)}_{q_{\ell+1}},\ldots,\xi^{(d)}_{q_d}\big)\, d\xi_\ell - \sum_{q_\ell=1}^{m} \eta^{(\ell)}_{q_\ell}\, \hat f\big(\xi^{(1,\ell-1)},\xi^{(\ell)}_{q_\ell},\xi^{(\ell+1)}_{q_{\ell+1}},\ldots,\xi^{(d)}_{q_d}\big) \,\right| \le \hat E_\ell(\hat f),$$

for all points $\xi^{(1,\ell-1)}\in\hat\Omega^{(1,\ell-1)}$ and all sets of quadrature nodes $\xi^{(\ell+1)}_{q_{\ell+1}},\ldots,\xi^{(d)}_{q_d}$ along the remaining $(d-\ell)$ directions. Since the quadrature weights are positive (see Remark 1 below), we find that

$$\big|I_\ell - I_{\ell-1}\big| \le \hat E_\ell(\hat f)\, \sum_{q_{\ell+1}=1}^{m}\cdots\sum_{q_d=1}^{m} \eta^{(\ell+1)}_{q_{\ell+1}}\cdots\eta^{(d)}_{q_d} \int_{\hat\Omega^{(1,\ell-1)}} dV(\xi^{(1,\ell-1)}) = 2^{\,d-1}\, \hat E_\ell(\hat f),$$

since for $t = \ell+1,\ldots,d$ we have the $(d-\ell)$ summations $\sum_{q_t=1}^{m}\eta^{(t)}_{q_t} = |[-1,1]| = 2$, and it holds that

$$\int_{\hat\Omega^{(1,\ell-1)}} dV(\xi^{(1,\ell-1)}) = \big|\hat\Omega^{(1,\ell-1)}\big| = 2^{\,\ell-1},$$

$\big|\hat\Omega^{(1,\ell-1)}\big|$ being the $(\ell-1)$-dimensional measure of $\hat\Omega^{(1,\ell-1)}$ and $|[-1,1]| = 2$ the one-dimensional measure of the reference interval $[-1,1]$. An upper bound on the error on the multidimensional reference domain $\hat\Omega$ is thus given by

$$\big|\hat E(\hat f)\big| \le \sum_{\ell=1}^{d}\big|I_\ell - I_{\ell-1}\big| \le 2^{\,d-1}\sum_{\ell=1}^{d}\hat E_\ell(\hat f). \qquad (6)$$
In view of (6), we only need to know $\hat E_\ell(\hat f)$ for $\ell = 1,\ldots,d$, i.e., the accuracy of the univariate quadrature in every direction, to estimate the error of the multidimensional cubature formula. Let $\hat E_{\max}(\hat f) = \max_\ell \hat E_\ell(\hat f)$. Then, we find that

$$\big|\hat E(\hat f)\big| \le C(d)\,\hat E_{\max}(\hat f),$$
where the constant C depends on d but is independent of m.
Remark 1
(Weight positivity). The weights of the Clenshaw-Curtis quadratures can be computed using the formulas given in [36] (Equations (4) and (5)), and it is immediate to deduce that they are all positive. A straightforward argument allows us to prove that the weights of the Gauss-Legendre quadratures must also be strictly positive. Consider the polynomial

$$p_i(\xi) = \prod_{\substack{1\le j\le m\\ j\ne i}} (\xi - \xi_j)^2.$$

Its integral over $[-1,1]$ must be strictly positive, independently of the number of points m, since $p_i(\xi)\ge 0$ for all $\xi\in[-1,1]$. The m-point Gauss-Legendre quadrature rule $\mathcal{Q}^m$ integrates $p_i(\xi)$ exactly since $p_i(\xi)$ is a polynomial of degree $2(m-1)$. Noting that $p_i(\xi_j) = 0$ for all $j\ne i$ by construction, we find that

$$0 < \int_{-1}^{1} p_i(\xi)\, d\xi = \sum_{j=1}^{m}\eta_j\, p_i(\xi_j) = \eta_i\, p_i(\xi_i).$$

This relation implies that $\eta_i > 0$ since $p_i(\xi_i) > 0$, and the positivity of all the integration weights follows by repeating this argument for all i from 1 to m. The weights of the Clenshaw-Curtis ($m = 2, 3, 4$) and Gauss-Legendre ($m = 1, 2, 3$) formulas used for the numerical experiments of Section 4 are reported in Appendix A.
Finally, we note that the weights of the two Newton-Cotes formulas used in this work, i.e., the trapezoidal and Simpson quadrature rules, are also positive. However, weight positivity is not a property of the whole Newton-Cotes family, so the argument discussed in this section applies to the trapezoidal and Simpson rules but not to all the Newton-Cotes formulas.
For the cubature families built upon the Newton-Cotes (trapezoidal and Simpson), Clenshaw-Curtis, and Gauss-Legendre formulas, the upper bound $\hat E_\ell(\hat f)$ depends on the number of quadrature nodes m and on the regularity of $\hat f$. We summarize these results, which are easily available from the literature (see [16,17,18,37]), in the following theorems. We assume that the univariate integrand function $\hat f(\xi)$ and its derivatives up to some given order $\nu > 0$ have (at least) the regularity that is mentioned in each theorem's assertion.
Theorem 1
(Newton-Cotes formulas for m = 2 , 3 ).
  • Let $\hat f\in C^2(-1,1)$; then, the Newton-Cotes formula with two integration nodes ($m = 2$, trapezoidal rule) satisfies
    $$\hat E(\hat f) \le \tfrac{2}{3}\,\big|\hat f^{(2)}(\zeta)\big|$$
    for some point $\zeta\in(-1,1)$.
  • Let $\hat f\in C^4(-1,1)$; then, the Newton-Cotes formula with three integration nodes ($m = 3$, Simpson rule) satisfies
    $$\hat E(\hat f) \le \tfrac{16}{45}\,\big|\hat f^{(4)}(\zeta)\big|$$
    for some point $\zeta\in(-1,1)$.
Proof. 
The theorem statements follow from [37] (Equations (9.12) and (9.16)), by setting $h = 2$ for the size of the reference one-dimensional integration cell $(-1,1)$. □
Theorem 2
(Clenshaw-Curtis formulas). Let the univariate integrand function $\hat f(\xi)$ and its derivatives up to $\hat f^{(\nu-1)}(\xi)$ be absolutely continuous on the integration domain $[-1,1]$, and let the $\nu$-th derivative $\hat f^{(\nu)}(\xi)$ have bounded variation $V(\hat f^{(\nu)})$ for some integer number $\nu\ge 1$. Then, the m-point Clenshaw-Curtis quadrature applied to $\hat f$ on the integration domain $[-1,1]$ satisfies

$$\hat E(\hat f) \le \frac{C\,V(\hat f^{(\nu)})}{\pi\,\nu\,(m-1-\nu)^{\nu}} \qquad\text{for } m > 1+\nu,$$
where C is a constant independent of m.
Proof. 
The theorem statement is adapted from [38] (Theorem 5.1) and the first statement of [18] (Theorem 19.4) by setting the number of quadrature nodes to m. □
Theorem 3
(Gauss-Legendre formulas). Let the univariate integrand function $\hat f(\xi)$ and its derivatives up to $\hat f^{(\nu-1)}(\xi)$ be absolutely continuous on the integration domain $[-1,1]$, and let the $\nu$-th derivative $\hat f^{(\nu)}(\xi)$ have bounded variation $V(\hat f^{(\nu)})$ for some integer number $\nu\ge 1$. Then, the m-point Gauss-Legendre quadrature applied to $\hat f$ on the integration interval $[-1,1]$ satisfies

$$\hat E(\hat f) \le \frac{C\,V(\hat f^{(\nu)})}{\pi\,\nu\,(m-2\nu-2)^{2\nu+1}} \qquad\text{for } m > 2\nu+2,$$
where C is a constant independent of m.
Proof. 
The theorem statement is adapted from [38] Theorem 5.1 and the second statement of [18] Theorem 19.4 by setting the number of quadrature nodes to m. □

2.3. Accuracy of Composite Cubature Formulas for Multidimensional Numerical Integration

Applying a scaling argument to a local estimate of the approximation error and summing over all the mesh cells, we obtain the upper bounds on the integration error of the composite schemes. The error estimates depend on the regularity of $f(\mathbf{x})$. We assume that f in each formula has (at least) the regularity required by the infinity norm $\|\cdot\|_{\infty}$ appearing on its right-hand side. We denote again the partial derivatives of order $\nu$ of the scalar function f by $f^{(\nu)}$.
Theorem 4 (Convergence of composite quadrature formulas).
Let $f:\Omega\to\mathbb{R}$, and let $E_{\underline{n}}(f) := I(f) - Q^m_{\underline{n}}(f)$. Under the same assumptions of Theorem 1, it holds that
  • Composite Newton-Cotes formula for $m = 2$ (trapezoidal rule): $\big|E_{\underline{n}}(f)\big| \le C\, h^{2}\, \big\|f^{(2)}\big\|_{\infty}$;
  • Composite Newton-Cotes formula for $m = 3$ (Simpson rule): $\big|E_{\underline{n}}(f)\big| \le C\, h^{4}\, \big\|f^{(4)}\big\|_{\infty}$.
Moreover, under the assumption that $f\in C^{m}$, it holds that
  • Composite Clenshaw-Curtis formulas for $m\ge 2$: $\big|E_{\underline{n}}(f)\big| \le C\, h^{m}\, \big\|f^{(m)}\big\|_{\infty}$;
and under the assumption that $f\in C^{2m}$, it holds that
  • Composite Gauss-Legendre formulas for $m\ge 2$: $\big|E_{\underline{n}}(f)\big| \le C\, h^{2m}\, \big\|f^{(2m)}\big\|_{\infty}$.
The positive constant factor C is independent of $\underline{n}$ (and, therefore, of the mesh size parameter h), but it can depend on the number of dimensions d.
Proof. 
The theorem statements follow from an interpolation error estimate and a scaling argument. We split the one-dimensional domain of integration $[0,1]$ into N equispaced subintervals $[x_k, x_k+h]$ of size $h = 1/N$, so that $x_0 = 0$ and $x_N = 1$. Accordingly, we decompose the integral of $f(x)$ and its numerical approximation via the m-node quadrature rule as the sum of local contributions over the N subintervals:

$$I(f) := \int_0^1 f(x)\,dx = \sum_{k=0}^{N-1}\int_{x_k}^{x_k+h} f(x)\,dx = \sum_{k=0}^{N-1} I_k(f),$$

$$Q^m(f) := \sum_{k=0}^{N-1}\sum_{q=1}^{m} w_q\, f(x_{k,q}) = \sum_{k=0}^{N-1} Q^m_k(f).$$

Then, we remap every subinterval $[x_k, x_k+h]$ onto the reference interval $[-1,1]$. Thus, for all $x\in[x_k,x_k+h]$ and $\xi\in[-1,1]$ such that $x(\xi) = x_k + \frac{h}{2}(\xi+1)$, it holds that $f(x(\xi)) = \hat f_k(\xi)$ (the subindex "k" in the symbol "$\hat f_k$" indicates that this is the image of the restriction of f to $[x_k, x_k+h]$ through the coordinate mapping $x(\xi)$). For any positive integer $\nu$, a straightforward calculation shows that

$$\frac{\partial^{\nu}\hat f_k(\xi)}{\partial\xi^{\nu}} = \frac{\partial^{\nu} f(x)}{\partial x^{\nu}}\left(\frac{\partial x(\xi)}{\partial\xi}\right)^{\nu} = \left(\frac{h}{2}\right)^{\nu}\frac{\partial^{\nu} f(x)}{\partial x^{\nu}}. \qquad (14)$$
We rewrite the exact and approximate integration formulas on the reference interval as follows:
$$I_k(f) = \frac{h}{2}\int_{-1}^{1}\hat f_k(\xi)\,d\xi =: \frac{h}{2}\,\hat I(\hat f_k) \qquad\text{and}\qquad Q^m_k(f) = \frac{h}{2}\sum_{q}\eta_q\,\hat f_k(\xi_q) =: \frac{h}{2}\,\hat Q^m(\hat f_k),$$

where the summation is taken over all the quadrature nodes (and using obvious definitions of the functionals $\hat I(\cdot)$ and $\hat Q^m(\cdot)$). We bound the error on the k-th cell as follows

$$\big|E_k(f)\big| := \big|I_k(f) - Q^m_k(f)\big| = \frac{h}{2}\,\big|\hat I(\hat f_k) - \hat Q^m(\hat f_k)\big| =: \frac{h}{2}\,\big|\hat E(\hat f_k)\big|.$$

To estimate $\hat E(\hat f_k)$, we introduce the polynomial interpolation $\mathcal{I}^m\hat f_k(\xi)$ of the integrand function $\hat f_k(\xi)$ at the m quadrature nodes on the reference interval $[-1,1]$. This interpolation is the unique polynomial of degree $m-1$ in $\xi$ that takes the same values as $\hat f_k$ at the m quadrature nodes $\xi_q$, i.e., $\hat f_k(\xi_q) = \mathcal{I}^m\hat f_k(\xi_q)$ for $q = 1,\ldots,m$. If we apply $\hat Q^m$ to $\hat f_k$ and $\mathcal{I}^m\hat f_k$, we obtain the same integral approximation because the quadrature rule depends only on the quadrature node values $\hat f_k(\xi_q) = \mathcal{I}^m\hat f_k(\xi_q)$. Moreover, the quadrature rule $\hat Q^m$ is of interpolatory type and is exact on all polynomials of degree (up to) $m-1$. Therefore, we find that

$$\hat Q^m(\hat f_k) = \hat Q^m\big(\mathcal{I}^m\hat f_k\big) = \int_{-1}^{1}\mathcal{I}^m\hat f_k(\xi)\,d\xi.$$
Using the formula above, we immediately find that
$$\big|\hat E(\hat f_k)\big| = \left|\int_{-1}^{1}\hat f_k(\xi)\,d\xi - \hat Q^m(\hat f_k)\right| = \left|\int_{-1}^{1}\big(\hat f_k(\xi) - \mathcal{I}^m\hat f_k(\xi)\big)\,d\xi\right| \le \int_{-1}^{1}\big|\hat f_k(\xi) - \mathcal{I}^m\hat f_k(\xi)\big|\,d\xi \le \sup_{\xi\in(-1,1)}\big|\hat f_k(\xi) - \mathcal{I}^m\hat f_k(\xi)\big| \int_{-1}^{1} d\xi = 2\,\big\|\hat f_k - \mathcal{I}^m\hat f_k\big\|_{\infty,(-1,1)},$$

where $\|\hat f_k - \mathcal{I}^m\hat f_k\|_{\infty,(-1,1)}$ is the interpolation error over the reference cell $(-1,1)$. The regularity assumptions of the theorem imply that $\hat f_k$ is $\nu$ times continuously differentiable for some integer $\nu > 0$, whose value changes according to the quadrature formula. From standard arguments of polynomial interpolation theory, we know that $\|\hat f_k - \mathcal{I}^m\hat f_k\|_{\infty,(-1,1)} \le C\,\|\hat f_k^{(\nu)}\|_{\infty,(-1,1)}$, where $\|\hat f_k^{(\nu)}\|_{\infty,(-1,1)}$ denotes the $\infty$-norm of $\hat f_k^{(\nu)}(\xi)$, the $\nu$-th derivative of $\hat f_k(\xi)$ on $(-1,1)$, and C is a real, positive constant that may depend on the number of quadrature nodes m. Using the derivative scaling (14), we find that

$$\big|\hat E(\hat f_k)\big| \le C\,\big\|\hat f_k^{(\nu)}\big\|_{\infty,(-1,1)} \le C\, h^{\nu}\,\big\|f^{(\nu)}\big\|_{\infty,k},$$

where $\|f^{(\nu)}\|_{\infty,k} = \sup_{x\in(x_k,x_k+h)}|f^{(\nu)}(x)|$ denotes the $\infty$-norm of $f^{(\nu)}(x)$ on $(x_k,x_k+h)$. Here, C is a constant independent of h, but that may still depend on the number of quadrature nodes m. Summing over all the cells and noting that $hN = 1$, we find the following estimate for the integration error in each direction:

$$\big|E(f)\big| = \Big|\sum_k E_k(f)\Big| \le N\,\max_k\big|E_k(f)\big| \le \frac{h}{2}\,N\,\max_k\big|\hat E(\hat f_k)\big| \le \frac{h}{2}\,N\, C\, h^{\nu}\,\max_k\big\|f^{(\nu)}\big\|_{\infty,k} \le C\, h^{\nu}\,\big\|f^{(\nu)}\big\|_{\infty}.$$
The theorem’s assertions follow on first setting ν = 2 for the trapezoidal rule and ν = 4 for the Simpson rule (in accordance with Theorem 1), and setting ν = m for the Clenshaw-Curtis formulas and ν = 2 m for the Gauss-Legendre formula (in accordance with the theorem’s regularity assumptions), and then taking the maximum over all the directions. □
Remark 2.
The estimate for the Gauss-Legendre formulas can also be derived in a straightforward way by using the formula for “ R n ” in [39] (Example 5.2, page 146) and setting directly “ b a = h ” (according to the notation used therein). Note, however, that the derivation of such a formula is not presented in [39].

3. Low-Rank TT Cross Approximation and Numerical Integration of a Multidimensional Function

In this section, we briefly review the low-rank tensor-train approximation of a multidimensional grid function and its application to numerical integration. An extensive presentation and discussion of these concepts and algorithms can be found in [21,22].

3.1. Tensor-Train Decomposition: Basic Definitions and Properties

A d-order tensor $A\in\mathbb{R}^{n_1\times\cdots\times n_d}$ is a multidimensional array whose entries are addressed by d indices as

$$A := \big(A(i_1,\ldots,i_d)\big), \qquad i_\ell = 1,\ldots,n_\ell,\quad \ell = 1,\ldots,d,$$

according to a notation used in MATLAB® [40] and the Python programming language [41]. Hereafter, we shall refer to the integers $\underline{n} = (n_1,\ldots,n_d)$ as the mode sizes. The set of all d-dimensional arrays with the arithmetic operations of addition and multiplication by a scalar number, which we assume to be defined in the natural way, is a linear space. We endow such a linear space with the inner (dot) product and induced norm

$$\langle A, B\rangle = \sum_{i_1,\ldots,i_d} A(i_1,\ldots,i_d)\, B(i_1,\ldots,i_d), \qquad \|A\|_F^2 = \langle A, A\rangle,$$

for all $A, B\in\mathbb{R}^{n_1\times\cdots\times n_d}$, and where the summation runs over the set of all admissible values of the multi-index $(i_1,\ldots,i_d)$. The norm $\|A\|_F$ is usually called the Frobenius norm of A.

We say that the d-order tensor A admits a tensor-train representation if there exists a $(d+1)$-tuple of strictly positive integers $r_0, r_1,\ldots,r_d$, with $r_0 = r_d = 1$, and d three-dimensional tensors $G_\ell\in\mathbb{R}^{r_{\ell-1}\times n_\ell\times r_\ell}$, $\ell = 1,\ldots,d$, such that

$$A(i_1,\ldots,i_d) = \sum_{\alpha_0=1}^{r_0}\sum_{\alpha_1=1}^{r_1}\cdots\sum_{\alpha_d=1}^{r_d} G_1(\alpha_0,i_1,\alpha_1)\, G_2(\alpha_1,i_2,\alpha_2)\cdots G_d(\alpha_{d-1},i_d,\alpha_d), \qquad (16)$$

for all $i_\ell = 1,\ldots,n_\ell$, $\ell = 1,\ldots,d$. The tensors $G_\ell$ are called the TT cores and the integers $r_\ell$ the TT ranks (also known as the compression ranks) of the tensor-train representation. Since $r_0 = r_d = 1$, the tensors $G_1$ and $G_d$ are two-dimensional arrays, i.e., matrices, and the right-hand side of (16) is a scalar number. Moreover, each factor $G_\ell(:,i_\ell,:)$ for any fixed $i_\ell = 1,\ldots,n_\ell$ is a matrix with size $r_{\ell-1}\times r_\ell$, so that we can reformulate (16) in the compact matrix multiplication form given by

$$A(i_1,\ldots,i_d) = G_1(:,i_1,:)\, G_2(:,i_2,:)\cdots G_d(:,i_d,:), \qquad (17)$$
which allows for a faster implementation using BLAS level-3 routines [42].
The storage of the tensor-train representation of A requires $\sum_{\ell} n_\ell\, r_{\ell-1}\, r_\ell$ real numbers. If n and r are convenient upper bounds of all the mode sizes and TT ranks, so that $n_\ell\le n$ and $r_\ell\le r$ for all $\ell$, the storage is proportional to $O(d\,n\,r^2)$ instead of the $O(n^d)$ of the full format representation.
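As an illustration of (16) and (17) and of the storage count above, the following sketch builds random TT cores with prescribed ranks and reconstructs a single entry through the matrix products in (17). All names are ours, and the cores are random toy data, not the result of a decomposition algorithm:

```python
import numpy as np

d, n, r = 6, 10, 3                          # order, mode size, TT rank (toy values)
ranks = [1] + [r] * (d - 1) + [1]           # r_0 = r_d = 1
rng = np.random.default_rng(0)
cores = [rng.standard_normal((ranks[l], n, ranks[l + 1])) for l in range(d)]

def tt_entry(cores, index):
    """Evaluate A(i_1, ..., i_d) from its TT cores via the matrix products in (17)."""
    val = np.eye(1)
    for G, i in zip(cores, index):
        val = val @ G[:, i, :]              # (1 x r_{l-1}) times (r_{l-1} x r_l)
    return val.item()

storage_tt = sum(G.size for G in cores)     # roughly d * n * r^2 numbers
storage_full = n ** d                       # versus n^d for the full tensor
print(tt_entry(cores, (0, 1, 2, 3, 4, 5)), storage_tt, storage_full)
```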
The tensor-train representation of a tensor A is related to an exact or approximate skeleton decomposition of its unfolding matrices $A_\ell$, $\ell = 1,\ldots,d$, cf. [21]. The $\ell$-th unfolding matrix $A_\ell$ is defined by suitably remapping the entries of A into a matrix with size $(n_1\cdots n_\ell)\times(n_{\ell+1}\cdots n_d)$. To this end, we consider the integer-valued functions $\varphi_\ell(i_1,\ldots,i_\ell)$, $\ell = 1,\ldots,d-1$, which are iteratively defined as

$$\varphi_1(i_1) = i_1, \qquad \varphi_\ell(i_1,\ldots,i_\ell) = \varphi_{\ell-1}(i_1,\ldots,i_{\ell-1}) + \Big(\prod_{k=1}^{\ell-1} n_k\Big)(i_\ell - 1), \quad \ell = 2,\ldots,d-1.$$
Then, we set
$$A_\ell\big(\varphi_\ell(i_1,\ldots,i_\ell),\, \varphi_{d-\ell}(i_{\ell+1},\ldots,i_d)\big) := A(i_1,\ldots,i_d).$$
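In NumPy, the index maps $\varphi_\ell$ correspond to a column-major (Fortran-order) linearization, so the $\ell$-th unfolding can be obtained by a plain reshape. A minimal sketch, with variable names of our own choosing:

```python
import numpy as np

def unfolding(A, ell):
    """ell-th unfolding A_ell: first ell modes become rows, the rest become columns.
    Fortran (column-major) ordering reproduces the linear index maps phi_ell."""
    rows = int(np.prod(A.shape[:ell]))
    return A.reshape(rows, -1, order="F")

A = np.arange(2 * 3 * 4 * 5, dtype=float).reshape(2, 3, 4, 5)
print(unfolding(A, 2).shape)                                  # (6, 20)
# The exact TT ranks of A are the ranks of these unfolding matrices:
print([np.linalg.matrix_rank(unfolding(A, l)) for l in range(1, 4)])
```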
The existence of an exact TT decomposition of a tensor $A\in\mathbb{R}^{n_1\times\cdots\times n_d}$ with compression ranks $r_1,\ldots,r_{d-1}$ equal to the ranks of its matrix unfoldings $A_1,\ldots,A_{d-1}$ is proved in [21]. Such compression ranks are the smallest possible ones among all possible exact tensor-train decompositions of A. The problem of computing approximate tensor-train decompositions with smaller ranks is addressed in [22]. In particular, if all the unfolding matrices $A_\ell$ of tensor A can be approximated by rank-$r_\ell$ matrices $\tilde A_\ell$ through an approximate skeleton decomposition with accuracy $\epsilon_\ell$, so that

$$\big\|A_\ell - \tilde A_\ell\big\|_F \le \epsilon_\ell, \qquad \ell = 1,2,\ldots,d-1, \qquad (18)$$

then there exists an approximate tensor $\tilde A$ in the tensor-train format with compression ranks $r_\ell$ satisfying

$$\big\|A - \tilde A\big\|_F^2 \le \sum_{\ell=1}^{d-1}\epsilon_\ell^2. \qquad (19)$$
In the numerical experiments of Section 4, we shall consider the low-rank tensor-train approximation of the grid functions obtained by using the TT cross interpolation of multidimensional arrays proposed in [22]. The TT cross-skeletal decomposition allows us to approximate a multivariate function using only a few entries of the tensor array resulting from the function sampling at the grid nodes, thus avoiding working with the array containing the full multidimensional grid function.

3.2. Numerical Integration of a Low-Rank Tensor-Train Representation of a Multidimensional Grid Function

For any mesh cell $C_{\underline{i}}$, we collect the nodes and weights defining the elemental multidimensional integration rule in the pair $(X^m_{\underline{i}}, W^m_{\underline{i}})$, where $X^m_{\underline{i}} = \{x_{\underline{i},\underline{q}}\}$ is the set of the quadrature nodes and $W^m_{\underline{i}} = \{w_{\underline{i},\underline{q}}\}$ is the set of quadrature weights. The global integration rule in (4) is associated with $(X^m_{\underline{n}}, W^m_{\underline{n}})$, so that, for any cell index $\underline{i}\in\mathcal{I}_{\underline{n}}$, it holds that $X^m_{\underline{n}}\big|_{C_{\underline{i}}} = X^m_{\underline{i}}$ and $W^m_{\underline{n}}\big|_{C_{\underline{i}}} = W^m_{\underline{i}}$, where we denote the restriction of a quantity $(*)$ to cell $C_{\underline{i}}$ by $(*)\big|_{C_{\underline{i}}}$. We represent the grid function given by sampling the function f at the nodes of a generic d-dimensional grid X by the d-dimensional tensor $F_X = \big(F_X(j_1,\ldots,j_d)\big)$. If $X = X^m_{\underline{n}}$, every index $j_\ell$ runs from 1 to $m\,n_\ell$. The nodes of $X^m_{\underline{n}}$ are not equispaced if the local multidimensional cubature rule is built by taking the tensor product of univariate quadrature rules with non-equispaced nodes, such as the Clenshaw-Curtis and Gauss-Legendre formulas. The integration formula (4) can be rewritten as the tensor inner product

$$Q^m_{\underline{n}}(f) = \big\langle F_X,\, W^m_{\underline{n}}\big\rangle. \qquad (20)$$

The tensor $W^m_{\underline{n}}$ has compression ranks equal to 1, and if we know a TT decomposition of $F_X$ with compression ranks $r_\ell$, we can compute the composite cubature rule $Q^m_{\underline{n}}(f)$ on mesh $\Omega_{\underline{n}}$ using Algorithm 1 with a computational cost proportional to $m\sum_{\ell=1}^{d} n_\ell\, r_\ell\, r_{\ell-1}$. Recalling again the uniform upper bounds $n_\ell\le n$ and $r_\ell\le r$, we find that the storage scales up like $O(d\,m\,n\,r^2)$. The computational complexity is thus polynomial in the mode sizes and compression ranks, and, under the (strong) assumption that the error $\epsilon_\ell$ in (18) and (19) is small for all $\ell$, the computational complexity may depend only linearly on the number of dimensions. We prove this result in the following theorem.
Algorithm 1: Tensor-Train Cubature Algorithm.
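A minimal sketch of the contraction performed by Algorithm 1, following the proof of Theorem 5 given below: each TT core is first contracted with the vector of cubature weights along its mode index, and the resulting small matrices are multiplied from left to right. The function and variable names are ours, and we assume the TT cores of $F_X$ and the per-direction weight vectors are already available:

```python
import numpy as np

def tt_cubature(cores, weights):
    """Q(f) = <F_X, W> for F_X given in tensor-train format.

    cores[l]   : array of shape (r_{l-1}, m*n_l, r_l), TT core of the grid function
    weights[l] : vector of length m*n_l with all cubature weights along direction l
    """
    # Mode-2 contraction of every core with its weight vector:
    # Gt_l(a, b) = sum_j weights[l][j] * G_l(a, j, b).
    contracted = [np.einsum("ajb,j->ab", G, w) for G, w in zip(cores, weights)]
    # Left-to-right products of the small r_{l-1} x r_l matrices (r_0 = r_d = 1).
    acc = contracted[0]
    for M in contracted[1:]:
        acc = acc @ M
    return acc.item()

# Toy check: f(x) = prod_l x_l on [0,1]^d is an exact rank-1 tensor train; with one
# cell per direction and the 2-point Gauss-Legendre rule the result is (1/2)^d.
d = 5
xi, eta = np.polynomial.legendre.leggauss(2)
x_nodes = 0.5 * (xi + 1.0)                  # nodes mapped from [-1, 1] to [0, 1]
w_nodes = 0.5 * eta                         # weights mapped to [0, 1]
cores = [x_nodes.reshape(1, 2, 1) for _ in range(d)]
print(tt_cubature(cores, [w_nodes] * d), 0.5 ** d)
```

In the toy check, the grid function of $f(\mathbf{x}) = \prod_\ell x_\ell$ is an exact rank-1 tensor train, so the contraction reproduces $(1/2)^d$ up to round-off.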
Theorem 5.
Let $F_X$ admit the low-rank approximation $\tilde F_X$ in the tensor-train format, so that

$$\tilde F_X(j_1,j_2,\ldots,j_d) = \sum_{\alpha_1=1}^{r_1}\sum_{\alpha_2=1}^{r_2}\cdots\sum_{\alpha_{d-1}=1}^{r_{d-1}} G_1(1,j_1,\alpha_1)\, G_2(\alpha_1,j_2,\alpha_2)\cdots G_d(\alpha_{d-1},j_d,1),$$

where $r_\ell\le r\le n$, for $\ell = 1,\ldots,d-1$, are the tensor-train ranks and $j_\ell = 1,\ldots,m\,n_\ell$, for $\ell = 1,\ldots,d$, is the index running through all the $m\,n_\ell$ quadrature nodes of the $n_\ell$ intervals along the direction $\ell$. Then, to compute the numerical cubature (20) using Algorithm 1, we need $O\big(d\,m\,n\,r^2 + (d-1)\,r^2\big)$ saxpy operations.
Proof. 
Let $w_\ell$ be the one-dimensional $m\,n_\ell$-sized vector collecting the weights of all quadrature nodes along the direction $\ell$. Then, we write the cubature rule (20) as

$$\begin{aligned} Q^m_{\underline{n}}(f) &= \sum_{\alpha_1=1}^{r_1}\sum_{\alpha_2=1}^{r_2}\cdots\sum_{\alpha_{d-1}=1}^{r_{d-1}} \big(w_1\times_2 G_1(1,\cdot,\alpha_1)\big)\,\big(w_2\times_2 G_2(\alpha_1,\cdot,\alpha_2)\big)\cdots\big(w_d\times_2 G_d(\alpha_{d-1},\cdot,1)\big)\\ &= \sum_{\alpha_1=1}^{r_1}\sum_{\alpha_2=1}^{r_2}\cdots\sum_{\alpha_{d-1}=1}^{r_{d-1}} \tilde G_1(1,\alpha_1)\,\tilde G_2(\alpha_1,\alpha_2)\cdots\tilde G_d(\alpha_{d-1},1), \end{aligned}$$

where $\tilde G_\ell(\alpha_{\ell-1},\alpha_\ell) = w_\ell\times_2 G_\ell(\alpha_{\ell-1},\cdot,\alpha_\ell)$ is the mode-2 product between the vector $w_\ell$ and the tensor-train core $G_\ell(\alpha_{\ell-1},i_\ell,\alpha_\ell)$, i.e., the contraction along the $\ell$-th mode index. For every pair of indices $(\alpha_{\ell-1},\alpha_\ell)$ of the contracted core $\tilde G_\ell(\alpha_{\ell-1},\alpha_\ell)$, the contraction requires $m\,n_\ell$ saxpy operations, and, since $\tilde G_\ell$ is an $r_{\ell-1}\times r_\ell$-sized matrix, the contraction of all the d cores requires $\sum_{\ell=1}^{d} m\,n_\ell\, r_{\ell-1}\, r_\ell \le d\,m\,n\,r^2$ such operations. Eventually, the calculation proceeds as a sequence of vector-matrix products from left to right, between a temporary intermediate one-dimensional $r_{\ell-1}$-sized array and the two-dimensional $r_{\ell-1}\times r_\ell$-sized matrix, for $\ell = 2,\ldots,d$, thus requiring another $\sum_{\ell=2}^{d} r_{\ell-1}\, r_\ell \le (d-1)\, r^2$ saxpy operations. □
Remark 3.
If the tensor $F_X$ admits a low-rank approximation $\tilde F_X$ with compression ranks $\tilde r_\ell\le\tilde r\le r$, we have the even better scaling proportional to $O(d\,m\,n\,\tilde r^2)$.
Remark 4.
Using the TT cross interpolation algorithm for multidimensional arrays [22] makes it possible to compute such an approximate TT decomposition of $F_X$ using only a very small fraction of the grid values of f at $X^m_{\underline{n}}$ and without pre-computing and storing the full tensor $F_X$. For the details of this algorithm and its practical implementation, we refer again to [22].
We characterize the accuracy of this approximation by introducing the tolerance factor $\tau$, which is such that $\|F_X - \tilde F_X\|_F\le\tau$. The tolerance factor clearly has an influence on the ranks of the tensor-train approximation of the multidimensional tensor of the grid function values, e.g., $F_X$, and we can expect that a smaller tolerance implies bigger ranks. The interplay between the tolerance $\tau$ and the ranks is crucial to obtain an efficient integration algorithm. If, instead of computing $Q^m_{\underline{n}}(f)$, we compute $Q^m_{\underline{n}}(\tilde f)$, where $\tilde f$ is the approximation of f represented by $\tilde F_X$, and we use the linearity of $Q^m_{\underline{n}}(\cdot)$, we immediately find the bound on the integration error given by

$$\big|E_{\underline{n}}(\tilde f)\big| = \big|I(f) - Q^m_{\underline{n}}(\tilde f)\big| \le \big|I(f) - Q^m_{\underline{n}}(f)\big| + \big|Q^m_{\underline{n}}(f) - Q^m_{\underline{n}}(\tilde f)\big| = \big|E_{\underline{n}}(f)\big| + \big|Q^m_{\underline{n}}(f - \tilde f)\big|. \qquad (21)$$

Assuming for simplicity that $h_\ell = h$ for $\ell = 1,\ldots,d$, so that we have the same number of points $n_\ell = n$, corresponding to $n - 1 = 1/h$ partitions in every direction, we note that

$$\big\|W^m_{\underline{n}}\big\|_F^2 = \sum_{\underline{i}\in\mathcal{I}_{\underline{n}}}\ \sum_{\underline{q}\in\mathcal{Q}^m_d} w_{\underline{q}}^2 \le (n-1)^d\, h^d = 1.$$
Using this fact, definition (20) and the Cauchy-Schwarz inequality, we find that
$$\big|Q^m_{\underline{n}}(f - \tilde f)\big| = \big|\big\langle F_X - \tilde F_X,\, W^m_{\underline{n}}\big\rangle\big| \le \big\|F_X - \tilde F_X\big\|_F\, \big\|W^m_{\underline{n}}\big\|_F \le \tau.$$

If $\tau\ll\big|E_{\underline{n}}(f)\big|$, we can neglect the last term of (21), and we find that $\big|E_{\underline{n}}(\tilde f)\big|\approx\big|E_{\underline{n}}(f)\big|$, i.e., the convergence properties of the cubature remain essentially the same even if we integrate $\tilde f$ instead of f.

4. Numerical Experiments

The Newton-Cotes (trapezoidal and Simpson rules), Clenshaw-Curtis, and Gauss-Legendre formulas are all of the interpolatory type. In fact, they can be derived by integrating the polynomial interpolant supported by the quadrature nodes, and they are characterized by the polynomial degree and the following exactness properties [15]. For any integer $m > 0$,
  • the Newton-Cotes formulas with $m = 2$ points (trapezoidal rule) and $m = 3$ points (Simpson's rule) integrate exactly the polynomials of degree 1 and 3, respectively;
  • the m-point Clenshaw-Curtis formula integrates exactly all polynomials of degree up to $m - 1$;
  • the m-point Gauss-Legendre formula integrates exactly all polynomials of degree up to $2m - 1$.
These exactness properties readily extend to the multidimensional compound cubature. In this section, we assess the performance of our strategy to build such cubatures by investigating the "h" and the "p" convergence of the method (using a terminology widely adopted in the finite element method). Here, $h = 1/N$ is the (uniform) step size of each univariate partition of the tensor product grid, and $N = n - 1$ is the number of intervals of every partition. Moreover, we set the tolerance factor $\tau = 10^{-16}$, and we note that the observed rank is typically between 5 and 10.

4.1. Test Case 1: h-Convergence of a Smooth Function

The three panels in Figure 2 show the log-log plot of the error curves versus the number of intervals per direction N for the numerical integration of the multidimensional function
  • Test Case 1: $f(\mathbf{x}) = \sin\big(1 + 2\pi\sum_{\ell=1}^{d} x_\ell\big) + \big(\tfrac{1}{d}\sum_{\ell=1}^{d} x_\ell\big)^{10}$;   $I(f) = 1/2 - 1/99$,
on the integration domain $\Omega = [0,1]^d$.
We consider $d = 10, 20, 50, 100$. We perform the numerical integration using (i) the Newton-Cotes formulas (trapezoidal and Simpson quadrature rules); (ii) the Clenshaw-Curtis formulas with $m = 2, 4, 5$; (iii) the Gauss-Legendre formulas with $m = 1, 2, 3$. The convergence rate is expected to be proportional to $N^{-2}$ (trapezoidal rule), $N^{-4}$ (Simpson rule), $N^{-m}$ (Clenshaw-Curtis rules), and $N^{-2m}$ (Gauss-Legendre rules), and is shown by the slope of the triangle close to each curve. The slopes of the error curves reflect the numerical orders of convergence, and Figure 2 shows that these slopes are in agreement with the theoretical expectations. We also note that the convergence behavior seems to be unaffected by the number of dimensions.
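The slopes quoted above can be estimated directly from two consecutive points of an error curve. The small helper below (ours, not part of the paper's software) is what we mean by the observed order of convergence:

```python
import math

def observed_order(err_coarse, err_fine, N_coarse, N_fine):
    """Observed convergence rate p, assuming err ~ C * N**(-p)."""
    return math.log(err_coarse / err_fine) / math.log(N_fine / N_coarse)

# Halving h (doubling N) should reduce the error by ~16x for a fourth-order rule.
print(observed_order(1.0e-4, 6.25e-6, 8, 16))    # ~ 4.0
```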
Figure 2. Test Case 1. Relative approximation error curves versus the number of intervals per direction N for the numerical integration of a d-dimensional polynomial of degree 10, $d = 10, 20, 50, 100$, using (i) Newton-Cotes formulas (trapezoidal and Simpson quadrature rules); (ii) Clenshaw-Curtis formulas with $m = 2, 4, 5$; (iii) Gauss-Legendre formulas with $m = 1, 2, 3$. The expected convergence rate is shown by the number close to the triangular shapes and is proportional to $N^{-2}$ (trapezoidal rule), $N^{-4}$ (Simpson rule), $N^{-m}$ (Clenshaw-Curtis rules), and $N^{-2m}$ (Gauss-Legendre rules).

4.2. Test Cases 2 and 3: h-Convergence and Integration of Multidimensional Exponential Functions

In the second and third test cases, we consider the Gauss bell-shaped function and the exponential function, which we denote as Genz-4 and Genz-5 following the nomenclature introduced in [43,44]:
  • Test Case 2: Genz-4: $f(\mathbf{x}) = \exp\big(-\sum_{\ell=1}^{d}(x_\ell - w_\ell)^2\big)$;   $I(f) = \big(\tfrac{\sqrt{\pi}}{2}\,\mathrm{erf}(1)\big)^{d}$;
  • Test Case 3: Genz-5: $f(\mathbf{x}) = \exp\big(-\sum_{\ell=1}^{d}|x_\ell - w_\ell|\big)$;   $I(f) = \big(1 - \tfrac{1}{e}\big)^{d}$.
In Figure 3 and Figure 4, we show the log-log plots of the error curves versus the number of intervals N per direction for the numerical integration of the d-dimensional versions of these functions. We consider $d = 10, 20, 50, 100$. We perform the numerical integration using the (i) trapezoidal formula ($m = 2$); (ii) Simpson formula ($m = 3$); (iii) Clenshaw-Curtis formula ($m = 4$); (iv) Gauss-Legendre formula ($m = 2$). The expected convergence rate, reflected by the slope of the triangles near each curve, is proportional to $N^{-2}$ (trapezoidal rule) and $N^{-4}$ (Simpson, Clenshaw-Curtis, and Gauss-Legendre rules). As in the previous test case, the numerical slopes are in agreement with the theoretical slopes, still shown by the triangle in each plot. The number of dimensions plays, however, a more significant role than in the previous test case, and the constant factor that appears in every error bound (see Theorem 4) changes with d. This behavior is the same for all the cubature schemes that we consider in this test case.
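The closed-form reference values of Genz-4 and Genz-5 are easy to cross-check numerically. In the snippet below (helper names are ours, and the shift $w_\ell = 0$ is our assumption, chosen so that the univariate factors reproduce the exact values stated above), a fine one-dimensional midpoint rule is compared with the closed forms; the d-dimensional integrals follow because both integrands factorize:

```python
import math
import numpy as np

# Genz-4 (Gaussian) and Genz-5 (C0) univariate factors with shift w_l = 0
# (the choice w_l = 0 is our assumption; it reproduces the exact values above).
edges = np.linspace(0.0, 1.0, 200001)
xm = 0.5 * (edges[:-1] + edges[1:])                  # midpoints of a fine 1D grid
dx = edges[1] - edges[0]

one_d_g4 = np.sum(np.exp(-xm ** 2)) * dx             # ~ (sqrt(pi)/2) * erf(1)
one_d_g5 = np.sum(np.exp(-np.abs(xm))) * dx          # ~ 1 - 1/e

d = 50
# Both integrands factorize over the directions, so the d-dimensional
# integral is the d-th power of the one-dimensional one.
print((math.sqrt(math.pi) / 2.0 * math.erf(1.0)) ** d, one_d_g4 ** d)
print((1.0 - math.exp(-1.0)) ** d, one_d_g5 ** d)
```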

4.3. Test Case 4: p-Convergence and Integration of Multidimensional Exponential Functions

In Figure 5, we show the linear-log plots of the error curves that are obtained by applying the cubature rules built using the univariate Clenshaw-Curtis formula (right column) and the Gauss-Legendre formula (left column). The two panels on top refer to the numerical integration of the Genz-4 function, while the two panels at the bottom refer to the numerical integration of the Genz-5 function. We carry out all calculations on a very coarse grid with only two intervals per direction, so the resolution is $2\times 2\times\cdots\times 2$ (d times), and we have a total of $2^d$ multidimensional cells. The numerical integration is performed for $d = 10, 20, 50, 100, 200$. We build the cubature rules using the univariate Clenshaw-Curtis and Gauss-Legendre formulas with $m = 1, 2,\ldots,10$ integration nodes. The four plots of Figure 5 show that the convergence of the Clenshaw-Curtis and Gauss-Legendre formulas is (roughly) proportional to $m^{-\gamma}$ for some positive number $\gamma$.

4.4. Test Case 5: Adaptivity for Numerical Integration of Multidimensional, Weakly Singular Functions

In this test case, we show how we can adapt our cubature formulas to the integration of functions with lower regularity. For this test case, we present only the results obtained with the Gauss-Legendre quadrature rule; the results obtained with the Clenshaw-Curtis quadrature rule are essentially the same. To this end, we consider the so-called ANOVA function, see, for example, [6]:
  • Test Case 5: ANOVA: $f(\mathbf{x}) = \prod_{\ell=1}^{d}\,\big|x_\ell - \bar{x}_\ell\big|$;   $I(f) = 1$.
This function is only continuous on the integration domain $[0,1]^d$, as all its directional derivatives have a singularity at the points with $\ell$-th coordinate equal to $x_\ell = \bar{x}_\ell$. We assume that $\bar{x}_\ell\in(0,1)$ for every directional index $\ell = 1,\ldots,d$. We perform the numerical integration versus the number of dimensions, using the Gauss-Legendre formula with 2 nodes and the equispaced grids with $N = 1, 2, 4$ partitions per direction. When $\bar{x}_\ell = 1/2$, the left panel of Figure 6 shows that the numerical integration with the partition $N = 1$ does not provide a good result. The reason for this behavior is that the singularity of the integrand is inside the (unique) integration cell. However, when we consider the tensor product grids with $N = 2$ and 4 partitions per direction, the cubature rule provides a very accurate result, with an error of the order of the machine precision. In such a case, the multidimensional grid decomposes the computational domain in such a way that the integrand singularity matches the cell interfaces. Therefore, we are integrating over subdomains where the integrand is actually smooth, and since the Gauss-Legendre rule is exact on linear polynomials, the final error must be zero up to the machine precision. This same behavior is reflected by the right panel of Figure 6, which refers to the integration of the ANOVA function with $\bar{x}_\ell = 3/4$. In such a case, only the partition with $N = 4$ cells per direction matches the location of the singularity with a cell interface. This is the only situation in this test case that can be expected to provide a numerical value for the integral that is close to the machine precision, as reflected by the numerical results.

4.5. Test Case 6: Integration of a High-Degree Polynomial and a Function with a Singular High-Order Derivative

We consider the function
  • Test Case 6:
$$f_\mu(\mathbf{x}) = \begin{cases} \big(x_1 - \tfrac{\pi}{4}\big)^{\mu} + \tfrac{1}{d}\sum_{l=1}^{d} T_\mu(x_l) & \text{for } x_1 > \tfrac{\pi}{4},\\[4pt] \tfrac{1}{d}\sum_{l=1}^{d} T_\mu(x_l) & \text{for } x_1 = \tfrac{\pi}{4},\\[4pt] -\big(x_1 - \tfrac{\pi}{4}\big)^{\mu} + \tfrac{1}{d}\sum_{l=1}^{d} T_\mu(x_l) & \text{for } x_1 < \tfrac{\pi}{4}, \end{cases}$$
defined on the integration domain $\Omega = [0,1]^d$, where $T_\mu(x_l)$ is the univariate Chebyshev polynomial of the first kind of degree $\mu$ along the direction l. The exact value of the integral of the function $f_\mu(\mathbf{x})$ is reported in Table 1. We write its derivatives of order $k = 1,\ldots,\mu$ along the direction $x_1$ as follows:
$$\text{for } k = 1,\ldots,\mu:\qquad f^{(k)}_\mu(\mathbf{x}) = \begin{cases} c_{\mu,k}\,\big(x_1 - \tfrac{\pi}{4}\big)^{\mu-k} + \tfrac{1}{d}\sum_{l=1}^{d} T^{(k)}_\mu(x_l) & \text{for } x_1 > \tfrac{\pi}{4},\\[4pt] \tfrac{1}{d}\sum_{l=1}^{d} T^{(k)}_\mu(x_l) & \text{for } x_1 = \tfrac{\pi}{4},\\[4pt] -c_{\mu,k}\,\big(x_1 - \tfrac{\pi}{4}\big)^{\mu-k} + \tfrac{1}{d}\sum_{l=1}^{d} T^{(k)}_\mu(x_l) & \text{for } x_1 < \tfrac{\pi}{4}, \end{cases}$$
where $c_{\mu,k} = \prod_{s=1}^{k}(\mu - s + 1)$ and $T^{(k)}_\mu$ is the k-th derivative of $T_\mu(x_l)$. The function $f_\mu(\mathbf{x})$ and its first $\mu - 1$ derivatives $f^{(k)}_\mu$, $1\le k\le\mu-1$, are clearly continuous at the singular point $x_1 = \pi/4$, and, since the other directions are not affected by the singularity, which lies only on the hyperplane $x_1 = \pi/4$, all these functions are continuous on $\Omega$. Instead, the $\mu$-th derivative along $x_1$ is singular at $x_1 = \pi/4$, as it reads
$$f^{(\mu)}_\mu(\mathbf{x}) = \begin{cases} \mu! + \tfrac{1}{d}\sum_{l=1}^{d} T^{(\mu)}_\mu(x_l) & \text{for } x_1 > \tfrac{\pi}{4},\\[4pt] -\mu! + \tfrac{1}{d}\sum_{l=1}^{d} T^{(\mu)}_\mu(x_l) & \text{for } x_1 < \tfrac{\pi}{4}. \end{cases}$$
The regularity of $f_\mu$ is sufficient to ensure the convergence of the Gauss-Legendre integration formula for $m = 1,\ldots,\mu$ since, according to Theorem 4, we only need regularity up to $f^{(m)}$. When $m\ge\mu/2$, the Gauss-Legendre quadrature integrates exactly all polynomials of degree up to $\mu - 1$, and we expect to see the integration error at the machine precision level. Figure 7 shows the linear-log plot of the error curves versus the number of nodes m. For comparison, we consider a tensor product mesh with two partitions per direction, so that the singularity coincides with a cell interface that is orthogonal to the first spatial direction. In this last situation, the singularity does not affect the accuracy of the numerical integration, as discussed in Test Case 5. The expected behavior is confirmed for $d = 10$.

4.6. Test Case 7: Efficiency Benchmarks through a Comparison with Monte Carlo Methods

Figure 8 shows the log-log plot of the error curves versus the elapsed time. We compare the performance of the strategy proposed in this paper, using the Gauss-Legendre quadrature formula, with the performance of two Monte Carlo integration methods (MC Lattice and MC Sobol, see [35]). These two variants of the Monte Carlo method are provided by the library GAIL [35]. The elapsed time is measured in seconds, and the number of dimensions of the integrated function, d, is indicated below each panel. The numerical integration is performed on the function Genz-2 (see [43,44]):
  • Test Case 7: Genz-2: $f(\mathbf{x}) = \prod_{l=1}^{d}\,\frac{4}{\pi}\,\big(1 + (x_l - 1)^2\big)^{-1}$;   $I(f) = 1$,
on a d-dimensional grid with d = 10, 20, 50, 100, 200, 500. These results show that the numerical integration error of the proposed cubature formula can be up to 10 orders of magnitude smaller for comparable elapsed times.

5. Conclusions

In this paper, we discussed an approach that extends one-dimensional integration rules to a multidimensional grid and presented numerical evidence demonstrating its effectiveness. The straightforward extension of a one-dimensional quadrature to multidimensional grids by the tensor product of the spatial directions is deemed practically impossible for dimensions greater than three or four, because the computational burden in terms of storage and floating point operations grows exponentially with the number of dimensions (the curse of dimensionality). By employing a low-rank tensor-train representation of the integrand function, however, the tensor product method can be utilized effectively for high-dimensional numerical integration. We assessed the effectiveness of this approach by applying it to the numerical integration of a selected set of multidimensional functions. In particular, the composite integration technique can be competitive with the Monte Carlo method in terms of accuracy and computational costs if the integrand function is sufficiently regular and admits a low-rank tensor-train representation. We obtain such a low-rank representation by applying the TT cross interpolation algorithm, which provides an approximation of the multidimensional tensor collecting the values of the integrand function at the integration nodes. We proved that the accuracy of our composite numerical cubatures is dominated by the "standard" numerical integration error, which depends on the regularity of the integrand function, whenever such an error is much bigger than the error introduced by the TT cross approximation algorithm. An open issue is the characterization of the functional interpolation that the TT cross approximate tensor provides. We can assume that this functional interpolation belongs to a subspace of a finite-dimensional approximation space that we can construct, for example, by the tensor product of univariate polynomials, piecewise polynomial interpolations, or similar constructions. If we had a characterization of this subspace, then we could analyze the numerical integration error directly through the function approximation based on the TT cross interpolation algorithm. We will investigate this topic in our future work.

Author Contributions

Conceptualization, B.A., G.M., E.W.S., P.M.D.T. and R.G.V.; Methodology, B.A., G.M., E.W.S., P.M.D.T. and R.G.V.; Validation, B.A., G.M., E.W.S., P.M.D.T. and R.G.V.; Writing—original draft, B.A., G.M., E.W.S., P.M.D.T. and R.G.V.; Writing—review & editing, B.A., G.M., E.W.S., P.M.D.T. and R.G.V. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Laboratory Directed Research and Development (LDRD) program, grant numbers 20230067DR and 20210485ER.

Institutional Review Board Statement

This document has been approved for public release as the Los Alamos Technical Report LA-UR-22-29661.

Data Availability Statement

Not applicable.

Acknowledgments

This work was partially supported by the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory under Grants 20190020DR, 20230067DR and 20210485ER, and in part by LANL Institutional Computing Program. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). This article has been authored by an employee of the National Technology & Engineering Solutions of Sandia, LLC under Contract No. DE-NA0003525 with the U.S. Department of Energy (DOE). The employee owns all right, title and interest in and to the article and is solely responsible for its contents. The United States Government retains, and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this article or allows others to do so, for United States Government purposes. The DOE will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan https://www.energy.gov/downloads/doepublic-access-plan.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Clenshaw-Curtis and Gauss-Legendre Quadrature Rules

Clenshaw-Curtis quadratures can be derived through an expansion of the integrand function on Chebyshev polynomials, or, equivalently, through the change of variables x = cos θ and, then, applying a discrete cosine transform (DCT) approximation for the cosine series. For the readers’ convenience, we report the value of the Clenshaw-Curtis quadrature nodes and weights of order 2, 3, 4, which we used in our numerical experiments, in Table A1.
Table A1. Nodes and weights of the Clenshaw-Curtis quadrature formulas of order 2, 3, and 4 on the interval $[-1,1]$.
      Clenshaw-Curtis-2        Clenshaw-Curtis-3        Clenshaw-Curtis-4
      Nodes     Weights        Nodes     Weights        Nodes     Weights
1     −1        1              −1        1/3            −1        1/9
2      1        1               0        4/3            −1/2      8/9
3                               1        1/3             1/2      8/9
4                                                         1        1/9
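The following Python sketch is a minimal way to reproduce the entries of Table A1 from the closed-form cosine-series expression of the Clenshaw-Curtis weights; it is given only as an illustration and is not part of the integration code used in the paper.

```python
import numpy as np

def clenshaw_curtis(n):
    """Nodes and weights of the (n + 1)-point Clenshaw-Curtis rule on [-1, 1]."""
    if n == 0:
        return np.array([0.0]), np.array([2.0])
    k = np.arange(n + 1)
    nodes = np.cos(k * np.pi / n)          # Chebyshev points, listed from 1 down to -1
    weights = np.empty(n + 1)
    for i in k:
        s = sum((1.0 if 2 * j == n else 2.0) * np.cos(2.0 * j * i * np.pi / n)
                / (4.0 * j * j - 1.0) for j in range(1, n // 2 + 1))
        c = 1.0 if i in (0, n) else 2.0
        weights[i] = c * (1.0 - s) / n
    return nodes, weights

# Clenshaw-Curtis-4 column of Table A1: nodes 1, 1/2, -1/2, -1 and weights 1/9, 8/9, 8/9, 1/9
print(clenshaw_curtis(3))
```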
The nodes $\xi_i$, $i = 1, \dots, n$, of the Gauss-Legendre quadrature of order n are the n roots of $P_n(\xi)$, the univariate Legendre polynomial of degree n defined on the reference interval [−1, 1]. The corresponding weights are given by the formula $w_i = 2 / \big[ (1 - \xi_i^2)\,(P_n'(\xi_i))^2 \big]$. For the readers’ convenience, we report the Gauss-Legendre quadrature formulas of orders 1, 2, and 3, which we used in our numerical experiments, in Table A2.
Table A2. Nodes and weights of the Gauss-Legendre quadrature formulas of order 1, 2, and 3 on the interval [−1, 1].
      Gauss-Legendre-1         Gauss-Legendre-2         Gauss-Legendre-3
      Nodes     Weights        Nodes     Weights        Nodes       Weights
1     0         2              −1/√3     1              −√(3/5)     5/9
2                               1/√3     1               0          8/9
3                                                        √(3/5)     5/9
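As a quick cross-check of Table A2, NumPy's Gauss-Legendre routine can be compared against the weight formula above; this small Python sketch is illustrative only and is not part of our experimental code.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

n = 3
nodes, weights = leggauss(n)         # n-point Gauss-Legendre rule on [-1, 1]

# Weights recomputed from w_i = 2 / [(1 - xi_i^2) * (P_n'(xi_i))^2]
dPn = Legendre.basis(n).deriv()
w_formula = 2.0 / ((1.0 - nodes**2) * dPn(nodes) ** 2)

print(nodes)                         # approx. [-sqrt(3/5), 0, sqrt(3/5)]
print(weights, w_formula)            # both approx. [5/9, 8/9, 5/9]
```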

References

1. Brooks, S.; Gelman, A.; Jones, G.L.; Meng, X.L. Handbook of Markov Chain Monte Carlo; Handbooks of Modern Statistical Methods; Chapman & Hall/CRC: Boca Raton, FL, USA, 2011.
2. Stuart, A.M. Inverse problems: A Bayesian perspective. Acta Numer. 2010, 19, 451–559.
3. Meyer, H.D.; Manthe, U.; Cederbaum, L.S. The multi-configurational time-dependent Hartree approach. Chem. Phys. Lett. 1990, 165, 73–78.
4. Aghanim, N.; Akrami, Y.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Ballardini, M.; Banday, A.J.; Barreiro, R.B.; Bartolo, N.; Basak, S.; et al. Planck 2018 results. Astron. Astrophys. 2020, 641, A6.
5. Barbu, A.; Zhu, S.C. Monte Carlo Methods; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–17.
6. Wang, X.; Fang, K.T. The effective dimension and quasi-Monte Carlo integration. J. Complex. 2003, 19, 101–124.
7. Giles, M.B. Multilevel Monte Carlo methods. Acta Numer. 2015, 24, 259–328.
8. Gerstner, T.; Griebel, M. Numerical integration using sparse grids. Numer. Algorithms 1998, 18, 209.
9. Novak, E.; Ritter, K. High dimensional integration of smooth functions over cubes. Numer. Math. 1996, 75, 79–97.
10. Griebel, M.; Schneider, M.; Zenger, C. A combination technique for the solution of sparse grid problems. In Iterative Methods in Linear Algebra; de Groen, P., Beauwens, R., Eds.; IMACS, Elsevier: Amsterdam, The Netherlands, 1992; pp. 263–281.
11. Sloan, I.H. Lattice methods for multiple integration. J. Comput. Appl. Math. 1985, 12–13, 131–143.
12. Sloan, I.H.; Joe, S. Lattice Methods for Multiple Integration; Oxford University Press: Oxford, UK, 1994.
13. Piazzola, C.; Tamellini, L. The Sparse Grids Matlab kit—A Matlab implementation of sparse grids for high-dimensional function approximation and uncertainty quantification. arXiv 2022, arXiv:2203.09314.
14. Jakeman, J.D. PyApprox Software Library. Available online: https://sandialabs.github.io/pyapprox/index.html (accessed on 1 December 2022).
15. Cools, R. Constructing cubature formulae: The science behind the art. Acta Numer. 1997, 6, 1–54.
16. Davis, P.J.; Rabinowitz, P.; Rheinboldt, W. Methods of Numerical Integration, 2nd ed.; Computer Science and Applied Mathematics; Elsevier Inc.: Amsterdam, The Netherlands; Academic Press: Cambridge, MA, USA, 1984.
17. Brass, H.; Petras, K. Quadrature Theory; Mathematical Surveys and Monographs 178; American Mathematical Society: Providence, RI, USA, 2011.
18. Trefethen, L.N. Approximation Theory and Approximation Practice; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 2013.
19. Hackbusch, W. Tensor Spaces and Numerical Tensor Calculus, 1st ed.; Springer Series in Computational Mathematics 42; Springer: Berlin/Heidelberg, Germany, 2012.
20. Hackbusch, W. Numerical tensor calculus. Acta Numer. 2014, 23, 651–742.
21. Oseledets, I.V. Tensor-Train decomposition. SIAM J. Sci. Comput. 2011, 33.
22. Oseledets, I.; Tyrtyshnikov, E. TT-cross approximation for multidimensional arrays. Linear Algebra Its Appl. 2010, 432, 70–88.
23. Khoromskij, B.N. O(d log N)-quantics approximation of N-d tensors in high-dimensional numerical modeling. Constr. Approx. 2011, 34, 257–280.
24. Savostyanov, D.V. Fast revealing of mode ranks of tensor in canonical form. Numer. Math. Theory Methods Appl. 2009, 2, 439–444.
25. Oseledets, I.V.; Savostianov, D.V.; Tyrtyshnikov, E.E. Tucker Dimensionality Reduction of Three-Dimensional Arrays in Linear Time. SIAM J. Matrix Anal. Appl. 2008, 30, 939–956.
26. Hackbusch, W.; Kühn, S. A new scheme for the tensor representation. J. Fourier Anal. Appl. 2009, 15, 706–722.
27. Oseledets, I.V.; Savostyanov, D.V.; Tyrtyshnikov, E.E. Cross approximation in tensor electron density computations. Numer. Linear Algebra Appl. 2010, 17, 935–952.
28. Dolgov, S.; Khoromskij, B. Simultaneous state-time approximation of the chemical master equation using tensor product formats. Numer. Linear Algebra Appl. 2014, 22, 197–219.
29. Dolgov, S.; Khoromskij, B.; Savostyanov, D. Superfast Fourier transform using QTT approximation. J. Fourier Anal. Appl. 2012, 18, 915–953.
30. Zhang, Z.; Yang, X.; Oseledets, I.V.; Karniadakis, G.E.; Daniel, L. Enabling high-dimensional hierarchical uncertainty quantification by ANOVA and tensor-train decomposition. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2015, 34, 63–76.
31. Richter, L.; Sallandt, L.; Nüsken, N. Solving high-dimensional parabolic PDEs using the tensor-train format. In Proceedings of the 38th International Conference on Machine Learning, Virtual, 18–24 July 2021.
32. Dolgov, S.V. A tensor decomposition algorithm for large ODEs with conservation laws. Comput. Methods Appl. Math. 2019, 19, 23–38.
33. Dolgov, S.; Savostyanov, D. Parallel cross interpolation for high-precision calculation of high-dimensional integrals. Comput. Phys. Commun. 2019, 246, 106869.
34. Vysotsky, L.I.; Smirnov, A.V.; Tyrtyshnikov, E.E. Tensor-train numerical integration of multivariate functions with singularities. Lobachevskii J. Math. 2021, 42, 1608–1621.
35. Choi, S.C.T.; Ding, Y.; Hickernell, F.J.; Jiang, L.; Jiménez Rugama, L.A.; Tong, X.; Zhang, Y.; Zhou, X. GAIL: Guaranteed Automatic Integration Library (Versions 1.0–2.2). MATLAB Software (2013–2017). 2021. Available online: http://gailgithub.github.io/GAIL_Dev/ (accessed on 1 June 2021).
36. Imhof, J.P. On the method for numerical integration of Clenshaw and Curtis. Numer. Math. 1963, 5, 138–141.
37. Quarteroni, A.; Sacco, R.; Saleri, F. Numerical Mathematics, 2nd ed.; Texts in Applied Mathematics; Springer: Berlin/Heidelberg, Germany, 2007; Volume 37.
38. Trefethen, L.N. Is Gauss Quadrature Better than Clenshaw–Curtis? SIAM Rev. 2008, 50.
39. Kahaner, D.; Moler, C.; Nash, S. Numerical Methods and Software; Prentice Hall: Hoboken, NJ, USA, 1988.
40. MATLAB, Version 9.8.0 (R2020a); The MathWorks Inc.: Natick, MA, USA, 2020.
41. Van Rossum, G.; Drake, F.L., Jr. Python Reference Manual; Centrum Voor Wiskunde en Informatica: Amsterdam, The Netherlands, 1995.
42. Blackford, L.S.; Demmel, J.; Dongarra, J.; Duff, I.; Hammarling, S.; Henry, G.; Heroux, M.; Kaufman, L.; Lumsdaine, A.; Petitet, A.; et al. An updated set of basic linear algebra subprograms (BLAS). ACM Trans. Math. Softw. 2002, 28, 135–151.
43. Genz, A. Testing multidimensional integration routines. In Tools, Methods, and Languages for Scientific and Engineering Computation; Ford, B., Rault, J.C., Eds.; Elsevier: Amsterdam, The Netherlands, 1984; pp. 81–94.
44. Genz, A. A package for testing multiple integration subroutines. In Numerical Integration: Recent Developments, Software and Applications; Keast, P., Reidel, G.F., Eds.; Springer: Berlin/Heidelberg, Germany, 1987; pp. 337–340.
Figure 1. Notation. For $d = 2$, we show the generic cell $C_{\underline{i}} = [x_{i_1}^{(1)}, x_{i_1+1}^{(1)}] \times [x_{i_2}^{(2)}, x_{i_2+1}^{(2)}]$, with $\underline{i} = (i_1, i_2)$; the univariate quadrature nodes (black dots) with coordinates $x_{i_\ell, q}^{(\ell)}$, with $\ell = 1, 2$, $q = 1, 2, 3$, which correspond to the 3-point Gauss–Legendre integration rule; the cubature nodes inside the cell (red dots) are provided by the tensor product of the one-dimensional quadrature nodes along the directions $\ell = 1, 2$. To make the figure easily readable, we show all the quadrature and cubature nodes relative to cell $C_{\underline{i}}$, but we report only the coordinates of the quadrature nodes at the bottom and left sides, and the coordinates of the internal bottom-left node, i.e., $x_{\underline{i}, \underline{q}}$, with $\underline{i} = (i_1, i_2)$ and $\underline{q} = (1, 1)$.
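As a concrete illustration of the construction in Figure 1, the sketch below builds the tensor-product cubature nodes and weights of a single two-dimensional cell from the 3-point Gauss–Legendre rule. The cell endpoints are hypothetical values chosen for the example; the paper's composite rule repeats this construction over all cells of the grid.

```python
import numpy as np
from itertools import product
from numpy.polynomial.legendre import leggauss

xi, w = leggauss(3)                      # 3-point rule on the reference interval [-1, 1]
cell = [(0.0, 0.5), (0.5, 1.0)]          # hypothetical edges (x_i, x_{i+1}) for directions 1 and 2

nodes_1d, weights_1d = [], []
for a, b in cell:
    nodes_1d.append(0.5 * (b - a) * xi + 0.5 * (a + b))   # affine map of the reference nodes
    weights_1d.append(0.5 * (b - a) * w)

# Tensor product of the univariate rules: 9 cubature nodes (the red dots in Figure 1)
cub_nodes = np.array(list(product(*nodes_1d)))
cub_weights = np.array([w1 * w2 for w1, w2 in product(*weights_1d)])
```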
Figure 3. Test Case 2. Relative approximation error curves versus the number of intervals N per direction for the numerical integration of a d-dimensional Gaussian bell-shaped function (Genz-4), $d = 10, 20, 50, 100$, using the (i) trapezoidal formula ($m = 2$); (ii) Simpson formula ($m = 4$); (iii) Clenshaw–Curtis formula ($m = 4$); (iv) Gauss–Legendre formula ($m = 2$). The expected convergence rate is shown by the number close to the triangular shapes and is proportional to $N^{-2}$ (trapezoidal rule) and $N^{-4}$ (Simpson, Clenshaw–Curtis, and Gauss–Legendre rules).
Figure 4. Test Case 3. Relative approximation error curves versus the number of intervals N per direction for the numerical integration of a d-dimensional exponential function (Genz-5), $d = 10, 20, 50, 100$, using the (i) trapezoidal formula ($m = 2$); (ii) Simpson formula ($m = 4$); (iii) Clenshaw–Curtis formula ($m = 4$); (iv) Gauss–Legendre formula ($m = 2$). The expected convergence rate is shown by the number close to the triangular shapes and is proportional to $N^{-2}$ (trapezoidal rule) and $N^{-4}$ (Simpson, Clenshaw–Curtis, and Gauss–Legendre rules).
Figure 5. Test Case 4. Relative approximation error curves versus the number of integration nodes m of the univariate quadrature rule for the numerical integration of a d-dimensional Gaussian bell-shaped function (Genz-4, top panels) and the exponential function (Genz-5, bottom panels). We integrate both multidimensional functions on a $2^d$ grid with $d = 10, 20, 50, 100, 200$. In panels (i) and (iii), we use the m-point Clenshaw–Curtis (CC) formulas; in panels (ii) and (iv), we use the m-point Gauss–Legendre (GL) formulas. In all cases, we consider $m = 1, 2, \ldots$
Figure 6. Test Case 5. Approximation error curves versus the number of dimensions d for the numerical integration of a singular function using different partitions of the multidimensional domain. We apply the Gauss–Legendre formula with $m = 4$. The plot on the left shows the errors computed using grids with two and three partitions per direction, with the singularity located at the middle of every direction. The plot on the right shows the same test case on grids with two, three, and four partitions per direction, with the singularity located at the three-quarter mark of every direction. When the singularity is located at an interface between two cells, the integration formula is exact and the error is of the order of machine precision, independent of the number of dimensions.
Figure 7. Test Case 6. Approximation error curves versus the number of nodes m for the numerical integration of the combination of a singular function and the Chebyshev degree-10 polynomial of the first kind on a $d = 10$ dimensional domain.
Figure 8. Test Case 7. Approximation error curves versus the elapsed time. We compare the performance of the strategy proposed in this paper with that of two Monte Carlo integration methods (MC Lattice, MC Sobol). The elapsed time is measured in seconds, and the number of dimensions of the integrated function, d, is indicated below each panel. Each panel shows that our integration strategy is more efficient than the Monte Carlo method.
Table 1. Test Case 6: exact value of the integral of the function f_μ(x) for μ = 1, …, 10.
μ      I(f_μ)
1       0.831452111670637
2      −0.491529937082487
3      −0.404343692999220
4      −0.126345103704993
5       0.205801972908859
6      −0.054903301002036
7      −0.148568167693588
8      −0.028507651909931
9       0.108930994483310
10     −0.016477706416599
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
