Article

Optimal and Efficient Approximations of Gradients of Functions with Nonindependent Variables

by Matieyendou Lamboni 1,2
1 Department DFR-ST, University of Guyane, 97346 Cayenne, France
2 228-UMR Espace-Dev, University of Guyane, University of Réunion, IRD, University of Montpellier, 34090 Montpellier, France
Axioms 2024, 13(7), 426; https://doi.org/10.3390/axioms13070426
Submission received: 12 May 2024 / Revised: 16 June 2024 / Accepted: 19 June 2024 / Published: 25 June 2024
(This article belongs to the Special Issue Recent Research on Functions with Non-Independent Variables)

Abstract

Gradients of smooth functions with nonindependent variables are relevant for exploring complex models and for the optimization of functions subjected to constraints. In this paper, we investigate new and simple approximations and computations of such gradients by making use of independent, central, and symmetric variables. Such approximations are well suited for applications in which the computation of the gradients is too expensive or impossible. The derived upper bounds of the biases of our approximations do not suffer from the curse of dimensionality for any 2-smooth function, and they theoretically improve the known results. Also, our estimators of such gradients reach the optimal (mean squared error) rates of convergence (i.e., $O(N^{-1})$) for the same class of functions. Numerical comparisons based on a test case and a high-dimensional PDE model show the efficiency of our approach.

1. Introduction

Nonindependent variables arise when at least two variables do not vary independently; such variables are often characterized by their covariance matrices, distribution functions, copulas, and weighted distributions (see, e.g., [1,2,3,4,5,6,7]). Recently, dependency models have provided explicit functions that link these variables together by means of additional independent variables [8,9,10,11,12]. Models with nonindependent input variables, including functions subjected to constraints, are widely encountered in different scientific fields, such as data analysis, quantitative risk analysis, and uncertainty quantification (see, e.g., [13,14,15]).
Analyzing such functions requires being able to calculate or compute their dependent gradients, that is, the gradients that account for the dependencies among the inputs. Recall that gradients are involved in (i) inverse problems and optimization (see, e.g., [16,17,18,19,20]); (ii) the exploration of complex mathematical models or simulators (see [21,22,23,24,25,26,27,28] for independent inputs and [9,15] for nonindependent variables); (iii) Poincaré inequalities and equalities [9,28,29,30]; and, recently, (iv) the derivative-based ANOVA (i.e., exact expansions) of functions [28]. While the first-order derivatives of functions with nonindependent variables were derived in [9] for screening dependent inputs of high-dimensional models, the theoretical expressions of the gradients of such functions (dependent gradients) were introduced in [15], thereby highlighting the difference between the gradients and the first-order partial derivatives when the input variables are dependent or correlated.
In high-dimensional settings and for time-demanding models, an efficient approach for computing the dependent gradients provided in [15] using a few model evaluations is worth investigating. So far, adjoint methods can provide the exact classical gradients for some classes of PDE/ODE-based models [31,32,33,34,35,36]. Additionally, Richardson's extrapolation and its generalization considered in [37] provide accurate estimates of the classical gradients using a number of model runs that strongly depends on the dimensionality. In contrast, the Monte Carlo approach allows for computing the classical gradients using a number of model runs that can be much smaller than the dimensionality (i.e., $N \ll d$) [17,38,39]. The Monte Carlo approach is a consequence of the Stokes theorem, which states that the expectation of a function evaluated at a random point about $\mathbf{x} \in \mathbb{R}^d$ is the gradient of a certain function. Such a property leads to randomized approximations of the classical gradients in derivative-free optimization or zero-order stochastic optimization (see [16,18,19,20] and the references therein). Such approximations are also relevant for applications in which the computations of the gradients are impossible [20].
Most of the randomized approximations of the classical gradients, including the Monte Carlo approach, rely on randomized kernels and/or random vectors that are uniformly distributed on the unit ball. The qualities of such approximations are often assessed by the upper bounds of the biases and the rates of convergence. The upper bounds provided in [19,20,40] depend on the dimensionality in general.
In this paper, we propose new surrogates of the gradients of smooth functions with nonindependent inputs and the associated estimators that comply with the following requirements:
  • They are simple and applicable to a wide class of functions by making use of model evaluations at randomized points, which are only based on independent, central, and symmetric variables;
  • They lead to a dimension-free upper bound of the bias, and they improve the best known upper bounds of the bias for the classical gradients;
  • They lead to the optimal and parametric (mean squared error) rates of convergence;
  • They increase the computational efficiency and accuracy of the gradient estimates by means of a set of constraints.
The surrogates of the dependent gradients are derived in Section 3 by combining the properties of (i) the generalized Richardson extrapolation approach, thanks to a set of constraints, and (ii) the Monte Carlo approach based only on independent random variables that are symmetrically distributed about zero. Such expressions are followed by their orders of approximation, their biases, and a comparison with known results for the classical gradients. We also provide the estimators of such surrogates and their associated mean squared errors, including the rates of convergence for a wide class of functions (see Section 3.3). A number of numerical comparisons are considered so as to assess the efficiency of our approach: Section 4 presents comparisons of our approach to other methods, and simulations based on a high-dimensional PDE (spatiotemporal) model with given autocorrelations among the initial conditions are considered in Section 5 to compare our approach to the adjoint-based methods. We conclude this work in Section 6.

2. Preliminaries

For an integer $d > 0$, let $\mathbf{X} := (X_1, \ldots, X_d)$ be a random vector of continuous and nonindependent variables having $F$ as the joint cumulative distribution function (CDF) (i.e., $\mathbf{X} \sim F$). For any $j \in \{1, \ldots, d\}$, we use $F_{x_j}$ or $F_j$ for the marginal CDF of $X_j$ and $F_j^{-1}$ for its inverse. Also, we use $(\sim j) := (1, \ldots, j-1, j+1, \ldots, d)$ and $\mathbf{X}_{\sim j} := (X_1, \ldots, X_{j-1}, X_{j+1}, \ldots, X_d)$. The equality (in distribution) $\mathbf{X} \stackrel{d}{=} \mathbf{Z}$ means that $\mathbf{X}$ and $\mathbf{Z}$ have the same CDF.
As the sample values of $\mathbf{X}$ are dependent, here we use $\frac{\partial f}{\partial x_k}$ for the formal partial derivative of $f$ with respect to $x_k$, that is, the partial derivative obtained by considering the other inputs as constant or independent of $x_k$. Thus, $\nabla f := \left( \frac{\partial f}{\partial x_1}, \ldots, \frac{\partial f}{\partial x_d} \right)^T$ stands for the formal or classical gradient of $f$.
Given an open set $\Omega \subseteq \mathbb{R}^d$, consider a weakly partially differentiable function $f : \Omega \to \mathbb{R}$ [41,42]. Given $\imath := (i_1, \ldots, i_d) \in \mathbb{N}^d$, denote $D^{(\imath)} f := \frac{\partial^{\sum_{k=1}^{d} i_k}}{\partial x_1^{i_1} \cdots \partial x_d^{i_d}} f$; $(\mathbf{x})^{\imath} = \mathbf{x}^{\imath} := \prod_{k=1}^{d} x_k^{i_k}$; $\imath! := i_1! \cdots i_d!$, and consider the Hölder space of $\alpha$-smooth functions given by ($\forall\, \mathbf{x}, \mathbf{y} \in \mathbb{R}^d$)
$$\mathcal{H}_\alpha := \left\{ f : \mathbb{R}^d \to \mathbb{R}^n \,:\, \left\| f(\mathbf{x}) - \sum_{0 \le i_1 + \cdots + i_d \le \alpha - 1} \frac{D^{(\imath)} f(\mathbf{y})}{\imath!}\, (\mathbf{x} - \mathbf{y})^{\imath} \right\| \le M_\alpha\, \|\mathbf{x} - \mathbf{y}\|_2^{\alpha} \right\},$$
with $\alpha \ge 1$ and $M_\alpha > 0$. We use $\|\cdot\|_2$ for the Euclidean norm, $\|\cdot\|_1$ for the $L^1$ norm, $\mathbb{E}(\cdot)$ for the expectation, and $\mathbb{V}(\cdot)$ for the variance.
For the stochastic evaluations of functions, consider $L, q \in \mathbb{N} \setminus \{0\}$, $\beta_\ell \in \mathbb{R}$ with $\ell = 1, \ldots, L$, $\mathbf{h} := (h_1, \ldots, h_d) \in \mathbb{R}_+^d$, and denote with $\mathbf{V} := (V_1, \ldots, V_d)$ a $d$-dimensional random vector of independent variables satisfying the following: $\forall\, j \in \{1, \ldots, d\}$,
$$\mathbb{E}[V_j] = 0; \qquad \mathbb{E}[V_j^2] = \sigma^2; \qquad \mathbb{E}[V_j^{2q+1}] = 0; \qquad \mathbb{E}[V_j^{2q}] < +\infty.$$
Random vectors of independent variables that are symmetrically distributed about zero are instances of $\mathbf{V}$, including the standard Gaussian random vector and uniform distributions that are symmetric about zero.
Also, denote $\mathbf{h}\mathbf{V} := (h_1 V_1, \ldots, h_d V_d)$; $\mathbf{h}^{-1}\mathbf{V} := (V_1/h_1, \ldots, V_d/h_d)$; and $\beta_\ell \mathbf{h}\mathbf{V} := (\beta_\ell h_1 V_1, \ldots, \beta_\ell h_d V_d)$. The reals $\beta_\ell$ are used for controlling the order of the approximations and the order of the derivatives (i.e., $\|\imath\|_1 = 1, 2$) that we are interested in. Finally, the $h_j$s are used to define a neighborhood of a sample point of $\mathbf{X}$ (i.e., $\mathbf{x}$). Thus, using $\beta_{max} := \max\left( |\beta_1|, \ldots, |\beta_L| \right)$ and keeping in mind the variance of $\beta_\ell h_j V_j$, we assume that $\forall\, j \in \{1, \ldots, d\}$,
Assumption 1. 
$$\beta_{max}\, h_j\, \sigma \le 1/2, \quad \text{or equivalently} \quad 0 < \beta_{max}\, h_j\, |V_j| \le 1 \ \text{for bounded } V_j\text{s}.$$

3. Main Results

This section aims at providing new expressions of the gradient of a function with nonindependent variables and the associated orders of approximation. We are also going to derive the estimators of such a gradient, including the optimal and parametric rates of convergence. Recall that the input variables are said to be nonindependent whenever there exist at least two variables $X_j, X_k$ such that the joint CDF $F_{j,k}(x_j, x_k) \neq F_j(x_j)\, F_k(x_k)$.

3.1. Stochastic Expressions of the Gradients of Functions with Dependent Variables

Using the fact that $\mathbf{X} \sim F$, with $F(\mathbf{x}) \neq \prod_{j=1}^{d} F_j(x_j)$, we are able to model $\mathbf{X}$ as follows [8,9,10,11,12,14,43]:
$$\mathbf{X}_{\sim j} \stackrel{d}{=} r_j\left( X_j, \mathbf{Z}_{\sim j} \right) = \left( r_{1,j}(X_j, \mathbf{Z}_{\sim j}), \ldots, r_{j-1,j}(X_j, \mathbf{Z}_{\sim j}),\, r_{j+1,j}(X_j, \mathbf{Z}_{\sim j}), \ldots, r_{d,j}(X_j, \mathbf{Z}_{\sim j}) \right)^T,$$
where $r_j : \mathbb{R}^d \to \mathbb{R}^{d-1}$, and $X_j$ and $\mathbf{Z}_{\sim j} := (Z_1, \ldots, Z_{j-1}, Z_{j+1}, \ldots, Z_d)$ are independent. Moreover, we have $(X_j, \mathbf{X}_{\sim j}) \stackrel{d}{=} (X_j, r_j(X_j, \mathbf{Z}_{\sim j}))$, and it is worth noting that the function $r_j$ is invertible with respect to $\mathbf{Z}_{\sim j}$ for continuous variables, that is,
$$\mathbf{Z}_{\sim j} = r_j^{-1}\left( \mathbf{X}_{\sim j} \mid X_j \right).$$
Note that the formal Jacobian matrix of $g : \mathbb{R}^d \to \mathbb{R}^d$, $\mathbf{x} \mapsto \mathbf{x}$ is the identity matrix. As $\mathbf{x}$ is a sample value of $\mathbf{X}$, the dependent Jacobian of $g$ based on the above dependency function is clearly not the identity matrix, because such a matrix accounts for the dependencies among the elements of $\mathbf{x}$. The dependent partial derivatives of $\mathbf{x}$ with respect to $x_j$ are then given by [9,15]
$$\mathcal{J}^{(j)}(\mathbf{x}) := \frac{\partial \mathbf{x}}{\partial x_j} = \left( \frac{\partial r_{1,j}}{\partial x_j}, \ldots, \underbrace{1}_{j\text{th position}}, \ldots, \frac{\partial r_{d,j}}{\partial x_j} \right)^T \left( x_j,\, r_j^{-1}(\mathbf{x}_{\sim j} \mid x_j) \right),$$
and the dependent Jacobian matrix becomes (see [15] for more details)
$$\mathcal{J}_d(\mathbf{x}) := \left[ \mathcal{J}^{(1)}(\mathbf{x}), \ldots, \mathcal{J}^{(d)}(\mathbf{x}) \right].$$
Moreover, the gradient of f with nonindependent variables is given by [15]
$$\mathrm{grad}(f)(\mathbf{x}) := \left( \mathcal{J}_d(\mathbf{x})^T\, \mathcal{J}_d(\mathbf{x}) \right)^{-1} \nabla f(\mathbf{x}) = G^{-1}(\mathbf{x})\, \nabla f(\mathbf{x}),$$
with $G(\mathbf{x}) := \mathcal{J}_d(\mathbf{x})^T \mathcal{J}_d(\mathbf{x})$ being the tensor metric and $G^{-1}(\mathbf{x})$ being its generalized inverse. Based on the above framework, Theorem 1 provides the stochastic expression of $\mathrm{grad}(f)(\mathbf{x})$. In what follows, denote $\mathbb{1} := (1, \ldots, 1)^T \in \mathbb{R}^d$.
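As a quick numerical illustration of this definition, the sketch below computes $\mathrm{grad}(f)(\mathbf{x}) = G^{-1}(\mathbf{x}) \nabla f(\mathbf{x})$ via a linear solve rather than an explicit inverse; the dependent Jacobian and the formal gradient used here are assumed, illustrative values, not values from the paper.

```python
import numpy as np

# Sketch of the definition above: grad(f)(x) = G^{-1}(x) grad_formal, with the
# tensor metric G(x) = J_d(x)^T J_d(x). The values below are illustrative only.
rho = 0.5
J_d = np.array([[1.0, rho],
                [rho, 1.0]])            # assumed dependent Jacobian, d = 2
G = J_d.T @ J_d                          # tensor metric
grad_formal = np.array([2.0, -1.0])      # assumed formal gradient of f at x
# Solve G y = grad_formal instead of forming G^{-1} explicitly.
grad_dep = np.linalg.solve(G, grad_formal)
```

Solving the linear system is preferable to inverting $G$ when $d$ is large, and it reproduces the definition exactly whenever $G$ is nonsingular.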
Theorem 1. 
Assume that $f \in \mathcal{H}_\alpha$, with $\alpha \ge 2L$, that Assumption 1 holds, and that the $\beta_\ell$s are distinct. Then, there exist $\alpha_1 \in \{1, \ldots, L\}$ and real coefficients $C_1, \ldots, C_L$ such that
$$\mathrm{grad}(f)(\mathbf{x}) = G^{-1}(\mathbf{x}) \left( \sum_{\ell=1}^{L} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] + O\!\left( \|\mathbf{h}\|_2^{2\alpha_1} \right) \mathbb{1} \right).$$
Proof. 
See Appendix A for the detailed proof. □
Using the Kronecker symbol $\delta_{1,r}$, the setting $L = 1$, $\beta_1 = 1$, $C_1 = 1$, or the constraints $\sum_{\ell=1}^{L=2} C_\ell \beta_\ell^r = \delta_{1,r}$, $r = 0, 1$, leads to the order of approximation $O(\|\mathbf{h}\|_2^2)$, while the constraints $\sum_{\ell=1}^{L} C_\ell \beta_\ell^r = \delta_{1,r}$, $r = 1, 3, 5, \ldots, 2L-1$, allow for increasing that order up to $O(\|\mathbf{h}\|_2^{2L})$. For distinct $\beta_\ell$s, the above constraints lead to the existence of the constants $C_1, \ldots, C_L$. Indeed, the constraints rely on a Vandermonde matrix of the form
$$A_L := \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \beta_1 & \beta_2 & \cdots & \beta_L \\ \beta_1^2 & \beta_2^2 & \cdots & \beta_L^2 \\ \vdots & \vdots & & \vdots \end{pmatrix},$$
which is invertible for distinct values of the $\beta_\ell$s (i.e., $\beta_{\ell_1} \neq \beta_{\ell_2}$), because the determinant $\det(A_L) = \prod_{1 \le \ell_1 < \ell_2 \le L} \left( \beta_{\ell_2} - \beta_{\ell_1} \right) \neq 0$.
Remark 1. 
For an even integer $L$, the following nodes may be considered: $\{\beta_1, \ldots, \beta_L\} = \left\{ \pm 2^k,\ k = 0, \ldots, \frac{L-2}{2} \right\}$. When $L$ is odd, one may add $0$ to the above set. Of course, there are other possibilities, provided that $\sum_{\ell=1}^{L} C_\ell \beta_\ell = 1$.
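The constraints above can be solved numerically for the $C_\ell$s. The sketch below is an illustrative reconstruction (not the paper's code): it assembles the Vandermonde system $\sum_\ell C_\ell \beta_\ell^r = \delta_{1,r}$, $r = 0, \ldots, L-1$, for given distinct nodes and solves for the coefficients.

```python
import numpy as np

# Sketch: recover the coefficients C_l from the constraints
# sum_l C_l beta_l^r = delta_{1,r}, r = 0, ..., L-1, for distinct nodes beta_l.
def coefficients(betas):
    betas = np.asarray(betas, dtype=float)
    L = len(betas)
    A = np.vander(betas, N=L, increasing=True).T   # Vandermonde: A[r, l] = beta_l^r
    rhs = np.zeros(L)
    rhs[1] = 1.0                                   # delta_{1, r}
    return np.linalg.solve(A, rhs)

# L = 2 with nodes (1, -1) recovers the central-difference weights (1/2, -1/2).
C = coefficients([1.0, -1.0])
```

For $L = 4$ with nodes $(1, -1, 2, -2)$ (as suggested by Remark 1), the returned weights satisfy $\sum_\ell C_\ell \beta_\ell = 1$ by construction.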
Beyond the strong assumption made on the functions in Theorem 1, and knowing that increasing $L$ will require more evaluations of $f$ at random points, we are going to derive the upper bounds of the biases of our approximations under different structural assumptions on the deterministic function $f$ and on $\mathbf{V}$, such as $f \in \mathcal{H}_\alpha$ with $\alpha > 1$. To that end, denote $\mathbf{R} := (R_1, \ldots, R_d)$ as a $d$-dimensional random vector of independent variables that are centered about zero and standardized (i.e., $\mathbb{E}[R_k^2] = 1$, $k = 1, \ldots, d$), and let $\mathcal{R}_c$ be the set of such random vectors. Define
$$K_1 := \inf_{\mathbf{R} \in \mathcal{R}_c} \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{E}\left[ \mathbf{R}^2 \|\mathbf{R}\|_2 \right] \right\|_1; \qquad K_2 := \inf_{\mathbf{R} \in \mathcal{R}_c} \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{E}\left[ \mathbf{R}^2 \|\mathbf{R}\|_2 \right] \right\|_2;$$
with $|G^{-1}|$ the matrix obtained by taking the absolute value of each entry of $G^{-1}$.
When $1 < \alpha \le 2$, only $L = 1$ or $L = 2$ can be considered for any function that belongs to $\mathcal{H}_\alpha$. To be able to derive the parametric rates of convergence, Corollary 1 starts by providing the upper bounds of the bias when $L = 2$.
Corollary 1. 
Consider $\beta_1 = 1$, $\beta_2 = -1$; $C_1 = 1/2$; and $C_2 = -1/2$. If $f \in \mathcal{H}_2$ and Assumption 1 holds, then there exists $M_2 > 0$ such that
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_1 \le \sigma\, M_2\, K_1\, \|\mathbf{h}\|_2;$$
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_2 \le \sigma\, M_2\, K_2\, \|\mathbf{h}\|_2.$$
Proof. 
Detailed proofs are provided in Appendix B. □
For a particular choice of V , we obtain the results below.
Corollary 2. 
Consider $\beta_1 = 1$, $\beta_2 = -1$; $C_1 = 1/2$; $C_2 = -1/2$. If $V_k \sim \mathcal{U}(-\xi, \xi)$, where $\xi > 0$, $k = 1, \ldots, d$, $f \in \mathcal{H}_2$, and Assumption 1 holds, then
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_1 \le M_2\, \xi\, \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{1} \right\|_1 \|\mathbf{h}\|_1;$$
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_2 \le M_2\, \xi\, \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{1} \right\|_2 \|\mathbf{h}\|_1.$$
Proof. 
Since $|V_k| \le \xi$, we have $\|\mathbf{h}\mathbf{V}\|_1 \le \xi \|\mathbf{h}\|_1$, and the results hold using the upper bounds $M_2 \left\| |G^{-1}(\mathbf{x})|\, \mathbb{E}\left[ \frac{\mathbf{V}^2}{\sigma^2} \|\mathbf{h}\mathbf{V}\|_1 \right] \right\|_1$ and $M_2 \left\| |G^{-1}(\mathbf{x})|\, \mathbb{E}\left[ \frac{\mathbf{V}^2}{\sigma^2} \|\mathbf{h}\mathbf{V}\|_1 \right] \right\|_2$ obtained in Appendix B. □
It is worth noting that choosing $h_k = h$ and $\xi = 1/d^2$ leads to a dimension-free upper bound of the bias, that is,
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_1 \le \frac{M_2\, h}{d} \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{1} \right\|_1,$$
because $\left\| |G^{-1}(\mathbf{x})|\, \mathbb{1} \right\|_2$ remains a function of $d$ in general.
For the sake of generality, Corollary 3 provides the bias of our approximations for highly smooth functions. To that end, define
$$K_{2,L} := \inf_{\mathbf{R} \in \mathcal{R}_c} \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{E}\left[ \mathbf{R}^2 \|\mathbf{R}\|_2^L \right] \right\|_2; \qquad K_3 := \sum_{\ell=1}^{L+1} |C_\ell|\, |\beta_\ell|^{1+L}.$$
Corollary 3. 
For an odd integer $L > 2$, consider $\sum_{\ell=1}^{L+1} C_\ell \beta_\ell^r = \delta_{1,r}$, $r = 0, 1, \ldots, L$. If $f \in \mathcal{H}_{1+L}$ and Assumption 1 holds, then there exists $M_{1+L} > 0$ such that
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L+1} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_2 \le \sigma^L\, M_{1+L}\, K_{2,L}\, K_3\, \|\mathbf{h}\|_2^L.$$
Moreover, if $V_k \sim \mathcal{U}(-\xi, \xi)$, with $\xi > 0$ and $k = 1, \ldots, d$, then
$$\left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L+1} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_2 \le \xi^L\, M_{1+L}\, \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{1} \right\|_2 \|\mathbf{h}\|_1^L\, K_3.$$
Proof. 
The proofs are similar to those of Corollary 1 (see Appendix B). □
In view of the results provided in Corollary 3, finding the $\beta_\ell$s and $C_\ell$s that minimize the quantity $K_3 = \sum_{\ell=1}^{L+1} |C_\ell|\, |\beta_\ell|^{1+L}$ might be helpful for improving the above upper bounds.

3.2. Links to Other Works for Independent Input Variables

Recall that for independent input variables, the matrix $G^{-1}(\mathbf{x})$ comes down to the identity matrix, and $\mathrm{grad}(f) = \nabla f$. Thus, Equation (7) becomes
$$\left\| \nabla f(\mathbf{x}) - \sum_{\ell=1}^{L=2} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_2 \le M_2\, h,$$
when $\xi = \sqrt{d}/d^2$. Taking $\xi = \sqrt{d}/d$ leads to the upper bound $M_2\, h\, d$.
Other results on the upper bounds of the bias of the (formal) gradient approximations have been provided in [19,20] (and the references therein) under the same assumptions on $f$ and on the evaluations of $f$. Such results rely on a random vector $\mathbf{S}$ that is uniformly distributed on the unit ball and a kernel $K$. Under that framework, the upper bound derived in [19,20] is
$$\left\| \nabla f(\mathbf{x}) - \frac{d}{h}\, \mathbb{E}\left[ f\left( \mathbf{x} + U h \mathbf{S} \right) \mathbf{S}\, K(U) \right] \right\|_2 \le 2\sqrt{2}\, \alpha\, d\, M_\alpha\, h^{\alpha - 1},$$
where $U \sim \mathcal{U}(-1, 1)$ is independent of $\mathbf{S}$. Therefore, our results improve the upper bound obtained in [19,20] when $\alpha = 2$, for instance.

3.3. Computation of the Gradients of Functions with Dependent Variables

Consider a sample of $\mathbf{V}$ given by $\left\{ \mathbf{V}_i := (V_{i,1}, \ldots, V_{i,d}) \right\}_{i=1}^{N}$. Using Equation (3), the estimator of $\mathrm{grad}(f)(\mathbf{x})$ is derived as follows:
$$\widehat{\mathrm{grad}(f)}(\mathbf{x}) := G^{-1}(\mathbf{x})\, \frac{1}{N} \sum_{i=1}^{N} \sum_{\ell=1}^{L} C_\ell\, f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V}_i \right) \frac{\mathbf{h}^{-1}\mathbf{V}_i}{\sigma^2}.$$
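For independent inputs (so that $G^{-1}$ is the identity), the estimator with $L = 2$, $\beta = (1, -1)$, $C = (1/2, -1/2)$, and uniform $V_j \sim \mathcal{U}(-\xi, \xi)$ can be sketched as follows. The values of $N$, $h$, and $\xi$ are illustrative, and plain Monte Carlo sampling is used instead of the Sobol/Gram–Schmidt design described in Section 4; this is a sketch, not the paper's implementation.

```python
import numpy as np

# Sketch of the estimator above for independent inputs (G = identity), with
# L = 2, beta = (1, -1), C = (1/2, -1/2), and V_j ~ U(-xi, xi), sigma^2 = xi^2/3.
def grad_hat(f, x, N=20_000, h=1e-2, xi=0.5, seed=0):
    sigma2 = xi**2 / 3.0
    rng = np.random.default_rng(seed)
    V = rng.uniform(-xi, xi, size=(N, len(x)))   # independent, central, symmetric
    fwd = np.array([f(xp) for xp in x + h * V])   # f(x + h V_i)
    bwd = np.array([f(xm) for xm in x - h * V])   # f(x - h V_i)
    # (1/N) sum_i [ (1/2) f(x + h V_i) - (1/2) f(x - h V_i) ] (V_i / h) / sigma^2
    return ((0.5 * fwd - 0.5 * bwd)[:, None] * V / h).mean(axis=0) / sigma2

# Example: f(x) = x_1^2 + 3 x_2 has gradient (2 x_1, 3); at x = (1, 0) it is (2, 3).
g = grad_hat(lambda x: x[0]**2 + 3 * x[1], np.array([1.0, 0.0]))
```

With the seed fixed, the estimate lands close to the exact gradient $(2, 3)$; the Monte Carlo error shrinks as $N$ grows, consistent with Theorem 2.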
To assess the quality of such an estimator, it is common to use the mean squared error (MSE) and the rates of convergence. The MSEs are also often used in statistics for determining the optimal value of $\mathbf{h}$. Theorem 2 and Corollary 4 provide such quantities of interest. To that end, define
$$K_4 := \inf_{\mathbf{R} \in \mathcal{R}_c} \mathbb{E}\left[ \left\| G^{-1}(\mathbf{x})\, \left( \mathbf{h}^{-1}\mathbf{R} \right) \right\|_2^2 \|\mathbf{R}\|_2^2 \right].$$
Theorem 2. 
Consider $\beta_1 = 1$, $\beta_2 = -1$; $C_1 = 1/2$; and $C_2 = -1/2$. If $f \in \mathcal{H}_2$ and Assumption 1 holds, then
$$\mathbb{E}\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2 \le \sigma^2 M_2^2 K_2^2\, \|\mathbf{h}\|_2^2 + \frac{M_1^2\, K_4\, \|\mathbf{h}\|_2^2}{N}.$$
Moreover, if $V_k \sim \mathcal{U}(-\xi, \xi)$, with $\xi > 0$, $k = 1, \ldots, d$, and $\mathbf{R}_0 := \mathbf{V}/\sigma$, then
$$\mathbb{E}\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2 \le M_2^2\, \xi^2 \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{1} \right\|_2^2 \|\mathbf{h}\|_1^2 + \frac{3\, d\, M_1^2\, \|\mathbf{h}\|_2^2}{N}\, \mathbb{E}\left\| G^{-1}(\mathbf{x})\, \left( \mathbf{h}^{-1}\mathbf{R}_0 \right) \right\|_2^2.$$
Proof. 
See Appendix C. □
Using a uniform bandwidth, that is, $h_k = h$ for $k = 1, \ldots, d$, the upper bounds of the MSEs provided in Theorem 2 have simple expressions. Indeed, the upper bounds in Equations (8) and (9) become, respectively,
$$\sigma^2 M_2^2 K_2^2\, d\, h^2 + \frac{M_1^2\, d}{N} \inf_{\mathbf{R} \in \mathcal{R}_c} \mathbb{E}\left[ \left\| G^{-1}(\mathbf{x})\, \mathbf{R} \right\|_2^2 \|\mathbf{R}\|_2^2 \right];$$
$$M_2^2\, \xi^2 \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{1} \right\|_2^2 d^2 h^2 + \frac{3\, d\, M_1^2}{N}\, \mathbb{E}\left\| G^{-1}(\mathbf{x})\, \mathbf{R}_0 \right\|_2^2.$$
It comes out that the second terms of the above upper bounds do not depend on the bandwidth $h$. This key observation leads to the derivation of the optimal and parametric rates of convergence of the proposed estimator.
Corollary 4. 
Under the assumptions made in Theorem 2, if $\xi = d^{-3/2}$ and $h_k = h = N^{-\gamma/2}$, with $\gamma \in\, ]1, 2[$, then we have
$$\mathbb{E}\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2 = O\!\left( N^{-1} d^2 \right).$$
Proof. 
The proof is straightforward, since $h^2 = N^{-\gamma}$ with $\gamma > 1$, so the squared bias term is $o(N^{-1})$ when $N \to \infty$. □
It is worth noting that the upper bound of the squared bias obtained in Corollary 4 does not depend on the dimensionality, thanks to the choice of $\xi = d^{-3/2}$. However, the derived rate of convergence depends on $d^2$, meaning that our estimator suffers from the curse of dimensionality. In higher dimensions, an attempt to improve our results consists of controlling the upper bound of the second-order moment of the estimator through $\sum_{\ell=1}^{L} C_\ell \beta_\ell$.
Remark 2. 
For highly smooth functions (i.e., $f \in \mathcal{H}_{1+L}$, with $L > 3$) and under the assumptions made in Corollary 3, we can check that (see Appendix C)
$$\mathbb{E}\left\| \widehat{\mathrm{grad}(f)}(\mathbf{x}) - \mathrm{grad}(f)(\mathbf{x}) \right\|_2^2 \le \xi^{2L} M_{1+L}^2 \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{1} \right\|_2^2 \|\mathbf{h}\|_1^{2L} K_3^2 + \frac{3\, d\, M_1^2\, \|\mathbf{h}\|_2^2}{N} \left( \sum_{\ell=1}^{L+1} |C_\ell \beta_\ell| \right)^2 \mathbb{E}\left\| G^{-1}(\mathbf{x})\, \left( \mathbf{h}^{-1}\mathbf{R}_0 \right) \right\|_2^2.$$

4. Computations of the Formal Gradient of Rosenbrock’s Function

For comparing our approach to (i) the finite differences method (FDM) using the R package numDeriv [44], with $h = 10^{-4}$, and (ii) the Monte Carlo (MC) approach provided in [17], with $h = 10^{-4}$, let us consider the Rosenbrock function given as follows: $\forall\, \mathbf{x} \in \mathbb{R}^d$,
$$r(\mathbf{x}) := \sum_{k=1}^{d-1} \left[ \left( 1 - x_k \right)^2 + 100 \left( x_{k+1} - x_k^2 \right)^2 \right].$$
The gradient of that function at $\mathbf{0}$ is $\nabla r(\mathbf{0}) = (-2, \ldots, -2, 0)^T \in \mathbb{R}^{100}$ (see [17]). To assess the numerical accuracy of each approach, the following measure is considered:
$$Err := \frac{\left\| \nabla r(\mathbf{0}) - \widehat{\nabla r}(\mathbf{0}) \right\|_1}{\left\| \nabla r(\mathbf{0}) \right\|_1},$$
where $\widehat{\nabla r}(\mathbf{0})$ is the estimated value of the gradient. Table 1 reports the values of $Err$ for the three approaches. To obtain the results using our approach, we used $h = 1/N$, with $N$ the sample size, and $\xi = 1/d^2 = 10^{-4}$, with $d = 100$. Also, the Sobol sequence was used for generating the values of the $V_j$s, and the Gram–Schmidt algorithm was applied to obtain (perfectly) orthogonal vectors for a given $N$.
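A minimal reconstruction of this experiment is sketched below. It uses plain Monte Carlo sampling instead of the orthogonalized Sobol design, and illustrative values of $N$ and $h$, so the resulting $Err$ is only indicative of the method's behavior, not a reproduction of Table 1.

```python
import numpy as np

# Minimal sketch (assumed details: plain Monte Carlo, illustrative N and h) of
# the Section 4 test with Rosenbrock's function in d = 100.
d = 100

def rosenbrock(x):
    return np.sum((1 - x[:-1])**2 + 100 * (x[1:] - x[:-1]**2)**2)

grad_exact = np.full(d, -2.0)
grad_exact[-1] = 0.0                      # formal gradient of r at 0

def err(g_hat):                           # relative L1 error, as in the text
    return np.abs(grad_exact - g_hat).sum() / np.abs(grad_exact).sum()

rng = np.random.default_rng(1)
N, h, xi = 20_000, 1e-2, 1.0 / d**2       # xi = 1/d^2 = 1e-4, as in the text
sigma2 = xi**2 / 3.0
V = rng.uniform(-xi, xi, size=(N, d))
# Central-difference Monte Carlo estimate of grad r(0) with L = 2, beta = (1, -1):
diff = np.array([rosenbrock(h * v) - rosenbrock(-h * v) for v in V])
g_hat = (0.5 * diff[:, None] * V / h).mean(axis=0) / sigma2
```

Replacing the plain uniform draws with orthogonalized low-discrepancy vectors, as done in the paper, removes the cross terms that dominate the Monte Carlo noise here and is what allows the small sample sizes reported in Table 1.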
Based on Table 1, our approach provides efficient results compared to the other methods. Since the FDM is not feasible when $N < 2d = 200$, it comes out that our approach is quite flexible, thanks to $L$ and the fact that the gradient can be computed for every value of $N$. Increasing $N$ improves our results, as expected.

5. Application to a Heat PDE Model with Stochastic Initial Conditions

5.1. Heat Diffusion Model and Its Formal Gradient

Consider a time-dependent model $f(x, t)$ defined by the one-dimensional (1-D) diffusion PDE with stochastic initial conditions, that is,
$$\begin{cases} \dfrac{\partial f}{\partial t} - D\, \dfrac{\partial^2 f}{\partial x^2} = 0, & x \in\, ]0, 1[,\ t \in [0, T] \\[4pt] f(x, t = 0) = Z(x), & x \in [0, 1] \\[4pt] f(x = 0, t) = 0,\quad f(x = 1, t) = 1, & t \in [0, T], \end{cases}$$
where $D \in \mathbb{R}_+$ represents the diffusion coefficient. It is common to consider $J(Z(x)) := \frac{1}{2} \int_0^T \!\! \int_0^1 f(x, t)^2\, dx\, dt$ as the quantity of interest (QoI). The spatial discretisation consists in subdividing the spatial domain $[0, 1]$ into $d$ equally sized cells, which leads to $d$ initial conditions or inputs given by $Z(x_j)$, $j = 1, \ldots, d$. Given zero-mean random variables $(R_j,\ j = 1, \ldots, d)$, assume that $X_j := Z(x_j) = \sin(2\pi x_j) + s_j R_j$, $j = 1, \ldots, d$, where $s_j \in \mathbb{R}_+$ represents the inverse precision of our knowledge of the initial conditions. For the dynamic aspect, a time step of 0.025 is considered, starting from 0 up to $T = 5$.
Given a direction $z(x)$ and the Gâteaux derivative $\check{f}(x, t) := \frac{\partial f}{\partial z(x)}$, the tangent linear model is derived as follows:
$$\begin{cases} \dfrac{\partial \check{f}}{\partial t} - D\, \dfrac{\partial^2 \check{f}}{\partial x^2} = 0, & x \in\, ]0, 1[,\ t \in [0, T] \\[4pt] \check{f}(x, t = 0) = z(x), & x \in [0, 1] \\[4pt] \check{f}(x = 0, t) = \check{f}(x = 1, t) = 0, & t \in [0, T], \end{cases}$$
and we can check that the adjoint model (AM) (i.e., $f_a$) is given by
$$\begin{cases} -\dfrac{\partial f_a}{\partial t} - D\, \dfrac{\partial^2 f_a}{\partial x^2} = f, & x \in\, ]0, 1[,\ t \in [0, T] \\[4pt] f_a(x = 0, t) = f_a(x = 1, t) = 0, & t \in [0, T] \\[4pt] f_a(x, T) = 0, & x \in [0, 1]. \end{cases}$$
The formal gradient of $J(Z(x))$ with respect to the inputs $Z(x)$ is $\nabla_Z J(Z(x)) = f_a(x, 0)$. Remark that the above gradient relies on $f_a(x, 0)$, and only one evaluation of the adjoint model is needed.

5.2. Spatial Autocorrelations of Initial Conditions and the Tensor Metric

Recall that the above gradient is based on the assumption of independent input variables, thus suggesting that the initial conditions within different cells are uncorrelated. To account for the spatial autocorrelations between different cells, assume that the $d$ input variables follow a Gaussian process with the following autocorrelation function:
$$\rho\left( X_{j_1}, X_{j_2} \right) = \left( \frac{1}{2} \right)^{|j_1 - j_2|} \mathbb{1}_{[0, 3]}\left( |j_1 - j_2| \right); \qquad j_1, j_2 \in \{1, \ldots, d\},$$
where $\mathbb{1}_{[0,3]}(|j_1 - j_2|) = 1$ if $|j_1 - j_2| \in [0, 3]$ and zero otherwise. Such spatial autocorrelations lead to a (banded Toeplitz) correlation matrix of the form
$$\mathbf{R} := \begin{pmatrix} 1 & 0.5 & 0.25 & 0.125 & 0 & \cdots & 0 \\ 0.5 & 1 & 0.5 & 0.25 & 0.125 & \ddots & \vdots \\ 0.25 & 0.5 & 1 & 0.5 & 0.25 & \ddots & 0 \\ 0.125 & 0.25 & 0.5 & 1 & 0.5 & \ddots & 0.125 \\ 0 & 0.125 & 0.25 & 0.5 & 1 & \ddots & 0.25 \\ \vdots & \ddots & \ddots & \ddots & \ddots & \ddots & 0.5 \\ 0 & \cdots & 0 & 0.125 & 0.25 & 0.5 & 1 \end{pmatrix}.$$
Using the same standard deviation $s_j = s$ leads to the covariance matrix $\Sigma = s^2 \mathbf{R}$, and $\mathbf{X} = (X_1, \ldots, X_d) \sim \mathcal{N}_d(\boldsymbol{\mu}, \Sigma)$, with $\boldsymbol{\mu} := \left( \sin(2\pi c_1), \ldots, \sin(2\pi c_d) \right)$ and $c_1, \ldots, c_d$ being the centers of the cells. The associated dependency model is given below.
Consider the diagonal matrix $D_j = \mathrm{diag}\left( \Sigma_{1,1}, \ldots, \Sigma_{j-1,j-1}, \Sigma_{j+1,j+1}, \ldots, \Sigma_{d,d} \right)$ and the Gaussian random vector $\mathbf{W} \sim \mathcal{N}_{d-1}(\boldsymbol{\mu}_{\sim j}, D_j)$. Denote with $\Sigma^{(j)}$ the matrix obtained by moving the $j$th row and column of $\Sigma$ to the first row and column; $L^{(j)}$ is the Cholesky factor of $\Sigma^{(j)}$, and $\boldsymbol{\mu}^{(j)} := (\mu_j, \mu_1, \ldots, \mu_{j-1}, \mu_{j+1}, \ldots, \mu_d)$. We can see that $(X_j, \mathbf{X}_{\sim j}) \sim \mathcal{N}_d(\boldsymbol{\mu}^{(j)}, \Sigma^{(j)})$, and the dependency model is given by [10]
$$\left( X_j, \mathbf{X}_{\sim j} \right) = L^{(j)} \begin{pmatrix} \dfrac{X_j - \mathbb{E}[X_j]}{\sqrt{\Sigma_{j,j}}} \\[4pt] D_j^{-1/2} \left( \mathbf{W} - \boldsymbol{\mu}_{\sim j} \right) \end{pmatrix} + \boldsymbol{\mu}^{(j)}; \qquad j = 1, \ldots, d.$$
Based on Equation (10), we have $\frac{\partial (X_j, \mathbf{X}_{\sim j})}{\partial x_j} = \frac{L^{(j)}_{\cdot,1}}{\sqrt{\Sigma_{j,j}}} = \frac{\Sigma^{(j)}_{\cdot,1}}{\Sigma_{j,j}}$. Thus, we can deduce that $\mathcal{J}^{(j)} = \frac{\Sigma_{\cdot,j}}{\Sigma_{j,j}}$, with $\Sigma_{\cdot,j}$ being the $j$th column of $\Sigma$, and the dependent Jacobian becomes $\mathcal{J}_d = \left[ \mathcal{J}^{(1)}, \ldots, \mathcal{J}^{(d)} \right] = \left[ \frac{\Sigma_{\cdot,1}}{\Sigma_{1,1}}, \ldots, \frac{\Sigma_{\cdot,d}}{\Sigma_{d,d}} \right] = \frac{\Sigma}{s^2} = \mathbf{R}$, since $\Sigma_{j,j} = s_j^2 = s^2$ and $\Sigma = s^2 \mathbf{R}$. The tensor metric is then given by $G = \mathbf{R}^T \mathbf{R}$.
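The banded correlation matrix and the tensor metric $G = \mathbf{R}^T \mathbf{R}$ can be assembled directly from the autocorrelation function; a short sketch (with $d = 50$, as used in Section 5.3):

```python
import numpy as np

# Sketch of Section 5.2: banded autocorrelation rho(j1, j2) = (1/2)^{|j1-j2|}
# for |j1 - j2| <= 3 (zero beyond), the correlation matrix R, and the tensor
# metric G = R^T R induced by the Gaussian dependency model.
d = 50
lag = np.abs(np.arange(d)[:, None] - np.arange(d)[None, :])  # |j1 - j2|
R = np.where(lag <= 3, 0.5**lag, 0.0)                        # banded Toeplitz
G = R.T @ R                                                   # tensor metric
```

The dependent gradient of Section 5.3 is then obtained by solving $G\, \mathrm{grad}(f) = \nabla f$ with the formal gradient returned by the adjoint model.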

5.3. Comparisons between Exact Gradient and Estimated Gradients

For running the above PDE-based model using the R package deSolve [45], we set $D = 0.0011$ and $s = 1.96$. The exact, formal gradient associated with the mean values of the initial conditions is obtained by running the corresponding adjoint model. For estimating the gradient using the proposed estimators, we consider $L = 2, 3$ and $N = 50, 100, 150, 200$. We also use $h = 1/N$ and $V_j \sim \mathcal{N}(0, 1)$, $j = 1, \ldots, d = 50$. The Sobol sequence is used for generating the random values of the $V_j$s, and the Gram–Schmidt algorithm is applied to obtain perfectly orthogonal vectors for a given $N$.
Figure 1 shows the comparisons between the estimated and the exact values of the formal gradient $\nabla f$ (i.e., $\rho(X_{j_1}, X_{j_2}) = 0$) for $L = 1, 2$. Likewise, Figures 2 and 3 depict the dependent gradient $\mathrm{grad}(f) = (\mathbf{R}^T \mathbf{R})^{-1} \nabla f$ and its estimation. The estimates of both gradients are in line with the exact values using only $N L = 50$ (respectively, $N L = 100$) model evaluations when $L = 1$ and $N = 50$ (respectively, $L = 1$ and $N = 100$, or $L = 2$ and $N = 50$). Increasing the values of $L$ and $N$ gives the same quasiperfect results for both the formal and the dependent gradients (see Figure 3).

6. Conclusions

In this paper, we have proposed new, simple, and generic approximations of the gradients of functions with nonindependent input variables by means of independent, central, and symmetric variables and a set of constraints. It comes out that the biases of our approximations for a wide class of functions, such as 2-smooth functions, do not suffer from the curse of dimensionality by properly choosing the set of independent, central, and symmetric variables. For functions including only independent input variables, a theoretical comparison has shown that the upper bounds of the bias of the formal gradient derived in this paper outperformed the best known results.
For computing the dependent gradient of the function of interest, we have provided estimators of such a gradient by making use of evaluations of that function at $L N$ randomized points. Such estimators reach the optimal (mean squared error) rates of convergence (i.e., $O(N^{-1} d^2)$) for a wide class of functions. Numerical comparisons using a test case and simulations based on a PDE model with given autocorrelations among the initial conditions have shown the efficiency of our approach, even when $L = 1, 2$ constraints were used. Our approach is therefore flexible, thanks to $L$ and the fact that the gradient can be computed for every value of the sample size $N$ in general.
While the proposed estimators reach the parametric rate of convergence, note that the second-order moments of such estimators depend on $d^2$. An attempt to reach a dimension-free rate of convergence requires working in $\mathbb{C}$ rather than $\mathbb{R}$ when $L = 2$. In the future, it is worth investigating the derivation of optimal rates of convergence that are dimension-free or (at least) linear with respect to $d$ by considering $L > 3$ constraints. Also, combining such a promising approach with a transformation of the original space might be helpful for reducing the number of model evaluations in higher dimensions.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Acknowledgments

We would like to thank the reviewers for their comments that have helped to improve our manuscript.

Conflicts of Interest

The author declares no conflicts of interest.

Appendix A. Proof of Theorem 1

As $\imath = (i_1, \ldots, i_d)$, let $e_k := (0, \ldots, 0, \underbrace{1}_{k\text{th position}}, 0, \ldots, 0) \in \mathbb{R}^d$ and $\mathbf{q} = (q_1, \ldots, q_d) \in \mathbb{N}^d$. Multiplying the Taylor expansion of $f(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V})$ about $\mathbf{x}$, that is,
$$f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) = \sum_{p=0}^{m} \sum_{\|\imath\|_1 = p} \frac{D^{(\imath)} f(\mathbf{x})}{\imath!}\, \beta_\ell^p\, (\mathbf{h}\mathbf{V})^{\imath} + O\!\left( \left\| \beta_\ell \mathbf{h}\mathbf{V} \right\|_2^{m+1} \right),$$
by $\frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \in \mathbb{R}^d$ and the constant $C_\ell$, as well as taking the sum over $\ell = 1, \ldots, L$, we can see that the expectation $E := \sum_{\ell=1}^{L} C_\ell\, \mathbb{E}\left[ f(\mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V})\, \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right]$ becomes
$$E = \sum_{p \ge 0} \sum_{\|\imath\|_1 = p} \frac{D^{(\imath)} f(\mathbf{x})}{\imath!} \left( \sum_{\ell=1}^{L} C_\ell \beta_\ell^p \right) \frac{\mathbb{E}\left[ \mathbf{V}^{\imath}\, \mathbf{h}^{\imath}\, \mathbf{h}^{-1}\mathbf{V} \right]}{\sigma^2}.$$
Firstly, for a given $k \in \{1, \ldots, d\}$ and by independence, we can see that
$$\mathbb{E}\left[ \mathbf{V}^{\imath}\, \mathbf{h}^{\imath}\, \frac{V_k}{h_k} \right] = \mathbb{E}\left[ \mathbf{V}^{\imath + e_k} \right] \mathbf{h}^{\imath - e_k} \neq 0$$
only if $i_k = 2 q_k + 1$ and $i_j = 2 q_j$ for any $j \in \{1, \ldots, d\} \setminus \{k\}$, with $q_k, q_j \in \mathbb{N}$, which implies that $\imath = e_k + 2\mathbf{q}$. Thus, one obtains $\frac{\partial f}{\partial x_k}$ when $\|\imath\|_1 = \|e_k + 2\mathbf{q}\|_1 = 1$, using the facts that $\mathbb{E}[V_k^2] = \sigma^2$, $\mathbb{E}[V_j] = 0$, and $\sum_{\ell} C_\ell \beta_\ell = 1$. At this point, taking $k = 1, \ldots, d$ and setting $L = 1$, $\beta_\ell = 1$, and $C_\ell = 1$ results in an approximation of $\nabla f(\mathbf{x})$ of order $O(\|\mathbf{h}\|_2^2)$, because when $\|\imath\|_1 = 2$, $\mathbb{E}\left[ \mathbf{V}^{\imath}\, \mathbf{h}^{\imath}\, \frac{V_k}{h_k} \right] = 0$.
Secondly, for $L > 1$, the constraints $\sum_{\ell=1}^{L} C_\ell \beta_\ell^{r+1} = \delta_{0,r}$, $r = 0, 2, \ldots, 2(L-1)$, eliminate some higher-order terms so as to reach the order $O(\|\mathbf{h}\|_2^{2L})$. Using the other constraints, we complete the proof, bearing in mind Equation (2). □

Appendix B. Proof of Corollary 1

For $\mathbf{q} = (q_1, \ldots, q_d) \in \mathbb{N}^d$, $k \in \{1, \ldots, d\}$, and $e_k := (0, \ldots, 0, \underbrace{1}_{k\text{th position}}, 0, \ldots, 0) \in \mathbb{R}^d$, consider $s_k := \left\{ \mathbf{q} + e_k : \|\mathbf{q}\|_1 = 1 \right\}$. As $f \in \mathcal{H}_2$, we can write
$$f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) = \sum_{\|\imath\|_1 = 0}^{1} D^{(\imath)} f(\mathbf{x})\, \beta_\ell^{\|\imath\|_1}\, \frac{(\mathbf{h}\mathbf{V})^{\imath}}{\imath!} + \sum_{\substack{\|\imath\|_1 = 2 \\ \imath \notin s_k}} D^{(\imath)} f(\mathbf{x})\, \beta_\ell^2\, \frac{(\mathbf{h}\mathbf{V})^{\imath}}{\imath!} + R_k\left( \mathbf{h}, \beta_\ell, \mathbf{V} \right),$$
with the remainder term
$$R_k\left( \mathbf{h}, \beta_\ell, \mathbf{V} \right) = \sum_{\substack{\|\imath\|_1 = 2 \\ \imath \in s_k}} D^{(\imath)} f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \beta_\ell^2\, \frac{(\mathbf{h}\mathbf{V})^{\imath}}{\imath!} = \sum_{\|\mathbf{q}\|_1 = 1} D^{(e_k + \mathbf{q})} f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \beta_\ell^2\, \frac{(\mathbf{h}\mathbf{V})^{\mathbf{q} + e_k}}{(e_k + \mathbf{q})!} = h_k V_k\, \beta_\ell^2 \sum_{\|\mathbf{q}\|_1 = 1} D^{(e_k + \mathbf{q})} f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{(\mathbf{h}\mathbf{V})^{\mathbf{q}}}{(e_k + \mathbf{q})!}.$$
Denote $R_k^0 := \sum_{\|\mathbf{q}\|_1 = 1} D^{(e_k + \mathbf{q})} f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{(\mathbf{h}\mathbf{V})^{\mathbf{q}}}{(e_k + \mathbf{q})!}$, and remark that $|R_k^0| \le M_2 \|\mathbf{h}\mathbf{V}\|_1$. Using Theorem 1, we can see that the absolute value of the bias, that is,
$$B_1 := \left\| \mathrm{grad}(f)(\mathbf{x}) - G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L} C_\ell\, \mathbb{E}\left[ f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_1,$$
is given by
$$B_1 = \left\| G^{-1}(\mathbf{x})\, \mathbb{E}\left[ \nabla f(\mathbf{x}) - \sum_{\ell=1}^{L} C_\ell\, f\left( \mathbf{x} + \beta_\ell \mathbf{h}\mathbf{V} \right) \frac{\mathbf{h}^{-1}\mathbf{V}}{\sigma^2} \right] \right\|_1 = \left\| G^{-1}(\mathbf{x}) \sum_{\ell=1}^{L} C_\ell\, \frac{\beta_\ell^2}{\sigma^2}\, \mathbb{E}\left[ \left( V_1^2 R_1^0, \ldots, V_d^2 R_d^0 \right)^T \right] \right\|_1 \le \sum_{\ell=1}^{L} |C_\ell|\, \frac{\beta_\ell^2}{\sigma^2} \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{E}\left[ \left( V_1^2 |R_1^0|, \ldots, V_d^2 |R_d^0| \right)^T \right] \right\|_1 \le \sum_{\ell=1}^{L} |C_\ell|\, \beta_\ell^2\, M_2 \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{E}\left[ \frac{\mathbf{V}^2}{\sigma^2}\, \|\mathbf{h}\mathbf{V}\|_1 \right] \right\|_1,$$
using the expansion of the product between matrices.
Using the same reasoning and taking the Euclidean norm, we obtain
$$B_2 \le \sum_{\ell=1}^{L} |C_\ell|\, \beta_\ell^2\, M_2 \left\| \left| G^{-1}(\mathbf{x}) \right| \mathbb{E}\left[ \frac{\mathbf{V}^2}{\sigma^2}\, \|\mathbf{h}\mathbf{V}\|_1 \right] \right\|_2.$$
The results hold using $\mathbf{R} := \mathbf{V}/\sigma$.

Appendix C. Proof of Theorem 2

Firstly, remark that M S E : = E g r a d ( f ) ^ ( x ) g r a d ( f ) ( x ) 2 2 is given by
M S E = E g r a d ( f ) ^ ( x ) E g r a d ( f ) ^ ( x ) 2 2 + E g r a d ( f ) ^ ( x ) g r a d ( f ) ( x ) 2 2 .
Since the bias E E g r a d ( f ) ^ ( x ) g r a d ( f ) ( x ) 2 2 has been derived in previous Corollaries, we are going to treat the second-order moment.
Secondly, since f H 2 implies that f H 1 , we have f ( x + β h V ) f ( x ) M 1 β h V 2 . Also, since = 1 L C = 0 , we then have
= 1 L C f ( x + β h V ) = = 1 L C f ( x + β h V ) f ( x ) ,
which leads to = 1 L C ( | u | ) f ( x + β h V ) = 1 L C β M 1 h V 2 and
Q ( x ) : = G 1 ( x ) V h 1 σ 2 = 1 L C f ( x + β h V ) = G 1 ( x ) V h 1 σ 2 = 1 L C f ( x + β h V ) f ( x ) .
Thirdly, using (3), we can see that $\mathbb{E}[Q(x)] = \mathbb{E}[\widehat{\mathrm{grad}(f)}(x)]$. Bearing in mind the definition of the Euclidean norm and the variance, the centered second-order moment, that is, $V_{\mathrm{grad}} := \mathbb{E}\left\| \widehat{\mathrm{grad}(f)}(x) - \mathbb{E}[\widehat{\mathrm{grad}(f)}(x)] \right\|_2^2$, satisfies
$$V_{\mathrm{grad}} \leq \frac{1}{N}\, \mathbb{E}\left\| G^{-1}(x)\, \frac{V \odot h^{-1}}{\sigma^2} \sum_{\ell=1}^{L} C_\ell\, f(x + \beta_\ell\, h \odot V) - \mathbb{E}\!\left[\widehat{\mathrm{grad}(f)}(x)\right] \right\|_2^2 \leq \frac{1}{N}\, \mathbb{E}\left\| G^{-1}(x)\, \frac{V \odot h^{-1}}{\sigma^2} \sum_{\ell=1}^{L} C_\ell\, f(x + \beta_\ell\, h \odot V) \right\|_2^2 \overset{(\mathrm{A1})}{=} \frac{1}{N}\, \mathbb{E}\left\| G^{-1}(x)\, \frac{V \odot h^{-1}}{\sigma^2} \sum_{\ell=1}^{L} C_\ell \left[ f(x + \beta_\ell\, h \odot V) - f(x) \right] \right\|_2^2 \leq \frac{1}{N}\, \mathbb{E}\left[ \left\| G^{-1}(x)\, \frac{V \odot h^{-1}}{\sigma^2} \right\|_2^2 \|h \odot V\|_2^2 \right] M_1^2 \left( \sum_{\ell=1}^{L} |C_\ell|\, \beta_\ell \right)^2 \leq \frac{1}{N}\, \mathbb{E}\left[ \left\| G^{-1}(x)\, \frac{V \odot h^{-1}}{\sigma^2} \right\|_2^2 \|V\|_2^2 \right] \|h\|_2^2\, M_1^2 \left( \sum_{\ell=1}^{L} |C_\ell|\, \beta_\ell \right)^2,$$
bearing in mind the Hölder inequality. The results hold using $R := V/\sigma$ and the fact that when $V_k \sim \mathcal{U}(-\xi, \xi)$, we have $\|V\|_2^2 \leq d\, \xi^2$ and $\sigma^2 = \xi^2/3$.
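To make the $O(N^{-1})$ rate of Theorem 2 concrete, the following sketch empirically estimates the MSE of a central, symmetric Monte Carlo gradient estimator (with independent uniform inputs) at two sample sizes; with a step $h$ small enough that the bias is negligible, a tenfold increase in $N$ should shrink the MSE by roughly a factor of ten. The scheme, test function, and constants are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def mse_of_estimator(N, reps=200, h=1e-3, xi=1.0):
    """Empirical MSE (illustrative sketch) of a central L = 2 Monte Carlo
    gradient estimator with independent U(-xi, xi) perturbations."""
    d, sigma2 = 2, xi**2 / 3.0                    # variance of U(-xi, xi)
    x, exact = np.array([1.0, 2.0]), np.array([2.0, 3.0])
    f = lambda X: X[:, 0]**2 + 3.0 * X[:, 1]      # grad f(x) = (2 x_1, 3)
    errs = []
    for _ in range(reps):
        V = rng.uniform(-xi, xi, size=(N, d))
        est = np.mean(0.5 * (f(x + h * V) - f(x - h * V))[:, None]
                      * V / (h * sigma2), axis=0)
        errs.append(np.sum((est - exact) ** 2))
    return float(np.mean(errs))

m_small, m_large = mse_of_estimator(500), mse_of_estimator(5000)
```

Here the bias term of the decomposition is essentially zero, so the measured MSE is dominated by the variance term, which the bound above shows decays as $1/N$.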

References

  1. Rosenblatt, M. Remarks on a Multivariate Transformation. Ann. Math. Statist. 1952, 23, 470–472.
  2. Nataf, A. Détermination des distributions dont les marges sont données. Comptes Rendus de l'Académie des Sciences 1962, 225, 42–43.
  3. Joe, H. Dependence Modeling with Copulas; Chapman & Hall/CRC: London, UK, 2014.
  4. McNeil, A.J.; Frey, R.; Embrechts, P. Quantitative Risk Management; Princeton University Press: Princeton, NJ, USA; Oxford, UK, 2015.
  5. Navarro, J.; Ruiz, J.M.; Aguila, Y.D. Multivariate weighted distributions: A review and some extensions. Statistics 2006, 40, 51–64.
  6. Sklar, A. Fonctions de répartition à n dimensions et leurs marges. Publ. Inst. Statist. Univ. Paris 1959, 8, 229–231.
  7. Durante, F.; Ignazzi, C.; Jaworski, P. On the class of truncation invariant bivariate copulas under constraints. J. Math. Anal. Appl. 2022, 509, 125898.
  8. Skorohod, A.V. On a representation of random variables. Theory Probab. Appl. 1976, 21, 645–648.
  9. Lamboni, M.; Kucherenko, S. Multivariate sensitivity analysis and derivative-based global sensitivity measures with dependent variables. Reliab. Eng. Syst. Saf. 2021, 212, 107519.
  10. Lamboni, M. Efficient dependency models: Simulating dependent random variables. Math. Comput. Simul. 2022, 200, 199–217.
  11. Lamboni, M. On exact distribution for multivariate weighted distributions and classification. Methodol. Comput. Appl. Probab. 2023, 25, 41.
  12. Lamboni, M. Measuring inputs-outputs association for time-dependent hazard models under safety objectives using kernels. Int. J. Uncertain. Quantif. 2024, 1–17.
  13. Kucherenko, S.; Klymenko, O.; Shah, N. Sobol' indices for problems defined in non-rectangular domains. Reliab. Eng. Syst. Saf. 2017, 167, 218–231.
  14. Lamboni, M. On dependency models and dependent generalized sensitivity indices. arXiv 2021, arXiv:2104.12938.
  15. Lamboni, M. Derivative Formulas and Gradient of Functions with Non-Independent Variables. Axioms 2023, 12, 845.
  16. Nemirovsky, A.; Yudin, D. Problem Complexity and Method Efficiency in Optimization; Wiley & Sons: New York, NY, USA, 1983; p. 404.
  17. Patelli, E.; Pradlwarter, H. Monte Carlo gradient estimation in high dimensions. Int. J. Numer. Methods Eng. 2010, 81, 172–188.
  18. Agarwal, A.; Dekel, O.; Xiao, L. Optimal Algorithms for Online Convex Optimization with Multi-Point Bandit Feedback. In Proceedings of the 23rd Conference on Learning Theory, COLT 2010, Haifa, Israel, 27–29 June 2010; pp. 28–40.
  19. Bach, F.; Perchet, V. Highly-Smooth Zero-th Order Online Optimization. In Proceedings of the 29th Annual Conference on Learning Theory, New York, NY, USA, 23–26 June 2016; Volume 49, pp. 257–283.
  20. Akhavan, A.; Pontil, M.; Tsybakov, A.B. Exploiting Higher Order Smoothness in Derivative-Free Optimization and Continuous Bandits. In Proceedings of NIPS'20, Red Hook, NY, USA, 2020. Available online: https://arxiv.org/abs/2006.07862 (accessed on 18 June 2024).
  21. Sobol, I.M.; Kucherenko, S. Derivative based global sensitivity measures and the link with global sensitivity indices. Math. Comput. Simul. 2009, 79, 3009–3017.
  22. Kucherenko, S.; Rodriguez-Fernandez, M.; Pantelides, C.; Shah, N. Monte Carlo evaluation of derivative-based global sensitivity measures. Reliab. Eng. Syst. Saf. 2009, 94, 1135–1148.
  23. Lamboni, M.; Iooss, B.; Popelin, A.L.; Gamboa, F. Derivative-based global sensitivity measures: General links with Sobol' indices and numerical tests. Math. Comput. Simul. 2013, 87, 45–54.
  24. Roustant, O.; Fruth, J.; Iooss, B.; Kuhnt, S. Crossed-derivative based sensitivity measures for interaction screening. Math. Comput. Simul. 2014, 105, 105–118.
  25. Fruth, J.; Roustant, O.; Kuhnt, S. Total interaction index: A variance-based sensitivity index for second-order interaction screening. J. Stat. Plan. Inference 2014, 147, 212–223.
  26. Lamboni, M. Derivative-based generalized sensitivity indices and Sobol' indices. Math. Comput. Simul. 2020, 170, 236–256.
  27. Lamboni, M. Derivative-based integral equalities and inequality: A proxy-measure for sensitivity analysis. Math. Comput. Simul. 2021, 179, 137–161.
  28. Lamboni, M. Weak derivative-based expansion of functions: ANOVA and some inequalities. Math. Comput. Simul. 2022, 194, 691–718.
  29. Bobkov, S. Isoperimetric and Analytic Inequalities for Log-Concave Probability Measures. Ann. Probab. 1999, 27, 1903–1921.
  30. Roustant, O.; Barthe, F.; Iooss, B. Poincaré inequalities on intervals: Application to sensitivity analysis. Electron. J. Statist. 2017, 11, 3081–3119.
  31. Le Dimet, F.X.; Talagrand, O. Variational algorithms for analysis and assimilation of meteorological observations: Theoretical aspects. Tellus A Dyn. Meteorol. Oceanogr. 1986, 38, 97–110.
  32. Le Dimet, F.X.; Ngodock, H.E.; Luong, B.; Verron, J. Sensitivity analysis in variational data assimilation. J. Meteorol. Soc. Jpn. 1997, 75, 245–255.
  33. Cacuci, D.G. Sensitivity and Uncertainty Analysis—Theory; Chapman & Hall/CRC: Boca Raton, FL, USA, 2005.
  34. Gunzburger, M.D. Perspectives in Flow Control and Optimization; SIAM: Philadelphia, PA, USA, 2003.
  35. Borzi, A.; Schulz, V. Computational Optimization of Systems Governed by Partial Differential Equations; SIAM: Philadelphia, PA, USA, 2012.
  36. Ghanem, R.; Higdon, D.; Owhadi, H. Handbook of Uncertainty Quantification; Springer International Publishing: Berlin/Heidelberg, Germany, 2017.
  37. Guidotti, E. calculus: High-Dimensional Numerical and Symbolic Calculus in R. J. Stat. Softw. 2022, 104, 1–37.
  38. Ancell, B.; Hakim, G.J. Comparing Adjoint- and Ensemble-Sensitivity Analysis with Applications to Observation Targeting. Mon. Weather Rev. 2007, 135, 4117–4134.
  39. Pradlwarter, H. Relative importance of uncertain structural parameters. Part I: Algorithm. Comput. Mech. 2007, 40, 627–635.
  40. Polyak, B.; Tsybakov, A. Optimal accuracy orders of stochastic approximation algorithms. Probl. Peredachi Inf. 1990, 26, 45–53.
  41. Zemanian, A. Distribution Theory and Transform Analysis: An Introduction to Generalized Functions, with Applications; Dover Books on Advanced Mathematics; Dover Publications: Mineola, NY, USA, 1987.
  42. Strichartz, R. A Guide to Distribution Theory and Fourier Transforms; Studies in Advanced Mathematics; CRC Press: Boca Raton, FL, USA, 1994.
  43. Lamboni, M. Kernel-based Measures of Association Between Inputs and Outputs Using ANOVA. Sankhya A 2024.
  44. Gilbert, P.; Varadhan, R. numDeriv: Accurate Numerical Derivatives; R package, CRAN repository, 2019. Available online: http://optimizer.r-forge.r-project.org/ (accessed on 18 June 2024).
  45. Soetaert, K.; Petzoldt, T.; Setzer, R.W. Solving Differential Equations in R: Package deSolve. J. Stat. Softw. 2010, 33, 1–25.
Figure 1. Exact gradient versus estimated gradients using L = 1 (∘) and L = 2 (+) of the QoI by considering the inputs as independent (formal gradients).
Figure 2. Exact gradient versus estimated gradients using L = 1 (∘) and L = 2 (+) of the QoI by considering the autocorrelations among the inputs (dependent gradients).
Figure 3. Exact gradient versus estimated gradients using L = 2 (∘) and L = 3 (+) of the QoI by considering the autocorrelations among the inputs (dependent gradients).
Table 1. Values of Err for three different approximations of the formal gradients. Columns give the number of total model evaluations (i.e., L × N).

Method            100      150      200      200      1000     1000
FDM [44]           -        -        -      0.005      -        -
MC [17]          0.042      -        -        -        -        -
This paper       0.035    0.014    0.009    0.009    0.0020   0.00199
(scheme)         L = 1    L = 1    L = 1    L = 2    L = 1    L = 2