Article

Adaptive Reconstruction of Imperfectly Observed Monotone Functions, with Applications to Uncertainty Quantification

1 ONERA, 29 Avenue de la Division Leclerc, 92320 Châtillon, France
2 Laboratoire MSSMat—UMR CNRS 8579, CentraleSupélec, 8–10 rue Joliot Curie, 91190 Gif sur Yvette, France
3 Mathematics Institute and School of Engineering, University of Warwick, Coventry CV4 7AL, UK
4 Zuse Institute Berlin, Takustraße 7, 14195 Berlin, Germany
* Author to whom correspondence should be addressed.
Algorithms 2020, 13(8), 196; https://doi.org/10.3390/a13080196
Submission received: 10 July 2020 / Revised: 5 August 2020 / Accepted: 10 August 2020 / Published: 13 August 2020

Abstract: Motivated by the desire to numerically calculate rigorous upper and lower bounds on deviation probabilities over large classes of probability distributions, we present an adaptive algorithm for the reconstruction of increasing real-valued functions. While this problem is similar to the classical statistical problem of isotonic regression, the optimisation setting alters several characteristics of the problem and opens natural algorithmic possibilities. We present our algorithm, establish sufficient conditions for convergence of the reconstruction to the ground truth, and apply the method to synthetic test cases and a real-world example of uncertainty quantification for aerodynamic design.

1. Introduction

This paper considers the problem of adaptively reconstructing a monotonically increasing function F from imperfect pointwise observations of this function. In the statistical literature, the problem of estimating a monotone function is commonly known as isotonic regression, and it is assumed that the observed data consist of noisy pointwise evaluations of F. However, we consider this problem under assumptions that differ from the standard formulation, and these differences motivate our algorithmic approach to the problem. To be concrete, our two motivating examples are that
$$F(x) := \mathbb{P}_{\Xi \sim \mu}[g(\Xi) \leq x]$$
is the cumulative distribution function (CDF) of a known real-valued function g of a random variable Ξ with known distribution μ , or that
$$F(x) := \sup_{(g, \mu) \in \mathcal{A}} \mathbb{P}_{\Xi \sim \mu}[g(\Xi) \leq x]$$
is the supremum of a family of such CDFs over some class $\mathcal{A}$. We assume that we have access to a numerical optimisation routine that can, for each x and some given numerical parameters q (e.g., the number of iterations or other convergence tolerance parameters), produce a numerical estimate or observation $G(x, q)$ of $F(x)$; furthermore, we assume that $G(x, q) \leq F(x)$ is always true, i.e., the numerical optimisation routine always under-estimates the true optimum value, and that the positive error $F(x) - G(x, q)$ can be controlled to some extent through the choice of the optimisation parameters q, but remains essentially influenced by randomness in the optimisation algorithm for each x. The assumption $G(x, q) \leq F(x)$ is for example coherent with either Equation (1), which may be approached by increasing the number of samples (say q) in a Monte Carlo simulation, or Equation (2), which is a supremum over a set that may be explored only partially by the algorithm.
A single observation $G(x, q)$ yields some limited information about $F(x)$; a key limitation is that one may not even know a priori how accurate $G(x, q)$ is. Naturally, one may repeatedly evaluate G at x, perhaps with different values of the optimisation parameters q, in order to more accurately estimate $F(x)$. However, a key observation is that a suite of observations $G(x_i, q_i)$, $i = 1, \dots, I$, contains much more information than simply estimates of $F(x_i)$, $i = 1, \dots, I$, and this information can and must be used. For example, suppose that the values $(G(x_i, q_i))_{i=1}^{I}$ are not increasing, e.g., because
$$G(x_i, q_i) > G(x_{i'}, q_{i'}) \quad \text{and} \quad x_i < x_{i'}.$$
Such a suite of observations would be inconsistent with the axiomatic requirement that F is an increasing function. In particular, while the observation at $x_i$ may be relatively good or bad on its own merits, the observation $G(x_{i'}, q_{i'})$ at $x_{i'}$, which violates monotonicity, is in some sense "useless", as it gives no better lower bound on $F(x_{i'})$ than the observation at $x_i$ does. The observation at $x_{i'}$ is thus a good candidate for repetition with more stringent optimisation parameters q, and this is not something that could have been known without comparing it to the rest of the data set.
The purpose of this article is to leverage this and similar observations to define an algorithm for the reconstruction of the function F , repeating old observations of insufficient quality and introducing new ones as necessary. The principal parameter in the algorithm is an “exchange rate” E that quantifies the degree to which the algorithm prefers to have a few high-quality evaluations versus many poor-quality evaluations. Our approach is slightly different from classical isotonic (or monotonic) regression, which is understood as the least-squares fitting of an increasing function to a set of points in the plane. The latter problem is uniquely solvable and its solution can be constructed by the pool adjacent violators algorithm (PAVA) extensively studied in Barlow et al. [1]. This algorithm consists of exploring the data set from left to right until the monotonicity condition is violated, and replacing the corresponding observations by their average while back-averaging to the left if needed to maintain monotonicity. Extensions to the PAVA have been developed by de Leeuw et al. [2] to consider non least-squares loss functions and repeated observations, by Tibshirani et al. [3] to consider “nearly isotonic” or “nearly convex” fits, and by Jordan et al. [4] to consider general loss functions and partially ordered data sets. Useful references on isotonic regression also include Robertson et al. [5] and Groeneboom and Jongbloed [6].
The remainder of this paper is structured as follows. Section 2 presents the problem description and notation, after which the proposed adaptive algorithm for the reconstruction of F is presented in Section 3. We demonstrate the convergence properties of the algorithm in Section 3.2 and study its performance on several analytically tractable test cases in Section 4. Section 5 details the application of the algorithm to a challenging problem of the form Equation (2) drawn from aerodynamic design. Some closing remarks are given in Section 6.

2. Notation and Problem Description

In the following, the "ground truth" response function that we wish to reconstruct is denoted $F : [a, b] \to \mathbb{R}$ and has inputs $x \in [a, b] \subset \mathbb{R}$. It is assumed that F is monotonically increasing and non-constant on $[a, b]$. In contrast, $G : [a, b] \times \mathbb{R}_+ \to \mathbb{R}$ denotes the numerical process used to obtain an imperfect pointwise observation y of $F(x)$ at some point $x \in [a, b]$ for some numerical parameter $q \in \mathbb{R}_+$. Here, on a heuristic level, $q > 0$ stands for the "quality" of the noisy evaluation $G(x, q)$.
The main aim of this paper is to show the effectiveness of the proposed algorithm for the adaptive reconstruction of F, which may be continuous or not, from imperfect pointwise observations $G(x_i, q_i)$ of F, where we are free to choose $x_{i+1}$ and $q_{i+1}$ adaptively based on $x_j$, $q_j$, and $G(x_j, q_j)$ for $j \leq i$.
First, we associate with the I imperfect pointwise observations $\{x_i, y_i := G(x_i, q_i)\}_{i=1}^{I} \subset [a, b] \times \mathbb{R}$ positive numbers $\{q_i\}_{i=1}^{I} \subset \mathbb{R}_+$, which we will call qualities. The quality $q_i$ quantifies the confidence we have in the pointwise observation $y_i$ of $F(x_i)$ obtained by the numerical process $G(x_i, q_i)$. The higher this value, the greater the confidence. We write this quality as the product of two different numbers $c_i$ and $r_i$, $q_i = c_i \times r_i$, with the following definitions:
  • Consistency $c_i \in \{0, 1\}$: this describes the fact that two successive points must be monotonically consistent with each other. That is, when one takes two input values $x_2 > x_1$, one should have $y_2 \geq y_1$, since the target function is monotonically increasing. There is no consistency associated with the very first data point, as it does not have any predecessor.
  • Reliability $r_i \in \mathbb{R}_+$: this describes how confident we are about the numerical value. Typically, it will be related to some error estimator, if one is available, or to the choice of optimisation parameters. It is expected that the higher the reliability, the closer the pointwise observation is to the true value, on average.
Typically, if the observation $y_{i+1} = G(x_{i+1}, q_{i+1})$ is consistent with regard to the observation $y_i = G(x_i, q_i)$, where $x_{i+1} > x_i$, the quality associated with $y_{i+1}$ is $q_{i+1} = r_{i+1} \in \mathbb{R}_+^*$, since $c_{i+1} = 1$ in this case. If the value is not consistent, we have $q_{i+1} = r_{i+1} \times c_{i+1} = 0$. Finally, if $x = a$ there is no notion of consistency, as there is no point preceding it; the quality associated with this point is therefore simply equal to its reliability.
Moreover, we associate to these pointwise observations a notion of area, illustrated in Figure 1 and defined as follows. Considering two consecutive points $x_i$ and $x_{i+1}$ with their respective observations $y_i$ and $y_{i+1}$, the area $a_i$ for these two points is
$$a_i = (x_{i+1} - x_i) \times (y_{i+1} - y_i).$$
Thus, we can define a vector $a = \{a_i\}_{i=1}^{I-1}$ which contains all the computed areas for the whole dataset. In addition, we can assert that if we take two points $x_1$ and $x_2 > x_1$ with $y_1 = F(x_1)$ and $y_2 = F(x_2)$, namely if the error at these points is equal to zero, then the graph of the ground truth function F must lie in the rectangular area spanned by the two points $(x_1, F(x_1))$ and $(x_2, F(x_2))$.
To adopt a conservative point of view, we choose as the approximating function $\widehat{F}$ of F a piecewise constant interpolation function, say:
$$\widehat{F}(x) = \sum_{i=1}^{I-1} y_i\, \mathbb{1}_{[x_i, x_{i+1})}(x),$$
where $\mathbb{1}_{I}$ denotes the indicator function of the interval I. We do not want this interpolation function to overestimate the true function F, as one knows that the numerical estimate in our case always underestimates the ground truth value $F(x)$. See Figure 1 for an illustration of this choice, which can be viewed as a worst-case approach. Indeed, this chosen interpolation function is the worst possible function underestimating F given two points $x_1$ and $x_2$ and their respective observations $y_1$ and $y_2$.
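To fix ideas, here is a minimal Python sketch of this conservative piecewise-constant reconstruction and of the rectangle areas of Equation (3); the function and variable names are ours, not the paper's.

```python
import numpy as np

def reconstruct(xs, ys):
    """Conservative piecewise-constant underestimate of F.

    xs, ys: observation sites and values, sorted by xs and assumed consistent
    (ys non-decreasing). Returns a callable F_hat with F_hat(x) = y_i for
    x in [x_i, x_{i+1}), and F_hat(b) = y_I at the right endpoint.
    """
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    def F_hat(x):
        i = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, len(xs) - 1)
        return ys[i]
    return F_hat

def areas(xs, ys):
    """Rectangle areas a_i = (x_{i+1} - x_i) * (y_{i+1} - y_i) of Equation (3)."""
    xs, ys = np.asarray(xs, dtype=float), np.asarray(ys, dtype=float)
    return np.diff(xs) * np.diff(ys)
```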

3. Reconstruction Algorithms

The reconstruction algorithm that we propose, Algorithm 1, is driven to produce a sequence of reconstructions that converges to F by following a principle of area minimisation: we associate to the discrete data set $\{x_i, y_i\}_{i=1}^{I} \subset [a, b] \times \mathbb{R}$ a natural notion of area (3), as explained above, and seek to drive this area towards zero. The motivation behind this objective is Proposition 2, which states that the area converges to 0 as more points are added to the data set. However, the objective of minimising the area is complicated by the fact that evaluations of F are imperfect. Therefore, a key user-defined parameter in the algorithm is $E \in (0, \infty)$, which can be thought of as an "exchange rate" that quantifies to what extent the algorithm prefers to redo poor-quality evaluations of the target function versus driving the area measure to zero.

3.1. Algorithm

The main algorithm is organized as follows, starting from $I^{(0)} \geq 2$ points and a dataset that is assumed to be consistent at the initial step $n = 0$. It goes through N iterations, where N is either fixed a priori or obtained a posteriori once a stopping criterion is met. Note that $q_{\mathrm{new}}$ stands for the quality of a newly generated observation $y_{\mathrm{new}}$ for any new point $x_{\mathrm{new}}$ introduced by the algorithm. The latter is driven by the user-defined "exchange rate" E, as explained just above. At each step n, the algorithm computes the weighted area $WA^{(n)}$ as the minimum of the qualities times the sum of the areas of the data points:
$$WA^{(n)} = q_-^{(n)} \times A^{(n)},$$
where
$$q_-^{(n)} = \min_{1 \leq i \leq I^{(n)}} \{q_i^{(n)}\}, \qquad A^{(n)} = \sum_{i=1}^{I^{(n)}-1} a_i^{(n)},$$
$a_i^{(n)}$ is the area computed by Equation (3) at step n (see also Equation (9)), and $I^{(n)}$ is the number of data points. The iteration is then divided into two branches according to the value of $WA^{(n)}$ compared to E.
Algorithm 1: Adaptive algorithm to reconstruct a monotonically increasing function F
Input: $I^{(0)} \geq 2$, $\{x_i^{(0)}, y_i^{(0)}, q_i^{(0)}\}_{i=1}^{I^{(0)}}$, and E.
Output: $\{x_i^{(N)}, y_i^{(N)}, q_i^{(N)}\}_{i=1}^{I^{(N)}}$ with $I^{(N)} \geq I^{(0)}$.
Initialization:
Get the worst quality point and its index:
  • $q_-^{(0)} = \min_{1 \leq i \leq I^{(0)}} \{q_i^{(0)}\}$;
  • $i_-^{(0)} = \arg\min_{1 \leq i \leq I^{(0)}} \{q_i^{(0)}\}$.
Compute the area of each pair of consecutive data points: $a_i^{(0)} = (x_{i+1}^{(0)} - x_i^{(0)}) \times (y_{i+1}^{(0)} - y_i^{(0)})$.
Get the biggest rectangle and its index:
  • $a_+^{(0)} = \max_{1 \leq i \leq I^{(0)}-1} \{a_i^{(0)}\}$;
  • $i_+^{(0)} = \arg\max_{1 \leq i \leq I^{(0)}-1} \{a_i^{(0)}\}$.
Define the weighted area at step $n = 0$ as $WA^{(0)} = q_-^{(0)} \times \sum_{i=1}^{I^{(0)}-1} a_i^{(0)}$.
Main loop, for $n = 0, 1, \dots$: proceed according to the two cases described below.
  • If $WA^{(n)} < E$, then the algorithm aims at increasing the quality $q_-^{(n)}$ of the worst data point (the one with the lowest quality), with index $i_-^{(n)} = \arg\min_{1 \leq i \leq I^{(n)}} \{q_i^{(n)}\}$ at step n. It stores the corresponding old value $y_{\mathrm{old}}$, searches for a new value $y_{\mathrm{new}}$ by successively improving the quality of this very point, and stops when $y_{\mathrm{new}} > y_{\mathrm{old}}$.
  • If $WA^{(n)} \geq E$, then the algorithm aims at driving the total area $A^{(n)}$ to zero. In that respect, it identifies the biggest rectangle
    $$a_+^{(n)} = \max_{1 \leq i \leq I^{(n)}-1} \{a_i^{(n)}\}$$
    and its index
    $$i_+^{(n)} = \arg\max_{1 \leq i \leq I^{(n)}-1} \{a_i^{(n)}\},$$
    and adds a new point $x_{\mathrm{new}}$ at the middle of this biggest rectangle. Then, it computes a new data value $y_{\mathrm{new}} = G(x_{\mathrm{new}}, q_{\mathrm{new}})$ with a new quality $q_{\mathrm{new}}$.
In both cases, the numerical parameters $q_{\mathrm{new}}$ (for example, a number of iterations, or the size of a sampling set or a population) are arbitrary, and any value can be chosen in practice each time a new point $x_{\mathrm{new}}$ is added to the dataset. They can be increased arbitrarily as well each time such a new point has to be improved. Indeed, the numerical parameters q of the optimisation routine we have access to can be increased as much as desired, and increasing them improves the estimates $G(x, q)$ of the true values $F(x)$ uniformly in x; see Assumption 1. The algorithm then verifies the consistency of the dataset by checking the quality of each point. If there is any inconsistent point, the algorithm recomputes its value, successively improving the corresponding reliability, until consistency is obtained. This is achieved in a finite number of steps, starting from an inconsistent point and exploring the dataset from left to right.
Finally, the algorithm updates the quality vector $\{q_i^{(n+1)}\}_{i=1}^{I^{(n+1)}}$, the area vector $\{a_i^{(n+1)}\}_{i=1}^{I^{(n+1)}-1}$, the worst quality $q_-^{(n+1)}$ and the index $i_-^{(n+1)}$ of the corresponding point, the biggest rectangle $a_+^{(n+1)}$ and its index $i_+^{(n+1)}$, and then the new weighted area $WA^{(n+1)}$.
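To make the preceding description concrete, the following Python sketch implements the main loop under the notation above. It is a simplified reading of Algorithm 1: the policies for producing a new observation and for improving an existing one are left as user-supplied callbacks, since the paper leaves the choice of $q_{\mathrm{new}}$ free.

```python
import numpy as np

def adaptive_reconstruction(observe, improve, xs, ys, qs, E, n_steps):
    """Sketch of Algorithm 1.

    observe(x)       -> (y, q): new observation at a new site x
    improve(x, y, q) -> (y, q): re-evaluate site x until the value exceeds y
    xs, ys, qs       : initial consistent data set (xs sorted, ys non-decreasing)
    E                : "exchange rate" trading evaluation quality against area
    """
    xs, ys, qs = list(xs), list(ys), list(qs)
    for _ in range(n_steps):
        a = np.diff(xs) * np.diff(ys)              # rectangle areas
        WA = min(qs) * a.sum()                     # weighted area
        if WA < E:
            # improve the worst-quality observation in place
            i = int(np.argmin(qs))
            ys[i], qs[i] = improve(xs[i], ys[i], qs[i])
        else:
            # split the biggest rectangle at its midpoint
            i = int(np.argmax(a))
            x_new = 0.5 * (xs[i] + xs[i + 1])
            y_new, q_new = observe(x_new)
            xs.insert(i + 1, x_new); ys.insert(i + 1, y_new); qs.insert(i + 1, q_new)
        # restore consistency from left to right by re-evaluating offending points
        for j in range(1, len(xs)):
            while ys[j] < ys[j - 1]:
                ys[j], qs[j] = improve(xs[j], ys[j], qs[j])
    return xs, ys, qs
```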

3.2. Proof of Convergence

We denote by $I^{(n)}$ the number of data points, and by $\{x_i^{(n)}, y_i^{(n)}, q_i^{(n)}\}_{i=1}^{I^{(n)}}$ the positions of the data points, the observations given by the optimization algorithm at these positions, and the qualities associated with the optimization algorithm, at step n of Algorithm 1. For each $i = 1, \dots, I^{(n)} - 1$, we define $s_i^{(n)} = [x_i^{(n)}, x_{i+1}^{(n)}) \subset [a, b]$ and the vector containing all rectangle areas $\{a_i^{(n)}\}_{i=1}^{I^{(n)}-1}$ by:
$$a_i^{(n)} = \left( x_{i+1}^{(n)} - x_i^{(n)} \right) \times \left( y_{i+1}^{(n)} - y_i^{(n)} \right).$$
The pointwise observation $y_i^{(n)} = G(x_i^{(n)}, q_i^{(n)})$ is thus associated with the quality $q_i^{(n)} \in \mathbb{R}_+$, which quantifies the confidence we have in this observation, as outlined in the problem description in Section 2. This number can represent the inverse of the error achieved by the optimization algorithm, for example, or the number of iterations, or the number of individuals in a population, or any other numerical parameter pertaining to this optimization process. The higher it is, the closer the observation is to the true target value. Therefore we consider the following assumption on the numerical process G.
Assumption 1.
$G(x, q)$ converges to $F(x)$ as $q \to +\infty$, uniformly in $x \in [a, b]$; that is:
$$\forall \epsilon > 0,\ \exists Q > 0 \text{ such that } \forall q \geq Q,\ \forall x \in [a, b], \quad \left| G(x, q) - F(x) \right| \leq \epsilon.$$
Moreover, we can guarantee that:
$$\forall x \in [a, b],\ \forall q \in \mathbb{R}_+, \quad G(x, q) \leq F(x).$$
That is, the optimisation algorithm will always underestimate the true value $F(x)$. In this way, one can model the relationship between the numerical estimate G and the true value F as:
$$\forall x \in [a, b],\ \forall q \in \mathbb{R}_+, \quad G(x, q) = F(x) - \epsilon(x, q),$$
where ϵ is a positive random variable. These assumptions imply some robustness and stability of the algorithm we use.
In the following, we will assume that $I^{(0)} \geq 2$; that is, we have at least two data points at the beginning of the reconstruction algorithm. Also, among these points, we have one point at $x = a$ and another one at $x = b$. Moreover, we will assume that the initial dataset is consistent. Since Algorithm 1 recomputes the inconsistent points at all steps, we can also consider in the following that any new numerical observation is actually consistent. Also, we need to guarantee that the weighted area $WA^{(n)}$ will permanently oscillate about E as the iteration step n increases; this is the purpose of Assumption 3 below, as shown in the subsequent Proposition 1. From these properties it will then be shown that Algorithm 1 is convergent, as stated in Theorem 1.
Assumption 2.
Any new numerical value obtained by Algorithm 1 is consistent.
Assumption 3.
$q_-^{(n)} \to +\infty$ as $n \to \infty$.
Under Assumption 2, all points have a consistency of 1, and therefore $q = r > 0$, the reliability. Besides, one has $G(x_i^{(n)}, q_i^{(n)}) \leq G(x_{i+1}^{(n)}, q_{i+1}^{(n)})$, i.e., $y_i^{(n)} \leq y_{i+1}^{(n)}$, for all points i and steps n. We finally define the sequence of piecewise constant reconstruction functions $F^{(n)}$ as follows.
Definition 1.
For each $x \in [a, b]$, we define the reconstruction function $F^{(n)}$ at step n as:
$$F^{(n)}(x) = \sum_{i=1}^{I^{(n)}-1} y_i^{(n)}\, \mathbb{1}_{s_i^{(n)}}(x),$$
and $F^{(n)}\big(x_{I^{(n)}}^{(n)}\big) = F^{(n)}(b) = y_{I^{(n)}}^{(n)}$.
Now let
$$E_+ := \{ n \in \mathbb{N};\ WA^{(n)} \geq E \}, \qquad E_- := \{ n \in \mathbb{N};\ WA^{(n)} < E \},$$
which are such that $E_+ \cup E_- = \mathbb{N}$ and $E_+ \cap E_- = \emptyset$. In order to prove the convergence (in a sense to be given) of Algorithm 1, we first need to establish the following intermediate results, Proposition 1, Proposition 2, and Proposition 3. They clarify the behaviour of the sequence $WA^{(n)}$ when points are added to the dataset and the largest area $a_+^{(n)}$ is divided into four parts at each iteration step n; see Figure 2.
Proposition 1.
$E_+$ is infinite.
Proof. 
Let us assume that $E_+$ is finite: $\exists N$ such that $\forall n \geq N$, $n \in E_-$. Therefore we are in the situation $WA^{(n)} < E$, the minimum quality $q_-^{(n)}$ of the data goes to infinity, and the total area $A^{(n)}$ is modified although the evaluation points $\{x_i^{(n)}\}_{i=1}^{I^{(n)}}$ and their number $I^{(n)}$ are unchanged; thus they are independent of n. Repeating this step yields
$$\lim_{n \to \infty} A^{(n)} = \sum_{i=1}^{I-1} (x_{i+1} - x_i)\,(F(x_{i+1}) - F(x_i)) =: A_\infty > 0,$$
since F is monotonically increasing and non-constant on $[a, b]$, and Assumption 1 is used. Consequently $WA^{(n)} \to +\infty$ as $n \to \infty$, that is, $WA^{(n)} \geq E$ for all $n \geq N_1$ for some $N_1$, which is a contradiction. □
The set $E_+$ is therefore of the form
$$E_+ = \bigcup_{k \geq 1} [\![ m_k, n_k ]\!],$$
where
$$[\![ m_k, n_k ]\!] := \{ n \in \mathbb{N};\ m_k \leq n \leq n_k \}.$$
Let us introduce the strictly increasing map $\varphi : \mathbb{N} \to \mathbb{N}$ such that $\varphi(p)$ is the p-th element of $E_+$ (in increasing order), and $[\![ m_k, n_k ]\!] = \varphi([\![ p_k + 1, p_{k+1} ]\!])$. Here p is the counter of the elements of $E_+$, and n is the corresponding iteration number.
Proposition 2.
Let $I^{(\varphi(p))} = I^{(\varphi(0))} + p$. Then
$$A^{(\varphi(p))} = \sum_{i=1}^{I^{(\varphi(p))}-1} a_i^{(\varphi(p))} = O\!\left( \frac{1}{\sqrt{p}} \right)$$
as $p \to \infty$, and $A^{(n)} \to 0$ as $n \to \infty$.
Proof. 
Let $k \geq 1$ and $n = \varphi(p) \in [\![ m_k, n_k ]\!]$, where $p \in [\![ p_k + 1, p_{k+1} ]\!]$. Let $A^{(n)}$ be given by Equation (6), $a_+^{(n)}$ be given by Equation (7), and $i_+^{(n)}$ be given by Equation (8). At iteration $n + 1$ one has:
$$x_i^{(n+1)} = \begin{cases} x_i^{(n)} & \text{for } 1 \leq i \leq i_+^{(n)}, \\[2pt] \tfrac{1}{2}\big( x_{i_+^{(n)}}^{(n)} + x_{i_+^{(n)}+1}^{(n)} \big) & \text{for } i = i_+^{(n)} + 1, \\[2pt] x_{i-1}^{(n)} & \text{for } i_+^{(n)} + 2 \leq i \leq I^{(n+1)}. \end{cases}$$
Also $y_i^{(n+1)} \leq y_{i+1}^{(n+1)}$ for $1 \leq i \leq I^{(n+1)} - 1$. One may check that $a_+^{(n)} = 2 a_{i_+^{(n)}}^{(n+1)} + 2 a_{i_+^{(n)}+1}^{(n+1)}$ (see Figure 2), since the two new rectangles have half the width of the one they replace and their heights sum to its height; therefore:
$$A^{(n+1)} = A^{(n)} - a_+^{(n)} + a_{i_+^{(n)}}^{(n+1)} + a_{i_+^{(n)}+1}^{(n+1)} = A^{(n)} - \tfrac{1}{2} a_+^{(n)}.$$
Besides, $A^{(n)} \leq (I^{(n)} - 1)\, a_+^{(n)}$, so that one has:
$$A^{(n+1)} \leq A^{(n)} - \frac{A^{(n)}}{2(I^{(n)} - 1)} = A^{(n)}\, \frac{2(I^{(n)} - 1) - 1}{2(I^{(n)} - 1)},$$
or:
$$A^{(\varphi(p)+1)} \leq A^{(\varphi(p))}\, \frac{2(I^{(\varphi(p))} - 1) - 1}{2(I^{(\varphi(p))} - 1)}.$$
At this stage two situations arise:
  • either $p \in [\![ p_k + 1, p_{k+1} - 1 ]\!]$, in which case $\varphi(p) + 1 = \varphi(p + 1)$;
  • or $p = p_{k+1}$, in which case by our algorithm $A^{(n)}$ is kept constant from $n = n_k + 1$ to $n = m_{k+1}$; that is, $A^{(n_k + 1)} = A^{(m_{k+1})}$, or:
    $$A^{(\varphi(p_{k+1})+1)} = A^{(\varphi(p_{k+1}+1))}.$$
The choice of k being arbitrary, one concludes that Equation (14) also reads, for all $p \in \mathbb{N}$:
$$A^{(\varphi(p+1))} \leq A^{(\varphi(p))}\, \frac{2(I^{(\varphi(p))} - 1) - 1}{2(I^{(\varphi(p))} - 1)} = A^{(\varphi(p))}\, \frac{2(I^{(\varphi(0))} + p - 1) - 1}{2(I^{(\varphi(0))} + p - 1)}.$$
Thus:
$$A^{(\varphi(p))} \leq A^{(\varphi(1))} \prod_{i=1}^{p-1} \frac{2(I^{(\varphi(0))} + i - 1) - 1}{2(I^{(\varphi(0))} + i - 1)} = A^{(\varphi(1))} \prod_{i=1}^{p-1} \frac{1 + \alpha/i}{1 + \beta/i},$$
letting $\alpha = I^{(\varphi(0))} - \tfrac{3}{2}$ and $\beta = I^{(\varphi(0))} - 1$.
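For completeness, the identification of $\alpha$ and $\beta$ follows upon dividing the numerator and the denominator of each factor by $2i$:
$$\frac{2(I^{(\varphi(0))} + i - 1) - 1}{2(I^{(\varphi(0))} + i - 1)} = \frac{2i + 2I^{(\varphi(0))} - 3}{2i + 2I^{(\varphi(0))} - 2} = \frac{1 + \big( I^{(\varphi(0))} - \tfrac{3}{2} \big)/i}{1 + \big( I^{(\varphi(0))} - 1 \big)/i} = \frac{1 + \alpha/i}{1 + \beta/i}.$$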
However,
$$\sum_{i=1}^{p} \log\!\left( 1 + \frac{\alpha}{i} \right) = \alpha \sum_{i=1}^{p} \frac{1}{i} + C_p,$$
where $\lim_{p \to \infty} C_p = C$, and
$$\sum_{i=1}^{p} \frac{1}{i} = \log p + \gamma + \epsilon_p,$$
where $\gamma$ is the Euler constant and $\lim_{p \to \infty} \epsilon_p = 0$. Consequently:
$$\sum_{i=1}^{p-1} \log\!\left( 1 + \frac{\alpha}{i} \right) - \sum_{i=1}^{p-1} \log\!\left( 1 + \frac{\beta}{i} \right) = (\alpha - \beta) \log(p - 1) + C'_p = (\alpha - \beta)\left( \log p + \log\!\left( 1 - \frac{1}{p} \right) \right) + C'_p = \log \frac{1}{\sqrt{p}} + C''_p,$$
since $\alpha - \beta = -\tfrac{1}{2}$; again $C'_p$ and $C''_p$ are sequences with constant limits $\lim_{p \to \infty} C'_p = C'$ and $\lim_{p \to \infty} C''_p = C''$. Therefore,
$$\prod_{i=1}^{p-1} \frac{1 + \alpha/i}{1 + \beta/i} = \frac{C'''}{\sqrt{p}}\,(1 + \epsilon'_p),$$
where $C'''$ is a constant and $\lim_{p \to \infty} \epsilon'_p = 0$. One also concludes that $A^{(n)}$, which is either kept constant or equal to $A^{(\varphi(p))}$, converges to 0 as $n \to \infty$. Hence the claimed results hold. □
Proposition 3.
$E_-$ is infinite.
Proof. 
Let us assume that $E_-$ is finite: $\exists N$ such that $\forall n \geq N$, $n \in E_+$. Therefore we are in the situation $WA^{(n)} \geq E > 0$, and $\varphi$ has the form $\varphi(n) = n + n_0$, $\forall n \geq N$, for some $n_0 \in \mathbb{N}$. From Proposition 2:
$$A^{(n + n_0)} = O\!\left( \frac{1}{\sqrt{n}} \right),$$
thus $A^{(n)} \to 0$ and $WA^{(n)} \to 0$ as $n \to \infty$, since $q_-^{(n)}$ is not improved in this situation, which is a contradiction. □
We now provide three results on the convergence of Algorithm 1. As is to be expected, the algorithm can only be shown to converge uniformly when the target response function F is sufficiently smooth; otherwise, the convergence is at best pointwise or in mean.
Theorem 1
(Algorithm convergence). Assume that F is strictly increasing. Then, for any choice of E > 0 , Algorithm 1 is convergent in the following senses:
  • If F is piecewise continuous on $[a, b]$, then $\lim_{n \to \infty} F^{(n)}(x) = F(x)$ at all points $x \in [a, b]$ where F is continuous;
  • If F is continuous on $[a, b]$, then convergence holds uniformly: $\| F^{(n)} - F \|_{\infty} \to 0$ as $n \to \infty$.
Proof. 
Let $E > 0$. We know from Propositions 1 and 3 that $WA^{(n)}$ will oscillate about E in the iterating process as $n \to \infty$, while $\lim_{n \to \infty} q_-^{(n)} = +\infty$ from Assumption 3. Furthermore, let
$$\Delta^{(n)} := \sup_{1 \leq i \leq I^{(n)} - 1} \left( x_{i+1}^{(n)} - x_i^{(n)} \right).$$
Assuming for example that for some j, $s_j^{(n)} = [x_j^{(n)}, x_{j+1}^{(n)})$ is never divided in two in the iteration process and is thus independent of n, it turns out that $a_j^{(n)} \to (x_{j+1} - x_j)(F(x_{j+1}) - F(x_j)) > 0$ as $n \to \infty$, which is impossible because $A^{(n)}$ goes to 0 as $n \to \infty$ from Proposition 2. Therefore there exists some $m \in \mathbb{N}^*$ (depending on n) such that $\Delta^{(n+m)} \leq \tfrac{1}{2} \Delta^{(n)}$; also, the sequence $\Delta^{(n)}$ is non-increasing, hence $\Delta^{(n)} \to 0$ as $n \to \infty$.
Now let $x \in [x_i^{(n)}, x_{i+1}^{(n)})$. Then:
$$\left| F^{(n)}(x) - F(x) \right| = \left| G(x_i^{(n)}, q_i^{(n)}) - F(x) \right| \leq \left| G(x_i^{(n)}, q_i^{(n)}) - F(x_i^{(n)}) \right| + \left| F(x_i^{(n)}) - F(x) \right|.$$
However, $x_i^{(n)} \to x$ as $n \to \infty$ because $\Delta^{(n)} \to 0$; thus, if F is continuous at x, the second term on the right-hand side above goes to 0 as $n \to \infty$. Moreover, if F is continuous everywhere on $[a, b]$, it is in addition uniformly continuous on $[a, b]$ by Heine's theorem, and the second term goes to 0 as $n \to \infty$ uniformly on $[a, b]$. Finally, invoking Assumption 1, the first term on the right-hand side above also tends to 0 as $n \to \infty$. This completes the proof. □
Proposition 4
(Convergence in mean). Let $F : [a, b] \to \mathbb{R}$ be piecewise continuous. Then Algorithm 1 is convergent in mean in the sense that
$$\| F^{(n)} - F \|_{1} \to 0 \quad \text{as } n \to \infty.$$
Proof. 
We can check that the sequence $F^{(n)}$ is monotone. Indeed, if $WA^{(n)} < E$, then by construction we have
$$F^{(n+1)}(x) - F^{(n)}(x) \geq \left( y_{i_-^{(n)}}^{(n+1)} - y_{i_-^{(n)}}^{(n)} \right) \mathbb{1}_{s_-^{(n)}}(x) \geq 0,$$
where $s_-^{(n)} = [x_{i_-^{(n)}}^{(n)}, x_{i_-^{(n)}+1}^{(n)})$. However, if $WA^{(n)} \geq E$, then consistency implies that
$$F^{(n+1)}(x) - F^{(n)}(x) \geq \left( y_{i_+^{(n)}+1}^{(n+1)} - y_{i_+^{(n)}}^{(n)} \right) \mathbb{1}_{s_+^{(n+1)}}(x) \geq 0,$$
where $s_+^{(n+1)} = [x_{i_+^{(n)}+1}^{(n+1)}, x_{i_+^{(n)}+2}^{(n+1)})$. The claim now follows from the monotone convergence theorem and the fact that $F^{(0)}$ is integrable. □

4. Test Cases

To show the effectiveness of Algorithm 1, we try it on two cases, in which F is a continuous function and a discontinuous function, respectively. For both cases, the error between the numerical estimate and the ground truth function is modelled as a random variable following a log-normal distribution. That is,
$$\forall x \in [a, b], \quad \epsilon(x) \sim \operatorname{LogN}(\mu(x), \sigma^2),$$
with $\sigma^2 = 1$, and $\mu(x)$ is chosen such that $\mathbb{P}[0 \leq \epsilon(x) \leq 0.1 \cdot F(x)] = 0.9$. Thus, the mean $\mu$ is different for each $x \in [a, b]$.
As we have access to the ground truth function, and for validation purposes, the quality value associated with a numerical point is the inverse of the relative error. Moreover, we assume that the initial points are consistent.
For illustrative purposes, we set the parameter E = 15 for the examples considered below.
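The following Python sketch shows one way to implement such an observation process; the calibration of $\mu(x)$ reproduces the condition $\mathbb{P}[0 \leq \epsilon(x) \leq 0.1 \cdot F(x)] = 0.9$, and the reliability is taken as the inverse relative error, following the validation choice described above. How an observation is "improved" is not spelled out in the text; redrawing until the new value exceeds the old one, as in Algorithm 1, is one natural reading.

```python
import numpy as np
from scipy.stats import norm

Z90 = norm.ppf(0.9)  # 90% quantile of the standard normal, about 1.2816

def observe(F, x, rng):
    """One imperfect observation y <= F(x) of the ground truth F at the site x.

    The error eps(x) ~ LogNormal(mu(x), 1), with mu(x) calibrated so that
    P[eps(x) <= 0.1 * F(x)] = 0.9 (Section 4); this requires F(x) > 0, which
    holds on the test cases below. Returns the observation and its reliability,
    taken as the inverse of the relative error.
    """
    mu_x = np.log(0.1 * F(x)) - Z90           # P[log eps <= log(0.1 F(x))] = 0.9
    eps = rng.lognormal(mean=mu_x, sigma=1.0)
    y = F(x) - eps                            # one-sided error: G underestimates F
    reliability = F(x) / eps                  # inverse relative error
    return y, reliability
```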

4.1. F Is a Continuous Function

First, consider the function $F \in C^0([1, 2], [1, 2])$ defined as follows:
$$F(x) = \begin{cases} F_1(x) & \text{if } x \in [1, \tfrac{3}{2}], \\ F_2(x) & \text{if } x \in [\tfrac{3}{2}, 2], \end{cases}$$
with
$$F_1(x) = a_1 \exp(x^3) + b_1, \qquad F_2(x) = a_2 \exp((3 - x)^3) + b_2,$$
where:
$$a_1 = \frac{-1}{2\,(\exp(1) - \exp(27/8))}, \quad b_1 = \frac{3 - 2\exp(19/8)}{2\,(1 - \exp(19/8))}, \quad a_2 = -a_1, \quad b_2 = 2 a_1 \exp(27/8) + b_1.$$
The target function F and the reconstructions $F^{(n)}$ obtained through the algorithm for several values of the step n are shown in Figure 3. For each n, the reconstruction $F^{(n)}$ is increasing and the initial points are consistent. The $\infty$-norm and the 1-norm of the error appear to converge to zero with approximate rates 0.512 and 0.534, respectively.

4.2. F Is a Discontinuous Function

Now, consider the discontinuous function F defined as follows:
$$F(x) = \begin{cases} F_1(x) & \text{if } x \in [1, \tfrac{3}{2}], \\ F_2(x) & \text{if } x \in (\tfrac{3}{2}, 2], \end{cases}$$
where $F_1$ and $F_2$ are given by (16), and:
$$a_1 = \frac{-1}{2\,(\exp(1) - \exp(27/8))}, \quad b_1 = \frac{3 - 2\exp(19/8)}{2\,(1 - \exp(19/8))}, \quad a_2 = \frac{2}{5\,(\exp(8) - \exp(27/8))}, \quad b_2 = \frac{10 - 8\exp(37/8)}{5\,(1 - \exp(37/8))}.$$
Here, F is piecewise continuous, being continuous on $[1, \tfrac{3}{2}]$ and on $(\tfrac{3}{2}, 2]$. In this case, one can apply Proposition 4. The target function F and the reconstructions $F^{(n)}$ obtained through the algorithm for several values of the step n are shown in Figure 4. Observe that the approximation quality, as measured by the $\infty$-norm of the error $F - F^{(n)}$, quite rapidly saturates and does not converge to zero. This is to be expected for this discontinuous target F, since closeness of two functions in the supremum norm mandates that they have approximately the same discontinuities in exactly the same places. The 1-norm error, in contrast, appears to converge at the rate 0.561.
Regarding computational cost, the number of calls to the numerical model is lower when F is continuous than when it is discontinuous. For both examples above and for the same number of data points, the number of evaluations of the numerical model (analytical formula in the present case) in the discontinuous case is about six times higher than the number of evaluations in the continuous case. This is because the algorithm typically adds more points near discontinuities and the effort of making them consistent increases the number of calls to the model.
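The convergence rates quoted above can be estimated, for instance, by a least-squares fit of the logarithm of the error against the logarithm of the iteration count; the paper does not specify how the rates 0.512, 0.534 and 0.561 were obtained, so the snippet below is only an assumed methodology.

```python
import numpy as np

def empirical_rate(ns, errs):
    """Slope of log(err) versus log(n): err ~ C * n**(-rate) gives a positive rate."""
    slope, _ = np.polyfit(np.log(ns), np.log(errs), deg=1)
    return -slope

# Example with synthetic errors decaying like n**(-0.5):
ns = np.arange(10, 200)
errs = 3.0 * ns**-0.5 * (1 + 0.05 * np.random.default_rng(0).standard_normal(ns.size))
print(round(empirical_rate(ns, errs), 3))   # close to 0.5
```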

4.3. Influence of the User-Defined Parameter E

We consider the case in which F is discontinuous, as in Section 4.2. We will show the influence of the choice of the parameter E on the reconstruction function $F^{(n)}$.

4.3.1. Case $E \ll 1$

Let us consider the case $E = 10^{-4} \ll 1$. This choice corresponds to the case where one prefers to split, i.e., to add new points, rather than to redo the worst-quality point. This can be seen in Figure 5, where the worst quality is almost constant over 100 steps while the sum of areas strongly decreases; see Figure 5e and Figure 5f, respectively. At each step, the algorithm adds a new point by splitting the biggest rectangle. One can note in Figure 5f that the minimum of the quality is not constant: this means that when the algorithm added a new data point, the point with the worst quality was no longer consistent and had to be recomputed. In summary, in this case we obtain more points, but with lower quality values.

4.3.2. Case $E \gg 1$

We now consider the case $E = 10^{4} \gg 1$. This choice corresponds to the case where one prefers to redo the worst-quality point rather than to split. This can be seen in Figure 6, where the sum of areas stays more or less the same over 100 steps while the minimum of the quality surges; see Figure 6f and Figure 6e, respectively. No new points are added: the algorithm only redoes the worst-quality point in order to improve it. To sum up, we obtain fewer points, but with higher quality values.

5. Application to Optimal Uncertainty Quantification

5.1. Optimal Uncertainty Quantification

In the optimal uncertainty quantification paradigm proposed by Owhadi et al. [7] and further developed by, e.g., Sullivan et al. [8] and Han et al. [9], upper and lower bounds on the performance of an incompletely specified system are calculated via optimisation problems. More concretely, one is interested in the probability that a system, whose output is a function $g : \mathcal{X} \to \mathbb{R}$ of inputs $\Xi$ distributed according to a probability measure $\mu$ on an input space $\mathcal{X}$, satisfies $g(\Xi) \leq x$, where x is a specified performance threshold value. We emphasise that although we focus on a scalar performance measure, the input $\Xi$ may be a multivariate random variable.
In practice, $\mu$ and g are not known exactly; rather, it is known only that $(\mu, g) \in \mathcal{A}$ for some admissible subset $\mathcal{A}$ of the product of the set of all probability measures on $\mathcal{X}$ with the set of all real-valued functions on $\mathcal{X}$. Thus, one is interested in
$$\underline{P}_{\mathcal{A}}(x) := \inf_{(\mu, g) \in \mathcal{A}} \mathbb{P}_{\Xi \sim \mu}[g(\Xi) \leq x] \quad \text{and} \quad \overline{P}_{\mathcal{A}}(x) := \sup_{(\mu, g) \in \mathcal{A}} \mathbb{P}_{\Xi \sim \mu}[g(\Xi) \leq x].$$
The inequality
$$0 \leq \underline{P}_{\mathcal{A}}(x) \leq \mathbb{P}_{\Xi \sim \mu}[g(\Xi) \leq x] \leq \overline{P}_{\mathcal{A}}(x) \leq 1$$
is, by definition, the tightest possible bound on the quantity of interest $\mathbb{P}_{\Xi \sim \mu}[g(\Xi) \leq x]$ that is compatible with the information used to specify $\mathcal{A}$. Thus, the optimal UQ perspective enriches the principles of worst- and best-case design to account for distributional and functional uncertainty. We concentrate our attention hereafter, without loss of generality, on the least upper bound $\overline{P}_{\mathcal{A}}(x)$.
Remark 1.
The main focus of this paper is the dependency of $\overline{P}_{\mathcal{A}}(x)$ on x. In practice, an underlying task is, for any individual x, reducing the calculation of $\overline{P}_{\mathcal{A}}(x)$ to a tractable finite-dimensional optimisation problem. Central enabling results here are the reduction theorems of Owhadi et al. ([7], Section 4), which, loosely speaking, say that if, for each g, $\{ \mu \mid (\mu, g) \in \mathcal{A} \}$ is specified by a system of m equality or inequality constraints on expected values of arbitrary test functions under $\mu$, then for the determination of $\overline{P}_{\mathcal{A}}(x)$ it is sufficient to consider only distributions $\mu$ that are convex combinations of at most $m + 1$ point masses; the optimisation variables are then the m independent weights and $m + 1$ locations in $\mathcal{X}$ of these point masses. If $\mu$ factors as a product of distributions (i.e., $\Xi$ is a vector with independent components), then this reduction theorem applies componentwise.
As a function of the performance threshold x, $\overline{P}_{\mathcal{A}}(x)$ is an increasing function, and so it is potentially advantageous to determine $\overline{P}_{\mathcal{A}}(x)$ jointly for a wide range of x values using the algorithm developed above. Indeed, determining $\overline{P}_{\mathcal{A}}(x)$ for many values of x, rather than just one value, is desirable for multiple reasons:
  • Since the numerical optimisation used to determine $\overline{P}_{\mathcal{A}}(x)$ may be affected by errors, computing several values of $\overline{P}_{\mathcal{A}}(x)$ makes it possible to check their mutual consistency, as the function $x \mapsto \overline{P}_{\mathcal{A}}(x)$ must be increasing;
  • The function $\overline{P}_{\mathcal{A}}(x)$ can be discontinuous. Thus, by computing several values of $\overline{P}_{\mathcal{A}}(x)$, one can highlight potential discontinuities and identify key threshold values of x.

5.2. Test Case

For the application of Algorithm 1 to OUQ, we study the robust shape optimization of the two-dimensional RAE2822 airfoil ([10], Appendix A6) using ONERA's CFD software elsA [11]. The following example is taken from Dumont et al. [12]. The shape of the original RAE2822 is altered using four bumps located at four different locations: 5%, 20%, 40%, and 60% of the way along the chord c (see Figure 7). These bumps are characterised by B-spline functions.
The lift-to-drag ratio $C_l / C_d$ of the RAE2822 wing profile (see Figure 8) at Reynolds number $Re = 6.5 \times 10^6$, Mach number $M = 0.729$, and angle of attack $\alpha = 2.31°$ is chosen as the performance function g, with inputs $\Xi = (\Xi_1, \Xi_2, \Xi_3, \Xi_4)$, where $\Xi_i$ is the amplitude of the i-th bump. These amplitudes will be considered as random variables over their respective ranges given in Table 1.
The corresponding flow conditions are the ones described in test case #6, together with the wall-interference correction formulas given in ([13], Chapter 6) and in ([14], Section 5.1). Moreover, we will assume that $(\Xi_i)_{i=1}^{4}$ are mutually independent. An ordinary Kriging procedure has been chosen to build a metamodel (or response surface) of g, which is identified with the actual response function in the subsequent analysis. A tensorised grid of 9 equidistributed abscissas for each parameter is used. The metamodel is then based on $N = 9^4 = 6561$ observations. In that respect, a Gaussian kernel
$$K(\Xi, \Xi') = \exp\!\left( -\frac{1}{2} \sum_{i=1}^{4} \frac{(\Xi_i - \Xi'_i)^2}{\gamma_i^2} \right)$$
has been chosen, where $\Xi = (\Xi_1, \Xi_2, \Xi_3, \Xi_4)$ and $\Xi' = (\Xi'_1, \Xi'_2, \Xi'_3, \Xi'_4)$ are inputs of the function g, and where $\gamma = (\gamma_1, \gamma_2, \gamma_3, \gamma_4)$ are the parameters of the kernel. These parameters are chosen to minimize the variance between the ground truth data defined by the N observations and their Kriging metamodel. The response surfaces in the $(\Xi_1, \Xi_3)$ plane for two values of $(\Xi_2, \Xi_4)$ are shown in Figure 9.
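A minimal sketch of a kernel interpolant built on this Gaussian kernel is given below; it is a simplified, simple-kriging-flavoured predictor rather than the authors' ordinary-kriging implementation, and the length-scales gamma are placeholders to be fitted as described above.

```python
import numpy as np

def gaussian_kernel(X1, X2, gamma):
    """K(xi, xj) = exp(-0.5 * sum_k (xi_k - xj_k)**2 / gamma_k**2)."""
    diff = (X1[:, None, :] - X2[None, :, :]) / gamma
    return np.exp(-0.5 * np.sum(diff**2, axis=-1))

def kriging_predict(X_train, y_train, X_test, gamma, nugget=1e-10):
    """Kernel interpolant of the training data, evaluated at X_test."""
    K = gaussian_kernel(X_train, X_train, gamma) + nugget * np.eye(len(X_train))
    weights = np.linalg.solve(K, y_train)
    return gaussian_kernel(X_test, X_train, gamma) @ weights
```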
One seeks to determine $\overline{P}_{\mathcal{A}}(x) := \sup_{(g, \mu) \in \mathcal{A}} \mathbb{P}_{\Xi \sim \mu}[g(\Xi) \leq x]$, where the admissible set $\mathcal{A}$ is defined as follows:
$$\mathcal{A} = \left\{ (g, \mu) \;\middle|\; \begin{array}{l} \Xi \in \mathcal{X} = \mathcal{X}_1 \times \mathcal{X}_2 \times \mathcal{X}_3 \times \mathcal{X}_4, \\ g : \mathcal{X} \to \mathcal{Y} \text{ is known, equal to the Kriging metamodel}, \\ \mu = \mu_1 \otimes \mu_2 \otimes \mu_3 \otimes \mu_4, \\ \mathbb{E}_{\Xi \sim \mu}[g(\Xi)] = \mathrm{LD} \end{array} \right\}.$$
A priori, finding $\overline{P}_{\mathcal{A}}(x)$ is not computationally tractable because it requires a search over an infinite-dimensional space of probability measures defined by $\mathcal{A}$. Nevertheless, as described briefly in Remark 1, it has been shown in Owhadi et al. [7] that this optimisation problem can be reduced to a finite-dimensional one, in which the probability measures are products of finite convex combinations of Dirac masses.
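Under this reduction, with the single expectation constraint of (17), the extremal measures are products of at most two Dirac masses per input. A rough sketch of the resulting finite-dimensional problem is shown below; it uses SciPy's differential evolution and a quadratic penalty in place of the authors' mystic-based constrained setup, and the surrogate g_surrogate, the threshold X_THR, and the constraint value LD are placeholders, not values from the paper.

```python
import numpy as np
from itertools import product
from scipy.optimize import differential_evolution

LO, HI = -0.0025, 0.0025   # common range of the four bump amplitudes (Table 1), in units of c
LD = 55.0                  # placeholder for the prescribed expected lift-to-drag ratio
X_THR = 60.0               # placeholder performance threshold x
PENALTY = 1.0e3            # penalty weight enforcing E[g] = LD

def g_surrogate(xi):
    """Stand-in for the Kriging metamodel of the lift-to-drag ratio."""
    return 60.0 + 2.0e3 * float(np.sum(xi))

def objective(theta):
    # theta packs, for each of the 4 inputs: a weight w_i and two support points a_i, b_i
    w, a, b = theta[0:4], theta[4:8], theta[8:12]
    prob, mean = 0.0, 0.0
    for choice in product((0, 1), repeat=4):      # the 2**4 atoms of the product measure
        sel = np.array(choice)
        pt = np.where(sel == 0, a, b)
        wt = float(np.prod(np.where(sel == 0, w, 1.0 - w)))
        val = g_surrogate(pt)
        prob += wt * (val <= X_THR)
        mean += wt * val
    return -prob + PENALTY * (mean - LD) ** 2     # maximise the probability under the mean constraint

bounds = [(0.0, 1.0)] * 4 + [(LO, HI)] * 8
res = differential_evolution(objective, bounds, seed=0, maxiter=300, tol=1e-8)
print("estimated upper bound on P[g(Xi) <= x]:", -res.fun)  # penalty term is ~0 at a feasible optimum
```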
Remark 2.
The ground truth law of each input variable given in Table 1 is only used to compute the expected value $\mathbb{E}_{\Xi \sim \mu}[g(\Xi)] = \mathrm{LD}$. This expected value is computed with $10^4$ samples.
Remark 3.
The admissible set $\mathcal{A}$ from (17) can be understood as follows:
  • One knows the range of each input parameter $(\Xi_i)_{i=1,\dots,4}$;
  • g is exactly known (it is identified with the Kriging metamodel);
  • The inputs $(\Xi_i)_{i=1,\dots,4}$ are independent;
  • One only knows the expected value of g, $\mathbb{E}_{\Xi \sim \mu}[g(\Xi)]$.
The optimisation problem of determining $\overline{P}_{\mathcal{A}}(x)$ for each chosen x was solved using the Differential Evolution algorithm of Storn and Price [15] within the mystic optimisation framework [16]. Ten iterations of Algorithm 1 have been performed using $E = 1 \times 10^{4}$. The evolution of $\overline{P}_{\mathcal{A}}(x)$ as a function of the iteration count n is shown in Figure 10. At $n = 0$ (see Figure 10a), two consistent points are present, at $x = 57.51$ and $x = 67.51$. At this step, $WA^{(0)} = 35289$. As $WA^{(0)} \geq E$, at the next step $n = 1$ the algorithm adds a new point at the middle of the biggest rectangle; see Figure 10b and Figure 11b. After $n = 10$ steps, eight points are present in total, with a minimum quality increasing from 5000 to 11,667 and a total area decreasing from 7.05 to 0.84; see Figure 11a and Figure 11b, respectively.
The number of iterations in this complex numerical experiment has been limited to 10 because obtaining a new or improved data point that is consistent throughout the optimization algorithm may take up to two days of wall-clock time for one single point, on a personal computer equipped with an Intel Core i5-6300HQ processor with 4 cores and 6 MB of cache memory. This running time increases further for data points of higher quality. Nevertheless, this experiment shows that the proposed algorithm can be used for real-world examples in an industrial context.

6. Concluding Remarks

In this paper we have developed an algorithm to reconstruct a monotonically increasing function, such as the cumulative distribution function of a real-valued random variable or the least upper bound of the performance criterion of a system as a function of its performance threshold. In particular, this latter setting has relevance to the optimal uncertainty quantification (OUQ) framework of [7], which we have in mind for applications to real-world incompletely specified systems. The algorithm uses imperfect pointwise evaluations of the target function, subject to partially controllable one-sided errors, to direct further evaluations either at new sites in the function's domain or to improve the quality of evaluations at already-evaluated sites. It allows for some flexibility in targeting either strategy through a user-defined "exchange rate" parameter, yielding an approximation of the target function with a few high-quality points or alternatively many lower-quality points. We have studied its convergence properties and have applied it to several examples: known target functions that are either continuous or discontinuous, and a performance function for the aerodynamic design of a well-documented standard profile in the OUQ setting.
Algorithm 1 is reminiscent of the classical PAVA approach to isotonic regression, which applies to statistical inference with order restrictions. Examples of its use can be found in shape-constrained or parametric density problems, as illustrated in, e.g., [6]. Possible improvements and extensions of our algorithm include weighting the areas $a_i^{(n)}$ as they are summed up to form the total weighted area $WA^{(n)}$ driving the iterative process, in order to optimally enforce both the addition of "steps" $s_i^{(n)}$ in the reconstruction function $F^{(n)}$ of Definition 1 and the improvement of their "heights" $y_i^{(n)}$. This could be achieved by considering, for example, the alternative definition $i_+^{(n)} = \arg\max_i \{ (I^{(n)} - i - 1)\, a_i^{(n)} \}$ in Algorithm 1, which results in both adding a step to the $i_+^{(n)}$-th current one and possibly improving all subsequent evaluations $y_i^{(n+1)}$, $i > i_+^{(n)}$. We may further envisage adapting the ideas elaborated in this research to the reconstruction of convex functions by extending the notion of consistency. These perspectives shall be considered in future works.

Author Contributions

Conceptualization, L.B. and T.J.S.; methodology, L.B. and T.J.S.; software, L.B.; validation, J.-L.A., É.S., and T.J.S.; formal analysis, L.B., J.-L.A., É.S., and T.J.S.; investigation, L.B.; resources, L.B., J.-L.A., É.S., and T.J.S.; data curation, L.B.; writing–original draft preparation, L.B.; writing—review and editing, L.B., J.-L.A., É.S., and T.J.S.; visualization, L.B.; supervision, É.S. and T.J.S.; project administration, T.J.S.; funding acquisition, L.B., É.S., and T.J.S. All authors have read and agreed to the published version of the manuscript.

Funding

The work of J.-L.A. and É.S. has been partially supported by ONERA within the Laboratoire de Mathématiques Appliquées pour l’Aéronautique et Spatial (LMA 2 S). L.B. is supported by a CDSN grant from the French Ministry of Higher Education (MESRI) and a grant from the German Academic Exchange Service (DAAD), Program #57442045. T.J.S. has been partially supported by the Freie Universität Berlin within the Excellence Strategy of the DFG, including project TrU-2 of the Excellence Cluster “MATH+ The Berlin Mathematics Research Center” (EXC-2046/1, project 390685689) and DFG project 415980428.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CFD   Computational Fluid Dynamics
DOAJ   Directory of Open Access Journals
MDPI   Multidisciplinary Digital Publishing Institute
OUQ   Optimal Uncertainty Quantification
PAVA   Pool-Adjacent-Violators Algorithm

References

  1. Barlow, R.E.; Bartholomew, D.J.; Bremner, J.M.; Brunk, H.D. Statistical Inference under Order Restrictions. The Theory and Application of Isotonic Regression; John Wiley & Sons: London, UK, 1972.
  2. De Leeuw, J.; Hornik, K.; Mair, P. Isotone optimization in R: Pool-Adjacent-Violators Algorithm (PAVA) and active set methods. J. Stat. Softw. 2009, 32, 1–24.
  3. Tibshirani, R.J.; Hoefling, H.; Tibshirani, R. Nearly-isotonic regression. Technometrics 2011, 53, 54–61.
  4. Jordan, A.I.; Mühlemann, A.; Ziegel, J.F. Optimal solutions to the isotonic regression problem. arXiv 2019, arXiv:1904.04761.
  5. Robertson, T.; Wright, F.T.; Dykstra, R.L. Order Restricted Statistical Inference; John Wiley & Sons: London, UK, 1988.
  6. Groeneboom, P.; Jongbloed, G. Nonparametric Estimation under Shape Constraints: Estimators, Algorithms and Asymptotics; Cambridge University Press: Cambridge, UK, 2014.
  7. Owhadi, H.; Scovel, C.; Sullivan, T.J.; McKerns, M.; Ortiz, M. Optimal Uncertainty Quantification. SIAM Rev. 2013, 55, 271–345.
  8. Sullivan, T.J.; McKerns, M.; Meyer, D.; Theil, F.; Owhadi, H.; Ortiz, M. Optimal Uncertainty Quantification for legacy data observations of Lipschitz functions. ESAIM Math. Model. Numer. Anal. 2013, 47, 1657–1689.
  9. Han, S.; Tao, M.; Topcu, U.; Owhadi, H.; Murray, R.M. Convex Optimal Uncertainty Quantification. SIAM J. Optim. 2015, 25, 1368–1387.
  10. Cook, P.H.; McDonald, M.A.; Firmin, M.C.P. Aerofoil RAE 2822 Pressure Distributions, and Boundary Layer and Wake Measurements. In Experimental Data Base for Computer Program Assessment; AGARD Advisory Report No. 138; NATO: Brussels, Belgium, 1979.
  11. Cambier, L.; Heib, S.; Plot, S. The ONERA elsA CFD software: Input from research and feedback from industry. Mech. Ind. 2013, 14, 159–174.
  12. Dumont, A.; Hantrais-Gervois, J.L.; Passaggia, P.Y.; Peter, J.; Salah el Din, I.; Savin, É. Ordinary kriging surrogates in aerodynamics. In Uncertainty Management for Robust Industrial Design in Aeronautics: Findings and Best Practice Collected During UMRIDA, a Collaborative Research Project (2013–2016) Funded by the European Union; Hirsch, C., Wunsch, D., Szumbarski, J., Łaniewski-Wołłk, Ł., Pons-Prats, J., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 229–245.
  13. Garner, H.C.; Rogers, E.W.E.; Acum, W.E.A.; Maskell, E.C. Subsonic Wind Tunnel Wall Corrections; AGARDograph 109; NATO: Brussels, Belgium, 1966.
  14. Haase, W.; Bradsma, F.; Elsholz, E.; Leschziner, M.; Schwamborn, D. EUROVAL—An European Initiative on Validation of CFD Codes; Vieweg Verlag: Wiesbaden, Germany, 1993.
  15. Storn, R.; Price, K. Differential Evolution—A simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 1997, 11, 341–359.
  16. McKerns, M.M.; Strand, L.; Sullivan, T.J.; Fang, A.; Aivazis, M.A.G. Building a framework for predictive science. In Proceedings of the 10th Python in Science Conference (SciPy 2011), Austin, TX, USA, 11–16 July 2011; van der Walt, S., Millman, J., Eds.; pp. 76–86.
Figure 1. Possible ground truth functions between two consecutive points $x_1$ and $x_2$, and our choice of piecewise constant interpolant.
Figure 2. New area when one adds a point at the middle of the biggest rectangle.
Figure 3. Evolution of $F^{(n)}$ and of the $\infty$- and 1-norms of the error $F - F^{(n)}$ as functions of the iteration count n, for a smooth ground truth F.
Figure 4. Evolution of $F^{(n)}$ and of the $\infty$- and 1-norms of the error $F - F^{(n)}$ as functions of the iteration count n, for a discontinuous ground truth F.
Figure 5. Evolution of $F^{(n)}$, the minimum of the quality, and the total area as functions of the iteration count n, for a discontinuous ground truth F with $E = 10^{-4}$.
Figure 6. Evolution of $F^{(n)}$, the minimum of the quality, and the total area as functions of the iteration count n, for a discontinuous ground truth F with $E = 10^{4}$.
Figure 7. Black lines: maximum and minimum deformation of the RAE2822 profile. Red: maximum deformation of the third bump alone. Blue: minimum deformation of the third bump alone. This image is taken from Dumont et al. [12].
Figure 8. Picture depicting the lift $C_l$ and the drag $C_d$ of an airfoil.
Figure 9. Response surface in the $(\Xi_1, \Xi_3)$ plane with $(\Xi_2 = -0.0025, \Xi_4 = 0)$ (a) and $(\Xi_2 = 0.0025, \Xi_4 = 0)$ (b). These images are taken from Dumont et al. [12].
Figure 10. Evolution of $\overline{P}_{\mathcal{A}}(x)$ as a function of the iteration count n.
Figure 11. Evolution of the minimum of the quality and the total area as functions of the iteration count n.
Table 1. Range of each input parameter.
Parameter | Range | Law
Bump 1: $\Xi_1$ | [−0.0025c, +0.0025c] | $\mu_1$: Beta law with $\alpha = 6$, $\beta = 6$
Bump 2: $\Xi_2$ | [−0.0025c, +0.0025c] | $\mu_2$: Beta law with $\alpha = 2$, $\beta = 2$
Bump 3: $\Xi_3$ | [−0.0025c, +0.0025c] | $\mu_3$: Beta law with $\alpha = 2$, $\beta = 2$
Bump 4: $\Xi_4$ | [−0.0025c, +0.0025c] | $\mu_4$: Beta law with $\alpha = 2$, $\beta = 2$
