Adaptive Reconstruction of Imperfectly Observed Monotone Functions, with Applications to Uncertainty Quantification
Abstract
1. Introduction
2. Notation and Problem Description
 Consistency ${c}_{i}\in \{0,1\}$: This flag records whether two successive points are monotonically consistent with each other. That is, for two input values ${x}_{2}>{x}_{1}$, one should have ${y}_{2}\ge {y}_{1}$, since $y$ must be monotonically increasing. No consistency flag is associated with the very first data point, as it has no predecessor.
 Reliability ${r}_{i}\in {\mathbb{R}}_{+}$: This describes how confident we are in the numerical value. Typically, it is related to an error estimator, if one is available, or to the choice of optimisation parameters. The higher the reliability, the closer the pointwise observation is expected to be to the true value, on average.
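As an illustration, the consistency flags above can be computed in a few lines. The following is a minimal sketch (not the authors' implementation), assuming the observations are stored as parallel lists sorted by $x$:

```python
# Minimal sketch: consistency flags c_i for observations (x_i, y_i),
# assumed sorted by x. The first point has no flag (no predecessor).
def consistency_flags(xs, ys):
    """Return c_i in {0, 1} for each point from the second one onwards."""
    flags = []
    for i in range(1, len(xs)):
        # c_i = 1 when the pair (i-1, i) respects monotonicity:
        # x_i > x_{i-1} must come with y_i >= y_{i-1}.
        flags.append(1 if ys[i] >= ys[i - 1] else 0)
    return flags

# Example: the third observation violates monotonicity.
print(consistency_flags([0.0, 0.5, 1.0], [0.1, 0.4, 0.3]))  # -> [1, 0]
```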
3. Reconstruction Algorithms
3.1. Algorithm
Algorithm 1: Adaptive algorithm to reconstruct a monotonically increasing function ${F}^{\dagger}$ 
Input: ${I}^{(0)}\ge 2$, ${\{{x}_{i}^{(0)},{y}_{i}^{(0)},{q}_{i}^{(0)}\}}_{i=1}^{{I}^{(0)}}$ and $\mathcal{E}$. 
Output: ${\{{x}_{i}^{(N)},{y}_{i}^{(N)},{q}_{i}^{(N)}\}}_{i=1}^{{I}^{(N)}}$ with ${I}^{(N)}\ge {I}^{(0)}$. 
Initialization: 
Get the worst-quality point and its index:
$${q}_{-}^{(0)}=\underset{1\le i\le {I}^{(0)}}{\min}\{{q}_{i}^{(0)}\}\,,\qquad {i}_{-}^{(0)}=\underset{1\le i\le {I}^{(0)}}{\mathrm{arg\,min}}\{{q}_{i}^{(0)}\}\,.$$
Compute the area of each pair of data points: ${a}_{i}^{(0)}=({x}_{i+1}^{(0)}-{x}_{i}^{(0)})\times ({y}_{i+1}^{(0)}-{y}_{i}^{(0)})$. 
Get the biggest rectangle and its index:
$${a}_{+}^{(0)}=\underset{1\le i\le {I}^{(0)}-1}{\max}\{{a}_{i}^{(0)}\}\,,\qquad {i}_{+}^{(0)}=\underset{1\le i\le {I}^{(0)}-1}{\mathrm{arg\,max}}\{{a}_{i}^{(0)}\}\,.$$
Define the weighted area at step $n=0$ as ${\mathrm{WA}}^{(0)}={q}_{-}^{(0)}\times {\displaystyle \sum _{i=1}^{{I}^{(0)}-1}}{a}_{i}^{(0)}$. 
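The initialisation steps above can be sketched in a few lines of code. This is a sketch, not the authors' implementation; the helper name `initialise` and the parallel-list data layout (lists of $x$, $y$ and quality values $q$, sorted by $x$) are our assumptions:

```python
# Sketch of the initialisation of Algorithm 1 (assumed data layout:
# parallel lists xs, ys, qs sorted by x; `initialise` is a hypothetical name).
def initialise(xs, ys, qs):
    # Worst-quality point and its index: q_-, i_-.
    q_minus = min(qs)
    i_minus = qs.index(q_minus)
    # Area of each pair of successive points: a_i = (x_{i+1}-x_i)*(y_{i+1}-y_i).
    areas = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i])
             for i in range(len(xs) - 1)]
    # Biggest rectangle and its index: a_+, i_+.
    a_plus = max(areas)
    i_plus = areas.index(a_plus)
    # Weighted area: WA = q_- * sum_i a_i.
    wa = q_minus * sum(areas)
    return q_minus, i_minus, a_plus, i_plus, wa
```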
 If ${\mathrm{WA}}^{(n)}<\mathcal{E}$, then the algorithm aims to increase the quality ${q}_{-}^{(n)}$ of the worst data point (the one with the lowest quality), with index ${i}_{-}^{(n)}={\mathrm{arg\,min}}_{1\le i\le {I}^{(n)}}\{{q}_{i}^{(n)}\}$ at step $n$. It stores the corresponding old value ${y}_{\mathrm{old}}$, searches for a new value ${y}_{\mathrm{new}}$ by successively improving the quality of this very point, and stops when ${y}_{\mathrm{new}}>{y}_{\mathrm{old}}$.
 If ${\mathrm{WA}}^{(n)}\ge \mathcal{E}$, then the algorithm aims to drive the total area ${\mathrm{A}}^{(n)}$ to zero. To this end, it identifies the biggest rectangle and its index:
$${a}_{+}^{(n)}=\underset{1\le i\le {I}^{(n)}-1}{\max}\{{a}_{i}^{(n)}\}\,,\qquad {i}_{+}^{(n)}=\underset{1\le i\le {I}^{(n)}-1}{\mathrm{arg\,max}}\{{a}_{i}^{(n)}\}\,.$$
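The two branches of the adaptive loop can be sketched as below. This is only an illustration: the helpers `improve_quality` and `evaluate_midpoint` stand in for the user-supplied black box, and the choice of refining the biggest rectangle at its midpoint is our assumption, not a detail stated here:

```python
# High-level sketch of the adaptive loop of Algorithm 1. The callbacks
# `improve_quality` and `evaluate_midpoint` are hypothetical stand-ins
# for the user-supplied (e.g. optimisation-based) evaluator.
def adaptive_loop(xs, ys, qs, eps, improve_quality, evaluate_midpoint,
                  max_iter=100):
    for _ in range(max_iter):
        areas = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i])
                 for i in range(len(xs) - 1)]
        q_minus = min(qs)
        wa = q_minus * sum(areas)  # weighted area WA^(n)
        if wa < eps:
            # Branch 1: raise the quality of the worst point until its
            # value changes (the paper's stopping rule y_new > y_old).
            i_minus = qs.index(q_minus)
            ys[i_minus], qs[i_minus] = improve_quality(
                xs[i_minus], ys[i_minus], qs[i_minus])
        else:
            # Branch 2: drive the total area A^(n) to zero by adding a
            # new point inside the biggest rectangle (midpoint assumed).
            i_plus = areas.index(max(areas))
            x_new = 0.5 * (xs[i_plus] + xs[i_plus + 1])
            y_new, q_new = evaluate_midpoint(x_new)
            xs.insert(i_plus + 1, x_new)
            ys.insert(i_plus + 1, y_new)
            qs.insert(i_plus + 1, q_new)
    return xs, ys, qs
```

With `eps = 0` only the refinement branch runs, and the grid of abscissae is progressively bisected inside the largest rectangle.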
3.2. Proof of Convergence
 either $p\in \llbracket {p}_{k}+1,{p}_{k+1}-1\rrbracket$, in which case $\phi (p)+1=\phi (p+1)$;
 or $p={p}_{k+1}$, in which case, by our algorithm, ${\mathrm{A}}^{(n)}$ is kept constant from $n={n}_{k}+1$ to $n={m}_{k+1}$; that is, ${\mathrm{A}}^{({n}_{k}+1)}={\mathrm{A}}^{({m}_{k+1})}$, or:
$${\mathrm{A}}^{(\phi ({p}_{k+1})+1)}={\mathrm{A}}^{(\phi ({p}_{k+1}+1))}\,.$$
 If ${F}^{\dagger}$ is piecewise continuous on $[a,b]$, then ${\lim}_{n\to \infty}{F}^{(n)}(x)={F}^{\dagger}(x)$ at all points $x\in [a,b]$ where ${F}^{\dagger}$ is continuous;
 If ${F}^{\dagger}$ is continuous on $[a,b]$, then convergence holds uniformly: $\parallel {F}^{(n)}-{F}^{\dagger}{\parallel}_{\infty}\underset{n\to \infty}{\to}0$.
4. Test Cases
4.1. ${F}^{\dagger}$ Is a Continuous Function
4.2. ${F}^{\dagger}$ Is a Discontinuous Function
4.3. Influence of the User-Defined Parameter $\mathcal{E}$
4.3.1. Case $\mathcal{E}\ll 1$
4.3.2. Case $\mathcal{E}\gg 1$
5. Application to Optimal Uncertainty Quantification
5.1. Optimal Uncertainty Quantification
 Since the numerical optimisation used to determine ${\overline{P}}_{\mathcal{A}}(x)$ may be affected by errors, computing several values of ${\overline{P}}_{\mathcal{A}}(x)$ allows one to check their consistency, since the function $x\mapsto {\overline{P}}_{\mathcal{A}}(x)$ must be increasing;
 The function ${\overline{P}}_{\mathcal{A}}(x)$ can be discontinuous. Thus, by computing several of its values, one can highlight potential discontinuities and identify key threshold values of $x\mapsto {\overline{P}}_{\mathcal{A}}(x)$.
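A standard tool for restoring the monotone consistency of such noisy pointwise estimates is the Pool-Adjacent-Violators Algorithm (PAVA, listed in the abbreviations). The following is a minimal sketch with uniform weights, given purely for illustration; it is the classical isotonic-regression projection, not the adaptive algorithm of Section 3:

```python
# Minimal Pool-Adjacent-Violators sketch (uniform weights): project noisy
# estimates of an increasing function onto the set of monotone sequences.
def pava(ys):
    # Each block stores (pooled mean, number of pooled points).
    blocks = []
    for y in ys:
        blocks.append((y, 1))
        # Merge adjacent blocks while they violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            (v1, n1), (v2, n2) = blocks[-2], blocks[-1]
            blocks[-2:] = [((v1 * n1 + v2 * n2) / (n1 + n2), n1 + n2)]
    # Expand the pooled blocks back to one fitted value per input point.
    out = []
    for v, n in blocks:
        out.extend([v] * n)
    return out

print(pava([1.0, 3.0, 2.0]))  # -> [1.0, 2.5, 2.5]
```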
5.2. Test Case
 One knows the range of each input parameter ${({\Xi}_{i})}_{i=1,\cdots ,4}$;
 $g$ is exactly known, i.e., $g={g}^{\dagger}$;
 ${({\Xi}_{i})}_{i=1,\cdots ,4}$ are independent;
 One only knows the expected value of g: ${\mathbb{E}}_{\Xi \sim \mu}[g(\Xi )]$.
6. Concluding Remarks
Author Contributions
Funding
Conflicts of Interest
Abbreviations
CFD  Computational Fluid Dynamics 
DOAJ  Directory of Open Access Journals 
MDPI  Multidisciplinary Digital Publishing Institute 
OUQ  Optimal Uncertainty Quantification 
PAVA  Pool-Adjacent-Violators Algorithm 
Parameter  Range  Law
Bump 1: ${\Xi}_{1}$  $[-0.0025c, +0.0025c]$  ${\mu}_{1}^{\dagger}$: Beta law with $\alpha =6$, $\beta =6$
Bump 2: ${\Xi}_{2}$  $[-0.0025c, +0.0025c]$  ${\mu}_{2}^{\dagger}$: Beta law with $\alpha =2$, $\beta =2$
Bump 3: ${\Xi}_{3}$  $[-0.0025c, +0.0025c]$  ${\mu}_{3}^{\dagger}$: Beta law with $\alpha =2$, $\beta =2$
Bump 4: ${\Xi}_{4}$  $[-0.0025c, +0.0025c]$  ${\mu}_{4}^{\dagger}$: Beta law with $\alpha =2$, $\beta =2$
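For illustration, samples of the four bump amplitudes can be drawn from the Beta laws of the table, affinely mapped from $[0,1]$ onto the stated range. The mapping convention and the choice $c=1$ for the chord length are our assumptions in this sketch:

```python
import random

# Sketch: draw the bump amplitudes Xi_1..Xi_4 from the Beta laws of the
# table, rescaled from [0, 1] to [-0.0025c, +0.0025c] (here c = 1 by
# assumption; the affine mapping is our convention, not the paper's code).
def sample_bumps(c=1.0, seed=0):
    rng = random.Random(seed)
    alphas_betas = [(6, 6), (2, 2), (2, 2), (2, 2)]  # mu_1 ... mu_4
    lo, hi = -0.0025 * c, 0.0025 * c
    return [lo + (hi - lo) * rng.betavariate(a, b) for a, b in alphas_betas]

print(sample_bumps())  # four values inside [-0.0025, +0.0025]
```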
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
Bonnet, L.; Akian, J.L.; Savin, É.; Sullivan, T.J. Adaptive Reconstruction of Imperfectly Observed Monotone Functions, with Applications to Uncertainty Quantification. Algorithms 2020, 13, 196. https://doi.org/10.3390/a13080196