# A Comparative Study of Infill Sampling Criteria for Computationally Expensive Constrained Optimization Problems


## Abstract


## 1. Introduction

## 2. Background

#### 2.1. Gaussian Process Modeling
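As a concrete reference point for the posterior formulas used by all four infill criteria, the following is a minimal NumPy sketch of GP regression with a squared-exponential kernel: $\mu(x)=k_*^\top K^{-1}y$ and $\sigma^2(x)=k(x,x)-k_*^\top K^{-1}k_*$. This is our own illustration with fixed kernel hyperparameters (`length`, `var`); the experiments in the paper use the R package DiceKriging, which also estimates the hyperparameters from data.

```python
import numpy as np

def sq_exp_kernel(A, B, length=0.5, var=1.0):
    """Squared-exponential covariance k(a, b) = var * exp(-|a-b|^2 / (2 length^2))."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_posterior(X, y, Xs, nugget=1e-8):
    """Posterior mean and pointwise standard deviation of a zero-mean GP at Xs."""
    K = sq_exp_kernel(X, X) + nugget * np.eye(len(X))   # nugget for numerical stability
    Ks = sq_exp_kernel(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)                   # k_*^T K^{-1} y
    var = np.diag(sq_exp_kernel(Xs, Xs)) - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 0.0))

# With a negligible nugget the GP interpolates: at an observed site the
# posterior mean reproduces the observation and the uncertainty collapses.
X = np.array([[0.0], [0.5], [1.0]])
y = np.sin(X).ravel()
mu, sd = gp_posterior(X, y, X)
```

Away from the data the posterior standard deviation grows again, which is exactly the exploration signal the infill criteria below trade off against the predicted mean.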

#### 2.2. Constrained Bayesian Optimization

**Algorithm 1** Constrained Bayesian Optimization (CBO)

**Require:** objective function $f$; constraint functions $g^1,\dots,g^m$; infill sampling criterion $\alpha$; initial design points $X_n=\{x_1,x_2,\dots,x_n\}$ with observations $f_n=\{f(X_n)\}$ and $g_n=\{g^1(X_n),\dots,g^m(X_n)\}$

**repeat**

1. Fit or update the GPs for the objective and the constraint functions
2. Maximize the infill sampling criterion: $x_{n+1}=\operatorname{argmax}_{x\in D}\,\alpha(x)$
3. Evaluate $f(x_{n+1})$ and $g^1(x_{n+1}),\dots,g^m(x_{n+1})$
4. Add the new data to the observation sets $X_n$, $f_n$, and $g_n$
5. Update the counter $n\leftarrow n+1$

**until** the termination condition is met

**return** best solution found
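Algorithm 1 can be exercised end to end on a toy problem. The sketch below is our own minimal Python illustration, not the paper's implementation (the experiments use DiceKriging/DiceOptim in R): it fits squared-exponential GPs with fixed hyperparameters to a 1-d objective and constraint, and performs the inner maximization of an EI-times-probability-of-feasibility criterion over a fixed candidate grid.

```python
import math
import numpy as np

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

npdf, ncdf = np.vectorize(norm_pdf), np.vectorize(norm_cdf)

def gp_posterior(X, y, Xs, length=0.2, nugget=1e-6):
    """Posterior mean/sd of a zero-mean GP with a squared-exponential kernel."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length**2)
    K = k(X, X) + nugget * np.eye(len(X))
    Ks = k(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-16))

# Toy problem: minimize f(x) = (x - 0.7)^2 s.t. g(x) = x - 0.5 <= 0 on [0, 1];
# the constrained optimum sits on the constraint boundary at x* = 0.5.
f = lambda x: (x - 0.7) ** 2
g = lambda x: x - 0.5

X = np.array([0.05, 0.2, 0.45, 0.75, 0.95])   # initial design (contains feasible points)
fX, gX = f(X), g(X)
cand = np.linspace(0.0, 1.0, 201)             # candidate grid for the inner maximization

for _ in range(15):
    # Step 1: fit/update the GPs for the objective and the constraint.
    mu_f, sd_f = gp_posterior(X, fX, cand)
    mu_g, sd_g = gp_posterior(X, gX, cand)
    # Step 2: maximize the infill criterion (EI times probability of feasibility).
    f_best = fX[gX <= 0].min()                # best feasible observation so far
    u = (f_best - mu_f) / sd_f
    alpha = ((f_best - mu_f) * ncdf(u) + sd_f * npdf(u)) * ncdf(-mu_g / sd_g)
    x_new = cand[np.argmax(alpha)]
    # Steps 3-5: evaluate the true functions and augment the observation sets.
    X = np.append(X, x_new)
    fX, gX = np.append(fX, f(x_new)), np.append(gX, g(x_new))

feas = gX <= 0
x_best, f_feas_best = X[feas][np.argmin(fX[feas])], fX[feas].min()
```

The loop quickly concentrates samples near the active constraint boundary at $x^*=0.5$; in practice the candidate grid is replaced by a proper auxiliary optimizer, and the termination condition is an evaluation budget.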

#### 2.3. Infill Sampling Criteria

#### 2.3.1. Expected Feasible Improvement (EFI)

#### 2.3.2. Constrained Expected Improvement (CEI)
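Both EFI and CEI are built on the same product form $\alpha(x)=\mathrm{EI}(x)\cdot \Pr[\text{feasible at }x]$; with independent constraint GPs the feasibility probability factorizes as $\prod_j \Phi(-\mu_j(x)/\sigma_j(x))$. A stdlib-only sketch of this closed form (our construction, for minimization):

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(f_best, mu, sd):
    """Closed-form EI for minimization under a Gaussian posterior N(mu, sd^2)."""
    if sd <= 0.0:
        return max(0.0, f_best - mu)
    u = (f_best - mu) / sd
    return (f_best - mu) * norm_cdf(u) + sd * norm_pdf(u)

def constrained_ei(f_best, mu_f, sd_f, mu_g, sd_g):
    """EI times the probability that every constraint posterior is non-positive
    (constraints modeled as independent GPs; g(x) <= 0 means feasible)."""
    pof = 1.0
    for m, s in zip(mu_g, sd_g):
        pof *= norm_cdf(-m / s)
    return expected_improvement(f_best, mu_f, sd_f) * pof
```

A point whose constraint posterior is confidently positive contributes a feasibility probability near zero, so the criterion suppresses it no matter how promising its predicted objective is.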

#### 2.3.3. Stepwise Uncertainty Reduction (SUR)

#### 2.3.4. Augmented Lagrangian (AL)
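The AL approach replaces the constrained problem by a sequence of unconstrained merit functions $L(x;\lambda,\rho)=f(x)+\lambda^\top g(x)+\frac{1}{2\rho}\sum_j \max(0,g_j(x))^2$. The sketch below is our own illustration of that composite and of one common multiplier/penalty update schedule (ascend $\lambda$, halve $\rho$ while infeasible); the exact schedule used by an AL-based CBO implementation may differ.

```python
def augmented_lagrangian(fx, gx, lam, rho):
    """AL merit value at one point: f + lam^T g + (1/(2 rho)) * sum max(0, g)^2."""
    penalty = sum(l * g for l, g in zip(lam, gx))
    penalty += sum(max(0.0, g) ** 2 for g in gx) / (2.0 * rho)
    return fx + penalty

def update_multipliers(gx, lam, rho):
    """After each outer iteration: ascend the multipliers at the new point and,
    if that point is infeasible, tighten the penalty parameter."""
    lam = [max(0.0, l + g / rho) for l, g in zip(lam, gx)]
    if any(g > 0 for g in gx):
        rho = rho / 2.0
    return lam, rho
```

In a GP-based setting the merit value is not observed directly; instead the GPs of $f$ and $g$ induce a (non-Gaussian) posterior on $L$, over which an expected-improvement-like criterion is computed.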

## 3. Empirical Experiments

#### 3.1. Experimental Setup

#### 3.1.1. Test Problem Description

#### 3.1.2. Experimental Settings

#### 3.1.3. Comparison Metrics

- **Quality of the final solution**, represented by the mean and standard deviation of the best feasible solution found;
- **Efficiency and speed of finding a feasible region**, represented by the number of trials (out of 20) for which a feasible solution is found and by how quickly the first feasible solution is found;
- **Total number of feasible points sampled**, i.e., the proportion of feasible solutions among all observations.
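These three metrics are straightforward to compute from a per-evaluation log of a single trial; the helper below (our construction, with hypothetical argument names) returns the best feasible value, the 1-based iteration of the first feasible sample (or `None` if none was found), and the feasible proportion.

```python
def run_metrics(objective, feasible):
    """Summarize one optimization trial.

    objective: list of observed objective values, one per evaluation.
    feasible:  list of booleans, True where all constraints were satisfied.
    """
    feas_vals = [f for f, ok in zip(objective, feasible) if ok]
    best = min(feas_vals) if feas_vals else None                    # metric 1
    first = next((i + 1 for i, ok in enumerate(feasible) if ok), None)  # metric 2
    ratio = sum(feasible) / len(feasible)                           # metric 3
    return best, first, ratio
```

Aggregating `best` over 20 such trials yields the mean/standard deviation reported in the tables, while trials with `best is None` are the bracketed failure counts.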

#### 3.2. Results of Constrained Bayesian Optimization

**Scenario 1:** 10 uniform points, all generated outside the feasible regions;

**Scenario 2:** Latin hypercube design points, with the number of initial points for each problem given in Table 2.

#### 3.2.1. Scenario 1: Infeasible Initial Design Points

#### 3.2.2. Scenario 2: Latin Hypercube Initial Design Points

## 4. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## Appendix A. Problem Definition

#### Appendix A.1. Benchmark Problems

- Problem G02 (modified, 2d)
  $$\begin{aligned}
  \underset{x}{\text{minimize}}\quad & f(x) = -\left|\frac{\cos^4(x_1)+\cos^4(x_2)-2\cos^2(x_1)\cos^2(x_2)}{\sqrt{x_1^2+2x_2^2}}\right| \\
  \text{subject to}\quad & g_1(x) = 0.75 - x_1 x_2 \le 0 \\
  & g_2(x) = x_1 + x_2 - 15 \le 0 \\
  & 0 \le x_i \le 10 \quad (i=1,2).
  \end{aligned}$$
- Problem G03 (modified, 2d)
  $$\begin{aligned}
  \underset{x}{\text{minimize}}\quad & f(x) = -2 x_1 x_2 \\
  \text{subject to}\quad & g_1(x) = x_1^2 + x_2^2 - 1 = 0 \\
  & 0 \le x_i \le 1 \quad (i=1,2).
  \end{aligned}$$
- Problem G04
  $$\begin{aligned}
  \underset{x}{\text{minimize}}\quad & f(x) = 5.3578547 x_3^2 + 0.8356891 x_1 x_5 + 37.293239 x_1 - 40792.141 \\
  \text{subject to}\quad & g_1(x) = 85.334407 + 0.0056858 x_2 x_5 + 0.0006262 x_1 x_4 - 0.0022053 x_3 x_5 - 92 \le 0 \\
  & g_2(x) = -85.334407 - 0.0056858 x_2 x_5 - 0.0006262 x_1 x_4 + 0.0022053 x_3 x_5 \le 0 \\
  & g_3(x) = 80.51249 + 0.0071317 x_2 x_5 + 0.0029955 x_1 x_2 + 0.0021813 x_3^2 - 110 \le 0 \\
  & g_4(x) = -80.51249 - 0.0071317 x_2 x_5 - 0.0029955 x_1 x_2 - 0.0021813 x_3^2 + 90 \le 0 \\
  & g_5(x) = 9.300961 + 0.0047026 x_3 x_5 + 0.0012547 x_1 x_3 + 0.0019085 x_3 x_4 - 25 \le 0 \\
  & g_6(x) = -9.300961 - 0.0047026 x_3 x_5 - 0.0012547 x_1 x_3 - 0.0019085 x_3 x_4 + 20 \le 0 \\
  & 78 \le x_1 \le 102,\ 33 \le x_2 \le 45,\ \text{and}\ 27 \le x_i \le 45 \quad (i=3,4,5).
  \end{aligned}$$
  The global minimum of G04 is $x^* = (78, 33, 29.9953, 45, 36.7758)$, and $f(x^*) = -30{,}665.5387$.
- Problem G06
  $$\begin{aligned}
  \underset{x}{\text{minimize}}\quad & f(x) = (x_1-10)^3 + (x_2-20)^3 \\
  \text{subject to}\quad & g_1(x) = -(x_1-5)^2 - (x_2-5)^2 + 100 \le 0 \\
  & g_2(x) = (x_1-6)^2 + (x_2-5)^2 - 82.81 \le 0 \\
  & 13 \le x_1 \le 100\ \text{and}\ 0 \le x_2 \le 100.
  \end{aligned}$$
  The global minimum of G06 is $x^* = (14.095, 0.843)$, and $f(x^*) = -6961.814$.
- Problem G08
  $$\begin{aligned}
  \underset{x}{\text{minimize}}\quad & f(x) = -\frac{\sin^3(2\pi x_1)\sin(2\pi x_2)}{x_1^3 (x_1+x_2)} \\
  \text{subject to}\quad & g_1(x) = x_1^2 - x_2 + 1 \le 0 \\
  & g_2(x) = 1 - x_1 + (x_2-4)^2 \le 0 \\
  & 0 \le x_i \le 10 \quad (i=1,2).
  \end{aligned}$$
  The global minimum of G08 is $x^* = (1.228, 4.24537)$, and $f(x^*) = -0.09583$.
- Problem G09
  $$\begin{aligned}
  \underset{x}{\text{minimize}}\quad & f(x) = (x_1-10)^2 + 5(x_2-12)^2 + x_3^4 + 3(x_4-11)^2 + 10 x_5^6 \\
  & \quad + 7 x_6^2 + x_7^4 - 4 x_6 x_7 - 10 x_6 - 8 x_7 \\
  \text{subject to}\quad & g_1(x) = -127 + 2 x_1^2 + 3 x_2^4 + x_3 + 4 x_4^2 + 5 x_5 \le 0 \\
  & g_2(x) = -282 + 7 x_1 + 3 x_2 + 10 x_3^2 + x_4 - x_5 \le 0 \\
  & g_3(x) = -196 + 23 x_1 + x_2^2 + 6 x_6^2 - 8 x_7 \le 0 \\
  & g_4(x) = 4 x_1^2 + x_2^2 - 3 x_1 x_2 + 2 x_3^2 + 5 x_6 - 11 x_7 \le 0 \\
  & -10 \le x_i \le 10 \quad (i=1,\dots,7).
  \end{aligned}$$
  The global minimum of G09 is $x^* = (2.3305, 1.95137, -0.4775, 4.3657, -0.6244, 1.0381, 1.5942)$, and $f(x^*) = 680.63$.
- Problem G11
  $$\begin{aligned}
  \underset{x}{\text{minimize}}\quad & f(x) = x_1^2 + (x_2-1)^2 \\
  \text{subject to}\quad & g_1(x) = x_2 - x_1^2 = 0 \\
  & -1 \le x_i \le 1 \quad (i=1,2).
  \end{aligned}$$
  The global minimum of G11 is $x^* = (-0.707, 0.5)$, and $f(x^*) = 0.7499$.
- Problem G12
  $$\begin{aligned}
  \underset{x}{\text{minimize}}\quad & f(x) = \frac{-\left(100 - (x_1-5)^2 - (x_2-5)^2 - (x_3-5)^2\right)}{100} \\
  \text{subject to}\quad & g_1(x) = (x_1-5)^2 + (x_2-5)^2 + (x_3-5)^2 - 0.0625 \le 0 \\
  & 0 \le x_i \le 10 \quad (i=1,2,3).
  \end{aligned}$$
  The global minimum of G12 is $x^* = (5, 5, 5)$, and $f(x^*) = -1$.
- Problem G24
  $$\begin{aligned}
  \underset{x}{\text{minimize}}\quad & f(x) = -x_1 - x_2 \\
  \text{subject to}\quad & g_1(x) = -2 x_1^4 + 8 x_1^3 - 8 x_1^2 + x_2 - 2 \le 0 \\
  & g_2(x) = -4 x_1^4 + 32 x_1^3 - 88 x_1^2 + 96 x_1 + x_2 - 36 \le 0 \\
  & 0 \le x_1 \le 3\ \text{and}\ 0 \le x_2 \le 4.
  \end{aligned}$$
  The global minimum of G24 is $x^* = (2.3295, 3.17849)$, and $f(x^*) = -5.50801$.
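These analytic test problems are easy to transcribe and sanity-check in code. A minimal transcription of G06 (our own, using the convention that $g(x)\le 0$ is feasible) confirms that both constraints are active at the reported optimum:

```python
def g06(x1, x2):
    """Problem G06: objective and the two inequality constraints (g <= 0 feasible)."""
    f = (x1 - 10.0) ** 3 + (x2 - 20.0) ** 3
    g1 = -(x1 - 5.0) ** 2 - (x2 - 5.0) ** 2 + 100.0
    g2 = (x1 - 6.0) ** 2 + (x2 - 5.0) ** 2 - 82.81
    return f, (g1, g2)

# At the (rounded) reported optimum x* = (14.095, 0.843), both constraints are
# active: g1 and g2 are within ~1e-3 of zero, and f is within 0.05 of -6961.814.
f_star, (g1_star, g2_star) = g06(14.095, 0.843)
```

Because both constraints are tight and the feasible region is a thin crescent (feasibility ratio 0.00 in Table 1), G06 is a stress test for how quickly a criterion can locate feasibility at all.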

#### Appendix A.2. Pressure Vessel Design Problem

## References


**Figure 1.** Distribution of the iteration number at which the first feasible solution is found (Scenario 1) for the criteria: expected feasible improvement (EFI), constrained expected improvement (CEI), stepwise uncertainty reduction (SUR), and augmented Lagrangian (AL). The lower the plot, the faster the criterion finds the first feasible solution.

**Figure 2.** Distribution of the number of feasible solutions over the course of 100 iterations (Scenario 1). The higher the plot, the more frequently feasible solutions are visited.

**Figure 3.** Distribution of the iteration number at which the first feasible solution is found (Scenario 2). The lower the plot, the faster the criterion finds the first feasible solution.

**Figure 4.** Distribution of the number of feasible solutions over the course of 200 iterations (Scenario 2). The higher the plot, the more frequently feasible solutions are visited.

| Problem | d | Characteristic | $f(x^*)$ | No. of Constr. | Type of Constr. | Feasibility Ratio |
|---|---|---|---|---|---|---|
| G02 | 2 | Nonlinear | - | 2 | inequality | 0.83 |
| G03 | 2 | Polynomial | - | 1 | equality | 0.01 |
| G04 | 5 | Quadratic | −30,665.539 | 6 | inequality | 0.26 |
| G06 | 2 | Cubic | −6961.814 | 2 | inequality | 0.00 |
| G08 | 2 | Nonlinear | −0.095825 | 2 | inequality | 0.01 |
| G09 | 7 | Polynomial | 680.63 | 4 | inequality | 0.01 |
| G11 | 2 | Quadratic | 0.7499 | 1 | equality | 0.01 |
| G12 | 3 | Quadratic | −1.0 | 1 | inequality | 0.00 |
| G24 | 2 | Linear | −5.508 | 2 | inequality | 0.44 |
| P.V. | 4 | Polynomial | 5821.192 | 4 | inequality | 0.40 |

**Table 2.** Number of initial feasible solutions in the Latin hypercube sampling design for Scenario 2, over 20 repetitions.

| Problem | $n_0$ | Max | Min | Mean | Sd |
|---|---|---|---|---|---|
| G02 | 10 | 10 | 6 | 7.93 | 0.91 |
| G03 | 10 | 1 | 0 | 0.23 | 0.43 |
| G04 | 25 | 10 | 5 | 6.87 | 1.33 |
| G06 | 10 | 0 | 0 | 0.00 | 0.00 |
| G08 | 10 | 1 | 0 | 0.27 | 0.45 |
| G09 | 35 | 1 | 0 | 0.03 | 0.18 |
| G11 | 10 | 1 | 0 | 0.13 | 0.35 |
| G12 | 15 | 0 | 0 | 0.00 | 0.00 |
| G24 | 10 | 7 | 2 | 4.53 | 1.11 |
| P.V. | 20 | 10 | 6 | 7.77 | 1.07 |

**Table 3.** Results for Scenario 1 after 100 iterations and 20 independent runs for the criteria: expected feasible improvement (EFI), constrained expected improvement (CEI), stepwise uncertainty reduction (SUR), and augmented Lagrangian (AL). For each problem, the number in brackets in the row “mean” gives the number of trials (out of 20) for which no feasible solution was found. The mean and standard deviation of the best feasible solution are computed over the trials that found at least one feasible solution. The row “best” gives the best solution value out of the 20 trials.

| Problem | | EFI | CEI | SUR | AL |
|---|---|---|---|---|---|
| G02 | mean | −0.354523 | −0.349112 | −0.328333 | −0.011866 (2) |
| | sd | 0.030 | 0.034 | 0.044 | 0.012 |
| | best | −0.364945 | −0.364912 | −0.364876 | −0.045774 |
| G03 | mean | −1.00493 | −1.004921 | −1.002134 | −1.004999 |
| | sd | $4.4\times {10}^{-5}$ | $7.3\times {10}^{-5}$ | 0.002 | $1.3\times {10}^{-6}$ |
| | best | −1.004988 | −1.004986 | −1.004589 | −1.005 |
| G04 | mean | −30,660.391634 | −30,658.9148 | −30,412.700791 | −30,663.696439 |
| | sd | 5.369 | 8.54 | 104.2 | 2.384 |
| | best | −30,665.385912 | −30,665.408571 | −30,557.454 | −30,665.512788 |
| G06 | mean | −6907.923157 | −6900.394384 | −3905.944359 (5) | −6961.568873 |
| | sd | 39.91 | 104.3 | 1486.2 | 0.313 |
| | best | −6953.363959 | −6954.57663 | −6290.35333 | −6961.80541 |
| G08 | mean | −0.09579 | −0.09286 | −0.090028 | −0.070216 |
| | sd | $1.1\times {10}^{-4}$ | 0.012 | 0.010 | 0.030 |
| | best | −0.095825 | −0.095825 | −0.095825 | −0.095149 |
| G09 | mean | 1061.523216 | 1054.971218 | 989.331029 | 1080.731166 |
| | sd | 93.37 | 110 | 61.89 | 162.7 |
| | best | 938.882413 | 818.435864 | 848.779923 | 834.284207 |
| G11 | mean | 0.745064 | 0.74507 | 0.747024 | 0.745004 |
| | sd | $2.4\times {10}^{-5}$ | $2.3\times {10}^{-5}$ | 0.001 | $4.6\times {10}^{-6}$ |
| | best | 0.74503 | 0.745034 | 0.745136 | 0.745 |
| G12 | mean | −1 | −1 | −0.999919 | −1 |
| | sd | $1.3\times {10}^{-7}$ | $2.1\times {10}^{-7}$ | $8.2\times {10}^{-5}$ | $3.3\times {10}^{-7}$ |
| | best | −1 | −1 | −0.999993 | −1 |
| G24 | mean | −5.506425 | −5.505559 | −5.44143 | −2.360917 |
| | sd | 0.001 | 0.002 | 0.017 | 0.640 |
| | best | −5.507484 | −5.507677 | −5.474746 | −3.103283 |

| Problem | Results |
|---|---|
| G02 | $EFI\approx CEI\prec SUR$ |
| G03 | $AL\prec EFI\approx CEI\prec SUR$ |
| G04 | $AL\prec EFI\approx CEI\prec SUR$ |
| G06 | $AL\prec CEI\prec EFI$ |
| G08 | $EFI\approx CEI\prec SUR\prec AL$ |
| G09 | $SUR\prec EFI\approx CEI\approx AL$ |
| G11 | $AL\prec EFI\approx CEI\prec SUR$ |
| G12 | $EFI\approx CEI\prec AL\prec SUR$ |
| G24 | $EFI\approx CEI\prec SUR\prec AL$ |

**Table 5.** Results for Scenario 2 after 200 iterations and 20 independent runs. For each problem, the number in brackets in the row “mean” gives the number of trials (out of 20) for which no feasible solution was found. The mean and standard deviation of the best feasible solution are computed over the trials that found at least one feasible solution. The row “best” gives the best solution value out of the 20 trials.

| Problem | | EFI | CEI | SUR | AL |
|---|---|---|---|---|---|
| G02 | mean | −0.364025 | −0.364312 | −0.359966 | −0.015449 (4) |
| | sd | 0.001 | $6.4\times {10}^{-4}$ | 0.003 | 0.009 |
| | best | −0.364952 | −0.364964 | −0.363933 | −0.026312 |
| G03 | mean | −1.004955 | −1.004936 | −1.002705 | −1.004999 |
| | sd | $4.5\times {10}^{-5}$ | $6.2\times {10}^{-5}$ | 0.001 | $1.2\times {10}^{-6}$ |
| | best | −1.004995 | −1.004993 | −1.004749 | −1.005 |
| G04 | mean | −30,657.612772 | −30,656.937017 | −30,304.856407 | −30,663.376405 |
| | sd | 11.773 | 11.29 | 110.9 | 2.117 |
| | best | −30,663.739482 | −30,664.743145 | −30,509.511844 | −30,665.506828 |
| G06 | mean | −6918.658417 | −6910.902959 | −4183.120175 | −6961.75848 |
| | sd | 24.591 | 57.828 | 1492 | 0.115 |
| | best | −6948.863162 | −6952.440967 | −6805.861054 | −6961.813406 |
| G08 | mean | −0.095822 | −0.095379 | −0.093684 | −0.061701 (2) |
| | sd | $6.9\times {10}^{-6}$ | 0.002 | 0.003 | 0.044 |
| | best | −0.095825 | −0.095825 | −0.095823 | −0.095392 |
| G09 | mean | 921.327255 | 865.05661 | 908.279316 | 887.020006 |
| | sd | 75.58 | 74 | 63.3 | 86.42 |
| | best | 771.65775 | 749.67991 | 780.921707 | 762.976464 |
| G11 | mean | 0.745055 | 0.74506 | 0.746072 | 0.745002 |
| | sd | $3.6\times {10}^{-5}$ | $3.3\times {10}^{-5}$ | $5.7\times {10}^{-4}$ | $1.3\times {10}^{-6}$ |
| | best | 0.745022 | 0.745024 | 0.745058 | 0.745 |
| G12 | mean | −1 | −1 | −0.999931 | −1 |
| | sd | $2.6\times {10}^{-8}$ | $3.1\times {10}^{-8}$ | $7.5\times {10}^{-5}$ | $4.4\times {10}^{-7}$ |
| | best | −1 | −1 | −0.999999 | −1 |
| G24 | mean | −5.506132 | −5.506621 | −5.455703 | −1.942322 |
| | sd | 0.001 | $8.3\times {10}^{-4}$ | 0.024 | 0.620 |
| | best | −5.507517 | −5.507572 | −5.499659 | −3.938104 |
| P.V. | mean | 5941.776519 | 5916.992288 | 6147.141779 | 394,221.943568 |
| | sd | 63.6 | 25.6 | 133 | 150,000 |
| | best | 5888.547144 | 5888.510311 | 5913.191216 | 300,359.489514 |

| Problem | Results |
|---|---|
| G02 | $EFI\approx CEI\prec SUR$ |
| G03 | $AL\prec EFI\approx CEI\prec SUR$ |
| G04 | $AL\prec EFI\approx CEI\prec SUR$ |
| G06 | $AL\prec EFI\approx CEI\prec SUR$ |
| G08 | $EFI\approx CEI\prec SUR$ |
| G09 | $AL\approx CEI\prec EFI\approx SUR$ |
| G11 | $AL\prec EFI\approx CEI\prec SUR$ |
| G12 | $EFI\approx CEI\prec AL\prec SUR$ |
| G24 | $EFI\approx CEI\prec SUR\prec AL$ |
| P.V. | $EFI\approx CEI\prec SUR\prec AL$ |

| Problem | Scenario 1 | Scenario 2 |
|---|---|---|
| G02 | EFI, CEI | EFI, CEI |
| G03 | AL | AL |
| G04 | AL | AL |
| G06 | AL | AL |
| G08 | EFI, CEI | EFI, CEI |
| G09 | SUR | CEI, AL |
| G11 | AL | AL |
| G12 | EFI, CEI | EFI, CEI |
| G24 | EFI, CEI | EFI, CEI |
| P.V. | - | EFI, CEI |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Chaiyotha, K.; Krityakierne, T.
A Comparative Study of Infill Sampling Criteria for Computationally Expensive Constrained Optimization Problems. *Symmetry* **2020**, *12*, 1631.
https://doi.org/10.3390/sym12101631
