# Machine Learning Alternatives to Response Surface Models


## Abstract


## 1. Introduction

## 2. Response Surface Methodology

## 3. Machine Learning Approaches

#### 3.1. k-Nearest Neighbors (kNN)

#### 3.2. CART

#### 3.3. Ensemble Methods

#### 3.4. Support Vector Machines

#### 3.5. Neural Networks

#### 3.6. Multidimensional Output Approaches

#### 3.7. Hyperparameters and Their Tuning
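The common thread of hyperparameter tuning is a grid search: scan candidate values and keep the one minimizing a cross-validated error. As a hypothetical illustration (not the paper's code), the sketch below tunes the neighborhood size k of a one-dimensional kNN regressor — the first hyperparameter reported in Appendix A — by leave-one-out cross-validation on RMSE.

```python
import math

def knn_predict(train_x, train_y, x, k):
    """kNN regression prediction: average the responses of the k nearest
    training points (1-D inputs, absolute distance)."""
    order = sorted(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return sum(train_y[i] for i in order[:k]) / k

def tune_k(xs, ys, candidates):
    """Grid search over k, scoring each candidate by leave-one-out RMSE."""
    best_k, best_rmse = None, math.inf
    for k in candidates:
        sq_err = 0.0
        for i in range(len(xs)):
            # hold out point i, refit on the rest, predict the held-out response
            rest_x = xs[:i] + xs[i + 1:]
            rest_y = ys[:i] + ys[i + 1:]
            sq_err += (ys[i] - knn_predict(rest_x, rest_y, xs[i], k)) ** 2
        rmse = math.sqrt(sq_err / len(xs))
        if rmse < best_rmse:
            best_k, best_rmse = k, rmse
    return best_k, best_rmse
```

The same loop generalizes to any model and hyperparameter grid by swapping out the predictor and the candidate set.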

#### 3.8. Metrics Used to Compare the Models

- Root Mean Square Error, $RMSE=\sqrt{\frac{{\sum}_{i=1}^{n}{({y}_{i}-{\widehat{y}}_{i})}^{2}}{n}}$
- Mean Absolute Error, $MAE=\frac{1}{n}{\sum}_{i=1}^{n}|{y}_{i}-{\widehat{y}}_{i}|$
- Mean Absolute Percentage Error, $MAPE=\frac{1}{n}{\sum}_{i=1}^{n}\left|\frac{{y}_{i}-{\widehat{y}}_{i}}{{y}_{i}}\right|$
- Determination Coefficient, ${R}^{2}=1-\frac{{\sum}_{i=1}^{n}{({y}_{i}-{\widehat{y}}_{i})}^{2}}{{\sum}_{i=1}^{n}{({y}_{i}-\overline{y})}^{2}}$
- Akaike Information Criterion, $AIC=2k-2\ln\left(L\right)$, where k is the number of parameters to be estimated in the model and L is the maximized value of the model's likelihood function

- The explained variance, i.e., the proportion of the response variance accounted for by the factors, $EV=1-\frac{var(y-\widehat{y})}{var\left(y\right)}$
- The Nash–Sutcliffe Efficiency, analogous to the ${R}^{2}$ but based on absolute rather than squared differences, $NSE=1-\frac{{\sum}_{i=1}^{n}|{y}_{i}-{\widehat{y}}_{i}|}{{\sum}_{i=1}^{n}|{y}_{i}-\overline{y}|}$, with $-\infty <NSE\le 1$
- The agreement index d, a standardized measure of the degree of model prediction error, $d=1-\frac{{\sum}_{i=1}^{n}|{y}_{i}-{\widehat{y}}_{i}|}{{\sum}_{i=1}^{n}\left(|{y}_{i}-\overline{y}|+|{\widehat{y}}_{i}-\overline{y}|\right)}$
- The average absolute deviations from a central point (this metric is defined and used in [8], but the name chosen by the authors does not seem appropriate to us, since the deviations in their definition are signed rather than absolute), $AAD(\%)=100\times \frac{1}{n}{\sum}_{i=1}^{n}\frac{{\widehat{y}}_{i}-{y}_{i}}{{y}_{i}}$
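Since the model comparisons hinge on these metrics, a compact reference implementation may be useful. The sketch below (plain Python, hypothetical function names) computes each quantity exactly as defined above, including the absolute-difference variants of NSE and d.

```python
import math

def _var(x):
    """Population variance."""
    m = sum(x) / len(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)

def regression_metrics(y, y_hat):
    """All comparison metrics listed above, for observed y and predicted y_hat."""
    n = len(y)
    resid = [yi - yh for yi, yh in zip(y, y_hat)]
    y_bar = sum(y) / n
    abs_res = sum(abs(r) for r in resid)
    return {
        "RMSE": math.sqrt(sum(r * r for r in resid) / n),
        "MAE": abs_res / n,
        "MAPE": sum(abs(r / yi) for r, yi in zip(resid, y)) / n,
        "R2": 1 - sum(r * r for r in resid) / sum((yi - y_bar) ** 2 for yi in y),
        "EV": 1 - _var(resid) / _var(y),
        # NSE and d use absolute rather than squared differences, as above
        "NSE": 1 - abs_res / sum(abs(yi - y_bar) for yi in y),
        "d": 1 - abs_res / sum(abs(yi - y_bar) + abs(yh - y_bar)
                               for yi, yh in zip(y, y_hat)),
        # AAD keeps the sign of each relative deviation, per its definition
        "AAD%": 100 / n * sum((yh - yi) / yi for yi, yh in zip(y, y_hat)),
    }
```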

## 4. Comparisons between RSM and ML Approaches

## 5. Experiments

#### 5.1. Simulated Model

#### 5.1.1. Tuning the Models

#### 5.1.2. Results

#### 5.2. Use Case

## 6. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Acknowledgments

## Conflicts of Interest

## Appendix A. Supplementary Material

#### Appendix A.1. Mean and Standard Deviation of Hyperparameter Values

**Table A1.** Average and standard deviation (SD) of the hyperparameters for all responses with $\sigma =1$ and $n=50$.

Hyperparameters | Average Y1 | SD Y1 | Average Y2 | SD Y2 | Average Y3 | SD Y3 |
---|---|---|---|---|---|---|
k | 4.06 | 1.268 | 9.34 | 2.21 | 8.4 | 2.304 |
minsplit | 3 | 2.474 | 1.3 | 1.199 | 1.2 | 0.99 |
minbucket | 1.8 | 1.852 | 5.4 | 1.641 | 5.6 | 1.37 |
cp | 0.003 | 0.002 | 0.023 | 0.011 | 0.018 | 0.012 |
minsize | 5.46 | 3.382 | 5.54 | 3.57 | 5.44 | 3.144 |
mtry | 3.2 | 0.7 | 2.3 | 0.839 | 2.64 | 1.208 |
n.trees | 93 | 58.038 | 50 | 0 | 54 | 28.284 |
interaction.depth | 1.96 | 0.88 | 1.08 | 0.34 | 1.08 | 0.396 |
shrinkage | 0.1 | 0 | 0.1 | 0 | 0.1 | 0 |
n.minobsinnode | 10 | 0 | 10 | 0 | 10 | 0 |
nrounds | 70 | 47.38 | 50 | 0 | 51 | 7.071 |
max_depth | 1.2 | 0.495 | 1.36 | 1.064 | 1.12 | 0.521 |
eta | 0.3 | 0 | 0.306 | 0.024 | 0.304 | 0.02 |
gamma | 0 | 0 | 0 | 0 | 0 | 0 |
colsample_bytree | 0.712 | 0.1 | 0.684 | 0.1 | 0.664 | 0.094 |
min_child_weight | 1 | 0 | 1 | 0 | 1 | 0 |
subsample | 0.743 | 0.178 | 0.828 | 0.187 | 0.833 | 0.197 |
degree | 1.32 | 0.653 | 1.76 | 0.87 | 1.42 | 0.758 |
scale | 2.312 | 3.661 | 0.141 | 0.292 | 0.412 | 1.977 |
${C}_{poly}$ | 1.66 | 1.358 | 1.535 | 1.445 | 0.48 | 0.756 |
sigma | 0.537 | 0.42 | 0.525 | 0.515 | 0.497 | 0.503 |
${C}_{rad}$ | 5.08 | 7.359 | 0.35 | 0.196 | 0.46 | 1.1 |
layer1 | 4.96 | 1.355 | 5.04 | 1.525 | 3.64 | 1.549 |
layer2 | 5.12 | 1.223 | 4.64 | 1.306 | 3.64 | 1.601 |
layer3 | 4.92 | 1.523 | 5.04 | 1.355 | 3.72 | 1.715 |
minsplit | 2.8 | 2.424 | 2.8 | 2.424 | 2.8 | 2.424 |
minbucket | 2.4 | 2.268 | 2.4 | 2.268 | 2.4 | 2.268 |
cp | 0.006 | 0.006 | 0.006 | 0.006 | 0.006 | 0.006 |
layer1 | 4 | 1.512 | 4 | 1.512 | 4 | 1.512 |
layer2 | 4.04 | 1.484 | 4.04 | 1.484 | 4.04 | 1.484 |
layer3 | 3.92 | 1.712 | 3.92 | 1.712 | 3.92 | 1.712 |

**Table A2.** Average and standard deviation (SD) of the hyperparameters for all responses with $\sigma =1$ and $n=30$.

Hyperparameters | Average Y1 | SD Y1 | Average Y2 | SD Y2 | Average Y3 | SD Y3 |
---|---|---|---|---|---|---|
k | 3.08 | 1.047 | 8.28 | 2.241 | 8.52 | 2.297 |
minsplit | 1.9 | 1.94 | 1.1 | 0.707 | 1.3 | 1.199 |
minbucket | 1.3 | 1.199 | 5.7 | 1.199 | 5.7 | 1.199 |
cp | 0.006 | 0.006 | 0.013 | 0.011 | 0.013 | 0.013 |
minsize | 5.6 | 3.169 | 5.64 | 3.269 | 5.04 | 3.597 |
mtry | 3.78 | 1.016 | 2.22 | 0.737 | 2.34 | 0.917 |
n.trees | 113 | 62.93 | 57 | 26.745 | 51 | 7.071 |
interaction.depth | 1.58 | 0.499 | 1.28 | 0.454 | 1.26 | 0.443 |
shrinkage | 0.1 | 0 | 0.1 | 0 | 0.1 | 0 |
n.minobsinnode | 10 | 0 | 10 | 0 | 10 | 0 |
nrounds | 106 | 73.29 | 68 | 51.27 | 51 | 7.071 |
max_depth | 2.06 | 1.449 | 1.86 | 1.414 | 1.74 | 1.44 |
eta | 0.31 | 0.03 | 0.318 | 0.039 | 0.322 | 0.042 |
gamma | 0 | 0 | 0 | 0 | 0 | 0 |
colsample_bytree | 0.704 | 0.101 | 0.656 | 0.091 | 0.684 | 0.1 |
min_child_weight | 1 | 0 | 1 | 0 | 1 | 0 |
subsample | 0.69 | 0.189 | 0.745 | 0.197 | 0.735 | 0.198 |
degree | 1.4 | 0.639 | 1.56 | 0.787 | 1.66 | 0.895 |
scale | 1.824 | 3.359 | 0.139 | 0.293 | 0.252 | 1.421 |
${C}_{poly}$ | 1.555 | 1.226 | 1.135 | 1.292 | 0.995 | 1.243 |
sigma | 0.397 | 0.27 | 0.518 | 0.635 | 0.494 | 0.339 |
${C}_{rad}$ | 6.16 | 6.973 | 0.68 | 2.24 | 0.345 | 0.214 |
layer1 | 4.96 | 1.228 | 4.64 | 1.588 | 3.68 | 1.684 |
layer2 | 5.52 | 0.953 | 4.96 | 1.228 | 4.24 | 1.791 |
layer3 | 5.32 | 1.186 | 4.04 | 1.784 | 4.12 | 1.637 |
minsplit | 2.7 | 2.393 | 2.7 | 2.393 | 2.7 | 2.393 |
minbucket | 1.5 | 1.515 | 1.5 | 1.515 | 1.5 | 1.515 |
cp | 0.009 | 0.008 | 0.009 | 0.008 | 0.009 | 0.008 |
layer1 | 4.12 | 1.637 | 4.12 | 1.637 | 4.12 | 1.637 |
layer2 | 3.88 | 1.686 | 3.88 | 1.686 | 3.88 | 1.686 |
layer3 | 4.28 | 1.512 | 4.28 | 1.512 | 4.28 | 1.512 |

**Table A3.** Average and standard deviation (SD) of the hyperparameters for all responses with $\sigma =1$ and $n=20$.

Hyperparameters | Average Y1 | SD Y1 | Average Y2 | SD Y2 | Average Y3 | SD Y3 |
---|---|---|---|---|---|---|
k | 2.88 | 0.849 | 8.3 | 2.485 | 8.54 | 2.786 |
minsplit | 1.8 | 1.852 | 1.7 | 1.753 | 1.7 | 1.753 |
minbucket | 1 | 0 | 4.7 | 2.215 | 5.2 | 1.852 |
cp | 0.005 | 0.005 | 0.012 | 0.012 | 0.006 | 0.009 |
minsize | 5.58 | 3.308 | 5.24 | 3.384 | 4.48 | 3.209 |
mtry | 4.08 | 1.007 | 2.62 | 1.159 | 2.34 | 0.848 |
n.trees | 72 | 55.476 | 51 | 7.071 | 59 | 40.013 |
interaction.depth | 1 | 0 | 1 | 0 | 1 | 0 |
shrinkage | 0.1 | 0 | 0.1 | 0 | 0.1 | 0 |
n.minobsinnode | 10 | 0 | 10 | 0 | 10 | 0 |
nrounds | 110 | 74.231 | 62 | 38.545 | 59 | 34.538 |
max_depth | 2.72 | 1.356 | 2.26 | 1.601 | 1.68 | 1.332 |
eta | 0.324 | 0.043 | 0.32 | 0.04 | 0.32 | 0.04 |
gamma | 0 | 0 | 0 | 0 | 0 | 0 |
colsample_bytree | 0.716 | 0.1 | 0.656 | 0.091 | 0.68 | 0.099 |
min_child_weight | 1 | 0 | 1 | 0 | 1 | 0 |
subsample | 0.67 | 0.169 | 0.75 | 0.192 | 0.725 | 0.192 |
degree | 1.32 | 0.621 | 1.58 | 0.758 | 1.26 | 0.6 |
scale | 2.33 | 3.652 | 0.488 | 1.975 | 0.223 | 1.411 |
${C}_{poly}$ | 1.415 | 1.173 | 1.675 | 1.581 | 0.675 | 0.892 |
sigma | 0.308 | 0.222 | 0.323 | 0.219 | 0.695 | 1.371 |
${C}_{rad}$ | 7 | 9.318 | 0.455 | 0.562 | 0.485 | 0.64 |
layer1 | 4.76 | 1.333 | 4.08 | 1.614 | 3.68 | 1.731 |
layer2 | 5.36 | 1.174 | 4.44 | 1.68 | 4.04 | 1.641 |
layer3 | 5.6 | 0.808 | 3.52 | 1.644 | 4.08 | 1.85 |
minsplit | 2.1 | 2.092 | 2.1 | 2.092 | 2.1 | 2.092 |
minbucket | 1 | 0 | 1 | 0 | 1 | 0 |
cp | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 |
layer1 | 3.92 | 1.563 | 3.92 | 1.563 | 3.92 | 1.563 |
layer2 | 4.2 | 1.471 | 4.2 | 1.471 | 4.2 | 1.471 |
layer3 | 3.88 | 1.48 | 3.88 | 1.48 | 3.88 | 1.48 |

**Table A4.** Average and standard deviation (SD) of the hyperparameters for all responses with $\sigma =1$ and $n=15$.

Hyperparameters | Average Y1 | SD Y1 | Average Y2 | SD Y2 | Average Y3 | SD Y3 |
---|---|---|---|---|---|---|
k | 2.74 | 0.853 | 7.16 | 2.691 | 7.84 | 2.985 |
minsplit | 1.2 | 0.99 | 1.4 | 1.37 | 1.7 | 1.753 |
minbucket | 1.1 | 0.707 | 4.8 | 2.157 | 4.5 | 2.315 |
cp | 0.008 | 0.009 | 0.005 | 0.007 | 0.005 | 0.008 |
minsize | 5.72 | 3.084 | 6.32 | 3.365 | 6.48 | 3.346 |
mtry | 4.52 | 0.863 | 2.66 | 1.099 | 2.5 | 1.055 |
n.trees | 107 | 90.356 | 81 | 70.631 | 55 | 29.014 |
interaction.depth | 1 | 0 | 1 | 0 | 1 | 0 |
shrinkage | 0.1 | 0 | 0.1 | 0 | 0.1 | 0 |
n.minobsinnode | 10 | 0 | 10 | 0 | 10 | 0 |
nrounds | 115 | 76.432 | 62 | 42.33 | 68 | 52.255 |
max_depth | 2.6 | 1.539 | 2.52 | 1.581 | 2.76 | 1.611 |
eta | 0.314 | 0.035 | 0.332 | 0.047 | 0.336 | 0.048 |
gamma | 0 | 0 | 0 | 0 | 0 | 0 |
colsample_bytree | 0.728 | 0.097 | 0.652 | 0.089 | 0.664 | 0.094 |
min_child_weight | 1 | 0 | 1 | 0 | 1 | 0 |
subsample | 0.72 | 0.185 | 0.68 | 0.194 | 0.685 | 0.188 |
degree | 1.16 | 0.468 | 1.7 | 0.814 | 1.32 | 0.683 |
scale | 2.492 | 3.814 | 0.872 | 2.726 | 0.257 | 1.42 |
${C}_{poly}$ | 1.715 | 1.489 | 1.175 | 1.311 | 0.74 | 1.087 |
sigma | 0.262 | 0.184 | 0.346 | 0.23 | 0.489 | 0.55 |
${C}_{rad}$ | 3.5 | 3.066 | 1.435 | 3.188 | 1.51 | 3.361 |
layer1 | 4.52 | 1.446 | 3.92 | 1.614 | 4.04 | 1.784 |
layer2 | 5.32 | 1.186 | 4.72 | 1.604 | 4.52 | 1.607 |
layer3 | 5.44 | 0.993 | 3.48 | 1.798 | 3.72 | 1.807 |
minsplit | 2.3 | 2.215 | 2.3 | 2.215 | 2.3 | 2.215 |
minbucket | 1.2 | 0.99 | 1.2 | 0.99 | 1.2 | 0.99 |
cp | 0.007 | 0.007 | 0.007 | 0.007 | 0.007 | 0.007 |
layer1 | 4.04 | 1.641 | 4.04 | 1.641 | 4.04 | 1.641 |
layer2 | 4.12 | 1.586 | 4.12 | 1.586 | 4.12 | 1.586 |
layer3 | 4.64 | 1.367 | 4.64 | 1.367 | 4.64 | 1.367 |

**Table A5.** Average and standard deviation (SD) of the hyperparameters for all responses of the DoE case study.

Hyperparameters | Average Y1 | SD Y1 | Average Y2 | SD Y2 | Average Y3 | SD Y3 | Average Y4 | SD Y4 | Average Y5 | SD Y5 |
---|---|---|---|---|---|---|---|---|---|---|
k | 6.893 | 2.833 | 7.398 | 2.736 | 6.867 | 2.983 | 6.928 | 2.832 | 7.354 | 2.723 |
minsplit | 1.595 | 1.629 | 1.301 | 1.197 | 1.482 | 1.485 | 1.602 | 1.638 | 1.305 | 1.204 |
minbucket | 4.274 | 2.392 | 3.771 | 2.500 | 4.133 | 2.433 | 4.253 | 2.398 | 3.744 | 2.503 |
cp | 0.006 | 0.008 | 0.007 | 0.009 | 0.008 | 0.010 | 0.006 | 0.008 | 0.007 | 0.009 |
minsize | 5.905 | 3.442 | 5.157 | 3.042 | 5.964 | 3.355 | 5.952 | 3.435 | 5.159 | 3.061 |
mtry | 5.262 | 2.929 | 5.831 | 3.177 | 4.470 | 2.973 | 5.217 | 2.918 | 5.841 | 3.195 |
n.trees | 83.330 | 72.950 | 69.280 | 56.220 | 70.480 | 58.450 | 83.730 | 73.300 | 69.510 | 56.520 |
interaction.depth | 1.000 | 0.000 | 1.000 | 0.000 | 1.000 | 0.000 | 1.000 | 0.000 | 1.000 | 0.000 |
shrinkage | 0.100 | 0.000 | 0.100 | 0.000 | 0.100 | 0.000 | 0.100 | 0.000 | 0.100 | 0.000 |
n.minobsinnode | 10.000 | 0.000 | 10.000 | 0.000 | 10.000 | 0.000 | 10.000 | 0.000 | 10.000 | 0.000 |
nrounds | 138.100 | 80.520 | 147.000 | 85.310 | 150.000 | 79.630 | 136.700 | 80.050 | 148.200 | 85.150 |
max_depth | 2.786 | 1.440 | 2.928 | 1.536 | 3.000 | 1.514 | 2.771 | 1.443 | 2.939 | 1.542 |
eta | 0.356 | 0.050 | 0.351 | 0.050 | 0.354 | 0.050 | 0.355 | 0.050 | 0.351 | 0.050 |
gamma | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 |
colsample_bytree | 0.721 | 0.098 | 0.752 | 0.086 | 0.721 | 0.098 | 0.723 | 0.098 | 0.754 | 0.085 |
min_child_weight | 1.000 | 0.000 | 1.000 | 0.000 | 1.000 | 0.000 | 1.000 | 0.000 | 1.000 | 0.000 |
subsample | 0.625 | 0.178 | 0.619 | 0.155 | 0.604 | 0.143 | 0.626 | 0.178 | 0.619 | 0.156 |
degree | 1.560 | 0.750 | 1.675 | 0.783 | 1.422 | 0.665 | 1.566 | 0.752 | 1.683 | 0.784 |
scale | 1.235 | 3.072 | 1.226 | 3.093 | 1.695 | 3.612 | 1.237 | 3.090 | 1.241 | 3.109 |
${C}_{poly}$ | 0.970 | 1.020 | 1.280 | 1.452 | 0.895 | 1.190 | 0.970 | 1.026 | 1.247 | 1.429 |
sigma | 0.057 | 0.007 | 0.057 | 0.007 | 0.056 | 0.006 | 0.057 | 0.007 | 0.057 | 0.007 |
${C}_{rad}$ | 3.330 | 4.385 | 3.602 | 5.383 | 3.873 | 7.118 | 3.322 | 4.411 | 3.634 | 5.408 |
layer1 | 4.024 | 1.657 | 3.639 | 1.566 | 4.313 | 1.696 | 4.000 | 1.653 | 3.659 | 1.565 |
layer2 | 3.905 | 1.580 | 3.976 | 1.638 | 3.880 | 1.541 | 3.928 | 1.576 | 3.951 | 1.632 |
layer3 | 3.786 | 1.584 | 3.373 | 1.651 | 3.518 | 1.611 | 3.807 | 1.581 | 3.366 | 1.659 |
minsplit | 1.833 | 1.875 | 1.723 | 1.769 | 1.843 | 1.884 | 1.843 | 1.884 | 1.732 | 1.778 |
minbucket | 4.929 | 2.064 | 4.976 | 2.030 | 4.795 | 2.151 | 4.916 | 2.073 | 4.963 | 2.039 |
cp | 0.009 | 0.011 | 0.009 | 0.011 | 0.008 | 0.011 | 0.009 | 0.011 | 0.009 | 0.011 |
layer1 | 4.238 | 1.625 | 4.072 | 1.636 | 4.289 | 1.597 | 4.241 | 1.635 | 4.073 | 1.646 |
layer2 | 4.381 | 1.567 | 4.169 | 1.629 | 4.289 | 1.627 | 4.386 | 1.576 | 4.171 | 1.639 |
layer3 | 3.571 | 1.615 | 3.735 | 1.646 | 3.735 | 1.616 | 3.590 | 1.616 | 3.756 | 1.645 |

#### Appendix A.2. Boxplots of Errors for All Approaches and Responses

## References

- Zhang, Y.; Wu, Y. Introducing Machine Learning Models to Response Surface Methodologies. In Response Surface Methodology in Engineering Science; IntechOpen: London, UK, 2021.
- Paturi, U.M.R.; Reddy, N.S.; Cheruku, S.; Narala, S.K.R.; Cho, K.K.; Reddy, M.M. Estimation of coating thickness in electrostatic spray deposition by machine learning and response surface methodology. Surf. Coat. Technol. **2021**, 422, 127559.
- Lashari, N.; Ganat, T.; Otchere, D.; Kalam, S.; Ali, I. Navigating viscosity of GO-SiO_{2}/HPAM composite using response surface methodology and supervised machine learning models. J. Pet. Sci. Eng. **2021**, 205, 108800.
- Shozib, I.A.; Ahmad, A.; Rahaman, M.A.; Alam, M.; Beheshti, M.; Taufiqurrahman, I. Modelling and optimization of microhardness of electroless Ni-P-TiO_{2} composite coating based on machine learning approaches and RSM. J. Mater. Res. Technol. **2021**, 12, 1010–1025.
- Keshtegar, B.; Gholampour, A.; Thai, D.K.; Taylan, O.; Trung, N.T. Hybrid regression and machine learning model for predicting ultimate condition of FRP-confined concrete. Compos. Struct. **2021**, 262, 113644.
- Lou, H.; Chung, J.I.; Kiang, Y.H.; Xiao, L.Y.; Hageman, M.J. The application of machine learning algorithms in understanding the effect of core/shell technique on improving powder compactability. Int. J. Pharm. **2019**, 555, 368–379.
- Haque, S.; Khan, S.; Wahid, M.; Dar, S.A.; Soni, N.; Mandal, R.K.; Singh, V.; Tiwari, D.; Lohani, M.; Areeshi, M.Y.; et al. Artificial Intelligence vs. Statistical Modeling and Optimization of Continuous Bead Milling Process for Bacterial Cell Lysis. Front. Microbiol. **2016**, 7, 1852.
- Pilkington, J.L.; Preston, C.; Gomes, R.L. Comparison of response surface methodology (RSM) and artificial neural networks (ANN) towards efficient extraction of artemisinin from Artemisia annua. Ind. Crops Prod. **2014**, 58, 15–24.
- Bourquin, J.; Schmidli, H.; van Hoogevest, P.; Leuenberger, H. Advantages of Artificial Neural Networks (ANNs) as alternative modelling technique for data sets showing non-linear relationships using data from a galenical study on a solid dosage form. Eur. J. Pharm. Sci. **1998**, 7, 5–16.
- Souza Lima, E.; Lima, V.; Almeida, C.; Justi, K. Application of response surface methodology and machine learning combined with data simulation to metal determination of freshwater sediment. Water Air Soil Pollut. **2017**, 228, 370.
- Bi, Q.; Goodman, K.E.; Kaminsky, J.; Lessler, J. What is Machine Learning? A Primer for the Epidemiologist. Am. J. Epidemiol. **2019**, 188, 2222–2239.
- Crisci, C.; Terra, R.; Pacheco, J.; Ghattas, B.; Bidegain, M.; Goyenola, G.; Lagomarsino, J.; Méndez, G.; Mazzeo, N. Multi-model approach to predict phytoplankton biomass and composition dynamics in a eutrophic shallow lake governed by extreme meteorological events. Ecol. Model. **2017**, 360, 80–93.
- Myers, R.H.; Montgomery, D.C.; Anderson-Cook, C.M. Response Surface Methodology: Process and Product in Optimization Using Designed Experiments; John Wiley and Sons: New York, NY, USA, 1995.
- Sarabia, L.; Ortiz, M. 1.12—Response Surface Methodology. In Comprehensive Chemometrics; Brown, S.D., Tauler, R., Walczak, B., Eds.; Elsevier: Oxford, UK, 2009; pp. 345–390.
- Manzon, D.; Claeys-Bruno, M.; Declomesnil, S.; Carité, C.; Sergent, M. Quality by Design: Comparison of Design Space construction methods in the case of Design of Experiments. Chemom. Intell. Lab. Syst. **2020**, 200, 104002.
- dos Moreira, C.S.; Lourenço, F.R. Development and optimization of a stability-indicating chromatographic method for verapamil hydrochloride and its impurities in tablets using an analytical quality by design (AQbD) approach. Microchem. J. **2020**, 154, 104610.
- Hastie, T.; Tibshirani, R.; Friedman, J.; Franklin, J. The elements of statistical learning: Data mining, inference, and prediction. Math. Intell. **2004**, 27, 83–85.
- Breiman, L.; Friedman, J.; Stone, C.J.; Olshen, R.A. Classification and Regression Trees; Chapman and Hall/CRC: Boca Raton, FL, USA, 1984.
- Breiman, L. Random Forests. Mach. Learn. **2001**, 45, 5–32.
- Freund, Y.; Schapire, R.E. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci. **1997**, 55, 119–139.
- Natekin, A.; Knoll, A. Gradient boosting machines, a tutorial. Front. Neurorobot. **2013**, 7, 21.
- Chen, T.; He, T. xgboost: eXtreme Gradient Boosting. 2021. Available online: https://cran.r-project.org/web/packages/xgboost/vignettes/xgboost.pdf (accessed on 27 July 2023).
- Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000.
- Nerini, D.; Durbec, J.; Mante, C.; Garcia, F.; Ghattas, B. Forecasting physicochemical variables by a classification tree method: Application to the Berre Lagoon (South France). Acta Biotheor. **2001**, 48, 181–196.
- Tibshirani, R. Regression shrinkage and selection via the Lasso. J. R. Stat. Soc. Ser. B (Methodol.) **1996**, 58, 267–288.
- Nelder, J.A.; Wedderburn, R.W.M. Generalized linear models. J. R. Stat. Soc. Ser. A (Gen.) **1972**, 135, 370.
- Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature **1986**, 323, 533–536.
- Marquardt, D.W.; Snee, R.D. Ridge regression in practice. Am. Stat. **1975**, 29, 3–20.
- Geurts, P.; Ernst, D.; Wehenkel, L. Extremely randomized trees. Mach. Learn. **2006**, 63, 3–42.
- Schaffer, J.; Whitley, D.; Eshelman, L. Combinations of genetic algorithms and neural networks: A survey of the state of the art. In Proceedings of the COGANN-92: International Workshop on Combinations of Genetic Algorithms and Neural Networks, Baltimore, MD, USA, 6 June 1992; pp. 1–37.
- Jie, J.; Zeng, J.; Han, C. An extended mind evolutionary computation model for optimizations. Appl. Math. Comput. **2007**, 185, 1038–1049.
- Huang, G.B.; Zhu, Q.Y.; Siew, C.K. Extreme learning machine: Theory and applications. Neurocomputing
**2006**, 70, 489–501. [Google Scholar] [CrossRef] - Aljazzar, H.; Leue, S. K*: A heuristic search algorithm for finding the k shortest paths. Artif. Intell.
**2011**, 175, 2129–2154. [Google Scholar] [CrossRef] [Green Version] - RStudio Team. RStudio: Integrated Development Environment for R; RStudio, PBC: Boston, MA, USA, 2020. [Google Scholar]
- Kuhn, M. Building predictive models in R using the caret package. J. Stat. Softw.
**2008**, 28, 1–26. [Google Scholar] [CrossRef] [Green Version] - Dai, Y.; Yang, C.; Liu, Y.; Yao, Y. Latent-Enhanced Variational Adversarial Active Learning Assisted Soft Sensor. IEEE Sens. J.
**2023**, 23, 15762–15772. [Google Scholar] [CrossRef] - Zhu, J.; Jia, M.; Zhang, Y.; Deng, H.; Liu, Y. Transductive transfer broad learning for cross-domain information exploration and multigrade soft sensor application. Chemom. Intell. Lab. Syst.
**2023**, 235, 104778. [Google Scholar] [CrossRef] - Jia, M.; Xu, D.; Yang, T.; Liu, Y.; Yao, Y. Graph convolutional network soft sensor for process quality prediction. J. Process Control
**2023**, 123, 12–25. [Google Scholar] [CrossRef] - Liu, K.; Zheng, M.; Liu, Y.; Yang, J.; Yao, Y. Deep Autoencoder Thermography for Defect Detection of Carbon Fiber Composites. IEEE Trans. Ind. Inform.
**2023**, 19, 6429–6438. [Google Scholar] [CrossRef]

**Figure 1.** Example of response surfaces [16].

**Figure 2.** **Left-hand** panel: a neuron with d inputs. **Right-hand** panel: a NN with a one-dimensional input and output, and one hidden layer containing three neurons.
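The forward pass described by this figure can be written out in a few lines. The sketch below is illustrative only (it is not the authors' code, and all weight values are arbitrary assumptions): a single neuron computing an activation of a weighted sum, and the one-hidden-layer network of the right-hand panel with a scalar input, three tanh hidden neurons, and a linear output.

```python
import numpy as np

def neuron(x, w, b, activation=np.tanh):
    """A neuron with d inputs: activation(w . x + b)."""
    return activation(np.dot(w, x) + b)

def one_hidden_layer_nn(x, W1, b1, w2, b2):
    """Scalar input -> 3 tanh hidden neurons -> linear scalar output."""
    h = np.tanh(W1 * x + b1)          # hidden activations, shape (3,)
    return float(np.dot(w2, h) + b2)  # linear combination of the hidden layer

# Arbitrary illustrative weights (assumptions, not fitted values).
W1 = np.array([0.5, -1.0, 2.0])   # one input weight per hidden neuron
b1 = np.array([0.0, 0.1, -0.2])   # hidden biases
w2 = np.array([1.0, 0.5, -0.3])   # output weights
b2 = 0.05                         # output bias

y = one_hidden_layer_nn(0.7, W1, b1, w2, b2)
```

Training such a network amounts to choosing W1, b1, w2, b2 to minimize a loss such as the RMSE, typically by back-propagation.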

**Table 1.** Summary of RSM vs. ML studies—PM: Polynomial Model, LASSO: Least Absolute Shrinkage Selection Operator, GLM: Generalized Linear Model, rf: random forest, GBDT: Gradient Boosting Decision Tree, MLP: Multiple Layer Perceptrons, SVR: Support Vector Regression, BPNN: Back-Propagation Neural Network, SVM: Support Vector Machine, CART: Classification and Regression Tree, ET: Extra Tree regression, GBR: Gradient Boosting Regression, xgboost: eXtreme Gradient Boosting, NN: Neural Network, GA-BPNN: Genetic Algorithm-Back Propagation Neural Network, MEA-BPNN: Mind Evolutionary Algorithm Based BPNN, ELM: Extreme Learning Machine, EV: Explained Variance, MAPE: Mean Absolute Percentage Error, RMSE: Root Mean Square Error, AIC: Akaike Information Criterion, MAE: Mean Absolute Error, ${R}^{2}$: coefficient of determination, MSE: Mean Square Error, NSE: Nash and Sutcliffe Efficiency, d: agreement index, CV RMSE: Coefficient of Variation of the RMSE, AAD: Absolute Average Deviation.

| References | # Factors | # Responses | n | Models | Criteria |
|---|---|---|---|---|---|
| [1] | 63 | 1 | 56,512 | PM (1), PM (2), linear ML models (LASSO, GLM), rf, GBDT, MLP, SVR | EV, MAPE, RMSE |
| [2] | 3 | 1 | 30 | PM (2), BPNN, SVM | MAPE |
| [3] | 5 | 1 | 57 | PM (2), Ridge regression, LASSO, SVM, CART, rf, ET, GBR, xgboost | AIC, MAE, RMSE, ${R}^{2}$ |
| [4] | 5 | 1 | 36 | PM (2), NN, SVM, rf, ET | MSE, MAE, ${R}^{2}$ |
| [5] | 8 | 2 | 765 | PM (2), SVR, RSM-SVR ${}^{1}$ | RMSE, MAE, NSE, d |
| [6] | 2 (+2) ${}^{2}$ | 2 | 28 | PM (2), BPNN, GA-BPNN, MEA-BPNN, ELM | CV RMSE, RMSE, AIC (for NN) |
| [10] | 3 | 6 | 18 | PM (2), Lazy KStar, rf | RMSE, ${R}^{2}$ |
| [8] | 3 | 1 | 20 | PM (2), MLP | AAD |
| [9] | 6 | 2 | 102 | PM (2), MLP | ${R}^{2}$ |

${}^{1}$ hybrid model. ${}^{2}$ binary factors.

**Table 2.** Hyperparameter values tested for each method. knn: k-nearest neighbors, rf: random forest, svmPoly: support vector machine with polynomial kernel, svmRadial: support vector machine with radial basis function kernel, CART: Classification And Regression Tree, gbm: gradient boosting machine, xgboost: eXtreme Gradient Boosting, NN: Neural Network, CARTmv: Classification and Regression Tree with multivariate responses, NNmv: Neural Network with multivariate responses.

| Method | Hyperparameter Grid | Description |
|---|---|---|
| knn | k: {1, …, 10} | number of neighbors |
| CART | minsplit: {1, 5, 10}; cp: {0.002, 0.005, 0.01, 0.015, 0.02, 0.03}; minbucket: {1, 5, 10} | minimum number of observations to attempt a split; complexity parameter; minimum number of observations in a leaf |
| rf | minsize: {2, 5, 10}; mtry: {1, …, 5} | minimum size of a node; number of variables to test at each split |
| gbm | n.trees: {50, 100, 150, 200, 250}; interaction.depth: {1, 2, 3, 4, 5}; shrinkage: 0.1; n.minobsinnode: 10 | total number of trees to fit; maximum depth of trees; learning rate; minimum number of observations in leaves |
| xgboost | nrounds: {50, 100, 150, 200, 250}; max_depth: {1, 2, 3, 4, 5}; eta: {0.3, 0.4}; gamma: 0; colsample_bytree: {0.6, 0.8}; min_child_weight: 1; subsample: {0.5, 0.625, 0.750, 0.875, 1} | maximum number of boosting iterations; maximum depth of a tree; learning rate; minimum loss reduction; subsample ratio of columns when constructing each tree; minimum sum of instance weight needed in a child; subsample ratio of the training instances |
| svmPoly | degree: {0.25, 0.5, 1, 2, 4}; scale: {0.001, 0.01, 0.1, 1, 10}; ${C}_{poly}$: {1, 2, 3} | polynomial degree; polynomial scaling factor; cost parameter |
| svmRadial | sigma: constant value; ${C}_{rad}$: {0.25, 0.5, 1, 2, 4, 8, 16, 32} | bandwidth of the kernel function; cost parameter |
| NN | layer1: {2, 4, 6, 8, 10}; layer2: {2, 4, 6, 8, 10}; layer3: {2, 4, 6, 8, 10} | number of hidden neurons in each layer |
| CARTmv | minsplit: {1, 5, 10}; cp: {0.002, 0.005, 0.01, 0.015, 0.02, 0.03}; minbucket: {1, 5, 10} | minimum number of observations to attempt a split; complexity parameter; minimum number of observations in a leaf |
| NNmv | layer1: {2, 4, 6, 8, 10}; layer2: {2, 4, 6, 8, 10}; layer3: {2, 4, 6, 8, 10} | number of hidden neurons in each layer |
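The tuning itself was done with R's caret package; the pure-Python sketch below is only an illustrative analogue of the first row of Table 2 (knn, k in {1, …, 10}), under assumed synthetic data with five factors: each candidate k is scored by leave-one-out cross-validated RMSE and the best one is retained.

```python
import numpy as np

# Synthetic stand-in for an experimental design: 40 runs, 5 factors (assumption).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 5))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(0, 0.1, 40)

def loo_rmse_knn(X, y, k):
    """Leave-one-out RMSE of a k-nearest-neighbor mean predictor."""
    errs = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)  # Euclidean distances to run i
        d[i] = np.inf                         # exclude the held-out point
        nn = np.argsort(d)[:k]                # its k nearest neighbors
        errs.append(y[i] - y[nn].mean())
    return float(np.sqrt(np.mean(np.square(errs))))

# Grid from the knn row of Table 2: k in {1, ..., 10}.
cv_rmse = {k: loo_rmse_knn(X, y, k) for k in range(1, 11)}
best_k = min(cv_rmse, key=cv_rmse.get)        # hyperparameter retained
```

The grids for the other methods are tuned the same way in principle, just over larger Cartesian products of candidate values.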

**Table 3.** RMSE and rank for each method and each response (n = 50). Gray cells highlight the best model.

| Method | ${S}_{1}$ | ${S}_{2}$ | ${S}_{3}$ | ${S}_{ave}$ | ${R}_{1}$ | ${R}_{2}$ | ${R}_{3}$ | ${R}_{ave}$ |
|---|---|---|---|---|---|---|---|---|
| lm | $1.067\pm 0.118$ | $1.070\pm 0.136$ | $1.116\pm 0.116$ | 1.072 | 2 | 5 | 5 | 2 |
| knn | $1.629\pm 0.190$ | $1.063\pm 0.112$ | $1.093\pm 0.108$ | 1.262 | 7 | 3 | 7 | 7 |
| CART | $1.715\pm 0.246$ | $1.155\pm 0.154$ | $1.192\pm 0.130$ | 1.354 | 9 | 10 | 11 | 9 |
| rf | $1.338\pm 0.177$ | $1.092\pm 0.138$ | $1.123\pm 0.126$ | 1.184 | 5 | 7 | 8 | 5 |
| gbm | $1.296\pm 0.161$ | $1.067\pm 0.133$ | $1.085\pm 0.132$ | 1.149 | 4 | 4 | 6 | 4 |
| xgb | $1.390\pm 0.198$ | $1.121\pm 0.147$ | $1.150\pm 0.141$ | 1.220 | 6 | 8 | 9 | 6 |
| svmpoly | $1.085\pm 0.120$ | $1.054\pm 0.127$ | $1.056\pm 0.129$ | 1.065 | 3 | 1 | 2 | 1 |
| svmrad | $1.706\pm 0.626$ | $1.072\pm 0.121$ | $1.066\pm 0.119$ | 1.281 | 8 | 6 | 3 | 8 |
| NN | $6.102\pm 1.870$ | $1.061\pm 0.127$ | $1.047\pm 0.109$ | 2.737 | 10 | 2 | 1 | 10 |
| CARTmv | $8.331\pm 0.676$ | $1.324\pm 0.147$ | $1.151\pm 0.154$ | 3.602 | 11 | 11 | 10 | 11 |
| NNmv | $1.063\pm 0.117$ | $1.122\pm 0.130$ | $1.070\pm 0.112$ | 1.085 | 1 | 9 | 4 | 3 |
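The rank columns in Tables 3–7 follow mechanically from the RMSE columns: within each response, methods are ordered by mean test RMSE (rank 1 = best), and the ${S}_{ave}$ column averages the RMSE across responses. A minimal sketch of that bookkeeping, using three hypothetical methods and two responses (the numbers below are placeholders, not the table's full data):

```python
import numpy as np

# Hypothetical mean RMSEs, one row per method, one column per response.
rmse = {
    "lm":      [1.067, 1.070],
    "svmpoly": [1.085, 1.054],
    "CART":    [1.715, 1.155],
}
methods = list(rmse)
scores = np.array([rmse[m] for m in methods])   # shape (n_methods, n_responses)

# Rank within each response: argsort of argsort yields 0-based ranks, so +1.
ranks = scores.argsort(axis=0).argsort(axis=0) + 1

# The S_ave column: average RMSE across responses for each method.
overall = scores.mean(axis=1)
```

With the full tables, the same two lines produce the ${R}_{1}, \dots$ and ${S}_{ave}$ columns for all eleven methods at once.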

**Table 4.** RMSE and rank for each method and each response (n = 30). Gray cells highlight the best model.

| Method | ${S}_{1}$ | ${S}_{2}$ | ${S}_{3}$ | ${S}_{ave}$ | ${R}_{1}$ | ${R}_{2}$ | ${R}_{3}$ | ${R}_{ave}$ |
|---|---|---|---|---|---|---|---|---|
| lm | $1.177\pm 0.184$ | $1.138\pm 0.184$ | $1.123\pm 0.189$ | 1.146 | 2 | 8 | 8 | 3 |
| knn | $2.036\pm 0.527$ | $1.088\pm 0.129$ | $1.067\pm 0.149$ | 1.397 | 8 | 2 | 4 | 7 |
| CART | $2.003\pm 0.451$ | $1.167\pm 0.192$ | $1.140\pm 0.166$ | 1.437 | 7 | 9 | 9 | 9 |
| rf | $1.587\pm 0.385$ | $1.121\pm 0.135$ | $1.117\pm 0.152$ | 1.275 | 5 | 7 | 7 | 5 |
| gbm | $1.550\pm 0.327$ | $1.116\pm 0.140$ | $1.100\pm 0.159$ | 1.255 | 4 | 5 | 6 | 4 |
| xgb | $1.601\pm 0.380$ | $1.216\pm 0.155$ | $1.194\pm 0.193$ | 1.337 | 6 | 10 | 11 | 6 |
| svmpoly | $1.199\pm 0.207$ | $1.100\pm 0.189$ | $1.055\pm 0.135$ | 1.118 | 3 | 4 | 2 | 1 |
| svmrad | $2.057\pm 0.798$ | $1.093\pm 0.152$ | $1.059\pm 0.146$ | 1.403 | 9 | 3 | 3 | 8 |
| NN | $6.248\pm 1.930$ | $1.084\pm 0.144$ | $1.038\pm 0.142$ | 2.790 | 10 | 1 | 1 | 10 |
| CARTmv | $8.104\pm 0.895$ | $1.335\pm 0.222$ | $1.184\pm 0.203$ | 3.541 | 11 | 11 | 10 | 11 |
| NNmv | $1.164\pm 0.172$ | $1.117\pm 0.165$ | $1.081\pm 0.152$ | 1.121 | 1 | 6 | 5 | 2 |

**Table 5.** RMSE and rank for each method and each response (n = 20). Gray cells highlight the best model.

| Method | ${S}_{1}$ | ${S}_{2}$ | ${S}_{3}$ | ${S}_{ave}$ | ${R}_{1}$ | ${R}_{2}$ | ${R}_{3}$ | ${R}_{ave}$ |
|---|---|---|---|---|---|---|---|---|
| lm | $1.205\pm 0.260$ | $1.203\pm 0.233$ | $1.196\pm 0.212$ | 1.201 | 2 | 9 | 10 | 3 |
| knn | $2.309\pm 0.549$ | $1.110\pm 0.195$ | $1.064\pm 0.174$ | 1.494 | 9 | 1 | 4 | 9 |
| CART | $2.091\pm 0.589$ | $1.179\pm 0.231$ | $1.139\pm 0.210$ | 1.470 | 7 | 8 | 8 | 8 |
| rf | $1.761\pm 0.485$ | $1.147\pm 0.180$ | $1.091\pm 0.177$ | 1.333 | 5 | 6 | 6 | 4 |
| gbm | $1.915\pm 0.444$ | $1.141\pm 0.196$ | $1.099\pm 0.189$ | 1.385 | 6 | 5 | 7 | 6 |
| xgb | $1.706\pm 0.388$ | $1.258\pm 0.197$ | $1.174\pm 0.183$ | 1.379 | 4 | 10 | 9 | 5 |
| svmpoly | $1.236\pm 0.236$ | $1.127\pm 0.192$ | $1.048\pm 0.164$ | 1.137 | 3 | 3 | 3 | 2 |
| svmrad | $2.207\pm 0.979$ | $1.123\pm 0.184$ | $1.045\pm 0.149$ | 1.458 | 8 | 2 | 2 | 7 |
| NN | $5.397\pm 2.040$ | $1.140\pm 0.191$ | $1.014\pm 0.153$ | 2.517 | 10 | 4 | 1 | 10 |
| CARTmv | $7.959\pm 1.100$ | $1.404\pm 0.254$ | $1.270\pm 0.252$ | 3.545 | 11 | 11 | 11 | 11 |
| NNmv | $1.172\pm 0.241$ | $1.154\pm 0.196$ | $1.064\pm 0.177$ | 1.130 | 1 | 7 | 5 | 1 |

**Table 6.** RMSE and rank for each method and each response (n = 15). Gray cells highlight the best model.

| Method | ${S}_{1}$ | ${S}_{2}$ | ${S}_{3}$ | ${S}_{ave}$ | ${R}_{1}$ | ${R}_{2}$ | ${R}_{3}$ | ${R}_{ave}$ |
|---|---|---|---|---|---|---|---|---|
| lm | $1.436\pm 0.491$ | $1.371\pm 0.512$ | $1.293\pm 0.314$ | 1.367 | 3 | 10 | 11 | 3 |
| knn | $2.670\pm 0.818$ | $1.094\pm 0.192$ | $1.063\pm 0.199$ | 1.611 | 8 | 1 | 3 | 8 |
| CART | $2.424\pm 0.693$ | $1.220\pm 0.302$ | $1.151\pm 0.193$ | 1.598 | 6 | 8 | 8 | 6 |
| rf | $2.149\pm 0.730$ | $1.133\pm 0.218$ | $1.079\pm 0.170$ | 1.454 | 5 | 4 | 5 | 4 |
| gbm | $2.697\pm 0.741$ | $1.187\pm 0.287$ | $1.100\pm 0.161$ | 1.661 | 9 | 6 | 7 | 9 |
| xgb | $1.910\pm 0.530$ | $1.289\pm 0.290$ | $1.221\pm 0.209$ | 1.473 | 4 | 9 | 9 | 5 |
| svmpoly | $1.401\pm 0.425$ | $1.194\pm 0.426$ | $1.045\pm 0.199$ | 1.213 | 2 | 7 | 2 | 2 |
| svmrad | $2.576\pm 1.080$ | $1.162\pm 0.274$ | $1.087\pm 0.229$ | 1.608 | 7 | 5 | 6 | 7 |
| NN | $5.624\pm 2.260$ | $1.102\pm 0.174$ | $1.007\pm 0.182$ | 2.577 | 10 | 2 | 1 | 10 |
| CARTmv | $8.267\pm 1.370$ | $1.437\pm 0.282$ | $1.247\pm 0.292$ | 3.650 | 11 | 11 | 10 | 11 |
| NNmv | $1.332\pm 0.455$ | $1.109\pm 0.185$ | $1.077\pm 0.193$ | 1.173 | 1 | 3 | 4 | 1 |

**Table 7.** RMSE and rank for each method and each response (DOE case study). Gray cells highlight the best model.

| Method | ${S}_{1}$ | ${S}_{2}$ | ${S}_{3}$ | ${S}_{4}$ | ${S}_{5}$ | ${S}_{ave}$ | ${R}_{1}$ | ${R}_{2}$ | ${R}_{3}$ | ${R}_{4}$ | ${R}_{5}$ | ${R}_{ave}$ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| lm | $0.274\pm 0.115$ | $7.69\pm 3.060$ | $13.35\pm 5.080$ | $5.59\pm 2.740$ | $0.843\pm 0.338$ | 5.55 | 8 | 11 | 11 | 10 | 11 | 11 |
| knn | $0.268\pm 0.081$ | $3.15\pm 0.832$ | $8.33\pm 1.540$ | $4.80\pm 2.000$ | $0.310\pm 0.078$ | 3.37 | 6 | 8 | 2 | 6 | 1 | 4 |
| CART | $0.292\pm 0.094$ | $3.10\pm 0.903$ | $9.85\pm 2.130$ | $6.30\pm 1.940$ | $0.378\pm 0.109$ | 3.98 | 9 | 6 | 10 | 11 | 9 | 10 |
| rf | $0.258\pm 0.086$ | $2.54\pm 0.767$ | $9.38\pm 1.880$ | $4.99\pm 1.640$ | $0.350\pm 0.075$ | 3.50 | 5 | 3 | 7 | 7 | 7 | 8 |
| gbm | $0.256\pm 0.078$ | $2.17\pm 0.778$ | $8.90\pm 2.500$ | $5.49\pm 1.460$ | $0.376\pm 0.099$ | 3.44 | 4 | 1 | 4 | 8 | 8 | 5 |
| xgb | $0.198\pm 0.085$ | $2.23\pm 0.903$ | $9.20\pm 2.140$ | $4.28\pm 1.730$ | $0.343\pm 0.095$ | 3.25 | 1 | 2 | 6 | 3 | 5 | 2 |
| svmpoly | $0.254\pm 0.087$ | $3.36\pm 1.510$ | $9.20\pm 1.720$ | $4.33\pm 1.770$ | $0.347\pm 0.129$ | 3.50 | 3 | 10 | 5 | 4 | 6 | 7 |
| svmrad | $0.242\pm 0.077$ | $2.80\pm 0.985$ | $8.89\pm 1.550$ | $4.40\pm 1.550$ | $0.329\pm 0.093$ | 3.33 | 2 | 4 | 3 | 5 | 3 | 3 |
| NN | $0.273\pm 0.072$ | $3.18\pm 1.090$ | $9.77\pm 2.430$ | $3.70\pm 1.770$ | $0.316\pm 0.060$ | 3.45 | 7 | 9 | 9 | 1 | 2 | 6 |
| CARTmv | $0.311\pm 0.077$ | $2.89\pm 1.090$ | $9.54\pm 1.600$ | $5.52\pm 1.310$ | $0.339\pm 0.088$ | 3.72 | 10 | 5 | 8 | 9 | 4 | 9 |
| NNmv | $0.491\pm 0.616$ | $3.13\pm 1.280$ | $6.98\pm 1.940$ | $3.98\pm 1.660$ | $0.614\pm 0.467$ | 3.04 | 11 | 7 | 1 | 2 | 10 | 1 |

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ghattas, B.; Manzon, D.
Machine Learning Alternatives to Response Surface Models. *Mathematics* **2023**, *11*, 3406.
https://doi.org/10.3390/math11153406
