# Learning Adaptive Coarse Spaces of BDDC Algorithms for Stochastic Elliptic Problems with Oscillatory and High Contrast Coefficients

## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Adaptive BDDC Algorithm

#### 2.1.1. Local Linear System

#### 2.1.2. Notation and Preliminary Results

#### 2.1.3. Generalized Eigenvalue Problems

#### 2.2. Learning Adaptive BDDC Algorithm

#### 2.2.1. Karhunen–Loève Expansion

#### 2.2.2. Neural Network

- Step 1: Perform the KL expansion on the logarithm of the stochastic permeability function $K(\mathit{x},\omega )$;
- Step 2: Generate M realizations of $\left\{{\mathit{\xi}}^{\left(i\right)}\right\}$ and obtain the corresponding BDDC dominant eigenvectors $\left\{y\left({\mathit{\xi}}^{\left(i\right)}\right)\right\}$, which are the training data for the neural network;
- Step 3: Define training conditions and train the neural network;
- Step 4: Check whether the network performance satisfies the NRMSE tolerance; if not, go back to Step 3 and change the training conditions.
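The four steps above can be sketched as a minimal offline-training loop. This is an illustration only: the hypothetical `bddc_eigvecs` map stands in for the adaptive BDDC dominant-eigenvector computation $y(\xi)$, and a linear least-squares fit stands in for the neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1 (assumed done): the KL expansion gives R stochastic modes xi.
R, O, M = 4, 8, 200          # truncation, output size, training samples (toy sizes)

def bddc_eigvecs(xi):
    # Stand-in for the adaptive BDDC dominant-eigenvector computation y(xi);
    # here just a fixed smooth nonlinear map for illustration.
    W = np.arange(1, O * R + 1).reshape(O, R) / (O * R)
    return np.tanh(W @ xi)

# Step 2: generate M realizations of xi and the corresponding targets y(xi).
X = rng.standard_normal((M, R))
Y = np.array([bddc_eigvecs(x) for x in X])

# Step 3: "train" a surrogate -- linear least squares instead of an MLP.
A = np.hstack([X, np.ones((M, 1))])            # add a bias column
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

# Step 4: check the range-normalized RMSE; retrain if it is too large.
pred = A @ coef
nrmse = np.sqrt(np.mean((pred - Y) ** 2)) / (Y.max() - Y.min())
print(f"training NRMSE = {nrmse:.3f}")
```

In the paper's setting the surrogate is the multilayer perceptron of Section 2.2.2; the loop structure is the same.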

## 3. Results and Discussion

#### 3.1. Choices of Stochastic Coefficients

- Brownian sheet covariance function:
$$C_K(\mathit{x},\widehat{\mathit{x}})=\min(x_1,\widehat{x}_1)\,\min(x_2,\widehat{x}_2),$$
$$\lambda_k=\frac{16}{\left((2i-1)^2\pi^2\right)\left((2j-1)^2\pi^2\right)},\qquad
f_k(\mathit{x})=2\sin\left(\left(i-\tfrac{1}{2}\right)\pi x_1\right)\sin\left(\left(j-\tfrac{1}{2}\right)\pi x_2\right).$$
- Exponential covariance function:
$$C_K(\mathit{x},\widehat{\mathit{x}})=\sigma_K^2\exp\left(-\frac{|x_1-\widehat{x}_1|}{\eta_1}-\frac{|x_2-\widehat{x}_2|}{\eta_2}\right),$$
$$\lambda_k=\frac{4\eta_1\eta_2\sigma_K^2}{(r_{1,i}^2\eta_1^2+1)(r_{2,j}^2\eta_2^2+1)},\qquad
f_k(\mathit{x})=\frac{r_{1,i}\eta_1\cos(r_{1,i}x_1)+\sin(r_{1,i}x_1)}{\sqrt{(r_{1,i}^2\eta_1^2+1)/2+\eta_1}}\cdot\frac{r_{2,j}\eta_2\cos(r_{2,j}x_2)+\sin(r_{2,j}x_2)}{\sqrt{(r_{2,j}^2\eta_2^2+1)/2+\eta_2}},$$
where $r_{1,i}$ (and analogously $r_{2,j}$) are the positive roots of
$$(r_{1,i}^2\eta_1^2-1)\sin(r_{1,i})=2\eta_1 r_{1,i}\cos(r_{1,i}).$$
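As a numerical sanity check on the Brownian sheet eigenpairs, the sketch below evaluates $\lambda_k$ and $f_k$ and verifies $L^2$-orthonormality on the unit square by a midpoint rule (function names are illustrative):

```python
import numpy as np

def kl_eigenpair(i, j):
    """KL eigenvalue/eigenfunction with index (i, j) for the Brownian sheet on [0,1]^2."""
    lam = 16.0 / (((2 * i - 1) ** 2 * np.pi ** 2) * ((2 * j - 1) ** 2 * np.pi ** 2))
    def f(x1, x2):
        return 2.0 * np.sin((i - 0.5) * np.pi * x1) * np.sin((j - 0.5) * np.pi * x2)
    return lam, f

# Midpoint-rule quadrature grid on the unit square.
n = 200
x = (np.arange(n) + 0.5) / n
X1, X2 = np.meshgrid(x, x, indexing="ij")

lam11, f11 = kl_eigenpair(1, 1)
lam12, f12 = kl_eigenpair(1, 2)
norm = np.mean(f11(X1, X2) ** 2)            # L2 norm squared, should be close to 1
cross = np.mean(f11(X1, X2) * f12(X1, X2))  # should be close to 0
print(lam11, norm, cross)
```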

#### 3.2. Adaptive BDDC Parameters and Training Conditions

- Number of truncated terms in KL expansion: $R=4$;
- Number of hidden layers: $L=1$;
- Number of neurons in the hidden layer: 10;
- Number of neurons in the output layer: $O=336$;
- Activation function in hidden layer: hyperbolic tangent function;
- Activation function in output layer: linear function;
- Stopping criteria:
  - Minimum value of the cost function gradient: ${10}^{-6}$; or
  - Maximum number of training epochs: 1,000,000;

- Sample size of training set: $M=10,000$;
- Sample size of testing set: $M=500$.
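The architecture listed above ($R=4$ inputs, one hidden layer of 10 tanh neurons, a linear output layer of $O=336$ neurons) amounts to a single forward pass; the random weights below are placeholders for the trained parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

R, H, O = 4, 10, 336   # KL inputs, hidden neurons, output neurons

# Randomly initialized weights stand in for the trained parameters.
W1, b1 = rng.standard_normal((H, R)), np.zeros(H)
W2, b2 = rng.standard_normal((O, H)), np.zeros(O)

def forward(xi):
    """One hidden tanh layer, linear output layer."""
    h = np.tanh(W1 @ xi + b1)       # hyperbolic tangent activation
    return W2 @ h + b2              # linear activation

y = forward(rng.standard_normal(R))
print(y.shape)
```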

#### 3.3. Brownian Sheet Covariance Function

- Expected function ($\mathcal{A}$): ${10}^{s}$;
- Expected function ($\mathcal{B}$): $5\sin\left(2\pi {x}_{1}\right)\sin\left(2\pi {x}_{2}\right)+5$.

#### Further Experiments on Different Training Conditions

- Modified experiment 1: $R=9$;
- Modified experiment 2: Training sample is 5000.

#### 3.4. Exponential Covariance Function

- Different stochastic behavior: all parameters remain unchanged except $({\eta}_{1},{\eta}_{2})=(0.2,0.125)$ or $({\eta}_{1},{\eta}_{2})=(1,1)$;
- Different mean permeability function: the expected function is changed to Layers 1, 4, 34 in the x-direction of the SPE10 data, while the stochastic parameters are unchanged, i.e., $({\sigma}_{K}^{2},{\eta}_{1},{\eta}_{2})=(1,0.25,0.25)$.
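For the exponential covariance, the eigenvalues require the positive roots $r_{1,i}$ of the transcendental equation $(r^2\eta^2-1)\sin r = 2\eta r\cos r$. A sketch using a sign-change scan plus bisection (so no external root-finder is assumed):

```python
import numpy as np

def kl_roots(eta, n_roots, r_max=100.0, n_grid=200000):
    """First n_roots positive roots of (r^2 eta^2 - 1) sin r = 2 eta r cos r."""
    g = lambda r: (r ** 2 * eta ** 2 - 1.0) * np.sin(r) - 2.0 * eta * r * np.cos(r)
    rs = np.linspace(1e-6, r_max, n_grid)
    vals = g(rs)
    roots = []
    for k in range(n_grid - 1):
        if vals[k] * vals[k + 1] < 0:            # bracketed a sign change
            lo, hi = rs[k], rs[k + 1]
            for _ in range(60):                  # refine by bisection
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
            if len(roots) == n_roots:
                break
    return np.array(roots)

eta = 0.25                                       # eta_1 from the baseline setting
r = kl_roots(eta, 4)
lam_1d = 2.0 * eta / (r ** 2 * eta ** 2 + 1.0)   # one-directional eigenvalue factor;
print(r)                                         # the 2D lambda_k is the product of
print(lam_1d)                                    # both factors times sigma_K^2
```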

#### 3.5. Exponential Covariance Function with Multiple Hidden Layers

- Number of hidden layers: $L=1$ or $L=2$;
- Number of neurons in the hidden layers: ${n}^{\left(1\right)}=10$ and ${n}^{\left(2\right)}=0$, 5, or 7;
- Number of neurons in the output layer: $O=21,240$;
- Sample size of training set: $M=2000$;
- Sample size of testing set: $M=100$.
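The tables in this section report NRMSE and sMAPE. Under one common convention (range-normalized RMSE and the symmetric percentage error; the paper's exact normalization may differ), these metrics are:

```python
import numpy as np

def nrmse(pred, target):
    """RMSE normalized by the range of the target values."""
    rmse = np.sqrt(np.mean((pred - target) ** 2))
    return rmse / (np.max(target) - np.min(target))

def smape(pred, target):
    """Symmetric mean absolute percentage error, values in [0, 2]."""
    return np.mean(2.0 * np.abs(pred - target) / (np.abs(pred) + np.abs(target)))

a = np.array([1.0, 2.0, 4.0])   # toy reference values
p = np.array([1.1, 1.9, 4.2])   # toy predictions
print(nrmse(p, a), smape(p, a))
```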

## 4. Conclusions

## Author Contributions

## Funding

## Data Availability Statement

## Conflicts of Interest

## References


**Figure 1.** An illustration of the neural network structure of an $(L+1)$-layer perceptron with R input neurons and O output neurons. The lth hidden layer contains ${n}^{\left(l\right)}$ hidden neurons.

**Figure 2.**Illustration of first four eigenfunctions ${f}_{k}\left(\mathit{x}\right)$ for the second covariance function with $({\sigma}_{K}^{2},{\eta}_{1},{\eta}_{2})=(1,0.25,0.25)$.

**Figure 3.**Realizations of permeability coefficient when expected functions $\mathcal{A}$ and $\mathcal{B}$ are used.

**Figure 4.** Comparisons in the performance of the BDDC preconditioner when expected functions $\mathcal{A}$ (**left column**) and $\mathcal{B}$ (**right column**) are used.

**Figure 6.**Realizations of Layer 35 permeability coefficient when different ${\eta}_{1}$ and ${\eta}_{2}$ are used.

**Figure 7.** Comparisons in the performance of the BDDC preconditioner when expected functions of SPE10 Layer 35 (**left column**), SPE10 Layer 35 * (**middle column**) and SPE10 Layer 34 (**right column**) are used.

**Figure 8.** Differences in the performance of the BDDC preconditioner when expected functions of SPE10 Layer 35 (**left column**) and SPE10 Layer 35 * (**right column**) are used.

**Figure 10.** Comparisons in the performance of the BDDC preconditioner when a single hidden layer (**left column**), two hidden layers with ${n}^{\left(2\right)}=5$ (**middle column**) and two hidden layers with ${n}^{\left(2\right)}=7$ (**right column**) are used.

Case | Epochs Trained | Training NRMSE | Training Time | Preparation Time for 1 Sample
---|---|---|---|---
$\mathcal{A}$ | $2.29\times 10^{5}$ | $1.97\times 10^{-2}$ | $1.98\times 10^{3}$ s | 1.53 s
$\mathcal{B}$ | $5.39\times 10^{4}$ | $3.77\times 10^{-3}$ | $6.33\times 10^{2}$ s | 1.54 s

Case | Testing NRMSE | sMAPE ($l_{\infty}$ Error) in Iteration Number | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\min}$ | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\max}$
---|---|---|---|---
$\mathcal{A}$ | $6.58\times 10^{-2}$ | $6.94\times 10^{-2}$ (2) | $2.58\times 10^{-7}$ ($4.02\times 10^{-6}$) | $4.45\times 10^{-3}$ ($3.58\times 10^{-2}$)
$\mathcal{B}$ | $6.73\times 10^{-3}$ | $6.93\times 10^{-2}$ (1) | $3.19\times 10^{-5}$ ($1.50\times 10^{-4}$) | $2.01\times 10^{-3}$ ($4.94\times 10^{-2}$)

Case | Epochs Trained | Training NRMSE | Training Time | Preparation Time for 1 Sample
---|---|---|---|---
$\mathcal{A}$ | $2.17\times 10^{5}$ | $2.74\times 10^{-2}$ | $2.47\times 10^{3}$ s | 1.77 s
$\mathcal{B}$ | $1.15\times 10^{5}$ | $4.02\times 10^{-3}$ | $1.19\times 10^{3}$ s | 1.75 s

Case | Testing NRMSE | sMAPE ($l_{\infty}$ Error) in Iteration Number | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\min}$ | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\max}$
---|---|---|---|---
$\mathcal{A}$ | $6.97\times 10^{-2}$ | $5.85\times 10^{-2}$ (1) | $2.15\times 10^{-6}$ ($1.35\times 10^{-5}$) | $2.29\times 10^{-2}$ ($9.05\times 10^{-2}$)
$\mathcal{B}$ | $5.40\times 10^{-3}$ | $6.73\times 10^{-2}$ (1) | $2.99\times 10^{-5}$ ($1.76\times 10^{-4}$) | $1.96\times 10^{-3}$ ($5.64\times 10^{-2}$)

Case | Epochs Trained | Training NRMSE | Training Time | Preparation Time for 1 Sample
---|---|---|---|---
$\mathcal{A}$ | $9.12\times 10^{4}$ | $3.34\times 10^{-2}$ | $8.94\times 10^{2}$ s | 1.53 s
$\mathcal{B}$ | $1.01\times 10^{5}$ | $4.91\times 10^{-3}$ | $1.12\times 10^{3}$ s | 1.54 s

Case | Testing NRMSE | sMAPE ($l_{\infty}$ Error) in Iteration Number | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\min}$ | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\max}$
---|---|---|---|---
$\mathcal{A}$ | $7.82\times 10^{-2}$ | $6.68\times 10^{-2}$ (2) | $7.15\times 10^{-7}$ ($6.59\times 10^{-6}$) | $3.82\times 10^{-3}$ ($1.57\times 10^{-1}$)
$\mathcal{B}$ | $8.43\times 10^{-3}$ | $6.91\times 10^{-2}$ (1) | $2.73\times 10^{-5}$ ($1.64\times 10^{-4}$) | $2.39\times 10^{-3}$ ($6.61\times 10^{-2}$)

Case | Epochs Trained | Training NRMSE | Training Time | Preparation Time for 1 Sample
---|---|---|---|---
Layer 35 | $2.62\times 10^{5}$ | $1.52\times 10^{-2}$ | $1.07\times 10^{3}$ s | 12.34 s

Case | Testing NRMSE | sMAPE ($l_{\infty}$ Error) in Iteration Number | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\min}$ | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\max}$
---|---|---|---|---
Layer 1 | $8.02\times 10^{-2}$ | $1.04\times 10^{-1}$ (2) | $3.74\times 10^{-6}$ ($3.71\times 10^{-5}$) | $4.00\times 10^{-2}$ ($1.36\times 10^{-1}$)
Layer 4 | $6.95\times 10^{-2}$ | $7.18\times 10^{-2}$ (1) | $1.11\times 10^{-5}$ ($1.36\times 10^{-4}$) | $1.61\times 10^{-2}$ ($6.85\times 10^{-2}$)
Layer 34 | $5.08\times 10^{-2}$ | $2.52\times 10^{-3}$ (1) | $3.25\times 10^{-5}$ ($1.74\times 10^{-4}$) | $3.06\times 10^{-2}$ ($8.47\times 10^{-2}$)
Layer 35 | $2.48\times 10^{-2}$ | $3.65\times 10^{-2}$ (1) | $7.52\times 10^{-6}$ ($7.50\times 10^{-5}$) | $1.36\times 10^{-3}$ ($6.66\times 10^{-2}$)
Layer 35 * | $2.27\times 10^{-2}$ | $3.89\times 10^{-2}$ (1) | $7.57\times 10^{-6}$ ($7.17\times 10^{-5}$) | $1.27\times 10^{-3}$ ($6.59\times 10^{-2}$)
Layer 35 ** | $2.75\times 10^{-2}$ | $4.29\times 10^{-2}$ (1) | $8.52\times 10^{-6}$ ($7.13\times 10^{-5}$) | $1.81\times 10^{-3}$ ($8.10\times 10^{-2}$)

**Table 9.** Training records when the new coefficient function $\widehat{\rho}(\mathit{x},\omega )$ is considered.

$({n}^{\left(1\right)},{n}^{\left(2\right)})$ | Epochs Trained | Training NRMSE | Training Time | Preparation Time for 1 Sample
---|---|---|---|---
(10, 0) | $1.00\times 10^{6}$ | $3.81\times 10^{-2}$ | $4.89\times 10^{4}$ s | 130.77 s
(10, 5) | $6.46\times 10^{5}$ | $4.44\times 10^{-2}$ | $2.65\times 10^{4}$ s |
(10, 7) | $1.00\times 10^{6}$ | $4.11\times 10^{-2}$ | $4.62\times 10^{4}$ s |

**Table 10.** Testing records when the new coefficient function $\widehat{\rho}(\mathit{x},\omega )$ is considered.

$({n}^{\left(1\right)},{n}^{\left(2\right)})$ | Testing NRMSE | sMAPE ($l_{\infty}$ Error) in Iteration Number | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\min}$ | sMAPE ($l_{\infty}$ Error) in ${\lambda}_{\max}$
---|---|---|---|---
(10, 0) | $5.47\times 10^{-2}$ | $2.12\times 10^{-2}$ (4) | $1.86\times 10^{-6}$ ($2.98\times 10^{-5}$) | $1.42\times 10^{-1}$ ($4.34\times 10^{2}$)
(10, 5) | $3.57\times 10^{-2}$ | $7.81\times 10^{-3}$ (5) | $1.21\times 10^{-6}$ ($3.53\times 10^{-5}$) | $4.55\times 10^{-2}$ ($7.89\times 10^{1}$)
(10, 7) | $2.30\times 10^{-2}$ | $1.12\times 10^{-2}$ (5) | $1.13\times 10^{-6}$ ($2.50\times 10^{-5}$) | $9.18\times 10^{-2}$ ($7.01\times 10^{1}$)

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Chung, E.; Kim, H.-H.; Lam, M.-F.; Zhao, L.
Learning Adaptive Coarse Spaces of BDDC Algorithms for Stochastic Elliptic Problems with Oscillatory and High Contrast Coefficients. *Math. Comput. Appl.* **2021**, *26*, 44.
https://doi.org/10.3390/mca26020044
