Article

Mesh-Free Surrogate Models for Structural Mechanic FEM Simulation: A Comparative Study of Approaches

1 Know-Center GmbH, Research Center for Data-Driven Business & Big Data Analytics, Inffeldgasse 13, 8010 Graz, Austria
2 Institute of Interactive Systems and Data Science, Graz University of Technology, Inffeldgasse 16c, 8010 Graz, Austria
* Authors to whom correspondence should be addressed.
Appl. Sci. 2021, 11(20), 9411; https://doi.org/10.3390/app11209411
Submission received: 15 September 2021 / Revised: 1 October 2021 / Accepted: 3 October 2021 / Published: 11 October 2021

Abstract
The technical world of today fundamentally relies on structural analysis in the form of design and structural mechanic simulations. A traditional and robust simulation method is the physics-based finite element method (FEM) simulation. FEM simulations in structural mechanics are known to be very accurate; however, the higher the desired resolution, the more computational effort is required. Surrogate modeling provides a robust approach to address this drawback. Nonetheless, finding the right surrogate model and its hyperparameters for a specific use case is not a straightforward process. In this paper, we discuss and compare several classes of mesh-free surrogate models based on traditional and emerging machine learning (ML) and deep learning (DL) methods. We show that relatively simple algorithms (such as k-nearest neighbor regression) can be competitive in applications with low geometrical complexity and extrapolation requirements. With respect to tasks exhibiting higher geometric complexity, our results show that recent DL methods at the forefront of the literature (such as physics-informed neural networks) are complicated to train and to parameterize and thus require further research before they can be put to practical use. In contrast, we show that already well-researched DL methods, such as the multi-layer perceptron, are superior with respect to interpolation use cases and can be easily trained with available tools. With our work, we thus present a basis for the selection and practical implementation of surrogate models.

1. Introduction

Assessing the properties of mechanical structures with real physical experiments is expensive, as it costs both time and resources. To reduce these costs of knowledge enrichment in the field of structural analysis, computer simulations of structural mechanics have become crucial. An essential simulation method is the finite element method (FEM) in which the simulation domain space is represented by a finite number of connected elements. Space- and time-dependent behavior between connected elements and within the elements themselves is governed by physical equations. Observation of real physical experiments provides the coefficients for these governing equations. Since most geometries and use cases cannot be solved analytically, an approximation of the proposed physical equations is obtained by numerical methods [1]. However, solving complex problems with FEM is time-consuming and computationally expensive. In order to reduce the computational effort, surrogate modeling offers a promising solution [2].
Surrogate models are trained in a supervised manner and are designed to learn the mapping between inputs and outputs of a given FEM simulation use case. With a sufficient amount of training data for the use case, such a model is able to substitute for the FEM simulation up to a certain accuracy.
There is already a considerable body of related work concerning surrogate modeling of structural mechanics simulations with machine learning (ML) or deep learning (DL) approaches. In the following, we present the works most important for this paper. Artificial neural networks (ANN) are used in the work of Roberts et al. [3] to predict damage development in forged brake discs reinforced with Al-SiC particles, using damage maps. The ANN is a multilayer perceptron (MLP), and training data are obtained from FEM simulations using the commercial DEFORM simulation software. For rapid estimation of forming and cutting forces for given process parameters, Hans Raj et al. [4] investigate a method using MLP models. The researchers focus on two processes: hot upsetting and extrusion. Each process, represented by an MLP, is trained with FEM simulation results from the FORGE2 commercial FEM simulation software. García-Crespo et al. [5] predict the projectile response after impact with steel armor using an MLP; their surrogate model is trained with data from FEM simulations of the use case. Nourbakhsh et al. [6] explore generalizable surrogate models for 3D trusses, using MLPs and FEM training data. Chan et al. [7] estimate the performance of hot-forged product designs, using an MLP trained on FEM results obtained with the commercial software DEFORM. D’Addona and Antonelli [8] use single-layer feedforward ANNs instead of FEM as a metamodel in a sequential approximate optimization (SAO) algorithm. In a case study on hot forging of a steel disk, they compare their results with an ANN trained on FEM simulation results and the FEM simulation software QForm3D. Gudur and Dixit [9] predict the velocity field and the location of the neutral point in cold flat rolling with an MLP trained on rigid-plastic FEM simulation results. Pellicer-Valero et al. [10] predict the mechanical behavior of different livers with MLPs trained from FEM simulations.
Abueidda et al. [11] estimate the mechanical properties of a two-dimensional checkerboard composite using a convolutional neural network (CNN) trained with FEM results. Regarding mesh-based approaches, Pfaff et al. [12] present a framework to train graph neural networks (GNN) on mesh-based simulations and show the applicability in aerodynamics, structural mechanics, and fabric.
Surrogate models were also obtained using classical (i.e., non-neural) ML approaches. For example, the authors of [3] apply Gaussian process regression (GPR) in addition to ANNs in their approach. Loghin and Ismonov [13] predict stress intensity factors, using GPR trained with FEM results of a classical bolt-nut assembly. Ming et al. [14] model the electrical discharge machining process with GPR trained on data generated with numerical FEM simulation.
Using support vector regression (SVR), Pan et al. [15] construct a metamodel in an optimization approach for lightweight vehicle design. Training data are generated, using design of experiment approaches with FEM simulations. To predict the stress at the implant–bone interface, Li et al. [16] utilize SVR in order to replace FEM simulation. Hu and Li [17] estimate cutting coefficients in a mechanistic milling force model with SVR trained with FEM simulation data.
Martínez-Martínez et al. [18] estimate the biomechanical behavior of breast tissue under compression, using three different tree-based models trained from FEM simulations. The models are trained with FEM data in terms of nodal coordinates and nodal tissue membership. Zhang et al. [19] estimate the base failure stability for braced excavations in anisotropic clay using extreme gradient boosting, random forest regression (RFR) and data obtained from FEM simulation results. Qi et al. [20] utilize a decision tree regressor to predict the mechanical properties of carbon fiber reinforced plastics with data obtained from FEM simulations. Besides MLPs, Pellicer-Valero et al. [10] utilize RFRs to predict the biomechanics of livers.
A recent neural network-based approach is the physics-informed neural network (PINN). PINNs are trained simultaneously on data and governing differential equations and can be used for the solution and inversion of equations governing physical systems. Utilizing PINNs, Haghighat and Juanes [21] substitute a particular FEM simulation of a perforated strip under uniaxial extension. In [22], Haghighat et al. present a surrogate modeling approach with PINNs and a specific use case. Focusing on consistency, Shin [23] evaluates findings regarding PINNs with Poisson’s equation and the heat equation. Yin et al. [24] use PINNs to predict permeability and viscoelastic modulus from thrombus deformation data, described by the fourth-order Cahn–Hilliard and Navier–Stokes equations. In addition to the application of PINNs in structural mechanics problems, there is also a considerable number of papers, especially in computational fluid dynamics [25,26,27,28,29].
Related work shows capabilities of surrogate modeling, thus demonstrating the feasibility of supervised learning models trained with FEM simulations. From our analysis of the existing literature, we identify the following drawbacks:
  • In most cases, the surrogate model only substitutes for a subset of the considered computational domain. Thus, such an approach focuses only on a region of interest and cannot be used to evaluate the entire computational domain (notable exceptions are [12,22]).
  • Surrogate models representing the complete discretized computational domain (mesh) are solely fitted and evaluated on one use case—generalization to unseen data is only achieved with respect to the discretization of the computational domain, but not with respect to other use case specific parameters (notable exception concerning material parameters [22]).
  • Due to differences in FEM use cases and data, comparisons across related works are only meaningful in some cases.
  • Replication of published experiments is often not achievable because important parameters are not reported, e.g., the number of finite elements, the type of finite elements (bilinear, biquadratic, reduced integration etc.), the method of discretization (meshing), as well as hyperparameters of the ML models, such as learning rates or activation functions.
To address these drawbacks, we present the following contributions of our paper:
  • We present the main DL and ML methods together with a compact description and mathematical notation to equip practitioners with a reference for mesh-free surrogate modeling of FEM simulations, and we assess the feasibility and maturity of the novel PINN method.
  • We utilize three classic use cases in structural mechanics and evaluate these models in terms of performance on unseen configurations (inter- and extrapolation) in order to assess their ability to generalize across different use case specific parameters.
  • We discuss the characteristics of all DL and ML models, and their practical implications, in the context of the use cases.
With our work, we pave the way for mesh-free surrogate modeling in practical use: we provide a basis for efficient model and hyperparameter selection with regard to use case and performance metrics. These insights shall not only assist the domain expert during model selection, but will also help in consolidating the current research in mesh-free surrogate modeling for structural mechanics applications.
We report all information needed to make our experiments reproducible. If certain model settings are not mentioned, they are left at default values. Moreover, our FEM simulations are performed with the freely available Abaqus Student Edition 2019 (Dassault Systèmes, Velizy-Villacoublay, France); thus, the process of data generation is not tied to a commercial license, which makes it possible for everyone to build on our research.
The remainder of this paper is organized as follows. In Section 2, we present the materials and methods of our experiments, first providing insights into the process of data generation, using the FEM simulations in Section 2.1, then describing the datasets obtained from the FEM simulations in Section 2.2, followed by the ML and DL models used in Section 2.3. Section 3 shows the results, which are discussed in Section 4. In Section 5, we present the conclusion of our work and an outlook for the future.

2. Materials and Methods

In this section, we present all relevant information about the methodology of our experiments. First, Section 2.1 provides an overview of the data generation process, using three classic FEM simulation use cases. Then, Section 2.2 describes the datasets used from the FEM simulations, and Section 2.3 presents the ML and DL models used. A more detailed overview of the mathematical background and assumptions of the ML and DL models can be found in the Appendix. When predicting a particular use case with a surrogate model, the individual nodes discretizing the particular geometry of the use case (i.e., mesh) are sequentially input into the surrogate model with the appropriate generalization variable. The surrogate model then predicts the output of each node in sequence; see Figure 1.
It should be noted that there are no constraints on the discretization (mesh), i.e., the node coordinates can be freely chosen within the simulation domain and nodes are not connected to each other. Therefore, we refer to our approach as mesh-free, but we want to clearly distinguish ourselves from other mesh-free methods, such as smoothed particle hydrodynamics, the diffuse element method, the moving particle finite element method, etc. The predictions of the individual nodes together constitute the prediction for the simulation domain of the particular use case. By adding the nodal displacement outputs of the surrogate model to the initial node coordinates, we obtain the new deformed geometry. Further surrogate model outputs (e.g., stresses, strains) describe the queried nodes and thus the complete simulation domain in more detail.
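The following minimal sketch illustrates this node-wise querying with a scikit-learn-style surrogate; the model choice and the toy data are our own illustration and stand in for a surrogate fitted on exported FEM results.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# A minimal sketch of mesh-free querying, assuming a scikit-learn-style
# surrogate; the random arrays below are placeholders for FEM training data
# (inputs: x, y, perforation diameter; 13 nodal outputs).
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 100.0, size=(500, 3))
Y_train = rng.normal(size=(500, 13))            # placeholder nodal FEM outputs
surrogate = KNeighborsRegressor(n_neighbors=5).fit(X_train, Y_train)

# Query: one row per node of an arbitrary (mesh-free) discretization.
nodes = rng.uniform(0.0, 100.0, size=(200, 2))  # freely chosen node coordinates
diameter = 22.0                                 # generalization variable
X_query = np.column_stack([nodes, np.full(len(nodes), diameter)])
Y_pred = surrogate.predict(X_query)             # shape (200, 13)

# The displacement outputs (here assumed to be the last two targets) added to
# the initial coordinates yield the deformed geometry.
deformed_nodes = nodes + Y_pred[:, -2:]
```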

2.1. FEM Use Cases

For illustration, we base our evaluation on three classic use cases from structural mechanics. We consider the (1) tensile load, (2) bending load and (3) compressive load:
  • Elongation of a plate with a perforation;
  • Bending of a beam;
  • Compression of a block with four perforations.
See Table 1 and Figure 2. We utilize an isotropic elasto-plastic rate-independent material model (i.e., a perfectly plastic material). The kinematic relations for our 2D plane strain use cases are defined by the total strain components $\varepsilon_{xx} = \frac{\partial u_x}{\partial x}$, $\varepsilon_{yy} = \frac{\partial u_y}{\partial y}$, $\varepsilon_{xy} = \frac{1}{2}\left(\frac{\partial u_x}{\partial y} + \frac{\partial u_y}{\partial x}\right)$ and $\varepsilon_{zz} = 0$ with displacements $u_x$ and $u_y$, and by the deviatoric strain components $e_{xx} = \varepsilon_{xx} - \frac{\varepsilon_{vol}}{3}$, $e_{yy} = \varepsilon_{yy} - \frac{\varepsilon_{vol}}{3}$, $e_{xy} = \varepsilon_{xy}$ and $e_{zz} = -\frac{\varepsilon_{vol}}{3}$. Since there is no volumetric plastic strain in the von Mises yield function, the volumetric strain can be expressed as $\varepsilon_{vol} = \operatorname{trace}(\varepsilon)$ s.t. $\varepsilon_{vol} = \varepsilon_{xx} + \varepsilon_{yy}$. The deviatoric stress components are defined by $s_{xx} = \sigma_{xx} - \frac{\sigma_{xx}+\sigma_{yy}+\sigma_{zz}}{3}$, $s_{yy} = \sigma_{yy} - \frac{\sigma_{xx}+\sigma_{yy}+\sigma_{zz}}{3}$, $s_{xy} = \sigma_{xy}$ and $s_{zz} = \sigma_{zz} - \frac{\sigma_{xx}+\sigma_{yy}+\sigma_{zz}}{3}$, where $\sigma_{ij}$ are the components of the Cauchy stress tensor. The plastic strain components are defined by $\varepsilon^{pl}_{xx} = \bar{\varepsilon}^{pl}\,\frac{3}{2}\,\frac{s_{xx}}{q}$, $\varepsilon^{pl}_{yy} = \bar{\varepsilon}^{pl}\,\frac{3}{2}\,\frac{s_{yy}}{q}$, $\varepsilon^{pl}_{xy} = \bar{\varepsilon}^{pl}\,\frac{3}{2}\,\frac{s_{xy}}{q}$ and $\varepsilon^{pl}_{zz} = \bar{\varepsilon}^{pl}\,\frac{3}{2}\,\frac{s_{zz}}{q}$, with the equivalent plastic strain of the von Mises model $\bar{\varepsilon}^{pl} = \bar{\varepsilon} - \frac{\sigma_Y}{3\mu} \geq 0$, where $\sigma_Y$ is the yield stress and $\mu$ the second Lamé parameter. The total equivalent strain is defined by $\bar{\varepsilon} = \sqrt{\frac{2}{3}\sum_{i,j \in \{x,y\}} e_{ij} e_{ij}}$ with deviatoric strain components $e_{ij}$. The decomposition of the strain is $\varepsilon_{ij} = \varepsilon^{el}_{ij} + \varepsilon^{pl}_{ij}$ with elastic component $\varepsilon^{el}_{ij}$ and plastic component $\varepsilon^{pl}_{ij}$ of the respective strain matrices. The equivalent stress is defined by $q = \sqrt{\frac{3}{2} s_{ij} s_{ij}}$. In our PINN approach, we utilize the definitions of the total strain components, deviatoric strain and stress components, and plastic strain components in the respective regularization term.
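To make the constitutive relations concrete, the following minimal NumPy sketch computes the equivalent (von Mises) stress $q$ from given Cauchy stress components via the deviatoric decomposition; the uniaxial check at the end is our own illustration, not taken from the paper.

```python
import numpy as np

# A minimal sketch of the deviatoric decomposition and the equivalent
# (von Mises) stress q defined above; inputs are Cauchy stress components.
def equivalent_stress(sig_xx, sig_yy, sig_zz, sig_xy):
    mean = (sig_xx + sig_yy + sig_zz) / 3.0
    s_xx, s_yy, s_zz, s_xy = sig_xx - mean, sig_yy - mean, sig_zz - mean, sig_xy
    # q = sqrt(3/2 * s_ij s_ij); the shear component enters the sum twice
    return np.sqrt(1.5 * (s_xx**2 + s_yy**2 + s_zz**2 + 2.0 * s_xy**2))

# Uniaxial check: for sigma_xx = 900 MPa (the yield stress used here) and all
# other components zero, q equals 900 MPa.
print(equivalent_stress(900.0, 0.0, 0.0, 0.0))  # -> 900.0
```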
We use quarter symmetry in use cases 1 and 3 to make efficient use of computational resources. Additional information regarding the variation of parameters in the simulations is presented in Table 2, where simulations marked in bold are used for the test and evaluation of the surrogate models and are not in the training dataset. Conversely, simulations not marked in bold represent the training dataset and are not in the test dataset. In use cases exhibiting varied geometry parameters (i.e., elongation of a plate and compression of a block use cases), the mesh is also different in each simulation. Thus, we train and evaluate the surrogate models on use cases with different meshes (i.e., in each simulation, the node coordinates differ).
The first use case, a perforated steel strip under tensile load, is similar to the nonlinear solid mechanics use case of [21,22]. However, in our approach, we evaluate the generalization over the perforation diameter and use material properties for steel and a top edge displacement of 5 mm in the positive y-direction to consider a more challenging use case.
We execute different simulation settings, where the generalization variable (diameter of the perforation) is changed in each simulation; see Figure 2a and Table 2. In our second use case, we simulate a bending beam whose end is displaced by 5 mm in the positive x-direction; see Figure 2b. We vary the yield stress generalization variable in each simulation setting; see Table 2. In our third use case, we simulate a quarter-symmetric block with four perforations under compressive load, which is compressed by 5 mm in the negative y-direction; see Figure 2c. In this use case, we vary two generalization variables (yield stress and width of the block) in each simulation; see Table 2.
We evaluate our models on interpolation (i.e., the generalization variables for testing are within the range of those observed during training) and extrapolation (i.e., the generalization variables for testing are outside the range of those observed during training) tasks. In Table 2, we mark interpolation tasks with superscript (i) and extrapolation tasks with superscript (e).
In Figure 3, we present the elastic, perfectly plastic nonlinear material behavior of our use cases. The Young's modulus is 210 GPa, the Poisson's ratio is 0.3, and the yield stress is 900 MPa. In our first use case, the perforated plate, we use this setting in each simulation. In the other two use cases, the yield stress varies, while the remaining material parameters stay the same.
All parts are meshed, using plane strain 4-node bilinear quadrilateral elements with reduced integration and hourglass control. Please note that although [22] recommends the use of higher-order elements for the approximation of body forces, we use bilinear elements since we do not use body forces in our surrogate modeling approaches. We create a finer mesh near additional geometric details (i.e., perforations in the plate and block use cases) and seed the perforation edge of the plate with an approximate size of 3.8 mm and the remaining edges with an approximate size of 5 mm. The perforation edges of the block are seeded with an approximate size of 3 mm and the remaining edges with an approximate size of 4 mm. The beam exhibits no comparable geometric details; thus, we seed all edges with an approximate size of 1.5 mm.
We obtain our FEM simulation results in the context of general static simulations. Details of the simulation steps are shown in Table 3. Simulation control parameters that are not listed are left at default values.

2.2. Dataset

The nodal data from our Abaqus FEM simulations constitute the datasets. For each use case, the nodal data are split into a training and a test dataset. The training dataset $D = \{X_1, \ldots, X_n\}$ with number of training instances $n$ and the test dataset $T = \{X_{n+1}, \ldots, X_{n+m}\}$ with number of test instances $m$ are generated from several FEM simulations; see Table 2 and Table 6, where simulations marked in bold belong to $T$ and the remaining ones to $D$. Thus, we split our data according to the different generalization variables and not randomly. We denote each instance with index $i$, $i \in \{1, 2, \ldots, n+m\}$. An instance $X_i = (x_i, y_i)$ consists of an input vector $x_i \in \mathbb{R}^p$ and an output vector $y_i \in \mathbb{R}^q$. Each input vector $x_i$ is composed of the initial x- and y-coordinates of a FEM node and the respective generalization variable(s) (i.e., perforated plate: Diameter; beam: YieldStress; block with four perforations: Width and YieldStress) of the FEM simulation; see Table 4. Thus, we have $p = 3$ in the plate and beam use cases, and $p = 4$ in the block use case.
In our setting, each output vector $y_i$ contains 13 ($q = 13$) output variables obtained from the FEM simulation with input $x_i$, namely the total strain components $\varepsilon^t_{xx}$, $\varepsilon^t_{xy}$ and $\varepsilon^t_{yy}$, the plastic strain components $\varepsilon^p_{xx}$, $\varepsilon^p_{xy}$, $\varepsilon^p_{yy}$ and $\varepsilon^p_{zz}$, the normal and shear stress components $\sigma_{xx}$, $\sigma_{xy}$, $\sigma_{yy}$ and $\sigma_{zz}$, and the displacements in the x- and y-directions, $u$ and $v$, of each node; see Table 5 and Figure 4. We split the data into a training and a test dataset (see Table 6) and standardized the data by removing the mean and scaling to unit variance.
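As an illustration of this dataset assembly, the following sketch builds the input and output matrices for the plate use case ($p = 3$, $q = 13$) and applies the described standardization; the random arrays are placeholders for the exported Abaqus nodal results.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Sketch of assembling one instance per FEM node for the plate use case;
# node_xy and outputs stand in for the nodal results of one simulation,
# and diameter is its generalization variable.
node_xy = np.random.rand(300, 2) * 100.0      # initial node coordinates
outputs = np.random.rand(300, 13)             # strains, stresses, u, v per node
diameter = 22.0

X = np.column_stack([node_xy, np.full(len(node_xy), diameter)])  # p = 3
Y = outputs                                                      # q = 13

# Standardize inputs and outputs: remove the mean, scale to unit variance.
x_scaler, y_scaler = StandardScaler(), StandardScaler()
X_std = x_scaler.fit_transform(X)
Y_std = y_scaler.fit_transform(Y)
```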
In Figure 4, we present graphical results, with the mesh visible, obtained from the Abaqus FEM simulation of the output variables for one block use case.

2.3. Surrogate Models

In this section, we give an overview of the surrogate models used and their general assumptions; to highlight the differences as well as the advantages and disadvantages between them, we present a detailed mathematical background in Appendix A. We have selected models from different learning paradigms:
  • Gradient boosting decision tree regressor (GBDTR): piecewise constant model.
  • K-nearest neighbor regressor (KNNR): distance-based model.
  • Gaussian process regressor (GPR): Bayesian model.
  • Support vector regressor (SVR): hyperplane-based model.
  • Multi-layer perceptron (MLP): classic feedforward neural network model.
  • Physics-informed neural network (PINN): neural network model with physics-based regularization.

3. Results

For evaluation, we split the data into a training and test dataset to fit and test our surrogate models; see Table 6 for the dataset sizes and Table 2 for more details regarding the data split.
As a next step, we need to define hyperparameters for each model and each use case. We performed hyperparameter optimization using only training data; no test data were used. In our PINN approaches, the adaptation of hyperparameters was based on the work of [21,22]. Our MLPs were designed to be similar to our PINNs to allow for fair comparisons. We varied the hyperparameters of our neural network approaches (MLP and PINN) following best practices and guidelines, optimizing the number of hidden layers, the number of neurons per hidden layer, the activation function, the validation split, the early-stopping patience and the batch size per training epoch. For the rest of our models, we applied a grid search with five-fold cross-validation, utilizing the training data to obtain the best hyperparameters. The hyperparameters for each use case are listed in Appendix B, Table A1, Table A2, Table A3, Table A4, Table A5 and Table A6.
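A minimal sketch of such a grid search is shown below for the KNNR on placeholder training arrays; the parameter grid is illustrative and not necessarily the exact one we used.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsRegressor

# Grid search with five-fold cross-validation on training data only;
# the random arrays are placeholders for the standardized FEM training data.
X_train = np.random.rand(500, 3)
Y_train = np.random.rand(500, 13)

param_grid = {"n_neighbors": [5, 7, 10],        # illustrative grid
              "weights": ["uniform", "distance"],
              "p": [1, 2, 5]}
search = GridSearchCV(KNeighborsRegressor(), param_grid, cv=5, scoring="r2")
search.fit(X_train, Y_train)
print(search.best_params_)
```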
Our evaluation is based on R2-scores with respect to the FEM results and on inference time. For models that contain inherent randomness, such as MLPs, GBDTR and PINNs, a five-fold cross-validation was conducted. For these models, we report the mean values and standard deviations of the R2-score. For the sake of brevity, we report only the average R2-scores across all 13 targets in this section; see Table 7, Table 8 and Table 9. The R2-scores for individual targets are provided in Appendix C. The inference times are based on the mean value of three measurements. Inference was run on a machine with 16 GB RAM and 8 CPUs (Intel(R) i7-8565, 2.0 GHz). To compare the inference time of our surrogate models with the computation time required to run the FEM simulations, we also include the latter in Table 7, Table 8 and Table 9.
For the graphical results, we chose simulations that represent the error situation well in order to make statements about the performance of each model. In addition to the absolute errors (Figure 5a–f, Figure 6a–f, Figure 7a–f, Figure 8a–f, Figure 9a–f and Figure 10a–f), the corresponding FEM simulations serving as the basis are shown in Figure 5g, Figure 6g, Figure 7g, Figure 8g, Figure 9g and Figure 10g.
The GBDTR, KNNR, GPR and SVR algorithms were implemented with the scikit-learn library version 0.24.0 in Python. The SVR and GBDTR algorithms were wrapped with the MultiOutputRegressor scikit-learn API to fit one regressor per target. Regarding our DL algorithms, the MLPs were implemented with the keras API version 2.4.3, and our PINNs were implemented with the sciann API version 0.5.5.0 in Python 3.8.5. We used the PDEs from [21,22], but instead of the inversion part, we additionally trained our PINNs with plastic strain data, the same as for the rest of the surrogate models.
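The following sketch shows the described MultiOutputRegressor wrapping for the SVR on placeholder data; the hyperparameter values are illustrative (cf. Table A4 for the plate use case).

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

# One single-output SVR is fitted per target via MultiOutputRegressor
# (the GBDTR is wrapped analogously); placeholder training data.
X_train, Y_train = np.random.rand(500, 3), np.random.rand(500, 13)

svr = MultiOutputRegressor(SVR(kernel="rbf", gamma="scale", epsilon=0.005, C=9))
svr.fit(X_train, Y_train)                 # fits 13 independent SVRs
Y_pred = svr.predict(X_train[:5])         # one prediction per target and node
```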
In the elongation of a perforated plate use case, our approach is based on a total of nine FEM simulations. We used five simulations for training and four simulations to evaluate the fitted models; see Table 2. We report the average of R2-scores across all outputs in Table 7 with the corresponding inference times.
Regarding extrapolation, the absolute errors of each surrogate model with respect to $\sigma_{xy}$ of Simulation 1 are shown in Figure 5. We plot the absolute errors of each surrogate model for $\sigma_{zz}$ of Simulation 4 in Figure 6 as an example of interpolation. In addition, we show in both figures the ground truth of the corresponding output variable obtained from the FEM simulation. For both interpolation and extrapolation, the errors are large near the shear band. As far as extrapolation is concerned, in addition to the errors near the shear band, most models show significant errors near the maximum negative xy shear stresses; see the blue areas in Figure 5g. The GBDTR performs well overall, though its error increases in various locations; while the PINNs have a similar average performance, they perform better outside the shear band in terms of absolute errors. The MLP shows the best results overall, followed by the KNNR.
In the bending beam use case, similar to the perforated plate use case, we trained our models on five simulations and tested them on the remaining four; see Table 2. We present the average R2-scores across all outputs and the inference times in Table 8 for the test Simulations 1, 4, 6 and 9.
We provide a graphical representation of the absolute errors of the surrogate models regarding $\varepsilon^t_{yy}$ in Figure 7a–f, with the FEM simulation result in (g), as one instance of interpolation. Absolute errors of the surrogate models regarding $\varepsilon^p_{xx}$ and extrapolation are shown in Figure 8. For some models, higher errors can be observed near the encastre boundary condition of the beam for that output. While the PINN shows a competitive average R2-score regarding interpolation, its performance on this single target shows significant weaknesses.
The compression of a block with four perforations use case presents a more complex setting because we generalize over two variables (yield stress and block width). Therefore, we utilize more training data for this use case; see Table 2. We report the average R2-scores with corresponding standard deviations, where applicable, in Table 9.
As an instance of interpolation, the absolute errors regarding $\varepsilon^p_{xx}$ can be seen in Figure 9a–f, with the Abaqus FEM simulation result in (g). Correspondingly, an instance of extrapolation is shown in Figure 10, with absolute errors (a–f) and the FEM ground truth (g). Some models show higher prediction errors near shear bands (regions of high $\varepsilon^p_{xx}$) in the interpolation task. However, the SVR and GPR cannot extract meaningful information from the training data, especially in the region free of plastic deformation. This is indicated by their low average R2-scores compared to the other models. Considering the absolute errors of $\sigma_{xy}$ under extrapolation, the MLP, which otherwise performs well, shows weaknesses and is in general outperformed by the KNNR.

4. Discussion

All classes of surrogate models that we considered in this work share several key characteristics: (1) they are mesh-free and can thus deliver results with infinite resolution; (2) the computation time required to obtain the target values at predefined positions is orders of magnitude lower than for FEM simulations; (3) since a different mesh is created during the FEM simulations for each simulation setup in which the geometry changes, our results indicate that all classes of surrogate models generalize (interpolate) reasonably well across training data positions; (4) furthermore, all surrogate model classes generalize at least to some extent across use case parameters, such as changes in geometry or material parameters. Finally, all surrogate model classes must be used with care, as they do not extrapolate well to data positions and/or use case parameters unseen during training. Our findings show this in the extrapolation result of the beam use case, Simulation 9: due to the greater yield stress, almost no plastic deformation occurs; thus, the surrogate models are not able to learn such material behavior. Similar findings can be seen in the extrapolation results of the block use case, Simulations 1, 2, 12 and 13: approaches utilizing PINNs and SVRs are not able to predict acceptable strain components, leading to overall worse averaged R2-scores. In general, the surrogate models show similar behavior with respect to inter- and extrapolation but differ with respect to individual components, i.e., some models are better than others at predicting individual components (e.g., strains) for unknown generalization variables (e.g., yield stress). Another example is the symmetric nature of the use cases: although it is redundant to evaluate, e.g., stresses at negative x-positions, the proposed surrogate models will readily respond with such stress values, which consequently cannot be considered meaningful. Similarly, while the surrogate models may well be evaluated at physically meaningless use case parameters, e.g., negative radii, the results thus obtained must be considered meaningless as well. Therefore, all surrogate models must be treated with this in mind, which is a fundamental difference to FEM simulations, which do not offer such modes of failure. With these considerations in mind, we now turn to discuss specific characteristics of each surrogate model class.
Our KNNR approach, which can be considered simple compared to the other algorithms, gave competitive results; moreover, this approach showed the best results regarding extrapolation (i.e., Simulations 1, 2, 12 and 13) in the block use case.
Algorithms we constructed with the MultiOutputRegressor (SVR and GBDTR) could give better results if the hyperparameters were tuned for each target separately. However, we did not do this for fairness reasons, since our other algorithms are also fitted to the overall use case and not to each target individually. We intend to investigate this in the future.
In our setting, the GPR algorithm did not deliver good results. Tuning the kernel function could deliver better results; however, we do not believe that it would be practical to modify it for each new simulation use case. Thus, we do not intend to head in this direction. However, we plan to investigate whether other Bayesian methods (e.g., Bayesian neural networks [30] or neural processes [31]) could be beneficial.
Our MLP approaches delivered the overall best results in our comparison, especially regarding interpolation (i.e., Simulations 4 and 6 in the plate and beam use cases and Simulation 7 in the block use case). They achieved high accuracies (R2-score > 0.992) while reducing the inference time by a factor of over 100 in comparison to FEM simulations. As mentioned before, designing the architecture is not a straightforward process; however, if the network is deep enough and suitable optimization methods are available (e.g., the Adam optimizer), the network can also be trained efficiently utilizing early stopping.
As already reported in the literature [32,33,34,35], we experienced in our setting that PINNs are not straightforward to design and train. Due to several plateaus in the loss function, early stopping did not prove to be effective. Therefore, we set a fixed number of training epochs. One reason for our observation could be the existence of a non-convex Pareto frontier [36]. In the multi-objective optimization problem, the optimizer might attempt to adjust the model parameters while situated between the different losses, leading it to favor one loss at the expense of the other [37]. Possible approaches to overcome this problem are adaptive optimizers [38], adaptive losses [39] and adaptive activation functions [40]. Moreover, PINNs are the subject of ongoing research and will gain more and more attention in the future. Besides other fundamental methods, we plan to pursue this direction for improved surrogate modeling as well.

5. Conclusions

In this work, we deliver a comprehensive evaluation of generalizable, mesh-free ML and DL surrogate models based on FEM simulations and show that surrogate modeling leads to fast predictions with infinite resolution for practical use. In the context of our evaluation, we show which ML and DL models are suitable at which level of complexity with respect to prediction accuracy and inference time, which can serve as a basis for the practical implementation of surrogate models (in, for example, production for real-time prediction, cyber-physical systems, and process design).
In future work, we plan to conduct more complex experiments, e.g., generalizing across more input variables regarding geometry (e.g., consideration of all component dimensions) and material parameters (e.g., non-perfect nonlinear material behavior, time-dependent material properties, grain growth, and phase transformation). We will moreover explore extended surrogate models with more complex output variables (e.g., grain size, grain structure, and phase transformation).

Author Contributions

Conceptualization, J.G.H. and P.O.; methodology, J.G.H. and B.C.G.; software, J.G.H.; validation, J.G.H., B.C.G. and P.O.; formal analysis, J.G.H.; investigation, J.G.H.; resources, J.G.H., B.C.G., P.O. and R.K.; data curation, J.G.H.; writing—original draft preparation, J.G.H.; writing—review and editing, J.G.H., B.C.G., P.O. and R.K.; visualization, J.G.H.; supervision, R.K.; project administration, J.G.H. and P.O.; funding acquisition, J.G.H., P.O. and R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Österreichische Forschungsförderungsgesellschaft (FFG) Grant No. 881039.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The project BrAIN—Brownfield Artificial Intelligence Network for Forging of High Quality Aerospace Components (FFG Grant No. 881039) is funded in the framework of the program ‘TAKE OFF’, which is a research and technology program of the Austrian Federal Ministry of Transport, Innovation and Technology. The Know-Center is funded within the Austrian COMET Program—Competence Centers for Excellent Technologies—under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG. The authors would also like to thank the developers of the sciann API for making their work available and for responding promptly to our questions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Surrogate Models

We follow the notation introduced in Section 2.2 with data instance $X_i = (x_i, y_i)$ containing input vector $x_i$ and output vector $y_i$; the number of training instances is $n$ and the number of test instances is $m$. Notation regarding individual models is introduced when needed.

Appendix A.1. GBDTR

Boosting methods are powerful techniques in which the final “strong” regressor model is based on an iteratively formed ensemble of “weak” base regressor models [41]. The main idea behind boosting is to sequentially add new models to the ensemble, iteratively refining the output. In GBDTR models, boosting is applied to arbitrary differentiable loss functions. In general, GBDTR models are additive models, where the samples are modified so that the labels are set to the negative gradient, while the distribution is held constant [42].
The additive model of GBDTR is the following:

$$\hat{y}_i = F_G(x_i) = \sum_{g=1}^{G} h_g(x_i)$$

where $\hat{y}_i$ is the prediction for a given input $x_i$, and the $h_g$ are the fitted base tree regressors. The constant $G$ is the number of base tree regressors. The GBDTR algorithm is greedy: a newly added tree regressor $h_g$ is fitted to minimize the loss $L_g$ of the resulting ensemble $F_g = F_{g-1} + h_g$, i.e.,

$$h_g = \arg\min_h L_g = \arg\min_h \sum_{i=1}^{n} l(y_i, F_{g-1}(x_i) + h(x_i))$$

Here, $l(y_i, F(x_i))$ is defined by the loss parameters, and $h(x_i)$ is the candidate base regressor. With the utilization of a first-order Taylor approximation,

$$l(z) \approx l(a) + (z - a)\,\frac{\partial l(a)}{\partial a}$$

where $z$ corresponds to $F_{g-1}(x_i) + h_g(x_i)$ and $a$ corresponds to $F_{g-1}(x_i)$, we can approximate the value of $l$ as the following:

$$l(y_i, F_{g-1}(x_i) + h_g(x_i)) \approx l(y_i, F_{g-1}(x_i)) + h_g(x_i)\left[\frac{\partial l(y_i, F(x_i))}{\partial F(x_i)}\right]_{F = F_{g-1}}$$

We denote the derivative of the loss by $g_i$ and remove constant terms:

$$h_g \approx \arg\min_h \sum_{i=1}^{n} h(x_i)\, g_i$$

This is minimized if $h(x_i)$ is fitted to predict a value proportional to the negative gradient.
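A minimal scikit-learn sketch of this boosted additive model on placeholder data follows, with the settings of Table A1 for the plate use case (in scikit-learn 0.24, loss="ls" denotes the least-squares loss).

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

# GBDTR sketch: one boosted ensemble per target via MultiOutputRegressor;
# the random arrays are placeholders for the standardized FEM training data.
X_train, Y_train = np.random.rand(500, 3), np.random.rand(500, 13)

gbdtr = MultiOutputRegressor(GradientBoostingRegressor(
    loss="ls", criterion="friedman_mse", max_features="auto",
    n_estimators=400))          # G = 400 base tree regressors
gbdtr.fit(X_train, Y_train)
Y_pred = gbdtr.predict(X_train[:5])
```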

Appendix A.2. KNNR

The KNNR algorithm is mathematically a relatively simple method compared to the other algorithms presented here. The model stores all available instances from the training dataset $D$ and predicts the numerical target $\hat{y}_j$ of a test query instance $x_j$ with $n < j \le n+m$ based on a similarity measure (e.g., a distance function). The algorithm computes the (distance-weighted) average of the numerical targets of the $K$ nearest neighbors of $x_j$ in $D$ [43].
Specifically, we introduce a distance metric $d$ that measures the distance between all training instances $x_i$ with $i \le n$ and a test instance $x_j$. Next, the training instances are sorted with respect to their distance to the test instance in ascending order, i.e., there is a permutation $\pi_j$ of the training indices $i$ such that $d(x_{\pi_j(1)}, x_j) \le d(x_{\pi_j(2)}, x_j) \le \ldots \le d(x_{\pi_j(n)}, x_j)$. Then, the estimate $\hat{y}_j(x_j)$ is given as the following:

$$\hat{y}_j(x_j) = \frac{1}{K} \sum_{i=1}^{K} y_{\pi_j(i)}$$

where $K$ must be specified as a hyperparameter.
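A minimal scikit-learn sketch on placeholder data follows, with the settings of Table A2 for the plate use case; weights="distance" corresponds to the distance-weighted average described above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# KNNR sketch; the random arrays are placeholders for the standardized
# FEM training data.
X_train, Y_train = np.random.rand(500, 3), np.random.rand(500, 13)

knnr = KNeighborsRegressor(n_neighbors=7, weights="distance",
                           algorithm="brute", p=1)   # K = 7, Manhattan metric
knnr.fit(X_train, Y_train)
Y_pred = knnr.predict(X_train[:5])
```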

Appendix A.3. GPR

Gaussian process regression modeling is a non-parametric Bayesian approach [44]. In general, a Gaussian process is a generalization of the Gaussian distribution. The Gaussian distribution describes random variables or random vectors, while a Gaussian process describes a function $f(x)$ [45].
In general, a Gaussian process is completely specified by its mean function $\mu(x)$ and covariance function $K(x, x')$ (also called the kernel).
If the function $f(x)$ under consideration is modeled by a Gaussian process, i.e., if $f(x) \sim \mathcal{GP}(\mu(x), K(x, x'))$, then we have the following:

$$\mathbb{E}[f(x)] = \mu(x)$$
$$\mathbb{E}[(f(x) - \mu(x))(f(x') - \mu(x'))] = K(x, x')$$

for all $x$ and $x'$. Thus, we can define the Gaussian process as the following:

$$f(x) \sim \mathcal{N}(\mu(x), K(x, x'))$$

We use the notation that the matrix $D = (X_D, Y_D)$ contains the training data with input data matrix $X_D = (x_1, \ldots, x_n)$ and output data matrix $Y_D = (y_1, \ldots, y_n)$, and the test data matrix $T = (X_T, Y_T)$ contains the test data with $X_T = (x_{n+1}, \ldots, x_{n+m})$ as input and $Y_T = (y_{n+1}, \ldots, y_{n+m})$ as output. With consideration of the prior distribution, we can state that they are jointly Gaussian with zero mean:

$$\begin{bmatrix} Y_D \\ Y_T \end{bmatrix} \sim \mathcal{N}\left(0, \begin{bmatrix} K(X_D, X_D) & K(X_D, X_T) \\ K(X_T, X_D) & K(X_T, X_T) \end{bmatrix}\right)$$

The Gaussian process makes a prediction $Y_T$ for $X_T$ in a probabilistic way, where, as stated before, the posterior distribution can be fully described by the mean and the covariance:

$$Y_T \mid X_T, X_D, Y_D \sim \mathcal{N}\big(K(X_T, X_D) K(X_D, X_D)^{-1} Y_D,\; K(X_T, X_T) - K(X_T, X_D) K(X_D, X_D)^{-1} K(X_D, X_T)\big)$$
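A minimal scikit-learn sketch on placeholder data follows, with the squared Matern kernel and alpha of Table A3 for the plate use case.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# GPR sketch; the random arrays are placeholders for the standardized
# FEM training data.
X_train, Y_train = np.random.rand(100, 3), np.random.rand(100, 13)

gpr = GaussianProcessRegressor(kernel=Matern() ** 2, alpha=1e-13)
gpr.fit(X_train, Y_train)
# The posterior is fully described by its mean and (co)variance.
Y_mean, Y_std = gpr.predict(X_train[:5], return_std=True)
```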

Appendix A.4. SVR

The SVR approach is a generalization of the SVM classification problem, introducing an $\epsilon$-sensitive region around the approximated function, also called an $\epsilon$-tube. The optimization task in SVR consists of two steps: first, finding a convex $\epsilon$-insensitive loss function that needs to be minimized, and second, finding the smallest $\epsilon$-tube that contains most of the training instances.
The convex optimization has a unique solution and is solved using numerical optimization algorithms. One of the main advantages of SVR is that the computational complexity does not depend on the dimensionality of the input space [46]. To deal with otherwise intractable constraints of the optimization problem, we introduce the slack variables $\xi_i$ and $\xi_i^*$ [47]. The positive constant $C$ determines the trade-off between the flatness of the function and the magnitude up to which deviations greater than $\epsilon$ are tolerated. The primal quadratic optimization problem of SVR is defined as the following:

$$\min_{\omega, b}\; \frac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*)$$

subject to the following:

$$y_i - \omega^T x_i - b \le \epsilon + \xi_i, \quad \omega^T x_i + b - y_i \le \epsilon + \xi_i^*, \quad \xi_i, \xi_i^* \ge 0$$

Here, $\omega$ is the weight and $b$ the bias to be adjusted. The constrained quadratic optimization problem can be solved by minimizing the Lagrangian with non-negative Lagrange multipliers $\lambda_i, \lambda_i^*, \alpha_i, \alpha_i^*$, $i \in \{1, \ldots, n\}$:

$$\mathcal{L}(\omega, \xi, \xi^*, \lambda, \lambda^*, \alpha, \alpha^*) = \frac{1}{2}\|\omega\|^2 + C \sum_{i=1}^{n} (\xi_i + \xi_i^*) + \sum_{i=1}^{n} \alpha_i (y_i - \omega^T x_i - b - \epsilon - \xi_i) + \sum_{i=1}^{n} \alpha_i^* (\omega^T x_i + b - y_i - \epsilon - \xi_i^*) - \sum_{i=1}^{n} (\lambda_i \xi_i + \lambda_i^* \xi_i^*)$$

The minimum of $\mathcal{L}$ can be found by taking the partial derivatives with respect to the variables and setting them equal to zero (Karush-Kuhn-Tucker (KKT) conditions). With the final KKT conditions, we can state the following:

$$\alpha_i (y_i - \omega^T x_i - b - \epsilon - \xi_i) = 0, \quad \alpha_i^* (\omega^T x_i + b - y_i - \epsilon - \xi_i^*) = 0, \quad \lambda_i \xi_i = 0, \quad \lambda_i^* \xi_i^* = 0$$

The Lagrange multipliers that are zero correspond to instances inside the $\epsilon$-tube, while the support vectors have non-zero Lagrange multipliers. The function estimate depends only on the support vectors; hence, this representation is sparse. More specifically, we can derive the following function approximation to predict $\hat{y}_j(x_j)$:

$$\hat{y}_j(x_j) = \sum_{i=1}^{n_{SV}} (\alpha_i - \alpha_i^*)\, x_i^T x_j + b$$

with $\alpha_i, \alpha_i^* \in [0, C]$ and the number of support vectors $n_{SV}$. For nonlinear SVR, we replace $\omega^T x_i$ in the optimization problem by $\omega^T \phi(x_i)$ with a feature map $\phi$, and the inner product $x_i^T x_j$ in the function approximation by the kernel $K(x_i, x_j)$.
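The sparse support-vector expansion can be checked directly against scikit-learn on placeholder data; for a linear kernel, dual_coef_ holds the coefficients $(\alpha_i - \alpha_i^*)$ of the support vectors only.

```python
import numpy as np
from sklearn.svm import SVR

# Sketch of the support-vector expansion for a single target
# (SVR itself is single-output); placeholder data.
X_train = np.random.rand(200, 3)
y_train = np.random.rand(200)

svr = SVR(kernel="linear", epsilon=0.05, C=10.0).fit(X_train, y_train)
x_query = np.random.rand(3)
# Manual prediction from the support vectors and (alpha_i - alpha_i^*):
y_hat = svr.dual_coef_ @ svr.support_vectors_ @ x_query + svr.intercept_
print(y_hat, svr.predict(x_query.reshape(1, -1)))  # agree numerically
```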

Appendix A.5. MLP

A neural network is a network of simple processing elements, also called neurons. The neurons are arranged in layers. In a fully-connected multi-layer network, a neuron in one layer is connected to every neuron in the layer before and after it. The number of neurons in the input layer is the number of input features p and the number of neurons in the output layer is the number of targets q [48]. MLPs have several theoretical advantages, compared to other ML algorithms. Due to the universal approximation theorem, an MLP can approximate any function if the activation functions of the network are appropriate [49,50,51]. The MLP makes no prior assumptions about the data distribution, and in many cases, can be trained to generalize to new data not yet seen [52]. However, finding the right architecture and finding the setting of training parameters is not straightforward and usually done by trial and error influenced by the literature and guidelines.
A neural network output $\hat{y}$ corresponding to an input $x$ can be represented as a composition of functions, where the output of layer $L-1$ acts as input to the following layer $L$. For example, for the non-linear activation function $\sigma_L$, weight matrix $W_L$ and bias vector $b_L$ of the respective layer $L$, we obtain the following:

$$\hat{y}(x) = t_L(x) = \sigma_L(W_L^T t_{L-1}(x) + b_L)$$

With the neural network estimate $\hat{y}(x)$ and the respective target $y$ of an input $x$, we can denote a loss function $\mathcal{L}$. A very common loss function for MLPs in regression tasks is the mean-squared error:

$$\mathcal{L}(W, b) = \frac{1}{n} \sum_{i=1}^{n} \|\hat{y}(x_i) - y_i\|^2$$

where $W$ and $b$ are the collections of all weight matrices and bias vectors, respectively. Optimal weights $W^*$ and biases $b^*$ for each layer are identified by minimizing the loss function $\mathcal{L}$ via back-propagation [53]:

$$W^*, b^* = \operatorname*{argmin}_{W, b} \mathcal{L}(W, b)$$
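A minimal keras sketch of such an MLP on placeholder data follows, with the architecture and training settings of Table A5 for the plate use case.

```python
import numpy as np
from tensorflow import keras

# MLP sketch: three hidden layers of 100 ReLU neurons, MSE loss, Adam
# optimizer and early stopping on a 10% validation split; the random
# arrays are placeholders for the standardized FEM training data.
X_train, Y_train = np.random.rand(500, 3), np.random.rand(500, 13)

model = keras.Sequential([
    keras.layers.Input(shape=(3,)),             # x, y, diameter
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(100, activation="relu"),
    keras.layers.Dense(13),                     # 13 nodal output variables
])
model.compile(optimizer="adam", loss="mse")
stop = keras.callbacks.EarlyStopping(patience=5000, restore_best_weights=True)
model.fit(X_train, Y_train, batch_size=32, epochs=100_000,
          validation_split=0.1, callbacks=[stop], verbose=0)
```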

Appendix A.6. PINN

In PINNs, the network is trained simultaneously on data and governing differential equations. PINNs are regularized such that their function approximation y ^ ( x ) obeys known laws of physics that apply to the observed data. This type of network is well suited for solving and inverting equations that control physical systems and find application in fluid and solid mechanics as well as in dynamical systems [21,35].
PINNs share similarities with common ANNs, but the loss function has an additional part that describes the physics behind the use case setting. More specifically, the loss L is composed of the data-driven loss L d a t a and the physics-informed loss L p h y s i c s :
$$\mathcal{L} = \mathcal{L}_{data} + \mathcal{L}_{physics}$$
While the data-driven loss is often a standard mean-squared error, the physics-informed loss accounts for the degree to which the function approximation solves a given system of governing differential equations. For further details, we refer the reader to [23,35,54] in general and to the Python package of [21,22] in particular for simple implementation of structural mechanics use cases.
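The composite loss can be sketched schematically as follows; model and the helper pde_residual (evaluating the governing equations via automatic differentiation) are hypothetical placeholders, not the sciann implementation we actually used.

```python
import tensorflow as tf

# Schematic composite PINN loss. `model` maps coordinates to field outputs;
# `pde_residual` is a hypothetical helper that evaluates the governing
# equations (e.g., the equilibrium condition) at collocation points.
def pinn_loss(model, x_data, y_data, x_collocation):
    loss_data = tf.reduce_mean(tf.square(model(x_data) - y_data))
    loss_physics = tf.reduce_mean(tf.square(pde_residual(model, x_collocation)))
    return loss_data + loss_physics           # L = L_data + L_physics
```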

Appendix B. Hyperparameters

Table A1. Best performing hyperparameters GBDTR.

| | Plate | Beam | Block |
| --- | --- | --- | --- |
| loss | ls | ls | ls |
| criterion | friedman_mse | mse | friedman_mse |
| max_features | auto | log2 | auto |
| n_estimators | 400 | 1000 | 2000 |
Table A2. Best performing hyperparameters KNNR.

| | Plate | Beam | Block |
| --- | --- | --- | --- |
| n_neighbors (K) | 7 | 5 | 10 |
| weights | distance | distance | distance |
| algorithm | brute | ball_tree | auto |
| leaf_size | 1 | 5 | 1 |
| p_value | 1 | 2 | 5 |
Table A3. Best performing hyperparameters GPR.

| | Plate | Beam | Block |
| --- | --- | --- | --- |
| kernel | Matern()**2 | RationalQuadratic()**2 | RationalQuadratic()**2 |
| alpha | 10^−13 | 10^−13 | 10^−14 |
Table A4. Best performing hyperparameters SVR.

| | Plate | Beam | Block |
| --- | --- | --- | --- |
| kernel | rbf | rbf | rbf |
| gamma | scale | scale | scale |
| epsilon | 0.005 | 0.005 | 0.4 |
| C | 9 | 55 | 105 |
Table A5. Best performing hyperparameters MLP.

| | Plate | Beam | Block |
| --- | --- | --- | --- |
| hidden layers | 3 | 2 | 4 |
| neurons | 100-100-100 | 100-100 | 100-100-100-100 |
| activation function | relu | relu | relu |
| batch size | 32 | 32 | 64 |
| validation split | 0.1 | 0.1 | 0.1 |
| early stopping patience | 5000 | 5000 | 7500 |
| max epochs | 100,000 | 100,000 | 100,000 |
| stopped at | 27,693 | 26,383 | 43,272 |
Table A6. Best performing hyperparameters PINN.

| | Plate | Beam | Block |
| --- | --- | --- | --- |
| hidden layers | 4 | 4 | 4 |
| neurons | 100-100-100-100 | 100-100-100-100 | 100-100-100-100 |
| activation function | tanh | tanh | tanh |
| batch size | 64 | 64 | 64 |
| epochs | 50,000 | 50,000 | 50,000 |

Appendix C. Detailed Results

Table A7. Detailed results for the plate elongation use case Simulation 1.

| Simulation 1 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R2 ε_xx^t | 0.9923 | 3.121 × 10^−6 | 0.8331 | 5.206 × 10^−2 | 4.117 × 10^−1 | 5.296 × 10^−1 | 1.788 × 10^−6 | 7.973 × 10^−1 | 5.936 × 10^−1 |
| R2 ε_xy^t | 0.9900 | 3.681 × 10^−7 | 0.4748 | 1.814 × 10^−1 | 5.390 × 10^−2 | 3.846 × 10^−1 | 2.065 × 10^−5 | 5.121 × 10^−1 | 5.039 × 10^−1 |
| R2 ε_yy^t | 0.9924 | 3.055 × 10^−6 | 0.8749 | 9.156 × 10^−2 | 4.169 × 10^−1 | 5.281 × 10^−1 | 1.243 × 10^−7 | 7.992 × 10^−1 | 5.950 × 10^−1 |
| R2 ε_xx^p | 0.9923 | 3.269 × 10^−6 | 0.7385 | 3.795 × 10^−2 | 4.079 × 10^−1 | 5.120 × 10^−1 | 2.215 × 10^−7 | 7.964 × 10^−1 | 5.933 × 10^−1 |
| R2 ε_xy^p | 0.9901 | 2.085 × 10^−6 | 0.6195 | 3.218 × 10^−1 | 4.889 × 10^−2 | 3.509 × 10^−1 | 1.003 × 10^−5 | 5.014 × 10^−1 | 5.011 × 10^−1 |
| R2 ε_yy^p | 0.9923 | 3.243 × 10^−6 | 0.7463 | 3.195 × 10^−2 | 4.127 × 10^−1 | 5.069 × 10^−1 | 5.126 × 10^−8 | 7.976 × 10^−1 | 5.941 × 10^−1 |
| R2 ε_zz^p | 0.9865 | 5.307 × 10^−6 | 0.3235 | 4.347 × 10^−1 | 8.349 × 10^−1 | 7.773 × 10^−1 | 7.044 × 10^−10 | 9.194 × 10^−1 | 6.734 × 10^−1 |
| R2 σ_xx | 0.9798 | 9.128 × 10^−6 | 0.9496 | 1.676 × 10^−2 | 8.886 × 10^−1 | 7.682 × 10^−1 | 1.685 × 10^−10 | 8.854 × 10^−1 | 7.017 × 10^−1 |
| R2 σ_xy | 0.9760 | 2.915 × 10^−7 | 0.8405 | 9.858 × 10^−2 | 8.373 × 10^−1 | 6.984 × 10^−1 | 3.676 × 10^−9 | 8.605 × 10^−1 | 6.706 × 10^−1 |
| R2 σ_yy | 0.9908 | 2.639 × 10^−6 | 0.8574 | 6.011 × 10^−2 | 9.822 × 10^−1 | 8.925 × 10^−1 | 2.420 × 10^−9 | 9.120 × 10^−1 | 5.432 × 10^−1 |
| R2 σ_zz | 0.9914 | 2.684 × 10^−6 | 0.9484 | 1.407 × 10^−2 | 9.208 × 10^−1 | 8.774 × 10^−1 | 2.448 × 10^−8 | 9.326 × 10^−1 | 6.558 × 10^−1 |
| R2 u | 0.9981 | 3.358 × 10^−8 | 0.9690 | 1.919 × 10^−2 | 9.095 × 10^−1 | 8.721 × 10^−1 | 2.230 × 10^−7 | 9.443 × 10^−1 | 6.629 × 10^−1 |
| R2 v | 0.9976 | 5.954 × 10^−9 | 0.9610 | 3.203 × 10^−2 | 9.195 × 10^−1 | 8.903 × 10^−1 | 1.258 × 10^−11 | 9.549 × 10^−1 | 6.810 × 10^−1 |
| R2 mean | 0.9900 | 6.155 × 10^−9 | 0.7797 | 8.709 × 10^−2 | 0.6188 | 0.6606 | 3.959 × 10^−8 | 0.8164 | 0.6131 |
| MSE ε_xx^t | 2.916 × 10^−5 | 4.483 × 10^−11 | 6.325 × 10^−4 | 1.973 × 10^−4 | 2.230 × 10^−3 | 1.783 × 10^−3 | 2.569 × 10^−11 | 7.683 × 10^−4 | 1.540 × 10^−3 |
| MSE ε_xy^t | 3.237 × 10^−5 | 3.865 × 10^−12 | 1.702 × 10^−3 | 5.879 × 10^−4 | 3.066 × 10^−3 | 1.994 × 10^−3 | 2.168 × 10^−10 | 1.581 × 10^−3 | 1.608 × 10^−3 |
| MSE ε_yy^t | 2.927 × 10^−5 | 4.523 × 10^−11 | 4.815 × 10^−4 | 3.523 × 10^−4 | 2.244 × 10^−3 | 1.816 × 10^−3 | 1.841 × 10^−12 | 7.727 × 10^−4 | 1.558 × 10^−3 |
| MSE ε_xx^p | 2.945 × 10^−5 | 4.752 × 10^−11 | 9.969 × 10^−4 | 1.447 × 10^−4 | 2.257 × 10^−3 | 1.861 × 10^−3 | 3.220 × 10^−12 | 7.764 × 10^−4 | 1.551 × 10^−3 |
| MSE ε_xy^p | 2.974 × 10^−5 | 1.886 × 10^−11 | 1.144 × 10^−3 | 9.679 × 10^−4 | 2.861 × 10^−3 | 1.952 × 10^−3 | 9.069 × 10^−11 | 1.499 × 10^−3 | 1.500 × 10^−3 |
| MSE ε_yy^p | 2.954 × 10^−5 | 4.803 × 10^−11 | 9.764 × 10^−4 | 1.230 × 10^−4 | 2.260 × 10^−3 | 1.898 × 10^−3 | 7.593 × 10^−13 | 7.789 × 10^−4 | 1.562 × 10^−3 |
| MSE ε_zz^p | 1.921 × 10^−9 | 1.071 × 10^−19 | 2.604 × 10^−3 | 1.673 × 10^−3 | 2.345 × 10^−8 | 3.163 × 10^−8 | 1.421 × 10^−23 | 1.145 × 10^−8 | 4.638 × 10^−8 |
| MSE σ_xx | 1.599 × 10^2 | 5.736 × 10^2 | 3.997 × 10^2 | 1.329 × 10^2 | 8.831 × 10^2 | 1.837 × 10^3 | 1.059 × 10^−2 | 9.082 × 10^2 | 2.364 × 10^3 |
| MSE σ_xy | 1.196 × 10^2 | 7.221 | 7.941 × 10^2 | 4.906 × 10^2 | 8.099 × 10^2 | 1.501 × 10^3 | 9.107 × 10^−2 | 6.941 × 10^2 | 1.640 × 10^3 |
| MSE σ_yy | 5.013 × 10^2 | 7.911 × 10^3 | 7.809 × 10^3 | 3.291 × 10^3 | 9.766 × 10^2 | 5.883 × 10^3 | 7.253 | 4.818 × 10^3 | 2.501 × 10^4 |
| MSE σ_zz | 1.896 × 10^2 | 1.297 × 10^3 | 1.135 × 10^3 | 3.094 × 10^2 | 1.741 × 10^3 | 2.696 × 10^3 | 1.183 × 10^1 | 1.481 × 10^3 | 7.567 × 10^3 |
| MSE u | 4.736 × 10^−3 | 2.001 × 10^−7 | 7.558 × 10^−2 | 4.684 × 10^−2 | 2.210 × 10^−1 | 3.123 × 10^−1 | 1.329 × 10^−6 | 1.361 × 10^−1 | 8.230 × 10^−1 |
| MSE v | 0.0055 | 3.012 × 10^−8 | 0.0877 | 7.203 × 10^−2 | 1.810 × 10^−1 | 2.467 × 10^−1 | 6.361 × 10^−11 | 1.015 × 10^−1 | 7.174 × 10^−1 |
Table A8. Detailed results for the plate elongation use case Simulation 4.

| Simulation 4 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R2 ε_xx^t | 0.9994 | 1.890 × 10^−5 | 0.9615 | 1.029 × 10^−2 | 0.6110 | 0.9061 | 8.599 × 10^−2 | 0.9354 | 0.8478 |
| R2 ε_xy^t | 0.9984 | 1.387 × 10^−4 | 0.6064 | 2.507 × 10^−1 | 0.1553 | 0.8252 | 1.696 × 10^−1 | 0.7558 | 0.6991 |
| R2 ε_yy^t | 0.9994 | 1.450 × 10^−5 | 0.9817 | 8.052 × 10^−3 | 0.6169 | 0.9067 | 9.160 × 10^−2 | 0.9361 | 0.8495 |
| R2 ε_xx^p | 0.9994 | 1.634 × 10^−5 | 0.8183 | 5.486 × 10^−2 | 0.6087 | 0.8379 | 3.180 × 10^−2 | 0.9346 | 0.8457 |
| R2 ε_xy^p | 0.9984 | 1.217 × 10^−5 | 0.9410 | 7.109 × 10^−3 | 0.1468 | 0.7309 | 1.302 × 10^−1 | 0.7502 | 0.6967 |
| R2 ε_yy^p | 0.9994 | 1.572 × 10^−5 | 0.8881 | 8.563 × 10^−4 | 0.6117 | 0.8487 | 3.739 × 10^−2 | 0.9351 | 0.8467 |
| R2 ε_zz^p | 0.9934 | 1.400 × 10^−3 | 0.7888 | 2.487 × 10^−3 | 0.9349 | 0.9317 | 1.525 × 10^−2 | 0.9790 | 0.9440 |
| R2 σ_xx | 0.9957 | 1.192 × 10^−4 | 0.9903 | 3.579 × 10^−4 | 0.9326 | 0.9572 | 4.072 × 10^−2 | 0.9742 | 0.9404 |
| R2 σ_xy | 0.9930 | 6.390 × 10^−4 | 0.9753 | 1.977 × 10^−4 | 0.8643 | 0.8846 | 1.103 × 10^−1 | 0.9417 | 0.8974 |
| R2 σ_yy | 0.9985 | 9.487 × 10^−5 | 0.8972 | 3.716 × 10^−3 | 0.9932 | 0.9660 | 3.214 × 10^−2 | 0.9909 | 0.9736 |
| R2 σ_zz | 0.9972 | 1.784 × 10^−4 | 0.9813 | 2.062 × 10^−4 | 0.9813 | 0.9726 | 2.625 × 10^−2 | 0.9901 | 0.9667 |
| R2 u | 0.9995 | 4.181 × 10^−5 | 0.9934 | 3.368 × 10^−4 | 0.9297 | 0.9710 | 2.879 × 10^−2 | 0.9792 | 0.9370 |
| R2 v | 0.9997 | 6.148 × 10^−5 | 0.9927 | 1.157 × 10^−3 | 0.9392 | 0.9792 | 2.026 × 10^−2 | 0.9857 | 0.9454 |
| R2 mean | 0.9978 | 1.970 × 10^−4 | 0.9089 | 2.598 × 10^−2 | 0.7174 | 0.9014 | 6.310 × 10^−2 | 0.9298 | 0.8761 |
| MSE ε_xx^t | 2.150 × 10^−6 | 6.723 × 10^−8 | 1.370 × 10^−4 | 3.662 × 10^−5 | 1.384 × 10^−3 | 3.200 × 10^−4 | 3.200 × 10^−4 | 2.298 × 10^−4 | 5.412 × 10^−4 |
| MSE ε_xy^t | 4.496 × 10^−6 | 3.991 × 10^−7 | 1.132 × 10^−3 | 7.213 × 10^−4 | 2.430 × 10^−3 | 4.955 × 10^−4 | 4.955 × 10^−4 | 7.027 × 10^−4 | 8.656 × 10^−4 |
| MSE ε_yy^t | 2.186 × 10^−6 | 5.247 × 10^−8 | 6.610 × 10^−5 | 2.914 × 10^−5 | 1.386 × 10^−3 | 3.345 × 10^−4 | 3.345 × 10^−4 | 2.312 × 10^−4 | 5.447 × 10^−4 |
| MSE ε_xx^p | 2.173 × 10^−6 | 5.875 × 10^−8 | 6.531 × 10^−4 | 1.972 × 10^−4 | 1.407 × 10^−3 | 3.485 × 10^−4 | 3.484 × 10^−4 | 2.352 × 10^−4 | 5.547 × 10^−4 |
| MSE ε_xy^p | 4.228 × 10^−6 | 3.139 × 10^−8 | 1.522 × 10^−4 | 1.834 × 10^−5 | 2.202 × 10^−3 | 5.152 × 10^−4 | 5.152 × 10^−4 | 6.446 × 10^−4 | 7.827 × 10^−4 |
| MSE ε_yy^p | 2.194 × 10^−6 | 5.701 × 10^−8 | 4.060 × 10^−4 | 3.106 × 10^−6 | 1.409 × 10^−3 | 3.422 × 10^−4 | 3.422 × 10^−4 | 2.356 × 10^−4 | 5.562 × 10^−4 |
| MSE ε_zz^p | 7.663 × 10^−10 | 1.627 × 10^−10 | 7.659 × 10^−4 | 9.020 × 10^−6 | 7.573 × 10^−9 | 4.990 × 10^−9 | 4.726 × 10^−9 | 2.438 × 10^−9 | 6.512 × 10^−9 |
| MSE σ_xx | 5.480 × 10^1 | 1.515 | 1.226 × 10^2 | 4.547 | 8.561 × 10^2 | 5.339 × 10^2 | 5.272 × 10^2 | 3.283 × 10^2 | 7.575 × 10^2 |
| MSE σ_xy | 3.438 × 10^1 | 3.155 | 1.218 × 10^2 | 9.760 × 10^−1 | 6.699 × 10^2 | 5.597 × 10^2 | 5.547 × 10^2 | 2.880 × 10^2 | 5.066 × 10^2 |
| MSE σ_yy | 1.267 × 10^2 | 8.027 | 8.700 × 10^3 | 3.144 × 10^2 | 5.743 × 10^2 | 3.081 × 10^3 | 2.514 × 10^3 | 7.740 × 10^2 | 2.236 × 10^3 |
| MSE σ_zz | 6.863 × 10^1 | 4.437 | 4.654 × 10^2 | 5.129 | 4.640 × 10^2 | 6.858 × 10^2 | 6.491 × 10^2 | 2.456 × 10^2 | 8.285 × 10^2 |
| MSE u | 8.003 × 10^−4 | 6.664 × 10^−5 | 1.046 × 10^−2 | 5.368 × 10^−4 | 1.120 × 10^−1 | 4.637 × 10^−2 | 4.578 × 10^−2 | 3.320 × 10^−2 | 1.005 × 10^−1 |
| MSE v | 4.276 × 10^−4 | 8.849 × 10^−5 | 1.057 × 10^−2 | 1.666 × 10^−3 | 8.756 × 10^−2 | 2.954 × 10^−2 | 2.952 × 10^−2 | 2.062 × 10^−2 | 7.857 × 10^−2 |
Table A9. Detailed results for the plate elongation use case Simulation 6.

| Simulation 6 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R2 ε_xx^t | 0.9958 | 2.566 × 10^−3 | 0.9336 | 1.934 × 10^−2 | 0.6182 | 0.8405 | 1.562 × 10^−1 | 0.9228 | 0.8366 |
| R2 ε_xy^t | 0.9930 | 4.453 × 10^−3 | 0.2877 | 3.328 × 10^−1 | 0.1915 | 0.6924 | 3.030 × 10^−1 | 0.7280 | 0.6696 |
| R2 ε_yy^t | 0.9959 | 2.522 × 10^−3 | 0.9211 | 5.104 × 10^−2 | 0.6238 | 0.8344 | 1.648 × 10^−1 | 0.9238 | 0.8385 |
| R2 ε_xx^p | 0.9958 | 2.606 × 10^−3 | 0.5414 | 3.173 × 10^−1 | 0.6149 | 0.7529 | 8.924 × 10^−2 | 0.9221 | 0.8352 |
| R2 ε_xy^p | 0.9932 | 4.276 × 10^−3 | 0.9131 | 2.635 × 10^−2 | 0.1831 | 0.5966 | 1.726 × 10^−1 | 0.7218 | 0.6668 |
| R2 ε_yy^p | 0.9958 | 2.590 × 10^−3 | 0.7744 | 8.943 × 10^−2 | 0.6178 | 0.7799 | 9.094 × 10^−2 | 0.9228 | 0.8362 |
| R2 ε_zz^p | 0.9901 | 2.688 × 10^−3 | 0.8264 | 2.288 × 10^−2 | 0.9590 | 0.8938 | 2.641 × 10^−3 | 0.9860 | 0.9487 |
| R2 σ_xx | 0.9837 | 1.326 × 10^−2 | 0.9839 | 3.098 × 10^−4 | 0.9527 | 0.9600 | 3.838 × 10^−2 | 0.9799 | 0.9407 |
| R2 σ_xy | 0.9768 | 8.968 × 10^−3 | 0.9655 | 2.023 × 10^−3 | 0.8687 | 0.8571 | 1.381 × 10^−1 | 0.9431 | 0.9030 |
| R2 σ_yy | 0.9890 | 1.196 × 10^−2 | 0.9103 | 1.363 × 10^−3 | 0.9908 | 0.9729 | 2.571 × 10^−2 | 0.9938 | 0.9709 |
| R2 σ_zz | 0.9900 | 8.749 × 10^−3 | 0.9851 | 1.518 × 10^−3 | 0.9818 | 0.9682 | 3.102 × 10^−2 | 0.9923 | 0.9636 |
| R2 u | 0.9980 | 1.261 × 10^−3 | 0.9835 | 1.344 × 10^−3 | 0.9077 | 0.9366 | 6.320 × 10^−2 | 0.9717 | 0.9336 |
| R2 v | 0.9988 | 6.926 × 10^−4 | 0.9849 | 6.032 × 10^−3 | 0.9161 | 0.9685 | 3.118 × 10^−2 | 0.9760 | 0.9359 |
| R2 mean | 0.9920 | 1.889 × 10^−3 | 0.8470 | 6.309 × 10^−2 | 0.7251 | 0.8503 | 1.005 × 10^−1 | 0.9219 | 0.8676 |
| MSE ε_xx^t | 1.563 × 10^−5 | 9.386 × 10^−6 | 2.475 × 10^−4 | 7.210 × 10^−5 | 1.423 × 10^−3 | 5.884 × 10^−4 | 5.884 × 10^−4 | 2.878 × 10^−4 | 6.090 × 10^−4 |
| MSE ε_xy^t | 2.402 × 10^−5 | 1.372 × 10^−5 | 2.360 × 10^−3 | 1.103 × 10^−3 | 2.680 × 10^−3 | 1.012 × 10^−3 | 1.012 × 10^−3 | 9.014 × 10^−4 | 1.095 × 10^−3 |
| MSE ε_yy^t | 1.579 × 10^−5 | 9.392 × 10^−6 | 2.994 × 10^−4 | 1.936 × 10^−4 | 1.427 × 10^−3 | 6.268 × 10^−4 | 6.268 × 10^−4 | 2.890 × 10^−4 | 6.128 × 10^−4 |
| MSE ε_xx^p | 1.596 × 10^−5 | 9.635 × 10^−6 | 1.725 × 10^−3 | 1.194 × 10^−3 | 1.449 × 10^−3 | 6.327 × 10^−4 | 6.327 × 10^−4 | 2.929 × 10^−4 | 6.201 × 10^−4 |
| MSE ε_xy^p | 2.135 × 10^−5 | 1.199 × 10^−5 | 2.627 × 10^−4 | 7.965 × 10^−5 | 2.469 × 10^−3 | 8.705 × 10^−4 | 8.705 × 10^−4 | 8.407 × 10^−4 | 1.007 × 10^−3 |
| MSE ε_yy^p | 1.599 × 10^−5 | 9.654 × 10^−6 | 8.559 × 10^−4 | 3.392 × 10^−4 | 1.450 × 10^−3 | 5.899 × 10^−4 | 5.899 × 10^−4 | 2.930 × 10^−4 | 6.214 × 10^−4 |
| MSE ε_zz^p | 1.341 × 10^−9 | 6.361 × 10^−10 | 6.584 × 10^−4 | 8.677 × 10^−5 | 4.377 × 10^−9 | 5.918 × 10^−9 | 5.708 × 10^−9 | 1.492 × 10^−9 | 5.484 × 10^−9 |
| MSE σ_xx | 4.784 × 10^2 | 5.454 × 10^2 | 2.162 × 10^2 | 4.170 | 6.368 × 10^2 | 5.300 × 10^2 | 5.246 × 10^2 | 2.709 × 10^2 | 7.981 × 10^2 |
| MSE σ_xy | 2.621 × 10^2 | 2.590 × 10^2 | 1.620 × 10^2 | 9.491 | 6.161 × 10^2 | 6.614 × 10^2 | 6.566 × 10^2 | 2.669 × 10^2 | 4.552 × 10^2 |
| MSE σ_yy | 2.230 × 10^3 | 2.811 × 10^3 | 8.488 × 10^3 | 1.291 × 10^2 | 8.701 × 10^2 | 2.741 × 10^3 | 2.264 × 10^3 | 5.900 × 10^2 | 2.755 × 10^3 |
| MSE σ_zz | 4.078 × 10^2 | 4.385 × 10^2 | 3.842 × 10^2 | 3.926 × 10^1 | 4.705 × 10^2 | 8.263 × 10^2 | 7.996 × 10^2 | 1.981 × 10^2 | 9.411 × 10^2 |
| MSE u | 2.471 × 10^−3 | 1.295 × 10^−3 | 1.953 × 10^−2 | 1.591 × 10^−3 | 1.093 × 10^−1 | 7.515 × 10^−2 | 7.473 × 10^−2 | 3.349 × 10^−2 | 7.862 × 10^−2 |
| MSE v | 1.594 × 10^−3 | 3.341 × 10^−4 | 1.621 × 10^−2 | 6.487 × 10^−3 | 9.022 × 10^−2 | 3.373 × 10^−2 | 3.371 × 10^−2 | 2.578 × 10^−2 | 6.895 × 10^−2 |
Table A10. Detailed results for the plate elongation use case Simulation 9.

| Simulation 9 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| R2 ε_xx^t | 0.9902 | 4.611 × 10^−7 | 0.8149 | 1.175 × 10^−1 | 5.305 × 10^−1 | 7.066 × 10^−1 | 4.844 × 10^−7 | 8.255 × 10^−1 | 5.599 × 10^−1 |
| R2 ε_xy^t | 0.9699 | 1.932 × 10^−5 | 0.3356 | 9.434 × 10^−2 | 5.673 × 10^−2 | 3.932 × 10^−1 | 1.518 × 10^−8 | 4.667 × 10^−1 | 3.628 × 10^−1 |
| R2 ε_yy^t | 0.9903 | 4.339 × 10^−7 | 0.8484 | 1.445 × 10^−1 | 5.344 × 10^−1 | 7.197 × 10^−1 | 3.018 × 10^−8 | 8.272 × 10^−1 | 5.611 × 10^−1 |
| R2 ε_xx^p | 0.9904 | 3.884 × 10^−7 | 0.6097 | 7.907 × 10^−2 | 5.180 × 10^−1 | 7.195 × 10^−1 | 4.415 × 10^−11 | 8.235 × 10^−1 | 5.571 × 10^−1 |
| R2 ε_xy^p | 0.9704 | 1.921 × 10^−5 | 0.5937 | 3.410 × 10^−1 | 4.617 × 10^−2 | 4.245 × 10^−1 | 3.808 × 10^−8 | 4.609 × 10^−1 | 3.586 × 10^−1 |
| R2 ε_yy^p | 0.9904 | 3.972 × 10^−7 | 0.7357 | 4.504 × 10^−2 | 5.204 × 10^−1 | 7.128 × 10^−1 | 7.477 × 10^−7 | 8.242 × 10^−1 | 5.579 × 10^−1 |
| R2 ε_zz^p | 0.9628 | 7.728 × 10^−6 | 0.3747 | 4.181 × 10^−1 | 9.292 × 10^−1 | 8.309 × 10^−1 | 6.767 × 10^−9 | 9.016 × 10^−1 | 6.839 × 10^−1 |
| R2 σ_xx | 0.9633 | 3.899 × 10^−4 | 0.9577 | 1.764 × 10^−2 | 9.258 × 10^−1 | 9.264 × 10^−1 | 1.662 × 10^−9 | 9.516 × 10^−1 | 6.847 × 10^−1 |
| R2 σ_xy | 0.9438 | 1.369 × 10^−3 | 0.8168 | 1.079 × 10^−1 | 8.161 × 10^−1 | 6.292 × 10^−1 | 1.792 × 10^−7 | 7.731 × 10^−1 | 5.730 × 10^−1 |
| R2 σ_yy | 0.9876 | 5.783 × 10^−5 | 0.9443 | 4.324 × 10^−3 | 9.779 × 10^−1 | 9.280 × 10^−1 | 1.478 × 10^−11 | 9.367 × 10^−1 | 5.502 × 10^−1 |
| R2 σ_zz | 0.9865 | 1.062 × 10^−5 | 0.9526 | 2.420 × 10^−2 | 9.704 × 10^−1 | 9.328 × 10^−1 | 3.200 × 10^−10 | 9.407 × 10^−1 | 6.108 × 10^−1 |
| R2 u | 0.9898 | 3.302 × 10^−6 | 0.9168 | 6.792 × 10^−2 | 8.492 × 10^−1 | 6.756 × 10^−1 | 2.928 × 10^−7 | 8.302 × 10^−1 | 6.322 × 10^−1 |
| R2 v | 0.9859 | 2.747 × 10^−6 | 0.9300 | 6.483 × 10^−2 | 8.630 × 10^−1 | 8.433 × 10^−1 | 2.501 × 10^−8 | 8.963 × 10^−1 | 6.544 × 10^−1 |
| R2 mean | 0.9786 | 2.970 × 10^−5 | 0.7562 | 1.046 × 10^−1 | 0.6568 | 0.7263 | 9.780 × 10^−9 | 0.8045 | 0.5651 |
| MSE ε_xx^t | 3.323 × 10^−5 | 5.335 × 10^−12 | 6.296 × 10^−4 | 3.996 × 10^−4 | 1.597 × 10^−3 | 9.979 × 10^−4 | 5.605 × 10^−12 | 5.937 × 10^−4 | 1.497 × 10^−3 |
| MSE ε_xy^t | 1.069 × 10^−4 | 2.439 × 10^−10 | 2.360 × 10^−3 | 3.352 × 10^−4 | 3.351 × 10^−3 | 2.156 × 10^−3 | 1.916 × 10^−13 | 1.895 × 10^−3 | 2.264 × 10^−3 |
| MSE ε_yy^t | 3.349 × 10^−5 | 5.193 × 10^−12 | 5.245 × 10^−4 | 5.000 × 10^−4 | 1.611 × 10^−3 | 9.698 × 10^−4 | 3.612 × 10^−13 | 5.977 × 10^−4 | 1.518 × 10^−3 |
| MSE ε_xx^p | 3.322 × 10^−5 | 4.646 × 10^−12 | 1.350 × 10^−3 | 2.735 × 10^−4 | 1.667 × 10^−3 | 9.703 × 10^−4 | 5.282 × 10^−16 | 6.106 × 10^−4 | 1.532 × 10^−3 |
| MSE ε_xy^p | 9.449 × 10^−5 | 1.962 × 10^−10 | 1.298 × 10^−3 | 1.090 × 10^−3 | 3.048 × 10^−3 | 1.839 × 10^−3 | 3.889 × 10^−13 | 1.723 × 10^−3 | 2.050 × 10^−3 |
| MSE ε_yy^p | 3.332 × 10^−5 | 4.818 × 10^−12 | 9.206 × 10^−4 | 1.569 × 10^−4 | 1.670 × 10^−3 | 1.000 × 10^−3 | 9.069 × 10^−12 | 6.123 × 10^−4 | 1.540 × 10^−3 |
| MSE ε_zz^p | 2.279 × 10^−9 | 2.901 × 10^−20 | 2.178 × 10^−3 | 1.456 × 10^−3 | 4.339 × 10^−9 | 1.036 × 10^−8 | 2.540 × 10^−23 | 6.026 × 10^−9 | 1.937 × 10^−8 |
| MSE σ_xx | 3.733 × 10^2 | 4.030 × 10^4 | 4.303 × 10^2 | 1.794 × 10^2 | 7.548 × 10^2 | 7.480 × 10^2 | 1.718 × 10^−1 | 4.916 × 10^2 | 3.206 × 10^3 |
| MSE σ_xy | 1.946 × 10^2 | 1.639 × 10^4 | 6.338 × 10^2 | 3.734 × 10^2 | 6.364 × 10^2 | 1.283 × 10^3 | 2.146 | 7.852 × 10^2 | 1.478 × 10^3 |
| MSE σ_yy | 1.092 × 10^3 | 4.456 × 10^5 | 4.890 × 10^3 | 3.796 × 10^2 | 1.943 × 10^3 | 6.320 × 10^3 | 1.139 × 10^−1 | 5.556 × 10^3 | 3.948 × 10^4 |
| MSE σ_zz | 2.613 × 10^2 | 3.972 × 10^3 | 9.161 × 10^2 | 4.681 × 10^2 | 5.727 × 10^2 | 1.300 × 10^3 | 1.197 × 10^−1 | 1.147 × 10^3 | 7.528 × 10^3 |
| MSE u | 5.525 × 10^−3 | 9.688 × 10^−7 | 4.508 × 10^−2 | 3.679 × 10^−2 | 8.165 × 10^−2 | 1.757 × 10^−1 | 8.590 × 10^−8 | 9.196 × 10^−2 | 1.992 × 10^−1 |
| MSE v | 0.0072 | 7.221 × 10^−7 | 0.0359 | 3.324 × 10^−2 | 7.021 × 10^−2 | 8.034 × 10^−2 | 6.573 × 10^−9 | 5.317 × 10^−2 | 1.772 × 10^−1 |
Table A11. Detailed results for the bending beam use case, Simulation 1.

Simulation 1 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR
R² ε_xx^t | 0.8367 | 8.066 × 10^-5 | 0.7682 | 1.742 × 10^-2 | 0.8036 | 0.8566 | 2.006 × 10^-6 | 0.8377 | 0.6790
R² ε_xy^t | 0.8570 | 6.906 × 10^-5 | 0.6632 | 1.499 × 10^-1 | 0.4932 | 0.9030 | 2.784 × 10^-7 | 0.6409 | 0.5727
R² ε_yy^t | 0.9594 | 1.142 × 10^-5 | 0.8487 | 3.318 × 10^-4 | 0.9520 | 0.9651 | 2.776 × 10^-9 | 0.9607 | 0.8805
R² ε_xx^p | 0.0647 | 7.879 × 10^-6 | 0.0304 | 1.866 × 10^-4 | 0.0083 | 0.1599 | 1.015 × 10^-5 | 0.1426 | 0.0633
R² ε_xy^p | -0.0091 | 4.606 × 10^-4 | -0.0106 | 3.336 × 10^-4 | -0.0051 | 0.0906 | 3.080 × 10^-6 | 0.0098 | 0.0652
R² ε_yy^p | 0.0723 | 2.093 × 10^-5 | 0.0335 | 4.720 × 10^-4 | 0.0051 | 0.1746 | 9.704 × 10^-7 | 0.1568 | 0.0720
R² ε_zz^p | 0.1157 | 3.506 × 10^-4 | -0.0078 | 8.842 × 10^-4 | 0.0152 | 0.2458 | 1.254 × 10^-6 | 0.2373 | 0.1278
R² σ_xx | 0.9643 | 3.732 × 10^-4 | 0.9822 | 2.748 × 10^-3 | 0.0291 | 0.9837 | 3.203 × 10^-7 | 0.6293 | 0.1681
R² σ_xy | 0.9157 | 1.814 × 10^-4 | 0.8902 | 1.515 × 10^-2 | 0.4227 | 0.9451 | 4.972 × 10^-6 | 0.6324 | 0.5024
R² σ_yy | 0.9482 | 8.435 × 10^-5 | 0.9546 | 1.249 × 10^-3 | 0.9618 | 0.9547 | 1.283 × 10^-7 | 0.9540 | 0.9815
R² σ_zz | 0.9789 | 1.711 × 10^-5 | 0.9742 | 1.819 × 10^-3 | 0.9766 | 0.9818 | 2.750 × 10^-7 | 0.9781 | 0.9521
R² u | 0.9948 | 6.208 × 10^-6 | 0.9974 | 5.933 × 10^-4 | 0.9978 | 0.9974 | 1.963 × 10^-10 | 0.9972 | 0.9678
R² v | 0.9875 | 2.782 × 10^-5 | 0.8897 | 6.238 × 10^-2 | 0.9976 | 0.9973 | 6.646 × 10^-8 | 0.9976 | 0.9580
R² mean | 0.6682 | 7.345 × 10^-6 | 0.6165 | 1.648 × 10^-2 | 0.5122 | 0.7120 | 1.088 × 10^-8 | 0.6288 | 0.5377
MSE ε_xx^t | 3.452 × 10^-7 | 3.602 × 10^-16 | 4.899 × 10^-7 | 3.682 × 10^-8 | 4.151 × 10^-7 | 3.077 × 10^-7 | 9.363 × 10^-17 | 3.430 × 10^-7 | 6.783 × 10^-7
MSE ε_xy^t | 3.723 × 10^-8 | 4.679 × 10^-18 | 8.765 × 10^-8 | 3.903 × 10^-8 | 1.319 × 10^-7 | 2.554 × 10^-8 | 7.242 × 10^-20 | 9.346 × 10^-8 | 1.112 × 10^-7
MSE ε_yy^t | 2.876 × 10^-7 | 5.739 × 10^-16 | 1.073 × 10^-6 | 2.352 × 10^-9 | 3.402 × 10^-7 | 2.484 × 10^-7 | 3.517 × 10^-18 | 2.785 × 10^-7 | 8.473 × 10^-7
MSE ε_xx^p | 4.778 × 10^-7 | 2.056 × 10^-18 | 4.953 × 10^-7 | 9.531 × 10^-11 | 5.066 × 10^-7 | 4.288 × 10^-7 | 4.320 × 10^-18 | 4.380 × 10^-7 | 4.785 × 10^-7
MSE ε_xy^p | 1.779 × 10^-8 | 1.431 × 10^-19 | 1.781 × 10^-8 | 5.879 × 10^-12 | 1.772 × 10^-8 | 1.604 × 10^-8 | 2.456 × 10^-21 | 1.745 × 10^-8 | 1.648 × 10^-8
MSE ε_yy^p | 6.251 × 10^-7 | 9.503 × 10^-18 | 6.512 × 10^-7 | 3.180 × 10^-10 | 6.704 × 10^-7 | 5.577 × 10^-7 | 2.451 × 10^-18 | 5.681 × 10^-7 | 6.253 × 10^-7
MSE ε_zz^p | 1.029 × 10^-8 | 4.752 × 10^-20 | 6.790 × 10^-7 | 5.958 × 10^-10 | 1.146 × 10^-8 | 8.783 × 10^-9 | 8.663 × 10^-23 | 8.878 × 10^-9 | 1.015 × 10^-8
MSE σ_xx | 9.974 × 10^1 | 2.906 × 10^3 | 4.964 × 10^1 | 7.668 | 2.709 × 10^3 | 4.422 × 10^1 | 5.128 × 10^-2 | 1.035 × 10^3 | 2.322 × 10^3
MSE σ_xy | 7.615 × 10^1 | 1.479 × 10^2 | 9.920 × 10^1 | 1.368 × 10^1 | 5.214 × 10^2 | 5.007 × 10^1 | 7.083 | 3.320 × 10^2 | 4.494 × 10^2
MSE σ_yy | 1.295 × 10^4 | 5.275 × 10^6 | 1.135 × 10^4 | 3.123 × 10^2 | 9.543 × 10^3 | 1.141 × 10^4 | 4.827 × 10^4 | 1.150 × 10^4 | 4.631 × 10^3
MSE σ_zz | 5.921 × 10^2 | 1.342 × 10^4 | 7.215 × 10^2 | 5.094 × 10^1 | 6.548 × 10^2 | 5.054 × 10^2 | 6.585 × 10^1 | 6.130 × 10^2 | 1.342 × 10^3
MSE u | 1.251 × 10^-2 | 3.633 × 10^-5 | 6.323 × 10^-3 | 1.435 × 10^-3 | 5.220 × 10^-3 | 6.289 × 10^-3 | 1.284 × 10^-8 | 6.723 × 10^-3 | 7.797 × 10^-2
MSE v | 4.059 × 10^-4 | 2.945 × 10^-8 | 3.589 × 10^-3 | 2.030 × 10^-3 | 7.902 × 10^-5 | 8.486 × 10^-5 | 2.337 × 10^-11 | 7.796 × 10^-5 | 1.367 × 10^-3
Table A12. Detailed results for the bending beam use case, Simulation 4.

Simulation 4 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR
R² ε_xx^t | 0.9992 | 4.882 × 10^-4 | 0.9839 | 6.920 × 10^-4 | 0.9601 | 0.9928 | 1.008 × 10^-3 | 0.9940 | 0.9590
R² ε_xy^t | 0.9977 | 1.512 × 10^-3 | 0.9643 | 8.745 × 10^-3 | 0.5794 | 0.9941 | 1.496 × 10^-4 | 0.9321 | 0.7219
R² ε_yy^t | 0.9996 | 2.146 × 10^-4 | 0.9793 | 1.865 × 10^-4 | 0.9970 | 0.9981 | 4.564 × 10^-4 | 0.9995 | 0.9922
R² ε_xx^p | 0.9965 | 1.939 × 10^-3 | 0.8471 | 6.012 × 10^-3 | 0.8796 | 0.8709 | 2.564 × 10^-3 | 0.9819 | 0.8397
R² ε_xy^p | 0.9894 | 8.291 × 10^-3 | 0.9911 | 4.488 × 10^-4 | 0.0827 | 0.8513 | 3.840 × 10^-3 | 0.7997 | 0.3642
R² ε_yy^p | 0.9972 | 1.521 × 10^-3 | 0.8546 | 9.650 × 10^-4 | 0.8968 | 0.8881 | 2.980 × 10^-3 | 0.9855 | 0.8495
R² ε_zz^p | 0.9985 | 6.214 × 10^-4 | 0.7411 | 1.351 × 10^-2 | 0.9352 | 0.9479 | 2.379 × 10^-3 | 0.9944 | 0.8838
R² σ_xx | 0.9981 | 8.971 × 10^-4 | 0.9987 | 5.263 × 10^-4 | 0.0418 | 0.9979 | 7.556 × 10^-5 | 0.9046 | 0.4887
R² σ_xy | 0.9974 | 1.516 × 10^-3 | 0.9799 | 1.242 × 10^-2 | 0.4600 | 0.9946 | 4.227 × 10^-4 | 0.9167 | 0.6331
R² σ_yy | 0.9997 | 1.157 × 10^-4 | 0.9169 | 2.384 × 10^-4 | 0.9992 | 0.9983 | 1.213 × 10^-5 | 0.9998 | 0.9965
R² σ_zz | 0.9997 | 1.080 × 10^-4 | 0.9852 | 2.265 × 10^-4 | 0.9939 | 0.9990 | 1.026 × 10^-4 | 0.9989 | 0.9916
R² u | 0.9998 | 2.885 × 10^-5 | 0.9989 | 3.940 × 10^-4 | 1.0000 | 0.9996 | 7.767 × 10^-5 | 0.9998 | 0.9989
R² v | 0.9997 | 4.696 × 10^-5 | 0.9517 | 1.110 × 10^-3 | 1.0000 | 0.9995 | 4.749 × 10^-5 | 0.9999 | 0.9974
R² mean | 0.9979 | 1.319 × 10^-3 | 0.9379 | 1.042 × 10^-3 | 0.7558 | 0.9640 | 6.751 × 10^-4 | 0.9621 | 0.8243
MSE ε_xx^t | 1.158 × 10^-9 | 7.461 × 10^-10 | 2.467 × 10^-8 | 1.057 × 10^-9 | 6.097 × 10^-8 | 1.099 × 10^-8 | 1.541 × 10^-9 | 9.198 × 10^-9 | 6.258 × 10^-8
MSE ε_xy^t | 5.680 × 10^-10 | 3.784 × 10^-10 | 8.924 × 10^-9 | 2.188 × 10^-9 | 1.052 × 10^-7 | 1.471 × 10^-9 | 3.743 × 10^-11 | 1.700 × 10^-8 | 6.960 × 10^-8
MSE ε_yy^t | 2.708 × 10^-9 | 1.457 × 10^-9 | 1.404 × 10^-7 | 1.267 × 10^-9 | 2.058 × 10^-8 | 1.267 × 10^-8 | 3.100 × 10^-9 | 3.589 × 10^-9 | 5.309 × 10^-8
MSE ε_xx^p | 3.854 × 10^-10 | 2.155 × 10^-10 | 1.699 × 10^-8 | 6.681 × 10^-10 | 1.337 × 10^-8 | 1.435 × 10^-8 | 2.849 × 10^-10 | 2.012 × 10^-9 | 1.782 × 10^-8
MSE ε_xy^p | 4.304 × 10^-11 | 3.372 × 10^-11 | 3.608 × 10^-11 | 1.825 × 10^-12 | 3.730 × 10^-9 | 6.049 × 10^-10 | 1.562 × 10^-11 | 8.146 × 10^-10 | 2.586 × 10^-9
MSE ε_yy^p | 4.490 × 10^-10 | 2.477 × 10^-10 | 2.367 × 10^-8 | 1.571 × 10^-10 | 1.679 × 10^-8 | 1.822 × 10^-8 | 4.851 × 10^-10 | 2.358 × 10^-9 | 2.449 × 10^-8
MSE ε_zz^p | 7.588 × 10^-12 | 3.101 × 10^-12 | 4.214 × 10^-8 | 2.199 × 10^-9 | 3.233 × 10^-10 | 2.600 × 10^-10 | 1.187 × 10^-11 | 2.771 × 10^-11 | 5.800 × 10^-10
MSE σ_xx | 6.301 | 2.904 | 4.242 | 1.704 | 3.102 × 10^3 | 6.781 | 2.446 × 10^-1 | 3.087 × 10^2 | 1.655 × 10^3
MSE σ_xy | 2.546 | 1.489 | 1.974 × 10^1 | 1.220 × 10^1 | 5.306 × 10^2 | 5.268 | 4.153 × 10^-1 | 8.179 × 10^1 | 3.604 × 10^2
MSE σ_yy | 9.601 × 10^1 | 3.555 × 10^1 | 2.554 × 10^4 | 7.326 × 10^1 | 2.483 × 10^2 | 5.289 × 10^2 | 3.727 | 5.885 × 10^1 | 1.065 × 10^3
MSE σ_zz | 9.587 | 3.424 | 4.705 × 10^2 | 7.179 | 1.947 × 10^2 | 3.277 × 10^1 | 3.251 | 3.488 × 10^1 | 2.663 × 10^2
MSE u | 5.949 × 10^-4 | 7.015 × 10^-5 | 2.758 × 10^-3 | 9.579 × 10^-4 | 1.676 × 10^-5 | 8.592 × 10^-4 | 1.889 × 10^-4 | 3.838 × 10^-4 | 2.781 × 10^-3
MSE v | 9.229 × 10^-6 | 1.558 × 10^-6 | 1.601 × 10^-3 | 3.683 × 10^-5 | 5.846 × 10^-7 | 1.598 × 10^-5 | 1.576 × 10^-6 | 2.105 × 10^-6 | 8.758 × 10^-5
Table A13. Detailed results for the bending beam use case, Simulation 6.

Simulation 6 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR
R² ε_xx^t | 0.9997 | 1.585 × 10^-4 | 0.9827 | 1.679 × 10^-3 | 0.9606 | 0.9970 | 3.028 × 10^-4 | 0.9946 | 0.9615
R² ε_xy^t | 0.9988 | 5.440 × 10^-4 | 0.9636 | 1.211 × 10^-2 | 0.6360 | 0.9956 | 2.013 × 10^-4 | 0.9418 | 0.7492
R² ε_yy^t | 0.9997 | 1.581 × 10^-4 | 0.9683 | 3.948 × 10^-4 | 0.9979 | 0.9990 | 1.438 × 10^-4 | 0.9997 | 0.9935
R² ε_xx^p | 0.9973 | 1.546 × 10^-3 | 0.8567 | 3.014 × 10^-3 | 0.7713 | 0.8529 | 1.868 × 10^-3 | 0.9825 | 0.7532
R² ε_xy^p | 0.9887 | 5.094 × 10^-3 | 0.9788 | 7.437 × 10^-3 | 0.1049 | 0.7773 | 4.855 × 10^-3 | 0.7837 | 0.3351
R² ε_yy^p | 0.9979 | 1.115 × 10^-3 | 0.8769 | 1.114 × 10^-2 | 0.7980 | 0.8627 | 1.675 × 10^-3 | 0.9851 | 0.7650
R² ε_zz^p | 0.9980 | 7.159 × 10^-4 | 0.6162 | 1.176 × 10^-2 | 0.8392 | 0.8955 | 3.934 × 10^-4 | 0.9910 | 0.8054
R² σ_xx | 0.9985 | 5.385 × 10^-4 | 0.9993 | 1.296 × 10^-4 | 0.0374 | 0.9987 | 1.248 × 10^-4 | 0.9051 | 0.4865
R² σ_xy | 0.9984 | 7.070 × 10^-4 | 0.9819 | 1.416 × 10^-2 | 0.4890 | 0.9953 | 2.116 × 10^-4 | 0.9198 | 0.6422
R² σ_yy | 0.9997 | 1.586 × 10^-4 | 0.9465 | 1.587 × 10^-3 | 0.9993 | 0.9988 | 1.469 × 10^-4 | 0.9999 | 0.9969
R² σ_zz | 0.9997 | 1.588 × 10^-4 | 0.9890 | 3.975 × 10^-5 | 0.9937 | 0.9991 | 1.271 × 10^-4 | 0.9990 | 0.9919
R² u | 0.9997 | 1.787 × 10^-5 | 0.9984 | 2.302 × 10^-4 | 1.0000 | 0.9997 | 6.919 × 10^-5 | 0.9998 | 0.9989
R² v | 0.9997 | 6.709 × 10^-5 | 0.9503 | 1.090 × 10^-3 | 1.0000 | 0.9995 | 1.477 × 10^-4 | 0.9999 | 0.9974
R² mean | 0.9981 | 8.315 × 10^-4 | 0.9314 | 8.396 × 10^-4 | 0.7406 | 0.9516 | 1.368 × 10^-4 | 0.9617 | 0.8059
MSE ε_xx^t | 4.762 × 10^-10 | 2.159 × 10^-10 | 2.352 × 10^-8 | 2.287 × 10^-9 | 5.373 × 10^-8 | 4.027 × 10^-9 | 4.125 × 10^-10 | 7.327 × 10^-9 | 5.246 × 10^-8
MSE ε_xy^t | 2.949 × 10^-10 | 1.330 × 10^-10 | 8.896 × 10^-9 | 2.962 × 10^-9 | 8.902 × 10^-8 | 1.069 × 10^-9 | 4.922 × 10^-11 | 1.423 × 10^-8 | 6.133 × 10^-8
MSE ε_yy^t | 1.818 × 10^-9 | 1.064 × 10^-9 | 2.134 × 10^-7 | 2.658 × 10^-9 | 1.434 × 10^-8 | 6.562 × 10^-9 | 9.683 × 10^-10 | 2.272 × 10^-9 | 4.385 × 10^-8
MSE ε_xx^p | 9.135 × 10^-11 | 5.298 × 10^-11 | 4.910 × 10^-9 | 1.033 × 10^-10 | 7.838 × 10^-9 | 5.041 × 10^-9 | 6.403 × 10^-11 | 5.983 × 10^-10 | 8.458 × 10^-9
MSE ε_xy^p | 1.017 × 10^-11 | 4.605 × 10^-12 | 1.921 × 10^-11 | 6.723 × 10^-12 | 8.092 × 10^-10 | 2.014 × 10^-10 | 4.389 × 10^-12 | 1.955 × 10^-10 | 6.011 × 10^-10
MSE ε_yy^p | 1.126 × 10^-10 | 5.894 × 10^-11 | 6.510 × 10^-9 | 5.889 × 10^-10 | 1.068 × 10^-8 | 7.262 × 10^-9 | 8.856 × 10^-11 | 7.876 × 10^-10 | 1.242 × 10^-8
MSE ε_zz^p | 4.137 × 10^-12 | 1.456 × 10^-12 | 2.030 × 10^-8 | 6.220 × 10^-10 | 3.271 × 10^-10 | 2.126 × 10^-10 | 8.001 × 10^-13 | 1.840 × 10^-11 | 3.959 × 10^-10
MSE σ_xx | 4.891 | 1.801 | 2.311 | 4.333 × 10^-1 | 3.219 × 10^3 | 4.183 | 4.173 × 10^-1 | 3.174 × 10^2 | 1.717 × 10^3
MSE σ_xy | 1.582 | 7.091 × 10^-1 | 1.820 × 10^1 | 1.420 × 10^1 | 5.125 × 10^2 | 4.749 | 2.123 × 10^-1 | 8.043 × 10^1 | 3.589 × 10^2
MSE σ_yy | 8.720 × 10^1 | 5.288 × 10^1 | 1.785 × 10^4 | 5.289 × 10^2 | 2.192 × 10^2 | 4.124 × 10^2 | 4.896 × 10^1 | 4.382 × 10^1 | 1.044 × 10^3
MSE σ_zz | 8.678 | 5.215 | 3.603 × 10^2 | 1.305 | 2.061 × 10^2 | 2.859 × 10^1 | 4.174 | 3.387 × 10^1 | 2.660 × 10^2
MSE u | 7.648 × 10^-4 | 4.353 × 10^-5 | 3.887 × 10^-3 | 5.607 × 10^-4 | 1.611 × 10^-5 | 6.987 × 10^-4 | 1.685 × 10^-4 | 3.679 × 10^-4 | 2.779 × 10^-3
MSE v | 9.476 × 10^-6 | 2.243 × 10^-6 | 1.663 × 10^-3 | 3.645 × 10^-5 | 4.862 × 10^-7 | 1.622 × 10^-5 | 4.937 × 10^-6 | 1.937 × 10^-6 | 8.695 × 10^-5
Table A14. Detailed results for the bending beam use case, Simulation 9.

Simulation 9 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR
R² ε_xx^t | 0.7851 | 3.018 × 10^-3 | 0.6358 | 6.166 × 10^-4 | 0.8656 | 0.8172 | 3.255 × 10^-5 | 0.7939 | 0.9395
R² ε_xy^t | 0.8255 | 4.803 × 10^-3 | 0.8392 | 2.690 × 10^-2 | 0.6478 | 0.9133 | 4.185 × 10^-7 | 0.7931 | 0.6226
R² ε_yy^t | 0.9627 | 3.529 × 10^-6 | 0.9126 | 3.213 × 10^-4 | 0.9776 | 0.9721 | 3.246 × 10^-8 | 0.9732 | 0.9432
R² ε_xx^p | -984.3572 | 3.895 × 10^4 | -2372.9520 | 1.322 × 10^1 | -305.8183 | -823.3812 | 3.067 × 10^1 | -809.9266 | -220.5149
R² ε_xy^p | -13394.3542 | 1.990 × 10^6 | -13885.9308 | 3.975 × 10^2 | -604.7674 | -8965.1296 | 4.317 × 10^2 | -2397.3474 | -4955.1166
R² ε_yy^p | -821.5068 | 2.422 × 10^4 | -343.2822 | 2.128 | -282.6090 | -698.7968 | 1.881 × 10^-1 | -683.0748 | -193.7274
R² ε_zz^p | -359.9536 | 1.372 × 10^3 | -382.8576 | 1.639 × 10^1 | -214.6722 | -322.9159 | 4.464 × 10^-2 | -309.8423 | -109.3652
R² σ_xx | 0.9529 | 1.611 × 10^-3 | 0.9893 | 1.808 × 10^-3 | 0.0275 | 0.9932 | 2.982 × 10^-9 | 0.5629 | 0.2630
R² σ_xy | 0.9120 | 1.109 × 10^-3 | 0.9260 | 1.163 × 10^-2 | 0.4862 | 0.9627 | 3.386 × 10^-7 | 0.6904 | 0.5129
R² σ_yy | 0.9672 | 5.183 × 10^-6 | 0.8105 | 3.138 × 10^-3 | 0.9569 | 0.9763 | 4.897 × 10^-8 | 0.9745 | 0.8841
R² σ_zz | 0.9825 | 1.315 × 10^-5 | 0.9681 | 8.932 × 10^-5 | 0.9738 | 0.9887 | 2.130 × 10^-7 | 0.9855 | 0.9184
R² u | 0.9947 | 1.871 × 10^-5 | 0.9985 | 4.685 × 10^-4 | 0.9950 | 0.9980 | 2.820 × 10^-9 | 0.9978 | 0.9581
R² v | 0.9933 | 2.244 × 10^-5 | 0.9493 | 1.716 × 10^-3 | 0.9960 | 0.9980 | 2.141 × 10^-9 | 0.9982 | 0.9443
R² mean | -1196.2920 | 6.166 × 10^3 | -1305.9226 | 3.269 × 10^1 | -107.7646 | -830.8926 | 4.298 | -322.4940 | -420.9029
MSE ε_xx^t | 2.655 × 10^-7 | 4.608 × 10^-15 | 4.500 × 10^-7 | 7.618 × 10^-10 | 1.660 × 10^-7 | 2.177 × 10^-7 | 2.073 × 10^-17 | 2.546 × 10^-7 | 7.480 × 10^-8
MSE ε_xy^t | 4.174 × 10^-8 | 2.747 × 10^-16 | 3.847 × 10^-8 | 6.433 × 10^-9 | 8.423 × 10^-8 | 2.062 × 10^-8 | 6.416 × 10^-22 | 4.949 × 10^-8 | 9.025 × 10^-8
MSE ε_yy^t | 2.494 × 10^-7 | 1.581 × 10^-16 | 5.852 × 10^-7 | 2.150 × 10^-9 | 1.500 × 10^-7 | 1.973 × 10^-7 | 2.708 × 10^-16 | 1.793 × 10^-7 | 3.804 × 10^-7
MSE ε_xx^p | 3.694 × 10^-7 | 5.475 × 10^-15 | 8.900 × 10^-7 | 4.957 × 10^-9 | 1.150 × 10^-7 | 3.109 × 10^-7 | 2.740 × 10^-19 | 3.040 × 10^-7 | 8.305 × 10^-8
MSE ε_xy^p | 1.826 × 10^-8 | 3.699 × 10^-18 | 1.893 × 10^-8 | 5.419 × 10^-10 | 8.258 × 10^-10 | 1.224 × 10^-8 | 6.153 × 10^-24 | 3.270 × 10^-9 | 6.756 × 10^-9
MSE ε_yy^p | 4.966 × 10^-7 | 8.827 × 10^-15 | 2.079 × 10^-7 | 1.285 × 10^-9 | 1.712 × 10^-7 | 4.229 × 10^-7 | 6.942 × 10^-20 | 4.130 × 10^-7 | 1.176 × 10^-7
MSE ε_zz^p | 9.798 × 10^-9 | 1.011 × 10^-18 | 2.318 × 10^-7 | 9.893 × 10^-9 | 5.854 × 10^-9 | 8.779 × 10^-9 | 6.376 × 10^-22 | 8.438 × 10^-9 | 2.996 × 10^-9
MSE σ_xx | 1.599 × 10^2 | 1.859 × 10^4 | 3.640 × 10^1 | 6.144 | 3.304 × 10^3 | 2.324 × 10^1 | 2.649 × 10^-2 | 1.485 × 10^3 | 2.504 × 10^3
MSE σ_xy | 8.763 × 10^1 | 1.101 × 10^3 | 7.371 × 10^1 | 1.158 × 10^1 | 5.118 × 10^2 | 3.585 × 10^1 | 1.534 | 3.084 × 10^2 | 4.852 × 10^2
MSE σ_yy | 1.180 × 10^4 | 6.690 × 10^5 | 6.809 × 10^4 | 1.127 × 10^3 | 1.547 × 10^4 | 8.593 × 10^3 | 1.887 × 10^3 | 9.171 × 10^3 | 4.165 × 10^4
MSE σ_zz | 5.870 × 10^2 | 1.478 × 10^4 | 1.069 × 10^3 | 2.994 | 8.768 × 10^2 | 3.739 × 10^2 | 4.308 × 10^2 | 4.876 × 10^2 | 2.737 × 10^3
MSE u | 1.288 × 10^-2 | 1.115 × 10^-4 | 3.569 × 10^-3 | 1.144 × 10^-3 | 1.217 × 10^-2 | 4.617 × 10^-3 | 6.444 × 10^-9 | 5.283 × 10^-3 | 1.023 × 10^-1
MSE v | 2.259 × 10^-4 | 2.552 × 10^-8 | 1.711 × 10^-3 | 5.786 × 10^-5 | 1.339 × 10^-4 | 6.601 × 10^-5 | 2.616 × 10^-12 | 6.196 × 10^-5 | 1.880 × 10^-3
Table A15. Detailed results for the block compression use case, Simulation 1.

Simulation 1 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR
R² ε_xx^t | 0.7303 | 1.285 × 10^-1 | -3.1200 | 1.797 | -1.0225 | 0.4661 | 1.606 × 10^-3 | 0.7233 | 0.0611
R² ε_xy^t | 0.5272 | 2.271 × 10^-1 | -1.8630 | 3.719 × 10^-1 | -0.1080 | 0.4285 | 3.330 × 10^-4 | 0.7319 | 0.0383
R² ε_yy^t | 0.7301 | 1.291 × 10^-1 | -3.0260 | 2.761 | -1.0094 | 0.4515 | 2.165 × 10^-4 | 0.7236 | 0.0549
R² ε_xx^p | 0.7267 | 1.309 × 10^-1 | -2.6201 | 1.502 | -0.9605 | 0.4632 | 1.165 × 10^-3 | 0.7213 | 0.0619
R² ε_xy^p | 0.5208 | 2.282 × 10^-1 | -0.2147 | 6.512 × 10^-1 | -0.0480 | 0.3885 | 1.938 × 10^-2 | 0.7295 | 0.0348
R² ε_yy^p | 0.7273 | 1.308 × 10^-1 | -0.4142 | 7.218 × 10^-1 | -0.9576 | 0.4298 | 1.476 × 10^-3 | 0.7217 | 0.0624
R² ε_zz^p | 0.3664 | 2.504 × 10^-1 | 0.2890 | 8.202 × 10^-2 | -1.0694 | 0.2682 | 1.412 × 10^-3 | 0.6842 | 0.0806
R² σ_xx | 0.2825 | 2.839 × 10^-1 | 0.8531 | 3.238 × 10^-2 | -0.5722 | 0.7233 | 2.191 × 10^-4 | 0.8244 | 0.1672
R² σ_xy | 0.2157 | 4.538 × 10^-1 | 0.8347 | 8.739 × 10^-3 | 0.0408 | 0.5910 | 4.449 × 10^-4 | 0.7997 | 0.1585
R² σ_yy | 0.2810 | 2.974 × 10^-1 | 0.7870 | 2.325 × 10^-2 | 0.7210 | 0.6808 | 3.475 × 10^-4 | 0.7726 | -0.3058
R² σ_zz | 0.3929 | 3.008 × 10^-1 | 0.8048 | 3.480 × 10^-3 | 0.6143 | 0.6356 | 2.169 × 10^-4 | 0.8023 | -0.1644
R² u | 0.8266 | 7.245 × 10^-2 | 0.9294 | 4.231 × 10^-3 | 0.4551 | 0.9157 | 4.560 × 10^-5 | 0.9360 | 0.5587
R² v | 0.9031 | 5.961 × 10^-2 | 0.9872 | 2.228 × 10^-4 | 0.7144 | 0.9619 | 1.290 × 10^-5 | 0.9805 | 0.5683
R² mean | 0.5562 | 1.952 × 10^-1 | -0.4441 | 6.066 × 10^-1 | -0.2463 | 0.5695 | 1.665 × 10^-3 | 0.7808 | 0.1059
MSE ε_xx^t | 9.226 × 10^-4 | 4.397 × 10^-4 | 1.409 × 10^-2 | 6.146 × 10^-3 | 6.918 × 10^-3 | 1.826 × 10^-3 | 5.492 × 10^-6 | 9.465 × 10^-4 | 3.211 × 10^-3
MSE ε_xy^t | 2.624 × 10^-3 | 1.261 × 10^-3 | 1.589 × 10^-2 | 2.065 × 10^-3 | 6.150 × 10^-3 | 3.172 × 10^-3 | 1.849 × 10^-6 | 1.488 × 10^-3 | 5.338 × 10^-3
MSE ε_yy^t | 9.301 × 10^-4 | 4.449 × 10^-4 | 1.388 × 10^-2 | 9.514 × 10^-3 | 6.925 × 10^-3 | 1.890 × 10^-3 | 7.460 × 10^-7 | 9.526 × 10^-4 | 3.257 × 10^-3
MSE ε_xx^p | 9.391 × 10^-4 | 4.498 × 10^-4 | 1.244 × 10^-2 | 5.160 × 10^-3 | 6.737 × 10^-3 | 1.845 × 10^-3 | 4.004 × 10^-6 | 9.577 × 10^-4 | 3.224 × 10^-3
MSE ε_xy^p | 2.464 × 10^-3 | 1.173 × 10^-3 | 6.245 × 10^-3 | 3.348 × 10^-3 | 5.387 × 10^-3 | 3.143 × 10^-3 | 9.964 × 10^-5 | 1.391 × 10^-3 | 4.962 × 10^-3
MSE ε_yy^p | 9.431 × 10^-4 | 4.524 × 10^-4 | 4.890 × 10^-3 | 2.496 × 10^-3 | 6.769 × 10^-3 | 1.972 × 10^-3 | 5.103 × 10^-6 | 9.625 × 10^-4 | 3.242 × 10^-3
MSE ε_zz^p | 6.827 × 10^-8 | 2.698 × 10^-8 | 2.458 × 10^-3 | 2.836 × 10^-4 | 2.230 × 10^-7 | 7.885 × 10^-8 | 1.521 × 10^-10 | 3.403 × 10^-8 | 9.906 × 10^-8
MSE σ_xx | 1.703 × 10^4 | 6.737 × 10^3 | 3.486 × 10^3 | 7.684 × 10^2 | 3.732 × 10^4 | 6.568 × 10^3 | 5.200 | 4.168 × 10^3 | 1.977 × 10^4
MSE σ_xy | 1.182 × 10^4 | 6.839 × 10^3 | 2.491 × 10^3 | 1.317 × 10^2 | 1.446 × 10^4 | 6.164 × 10^3 | 6.705 | 3.019 × 10^3 | 1.268 × 10^4
MSE σ_yy | 9.018 × 10^4 | 3.730 × 10^4 | 2.671 × 10^4 | 2.916 × 10^3 | 3.499 × 10^4 | 4.003 × 10^4 | 4.358 × 10^1 | 2.852 × 10^4 | 1.638 × 10^5
MSE σ_zz | 1.872 × 10^4 | 9.276 × 10^3 | 6.021 × 10^3 | 1.073 × 10^2 | 1.189 × 10^4 | 1.124 × 10^4 | 6.688 | 6.098 × 10^3 | 3.591 × 10^4
MSE u | 3.141 × 10^-1 | 1.312 × 10^-1 | 1.278 × 10^-1 | 7.662 × 10^-3 | 9.869 × 10^-1 | 1.526 × 10^-1 | 8.258 × 10^-5 | 1.159 × 10^-1 | 7.993 × 10^-1
MSE v | 4.938 × 10^-1 | 3.039 × 10^-1 | 6.544 × 10^-2 | 1.136 × 10^-3 | 1.456 | 1.943 × 10^-1 | 6.575 × 10^-5 | 9.955 × 10^-2 | 2.201
Table A16. Detailed results for the block compression use case, Simulation 2.

Simulation 2 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR
R² ε_xx^t | 0.7096 | 2.194 × 10^-1 | -1.8843 | 1.366 × 10^-1 | -0.2305 | 0.5318 | 2.513 × 10^-3 | 0.7129 | 0.0908
R² ε_xy^t | 0.5973 | 3.255 × 10^-1 | -1.0111 | 5.049 × 10^-1 | -0.1731 | 0.2782 | 2.402 × 10^-3 | 0.6317 | 0.0484
R² ε_yy^t | 0.7065 | 2.224 × 10^-1 | 0.4236 | 2.569 × 10^-1 | -0.2238 | 0.5362 | 1.582 × 10^-3 | 0.7144 | 0.0893
R² ε_xx^p | 0.7009 | 2.318 × 10^-1 | -1.1047 | 4.716 × 10^-1 | -0.1968 | 0.5474 | 7.692 × 10^-4 | 0.7105 | 0.0898
R² ε_xy^p | 0.6101 | 3.059 × 10^-1 | 0.6086 | 2.773 × 10^-2 | -0.0903 | 0.2472 | 3.168 × 10^-3 | 0.6283 | 0.0483
R² ε_yy^p | 0.7005 | 2.322 × 10^-1 | -0.0181 | 3.360 × 10^-2 | -0.2002 | 0.5343 | 8.637 × 10^-4 | 0.7113 | 0.0899
R² ε_zz^p | -0.5742 | 9.762 × 10^-1 | 0.6019 | 7.576 × 10^-2 | -1.0839 | 0.3289 | 7.350 × 10^-4 | 0.7009 | 0.0660
R² σ_xx | 0.3274 | 3.777 × 10^-1 | 0.8818 | 2.484 × 10^-3 | -1.6968 | 0.6480 | 3.008 × 10^-4 | 0.8516 | 0.3220
R² σ_xy | -0.4305 | 9.030 × 10^-1 | 0.5625 | 6.341 × 10^-3 | -0.5689 | 0.0328 | 2.754 × 10^-4 | 0.5789 | 0.2049
R² σ_yy | 0.1829 | 4.317 × 10^-1 | 0.7002 | 3.199 × 10^-3 | 0.5536 | 0.6007 | 2.593 × 10^-5 | 0.7000 | -0.1031
R² σ_zz | -0.3596 | 8.129 × 10^-1 | 0.7370 | 1.084 × 10^-2 | 0.4829 | 0.5160 | 1.780 × 10^-4 | 0.7096 | -0.1446
R² u | 0.8932 | 1.158 × 10^-1 | 0.9263 | 8.374 × 10^-3 | 0.3561 | 0.9418 | 7.221 × 10^-5 | 0.9579 | 0.4798
R² v | 0.8345 | 1.753 × 10^-1 | 0.9813 | 4.156 × 10^-3 | 0.7313 | 0.9507 | 2.445 × 10^-5 | 0.9677 | 0.5499
R² mean | 0.3768 | 3.803 × 10^-1 | 0.1850 | 5.531 × 10^-2 | -0.1800 | 0.5149 | 3.320 × 10^-4 | 0.7366 | 0.1409
MSE ε_xx^t | 1.744 × 10^-3 | 1.318 × 10^-3 | 1.732 × 10^-2 | 8.206 × 10^-4 | 7.390 × 10^-3 | 2.812 × 10^-3 | 1.509 × 10^-5 | 1.724 × 10^-3 | 5.461 × 10^-3
MSE ε_xy^t | 2.154 × 10^-3 | 1.741 × 10^-3 | 1.076 × 10^-2 | 2.700 × 10^-3 | 6.274 × 10^-3 | 3.861 × 10^-3 | 1.285 × 10^-5 | 1.970 × 10^-3 | 5.090 × 10^-3
MSE ε_yy^t | 1.772 × 10^-3 | 1.342 × 10^-3 | 3.479 × 10^-3 | 1.550 × 10^-3 | 7.387 × 10^-3 | 2.799 × 10^-3 | 9.547 × 10^-6 | 1.724 × 10^-3 | 5.496 × 10^-3
MSE ε_xx^p | 1.811 × 10^-3 | 1.404 × 10^-3 | 1.275 × 10^-2 | 2.856 × 10^-3 | 7.248 × 10^-3 | 2.741 × 10^-3 | 4.658 × 10^-6 | 1.753 × 10^-3 | 5.512 × 10^-3
MSE ε_xy^p | 1.859 × 10^-3 | 1.459 × 10^-3 | 1.866 × 10^-3 | 1.322 × 10^-4 | 5.199 × 10^-3 | 3.590 × 10^-3 | 1.511 × 10^-5 | 1.772 × 10^-3 | 4.538 × 10^-3
MSE ε_yy^p | 1.824 × 10^-3 | 1.414 × 10^-3 | 6.200 × 10^-3 | 2.046 × 10^-4 | 7.309 × 10^-3 | 2.836 × 10^-3 | 5.260 × 10^-6 | 1.758 × 10^-3 | 5.542 × 10^-3
MSE ε_zz^p | 1.408 × 10^-7 | 8.729 × 10^-8 | 2.424 × 10^-3 | 4.614 × 10^-4 | 1.863 × 10^-7 | 6.001 × 10^-8 | 6.573 × 10^-11 | 2.675 × 10^-8 | 8.352 × 10^-8
MSE σ_xx | 1.019 × 10^4 | 5.721 × 10^3 | 1.791 × 10^3 | 3.762 × 10^1 | 4.085 × 10^4 | 5.332 × 10^3 | 4.557 | 2.249 × 10^3 | 1.027 × 10^4
MSE σ_xy | 8.585 × 10^3 | 5.419 × 10^3 | 2.626 × 10^3 | 3.805 × 10^1 | 9.416 × 10^3 | 5.805 × 10^3 | 1.653 | 2.527 × 10^3 | 4.772 × 10^3
MSE σ_yy | 8.018 × 10^4 | 4.236 × 10^4 | 2.942 × 10^4 | 3.139 × 10^2 | 4.381 × 10^4 | 3.919 × 10^4 | 2.545 | 2.944 × 10^4 | 1.083 × 10^5
MSE σ_zz | 2.661 × 10^4 | 1.591 × 10^4 | 5.149 × 10^3 | 2.122 × 10^2 | 1.012 × 10^4 | 9.474 × 10^3 | 3.484 | 5.684 × 10^3 | 2.240 × 10^4
MSE u | 4.301 × 10^-1 | 4.665 × 10^-1 | 2.971 × 10^-1 | 3.374 × 10^-2 | 2.594 | 2.343 × 10^-1 | 2.909 × 10^-4 | 1.697 × 10^-1 | 2.096
MSE v | 8.877 × 10^-1 | 9.405 × 10^-1 | 1.001 × 10^-1 | 2.229 × 10^-2 | 1.442 | 2.647 × 10^-1 | 1.312 × 10^-4 | 1.735 × 10^-1 | 2.414
Table A17. Detailed results for the block compression use case, Simulation 7.

Simulation 7 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR
R² ε_xx^t | 0.9991 | 2.601 × 10^-4 | 0.9881 | 6.498 × 10^-3 | 0.5368 | 0.9664 | 2.355 × 10^-2 | 0.9688 | 0.3910
R² ε_xy^t | 0.9987 | 6.959 × 10^-4 | 0.9623 | 2.560 × 10^-2 | 0.2573 | 0.9512 | 3.642 × 10^-2 | 0.9508 | 0.2373
R² ε_yy^t | 0.9991 | 2.628 × 10^-4 | 0.9696 | 1.661 × 10^-2 | 0.5397 | 0.9663 | 2.395 × 10^-2 | 0.9691 | 0.3940
R² ε_xx^p | 0.9991 | 2.756 × 10^-4 | 0.8726 | 4.550 × 10^-3 | 0.5257 | 0.9655 | 2.371 × 10^-2 | 0.9688 | 0.3834
R² ε_xy^p | 0.9986 | 7.165 × 10^-4 | 0.9616 | 3.809 × 10^-3 | 0.2496 | 0.9577 | 2.802 × 10^-2 | 0.9484 | 0.2319
R² ε_yy^p | 0.9991 | 2.774 × 10^-4 | 0.8969 | 4.043 × 10^-3 | 0.5281 | 0.9635 | 2.655 × 10^-2 | 0.9690 | 0.3846
R² ε_zz^p | 0.9968 | 4.519 × 10^-5 | 0.7795 | 9.273 × 10^-3 | 0.6862 | 0.9629 | 2.088 × 10^-2 | 0.9789 | 0.4545
R² σ_xx | 0.9965 | 6.357 × 10^-4 | 0.9883 | 6.623 × 10^-4 | 0.7980 | 0.9743 | 1.251 × 10^-2 | 0.9868 | 0.5950
R² σ_xy | 0.9962 | 9.610 × 10^-4 | 0.9775 | 1.717 × 10^-3 | 0.6131 | 0.9410 | 3.089 × 10^-2 | 0.9772 | 0.5081
R² σ_yy | 0.9915 | 3.103 × 10^-3 | 0.8713 | 1.406 × 10^-3 | 0.8900 | 0.9891 | 4.567 × 10^-3 | 0.9945 | 0.7272
R² σ_zz | 0.9943 | 2.068 × 10^-3 | 0.9874 | 3.501 × 10^-5 | 0.8345 | 0.9800 | 1.025 × 10^-2 | 0.9906 | 0.6242
R² u | 0.9996 | 3.315 × 10^-5 | 0.9851 | 1.746 × 10^-3 | 0.9386 | 0.9973 | 1.892 × 10^-3 | 0.9967 | 0.9069
R² v | 0.9997 | 2.460 × 10^-5 | 0.9923 | 2.524 × 10^-4 | 0.9415 | 0.9976 | 1.783 × 10^-3 | 0.9972 | 0.9224
R² mean | 0.9976 | 8.258 × 10^-5 | 0.9410 | 4.310 × 10^-3 | 0.6415 | 0.9702 | 1.884 × 10^-2 | 0.9767 | 0.5200
MSE ε_xx^t | 3.776 × 10^-6 | 1.124 × 10^-6 | 5.131 × 10^-5 | 2.808 × 10^-5 | 2.001 × 10^-3 | 1.454 × 10^-4 | 1.018 × 10^-4 | 1.349 × 10^-4 | 2.631 × 10^-3
MSE ε_xy^t | 7.595 × 10^-6 | 4.066 × 10^-6 | 2.200 × 10^-4 | 1.496 × 10^-4 | 4.339 × 10^-3 | 2.854 × 10^-4 | 2.128 × 10^-4 | 2.873 × 10^-4 | 4.456 × 10^-3
MSE ε_yy^t | 3.777 × 10^-6 | 1.144 × 10^-6 | 1.324 × 10^-4 | 7.231 × 10^-5 | 2.004 × 10^-3 | 1.467 × 10^-4 | 1.043 × 10^-4 | 1.347 × 10^-4 | 2.638 × 10^-3
MSE ε_xx^p | 3.881 × 10^-6 | 1.196 × 10^-6 | 5.530 × 10^-4 | 1.975 × 10^-5 | 2.059 × 10^-3 | 1.496 × 10^-4 | 1.029 × 10^-4 | 1.356 × 10^-4 | 2.676 × 10^-3
MSE ε_xy^p | 7.212 × 10^-6 | 3.779 × 10^-6 | 2.027 × 10^-4 | 2.009 × 10^-5 | 3.957 × 10^-3 | 2.229 × 10^-4 | 1.478 × 10^-4 | 2.722 × 10^-4 | 4.051 × 10^-3
MSE ε_yy^p | 3.894 × 10^-6 | 1.213 × 10^-6 | 4.511 × 10^-4 | 1.769 × 10^-5 | 2.065 × 10^-3 | 1.598 × 10^-4 | 1.162 × 10^-4 | 1.357 × 10^-4 | 2.692 × 10^-3
MSE ε_zz^p | 5.595 × 10^-10 | 7.808 × 10^-12 | 9.648 × 10^-4 | 4.057 × 10^-5 | 5.421 × 10^-8 | 6.411 × 10^-9 | 3.608 × 10^-9 | 3.652 × 10^-9 | 9.424 × 10^-8
MSE σ_xx | 1.240 × 10^2 | 2.269 × 10^1 | 4.180 × 10^2 | 2.364 × 10^1 | 7.209 × 10^3 | 9.186 × 10^2 | 4.463 × 10^2 | 4.708 × 10^2 | 1.445 × 10^4
MSE σ_xy | 6.742 × 10^1 | 1.684 × 10^1 | 3.950 × 10^2 | 3.009 × 10^1 | 6.781 × 10^3 | 1.034 × 10^3 | 5.413 × 10^2 | 3.991 × 10^2 | 8.620 × 10^3
MSE σ_yy | 1.877 × 10^3 | 6.894 × 10^2 | 2.858 × 10^4 | 3.124 × 10^2 | 2.443 × 10^4 | 2.421 × 10^3 | 1.015 × 10^3 | 1.226 × 10^3 | 6.060 × 10^4
MSE σ_zz | 2.603 × 10^2 | 9.521 × 10^1 | 5.801 × 10^2 | 1.612 | 7.619 × 10^3 | 9.203 × 10^2 | 4.717 × 10^2 | 4.317 × 10^2 | 1.730 × 10^4
MSE u | 1.251 × 10^-3 | 9.453 × 10^-5 | 4.255 × 10^-2 | 4.980 × 10^-3 | 1.752 × 10^-1 | 7.838 × 10^-3 | 5.395 × 10^-3 | 9.348 × 10^-3 | 2.655 × 10^-1
MSE v | 1.509 × 10^-3 | 1.275 × 10^-4 | 4.010 × 10^-2 | 1.308 × 10^-3 | 3.030 × 10^-1 | 1.242 × 10^-2 | 9.241 × 10^-3 | 1.433 × 10^-2 | 4.024 × 10^-1
Table A18. Detailed results for the block compression use case, Simulation 12.

Simulation 12 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR
R² ε_xx^t | 0.6584 | 3.166 × 10^-1 | -3.4145 | 2.071 | -1.1624 | 0.4656 | 2.148 × 10^-4 | 0.7187 | 0.0506
R² ε_xy^t | 0.5324 | 3.411 × 10^-1 | -1.7259 | 5.343 × 10^-1 | -0.1194 | 0.4278 | 4.223 × 10^-4 | 0.7196 | 0.0415
R² ε_yy^t | 0.6563 | 3.215 × 10^-1 | -2.9570 | 2.652 | -1.1091 | 0.4534 | 8.915 × 10^-4 | 0.7196 | 0.0503
R² ε_xx^p | 0.6578 | 3.175 × 10^-1 | -2.7614 | 1.762 | -1.1139 | 0.4618 | 1.278 × 10^-3 | 0.7158 | 0.0478
R² ε_xy^p | 0.5168 | 3.478 × 10^-1 | -0.1122 | 6.743 × 10^-1 | -0.0789 | 0.4009 | 8.991 × 10^-3 | 0.7157 | 0.0401
R² ε_yy^p | 0.6591 | 3.172 × 10^-1 | -0.5161 | 8.422 × 10^-1 | -1.1014 | 0.4300 | 2.718 × 10^-4 | 0.7165 | 0.0495
R² ε_zz^p | 0.6064 | 3.871 × 10^-1 | 0.2723 | 1.236 × 10^-1 | -0.4602 | 0.2658 | 7.615 × 10^-4 | 0.6808 | 0.0781
R² σ_xx | 0.6182 | 1.745 × 10^-1 | 0.8828 | 4.138 × 10^-2 | 0.0180 | 0.7218 | 3.966 × 10^-5 | 0.8423 | 0.1688
R² σ_xy | 0.4799 | 4.811 × 10^-1 | 0.8803 | 1.074 × 10^-2 | 0.3261 | 0.5912 | 8.127 × 10^-4 | 0.7735 | 0.1288
R² σ_yy | 0.5583 | 4.457 × 10^-1 | 0.8136 | 9.338 × 10^-3 | 0.3793 | 0.6803 | 2.977 × 10^-4 | 0.8285 | 0.1275
R² σ_zz | 0.5919 | 3.890 × 10^-1 | 0.9196 | 1.648 × 10^-3 | 0.3378 | 0.6356 | 1.603 × 10^-4 | 0.7979 | 0.0963
R² u | 0.7277 | 2.782 × 10^-1 | 0.9072 | 6.483 × 10^-3 | 0.4679 | 0.9157 | 7.408 × 10^-5 | 0.9269 | 0.5649
R² v | 0.9310 | 5.347 × 10^-2 | 0.9866 | 1.786 × 10^-4 | 0.7166 | 0.9621 | 2.432 × 10^-6 | 0.9800 | 0.5744
R² mean | 0.6303 | 3.204 × 10^-1 | -0.4480 | 6.652 × 10^-1 | -0.2230 | 0.5702 | 6.266 × 10^-4 | 0.7797 | 0.1553
MSE ε_xx^t | 1.092 × 10^-3 | 1.012 × 10^-3 | 1.412 × 10^-2 | 6.623 × 10^-3 | 6.915 × 10^-3 | 1.828 × 10^-3 | 7.349 × 10^-7 | 8.997 × 10^-4 | 3.036 × 10^-3
MSE ε_xy^t | 2.505 × 10^-3 | 1.827 × 10^-3 | 1.460 × 10^-2 | 2.863 × 10^-3 | 5.997 × 10^-3 | 3.176 × 10^-3 | 2.344 × 10^-6 | 1.502 × 10^-3 | 5.135 × 10^-3
MSE ε_yy^t | 1.115 × 10^-3 | 1.043 × 10^-3 | 1.284 × 10^-2 | 8.604 × 10^-3 | 6.842 × 10^-3 | 1.884 × 10^-3 | 3.073 × 10^-6 | 9.098 × 10^-4 | 3.081 × 10^-3
MSE ε_xx^p | 1.098 × 10^-3 | 1.019 × 10^-3 | 1.207 × 10^-2 | 5.655 × 10^-3 | 6.783 × 10^-3 | 1.849 × 10^-3 | 4.391 × 10^-6 | 9.120 × 10^-4 | 3.055 × 10^-3
MSE ε_xy^p | 2.264 × 10^-3 | 1.629 × 10^-3 | 5.210 × 10^-3 | 3.159 × 10^-3 | 5.054 × 10^-3 | 3.080 × 10^-3 | 4.622 × 10^-5 | 1.332 × 10^-3 | 4.497 × 10^-3
MSE ε_yy^p | 1.107 × 10^-3 | 1.030 × 10^-3 | 4.921 × 10^-3 | 2.734 × 10^-3 | 6.821 × 10^-3 | 1.971 × 10^-3 | 9.399 × 10^-7 | 9.202 × 10^-4 | 3.085 × 10^-3
MSE ε_zz^p | 1.229 × 10^-7 | 1.208 × 10^-7 | 2.362 × 10^-3 | 4.012 × 10^-4 | 4.558 × 10^-7 | 7.910 × 10^-8 | 8.205 × 10^-11 | 9.964 × 10^-8 | 2.878 × 10^-7
MSE σ_xx | 2.769 × 10^4 | 1.265 × 10^4 | 8.502 × 10^3 | 3.001 × 10^3 | 7.121 × 10^4 | 6.602 × 10^3 | 9.414 × 10^-1 | 1.144 × 10^4 | 6.028 × 10^4
MSE σ_xy | 2.415 × 10^4 | 2.233 × 10^4 | 5.559 × 10^3 | 4.987 × 10^2 | 3.128 × 10^4 | 6.161 × 10^3 | 1.225 × 10^1 | 1.052 × 10^4 | 4.045 × 10^4
MSE σ_yy | 1.760 × 10^5 | 1.776 × 10^5 | 7.428 × 10^4 | 3.722 × 10^3 | 2.474 × 10^5 | 4.010 × 10^4 | 3.734 × 10^1 | 6.835 × 10^4 | 3.478 × 10^5
MSE σ_zz | 3.808 × 10^4 | 3.630 × 10^4 | 7.501 × 10^3 | 1.538 × 10^2 | 6.179 × 10^4 | 1.124 × 10^4 | 4.943 | 1.886 × 10^4 | 8.433 × 10^4
MSE u | 4.605 × 10^-1 | 4.704 × 10^-1 | 1.569 × 10^-1 | 1.096 × 10^-2 | 8.999 × 10^-1 | 1.527 × 10^-1 | 1.342 × 10^-4 | 1.235 × 10^-1 | 7.358 × 10^-1
MSE v | 3.434 × 10^-1 | 2.660 × 10^-1 | 6.647 × 10^-2 | 8.887 × 10^-4 | 1.410 | 1.934 × 10^-1 | 1.240 × 10^-5 | 9.955 × 10^-2 | 2.117
Table A19. Detailed results for the block compression use case, Simulation 13.

Simulation 13 | MLP mean | MLP std | PINN mean | PINN std | SVR | GBDTR mean | GBDTR std | KNNR | GPR
R² ε_xx^t | 0.7511 | 2.314 × 10^-1 | -4.3817 | 2.685 | -0.2933 | 0.5204 | 9.591 × 10^-4 | 0.7164 | 0.0857
R² ε_xy^t | 0.5801 | 3.853 × 10^-1 | -0.9114 | 1.065 × 10^-1 | -0.1797 | 0.2623 | 2.829 × 10^-3 | 0.6314 | 0.0485
R² ε_yy^t | 0.7517 | 2.319 × 10^-1 | 0.1125 | 9.052 × 10^-2 | -0.2646 | 0.5280 | 1.380 × 10^-3 | 0.7186 | 0.0873
R² ε_xx^p | 0.7547 | 2.273 × 10^-1 | -2.1130 | 9.596 × 10^-1 | -0.2740 | 0.5357 | 4.099 × 10^-4 | 0.7131 | 0.0826
R² ε_xy^p | 0.5616 | 4.021 × 10^-1 | 0.5356 | 2.972 × 10^-2 | -0.1116 | 0.2250 | 7.643 × 10^-3 | 0.6277 | 0.0480
R² ε_yy^p | 0.7556 | 2.269 × 10^-1 | -0.4174 | 3.678 × 10^-1 | -0.2741 | 0.5217 | 8.630 × 10^-4 | 0.7142 | 0.0830
R² ε_zz^p | 0.6959 | 2.525 × 10^-1 | 0.4548 | 2.042 × 10^-1 | -0.4013 | 0.4990 | 7.038 × 10^-5 | 0.7267 | 0.0912
R² σ_xx | 0.4251 | 5.887 × 10^-1 | 0.9077 | 3.046 × 10^-3 | -0.4988 | 0.7847 | 3.553 × 10^-4 | 0.8620 | 0.2942
R² σ_xy | -0.0088 | 8.836 × 10^-1 | 0.8314 | 3.051 × 10^-3 | 0.1802 | 0.4255 | 6.447 × 10^-4 | 0.6805 | 0.1513
R² σ_yy | 0.6042 | 4.473 × 10^-1 | 0.6678 | 8.365 × 10^-4 | -0.0250 | 0.7608 | 5.706 × 10^-6 | 0.8418 | 0.1470
R² σ_zz | 0.6356 | 3.848 × 10^-1 | 0.9351 | 1.247 × 10^-3 | 0.0788 | 0.6999 | 1.686 × 10^-4 | 0.8282 | 0.1321
R² u | 0.9627 | 2.725 × 10^-2 | 0.9392 | 4.615 × 10^-3 | 0.3676 | 0.9467 | 1.345 × 10^-4 | 0.9625 | 0.4940
R² v | 0.9478 | 5.094 × 10^-2 | 0.9805 | 3.346 × 10^-3 | 0.7270 | 0.9521 | 1.989 × 10^-5 | 0.9697 | 0.5572
R² mean | 0.6475 | 3.326 × 10^-1 | -0.1122 | 3.265 × 10^-1 | -0.0745 | 0.5894 | 1.579 × 10^-4 | 0.7687 | 0.1771
MSE ε_xx^t | 1.412 × 10^-3 | 1.313 × 10^-3 | 3.054 × 10^-2 | 1.523 × 10^-2 | 7.339 × 10^-3 | 2.721 × 10^-3 | 5.442 × 10^-6 | 1.609 × 10^-3 | 5.188 × 10^-3
MSE ε_xy^t | 2.253 × 10^-3 | 2.067 × 10^-3 | 1.025 × 10^-2 | 5.715 × 10^-4 | 6.329 × 10^-3 | 3.958 × 10^-3 | 1.518 × 10^-5 | 1.977 × 10^-3 | 5.105 × 10^-3
MSE ε_yy^t | 1.422 × 10^-3 | 1.328 × 10^-3 | 5.080 × 10^-3 | 5.182 × 10^-4 | 7.239 × 10^-3 | 2.702 × 10^-3 | 7.900 × 10^-6 | 1.611 × 10^-3 | 5.225 × 10^-3
MSE ε_xx^p | 1.389 × 10^-3 | 1.288 × 10^-3 | 1.763 × 10^-2 | 5.436 × 10^-3 | 7.217 × 10^-3 | 2.630 × 10^-3 | 2.322 × 10^-6 | 1.625 × 10^-3 | 5.197 × 10^-3
MSE ε_xy^p | 2.107 × 10^-3 | 1.933 × 10^-3 | 2.232 × 10^-3 | 1.428 × 10^-4 | 5.342 × 10^-3 | 3.724 × 10^-3 | 3.673 × 10^-5 | 1.789 × 10^-3 | 4.575 × 10^-3
MSE ε_yy^p | 1.398 × 10^-3 | 1.299 × 10^-3 | 8.111 × 10^-3 | 2.104 × 10^-3 | 7.291 × 10^-3 | 2.737 × 10^-3 | 4.939 × 10^-6 | 1.635 × 10^-3 | 5.247 × 10^-3
MSE ε_zz^p | 7.358 × 10^-8 | 6.111 × 10^-8 | 3.120 × 10^-3 | 1.168 × 10^-3 | 3.391 × 10^-7 | 1.212 × 10^-7 | 1.703 × 10^-11 | 6.614 × 10^-8 | 2.199 × 10^-7
MSE σ_xx | 2.766 × 10^4 | 2.832 × 10^4 | 4.438 × 10^3 | 1.466 × 10^2 | 7.211 × 10^4 | 1.036 × 10^4 | 1.709 × 10^1 | 6.640 × 10^3 | 3.395 × 10^4
MSE σ_xy | 1.954 × 10^4 | 1.711 × 10^4 | 3.266 × 10^3 | 5.908 × 10^1 | 1.588 × 10^4 | 1.113 × 10^4 | 1.249 × 10^1 | 6.187 × 10^3 | 1.644 × 10^4
MSE σ_yy | 1.214 × 10^5 | 1.372 × 10^5 | 1.019 × 10^5 | 2.565 × 10^2 | 3.143 × 10^5 | 7.333 × 10^4 | 1.750 | 4.851 × 10^4 | 2.615 × 10^5
MSE σ_zz | 2.089 × 10^4 | 2.205 × 10^4 | 3.720 × 10^3 | 7.147 × 10^1 | 5.280 × 10^4 | 1.720 × 10^4 | 9.664 | 9.844 × 10^3 | 4.974 × 10^4
MSE u | 1.395 × 10^-1 | 1.019 × 10^-1 | 2.273 × 10^-1 | 1.726 × 10^-2 | 2.365 | 1.992 × 10^-1 | 5.028 × 10^-4 | 1.401 × 10^-1 | 1.892
MSE v | 2.707 × 10^-1 | 2.644 × 10^-1 | 1.011 × 10^-1 | 1.737 × 10^-2 | 1.417 | 2.487 × 10^-1 | 1.032 × 10^-4 | 1.573 × 10^-1 | 2.299

References

1. Reddy, J.N. Introduction to the Finite Element Method; Mechanical Engineering; McGraw Hill Education: New York, NY, USA, 2019.
2. Yang, X.-S.; Koziel, S.; Leifsson, L. Computational Optimization, Modelling and Simulation: Recent Trends and Challenges. Procedia Comput. Sci. 2013, 18, 855–860.
3. Roberts, S.M.; Kusiak, J.; Liu, Y.L.; Forcellese, A.; Withers, P.J. Prediction of damage evolution in forged aluminium metal matrix composites using a neural network approach. J. Mater. Process. Technol. 1998, 80–81, 507–512.
4. Hans Raj, K.; Sharma, R.S.; Srivastava, S.; Patvardhan, C. Modeling of manufacturing processes with ANNs for intelligent manufacturing. Int. J. Mach. Tools Manuf. 2000, 40, 851–868.
5. García-Crespo, A.; Ruiz-Mezcua, B.; Fernández-Fdz, D.; Zaera, R. Prediction of the response under impact of steel armours using a multilayer perceptron. Neural Comput. Appl. 2007, 16, 147–154.
6. Nourbakhsh, M.; Irizarry, J.; Haymaker, J. Generalizable surrogate model features to approximate stress in 3D trusses. Eng. Appl. Artif. Intell. 2018, 71, 15–27.
7. Chan, W.L.; Fu, M.W.; Lu, J. An integrated FEM and ANN methodology for metal-formed product design. Eng. Appl. Artif. Intell. 2008, 21, 1170–1181.
8. D'Addona, D.M.; Antonelli, D. Neural Network Multiobjective Optimization of Hot Forging. Procedia CIRP 2018, 67, 498–503.
9. Gudur, P.P.; Dixit, U.S. A neural network-assisted finite element analysis of cold flat rolling. Eng. Appl. Artif. Intell. 2008, 21, 43–52.
10. Pellicer-Valero, O.J.; Rupérez, M.J.; Martínez-Sanchis, S.; Martín-Guerrero, J.D. Real-time biomechanical modeling of the liver using Machine Learning models trained on Finite Element Method simulations. Expert Syst. Appl. 2020, 143, 113083.
11. Abueidda, D.W.; Almasri, M.; Ammourah, R.; Ravaioli, U.; Jasiuk, I.M.; Sobh, N.A. Prediction and optimization of mechanical properties of composites using convolutional neural networks. Compos. Struct. 2019, 227, 111264.
12. Pfaff, T.; Fortunato, M.; Sanchez-Gonzalez, A.; Battaglia, P.W. Learning Mesh-Based Simulation with Graph Networks. In Proceedings of the International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia, 26 April–1 May 2020.
13. Loghin, A.; Ismonov, S. Augmenting Generic Fatigue Crack Growth Models Using 3D Finite Element Simulations and Gaussian Process Modeling. In Proceedings of the ASME 2019 Pressure Vessels & Piping Conference, San Antonio, TX, USA, 14–19 July 2019; Volume 2.
14. Ming, W.; Zhang, G.; Li, H.; Guo, J.; Zhang, Z.; Huang, Y.; Chen, Z. A hybrid process model for EDM based on finite-element method and Gaussian process regression. Int. J. Adv. Manuf. Technol. 2014, 74, 1197–1211.
15. Pan, F.; Zhu, P.; Zhang, Y. Metamodel-based lightweight design of B-pillar with TWB structure via support vector regression. Comput. Struct. 2010, 88, 36–44.
16. Li, H.; Shi, M.; Liu, X.; Shi, Y. Uncertainty optimization of dental implant based on finite element method, global sensitivity analysis and support vector regression. Proc. Inst. Mech. Eng. Part H: J. Eng. Med. 2019, 233, 232–243.
17. Hu, F.; Li, D. Modelling and Simulation of Milling Forces Using an Arbitrary Lagrangian–Eulerian Finite Element Method and Support Vector Regression. J. Optim. Theory Appl. 2012, 153, 461–484.
18. Martínez-Martínez, F.; Rupérez-Moreno, M.J.; Martínez-Sober, M.; Solves-Llorens, J.A.; Lorente, D.; Serrano-López, A.J.; Martínez-Sanchis, S.; Monserrat, C.; Martín-Guerrero, J.D. A finite element-based machine learning approach for modeling the mechanical behavior of the breast tissues under compression in real-time. Comput. Biol. Med. 2017, 90, 116–124.
19. Zhang, W.; Zhang, R.; Wu, C.; Goh, A.T.C.; Wang, L. Assessment of basal heave stability for braced excavations in anisotropic clay using extreme gradient boosting and random forest regression. Undergr. Space 2020.
20. Qi, Z.; Zhang, N.; Liu, Y.; Chen, W. Prediction of mechanical properties of carbon fiber based on cross-scale FEM and machine learning. Compos. Struct. 2019, 212, 199–206.
21. Haghighat, E.; Juanes, R. SciANN: A Keras/TensorFlow wrapper for scientific computations and physics-informed deep learning using artificial neural networks. Comput. Methods Appl. Mech. Eng. 2021, 373, 113552.
22. Haghighat, E.; Raissi, M.; Moure, A.; Gomez, H.; Juanes, R. A physics-informed deep learning framework for inversion and surrogate modeling in solid mechanics. Comput. Methods Appl. Mech. Eng. 2021, 379, 113741.
23. Shin, Y. On the Convergence of Physics Informed Neural Networks for Linear Second-Order Elliptic and Parabolic Type PDEs. Commun. Comput. Phys. 2020, 28, 2042–2074.
24. Yin, M.; Zheng, X.; Humphrey, J.D.; Em Karniadakis, G. Non-invasive Inference of Thrombus Material Properties with Physics-Informed Neural Networks. Comput. Methods Appl. Mech. Eng. 2021, 375, 113603.
25. Arnold, F.; King, R. State–space modeling for control based on physics-informed neural networks. Eng. Appl. Artif. Intell. 2021, 101, 104195.
26. Zobeiry, N.; Humfeld, K.D. A physics-informed machine learning approach for solving heat transfer equation in advanced manufacturing and engineering applications. Eng. Appl. Artif. Intell. 2021, 101, 104232.
27. Nascimento, R.G.; Fricke, K.; Viana, F.A.C. A tutorial on solving ordinary differential equations using Python and hybrid physics-informed neural network. Eng. Appl. Artif. Intell. 2020, 96, 103996.
28. Jin, X.; Cai, S.; Li, H.; Karniadakis, G.E. NSFnets (Navier–Stokes flow nets): Physics-informed neural networks for the incompressible Navier–Stokes equations. J. Comput. Phys. 2021, 426, 109951.
29. Mao, Z.; Jagtap, A.D.; Karniadakis, G.E. Physics-informed neural networks for high-speed flows. Comput. Methods Appl. Mech. Eng. 2020, 360, 112789.
30. Goan, E.; Fookes, C. Bayesian Neural Networks: An Introduction and Survey. In Case Studies in Applied Bayesian Data Science; Lecture Notes in Mathematics; Mengersen, K.L., Pudlo, P., Robert, C.P., Eds.; Springer: Cham, Switzerland, 2020; Volume 2259, pp. 45–87.
31. Garnelo, M.; Rosenbaum, D.; Maddison, C.; Ramalho, T.; Saxton, D.; Shanahan, M.; Teh, Y.W.; Rezende, D.; Eslami, S.M.A. Conditional Neural Processes. In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10–15 July 2018; Dy, J., Krause, A., Eds.; pp. 1704–1713.
32. Wang, S.; Teng, Y.; Perdikaris, P. Understanding and mitigating gradient pathologies in physics-informed neural networks. arXiv 2020, arXiv:2001.04536.
33. Pang, G.; Lu, L.; Karniadakis, G.E. fPINNs: Fractional Physics-Informed Neural Networks. SIAM J. Sci. Comput. 2019, 41, A2603–A2626.
34. Yang, Y.; Perdikaris, P. Adversarial uncertainty quantification in physics-informed neural networks. J. Comput. Phys. 2019, 394, 136–152.
35. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
36. Ngatchou, P.; Zarei, A.; El-Sharkawi, A. Pareto Multi Objective Optimization. In Proceedings of the 13th International Conference on Intelligent Systems Application to Power Systems, Arlington, VA, USA, 6–10 November 2005; pp. 84–91.
37. Pettit, C.L.; Wilson, D.K. A physics-informed neural network for sound propagation in the atmospheric boundary layer. In Proceedings of the 179th Meeting of the Acoustical Society of America, Acoustics Virtually Everywhere, 7–11 December 2020; p. 22002.
38. Lihua, L. Simulation physics-informed deep neural network by adaptive Adam optimization method to perform a comparative study of the system. Eng. Comput. 2021, 1–20.
39. McClenny, L.; Braga-Neto, U. Self-Adaptive Physics-Informed Neural Networks using a Soft Attention Mechanism. arXiv 2020, arXiv:2009.04544.
40. Jagtap, A.D.; Kawaguchi, K.; Karniadakis, G.E. Adaptive activation functions accelerate convergence in deep and physics-informed neural networks. J. Comput. Phys. 2020, 404, 109136.
41. Ensemble Methods—Scikit-Learn 0.24.1 Documentation. Available online: https://scikit-learn.org/stable/modules/ensemble.html (accessed on 5 November 2020).
42. Friedman, J.H. Greedy Function Approximation: A Gradient Boosting Machine. Ann. Stat. 2001, 29, 1189–1232.
43. Nearest Neighbors—Scikit-Learn 0.24.1 Documentation. Available online: https://scikit-learn.org/stable/modules/neighbors.html (accessed on 5 November 2020).
44. Schulz, E.; Speekenbrink, M.; Krause, A. A tutorial on Gaussian process regression: Modelling, exploring, and exploiting functions. J. Math. Psychol. 2018, 85, 1–16.
45. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning, 3rd ed.; Adaptive Computation and Machine Learning; MIT Press: Cambridge, MA, USA, 2008.
46. Awad, M.; Khanna, R. Support Vector Regression. In Efficient Learning Machines: Theories, Concepts, and Applications for Engineers and System Designers; Awad, M., Khanna, R., Eds.; Apress Open: New York, NY, USA, 2015; pp. 67–80.
47. Smola, A.J.; Schölkopf, B. A tutorial on support vector regression. Stat. Comput. 2004, 14, 199–222.
48. Svozil, D.; Kvasnicka, V.; Pospichal, J. Introduction to multi-layer feed-forward neural networks. Chemom. Intell. Lab. Syst. 1997, 39, 43–62.
49. Leshno, M.; Lin, V.Y.; Pinkus, A.; Schocken, S. Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw. 1993, 6, 861–867.
50. Lu, Z.; Pu, H.; Wang, F.; Hu, Z.; Wang, L. The Expressive Power of Neural Networks: A View from the Width. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017.
51. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366.
52. Gardner, M.W.; Dorling, S.R. Artificial neural networks (the multilayer perceptron)—A review of applications in the atmospheric sciences. Atmos. Environ. 1998, 32, 2627–2636.
53. Rumelhart, D.E.; Hinton, G.E.; Williams, R.J. Learning representations by back-propagating errors. Nature 1986, 323, 533–536.
54. Lu, L.; Pestourie, R.; Yao, W.; Wang, Z.; Verdugo, F.; Johnson, S.G. Physics-informed neural networks with hard constraints for inverse design. arXiv 2021, arXiv:2102.04626.
Figure 1. Principle of our surrogate model approach: all N nodes (i.e., their coordinates), together with the respective generalization variable, are sequentially entered into a surrogate model, which then sequentially predicts the outcome at the respective coordinates (i.e., the displacements, strains, and stresses of the respective node).
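To make this input–output convention concrete, the following minimal Python sketch illustrates the principle of Figure 1. It is an illustration only, not the authors' pipeline; `model` stands for any trained scikit-learn-style multi-output regressor, and the function name is ours.

```python
import numpy as np

def predict_fields(model, node_xy, gen_value):
    """Predict displacements, strains and stresses for all N nodes.

    node_xy   : (N, 2) array of node coordinates
    gen_value : scalar generalization variable (e.g., perforation diameter)
    returns   : (N, n_outputs) array of predicted nodal quantities
    """
    # Append the generalization variable as a third feature for every node.
    features = np.column_stack([node_xy, np.full(len(node_xy), gen_value)])
    return model.predict(features)  # one prediction per node, mesh-free
```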
Figure 2. The three use cases: (a) elongation of a plate (diameter = 100 mm) by 5 mm at the top end in the positive y-direction, (b) bending of a beam by a 5 mm displacement of the top end in the positive x-direction, (c) compression of a block with four perforations in the center of the quarter-symmetric parts (width = 220 mm) by 5 mm in the negative y-direction and (d) the considered coordinate system.
Figure 3. Perfectly plastic nonlinear elastoplastic material properties for a Young's modulus of 210 GPa, a Poisson's ratio of 0.3 and a yield stress of 900 MPa. The yield stress varies across the simulations of the beam and block use cases.
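For reference, the uniaxial tensile response implied by such a perfectly plastic law is a two-branch curve: linear elastic up to the yield strain σ_y/E (≈ 0.43% for the parameters above) and constant at the yield stress beyond it. The helper below is our illustrative reading of the caption, not code from the paper.

```python
import numpy as np

def uniaxial_stress(strain, E=210e3, yield_stress=900.0):
    """Tensile stress [MPa] of an idealized perfectly plastic material:
    sigma = E * eps below the yield strain (yield_stress / E), and
    sigma = yield_stress for all larger strains. E is given in MPa."""
    return np.minimum(E * np.asarray(strain), yield_stress)
```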
Figure 4. Block use case: Abaqus FEM results that our surrogate models should predict.
Figure 5. Elongation of a perforated plate, Simulation 1 (extrapolation): absolute errors of different surrogate models (a–f) and ground truth Abaqus FEM simulation (g) of σ_xy.
Figure 6. Elongation of a perforated plate, Simulation 4 (interpolation): absolute errors of different surrogate models (a–f) and ground truth Abaqus FEM simulation (g) of σ_zz.
Figure 7. Bending of a beam, Simulation 6 (interpolation): absolute errors of different surrogate models (a–f) and ground truth Abaqus FEM simulation (g) of ε_yy^t.
Figure 8. Bending of a beam, Simulation 9 (extrapolation): absolute errors of different surrogate models (a–f) and ground truth Abaqus FEM simulation (g) of ε_xx^p.
Figure 9. Compression of a block, Simulation 7 (interpolation): absolute errors of different surrogate models (a–f) and ground truth Abaqus FEM simulation (g) of ε_xx^p.
Figure 10. Compression of a block, Simulation 13 (extrapolation): absolute errors of different surrogate models (a–f) and ground truth Abaqus FEM simulation (g) of σ_xy.
Table 1. Classic FEM use cases: overview of the three use cases, their main change and their types of deformation. In the first two use cases, only a single change is conducted, while in the last use case, a combination of changes is studied.

Use Case | Change | Deformation
Plate | Geometry | Elongation
Beam | Material Properties | Bending
Block | Geometry, Material Properties | Compression
Table 2. Dataset generation by executing several different simulations with varying generalization variables (Plate: perforation Diameter; Beam: Yield Stress; Block: Yield Stress and Width). Simulations marked with (i) or (e) are not in the training dataset and are only used for test and evaluation, where (i) denotes interpolation and (e) extrapolation tasks.

Plate
Simulation ID | 1 (e) | 2 | 3 | 4 (i) | 5 | 6 (i) | 7 | 8 | 9 (e)
Diameter [mm] | 60 | 70 | 80 | 90 | 100 | 110 | 120 | 130 | 140

Beam
Simulation ID | 1 (e) | 2 | 3 | 4 (i) | 5 | 6 (i) | 7 | 8 | 9 (e)
Yield Stress [MPa] | 850 | 900 | 950 | 1000 | 1050 | 1100 | 1150 | 1200 | 1250

Block
Simulation ID | 1 (e) | 2 (e) | 3 | 4 | 5 | 6 | 7 (i) | 8 | 9 | 10 | 11 | 12 (e) | 13 (e)
Yield Stress [MPa] | 750 | 750 | 900 | 900 | 900 | 1050 | 1050 | 1050 | 1200 | 1200 | 1200 | 1350 | 1350
Width [mm] | 180 | 260 | 200 | 220 | 240 | 200 | 220 | 240 | 200 | 220 | 240 | 180 | 260
Table 3. Abaqus FEM simulation control parameters.

Abaqus FEM Simulation Settings
Simulation type | Static, General
Time period | 1
Nlgeom | On
Max number of increments | 100
Initial increment size | 1
Min increment size | 1 × 10^-5
Max increment size | 1
Equation solver method | Direct
Solution technique | Full Newton
Table 4. Surrogate model input variables. Data obtained from FEM simulations are transformed so that each FEM node (represented by its x- and y-coordinates) with the respective generalization variable is an instance.

Simulation | Plate | Beam | Block
Input variables | x-coordinate | x-coordinate | x-coordinate
 | y-coordinate | y-coordinate | y-coordinate
 | Diameter | Yield Stress | Yield Stress
 | | | Width
Table 5. Surrogate model output variables. For each input FEM node, a surrogate model predicts its respective strains, stresses and displacements.

Output Variables
ε_xx^t | ε_xx^p | σ_xx | u
ε_xy^t | ε_xy^p | σ_xy | v
ε_yy^t | ε_yy^p | σ_yy |
 | ε_zz^p | σ_zz |
Table 6. Dataset splits: number of training instances n and test instances m resulting from the data generation of Table 2.

 | Plate | Beam | Block
Training dataset D (n) | 4447 | 2720 | 6722
Test dataset T (m) | 3534 | 2176 | 4107
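Taken together, Tables 4–6 define a conventional multi-output regression problem. The sketch below shows one plausible way to assemble it for the plate use case with scikit-learn; `sim_frames` is a hypothetical dictionary of per-simulation node tables exported from Abaqus, and the column names are ours.

```python
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor

INPUTS = ["x", "y", "diameter"]                    # plate inputs (Table 4)
OUTPUTS = ["eps_xx_t", "eps_xy_t", "eps_yy_t",     # 13 outputs (Table 5)
           "eps_xx_p", "eps_xy_p", "eps_yy_p", "eps_zz_p",
           "sig_xx", "sig_xy", "sig_yy", "sig_zz", "u", "v"]

# One DataFrame per simulation, one row per FEM node (hypothetical layout).
train = pd.concat(sim_frames[i] for i in [2, 3, 5, 7, 8])  # n = 4447 (Table 6)
test = pd.concat(sim_frames[i] for i in [1, 4, 6, 9])      # m = 3534 (Table 6)

knnr = KNeighborsRegressor().fit(train[INPUTS], train[OUTPUTS])
predictions = knnr.predict(test[INPUTS])                   # shape (m, 13)
```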
Table 7. Plate: averaged results; the highest R² per simulation marks the best-performing surrogate model. Values in parentheses are the corresponding standard deviations of the average R²-scores due to repeated experiments of stochastic models. For further information concerning the simulations, see Table 2.

Plate | MLP | PINN | SVR | GBDTR | KNNR | GPR | FEM
Simulation 1, R² | 0.9900 (6.155 × 10^-9) | 0.7797 (8.709 × 10^-2) | 0.6188 | 0.6606 (3.959 × 10^-8) | 0.8164 | 0.6131 | -
Simulation 1, inference time [s] | 0.0523 | 0.0746 | 1.15 | 0.311 | 0.00722 | 0.151 | 9.01
Simulation 4, R² | 0.9978 (1.970 × 10^-4) | 0.9089 (2.598 × 10^-2) | 0.7174 | 0.9014 (6.310 × 10^-2) | 0.9298 | 0.8761 | -
Simulation 4, inference time [s] | 0.0781 | 0.0638 | 1.20 | 0.271 | 0.00734 | 0.139 | 9.08
Simulation 6, R² | 0.9920 (1.889 × 10^-3) | 0.8470 (6.309 × 10^-2) | 0.7251 | 0.8503 (1.005 × 10^-1) | 0.9219 | 0.8676 | -
Simulation 6, inference time [s] | 0.0595 | 0.0641 | 1.10 | 0.251 | 0.00797 | 0.131 | 9.88
Simulation 9, R² | 0.9786 (2.970 × 10^-5) | 0.7562 (1.046 × 10^-1) | 0.6568 | 0.7263 (9.780 × 10^-9) | 0.8045 | 0.5651 | -
Simulation 9, inference time [s] | 0.0665 | 0.0715 | 1.02 | 0.251 | 0.00734 | 0.139 | 10.03
Table 8. Beam: averaged results; the highest R² per simulation marks the best-performing surrogate model. Values in parentheses are the corresponding standard deviations of the average R²-scores due to repeated experiments of stochastic models. For further information concerning the simulations, see Table 2.

Beam | MLP | PINN | SVR | GBDTR | KNNR | GPR | FEM
Simulation 1, R² | 0.6682 (7.345 × 10^-6) | 0.6165 (1.648 × 10^-2) | 0.5122 | 0.7120 (1.088 × 10^-8) | 0.6288 | 0.5377 | -
Simulation 1, inference time [s] | 0.0781 | 0.0638 | 1.20 | 0.271 | 0.00734 | 0.139 | 9.08
Simulation 4, R² | 0.9979 (1.319 × 10^-3) | 0.9379 (1.042 × 10^-3) | 0.7558 | 0.9640 (6.751 × 10^-4) | 0.9621 | 0.8243 | -
Simulation 4, inference time [s] | 0.0457 | 0.110 | 0.418 | 0.0541 | 0.0790 | 0.221 | 6.81
Simulation 6, R² | 0.9981 (8.315 × 10^-4) | 0.9314 (8.396 × 10^-4) | 0.7406 | 0.9516 (1.368 × 10^-4) | 0.9617 | 0.8059 | -
Simulation 6, inference time [s] | 0.0442 | 0.0668 | 0.402 | 0.0569 | 0.0705 | 0.212 | 6.61
Simulation 9, R² | -1196.2920 (6.166 × 10^3) | -1305.9226 (3.269 × 10^1) | -107.7646 | -830.8926 (4.2984) | -322.4940 | -420.9029 | -
Simulation 9, inference time [s] | 0.0781 | 0.0638 | 1.20 | 0.271 | 0.00734 | 0.139 | 9.08
Table 9. Block: averaged results; the highest R² per simulation marks the best-performing surrogate model. Values in parentheses are the corresponding standard deviations of the average R²-scores due to repeated experiments of stochastic models. For further information concerning the simulations, see Table 2.

Block | MLP | PINN | SVR | GBDTR | KNNR | GPR | FEM
Simulation 1, R² | 0.5562 (1.952 × 10^-1) | -0.4441 (6.066 × 10^-1) | -0.2463 | 0.5695 (1.665 × 10^-3) | 0.7808 | 0.1059 | -
Simulation 1, inference time [s] | 0.0781 | 0.0638 | 1.20 | 0.271 | 0.00734 | 0.139 | 9.08
Simulation 2, R² | 0.3768 (3.803 × 10^-1) | 0.1850 (5.531 × 10^-2) | -0.1800 | 0.5149 (3.320 × 10^-4) | 0.7366 | 0.1409 | -
Simulation 2, inference time [s] | 0.0457 | 0.110 | 0.418 | 0.0541 | 0.0790 | 0.221 | 6.81
Simulation 7, R² | 0.9976 (8.258 × 10^-5) | 0.9410 (4.310 × 10^-3) | 0.6415 | 0.9702 (1.884 × 10^-2) | 0.9767 | 0.5200 | -
Simulation 7, inference time [s] | 0.0442 | 0.0668 | 0.402 | 0.0569 | 0.0705 | 0.212 | 6.61
Simulation 12, R² | 0.6303 (3.204 × 10^-1) | -0.4480 (6.652 × 10^-1) | -0.2230 | 0.5702 (6.266 × 10^-4) | 0.7797 | 0.1553 | -
Simulation 12, inference time [s] | 0.0442 | 0.0668 | 0.402 | 0.0569 | 0.0705 | 0.212 | 6.61
Simulation 13, R² | 0.6475 (3.326 × 10^-1) | -0.1122 (3.265 × 10^-1) | -0.0745 | 0.5894 (1.579 × 10^-4) | 0.7687 | 0.1771 | -
Simulation 13, inference time [s] | 0.0781 | 0.0638 | 1.20 | 0.271 | 0.00734 | 0.139 | 9.08
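In principle, the aggregate entries of Tables 7–9 follow from averaging per-output R²-scores over the 13 output quantities of one held-out simulation and, for stochastic models, over repeated training runs. The snippet below is our hedged reading of that procedure; `trained_models`, `X_test` and `y_test` are placeholders, not objects from the paper.

```python
import numpy as np
from sklearn.metrics import r2_score

def mean_r2(model, X_test, y_test):
    """Average the coefficient of determination over all output columns."""
    per_output = r2_score(y_test, model.predict(X_test),
                          multioutput="raw_values")
    return per_output.mean()

# Repeated runs of a stochastic model yield the 'mean (std)' table entries.
scores = [mean_r2(m, X_test, y_test) for m in trained_models]
print(f"{np.mean(scores):.4f} ({np.std(scores):.3e})")
```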