# Artificial Neural Networks-Based Material Parameter Identification for Numerical Simulations of Additively Manufactured Parts by Material Extrusion


## Abstract


## 1. Introduction

## 2. Materials and Methods

#### 2.1. Material Parameter Identification

#### 2.1.1. Iterative Optimization Procedure

- The success of optimization algorithms depends strongly on the chosen starting point, which is usually not known
- Many iterations are needed to find appropriate parameters for complex material models, which leads to high computational costs
- A high number of error function evaluations is required
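
The iterative procedure outlined above can be sketched as a loop that minimizes the error between a simulated and an experimental force-displacement curve (FDC). The following is a minimal illustration only: the closed-form `simulate_fdc` is a hypothetical stand-in for the LS-DYNA finite element run, and the parameter values are made up; the actual error function and solver settings are assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the FE simulation: maps four material parameters
# (a, b, c, d) to a force-displacement curve. The real procedure would call
# LS-DYNA here, which is what makes each error evaluation expensive.
def simulate_fdc(params, displacements):
    a, b, c, d = params
    return a * displacements + b * displacements**2 + c * np.tanh(d * displacements)

disp = np.linspace(0.0, 1.0, 50)
true_params = np.array([52.5, -4.0, 11.2, 5.4])
fdc_exp = simulate_fdc(true_params, disp)  # "experimental" target curve

# Error function: MSE between simulated and experimental FDC
def mse(params):
    return np.mean((simulate_fdc(params, disp) - fdc_exp) ** 2)

# Gradient-free optimization; the result depends strongly on this start point
start = np.array([50.0, -3.5, 10.0, 5.0])
res = minimize(mse, start, method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-10, "fatol": 1e-12})
```

Each call to `mse` triggers one full simulation, so the number of error function evaluations directly drives the computational cost.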

#### 2.1.2. Direct Neural Network-Based Procedure

#### 2.2. Artificial Neural Networks

#### 2.3. Data Generation by Numerical Simulations

#### 2.4. Additive Manufacturing and Mechanical Characterization of Test Specimens—Experimental Set-Up

Tensile tests were performed on a testing machine (GmbH, Darmstadt, Germany) equipped with a 10 kN load cell and an extensometer for strain measurement. The testing speed was set to 1 mm per minute. Figure 5 shows the stress-strain curves of the additively manufactured test specimens. Both Young’s modulus (2055.33 MPa) and maximum tensile stress (30.70 MPa) exhibit rather low standard deviations of 155.94 MPa and 0.64 MPa, respectively, while the elongation at break shows larger scattering. The experimental data are therefore well suited to the investigations in this paper, since the elongation at break is not considered at first. Higher strains would lead to difficulties in the numerical simulation, especially in combination with the custom yield curve. Since this is not the focus of the investigation, and the custom formulation of the yield curve only serves to describe the existing material behavior without damage and other effects, the experimental results of test specimen 5 were selected as the data basis for further investigations. This specimen shows representative material behavior but has a lower elongation at break than most of the other specimens.
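
The characterization above reduces each specimen to a modulus and a scatter statistic. A minimal sketch of that evaluation, using entirely hypothetical stress-strain data (the paper's real values come from the tensile tests described above):

```python
import numpy as np

# Hypothetical stress-strain data for one specimen; units: strain (-), stress (MPa)
strain = np.linspace(0.0, 0.01, 100)
stress = 2055.0 * strain  # idealized linear-elastic response

# Young's modulus as the slope of a linear fit over the initial elastic region
mask = strain <= 0.005
E = np.polyfit(strain[mask], stress[mask], 1)[0]

# Scatter across specimens is summarized by mean and sample standard deviation,
# analogous to the reported modulus of 2055.33 MPa with 155.94 MPa deviation
moduli = np.array([2100.0, 1950.0, 2055.0, 2200.0, 1970.0])  # hypothetical values
mean_E, std_E = moduli.mean(), moduli.std(ddof=1)
```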

## 3. Results and Discussion

#### 3.1. Iterative Optimization Results

#### 3.2. FFANN Results

#### 3.2.1. Sampling Strategies

#### 3.2.2. Quantity of Input Data

#### 3.2.3. Input Data Range

#### 3.2.4. Validation Set Sizes

#### 3.2.5. Data Modification with Gaussian Noise

#### 3.2.6. Hyperparameters

#### Gridsearch

#### Randomsearch

#### Hyperparameter Optimization

- Epochs fixed at the default value of 250
- Neurons from 50 to 500 in steps of 25
- All GD optimizers except SGD
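
The search space defined by the list above can be enumerated explicitly. This sketch only builds the configuration grid; the optimizer names are taken from the candidate table in Appendix A (all GD optimizers except SGD), and the dictionary layout is an assumption for illustration:

```python
from itertools import product

epochs = 250  # fixed default
neurons = list(range(50, 501, 25))  # 50, 75, ..., 500
optimizers = ["Adam", "Rmsprop", "Adagrad", "Adadelta", "Adamax", "Nadam"]

# Full grid of (neurons, optimizer) configurations, each trained for 250 epochs
grid = [{"neurons": n, "optimizer": o, "epochs": epochs}
        for n, o in product(neurons, optimizers)]
```

With 19 neuron counts and 6 optimizers, the grid contains 114 configurations to train and evaluate.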

#### 3.3. Comparison of Iterative and Direct Inverse Procedures

## 4. Conclusions and Outlook

## Author Contributions

## Funding

## Conflicts of Interest

## Abbreviations

- ABS: Acrylonitrile butadiene styrene
- AM: Additive Manufacturing
- ANN: Artificial Neural Network
- EXP: Experiment
- FE: Finite Element
- FFA: Full Factorial Approach
- FFANN: Feedforward Artificial Neural Network
- GD: Gradient Descent
- HL: Hidden Layer
- HP: Hyperparameter
- IL: Input Layer
- LHS: Latin Hypercube Sampling
- LHSG: Latin Hypercube Sampling with genetic space filling
- ML: Machine Learning
- NN: Neural Network
- OL: Output Layer
- PI: Parameter Identification
- SGD: Stochastic Gradient Descent
- TPE: Tree-of-Parzen-Estimators

## Appendix A

Run | Sampling Strategy | Train Set Size | Val. Set Size | Train Range | Val. Range | NN | Noisy Data | Mean MSE Parameter Val. Set (−) | Mean MSE FDC Val. Set (N²) | MSE FDC Exp. (N²)
---|---|---|---|---|---|---|---|---|---|---
1 | FFA | 10,000 | 2401 | 1 | 2 | Default | No | 0.0382 | 46.2 | 129.0
2 | LHS | 10,000 | 2401 | 1 | 2 | Default | No | 0.0215 | 27.7 | 140.7
3 | FFA | 4096 | 1296 | 1 | 2 | Default | No | 0.0354 | 25.5 | 126.1
4 | LHS | 4096 | 1296 | 1 | 2 | Default | No | 0.0288 | 31.4 | 125.5
5 | FFA | 2401 | 625 | 1 | 2 | Default | No | 0.0469 | 32.9 | 122.4
6 | LHS | 2401 | 625 | 1 | 2 | Default | No | 0.0332 | 31.4 | 144.4
7 | FFA | 625 | 256 | 1 | 2 | Default | No | 0.0609 | 45.3 | 138.6
8 | LHS | 625 | 256 | 1 | 2 | Default | No | 0.0445 | 44.1 | 168.3
9 | LHSG | 625 | 256 | 1 | 2 | Default | No | 0.0417 | 47.3 | 157.1
10 | FFA | 256 | 81 | 1 | 2 | Default | No | 0.0972 | 626.7 | 780.5
11 | LHS | 256 | 81 | 1 | 2 | Default | No | 0.0489 | 91.4 | 199.5
12 | LHSG | 256 | 81 | 1 | 2 | Default | No | 0.0512 | 68.2 | 162.8
13 | LHS | 150 | 50 | 1 | 2 | Default | No | 0.0521 | 90.1 | 169.1
14 | LHSG | 150 | 50 | 1 | 2 | Default | No | 0.0491 | 85.7 | 154.4
15 | LHS | 10,000 | 2401 | 1 | 1 | Default | No | 0.0271 | 21.3 | 134.8
16 | LHS | 10,000 | 2401 | 2 | 1 | Default | No | 0.0357 | 52.7 | 198.0
17 | LHS | 10,000 | 2401 | 3 | 1 | Default | No | 0.0782 | 248.9 | 320.6
18 | LHS | 10,000 | 2401 | 4 | 1 | Default | No | 0.2009 | 355.5 | 128.2
19 | LHS | 10,000 | 5000 | 1 | 1 | Default | No | 0.0304 | 26.0 | 178.2
20 | LHS | 10,000 | 10,000 | 1 | 1 | Default | No | 0.0289 | 25.3 | 127.9
21 | FFA | 10,000 | 2401 | 1 | 2 | Default | Yes | 0.0317 | 19.7 | 224.4
22 | LHS | 10,000 | 2401 | 1 | 2 | Default | Yes | 0.0227 | 26.3 | 149.7
23 | LHS | 625 | 256 | 1 | 2 | Default | Yes | 0.0463 | 121.6 | 251.9
24 | FFA | 10,000 | 2401 | 1 | 2 | HPO1 | No | 0.0254 | 7.5 | 149.3
25 | FFA | 10,000 | 2401 | 1 | 2 | HPO2 | No | 0.0241 | 6.4 | 150.1
26 | FFA | 10,000 | 2401 | 1 | 2 | HPO3 | No | 0.0257 | 6.4 | 142.9
27 | FFA | 10,000 | 2401 | 1 | 2 | HPO4 | No | 0.0390 | 28.5 | 142.9
28 | LHS | 10,000 | 2401 | 1 | 2 | HPO5 | No | 0.0185 | 14.8 | 155.5
29 | LHS | 10,000 | 2401 | 1 | 2 | HPO6 | No | 0.0204 | 13.5 | 155.3
30 | LHS | 10,000 | 2401 | 1 | 2 | HPO7 | No | 0.0188 | 6.6 | 140.2
31 | LHS | 10,000 | 2401 | 1 | 2 | HPO8 | No | 0.0194 | 8.4 | 150.3
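
Runs 21–23 in the table above use training data modified with Gaussian noise. A minimal sketch of that kind of augmentation, with hypothetical FDC arrays standing in for the simulated curves (the noise level of 1% of the signal spread is an assumption, not the paper's setting):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical training FDCs: one row per parameter set, one column per
# displacement step (stand-ins for the simulated force-displacement curves)
fdc_train = rng.uniform(0.0, 500.0, size=(625, 100))

# Data modification with Gaussian noise: perturb each force value with
# zero-mean noise whose spread is a small fraction of the signal spread
sigma = 0.01 * fdc_train.std()
fdc_noisy = fdc_train + rng.normal(0.0, sigma, size=fdc_train.shape)
```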

Parameter Range | $a_{Min}$ | $a_{Max}$ | $b_{Min}$ | $b_{Max}$ | $c_{Min}$ | $c_{Max}$ | $d_{Min}$ | $d_{Max}$
---|---|---|---|---|---|---|---|---
1 | 50,000 | 54,000 | −4060 | −3980 | 10.500 | 12.800 | 510.00 | 555.00
2 | 50,200 | 53,800 | −4056 | −3984 | 10.615 | 12.685 | 512.25 | 552.75
3 | 50,600 | 53,400 | −4048 | −3992 | 10.845 | 12.455 | 516.75 | 548.25
4 | 51,000 | 53,000 | −4040 | −4000 | 11.075 | 12.225 | 521.25 | 543.75
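
Sampling parameter sets from these ranges with LHS can be sketched with `scipy.stats.qmc` (assuming SciPy ≥ 1.7). The bounds below are parameter range 1 from the table above; the sample size of 625 matches one of the training set sizes studied:

```python
import numpy as np
from scipy.stats import qmc

# Bounds of parameter range 1: (a, b, c, d)
l_bounds = [50_000.0, -4060.0, 10.500, 510.00]
u_bounds = [54_000.0, -3980.0, 12.800, 555.00]

# Latin Hypercube Sampling in the 4D unit cube, then scaled to the bounds
sampler = qmc.LatinHypercube(d=4, seed=0)
unit = sampler.random(n=625)
params = qmc.scale(unit, l_bounds, u_bounds)
```

Each row of `params` is one candidate material parameter set, to be fed into a simulation that generates the corresponding labeled FDC.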

(Hyper-)Parameter | Setting
---|---
Batch Size | 12
Epochs | 250
Neurons (Input Layer) | 35
Hidden Layers | 1
Neurons (HL) | 250
Kernel Initializer (HL) | Normal
Activation (HL) | Relu
Dropout (HL) | 0.05
Neurons (Output Layer) | 4
Kernel Initializer (OL) | Normal
Activation (OL) | Linear
Gradient Descent Optimizer | Adam
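
The default architecture above (35 inputs, one hidden layer of 250 ReLU neurons, 4 linear outputs) can be sketched structurally as a plain numpy forward pass. This is only an illustration of the layer shapes and activations; the paper's networks are trained with Keras, and the initializer scale of 0.05 here is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" kernel initializers for hidden and output layers (scale assumed)
W1 = rng.normal(0.0, 0.05, size=(35, 250))
b1 = np.zeros(250)
W2 = rng.normal(0.0, 0.05, size=(250, 4))
b2 = np.zeros(4)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU activation in the hidden layer
    return h @ W2 + b2                # linear activation in the output layer

batch = rng.random((12, 35))  # one batch of 12 input feature vectors
pred = forward(batch)         # four predicted material parameters per sample
```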

N | Batch Size | Epochs | Neurons (HL) | Kernel Initializer (HL) | Activation (HL) | Dropout (HL) | Kernel Initializer (OL) | Optimizer
---|---|---|---|---|---|---|---|---
1 | 3 | 75 | 25 | Normal * | Softmax | 0.00 | Normal * | Adam *
2 | 6 | 100 | 50 | Uniform | Softplus | 0.05 * | Uniform | Rmsprop
3 | 9 | 125 | 75 | Glorot Uniform | Softsign | 0.10 | Glorot Uniform | SGD
4 | 12 * | 150 | 100 | Lecun Uniform | Relu * | 0.15 | Lecun Uniform | Adagrad
5 | 15 | 175 | 125 | Zero | Tanh | 0.20 | Zero | Adadelta
6 | 18 | 200 | 150 | Glorot Normal | Sigmoid | 0.25 | Glorot Normal | Adamax
7 | 21 | 225 | 175 | He Normal | Hard Sigmoid | 0.30 | He Normal | Nadam
8 | 24 | 250 * | 200 | He Uniform | Linear | 0.35 | He Uniform | −
9 | 27 | − | 225 | − | Selu | 0.40 | − | −
10 | 30 | − | 250 * | − | Elu | 0.45 | − | −
11 | 33 | − | 275 | − | Exponential | 0.50 | − | −
12 | 36 | − | 300 | − | − | − | − | −
13 | 39 | − | 325 | − | − | − | − | −
14 | − | − | 350 | − | − | − | − | −
15 | − | − | 375 | − | − | − | − | −
16 | − | − | 400 | − | − | − | − | −
17 | − | − | 425 | − | − | − | − | −
18 | − | − | 450 | − | − | − | − | −
19 | − | − | 475 | − | − | − | − | −
20 | − | − | 500 | − | − | − | − | −

N | Batch Size | Epochs | Neurons (HL) | Kernel Initializer (HL) | Activation (HL) | Dropout (HL) | Kernel Initializer (OL) | GD Optimizer
---|---|---|---|---|---|---|---|---
1 | 0.0380 | 0.0402 | 0.0323 | 0.0304 | 0.0406 | 0.0253 | 0.0322 | 0.0329
2 | 0.0328 | 0.0393 | 0.0320 | 0.0310 | 0.0375 | 0.0321 | 0.0324 | 0.0364
3 | 0.0343 | 0.0379 | 0.0329 | 0.0324 | 0.0301 | 0.0338 | 0.0312 | 0.0447
4 | 0.0308 | 0.0350 | 0.0341 | 0.0333 | 0.0314 | 0.0426 | 0.0332 | 0.0616
5 | 0.0317 | 0.0303 | 0.0291 | 0.0834 | 0.0389 | 0.0438 | 0.0318 | 0.0708
6 | 0.0288 | 0.0327 | 0.0312 | 0.0301 | 0.0379 | 0.0487 | 0.0354 | 0.0265
7 | 0.0302 | 0.0309 | 0.0309 | 0.0277 | 0.0259 | 0.0490 | 0.0328 | 0.0281
8 | 0.0279 | 0.0324 | 0.0296 | 0.0287 | 0.0314 | 0.0562 | 0.0306 | −
9 | 0.0285 | − | 0.0319 | − | 0.0289 | 0.0579 | − | −
10 | 0.0273 | − | 0.0312 | − | 0.0320 | 0.0564 | − | −
11 | 0.0330 | − | 0.0316 | − | 0.0394 | 0.0582 | − | −
12 | 0.0278 | − | 0.0302 | − | − | − | − | −
13 | 0.0281 | − | 0.0295 | − | − | − | − | −
14 | − | − | 0.0340 | − | − | − | − | −
15 | − | − | 0.0295 | − | − | − | − | −
16 | − | − | 0.0302 | − | − | − | − | −
17 | − | − | 0.0310 | − | − | − | − | −
18 | − | − | 0.0335 | − | − | − | − | −
19 | − | − | 0.0301 | − | − | − | − | −
20 | − | − | 0.0296 | − | − | − | − | −

Batch Size | Epochs | Neurons (HL) | Kernel Initializer (HL) | Activation (HL) | Dropout (HL) | Kernel Initializer (OL) | GD Optimizer | Mean MSE Parameter Train Set (−)
---|---|---|---|---|---|---|---|---
36 | 250 | 325 | He Normal | Selu | 0.05 | Glorot Normal | Nadam | 0.0317
21 | 125 | 300 | He Normal | Softsign | 0.25 | Glorot Uniform | Adamax | 0.0321
30 | 150 | 400 | He Uniform | Tanh | 0.15 | He Uniform | Adamax | 0.0322
18 | 75 | 450 | He Normal | Selu | 0.40 | Glorot Normal | Adamax | 0.0324
24 | 250 | 450 | Uniform | Softsign | 0.35 | He Normal | Nadam | 0.0329
21 | 100 | 250 | Zero | Softmax | 0.45 | Lecun Uniform | Adadelta | 0.0834
39 | 150 | 25 | Lecun Uniform | Exponential | 0.45 | Normal | Adadelta | 0.0844
15 | 150 | 75 | Lecun Uniform | Softmax | 0.05 | Lecun Uniform | Adadelta | 0.0920
33 | 250 | 450 | Glorot Uniform | Softsign | 0.05 | Glorot Normal | Nadam | 0.1009
15 | 150 | 350 | Zero | Relu | 0.10 | Normal | Adadelta | 0.1536

**Figure A1.** Learning curves with optimizers (**a**) Adam, (**b**) Rmsprop, (**c**) SGD, (**d**) Adagrad, (**e**) Adadelta and (**f**) Adamax.

(Hyper-)Parameter | HPO1 | HPO2 | HPO3 | HPO4 | HPO5 | HPO6 | HPO7 | HPO8
---|---|---|---|---|---|---|---|---
Batch Size | 27 | 12 | 39 | 3 | 27 | 12 | 27 | 30
Epochs * | 250 | 250 | 250 | 250 | 250 | 250 | 250 | 250
Neurons (IL) * | 35 | 35 | 35 | 35 | 35 | 35 | 35 | 35
Number of Hidden Layers | 2 | 2 | 1 | 2 | 2 | 2 | 1 | 1
Neurons (HL1) | 350 | 50 | 400 | 250 | 350 | 50 | 200 | 450
Kernel Initializer (HL1) | He Uniform | He Uniform | Lecun Uniform | He Uniform | He Uniform | He Uniform | He Uniform | Lecun Uniform
Activation (HL1) | Hard Sigmoid | Hard Sigmoid | Hard Sigmoid | Softsign | Hard Sigmoid | Hard Sigmoid | Hard Sigmoid | Selu
Dropout (HL1) | 0.1 | 0.0 | 0.0 | 0.35 | 0.1 | 0.0 | 0.0 | 0.05
Neurons (HL2) | 250 | 300 | − | 350 | 250 | 300 | − | −
Kernel Initializer (HL2) * | Normal | Normal | Normal | Normal | Normal | Normal | Normal | Normal
Activation (HL2) * | Linear | Linear | Linear | Linear | Linear | Linear | Linear | Linear
Dropout (HL2) * | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05 | 0.05
Neurons (OL) * | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4
Kernel Initializer (OL) | He Uniform | Normal | He Uniform | He Normal | He Uniform | Normal | Glorot Uniform | Zero
Activation (OL) * | Linear | Linear | Linear | Linear | Linear | Linear | Linear | Linear
GD Optimizer | Adamax | Adam | Adam | Adagrad | Adamax | Adam | Adam | Adamax
HP Optimization Library | Keras Tuner | Keras Tuner | Keras Tuner | Hyperas | Keras Tuner | Keras Tuner | Keras Tuner | Hyperas
HP Optimizer | Randomsearch | Hyperband | Bayesian | Bayesian TPE | Randomsearch | Hyperband | Bayesian | Bayesian TPE
Max Trials * | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100
Executions per Trial * | 2 | 2 | 2 | 2 | 2 | 2 | 2 | 2


**Figure 1.** Parameter identification process to obtain the four material parameters for the custom yield function of MAT024 with LS-OPT.

**Figure 2.** Workflow of the used direct NN-based parameter identification procedure applied to the problem of identifying the material parameters of the custom yield curve formulation for MAT024.

**Figure 6.** FDC from the parameter identification process with LS-OPT at (**a**) iteration 1 and (**b**) iteration 4 (SECFORC1: LS-DYNA output for the force; NODOUT1: for the displacement).

**Figure 7.** Sampling strategy and quantity of input data: comparison of MSE of material parameters and FDC from (**a**) validation data and (**b**) experimental data.

**Figure 9.** Input data ranges: comparison of MSE of material parameters and FDC from (**a**) validation data and (**b**) experimental data.

**Figure 10.** Comparison of MSE of material parameters and FDC from (**a**) validation data and (**b**) experimental data with different validation set sizes.

**Figure 11.** Learning curves for different variations of parameter ranges and training set/validation set ratios: (**a**) Run_2, (**b**) Run_15, (**c**) Run_17 and (**d**) Run_20.

**Figure 12.** Comparison of MSE of material parameters and FDC from (**a**) validation data and (**b**) experimental data with noisy training data.

**Figure 16.** Comparison of MSE of FDC from (**a**) validation data and (**b**) experimental data for different NNs from HP optimization.

**Figure 17.** Comparison of calculated FDCs with parameters from the iterative and NN-based processes for (**a**) Run_5, (**b**) Run_10, (**c**) Run_31 and (**d**) one randomly chosen labeled parameter set and the corresponding predicted parameter set of Run_26.

Material | Color | Extrusion Speed | Build Platform Temperature | Nozzle Temperature | Layer Thickness | Raster Angle | Perimeter Shells
---|---|---|---|---|---|---|---
ABS | black | 40 mm/s | 100 °C | 245 °C | 0.2 mm | ±45° | 2

**Table 2.** Identified material parameters using LS-OPT for the simulation of the experimental specimen and the corresponding FDC MSE.

Parameter a | Parameter b | Parameter c | Parameter d | MSE FDC
---|---|---|---|---
52,514 | −4056.63 | 11.231 | 539.29 | 163.14 N²

Run | Mean MSE Parameter Val. Set (−) | Mean MSE FDC Val. Set (N²) | MSE FDC Exp. (N²)
---|---|---|---
5 | 0.0469 | 32.9 | 122.4
25 | 0.0241 | 6.4 | 150.1
26 | 0.0257 | 6.4 | 142.9

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Meißner, P.; Watschke, H.; Winter, J.; Vietor, T. Artificial Neural Networks-Based Material Parameter Identification for Numerical Simulations of Additively Manufactured Parts by Material Extrusion. *Polymers* **2020**, *12*, 2949.
https://doi.org/10.3390/polym12122949
