A Neural Network Constitutive Model and Automatic Stiffness Evaluation for Multiscale Finite Elements
Abstract
1. Introduction
2. Background of the Constitutive Metamodel Based on Responses of Representative Volume Elements
3. Feedforward and Backpropagation in the Neural Model with a Computation of Tangential Stiffness Matrix
4. Computational Procedure
Computational Algorithm
- Import the necessary Python libraries: import numpy as np, import tensorflow as tf, from tensorflow import keras, import matplotlib.pyplot as plt, and import csv.
- Set up the input for the neural surrogate networks from the data set of the strain tensor. There are three inputs (the strain tensor components) for each NN, and each input is a vector of sampled values.
- Set up the output, and the training and test samples, for the neural networks, together with the data set of the stress tensor used for fitting while minimizing the error loss function.
- Set the number of training iterations (epochs) and the batch size for the neural networks.
- Read the data set from txt/csv files with the np.genfromtxt function (including the derivative data, if available). A part of the data set is used for training, and the remaining data are used for testing.
- Normalize the input and output data, if necessary (e.g., with min–max scaling).
- Create a class/function object in Python that performs automatic differentiation using the TensorFlow tf.GradientTape module.
- Set the number of neurons and layers for the NNs.
- Group the layers (input, hidden and output) and the neurons into an object with training/inference features for the surrogate net metamodels, each with three inputs and one output, based on the Keras modules tf.keras.Input (or tf.keras.layers.Input), tf.keras.layers.Dense and tf.keras.models.Model.
- Call the class/function for automatic differentiation defined above in the tf.GradientTape step.
- Define the inputs and outputs of the NNs using the module tf.keras.models.Model, passing the input tensors and the list of output tensors.
- Create the training and test data, including the training variables for the inputs (strain components) and for the output (stress component). In the case of the Sobolev loss function, the derivative (stiffness) data are also included if available.
- Choose an activation function. Here, the tanh function is used.
- Compile the neural models using the Keras module keras.models.Model.compile with the built-in Adam optimizer and the mean-squared-error loss.
- Train the neural metamodels using keras.models.Model.fit with the training input and output data (backpropagation). If hyperelastic principles are enforced, include the corresponding inequality constraint on the strain-energy function.
- Stop training if overfitting occurs (the validation loss starts to increase). In that case, add a penalty to the loss function for large weights, simplify the NN by reducing the number of layers or neurons, and check the data set, enlarging or correcting it if necessary; then return to the previous step.
- Map the normalized output results and the derivatives back to their true (physical) values.
- Plot graphs of the model accuracy, the model loss and the prediction results of the NN outputs using the plot function and the training history object, and save the obtained data with, e.g., np.savetxt() and the figures with plt.savefig().
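The steps above can be sketched in a compact TensorFlow/Keras script. This is a minimal illustration, not the paper's implementation: the data are synthetic (a linear stand-in stiffness matrix C replaces the RVE strain–stress database), and the layer sizes, epoch count and early-stopping patience are illustrative assumptions. The final block shows the tangential stiffness evaluation by automatic differentiation with tf.GradientTape, including the chain-rule correction that undoes the input normalization.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Synthetic data set: three strain inputs, one stress output per surrogate NN.
# C is an illustrative stand-in for the homogenized RVE response.
eps = rng.uniform(-0.01, 0.01, size=(200, 3)).astype("float32")
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 0.0],
              [0.0, 0.0, 1.5]], dtype="float32")
sig11 = (eps @ C.T)[:, :1]          # train one NN per stress component

# Min-max normalization of the inputs (inverted later via the chain rule).
lo, hi = eps.min(axis=0), eps.max(axis=0)
x = (eps - lo) / (hi - lo)

# Feedforward net with tanh activations: three inputs, one output.
inp = tf.keras.Input(shape=(3,))
h = tf.keras.layers.Dense(15, activation="tanh")(inp)
h = tf.keras.layers.Dense(15, activation="tanh")(h)
out = tf.keras.layers.Dense(1)(h)
model = tf.keras.Model(inputs=inp, outputs=out)
model.compile(optimizer="adam", loss="mse")

# Training (backpropagation); the validation split monitors overfitting, and
# early stopping implements the "stop when validation loss rises" rule.
stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=50,
                                        restore_best_weights=True)
hist = model.fit(x, sig11, epochs=200, batch_size=32,
                 validation_split=0.2, callbacks=[stop], verbose=0)

# Tangential stiffness row d(sigma_11)/d(eps) by automatic differentiation.
xt = tf.convert_to_tensor(x[:5])
with tf.GradientTape() as tape:
    tape.watch(xt)
    y = model(xt)
jac = tape.batch_jacobian(y, xt)            # shape (5, 1, 3)
# Chain rule: x = (eps - lo)/(hi - lo), so d/d(eps) = d/dx * 1/(hi - lo).
dsig_deps = jac.numpy()[:, 0, :] / (hi - lo)
print(dsig_deps[0])  # should move toward the first row of C as training converges
```

In a multiscale FE setting, the last block would be evaluated at each Gauss point to assemble the consistent tangent from the trained surrogates instead of re-solving the RVE problem.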
5. Numerical Results
5.1. Example 1
5.2. Example 2
5.3. Example 3
5.4. Example 4
6. Conclusions
Supplementary Materials
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
Number of Layers | Neurons | Epochs | Training Samples | MSE | Time (min.)
---|---|---|---|---|---
2 | [40,40] | 2000 | 42 | | 3
2 | [40,40] | 4000 | 42 | | 6
3 | [15,30,40] | 2000 | 42 | | 5
4 | [15,20,15,20] | 4000 | 84 | | 8
4 | [15,20,15,20] | 8000 | 84 | | 15.5
Number of Layers | Neurons | Epochs | Batch Size | MSE | Time (min.)
---|---|---|---|---|---
2 | [15,15] | 1000 | 64 | | 13.64
2 | [15,15] | 2000 | 64 | | 27.62
2 | [40,40] | 2000 | 64 | | 27.30
2 | [40,40] | 2000 | 84 | | 26.43
3 | [15,30,15] | 4000 | 84 | | 52.48
4 | [15,20,15,20] | 2000 | 84 | | 26.31
4 | [30,40,30,40] | 2000 | 84 | | 26.79
5 | [15,20,15,20,15] | 2000 | 84 | | 26.27
5 | [15,20,15,20,15] | 4000 | 84 | | 52.57
Neurons in Layers | Epochs | Train. Loss | Valid. Loss | MSE | MAE | MSEstiff | MAEstiff | Time (min.)
---|---|---|---|---|---|---|---|---
[50,50] | 2000 | 0.000661 | 0.0001885 | 0.006957 | | | | 18.51
[50,50] | 5000 | 0.001550 | 0.000164 | 0.006603 | | | | 44.89
[30,40,30,40] | 2000 | 0.000600 | 0.000740 | 0.013892 | | | | 21.82
[30,40,30,40] | 5000 | 0.000941 | 0.089927 | 0.135217 | | | | 49.69
[15,30,15,30,15] | 2000 | 0.001138 | 0.000648 | 0.013876 | | | | 24.63
[15,30,15,30,15] | 5000 | 0.000674 | 0.002472 | 0.026296 | | | | 60.52
Neurons in Layers | Epochs | Train. Loss | Valid. Loss | MSE | MAE | MSEstiff | MAEstiff | Time (min.)
---|---|---|---|---|---|---|---|---
[50,50] | 2000 | 0.002398 | 0.004027 | | | | | 27.04
[50,50] | 5000 | 0.002168 | 0.004628 | | | | | 66.73
[30,40,30,40] | 2000 | 0.002791 | 0.003114 | | | | | 26.94
[30,40,30,40] | 5000 | 0.001990 | 0.004675 | | | | | 66.50
[15,30,15,30,15] | 2000 | 0.002822 | 0.003749 | | | | | 26.53
[15,30,15,30,15] | 5000 | 0.001506 | 0.003678 | | | | | 66.68
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Mouratidou, A.D.; Stavroulakis, G.E. A Neural Network Constitutive Model and Automatic Stiffness Evaluation for Multiscale Finite Elements. Appl. Sci. 2025, 15, 3697. https://doi.org/10.3390/app15073697