Evolutionary PINN Learning Algorithms Inspired by Approximation to Pareto Front for Solving Ill-Posed Problems
Abstract
1. Introduction
2. Materials and Methods
2.1. Advantages of Physics-Informed Neural Network Modelling
2.2. Problem Statement
2.3. Evolutionary Algorithms Based on Pareto Front: General Idea
3. Computational Experiments and Results
3.1. Benchmark Problems Statement
3.2. Algorithms Description
- Termination condition: The method terminates after a predefined number of neurons, N, is reached in the PINN model. Different values of N are considered to investigate the impact of the total number of neurons on the obtained solutions.
- Initial Population: A population of networks is initialised, each built from neurons with radial basis activation functions of a given form.
- Pareto Mutation: To select individuals from the incoming population, the following procedure is employed. During the initial mutation, the discrete quadratic errors in satisfying the Laplace equation and the boundary conditions are calculated for each initialised network; these errors are then summed to derive a "global" penalty multiplier. During the mutation process, the current networks are trained independently by minimising the corresponding loss functions with the Rprop algorithm [45], regenerating the collocation points for the Laplace equation every 5 training epochs. The total number of training epochs can be selected separately; for the last mutation, a separate total number of training epochs is employed. The values considered in the experiments are presented in Table 1. Regarding the second selection procedure outlined in the Pareto mutation, all mutated networks are preserved in all runs except for the final one. As previously mentioned, collocation points are placed on the boundary. Initially, they are split into two sets: the first set is involved in training the networks, while the second is employed in the selection process. Except for the first mutation, all mutations use the first set of points for training the PINNs.
- Crossover: The present study investigates an algorithm that employs crossover within the current population. The Pareto-optimal set for the current PINN solutions is constructed from the discrete quadratic errors in satisfying the Laplace equation and the boundary conditions. Next, neurons are selected from the individuals located at the leftmost and rightmost ends of the current Pareto-optimal set and are added, one by one, to all other networks, multiplied by the coefficient that is optimal in the least-squares sense. From the resulting new individuals, the one with the minimum error is selected for each value of the penalty multiplier. A scheme of the crossover procedure is presented in Figure 3.
- A note on the set of penalty multipliers: In the previous paragraphs, we introduced two different sets of penalty multipliers. The general scheme of the algorithm allows new selections of such sets within each mutation and crossover. In this study, we consider two options: in the first, the set of penalty multipliers is fixed during the initialisation of the first population of networks; in the second, the penalty parameter is recalculated during each crossover. Sketches of the main building blocks follow this list.
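Because the equation defining the radial basis functions did not survive in this section, the sketch below assumes the common Gaussian form; the class name, parameter ranges, and function names are illustrative assumptions rather than the authors' code, and a generic gradient method can stand in for Rprop [45]. It shows how the two objectives of the Pareto construction, the discrete quadratic residual of the Laplace equation at interior collocation points and the quadratic error on the boundary points, could be evaluated for one candidate network; the point regeneration and the Rprop updates themselves are not shown.

```python
import numpy as np

class GaussianRBFNet:
    """One candidate PINN (assumed form):
    u(x, y) = sum_i w_i * exp(-a_i * ((x - cx_i)^2 + (y - cy_i)^2))."""

    def __init__(self, n_neurons, rng):
        self.w = rng.normal(scale=0.1, size=n_neurons)   # output weights
        self.a = rng.uniform(0.5, 5.0, n_neurons)        # widths (kept positive); range is illustrative
        self.cx = rng.uniform(0.0, 1.0, n_neurons)       # centre x-coordinates
        self.cy = rng.uniform(0.0, 1.0, n_neurons)       # centre y-coordinates

    def _phi(self, x, y):
        # Neuron values; shape (n_points, n_neurons).
        r2 = (x[:, None] - self.cx) ** 2 + (y[:, None] - self.cy) ** 2
        return np.exp(-self.a * r2), r2

    def u(self, x, y):
        phi, _ = self._phi(x, y)
        return phi @ self.w

    def laplacian(self, x, y):
        # Closed form for a Gaussian neuron: Delta phi = (4 a^2 r^2 - 4 a) * phi.
        phi, r2 = self._phi(x, y)
        return ((4.0 * self.a ** 2 * r2 - 4.0 * self.a) * phi) @ self.w


def losses(net, x_in, y_in, x_bc, y_bc, g_bc):
    """The two objectives: discrete quadratic errors for the PDE and the boundary."""
    loss_eq = np.mean(net.laplacian(x_in, y_in) ** 2)    # Laplace equation: Delta u = 0
    loss_bc = np.mean((net.u(x_bc, y_bc) - g_bc) ** 2)   # Dirichlet boundary data g
    return loss_eq, loss_bc


def scalarised_loss(net, x_in, y_in, x_bc, y_bc, g_bc, delta):
    """Single-objective loss with penalty multiplier delta, minimised during a mutation."""
    le, lb = losses(net, x_in, y_in, x_bc, y_bc, g_bc)
    return le + delta * lb
```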
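The selection and crossover steps can be sketched in the same spirit, reusing the definitions above. The first function keeps the non-dominated networks with respect to the pair (PDE error, boundary error); the second computes the least-squares-optimal weight with which a donor neuron, taken from an extreme individual of the front, is added to another network. The closed form is a hypothetical reading that minimises the scalarised loss for a given penalty multiplier `delta`: both loss terms are quadratic in the coefficient, so the minimiser is explicit.

```python
def pareto_front(population, objective_pairs):
    """Return the non-dominated networks w.r.t. (loss_eq, loss_bc)."""
    front = []
    for i, (ei, bi) in enumerate(objective_pairs):
        dominated = any(
            ej <= ei and bj <= bi and (ej < ei or bj < bi)
            for j, (ej, bj) in enumerate(objective_pairs)
            if j != i
        )
        if not dominated:
            front.append(population[i])
    return front


def ls_coefficient(net, a, cx, cy, x_in, y_in, x_bc, y_bc, g_bc, delta):
    """Optimal weight c for grafting one donor Gaussian neuron (a, cx, cy) onto `net`,
    minimising loss_eq(u + c*phi) + delta * loss_bc(u + c*phi)."""
    r2_bc = (x_bc - cx) ** 2 + (y_bc - cy) ** 2
    phi_bc = np.exp(-a * r2_bc)                                  # donor values on the boundary
    r2_in = (x_in - cx) ** 2 + (y_in - cy) ** 2
    lap_in = (4 * a ** 2 * r2_in - 4 * a) * np.exp(-a * r2_in)   # donor Laplacian inside

    res_in = net.laplacian(x_in, y_in)    # current PDE residual
    res_bc = net.u(x_bc, y_bc) - g_bc     # current boundary residual

    # Setting d/dc of the quadratic scalarised loss to zero gives:
    num = -(np.mean(lap_in * res_in) + delta * np.mean(phi_bc * res_bc))
    den = np.mean(lap_in ** 2) + delta * np.mean(phi_bc ** 2)
    return num / den
```

In the scheme described above, donor neurons come from the leftmost and rightmost individuals of the current Pareto-optimal set, and for each penalty multiplier the offspring with the minimum error is kept.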
3.3. Results: Problem (10)+(11)
Algorithm 1: Evolutionary PINN Learning Algorithms Inspired by Approximation to Pareto Front
3.4. Results: Problem (10)+(12)
4. Discussion
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
PINN: physics-informed neural network
References
- Haykin, S. Neural Networks: A Comprehensive Foundation; Prentice Hall: Hoboken, NJ, USA, 1999; 842p.
- Slowik, A.; Kwasnicka, H. Evolutionary algorithms and their applications to engineering problems. Neural Comput. Appl. 2020, 32, 12363–12379.
- Maier, H.R.; Razavi, S.; Kapelan, Z.; Matott, L.S.; Kasprzyk, J.; Tolson, B.A. Introductory overview: Optimization using evolutionary algorithms and other metaheuristics. Environ. Model. Softw. 2019, 114, 195–213.
- Yu, X.; Gen, M. Introduction to Evolutionary Algorithms; Springer Science & Business Media: London, UK, 2010.
- Tian, Y.; Zhang, X.; Wang, C.; Jin, Y. An Evolutionary Algorithm for Large-Scale Sparse Multiobjective Optimization Problems. IEEE Trans. Evolut. Comput. 2020, 24, 2380–2393.
- Sirignano, J.; Spiliopoulos, K. DGM: A deep learning algorithm for solving partial differential equations. J. Comput. Phys. 2018, 375, 1339–1364.
- Dissanayake, M.W.M.G.; Phan-Thien, N. Neural-network-based approximations for solving partial differential equations. Commun. Numer. Methods Eng. 1994, 10, 195–201.
- Lagaris, I.; Likas, A.; Fotiadis, D. Artificial neural networks for solving ordinary and partial differential equations. IEEE Trans. Neural Netw. 1998, 9, 987–1000.
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
- Elsken, T.; Metzen, J.H.; Hutter, F. Efficient Multi-Objective Neural Architecture Search via Lamarckian Evolution. arXiv 2018, arXiv:1804.09081.
- Vasilyev, A.N.; Tarkhov, D.A. Mathematical models of complex systems on the basis of artificial neural networks. Nonlinear Phenom. Complex Syst. 2014, 17, 327–335.
- Tarkhov, D.A. Neural networks as a means of mathematical modeling. Neurocomput. Dev. Appl. 2006, 2, 1–49.
- Tarkhov, D.; Vasilyev, A.N. Semi-Empirical Neural Network Modeling and Digital Twins Development; Academic Press: Cambridge, MA, USA, 2019.
- Mirjalili, S.; Mirjalili, S.M.; Lewis, A. Grey Wolf Optimizer. Adv. Eng. Softw. 2014, 69, 46–61.
- Eberhart, R.; Kennedy, J. Particle swarm optimization. In Proceedings of the ICNN’95—International Conference on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; Volume 4, pp. 1942–1948.
- Stanley, K.; Clune, J.; Lehman, J.; Miikkulainen, R. Designing neural networks through neuroevolution. Nat. Mach. Intell. 2019, 1, 24–35.
- Soltoggio, A.; Stanley, K.O.; Risi, S. Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks. Neural Netw. 2018, 108, 48–67.
- Xue, Y.; Tong, Y.; Neri, F. An ensemble of differential evolution and Adam for training feed-forward neural networks. Inf. Sci. 2022, 608, 453–471.
- Kaveh, A.; Khavaninzadeh, N. Efficient training of two ANNs using four meta-heuristic algorithms for predicting the FRP strength. Structures 2023, 52, 256–272.
- Basir, S.; Senocak, I. Physics and Equality Constrained Artificial Neural Networks: Application to Partial Differential Equations. arXiv 2021, arXiv:2109.14860.
- Zobeiry, N.; Humfeld, K.D. A Physics-Informed Machine Learning Approach for Solving Heat Transfer Equation in Advanced Manufacturing and Engineering Applications. Eng. Appl. Artif. Intell. 2021, 101, 104232.
- Rao, C.; Sun, H.; Liu, Y. Physics-informed deep learning for incompressible laminar flows. arXiv 2020, arXiv:2002.10558.
- Huang, Y.; Zhang, Z.; Zhang, X. A Direct-Forcing Immersed Boundary Method for Incompressible Flows Based on Physics-Informed Neural Network. Fluids 2022, 7, 56.
- Wang, H.; Liu, Y.; Wang, S. Dense velocity reconstruction from particle image velocimetry/particle tracking velocimetry using a physics-informed neural network. Phys. Fluids 2022, 34, 017116.
- Bischof, R.; Kraus, M. Multi-objective loss balancing for physics-informed deep learning. arXiv 2021, arXiv:2110.09813.
- Du, K.J.; Li, J.Y.; Wang, H.; Zhang, J. Multi-objective multi-criteria evolutionary algorithm for multi-objective multi-task optimization. Complex Intell. Syst. 2022, 9, 1211–1228.
- Coello, C.A.C.; Lamont, G.B.; Van Veldhuizen, D.A. Evolutionary Algorithms for Solving Multi-Objective Problems, 2nd ed.; Springer: New York, NY, USA, 2007.
- Liu, Q.; Jin, Y.; Heiderich, M.; Rodemann, T.; Yu, G. An Adaptive Reference Vector-Guided Evolutionary Algorithm Using Growing Neural Gas for Many-Objective Optimization of Irregular Problems. IEEE Trans. Cybern. 2020, 52, 2698–2711.
- Lin, X.; Zhen, H.; Li, Z.; Zhang, Q.; Kwong, S.T. Pareto Multi-Task Learning. arXiv 2019, arXiv:1912.12854.
- Momma, M.; Dong, C.; Liu, J. A Multi-objective/Multi-task Learning Framework Induced by Pareto Stationarity. In Proceedings of the 39th International Conference on Machine Learning (PMLR), Baltimore, MD, USA, 17–23 July 2022; Volume 162, pp. 15895–15907.
- Lu, B.; Moya, C.; Lin, G. NSGA-PINN: A Multi-Objective Optimization Method for Physics-Informed Neural Network Training. Algorithms 2023, 16, 194.
- Xue, Y.; Wang, Y.; Liang, J. A self-adaptive gradient descent search algorithm for fully-connected neural networks. Neurocomputing 2022, 478, 70–80.
- Daw, A.; Bu, J.; Wang, S.; Perdikaris, P.; Karpatne, A. Rethinking the importance of sampling in physics-informed neural networks. arXiv 2022, arXiv:2207.02338.
- Cauteruccio, F.; Stamile, C.; Terracina, G.; Ursino, D.; Sappey-Marinier, D. An automated string-based approach to White Matter fiber-bundles clustering. In Proceedings of the 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–17 July 2015.
- He, Q.; Barajas-Solano, D.; Tartakovsky, G.; Tartakovsky, A.M. Physics-informed neural networks for multiphysics data assimilation with application to subsurface transport. Adv. Water Resour. 2020, 141, 103610.
- Viguerie, A.; Lorenzo, G.; Auricchio, F.; Baroli, D.; Hughes, T.J.R.; Patton, A.; Reali, A.; Yankeelov, T.E.; Veneziani, A. Simulating the spread of COVID-19 via a spatially-resolved susceptible–exposed–infected–recovered–deceased (SEIRD) model with heterogeneous diffusion. Appl. Math. Lett. 2021, 111, 106617.
- Calimeri, F.; Cauteruccio, F.; Cinelli, L.; Marzullo, A.; Stamile, C.; Terracina, G.; Durand-Dubief, F.; Sappey-Marinier, D. A logic-based framework leveraging neural networks for studying the evolution of neurological disorders. Theory Pract. Logic Programm. 2021, 21, 80–124.
- Trogdon, T.; Biondini, G. Evolution partial differential equations with discontinuous data. Quart. Appl. Math. 2019, 77, 689–726.
- Mitchell, S.L.; Vynnycky, M. An accuracy-preserving numerical scheme for parabolic partial differential equations subject to discontinuities in boundary conditions. Appl. Math. Comput. 2021, 400, 125979.
- Araújo, A.; Martins, F.; Vélez, W.; Portela, A. Automatic mesh-free boundary analysis: Multi-objective optimization. Eng. Anal. Bound. Elem. 2021, 125, 264–279.
- Lazovskaya, T.; Tarkhov, D. Fresh approaches to the construction of parameterized neural network solutions of a stiff differential equation. St. Petersburg Polytech. Univ. J. Phys. Math. 2015, 1, 192–198.
- Tarkhov, D.; Vasilyev, A. New neural network technique to the numerical solution of mathematical physics problems. II: Complicated and nonstandard problems. Opt. Mem. Neural Netw. (Inf. Opt.) 2005, 14, 97–122.
- Lazovskaya, T.; Tarkhov, D.; Dudnik, A.; Koksharova, E.; Mochalova, O.; Muranov, D.; Pozhvanyuk, K.; Sysoeva, A. Investigation of Pareto Front of Neural Network Approximation of Solution of Laplace Equation in Two Statements: With Discontinuous Initial Conditions or with Measurement Data. In Advances in Neural Computation, Machine Learning, and Cognitive Research VI. NEUROINFORMATICS 2022; Kryzhanovsky, B., Dunin-Barkowski, W., Redko, V., Tiumentsev, Y., Eds.; Studies in Computational Intelligence; Springer: Cham, Switzerland, 2023; Volume 1064.
- Satrio, P.; Mahlia, T.M.I.; Giannetti, N.; Saito, K. Optimization of HVAC system energy consumption in a building using artificial neural network and multi-objective genetic algorithm. Sustain. Energy Technol. Assess. 2019, 35, 48–57.
- Riedmiller, M.; Braun, H. A direct adaptive method for faster backpropagation learning: The Rprop algorithm. In Proceedings of the IEEE International Conference on Neural Networks, San Francisco, CA, USA, 28 March–1 April 1993; pp. 586–591.
Table 1. Parameter values considered in the experiments.

| N | | | | |
|---|---|---|---|---|
| 9 | 9 | 10, 20, 30 | 10, 50, 100 | 400, 800 |
Table 2. Parameter values considered in the experiments.

| N | | | | | |
|---|---|---|---|---|---|
| 9 | 9 | 10, 20, 30 | 30 | 16, 64 | 0.01, 0.1 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).