A Physics-Informed Neural Network (PINN) Approach to Over-Equilibrium Dynamics in Conservatively Perturbed Linear Equilibrium Systems
Abstract
1. Introduction
2. Methodology
2.1. Conservatively Perturbed Equilibrium (CPE)
2.2. Physics-Informed Neural Network (PINN)
Training Strategy and Optimization
3. CPE-PINN Integration
- Physical initialization: The equilibrium composition is computed from the kinetic matrix M as its normalized null vector. A valid CPE perturbation, one whose components sum to zero so that the conservation law is preserved, is then applied to construct the initial state.
- Neural approximation and differentiation: The neural network generates continuous concentration predictions for any time t; automatic differentiation supplies the exact time derivatives of those predictions.
- Physics-consistent optimization: The total loss combines the differential (ODE-residual), invariance-preservation, and equilibrium-anchoring terms. Minimizing this loss drives the network toward full compliance with both the kinetic law and the CPE constraints.
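As a concrete illustration of the physical-initialization step, the sketch below computes the equilibrium as the normalized null vector of a kinetic matrix and applies a zero-sum CPE perturbation. The mechanism and rate constants here are our own illustrative choices, not the values used in the paper.

```python
import numpy as np

# Illustrative 3-species chain A <-> B <-> C; these rate constants are
# assumptions for this sketch, not the paper's values.
k1f, k1r, k2f, k2r = 1.0, 2.0, 3.0, 1.0
M = np.array([
    [-k1f,         k1r,          0.0],
    [ k1f, -(k1r + k2f),         k2r],
    [ 0.0,         k2f,         -k2r],
])

# Each column of M sums to zero, so total concentration is conserved
# and M has a null vector: the equilibrium composition.
w, V = np.linalg.eig(M)
c_eq = np.real(V[:, np.argmin(np.abs(w))])
c_eq = c_eq / c_eq.sum()  # normalize to unit total concentration

# CPE perturbation: its components sum to zero, preserving the
# conservation law, so the perturbed initial state keeps the same
# total concentration as the equilibrium.
delta = np.array([0.1, -0.1, 0.0])
c0 = c_eq + delta
```

For this chain, detailed balance gives the equilibrium ratio (1, k1f/k1r, k1f·k2f/(k1r·k2r)) up to normalization, which the null-vector computation reproduces numerically.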
4. Results and Discussion
4.1. Analysis of Perturbed Species in a Three-Species Acyclic Mechanism
4.2. Analysis of Unperturbed Species in a Three-Species Cyclic Mechanism

4.3. Analysis of Unperturbed Species in a Four-Species Acyclic Mechanism
The experimental results of this case are summarized in Table 3.

4.4. Analysis of Unperturbed Species in a Four-Species Cyclic Mechanism

4.5. Application of PINN to Higher-Order Linear CPE Systems
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Appendix A
Appendix A.1
| Figure | System Type | Species | Epochs | Collocation Points | Cons. Weight | Special Features |
|---|---|---|---|---|---|---|
| Figure 3a–c | 3-species cyclic | A, B, C | 800 | 1000 | 0.5 | No tail anchor used |
| Figure 3d | 3-species cyclic | A, B, C | 1000 | 1000 | 0.5 | No tail anchor used |
| Figure 4 | 3-species cyclic | A, B, C | 2000 | 1200 | 0.5 | No tail anchor used |
| Figure 5a | 4-species acyclic | A, B, C, D | 2000 | 2000 | 5.0 | Higher conservation penalty; asymmetric |
| Figure 5b | 4-species acyclic | A, B, C, D | 2000 | 2000 | 0.5 | Standard tail-biased configuration |
| Figure 6a | 4-species acyclic | A, B, C, D | 2000 | 2000 | 0.5 | Tail-biased collocation |
| Figure 6b | 4-species acyclic | A, B, C, D | 2000 | 2000 | 0.5 | Tail-biased collocation |
| Figure 6c | 4-species acyclic | A, B, C, D | 2000 | 2000 | 0.5 | Tail-biased collocation |
| Figure 6d | 4-species acyclic | A, B, C, D | 2500 | 2200 | 0.5 (1.6× for C) | Gradient clipping; species-weighted physics |
| Figure 7a,b | 4-species cyclic | A, B, C, D | 2000 | 2000 | 0.5 | Markers every 60 points for visualization |
Appendix A.2
| Component | Setting | Notes |
|---|---|---|
| Network | 5-layer MLP, 128 neurons/layer | GELU activation, linear output |
| Input/Output | 1 input (t)/3 or 4 outputs (species) | Matches problem dimensionality |
| Parameterization | Satisfies initial condition by construction | |
| Optimizer | Adam | Exponential LR decay every 1000 epochs |
| Epochs | 2000–2500 | Longer for stiff or cyclic cases |
| Collocation Points | 2000–2200 per epoch | 50% uniform, 50% tail-biased (Beta(3,1)) |
| Loss Terms | Physics (1.0), Conservation (0.5–5.0), Tail Anchor (2.0) | Adjustable per case |
| Precision | Double (float64) | Ensures stability and accuracy |
| Validation | LSODA (SciPy) with rtol = atol = 1 × 10⁻⁹ | High-precision reference integration |
| Gradient Handling | Clipping (1.0) in some cases | Used when numerical instability arises |
| Device | CUDA if available | Defaults to CPU otherwise |
| PINN Component | Configuration | Notes |
|---|---|---|
| Architecture | | |
| Network Type | Feedforward MLP | Sequential MLP |
| Input Dimension | 1 | Time (t) |
| Output Dimension | 3 or 4 | Species conc. (A, B, C) or (A, B, C, D) |
| Hidden Layers | 5 | Excl. input and output layers |
| Neurons per Layer | 128 | Uniform across hidden layers |
| Activation Function | GELU | Gaussian Error Linear Unit |
| Output Activation | None (linear) | Direct concentration output |
| Parameterization | | |
| Concentration Form | Initial condition satisfied automatically | |
| Neural Network Output | Learned time-dependent correction | |
| Training | | |
| Optimizer | Adam | Adaptive moment estimation |
| Learning Rate | Initial lr | |
| LR Scheduler | ExponentialLR | Multiplicative decay |
| Scheduler Gamma | Applied every 1000 epochs (typ.) | |
| Epochs | 2000–2500 | 2000 standard; 2500 for Figure 6d |
| Batch Size | 2000–2200 | Collocation points per epoch |
| Collocation Strategy | | |
| Base Sampling | Uniform random | 50% of points (60% in Figure 6d) |
| Tail-Biased Sampling | Beta(3,1) | Dense near the end of the time domain |
| Endpoint Inclusion | Optional | Domain endpoints may be included |
| Loss Function | | |
| Physics Loss | ODE residual | |
| Conservation Loss | Mass balance constraint | |
| Tail Anchor Loss | MSE at late-time points | Stabilizes late-time behavior |
| Physics Weight | 1.0 (base weight) | |
| Conservation Weight | 0.5–5.0 | 0.5 standard; 5.0 for Figure 5a; 1.6× for C in Figure 6d |
| Tail Anchor Weight | 2.0 | Applied to 256 late-time points |
| Gradient Management | | |
| Gradient Clipping | 1.0 (norm) | Applied in Figure 6d only |
| Autograd Method | torch.autograd.grad | Per-species derivative computation |
| Gradient Graph | create_graph=True | For higher-order derivatives |
| retain_graph | True | Keep graph for multiple species |
| Numerical Precision | | |
| Data Type | torch.float64 | Double precision |
| Device | CUDA (if available) | Falls back to CPU |
| Time Domain | | |
| Interval | System-dependent | (0, 1–8) s (typ.) |
| Evaluation Points | 600–1200 | For plotting and validation |
| Validation | | |
| Reference Method | scipy.integrate.solve_ivp | LSODA method |
| ODE Tolerances | rtol = 1 × 10⁻⁹, atol = 1 × 10⁻⁹ | High-precision validation |
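The collocation strategy above (half uniform, half Beta(3,1) tail-biased) can be sketched as follows; the function name, seed, and t_max are our own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def sample_collocation(n_points, t_max, tail_fraction=0.5):
    """Draw collocation times on (0, t_max): a uniform share plus a
    Beta(3,1)-distributed share that clusters near t_max (the 'tail')."""
    n_tail = int(n_points * tail_fraction)
    t_uniform = rng.uniform(0.0, t_max, n_points - n_tail)
    # Beta(3,1) has density 3x^2 on (0, 1): samples concentrate near 1,
    # so scaling by t_max biases points toward the end of the domain.
    t_tail = t_max * rng.beta(3.0, 1.0, n_tail)
    return np.concatenate([t_uniform, t_tail])

t_colloc = sample_collocation(2000, t_max=8.0)
```

The tail bias shifts the sample mean above the uniform expectation of t_max/2, which is the intended effect: more residual evaluations where late-time behavior must be pinned down.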
References
- Yablonsky, G.; Branco, P.; Marin, G.; Constales, D. Conservatively Perturbed Equilibrium (CPE) in Chemical Kinetics. Chem. Eng. Sci. 2019, 196, 384–390.
- Xi, Y.; Liu, X.; Constales, D.; Yablonsky, G. Perturbed and Unperturbed: Analyzing the Conservatively Perturbed Equilibrium (Linear Case). Entropy 2020, 22, 1160.
- Peng, B.; Zhu, X.; Constales, D.; Yablonsky, G. Experimental Verification of Conservatively Perturbed Equilibrium for a Complex Non-Linear Chemical Reaction. Chem. Eng. Sci. 2021, 229, 116008.
- Trishch, V.; Yablonsky, G.; Constales, D.; Beznosyk, Y. Conservatively Perturbed Equilibrium in Multi-Route Catalytic Reactions. J. Non-Equilib. Thermodyn. 2023, 48, 229–241.
- Trishch, V.; Beznosyk, Y.; Constales, D.; Yablonsky, G. Over-Equilibrium as a Result of Conservatively-Perturbed Equilibrium (Acyclic and Cyclic Mechanisms). J. Non-Equilib. Thermodyn. 2022, 47, 103–110.
- Raissi, M.; Perdikaris, P.; Karniadakis, G. Physics-Informed Neural Networks: A Deep Learning Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations. J. Comput. Phys. 2019, 378, 686–707.
- Karniadakis, G.; Kevrekidis, I.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-informed Machine Learning. Nat. Rev. Phys. 2021, 3, 422–440.
- Bibeau, V.; Boffito, D.; Blais, B. Physics-Informed Neural Network to Predict Kinetics of Biodiesel Production in Microwave Reactors. Chem. Eng. Process. Process Intensif. 2024, 196, 109652.
- Ji, W.; Qiu, W.; Shi, Z.; Pan, S.; Deng, S. Stiff-PINN: Physics-Informed Neural Network for Stiff Chemical Kinetics. J. Phys. Chem. A 2021, 125, 8098–8106.
- Weng, Y.; Zhou, D. Multiscale Physics-Informed Neural Networks for Stiff Chemical Kinetics. J. Phys. Chem. A 2022, 126, 8534–8543.
- De Florio, M.; Schiassi, E.; Furfaro, R. Physics-informed neural networks and functional interpolation for stiff chemical kinetics. Chaos: Interdiscip. J. Nonlinear Sci. 2022, 32.
- Wang, S.; Yu, X.; Perdikaris, P. When and Why PINNs Fail to Train: A Neural Tangent Kernel Perspective. J. Comput. Phys. 2022, 449, 110768.
- Wu, Z.; Wang, H.; He, C.; Zhang, B.; Xu, T.; Chen, Q. The application of physics-informed machine learning in multiphysics modeling in chemical engineering. Ind. Eng. Chem. Res. 2023, 62, 18178–18204.
- Asrav, T.; Aydin, E. Physics-informed recurrent neural networks and hyper-parameter optimization for dynamic process systems. Comput. Chem. Eng. 2023, 173, 108195.
- Turan, M.; Dutta, A. A novel conceptual framework for droplet/particle size distribution in suspension polymerization using Physics-Informed Neural Network (PINN). Chem. Eng. J. 2025, 519, 164977.
- Nair, S.; Walsh, T.F.; Pickrell, G.; Semperlotti, F. Multiple scattering simulation via physics-informed neural networks. Eng. Comput. 2025, 41, 31–50.
- Yu, J.; Lu, L.; Meng, X.; Karniadakis, G.E. Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems. Comput. Methods Appl. Mech. Eng. 2022, 393, 114823.
- Wang, Y.; Yao, Y.; Gao, Z. An extrapolation-driven network architecture for physics-informed deep learning. Neural Netw. 2025, 183, 106998.
- Li, R.; Wang, J.X.; Lee, E.; Luo, T. Physics-informed deep learning for solving phonon Boltzmann transport equation with large temperature non-equilibrium. NPJ Comput. Mater. 2022, 8, 29.
- Zheng, Y.; Hu, C.; Wang, X.; Wu, Z. Physics-informed recurrent neural network modeling for predictive control of nonlinear processes. J. Process Control 2023, 128, 103005.
- Ngo, S.I.; Lim, Y.I. Solution and parameter identification of a fixed-bed reactor model for catalytic CO2 methanation using physics-informed neural networks. Catalysts 2021, 11, 1304.
- Koksal, E.S.; Asrav, T.; Esenboga, E.E.; Cosgun, A.; Kusoglu, G.; Aydin, E. Physics-informed and data-driven modeling of an industrial wastewater treatment plant with actual validation. Comput. Chem. Eng. 2024, 189, 108801.
- Sachs, J.; Bui, M.; McCarthy, J.E.; Yablonsky, G. Conservatively perturbed equilibrium and perturbation: Linear case. Chem. Eng. J. 2025, 510, 161284.








| Experiment Settings | Experiment #1 | Experiment #2 |
|---|---|---|
| Kinetic Parameters (s⁻¹): | | |
| Perturbed species: | A, B | A, B |
| Unperturbed species: | C | C |
| Experiment Settings | Value |
|---|---|
| Kinetic parameters (s⁻¹): | |
| Perturbed species: | A, B |
| Unperturbed species: | C |
| Experimental Settings | Value |
|---|---|
| Kinetic parameters (s⁻¹): | |
| Perturbed species: | A, D |
| Unperturbed species: | B, C |
| Experimental Settings | Values |
|---|---|
| Kinetic parameters (s⁻¹): | |
| Experiment | Perturbed Species | Unperturbed Species | Behavior |
|---|---|---|---|
| 1 | A, B | C, D | 2 extrema of [C], 1 of [D] |
| 2 | A, C | B, D | 1 extremum of [B], 1 of [D] |
| 3 | A, D | B, C | 2 extrema of [C], 1 of [B] |
| 4 | B, C | A, D | 1 extremum of [A], 1 of [D] |
| 5 | B, D | A, C | 1 extremum of [A], 1 of [C] |
| 6 | C, D | A, B | 1 extremum of [A], 1 of [B] |
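The extrema counts above depend on the paper's specific rate constants, which are not reproduced here. As an illustrative check with our own symmetric rate constants, the closed-form solution of the linear system shows the over-equilibrium signature: species left unperturbed at t = 0 pass through interior extrema before relaxing back to equilibrium.

```python
import numpy as np

# Hypothetical symmetric 4-species chain A <-> B <-> C <-> D with unit
# rate constants (illustrative; not the paper's experimental values).
M = np.array([
    [-1.0,  1.0,  0.0,  0.0],
    [ 1.0, -2.0,  1.0,  0.0],
    [ 0.0,  1.0, -2.0,  1.0],
    [ 0.0,  0.0,  1.0, -1.0],
])

# Equilibrium (normalized null vector); perturb only A and D by equal
# and opposite amounts, leaving B and C initially at equilibrium.
w, V = np.linalg.eig(M)
c_eq = np.real(V[:, np.argmin(np.abs(w))])
c_eq = c_eq / c_eq.sum()
c0 = c_eq + np.array([0.1, 0.0, 0.0, -0.1])

# Exact solution of the linear ODE via the eigendecomposition:
# c(t) = V diag(exp(w t)) V^{-1} c0.
t = np.linspace(0.0, 10.0, 2001)
coeffs = np.linalg.solve(V, c0)
C = np.real(V @ (np.exp(np.outer(w, t)) * coeffs[:, None]))

def count_extrema(x):
    """Interior extrema = sign changes of the discrete derivative."""
    d = np.diff(x)
    return int(np.sum(np.sign(d[:-1]) * np.sign(d[1:]) < 0))

n_B, n_C = count_extrema(C[1]), count_extrema(C[2])
```

With these symmetric constants each initially-unperturbed species shows a single interior extremum; asymmetric rate constants, as in the experiments above, can produce two.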
| Experiment Settings | Value |
|---|---|
| Kinetic parameters (s⁻¹): | |
| Perturbed species: | A, D |
| Unperturbed species: | B, C |
| Category | Parameter | Value |
|---|---|---|
| Network | Architecture | 5 hidden layers × 128 neurons (MLP) |
| | Activation | GELU |
| | Parameterization | Initial condition satisfied by construction |
| Training | Optimizer | Adam |
| | Scheduler | ExponentialLR (decay every 1000 epochs) |
| | Epochs | 2000 (standard) / 2500 |
| | Collocation Points | 2000–2200 per epoch |
| Sampling | Strategy | 50% uniform + 50% Beta(3,1) tail-biased |
| Loss | Physics | 1.0 |
| | Conservation | 0.5–5.0 |
| | Tail Anchor | 2.0 [256 pts] |
| Precision | dtype | torch.float64 |
| Validation | Method | solve_ivp (LSODA, tol = 1 × 10⁻⁹) |
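The LSODA validation run can be reproduced along these lines; the kinetic matrix and initial state below are illustrative placeholders, not one of the paper's systems.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative 3-species kinetic matrix: each column sums to zero,
# so total concentration is conserved along trajectories.
M = np.array([[-1.0,  2.0,  0.0],
              [ 1.0, -5.0,  1.0],
              [ 0.0,  3.0, -1.0]])
c0 = np.array([0.5, 0.2, 0.3])  # assumed initial state, total = 1

# High-precision reference integration with LSODA at tight tolerances,
# mirroring the validation settings in the table above.
sol = solve_ivp(lambda t, c: M @ c, (0.0, 8.0), c0,
                method="LSODA", rtol=1e-9, atol=1e-9,
                dense_output=True)

# Mass conservation check on the solver's own time grid.
mass = sol.y.sum(axis=0)
```

The dense_output flag gives a continuous interpolant, convenient for comparing against PINN predictions at the same evaluation points.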
| Figure | System | Epochs | Collocation Points | Cons. Weight | Modifications |
|---|---|---|---|---|---|
| Figure 3a–c | 3-sp cyclic | 800 | 1000 | 0.5 | Standard |
| Figure 3d | 3-sp cyclic | 1000 | 1000 | 0.5 | Standard |
| Figure 4 | 3-sp cyclic | 2000 | 1200 | 0.5 | No tail anchor |
| Figure 5a | 4-sp acyclic | 2000 | 2000 | 5.0 | High conservation weight |
| Figure 5b and Figure 6a–c | 4-sp acyclic/cyclic | 2000 | 2000 | 0.5 | Standard |
| Figure 6d | 4-sp acyclic | 2500 | 2200 | 0.5 | +Grad clip (1.0), 1.6× weight for C |
| Figure 7a,b | 4-sp cyclic | 2000 | 2000 | 0.5 | +Tail anchor (2.0), Tail-biased collocation markers |
| Metric | PINN | ODE (RK45, Adaptive) | ODE (BDF, Adaptive) | Notes |
|---|---|---|---|---|
| 3-Species Acyclic System | | | | |
| Error | | | | Comparable accuracy |
| Mass conservation error | < | < | < | All satisfy physical constraints |
| Adaptive steps taken | 2000 (fixed) | 127 | 145 | RK45 refines during transient |
| Wall-clock time (single eval) | <1 ms | 95 ms | 112 ms | ODE solver slower for single-shot inference |
| 3-Species Cyclic System | | | | |
| Error | | | | Comparable accuracy |
| Mass conservation error | < | < | < | All satisfy conservation |
| Adaptive steps taken | 2000 (fixed) | 156 | 168 | Higher stiffness detected |
| Wall-clock time (single eval) | <1 ms | 115 ms | 128 ms | ODE slower; BDF handles stiffness |
| 4-Species Acyclic System | | | | |
| Error | | | | Comparable; slight BDF advantage |
| Mass conservation error | < | < | < | All maintain physical validity |
| Adaptive steps taken | 2000 (fixed) | 203 | 218 | Increased complexity; more steps |
| Wall-clock time (single eval) | <1 ms | 142 ms | 155 ms | ODE cost increases with system size |
| 4-Species Cyclic System | | | | |
| Error | | | | BDF superior in cyclic/stiff case |
| Mass conservation error | < | < | < | All conserve mass effectively |
| Adaptive steps taken | 2000 (fixed) | 187 | 211 | Highest stiffness; BDF more efficient |
| Wall-clock time (single eval) | <1 ms | 138 ms | 149 ms | Comparable to RK45 despite more steps |
| Metric | PINN | ODE Solver (Adaptive) | Advantage |
|---|---|---|---|
| Training time | 5–10 min | 0 | ODE |
| Single evaluation | <1 ms | 50–200 ms | PINN (50–200×) |
| Break-even queries | ≈4000 | – | PINN |
| Residual accuracy | – | – | None |
| Storage size | <5 MB | Model code | PINN |
| Inference accuracy | Marginal extrapolation loss | Stable and bounded | ODE |
| Application fit | Inverse problems, Monte Carlo sampling | Single-shot evaluation | Domain-dependent |
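The break-even figure of ≈4000 queries is consistent with a simple amortization estimate; the mid-range timings below are our own assumed values within the table's stated ranges.

```python
# Break-even query count: training cost amortized over per-query savings.
# Mid-range values assumed from the comparison above.
training_time_ms = 5 * 60 * 1000   # 5 min of PINN training, in ms
ode_eval_ms = 76.0                 # a mid-range adaptive-solver evaluation
pinn_eval_ms = 1.0                 # PINN inference, upper bound (<1 ms)

break_even = training_time_ms / (ode_eval_ms - pinn_eval_ms)
print(round(break_even))  # prints 4000 with these assumed values
```

Beyond roughly this many queries (e.g., Monte Carlo sampling or inverse-problem sweeps), the one-off training cost is repaid by the faster per-query inference.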
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Dutta, A.; Mukherjee, B.; Hosen, S.A.; Turan, M.; Constales, D.; Yablonsky, G. A Physics-Informed Neural Network (PINN) Approach to Over-Equilibrium Dynamics in Conservatively Perturbed Linear Equilibrium Systems. Entropy 2026, 28, 9. https://doi.org/10.3390/e28010009

