A Vorticity-Enhanced Physics-Informed Neural Network with Logarithmic Reynolds Embedding
Abstract
1. Introduction
- (1) Regarding training efficiency and robustness: Ko and Park (2025) proposed VS-PINN, a variable-scaling technique that significantly improves training efficiency and accuracy for stiff and nonlinear problems by analyzing the eigenvalues of the neural tangent kernel (NTK) [10]. Łoś et al. (2025) developed the Collocation-based Robust Variational Physics-Informed Neural Network (CRVPINN), which enhances model stability through a robust loss function and applies to a broad class of partial differential equations [11]. Song et al. (2024) introduced LA-PINN, which uses a loss-attentional network to assign different weights to the errors at different training points, effectively improving convergence speed and approximation accuracy in hard-to-fit regions [12].
- (2) Regarding complex flow-field feature capture: Li et al. (2024) improved the classical PINN by incorporating Long Short-Term Memory (LSTM) structures and attention mechanisms, together with an L1 regularization penalty [13]. This effectively suppresses spurious oscillations and over-smoothing near rarefaction waves and shock waves, significantly reducing oscillation relaxation. Their improved algorithm offers higher resolution and better captures the details of complex phenomena.
- (3) Regarding model architecture and parameterization: Miao and Chen (2023) proposed VC-PINN (Variable Coefficient Physics-Informed Neural Network), a deep learning method designed for forward and inverse problems of partial differential equations with variable coefficients [14]. It extends the standard PINN with a branch network that approximates the variable coefficients, hard constraints on those coefficients, and a ResNet structure without additional parameters that unifies linear and nonlinear coefficients while mitigating the vanishing-gradient problem [14]. Wang and Zhong (2024) employed a neural-architecture-search-guided method, NAS-PINN, to automatically discover optimal neural architectures for a given partial differential equation; they validated its adaptability to irregular computational domains and high-dimensional problems and demonstrated numerically that more hidden layers do not necessarily lead to better performance [15]. Wandel et al. (2021) proposed Spline-PINN, which approximates the incompressible Navier–Stokes equations and damped wave equations by training continuous Hermite-spline CNNs using only a physics-informed loss function [16]. In addition, the PINNs-Torch package developed by Bafghi and Raissi (2023) leverages the PyTorch 2.0 framework (https://github.com/rezaakb/pinns-torch, accessed on 4 June 2025) to significantly enhance the implementation speed and usability of PINNs by compiling dynamic computational graphs into static ones [17]. Noorizadegan et al. (2024) improved PINN reliability and accuracy for PDE problems by modifying the network architecture to enhance gradient flow and representation capacity, leading to more stable training [18]. This architecture-centered line of work is closely related to our objective but follows a different technical route.
2. Theoretical Foundations and Method Formulation
2.1. Problem Definition and Parametric Setting
2.2. Governing Equations for Incompressible Flow
2.3. Neural Representation and Logarithmic Reynolds Embedding
2.4. PINN Loss Function Framework
2.4.1. PDE Residual Loss
2.4.2. Boundary Condition Loss (Separated Lid/Wall Terms)
2.4.3. Data Loss (Supervised Reference Samples)
2.5. Vorticity-Enhanced Constraints (VE-PINN)
2.5.1. Vorticity Definition and Transport Residual
2.5.2. Full VE-PINN Objective
2.5.3. Training Objective
- (1) Supervised reference samples
- (2) Boundary samples
- (3) Interior collocation samples
- (4) Interior vorticity samples
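The four sample sets above can be sketched as follows. This is a minimal NumPy illustration on the unit lid-driven cavity; the point counts, the lid/wall split, and the function name are illustrative choices, not the paper's exact sampling scheme.

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_training_points(n_int=1000, n_bnd=200, n_data=100, n_vort=500):
    """Draw the four point sets used by the composite objective on the
    unit lid-driven cavity [0, 1] x [0, 1]."""
    # (1) Supervised reference samples: interior locations where CFD
    #     reference velocities would be attached.
    x_data = rng.uniform(0.0, 1.0, size=(n_data, 2))

    # (2) Boundary samples: split between the moving lid (y = 1) and the
    #     three stationary walls, matching the separated lid/wall terms.
    n_lid = n_bnd // 4
    lid = np.column_stack([rng.uniform(0.0, 1.0, n_lid), np.ones(n_lid)])
    walls = []
    for _ in range(n_bnd - n_lid):
        side = rng.integers(3)          # 0: bottom, 1: left, 2: right
        t = rng.uniform()
        walls.append([(t, 0.0), (0.0, t), (1.0, t)][side])
    x_bnd = np.vstack([lid, np.array(walls)])

    # (3) Interior collocation samples for the PDE residual.
    x_int = rng.uniform(0.0, 1.0, size=(n_int, 2))

    # (4) Interior vorticity samples for the transport residual.
    x_vort = rng.uniform(0.0, 1.0, size=(n_vort, 2))
    return x_data, x_bnd, x_int, x_vort
```

In practice the four sets are evaluated by different loss terms of the same network, so only their locations differ, not the network inputs' format.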
3. Computational Conditions
3.1. Problem Definition and Domain Setup
3.2. Reynolds-Number Parameterization and Training Range
3.3. Training Samples: Supervised Data, Collocation Points, and Boundary Points
- (1) Supervised CFD samples (for L_data)
- (2) Interior collocation points (for L_PDE and L_ω)
- (3) Boundary points (for L_BC)
- BL-PINN (baseline): a PINN trained with the composite loss L_PDE + L_BC + L_data.
- VE-PINN (proposed): the baseline augmented with the vorticity transport constraint L_ω, using the logarithmic Reynolds embedding as the default input parameterization.
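The distinction between the two objectives can be expressed compactly. The sketch below assumes a plain weighted sum of mean-squared residuals; the weight names (w_data, w_bc, w_vort) and the function name are placeholders, not the paper's exact notation or weighting scheme.

```python
import numpy as np


def composite_loss(res_pde, res_bc, res_data, res_vort=None,
                   w_data=1.0, w_bc=1.0, w_vort=1.0):
    """Weighted sum-of-mean-squares objective.

    Passing res_vort=None gives the baseline (BL-PINN) loss; supplying
    the vorticity-transport residual adds the VE-PINN term.
    """
    loss = (np.mean(res_pde ** 2)
            + w_bc * np.mean(res_bc ** 2)
            + w_data * np.mean(res_data ** 2))
    if res_vort is not None:
        loss += w_vort * np.mean(res_vort ** 2)
    return loss
```

Because the vorticity term enters additively, setting w_vort = 0 (or omitting the residual) recovers the baseline exactly, which is what makes the ablation in Section 3.4 well posed.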
3.4. Ablation Study and Hyperparameter Groups
- MAE for the velocity components u and v.
- RMSE for assessing energy-scale deviations.
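For reference, the two metrics can be written out directly; the velocity-magnitude variant below matches the "Velocity_magnitude_RMSE" columns reported in the ablation tables (the function names are ours).

```python
import numpy as np


def mae(pred, ref):
    """Mean absolute error."""
    return np.mean(np.abs(pred - ref))


def rmse(pred, ref):
    """Root mean square error."""
    return np.sqrt(np.mean((pred - ref) ** 2))


def velocity_magnitude_rmse(u_p, v_p, u_r, v_r):
    """RMSE of |V| = sqrt(u^2 + v^2), used to assess energy-scale
    deviations between predicted and reference fields."""
    return rmse(np.hypot(u_p, v_p), np.hypot(u_r, v_r))
```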
3.5. Training Protocol and Reproducibility
4. Results and Discussion
4.1. Optimization of Network Topology and Input Embedding
4.1.1. Network Architecture Search
4.1.2. Effect of Reynolds-Number Parameterization
- Linear Scaling (D1): When linear normalization is adopted, the Reynolds number is mapped directly to a uniformly spaced coordinate in the embedded space. Although this preserves constant spacing between adjacent training cases, it represents regime variation only in terms of absolute numerical difference. For the present dataset spanning Re = 1000 to Re = 50,000, such a parameterization is ill-suited to unified cross-regime learning. In practice, the network biases the optimization toward the high-Re regimes, whose residual distribution and numerical scale differ substantially from those of the low-Re cases, degrading global consistency.
- Logarithmic Embedding (D2): To alleviate this issue, the Reynolds number is transformed by a logarithmic embedding, log(Re), followed by min-max normalization before being input to the network. This transformation does more than redistribute the samples uniformly: it reshapes the conditioning variable by expanding the low-Re region and compressing the high-Re region of the embedded space. As shown in Figure 5, which visualizes the embedding mapping, the sample positions in the embedded space, and the spacing between neighboring Reynolds-number cases, the logarithmic embedding allocates higher parameter resolution to the low-Re range; the inset views show that this redistribution is especially pronounced at the low-Re end. These observations suggest that the logarithmic embedding improves parameter-space conditioning by representing Reynolds-number variation in relative-scale rather than absolute terms. This interpretation is consistent with the quantitative results: as shown in Figure 6, the logarithmic embedding reduces the Multi-Regime Mean Relative Error (MR-MRE) by 42% compared with linear scaling, indicating that logarithmic Reynolds-number embedding is an effective parameterization strategy for unified multi-regime training in the present study.
- Reciprocal Parameterization (D3): To assess a further alternative, we also tested the reciprocal mapping 1/Re, followed by min-max scaling, under the same network architecture and training budget. This transformation partially compresses the high-Re range, but it over-expands the low-Re interval and introduces strongly non-uniform sensitivity across regimes. Compared with linear scaling (D1), D3 improves the prediction in some individual cases; however, it is less stable than the logarithmic embedding (D2) in convergence behavior and cross-regime consistency. Overall, these results indicate that 1/Re is a useful auxiliary baseline, whereas log(Re) remains the most effective and robust parameterization for unified multi-regime training in the present study.
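The three candidate parameterizations can be compared directly. The helper below is illustrative (the case list and function name are ours, and natural log is assumed for D2), but it reproduces the qualitative effect described above: after min-max normalization, the logarithmic map allocates wider spacing to the low-Re cases than linear scaling does.

```python
import numpy as np


def embed(re, mode="log"):
    """Map Reynolds numbers to [0, 1] under the three candidate
    parameterizations: D1 linear, D2 logarithmic, D3 reciprocal."""
    z = np.asarray(re, dtype=float)
    if mode == "log":
        z = np.log(z)            # assumed natural log; base only rescales
    elif mode == "reciprocal":
        z = 1.0 / z
    elif mode != "linear":
        raise ValueError(f"unknown mode: {mode}")
    return (z - z.min()) / (z.max() - z.min())   # min-max normalization


re_cases = np.array([1000, 5000, 10000, 20000, 50000])
lin = embed(re_cases, "linear")
log = embed(re_cases, "log")
rec = embed(re_cases, "reciprocal")
# Spacing between the two lowest-Re cases is much larger under the
# logarithmic embedding than under linear scaling.
```

Note that the reciprocal map reverses orientation (large Re maps toward 0), which is one source of the non-uniform sensitivity discussed for D3.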
4.2. Efficacy of Physics-Enhanced Constraints
4.2.1. Improvement in Global Flow Prediction
4.2.2. Capturing Vortices
- (1) Baseline failure: standard PINNs often over-smooth these low-energy flow features, predicting near-zero velocity fields in the corner regions and thereby missing the secondary vortices entirely.
- (2) Effective capture by the enhanced model: with the vorticity transport constraint introduced, the model successfully identifies and reconstructs the corner secondary vortices. Figure 12 and Figure 13 present the vorticity distributions in the corner regions. VE-PINN resolves the separation-point locations and recirculation centers in richer detail than BL-PINN. Without vorticity enhancement, as the top-lid velocity increases, the gradients at the edge of the primary vortex become concentrated and the corner secondary vortices dissipate, making them difficult to resolve and capture. The VE-PINN predictions show that, as the Reynolds number increases, the primary vortex structure itself changes, which in turn generates cascaded secondary vortex structures.
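The constraint driving this behavior is the steady 2-D vorticity-transport residual, r = u ∂ω/∂x + v ∂ω/∂y − ν (∂²ω/∂x² + ∂²ω/∂y²). The paper evaluates it with automatic differentiation; the sketch below uses central finite differences purely as a sanity check, verified on a rigid-body rotation (u = −y, v = x, ω = 2), for which the residual vanishes identically.

```python
import numpy as np


def vorticity_transport_residual(u, v, omega, dx, dy, nu):
    """Central-difference residual of the steady 2-D vorticity
    transport equation on interior grid nodes.  Arrays are indexed
    [j, i] with x along axis 1 and y along axis 0."""
    wx = (omega[1:-1, 2:] - omega[1:-1, :-2]) / (2 * dx)
    wy = (omega[2:, 1:-1] - omega[:-2, 1:-1]) / (2 * dy)
    wxx = (omega[1:-1, 2:] - 2 * omega[1:-1, 1:-1] + omega[1:-1, :-2]) / dx**2
    wyy = (omega[2:, 1:-1] - 2 * omega[1:-1, 1:-1] + omega[:-2, 1:-1]) / dy**2
    return u[1:-1, 1:-1] * wx + v[1:-1, 1:-1] * wy - nu * (wxx + wyy)


# Rigid-body rotation: constant vorticity, zero convection and diffusion.
n = 21
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="xy")
u, v = -Y, X
omega = 2.0 * np.ones_like(X)
res = vorticity_transport_residual(u, v, omega,
                                   dx=x[1] - x[0], dy=x[1] - x[0], nu=0.01)
```

In VE-PINN this residual is penalized at the interior vorticity samples, which is what keeps the low-energy corner vortices from being smoothed away.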
4.2.3. Quantitative Validation of Vortex Capture
4.3. Parameter Sensitivity Analysis
4.3.1. Synergistic Weights
4.3.2. Collocation Point Density
4.4. Computational Time Analysis
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Abbreviations
| AD | Automatic Differentiation |
| AE-NN | Deep-Autoencoder Neural Network |
| BL-PINN | Baseline Physics-Informed Neural Network |
| CFD | Computational Fluid Dynamics |
| CFDbench | CFD Benchmark Dataset |
| CNN | Convolutional Neural Network |
| CRVPINN | Collocation-based Robust Variational Physics-Informed Neural Network |
| GNN | Graph Neural Network |
| LA-PINN | Loss-Attentional Physics-Informed Neural Network |
| LDC | Lid-Driven Cavity |
| LES | Large Eddy Simulation |
| LR | Learning Rate |
| LSTM | Long Short-Term Memory |
| MAE | Mean Absolute Error |
| MLP | Multi-layer Perceptron |
| MR-MRE | Mean Relative Error across Multiple Regimes |
| NAS-PINN | Neural Architecture Search-guided Physics-Informed Neural Network |
| NTK | Neural Tangent Kernel |
| PDE | Partial Differential Equation |
| PINN | Physics-Informed Neural Network |
| PIV | Particle Image Velocimetry |
| POD | Proper Orthogonal Decomposition |
| PRNN | Physics-Reinforced Neural Network |
| Re | Reynolds number |
| RMSE | Root Mean Square Error |
| RNN | Recurrent Neural Network |
| ROM | Reduced Order Model |
| VC-PINN | Variable Coefficient Physics-Informed Neural Network |
| VE-PINN | Vorticity-Enhanced Physics-Informed Neural Network |
References
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
- Raissi, M.; Yazdani, A.; Karniadakis, G.E. Hidden fluid mechanics: A Navier-Stokes informed deep learning framework for assimilating flow visualization data. arXiv 2018, arXiv:1808.04327. [Google Scholar] [CrossRef]
- Lawal, Z.K.; Yassin, H.; Lai, D.T.C.; Che Idris, A. Physics-informed neural network (PINN) evolution and beyond: A systematic literature review and bibliometric analysis. Big Data Cogn. Comput. 2022, 6, 140. [Google Scholar] [CrossRef]
- Ghia, U.K.N.G.; Ghia, K.N.; Shin, C.T. High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method. J. Comput. Phys. 1982, 48, 387–411. [Google Scholar] [CrossRef]
- Chiu, P.H.; Wong, J.C.; Ooi, C.; Dao, M.H.; Ong, Y.S. CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Comput. Methods Appl. Mech. Eng. 2022, 395, 114909. [Google Scholar] [CrossRef]
- Rahaman, N.; Baratin, A.; Arpit, D.; Draxler, F.; Lin, M.; Hamprecht, F.A. On the spectral bias of neural networks. arXiv 2018, arXiv:1806.08734. [Google Scholar] [CrossRef]
- Wang, S.; Teng, Y.; Perdikaris, P. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM J. Sci. Comput. 2021, 43, A3055–A3081. [Google Scholar] [CrossRef]
- Wang, C.; Li, S.; He, D.; Wang, L. Is L2 Physics Informed Loss Always Suitable for Training Physics Informed Neural Network? Adv. Neural Inf. Process. Syst. 2022, 35, 8278–8290. [Google Scholar] [CrossRef]
- Sekar, V.; Jiang, Q.; Shu, C. Accurate near wall steady flow field prediction using Physics Informed Neural Network (PINN). arXiv 2022. [Google Scholar] [CrossRef]
- Ko, S.; Park, S. VS-PINN: A fast and efficient training of physics-informed neural networks using variable-scaling methods for solving PDEs with stiff behavior. J. Comput. Phys. 2025, 529, 113860. [Google Scholar] [CrossRef]
- Łoś, M.; Służalec, T.; Maczuga, P.; Vilkha, A.; Uriarte, C.; Paszyński, M. Collocation-based robust variational physics-informed neural networks (CRVPINNs). Comput. Struct. 2025, 316, 107839. [Google Scholar] [CrossRef]
- Song, Y.; Wang, H.; Yang, H. Loss-attentional physics-informed neural networks. J. Comput. Phys. 2024, 501, 111722. [Google Scholar] [CrossRef]
- Li, Y.; Sun, Q.; Wei, J.; Huang, C. An Improved PINN Algorithm for Shallow Water Equations Driven by Deep Learning. Symmetry 2024, 16, 1376. [Google Scholar] [CrossRef]
- Miao, Z.; Chen, Y. VC-PINN: Variable coefficient physics-informed neural network for forward and inverse problems of PDEs with variable coefficient. Phys. D Nonlinear Phenom. 2023, 456, 133945. [Google Scholar] [CrossRef]
- Wang, Y.; Zhong, L. NAS-PINN: Neural architecture search-guided physics-informed neural network for solving PDEs. J. Comput. Phys. 2024, 496, 112603. [Google Scholar] [CrossRef]
- Wandel, N.; Weinmann, M.; Neidlin, M.; Klein, R. Spline-PINN: Approaching PDEs without Data using Fast, Physics-Informed Hermite-Spline CNNs. arXiv 2021. [Google Scholar] [CrossRef]
- Bafghi, R.A.; Raissi, M. PINNs-Torch: Enhancing Speed and Usability of Physics-Informed Neural Networks with PyTorch. The Symbiosis of Deep Learning and Differential Equations III. 2023. Available online: https://openreview.net/forum?id=nl1ZzdHpab (accessed on 11 November 2023).
- Noorizadegan, A.; Young, D.; Hon, Y.; Chen, C. Power-Enhanced Residual Network for Function Approximation and Physics-Informed Inverse Problems. Appl. Math. Comput. 2024, 480, 128910. [Google Scholar] [CrossRef]
- Luo, Y.; Chen, Y.; Zhang, Z. CFDBench: A Comprehensive Benchmark for Machine Learning Methods in Fluid Dynamics. arXiv 2024. [Google Scholar] [CrossRef]
- Iuliano, E. Towards a POD-based Surrogate Model for CFD Optimization. In Proceedings of the ECCOMAS CFD and Optimization 2011, Antalya, Turkey, 23–25 May 2011. [Google Scholar]
- Raibaudo, C.; Piquet, T.; Schliffke, B.; Conan, B.; Perret, L. POD analysis of the wake dynamics of an offshore floating wind turbine model. J. Phys. Conf. Ser. 2022, 2265, 022085. [Google Scholar] [CrossRef]
- Karcher, N. POD-Based Model-Order Reduction for Discontinuous Parameters. Fluids 2022, 7, 242. [Google Scholar] [CrossRef]
- Li, T.; Zou, S.; Chang, X.; Zhang, L.; Deng, X. Predicting unsteady incompressible fluid dynamics with finite volume informed neural network. Phys. Fluids 2024, 36, 23. [Google Scholar] [CrossRef]
- Chen, J.; Hachem, E.; Viquerat, J. Graph neural networks for laminar flow prediction around random 2D shapes. arXiv 2021. [Google Scholar] [CrossRef]
- Gao, R.; Jaiman, R.K. Predicting fluid–structure interaction with graph neural networks. Phys. Fluids 2024, 36, 17. [Google Scholar] [CrossRef]
- Zeng, F.; Zeng, Y.; Zhao, P.; Liu, Z.; Li, W. Nonlinear reduced-order analysis of three-dimensional thermal stratification phenomenon in the upper plenum of a lead-bismuth cooled fast reactor based on a graph neural network. Prog. Nucl. Energy 2025, 188, 105874. [Google Scholar] [CrossRef]
- Shen, Y.; Alonso, J.J. Performance Evaluation of a Graph Neural Network-Augmented Multi-Fidelity Workflow for Predicting Aerodynamic Coefficients on Delta Wings at Low Speed. In Proceedings of the AIAA SCITECH 2025 Forum, Orlando, FL, USA, 6–10 January 2025. [Google Scholar] [CrossRef]
- Shen, Y.; Needels, J.T.; Alonso, J.J. VortexNet: A Graph Neural Network-Based Multi-Fidelity Surrogate Model for Field Predictions. In Proceedings of the AIAA SCITECH 2025 Forum, Orlando, FL, USA, 6–10 January 2025. [Google Scholar] [CrossRef]
- Chen, W.; Wang, Q.; Hesthaven, J.S.; Zhang, C. Physics-informed machine learning for reduced-order modeling of nonlinear problems. J. Comput. Phys. 2021, 446, 110666. [Google Scholar] [CrossRef]
- Wang, S.; Chen, X.; Geyer, P. Feasibility analysis of pod and deep autoencoder for reduced order modelling Indoor environment CFD prediction. In Proceedings of the Building Simulation Conference Proceedings 2023, Shanghai, China, 4–9 September 2023. [Google Scholar] [CrossRef]
- Yan, C.; Xu, S.; Sun, Z.; Guo, D.; Ju, S.; Huang, R.; Yang, G. Exploring hidden flow structures from sparse data through deep-learning-strengthened proper orthogonal decomposition. Phys. Fluids 2023, 35, 037119. [Google Scholar] [CrossRef]
- Mohammadpour, J.; Li, X.; Salehi, F. Modelling Cryogenic Hydrogen Jet Dispersion: CFD-POD-ML Insights. In Proceedings of the 24th Australasian Fluid Mechanics Conference—AFMC2024, Canberra, Australia, 1–5 December 2024. [Google Scholar] [CrossRef]
- Geneva, N.; Zabaras, N. Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks. J. Comput. Phys. 2020, 403, 109056. [Google Scholar] [CrossRef]
- McClenny, L.; Braga-Neto, U. Self-adaptive physics-informed neural networks using a soft attention mechanism. arXiv 2024, arXiv:2009.04544. [Google Scholar]
- Shao, X.; Liu, Z.; Zhang, S.; Zhao, Z.; Hu, C. PIGNN-CFD: A physics-informed graph neural network for rapid predicting urban wind field defined on unstructured mesh. Build. Environ. 2023, 232, 110056. [Google Scholar] [CrossRef]
- Wang, Y.; Qiu, X.; Pei, Q.; Wang, J.; Zhang, P.; Bai, X. FAMAW-PINN: A physics-informed neural network integrating adaptive loss weighting with firefly-inspired adaptive point movement. J. Comput. Phys. 2025, 542, 114363. [Google Scholar] [CrossRef]
- Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Yu, P.S. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4–24. [Google Scholar] [CrossRef]
| Group ID | Category | Variables Tested | Values | Purpose |
|---|---|---|---|---|
| A | Network Architecture | width; depth; activation | width: 64/128/256/512; depth: 4/6/8/12/16; activation: tanh | Identify a stable MLP configuration for AD-based residuals |
| B | Loss Weights | λ_data; λ_BC; λ_ω | λ_data: 2.0/5.0/8.0; λ_BC: 5/10/20; λ_ω: 0.5/1.0/2.0 | Study sensitivity to loss balancing and vorticity constraint strength |
| C | Sampling/Resolution | interior grid resolution | 250 × 250/500 × 500 | Evaluate robustness to sampling density |
| D | Normalization Strategy | Reynolds embedding; input scaling | embedding: linear, log(Re), 1/Re; scaling: 50 k/100 k/∞ | Improve conditioning for multi-regime learning |
| E | Optimization Strategy | optimizer; LR schedule; initialization | Adam/L-BFGS/AdamW/RMSprop; constant/step/cosine; He/Xavier | Compare optimization dynamics and convergence stability |
| F | Physics Constraints | constraint type | None/Vorticity Transport | Quantify the benefit of the vorticity transport constraint |
| G | Training Dynamics | batch size; augmentation | batch: 256/512/1024/2048; augmentation: none/Gaussian noise ()/Random Fourier | Test convergence–memory trade-offs and training robustness |
| H | Supervision Ablation | supervised-data weight | default/0 | Distinguish the effect of supervised data from that of physics-based constraints |
| Re | BL-PINN (velocity RMSE) | VE-PINN (velocity RMSE) | BL-PINN (vorticity RMSE) | VE-PINN (vorticity RMSE) |
|---|---|---|---|---|
| 1000 | 0.060510 | 0.046405 | 1.217626 | 0.329759 |
| 10,000 | 0.218097 | 0.109516 | 12.023779 | 3.058282 |
| 20,000 | 0.450735 | 0.219769 | 24.083371 | 6.116485 |
| 50,000 | 1.712135 | 1.589577 | 59.950204 | 14.960957 |
| Re | Vortex Region | Reference Center | BL-PINN Center | VE-PINN Center | BL Center Abs. Error (x%, y%) | VE Center Abs. Error (x%, y%) | Reference ω | BL-PINN ω | VE-PINN ω |
|---|---|---|---|---|---|---|---|---|---|
| 1000 | Primary vortex | (0.5313, 0.5625) | (0.5301, 0.5312) | (0.5316, 0.5811) | (0.23, 5.56) | (0.06, 3.31) | 2.04968 | 1.9613 | 2.1201 |
| | Bottom-right secondary vortex | (0.8594, 0.1094) | (0.7651, 0.0814) | (0.8058, 0.1189) | (10.97, 25.59) | (6.24, 8.68) | 1.15465 | 1.0154 | 1.1843 |
| 10,000 | Primary vortex | (0.5117, 0.5333) | (0.4918, 0.5102) | (0.5202, 0.5561) | (3.89, 4.33) | (1.66, 4.28) | 1.88082 | 1.9671 | 1.9431 |
| | Bottom-right secondary vortex | (0.7656, 0.0586) | (0.6813, 0.0711) | (0.7903, 0.0623) | (11.01, 21.33) | (3.23, 6.31) | 4.05310 | 3.0159 | 3.7231 |
| ID | λ_ω | λ_data | Velocity-magnitude RMSE (Re = 1000) | Velocity-magnitude RMSE (Re = 10,000) | Velocity-magnitude RMSE (Re = 50,000) |
|---|---|---|---|---|---|
| B01 | 0.5 | 8 | 0.075475 | 0.580357 | 2.327602 |
| B02 | 1.0 | 5 | 0.077873 | 0.417447 | 3.895379 |
| B03 | 2.0 | 2 | 0.075728 | 0.279227 | 1.883633 |
| ID | Interior Grid Resolution | Velocity-magnitude RMSE (Re = 1000) | Velocity-magnitude RMSE (Re = 10,000) | Velocity-magnitude RMSE (Re = 50,000) |
|---|---|---|---|---|
| C01 | 250 × 250 | 0.090725 | 0.674081 | 4.021075 |
| C02 | 500 × 500 | 0.088883 | 0.374291 | 2.180845 |
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Zheng, Y.; Peng, F.; Wang, Z.; Lei, J.; Pian, S. A Vorticity-Enhanced Physics-Informed Neural Network with Logarithmic Reynolds Embedding. Fluids 2026, 11, 93. https://doi.org/10.3390/fluids11040093
