Article

A Vorticity-Enhanced Physics-Informed Neural Network with Logarithmic Reynolds Embedding

1 College of Naval Architecture and Ocean Engineering, Naval University of Engineering, Wuhan 430033, China
2 College of Shipping and Ocean Engineering, Wuhan Institute of Shipbuilding Technology, Wuhan 430050, China
3 School of Physics and Mechanical Electrical & Engineering, Hubei University of Education, Wuhan 430205, China
* Author to whom correspondence should be addressed.
Fluids 2026, 11(4), 93; https://doi.org/10.3390/fluids11040093
Submission received: 24 February 2026 / Revised: 27 March 2026 / Accepted: 31 March 2026 / Published: 2 April 2026

Abstract

To improve unified modeling of steady two-dimensional lid-driven cavity flow across a wide range of Reynolds numbers, this study proposes a Vorticity-Enhanced Physics-Informed Neural Network (VE-PINN). The method augments a standard velocity–pressure PINN with a vorticity-transport residual and uses a logarithmic Reynolds-number embedding, log₁₀(Re), for multi-regime training. Using CFD benchmark data as supervision and evaluation, we conduct systematic ablation studies on network architecture, loss weighting, sampling density, input embedding, and physical constraints over Re = 1000–50,000, together with out-of-range extrapolation tests. The results show that the logarithmic Reynolds-number embedding improves cross-regime training stability and reduces the multi-regime mean relative error, while the vorticity-transport constraint improves the reconstruction of velocity fields and secondary vortical structures with only a modest increase in training cost. Further comparisons based on contour fields, centerline velocity profiles, vortex-core locations, and vorticity intensity indicate that VE-PINN provides improved accuracy, physical consistency, and generalization relative to the baseline PINN in the present benchmark. These findings suggest that, for the steady cavity-flow problem considered here, combining logarithmic parameter embedding with derivative-level physical constraints is a practical and effective strategy for parametric PINN modeling.

1. Introduction

Computational Fluid Dynamics (CFD) plays an important role in engineering design and scientific research, but traditional CFD methods are often computationally expensive and struggle with inverse problems or parameterized studies. In recent years, Physics-Informed Neural Networks (PINN) have emerged as a promising mesh-free framework for solving both forward and inverse partial differential equation (PDE) problems by embedding physical laws into the neural network’s training objective [1,2]. Since their introduction, PINNs have attracted widespread attention in scientific computing because of their ability to integrate sparse data with first-principles constraints. Lawal et al. (2022) conducted a systematic literature review and bibliometric analysis of 120 articles published from 2019 to 2022, identifying fluid mechanics as one of the fastest-growing application areas for PINNs and analyzing the objectives, methods, and limitations of PINN-based approaches [3]. This trend is primarily driven by the urgent demand for efficient solvers for incompressible Navier–Stokes equations in engineering design and analysis.
Two-dimensional lid-driven cavity flow is a classic benchmark problem, and its detailed solutions have been well documented for decades [4]. As the Reynolds number increases, the flow field gradually evolves from simple recirculation at low Reynolds numbers to complex coherent vortex structures and multiple flow regimes at high Reynolds numbers. This sensitivity to flow parameters makes it an ideal platform for evaluating whether physics-informed learning methods can robustly capture global flow patterns and local dynamic characteristics across a wide range of flow regimes. However, when a single PINN model is required to generalize across multiple Reynolds-number ranges, training becomes fragile, especially for high Reynolds-number cavity flows dominated by strong shear layers and vorticity transport.
Despite the promising outlook for PINN, standard PINN faces numerous optimization and representation challenges when applied to nonlinear, convection-dominated flow problems [5]. Solutions at high Reynolds numbers often involve steep boundary layers and multi-scale features, which traditional multi-layer perceptrons (MLPs) frequently struggle to approximate due to their “spectral bias,” meaning neural networks tend to prioritize learning low-frequency components while neglecting high-frequency gradient information [6]. In addition, “gradient flow pathology”—caused by stiff PDE residuals and imbalanced loss terms—can lead to unstable training and slow convergence [7]. Chuwei Wang et al. (2022) theoretically investigated the relationship between the loss function and the quality of solution approximation, pointing out that the L2-norm residual, commonly chosen as the standard for PINN training, does not always correlate with the true accuracy of the physical solution, especially in convection-dominated flow regimes [8]. Sekar V et al. (2022) observed accurate solutions far from the wall but oscillations in the surface friction coefficient distribution when simulating flat plate boundary layer problems using PINN, indicating that accurate prediction in near-wall regions remains challenging [9].
To address these challenges, researchers have proposed various improved strategies aimed at enhancing PINN training efficiency, robustness, and the ability to capture complex physical phenomena:
(1)
Regarding training efficiency and robustness: Ko S et al. (2025) proposed the variable-scaling technique VS-PINN, which significantly improved the training efficiency and accuracy for stiff and nonlinear problems by analyzing the eigenvalues of the neural tangent kernel (NTK) [10]. Marcin Łoś et al. (2025) developed the Collocation-based Robust Variational Physics-Informed Neural Network (CRV-PINN), which enhanced model stability by constructing a robust loss function and is applicable to a broad class of partial differential equations [11]. Song Y et al. (2024) introduced LA-PINN, which, by incorporating a loss-attentional network, assigns different weights to errors at different training points, effectively improving convergence speed and approximation accuracy for hard-to-fit regions [12].
(2)
Regarding complex flow field feature capture: Li Y et al. (2024) improved upon classical PINN by incorporating Long-Short Term Memory (LSTM) network structures and attention mechanisms, along with an L1 regularization term as a penalty [13]. This effectively addressed pseudo-oscillations and smoothing phenomena near sparse waves and shock waves, significantly reducing oscillation relaxation. Their improved algorithm offers higher resolution and a better ability to capture the details of complex phenomena.
(3)
Regarding model architecture and parameterization: Zhengwu Miao et al. (2023) [14] proposed a deep learning method called VC-PINN (Variable Coefficient Physics-Informed Neural Network), specifically designed for forward and inverse problems of partial differential equations with variable coefficients. It enhances standard PINN by adding a branch network to approximate variable coefficients, incorporating hard constraints on these coefficients, and introducing a ResNet structure without additional parameters to unify linear and nonlinear coefficients while mitigating the vanishing-gradient problem [14]. Yifan Wang et al. (2024) [15] employed a Neural Architecture Search-guided method called NAS-PINN to automatically discover optimal neural architectures for solving given partial differential equations. They validated its adaptability to irregular computational domains and high-dimensional problems and numerically demonstrated that more hidden layers do not necessarily lead to better performance [15]. Wandel N et al. (2021) proposed Spline-PINN, which approximates incompressible Navier–Stokes equations and damped wave equations by training continuous Hermite spline CNNs using only a physics-informed loss function [16]. Additionally, the PINNs-Torch package developed by Reza Akbarian Bafghi et al. (2023) leverages the PyTorch 2.0 (https://github.com/rezaakb/pinns-torch, accessed on 4 June 2025) framework to significantly enhance the implementation speed and usability of PINN by compiling dynamic computational graphs into static ones [17]. Furthermore, A. Noorizadegan et al. (2024) improved PINN reliability and accuracy for PDE problems by modifying the network architecture to enhance gradient flow and representation capacity, leading to more stable training [18]. This architecture-centered line is closely related to our objective but follows a different technical route.
While these methods have achieved significant progress in their respective domains, the generalization capability across multiple Reynolds-number regimes—i.e., the robust applicability of a single network architecture over a broad range of flow parameters—remains a core unsolved challenge [2,3]. This is particularly critical in canonical benchmark problems like lid-driven cavity flow, where the geometry is fixed but vortex structures dynamically evolve with the Reynolds number, making the design of PINN models capable of effectively handling solution manifold transformations caused by parameter variations exceptionally important. The CFDBench dataset introduced by Luo Yining et al. (2023) also focuses on evaluating the generalization capabilities of machine learning methods in fluid mechanics across different operating conditions (boundary conditions, fluid physical properties, domain geometry), further highlighting the significance of cross-parameter generalization learning [19]. From this perspective, robust PINN design can be approached from two complementary directions: (i) architecture-level enhancement (e.g., improving trainability and expressiveness) and (ii) physics/loss-level enhancement with appropriate parameter conditioning. The present study mainly investigates the second direction for wide-Reynolds-number cavity flows.
Parametric PINNs guide networks to learn solution mappings under different operating conditions by incorporating physical parameters as additional inputs [6]. However, this approach faces critical challenges, such as how to appropriately scale parameters to avoid "regime collapse" and how to design network architectures that maintain sufficient representational capacity across significantly different flow states. These challenges are a key focus of the present work. The review by Lawal et al. (2022) also envisioned combining PINN with Graph Neural Networks (GNN) or Recurrent Neural Networks (RNN) for time-series data-driven flow prediction to enhance multi-regime learning capabilities [3].
Beyond directly solving PDEs, combining data-driven Reduced Order Models (ROM) with deep learning has emerged as an effective approach for handling complex flow fields. Proper Orthogonal Decomposition (POD) is commonly used to extract dominant flow modes. Iuliano E (2011) utilized CFD methods combined with POD to construct surrogate models for transonic airfoil aerodynamic design [20]. Raibaudo, Cédric et al. (2022) investigated the wake dynamics of offshore floating wind turbines through PIV experiments and POD analysis [21]. Karcher N (2022) proposed an improved oPOD snapshot reconstruction method for cases with incomplete data [22].
In recent years, Graph Neural Networks (GNNs) have shown great potential in fluid mechanics due to their ability to process unstructured data. Tianyu Li et al. (2024) introduced a novel GNN-based Spatial Integration Layer (S.I.L) to implement physical constraint, significantly improving the prediction accuracy of GNN in unsteady flow prediction scenarios, and constructed a quadratic message aggregation process in the GNN’s message-passing layer using an extended stencil method [23]. Chen J et al. (2021) proposed a GNN architecture for two-dimensional incompressible laminar flow prediction, which works directly on body-fitted triangular meshes and can effectively reconstruct velocity and pressure fields around cylinders regardless of input geometry symmetry [24]. Gao R et al. (2024) developed a data-driven reduced-order modeling framework for fluid–structure interaction systems based on rotation-equivariant quasi-monolithic GNN, achieving stable and accurate predictions of fluid and mesh states [25]. Zeng F et al. (2025) constructed a graph auto-encoder based on GNN for nonlinear reduced-order analysis of three-dimensional thermal stratification phenomena in the upper plenum of a lead-bismuth-cooled fast reactor [26]. In the context of multi-fidelity modeling, Shen Y et al. (2025a) proposed VortexNet, a GNN-based surrogate model for aircraft conceptual design, integrating high-fidelity CFD data into low-fidelity prediction tools and achieving predictive capabilities by fusing geometric information and flow features in the latent space [27]. Shen Y et al. (2025b) further integrated VortexNet into the SUAVE conceptual design suite to predict aerodynamic coefficients [28].
Furthermore, deep learning methods have also been integrated with ROM. Chen W et al. (2021) proposed a data-driven physics-informed machine learning method for dimensionality reduction, predicting reduced-order coefficients using a feedforward neural network [29]. By using the mean square norm of the reduced-order equation residuals as a loss function, PINN can learn POD-G reduced-order models. Projecting high-fidelity snapshots from reduced basis generation provided additional physical information to train the Physics-Reinforced Neural Network (PRNN) by minimizing a weighted sum of the reduced-order equation residual loss and the matching error loss on the projected dataset. Wang S et al. (2023) compared POD-NN (proper orthogonal decomposition neural network) and AE-NN (deep-autoencoder neural network) for real-time accurate prediction of indoor airflow organization and velocity distribution, highlighting the significant advantages of AE-NN, based on its network structure and nonlinear functions, for fast and accurate prediction of complex dynamic CFD problems [30]. Yan C et al. (2023) introduced the PINN-POD method, which reconstructs the flow field from sparse observation data using several sub-PINNs, requiring less data than traditional POD to extract hidden flow structures [31]. Javad Mohammadpour et al. (2024) combined Large Eddy Simulation (LES), POD, and a novel hybrid model (LSTM neural networks with SES methods) to analyze cryogenic hydrogen diffusion and accurately predict time coefficients [32].
These studies demonstrate that GNN and ROM can effectively reduce computational burden and perform well in handling unstructured geometries, learning from sparse data, or integrating multi-scale information. However, these methods typically focus on learning mappings from data or dynamics after dimensionality reduction, rather than directly using physical constraints to enhance a single PINN for solving full-scale complex flow fields over a wide parameter range, particularly addressing stability and accuracy issues for vortex details at high Reynolds numbers [33,34,35,36,37].
Therefore, developing a mechanism that can directly improve PINN’s generalization ability in multi-regime complex flows remains a critical unsolved problem. At the same time, we note that several ingredients relevant to this goal have been explored individually in prior studies, including parameter scaling/embedding, vorticity-related formulations, adaptive weighting, and architecture-level optimization. The combined deployment of logarithmic Reynolds-number embedding and vorticity-based derivative regularization within a single parametric PINN framework, under a controlled and reproducible benchmark protocol, has not been systematically investigated for wide-Re lid-driven cavity flows.
Accordingly, the main scientific contribution of this work is not a novel standalone module, but the targeted integration of two complementary physics-informed enhancements:
(1) a logarithmic Reynolds-number embedding to mitigate regime collapse and improve cross-regime parameter conditioning; and
(2) a vorticity-transport residual used as an auxiliary derivative-level regularization term to strengthen supervision of vortex-dominated dynamics—without introducing new governing physics.
The controlled benchmark protocol serves as a supporting validation framework, designed specifically to isolate and verify the contribution of each component. It enables us to attribute performance gains unambiguously to the coupled design, rather than to incidental implementation choices. In this context, we propose a VE-PINN for steady two-dimensional lid-driven cavity flow across a wide Reynolds-number range. The method is built on the above coupled design, and its efficacy is demonstrated through systematic ablation studies and out-of-range extrapolation tests under reproducible conditions.

2. Theoretical Foundations and Method Formulation

This section introduces the theoretical foundations underlying the proposed VE-PINN framework. We first review the governing equations of incompressible viscous flows and their non-dimensional formulation. We then briefly summarize the Physics-Informed Neural Network paradigm and discuss its limitations in high-Reynolds-number and multi-regime settings. Finally, we motivate the incorporation of vorticity-based constraint and logarithmic Reynolds embedding from a theoretical perspective.

2.1. Problem Definition and Parametric Setting

We consider the steady two-dimensional incompressible lid-driven cavity flow as the benchmark problem in this work. The governing equations are the steady incompressible Navier–Stokes equations posed on a square domain Ω ⊂ ℝ² with boundary ∂Ω; the unknown solution fields are the velocity u(x, y) = (u, v) and the pressure p(x, y).
Unless otherwise stated, all variables are expressed in nondimensional form. Spatial coordinates are normalized by the cavity length L, velocities by the lid speed U, and pressure by ρU², so that the computational domain used by the network is the normalized square cavity [0, 1] × [0, 1], and the problem is parameterized by the Reynolds number Re = UL/ν.
In the parametric-learning setting, Re is introduced as an additional network input. The training cases span Re ∈ [1000, 50,000], while out-of-range cases are used separately for extrapolation assessment.

2.2. Governing Equations for Incompressible Flow

For steady incompressible flow, the non-dimensional governing equations are
∇·u = 0
(u·∇)u + ∇p − (1/Re)∇²u = 0
Following the PINN paradigm, we define the PDE residuals (evaluated via automatic differentiation) as
f_mass = ∂u/∂x + ∂v/∂y
f_u = u ∂u/∂x + v ∂u/∂y + ∂p/∂x − (1/Re)∇²u
f_v = u ∂v/∂x + v ∂v/∂y + ∂p/∂y − (1/Re)∇²v
where ∇²(·) = ∂²(·)/∂x² + ∂²(·)/∂y².

2.3. Neural Representation and Logarithmic Reynolds Embedding

We approximate the parametric solution map by a neural network
(u_θ, v_θ, p_θ) = N_θ(x, y, R̂e)
where θ denotes the network parameters and R̂e = R̂e_log is the normalized Reynolds input. To improve conditioning in multi-regime learning, we adopt the logarithmic Reynolds embedding, which maps log₁₀(Re) to [−1, 1] using min–max scaling:
R̂e_log = 2 · (log₁₀(Re) − log₁₀(Re_min)) / (log₁₀(Re_max) − log₁₀(Re_min)) − 1,  Re ∈ [Re_min, Re_max]
For comparison, linear normalization is defined as
R̂e_lin = 2 · (Re − Re_min) / (Re_max − Re_min) − 1
Unless otherwise specified, R̂e = R̂e_log is used in all multi-regime experiments. This re-parameterization does not modify the governing equations; it changes only how the parametric input is presented to the network, which can reduce regime imbalance when a single model is trained across a wide range of Reynolds numbers. To stabilize the higher-order derivatives required by the physics residuals, we use smooth tanh activations, normalized inputs (x, y, and the log-scaled Reynolds embedding), and regularized optimization (L2 weight decay and global gradient clipping).
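As an illustration, the two embeddings can be sketched as follows; this is a minimal NumPy sketch, with function names chosen here and default Re_min/Re_max values matching the training range used later in the paper:

```python
import numpy as np

def log_reynolds_embedding(re, re_min=1000.0, re_max=50000.0):
    """Map Re to [-1, 1] by min-max scaling in log10 space (R-hat_log)."""
    num = np.log10(re) - np.log10(re_min)
    den = np.log10(re_max) - np.log10(re_min)
    return 2.0 * num / den - 1.0

def lin_reynolds_embedding(re, re_min=1000.0, re_max=50000.0):
    """Linear min-max scaling to [-1, 1] (R-hat_lin), kept for comparison."""
    return 2.0 * (re - re_min) / (re_max - re_min) - 1.0
```

Both maps send Re_min to −1 and Re_max to +1; the logarithmic variant spreads the low-Re regimes more evenly across the input interval.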

2.4. PINN Loss Function Framework

PINN training is formulated as minimizing a weighted sum of physics residuals, boundary constraints, and supervised data mismatch:
L_total = λ_pde L_pde + λ_bc L_bc + λ_data L_data
where L_pde, L_bc, and L_data denote the PDE residual loss, the boundary-condition loss, and the supervised data loss, respectively. In addition, λ_pde, λ_bc, and λ_data are tuned in a bounded range to avoid gradient domination by any single term, which improves numerical stability when computing AD-based high-order derivatives. Below, we define each term.

2.4.1. PDE Residual Loss

We sample N_f collocation points {(x_i, y_i, Re_i)}_{i=1}^{N_f} inside the domain and enforce the steady Navier–Stokes residuals:
L_pde = (1/N_f) Σ_{i=1}^{N_f} ( |f_mass^(i)|² + |f_u^(i)|² + |f_v^(i)|² )
where f_mass^(i), f_u^(i), and f_v^(i) are defined in Equations (3)–(5) and are evaluated using automatic differentiation of N_θ.
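Evaluating these residuals with automatic differentiation might look as follows; this is a hedged PyTorch sketch, not the authors' implementation, and the network interface (stacked (x, y, R̂e) input, (u, v, p) output) and helper names are assumptions:

```python
import torch

def ns_residuals(net, x, y, re_hat, re):
    """Steady Navier-Stokes residuals f_mass, f_u, f_v via autograd.
    `net` maps stacked (x, y, re_hat) to (u, v, p); names are illustrative."""
    x.requires_grad_(True)
    y.requires_grad_(True)
    u, v, p = net(torch.stack([x, y, re_hat], dim=-1)).unbind(-1)

    def grad(f, w):
        # first-order derivative df/dw, kept on the graph for higher orders
        return torch.autograd.grad(f, w, torch.ones_like(f), create_graph=True)[0]

    u_x, u_y = grad(u, x), grad(u, y)
    v_x, v_y = grad(v, x), grad(v, y)
    p_x, p_y = grad(p, x), grad(p, y)
    u_xx, u_yy = grad(u_x, x), grad(u_y, y)
    v_xx, v_yy = grad(v_x, x), grad(v_y, y)

    f_mass = u_x + v_y
    f_u = u * u_x + v * u_y + p_x - (u_xx + u_yy) / re
    f_v = u * v_x + v * v_y + p_y - (v_xx + v_yy) / re
    return f_mass, f_u, f_v
```

Squaring and averaging the three residuals over the collocation set then gives L_pde.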

2.4.2. Boundary Condition Loss (Separated Lid/Wall Terms)

Let ∂Ω be decomposed into the moving-lid boundary Γ_lid and the stationary wall boundaries Γ_wall. Using N_lid samples on the lid and N_wall samples on the stationary walls, we define
L_bc = L_lid + L_wall
L_lid = (1/N_lid) Σ_{j=1}^{N_lid} ( |u_θ(x_j) − U_lid|² + |v_θ(x_j)|² ),  x_j ∈ Γ_lid
L_wall = (1/N_wall) Σ_{k=1}^{N_wall} ( |u_θ(x_k)|² + |v_θ(x_k)|² ),  x_k ∈ Γ_wall
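A minimal NumPy sketch of lid/wall boundary sampling and the corresponding mean-squared penalties, assuming the normalized unit cavity; the sample counts and function names here are illustrative, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def boundary_points(n_lid=100, n_wall=300):
    """Sample the moving lid (y = 1) and the three stationary walls."""
    lid = np.stack([rng.uniform(0, 1, n_lid), np.ones(n_lid)], axis=1)
    n = n_wall // 3
    left = np.stack([np.zeros(n), rng.uniform(0, 1, n)], axis=1)
    right = np.stack([np.ones(n), rng.uniform(0, 1, n)], axis=1)
    bottom = np.stack([rng.uniform(0, 1, n), np.zeros(n)], axis=1)
    return lid, np.concatenate([left, right, bottom])

def bc_loss(u_pred, v_pred, u_lid=1.0, on_lid=True):
    """Mean-squared mismatch against the lid or no-slip condition."""
    target_u = u_lid if on_lid else 0.0
    return np.mean((u_pred - target_u) ** 2 + v_pred ** 2)
```

L_bc is then the sum of `bc_loss` evaluated on the lid samples (with `on_lid=True`) and on the wall samples (with `on_lid=False`).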

2.4.3. Data Loss (Supervised Reference Samples)

To incorporate sparse reference observations from CFDBench/Ansys Fluent, we use N_d supervised samples {(x_k, y_k, Re_k), (u_k^ref, v_k^ref)}_{k=1}^{N_d} and minimize the data loss
L_data = (1/N_d) Σ_{k=1}^{N_d} ( |u_θ(x_k) − u_k^ref|² + |v_θ(x_k) − v_k^ref|² )
If pressure data are not used in supervision, p is learned implicitly through the PDE constraints.

2.5. Vorticity-Enhanced Constraints (VE-PINN)

2.5.1. Vorticity Definition and Transport Residual

For two-dimensional incompressible flow, the scalar vorticity is
ω_θ = ∂v_θ/∂x − ∂u_θ/∂y
In steady state, the corresponding vorticity transport equation becomes
u_θ ∂ω_θ/∂x + v_θ ∂ω_θ/∂y − (1/Re)(∂²ω_θ/∂x² + ∂²ω_θ/∂y²) = 0
We define the vorticity transport residual
f_ω = u_θ ω_θ,x + v_θ ω_θ,y − (1/Re)(ω_θ,xx + ω_θ,yy)
and enforce it at N_p interior points:
L_vort = (1/N_p) Σ_{i=1}^{N_p} |f_ω^(i)|²
Here ω is not parameterized by an additional network. Instead, the single network N_θ(x, y, R̂e) outputs (u, v, p), and ω is computed from the same network outputs via automatic differentiation as ω = ∂v/∂x − ∂u/∂y. All derivatives in the vorticity-transport residual are obtained on the same computational graph. We emphasize that the steady vorticity-transport equation is not an independent physical law from the incompressible Navier–Stokes equations; rather, it can be derived by taking the curl of the momentum equations. Therefore, the introduced vorticity loss does not inject new physics but provides an additional derivative-level constraint on the same solution manifold. In this sense, L_vort can be interpreted as a physics-derived regularization term that emphasizes rotational transport and local shear structures.
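Computing ω and its transport residual on the same computational graph could be sketched as follows; this is a hedged PyTorch sketch under the same assumed network interface as before, not the paper's code:

```python
import torch

def vorticity_residual(net, x, y, re_hat, re):
    """Vorticity-transport residual f_omega; omega is derived from the same
    (u, v) outputs via autograd, with no extra vorticity network."""
    x.requires_grad_(True)
    y.requires_grad_(True)
    u, v, _ = net(torch.stack([x, y, re_hat], dim=-1)).unbind(-1)

    def grad(f, w):
        return torch.autograd.grad(f, w, torch.ones_like(f), create_graph=True)[0]

    omega = grad(v, x) - grad(u, y)            # omega = dv/dx - du/dy
    w_x, w_y = grad(omega, x), grad(omega, y)  # convective derivatives
    w_xx, w_yy = grad(w_x, x), grad(w_y, y)    # diffusive derivatives
    return u * w_x + v * w_y - (w_xx + w_yy) / re
```

Note that third-order derivatives of the network appear here (two from ω plus one more from its Laplacian), which is why smooth activations such as tanh are helpful.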

2.5.2. Full VE-PINN Objective

The final VE-PINN objective augments the standard PINN loss with vorticity-enhanced constraints:
L_VE-PINN = λ_pde L_pde + λ_bc L_bc + λ_data L_data + λ_vort L_vort
The weight λ_vort is tuned through sensitivity analysis, and we report the recommended robust ranges based on the observed performance curves. The overall architecture of VE-PINN is shown in Figure 1. Although redundant in a strict PDE sense, this auxiliary constraint reshapes the optimization landscape by directly penalizing rotational inconsistency, which is particularly relevant in cavity flows with sharp shear layers and corner vortices. This effect is analogous to Sobolev-type training, where matching derivative information improves the approximation of gradient-dominated features.
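One optimization step on the weighted objective, including the L2 weight decay and global gradient clipping mentioned in Section 2.3, could be sketched as below (PyTorch; the function name, loss keys, weights, and clipping threshold are illustrative, not the paper's values):

```python
import torch

def ve_pinn_step(net, optimizer, losses, weights, clip_norm=1.0):
    """One step on the weighted VE-PINN objective.
    `losses` is a dict of scalar tensors keyed 'pde', 'bc', 'data', 'vort'."""
    total = sum(weights[k] * losses[k] for k in ('pde', 'bc', 'data', 'vort'))
    optimizer.zero_grad()
    total.backward()
    # global gradient clipping stabilizes AD-based high-order derivative terms
    torch.nn.utils.clip_grad_norm_(net.parameters(), clip_norm)
    optimizer.step()
    return total.item()
```

L2 weight decay would be supplied through the optimizer (e.g., Adam's `weight_decay` argument), so the step function itself only handles weighting and clipping.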

2.5.3. Training Objective

We consider a parametric PINN that maps spatial coordinates and a Reynolds-number parameter to the steady flow field. Let the neural network be
N_θ : (x, y, R̂e) → ( u_θ(x, y, R̂e), v_θ(x, y, R̂e), p_θ(x, y, R̂e) )
where θ denotes the trainable parameters and R̂e is the logarithmically embedded Reynolds number. The model is trained by minimizing a weighted sum of supervised data mismatch, boundary-condition penalties, and physics residuals:
min_θ L_VE-PINN(θ) = λ_pde L_pde + λ_bc L_bc + λ_data L_data + λ_vort L_vort
The four loss components are evaluated on different sample sets:
(1)
Supervised reference samples
D_d = {(x_k, y_k, R̂e_k, u_k^ref, v_k^ref)}_{k=1}^{N_d}
where N_d is the number of supervised samples and (u_k^ref, v_k^ref) are the reference velocities. In this work, reference samples are extracted from Ansys Fluent solutions on a 64 × 64 grid for each Reynolds case.
(2)
Boundary samples
D_bc = D_lid ∪ D_wall
used in L_bc to enforce the moving-lid and no-slip wall conditions.
(3)
Interior collocation samples
D_f = {(x_i, y_i, R̂e_i)}_{i=1}^{N_f}
for the Navier–Stokes residual L_pde.
(4)
Interior vorticity samples
D_p = {(x_i, y_i, R̂e_i)}_{i=1}^{N_p}
for the vorticity-transport residual L_vort.
For implementation simplicity and fair comparison, we use the same interior sample set for both residuals, i.e., N_p = N_f and D_p = D_f. Here, D_f denotes the interior collocation set defined on the 500 × 500 interior training grid for each Reynolds-number case. All spatial derivatives in the residual terms are computed via automatic differentiation. During training, we minimize L_VE-PINN with respect to θ. During inference and evaluation, predictions are reported on a separate prediction set, which is not used for backpropagation.
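The two grids described above (a 500 × 500 interior collocation grid shared by L_pde and L_vort, and a 64 × 64 supervised grid) can be constructed as in this NumPy sketch; the function name and default arguments are illustrative:

```python
import numpy as np

def build_sample_sets(n_interior=500, n_sup=64):
    """Interior collocation grid (shared by L_pde and L_vort, i.e., D_p = D_f)
    and supervised grid for one Reynolds case, on the unit cavity."""
    # interior points exclude the cavity boundary
    s = np.linspace(0.0, 1.0, n_interior + 2)[1:-1]
    xi, yi = np.meshgrid(s, s, indexing='ij')
    interior = np.stack([xi.ravel(), yi.ravel()], axis=1)    # (n_interior**2, 2)
    # supervised grid matches the 64 x 64 reference fields
    g = np.linspace(0.0, 1.0, n_sup)
    xs, ys = np.meshgrid(g, g, indexing='ij')
    supervised = np.stack([xs.ravel(), ys.ravel()], axis=1)  # (n_sup**2, 2)
    return interior, supervised
```

With the defaults, the interior set contains the 250,000 collocation points stated in Section 3.3, and the supervised set contains 4096 points.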

3. Calculation Condition

3.1. Problem Definition and Domain Setup

We consider the two-dimensional steady incompressible lid-driven cavity (LDC) flow. The computational domain is shown in Figure 2.
The computational domain is a square cavity, defined as
Ω = [0, L] × [0, L],  L = 0.01 m
Boundary conditions are prescribed as follows:
Moving lid (y = L):
u = U_lid, v = 0
Stationary walls (x = 0, x = L, y = 0; no-slip):
u = 0, v = 0
Pressure gauge: to remove the constant-pressure null space, a reference pressure is fixed at
p(0, 0) = 0
The same pressure-gauge convention is used in all cases, across all Reynolds numbers and both model variants (BL-PINN and VE-PINN). This gauge condition removes only the additive pressure constant and does not affect velocity predictions or pressure gradients. Therefore, pressure fields are comparable across all experiments under a unified reference.

3.2. Reynolds-Number Parameterization and Training Range

The Reynolds-number is defined by
Re = ρ U_lid L / μ
where the fluid density is ρ = 1 kg/m³, the dynamic viscosity is μ = 1 × 10⁻⁵ kg/(m·s), and the characteristic length is L = 0.01 m. Substituting these values yields
Re = (1 · U_lid · 0.01) / (1 × 10⁻⁵) = 1000 U_lid
During training, the lid velocity is sampled as U_lid ∈ {1, 2, 3, …, 50}, which corresponds to the discrete training set Re ∈ {1000, 2000, 3000, …, 50,000}, i.e., 50 Reynolds-number regimes used for in-range training and comparison. In addition, to test out-of-distribution behavior, we conduct extrapolation at Re = 100 and Re = 100,000, which lie outside the training range and are reported separately.
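The mapping from lid velocity to the discrete training set is a simple arithmetic check (illustrative Python, variable names chosen here):

```python
# Re = 1000 * U_lid, with U_lid in {1, 2, ..., 50} m/s
u_lid_values = list(range(1, 51))
re_values = [1000 * u for u in u_lid_values]  # 50 regimes: 1000, ..., 50000
```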
We use high-fidelity CFD solutions from CFDBench as ground truth to evaluate accuracy and generalization. The LDC reference solutions are generated with a finite-volume solver in ANSYS-Fluent 2021R1 using a pressure-based coupled scheme for single-phase flow. The pressure equation uses second-order interpolation, and the momentum equation uses second-order upwind discretization. The transient term is treated by a first-order implicit scheme, and gradient interpolation uses the least-squares method. Near-wall mesh refinement is applied (first-layer size: 10⁻⁵ m) to better capture boundary-layer behavior. According to the dataset documentation, all computational models underwent grid-independence validation, with strict residual convergence settings (10⁻⁹ for the equation residual criteria; velocity residuals at least 10⁻⁶). The final released fields are then interpolated to a 64 × 64 grid. We note that Ghia et al. (1982) [4] remains the classical reference for the lid-driven cavity problem. In this study, CFDBench is adopted as the primary reference source because our goal is cross-regime PINN benchmarking under a unified and reproducible data protocol, with consistently generated CFD fields across multiple operating conditions.

3.3. Training Samples: Supervised Data, Collocation Points, and Boundary Points

We emphasize that the present study adopts a hybrid supervised–physics training setting rather than a purely physics-only PINN setting. Specifically, Ansys-Fluent reference velocity fields are used in the supervised loss term during training, while PDE residuals, boundary conditions, and (for VE-PINN) the vorticity-transport residual provide additional physics-based constraints. Training is performed using three types of samples, which serve different loss terms.
(1)
Supervised CFD samples (for L d a t a )
For each Reynolds-number case, reference velocity fields ( u r e f , v r e f ) are extracted on a structured grid of size 64 × 64 . These supervised grid samples are used only for training loss construction ( L d a t a ) and are never reused as evaluation points.
(2)
Interior collocation points (for L p d e and L v o r t )
Physics residuals are enforced at interior points sampled inside the computational domain Ω . In our implementation, for each Reynolds number case, a 500 × 500 interior training grid is used, yielding 250,000 interior collocation points. These interior points are used simultaneously to compute the Navier–Stokes residual loss L p d e and the vorticity-transport residual loss L v o r t . This sampling strategy ensures that physics constraints are enforced equally for each Reynolds regime.
(3)
Boundary points (for L b c )
Boundary points are sampled on Ω to enforce velocity boundary conditions, with the moving lid treated separately from the three stationary walls. If the boundary sampling count is not explicitly specified in the configuration, it is handled internally by the sampling routine and kept fixed across all compared methods.
For visualization and quantitative evaluation, we generate a separate prediction set consisting of N_pred = 500 points. These points are sampled uniformly at random over Ω using a fixed random seed and are kept identical across all Reynolds numbers and all compared methods. The prediction points are used exclusively for inference and metric computation and do not participate in backpropagation. In addition, the prediction set is constructed independently from the supervised, interior-collocation, and boundary samples, with no overlap by design, to prevent train–test leakage. All reported error metrics, including the MAE and RMSE of velocity components, are computed on this fixed prediction set to ensure fair and reproducible comparisons across Reynolds regimes and model variants.
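A sketch of the fixed prediction set and the per-component error metrics, assuming a seeded NumPy generator; the seed value and function names are illustrative:

```python
import numpy as np

def prediction_set(n_pred=500, seed=0):
    """Fixed random prediction points over the unit cavity; the seed keeps
    the set identical across Reynolds numbers and model variants."""
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=(n_pred, 2))

def velocity_errors(pred, ref):
    """MAE and RMSE of one velocity component on the prediction set."""
    err = np.asarray(pred) - np.asarray(ref)
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())
```

Because the generator is seeded, two calls to `prediction_set` return identical points, which is what makes the comparison across models reproducible.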
To isolate the contribution of the proposed enhancement, we compare:
  • BL-PINN (baseline): a PINN trained with L p d e + L b c + L d a t a .
  • VE-PINN (proposed): the baseline augmented with the vorticity transport constraint, L v o r t , and using logarithmic Reynolds embedding as the default input parameterization.
In multi-regime experiments, batch construction is Reynolds-balanced: each training epoch includes per-regime samples with the same sampling rule, and gradients are accumulated over all regimes within the same optimization cycle. This prevents under-representation of low- or high-Re cases during joint training. Concretely, each Reynolds regime uses the same per-epoch quota, so each regime contributes equally to gradient updates.
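The Reynolds-balanced batch construction described above can be sketched as follows; the function name, quota value, and Reynolds set are illustrative assumptions.

```python
import numpy as np

def reynolds_balanced_batch(points_by_re, quota, rng):
    """Draw the same number of points (`quota`) from every Reynolds regime,
    so each regime contributes equally to the accumulated gradient."""
    return {re_val: pts[rng.choice(len(pts), size=quota, replace=False)]
            for re_val, pts in points_by_re.items()}

rng = np.random.default_rng(0)
points_by_re = {re_val: np.random.default_rng(1).uniform(0, 0.01, size=(1000, 2))
                for re_val in (1000, 5000, 10000, 50000)}
batch = reynolds_balanced_batch(points_by_re, quota=128, rng=rng)
# gradients would then be accumulated over all regimes in one optimization cycle
```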

3.4. Ablation Study and Hyperparameter Groups

We conduct controlled experiments by changing one factor at a time while keeping the remaining settings fixed. The experimental groups are summarized in Table 1. Unless otherwise specified by the tested factor, all experiments follow the same training protocol, including Reynolds-balanced batch construction (equal per-regime sampling quota per epoch), identical domain/boundary settings, and fixed random seeds.
To improve clarity with respect to existing PINN enhancement categories, we map our ablation groups to representative method families. Group D corresponds to variable-scaling strategies. Group B is related to loss-balancing methods. Group C provides sampling-sensitivity evidence. The main contribution of this paper is isolated in Group F, which evaluates the effect of the vorticity-transport-enhanced physical constraint under controlled settings. To distinguish the effect of supervised anchoring from that of physics-based constraints, we further introduce Group H, in which the supervised-loss weight λ d a t a is ablated. Setting λ d a t a = 0 yields a no-data variant trained only with boundary-condition and physics-residual terms.
We report field-level errors on a fixed evaluation set for each Reynolds number:
  • MAE for velocity components: M A E ( u ) and M A E ( v ) .
  • RMSE for assessing energy-scale deviations.
For cross-regime evaluation, we define the Multi-Regime Mean Relative Error (MR-MRE) as
$$\mathrm{MR\text{-}MRE}=\frac{1}{N_{Re}}\sum_{k=1}^{N_{Re}}\left(\frac{1}{N_{p}}\sum_{i=1}^{N_{p}}\frac{\left\|\hat{u}_{k,i}-u_{k,i}\right\|_{2}}{\left\|u_{k,i}\right\|_{2}+\varepsilon}\right),$$
where k indexes Reynolds-number cases, i indexes evaluation points, $\hat{u}_{k,i}$ and $u_{k,i}$ denote predicted and reference velocities, and ε is a small constant for numerical stability. Lower MR-MRE indicates better average relative accuracy across regimes.
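For concreteness, the metric can be computed as a direct transcription of the MR-MRE definition above; the array shapes are assumptions for illustration.

```python
import numpy as np

def mr_mre(u_pred, u_ref, eps=1e-8):
    """Multi-Regime Mean Relative Error.

    u_pred, u_ref: arrays of shape (N_Re, N_p, 2) holding the two velocity
    components at each evaluation point of each Reynolds-number case.
    """
    num = np.linalg.norm(u_pred - u_ref, axis=-1)   # ||u_hat - u||_2 per point
    den = np.linalg.norm(u_ref, axis=-1) + eps      # ||u||_2 + eps
    per_case = (num / den).mean(axis=1)             # inner mean over points i
    return per_case.mean()                          # outer mean over cases k
```

A perfect prediction gives MR-MRE = 0, and a uniform 10% relative velocity error gives a value close to 0.1.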
All metrics are computed on the prediction/evaluation grid to ensure fair comparisons across methods and Reynolds regimes. The evaluation grid is strictly disjoint from all training samples and is used only for inference and metric computation.
To improve reproducibility, random seeds are fixed for network initialization and sampling. All models are trained and evaluated under the same domain, boundary conditions, Reynolds-number set, and sampling configuration. This includes using an identical pressure-reference setting across all experiments. Key hyperparameters (network size, loss weights, and optimizer settings) are reported alongside the corresponding numerical results in Section 4.

3.5. Training Protocol and Reproducibility

Unless otherwise stated, all models are trained with the Adam optimizer. The initial learning rate is set to 0.001, and a ReduceLROnPlateau scheduler is applied with factor = 0.8 and patience = 1000 epochs, with a minimum learning rate of $10^{-6}$. The total training budget is 20,000 epochs. For multi-regime training, each mini-batch is constructed to contain samples from all Reynolds-number cases in the training set. Specifically, for each Reynolds-number case, a 500 × 500 interior training grid is used, yielding 250,000 interior collocation points for residual evaluation during training. This design avoids bias toward any single Reynolds-number regime. Early stopping is not explicitly configured in our experiments; training proceeds for the full epoch budget with periodic checkpointing. To ensure reproducibility, we fix random seeds for network initialization, point sampling, and data shuffling. All methods are evaluated under an identical domain (0.01 m × 0.01 m) and boundary conditions (no-slip walls, moving lid).
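In PyTorch, the optimizer and scheduler configuration above can be written roughly as follows. This is a minimal sketch: the network is a stand-in, the loss is a placeholder, and the short loop only demonstrates the plateau-based decay (the actual budget is 20,000 epochs).

```python
import torch

torch.manual_seed(0)
# Stand-in network: inputs (x, y, Re*) -> outputs (u, v, p).
model = torch.nn.Sequential(torch.nn.Linear(3, 64), torch.nn.Tanh(),
                            torch.nn.Linear(64, 3))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# factor = 0.8, patience = 1000 epochs, min LR 1e-6, as in the protocol above
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.8, patience=1000, min_lr=1e-6)

for epoch in range(1500):                         # paper budget: 20,000 epochs
    optimizer.zero_grad()
    loss = model(torch.rand(8, 3)).pow(2).mean()  # placeholder composite loss
    loss.backward()
    optimizer.step()
    scheduler.step(1.0)  # a metric flat for >1000 epochs triggers one LR cut
```

With a monitored metric that never improves, the learning rate is reduced exactly once after the patience window, from 1e-3 to 8e-4, illustrating how the scheduler behaves during long loss plateaus.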

4. Results and Discussion

4.1. Optimization of Network Topology and Input Embedding

The first phase of experiments aimed to establish a robust baseline backbone. As outlined in experimental groups A and D, we investigated the interplay between network capacity and input scaling.

4.1.1. Network Architecture Search

Four network configurations were tested: 64 × 4 (A01), 128 × 8 (A02), 192 × 10 (A03), and 256 × 12 (A04). Figure 3 shows that, as network complexity increases, the MAEs of the velocity components and velocity magnitude tend to increase, whereas the MAE of vorticity shows the opposite trend. Figure 4 further illustrates the distribution of absolute prediction errors. At low Reynolds numbers, increasing network complexity narrows the overall error region but increases the maximum error. At high Reynolds numbers, the maximum error decreases, whereas the spatial spread of the error becomes wider. These results suggest that increasing network complexity alone does not consistently improve flow field prediction accuracy. More suitable input parameterization and physical constraints are still needed for multi-Reynolds-number training. The CFDBench results are obtained from https://github.com/luo-yining/CFDBench (accessed on 4 June 2025).

4.1.2. Effect of Reynolds-Number Parameterization

A critical finding of this study is that the stability and effectiveness of valid multi-regime training are highly sensitive to the parameterization of the Reynolds number.
  • Linear Scaling (D1): When linear normalization is adopted, the Reynolds number is mapped directly to a uniformly spaced coordinate in the embedded space. Although this preserves constant spacing between adjacent training cases, it represents regime variation only in terms of absolute numerical difference. For the present dataset spanning R e = 1000 to R e = 50000 , such a parameterization is not well-suited for unified cross-regime learning. In practice, the network tends to bias the optimization toward high-Re regimes, where the residual distribution and numerical scale differ substantially from those of the low-Re cases, resulting in degraded global consistency.
  • Logarithmic Embedding (D2): To alleviate this issue, the Reynolds number is transformed using a logarithmic embedding, R e * = log 10 R e , followed by min-max normalization before input to the network. This transformation does not simply make the samples redistribute more uniformly. Instead, it reshapes the conditioning variable by expanding the low-Re region and compressing the high-Re region in the embedded space. As shown in Figure 5, which visualizes the embedding mapping, the sample positions in the embedded space, and the spacing between neighboring Reynolds-number cases, the logarithmic embedding allocates higher parameter resolution to the low-Re range. The inset views further show that this redistribution is especially pronounced for R e 10000 . These observations suggest that the logarithmic embedding improves parameter-space conditioning by representing Reynolds-number variation in relative-scale terms rather than absolute numerical distance. This interpretation is consistent with the quantitative results. As shown in Figure 6, the logarithmic embedding reduces the Multi-Regime Mean Relative Error (MR-MRE) by 42% compared with linear scaling, suggesting that logarithmic Reynolds-number embedding is an effective parameterization strategy for unified multi-regime training in the present study.
  • Reciprocal Parameterization (D3): To further assess alternative embeddings, we also tested a reciprocal mapping, R e * = 1 / R e , followed by min-max scaling, under the same network architecture and training budget. This transformation partially compresses the high-Re range, but it over-expands the low-Re interval and introduces excessively non-uniform sensitivity across regimes. Compared with linear scaling (D1), D3 improves the prediction in some individual cases; however, it is less stable than logarithmic embedding (D2) in terms of convergence behavior and cross-regime consistency. Overall, these results indicate that 1 / R e is a useful auxiliary baseline, whereas log 10 R e remains the most effective and robust parameterization for unified multi-regime training in the present study.
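The three parameterizations can be compared directly by inspecting the spacing they induce between neighboring Reynolds-number cases in the embedded space. The intermediate Re values below are illustrative, not necessarily the exact training set.

```python
import numpy as np

def minmax(x):
    """Min-max normalization to [0, 1], as applied after each embedding."""
    return (x - x.min()) / (x.max() - x.min())

re_cases = np.array([1000., 2000., 5000., 10000., 20000., 50000.])

emb = {
    "D1 linear":     minmax(re_cases),
    "D2 log10":      minmax(np.log10(re_cases)),   # default in VE-PINN
    "D3 reciprocal": minmax(1.0 / re_cases),
}

# Gap between the two lowest-Re cases in each embedded space: log10 expands
# the low-Re end relative to linear scaling, while 1/Re over-expands it.
for name, e in emb.items():
    print(f"{name}: first gap = {abs(e[1] - e[0]):.3f}")
```

Running this shows the ordering discussed above: linear scaling compresses the low-Re cases, the logarithmic embedding allocates them noticeably more resolution, and the reciprocal mapping allocates the most, at the cost of strongly non-uniform sensitivity.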
In Figure 7, panel (a) shows that, along the lid-driven velocity direction, D01 has a larger initial error and a smaller error at the end, with a pronounced flattened section in the middle that departs from the ground truth, whereas the D02 trend is closer to the reference. Between panels (b) and (d), the difference between D01 and D02 is small and concentrated near the walls at both ends. In panel (c), D02 controls the boundary velocity much better than D01, which shows larger fluctuations at the driven-velocity boundary. In panels (e) and (f), because the velocity magnitudes near the center vortex and the two side boundaries are small, minor differences can produce larger relative errors, but overall the velocity prediction error remains below approximately 20%. The error of D03 is significantly larger than that of D01 and D02, and the 1/Re parameterization leads to qualitatively different numerical behavior.

4.2. Efficacy of Physics-Enhanced Constraints

This section evaluates the effect of the proposed vorticity-enhanced constraint from both an empirical and an optimization-oriented perspective. We first note that the steady vorticity transport equation is not independent of the incompressible Navier–Stokes system but is derived from it. Accordingly, the gain of VE-PINN should not be interpreted as introducing new governing physics. Instead, the auxiliary vorticity residual acts as a physics-derived, derivative-level regularization term that emphasizes rotational transport consistency, especially in regions with strong shear and weak secondary vortices. The comparison with BL-PINN is therefore intended to assess whether this additional derivative-consistency constraint improves the practical trainability and predictive fidelity of the PINN in multi-Reynolds-number cavity flows. Accordingly, the practical value of VE-PINN in this section is evaluated not only by final prediction error but also by three closely related aspects: optimization stability during training, physical consistency of the reconstructed flow, and the additional computational cost required to obtain these gains.
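As a concrete sketch, the derivative-level vorticity-transport residual can be assembled with automatic differentiation roughly as follows. The non-dimensional steady form r = u ω_x + v ω_y − (1/Re)(ω_xx + ω_yy) with ω = v_x − u_y is assumed; the model interface and helper names are illustrative, not the authors' implementation.

```python
import torch

def vorticity_residual(model, xy, Re):
    """Steady vorticity-transport residual at collocation points `xy`,
    computed by nested automatic differentiation (non-dimensional form)."""
    xy = xy.clone().requires_grad_(True)
    uvp = model(xy)                       # assumed output layout: (u, v, p)
    u, v = uvp[:, 0:1], uvp[:, 1:2]

    def grad(f, wrt):
        return torch.autograd.grad(f, wrt, torch.ones_like(f),
                                   create_graph=True)[0]

    du = grad(u, xy)                      # [u_x, u_y]
    dv = grad(v, xy)                      # [v_x, v_y]
    w = dv[:, 0:1] - du[:, 1:2]           # vorticity w = v_x - u_y
    dw = grad(w, xy)                      # [w_x, w_y]
    w_xx = grad(dw[:, 0:1], xy)[:, 0:1]
    w_yy = grad(dw[:, 1:2], xy)[:, 1:2]

    # convection of vorticity minus viscous diffusion
    return u * dw[:, 0:1] + v * dw[:, 1:2] - (w_xx + w_yy) / Re
```

The mean square of this residual over the interior collocation points would then serve as the auxiliary loss term L v o r t added to the baseline objective.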
To further compare the optimization behavior of the two models, Figure 8 presents the training loss histories of BL-PINN and VE-PINN, including the total loss, PDE residual loss, boundary-condition loss, and data loss. Overall, VE-PINN exhibits a more favorable and more convergent trajectory than BL-PINN in terms of the total loss, PDE loss, and BC loss. This suggests that the vorticity-enhanced formulation does not merely reduce the final error after training but also improves the optimization process itself by providing a more structured derivative-level constraint, leading to more reliable enforcement of physical constraint and boundary consistency across training iterations.
By adjusting the weights λ p d e and λ b c according to Equation (9), it can be observed from Figure 9 that increasing λ b c while decreasing λ p d e leads to an increase in the Mean Absolute Error (MAE) for the velocities u and v. However, in terms of vorticity representation, changing λ p d e and λ b c does not have a significant impact. Therefore, simply adjusting the PDE and BC weights is insufficient to improve vorticity prediction; a more targeted constraint is needed, which motivates the introduction of the vorticity-enhanced loss term L v o r t in the full VE-PINN objective.
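A minimal sketch of how the weighted terms combine into the full objective is given below; the weight values are illustrative placeholders, and the exact weighted form is defined by Equation (9) in the paper.

```python
def ve_pinn_loss(l_pde, l_bc, l_data, l_vort, w):
    """Weighted sum of the four loss terms. Group H's no-data variant
    corresponds to w['data'] = 0, and BL-PINN to w['vort'] = 0."""
    return (w["pde"] * l_pde + w["bc"] * l_bc
            + w["data"] * l_data + w["vort"] * l_vort)

weights = {"pde": 1.0, "bc": 1.0, "data": 1.0, "vort": 0.5}  # illustrative
```

This makes explicit that the ablations in Groups B, F, and H amount to varying or zeroing individual entries of the weight dictionary while keeping the rest of the training protocol fixed.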
To clarify whether the reported performance gain mainly originates from enhanced physics modeling or from supervised anchoring, we further compare BL-PINN and VE-PINN under a no-data setting by setting λ d a t a = 0 . Figure 10 compares the velocity-error distributions of BL-PINN, VE-PINN, BL-PINN ( λ d a t a = 0 ), and VE-PINN ( λ d a t a = 0 ) at R e = 1000 and R e = 50000 . It is clear that removing the supervised term causes a substantial increase in velocity error for both models, confirming that L d a t a provides important anchoring information for reconstructing the global flow field.
Table 2 suggests that the vorticity-transport residual contributes meaningful physical regularization beyond the supervised loss itself. The proposed framework should be interpreted as a physics-enhanced hybrid model: the supervised term provides global field anchoring, while the PDE, boundary, and vorticity-transport constraints improve physical consistency and the reconstruction of shear- and vortex-dominated flow structures.

4.2.1. Improvement in Global Flow Prediction

We compared the baseline model (BL-PINN), which only uses standard Navier–Stokes equation losses, with the enhanced model (VE-PINN), which incorporates a vorticity transport constraint, to evaluate the impact of physically enhanced constraints on the overall flow field prediction accuracy. As shown in Figure 11, the left side represents BL-PINN and the right side represents VE-PINN. Darker colors indicate smaller errors between predicted values and Ansys Fluent computed values, while lighter colors indicate larger errors. Overall, VE-PINN’s errors are significantly smaller than BL-PINN’s, and as the Reynolds number increases, VE-PINN’s velocity field error gradually decreases. Especially at R e = 50000 , BL-PINN exhibits noticeable errors in the secondary vortex structures at the bottom right and top left corners, whereas VE-PINN fits these well.
The observed improvement is consistent with the view that the auxiliary vorticity residual reshapes the optimization landscape by constraining derivative-level behavior in addition to primitive-variable residuals. In cavity flows, especially at elevated Reynolds numbers, the main challenge is often concentrated in localized high-gradient regions rather than in the bulk flow. A constraint that directly targets rotational transport can therefore reduce the tendency of the baseline PINN to over-smooth shear-dominated and vortical structures.

4.2.2. Capturing Vortices

At high Reynolds numbers, weaker, counter-rotating secondary vortices form in the bottom corners of the cavity. These fine structures place higher demands on the model’s physical consistency and its ability to model local gradients. This behavior should be interpreted as a derivative-regularization effect rather than the consequence of introducing an independent physical law. Since the baseline PINN tends to fit smoother and lower-frequency components more easily than localized rotational structures, the added vorticity residual provides a more direct penalty on shear- and vortex-related mismatch. This is particularly beneficial for reconstructing secondary vortices in cavity corners, where the flow features are weak in amplitude but strong in local gradient.
(1)
Baseline Failure: Standard PINNs often over-smooth these low-energy flow features, predicting near-zero velocity fields in the corner regions and thereby missing the secondary vortices entirely.
(2)
Effective Capture by Enhanced Model: With the introduction of the vorticity transport constraint, the model successfully identifies and reconstructs the corner secondary vortices. Figure 12 and Figure 13 present the vorticity distribution results within the region $(x, y) \in [0, 0.01]^2$. It can be observed that VE-PINN provides richer flow field details in its predictions of separation-point locations and recirculation centers compared to BL-PINN. Without vorticity enhancement, as the top-lid velocity increases, the gradients at the edges of the primary vortex structure become concentrated, and the corner secondary vortex structures dissipate, making them difficult to resolve and capture. From the VE-PINN predictions, as the Reynolds number increases, changes also occur within the primary vortex structure, which in turn leads to the generation of cascaded secondary vortex structures.
Figure 12 and Figure 13 include both in-range cases ( R e   =   1000 ,   10000 ,   20000 ,   50000 ) and extrapolation cases ( R e   =   100 ,   100000 ) to examine vortex-structure prediction beyond the training Reynolds number interval.
While these contour plots clearly illustrate the improved reconstruction of secondary vortices, qualitative visualization alone is not sufficient for a rigorous assessment. Therefore, a further quantitative validation is provided in Section 4.2.3 using benchmark centerline profiles, vortex-center errors, and peak-vorticity comparisons.

4.2.3. Quantitative Validation of Vortex Capture

Although the vorticity contours in Figure 12 and Figure 13 qualitatively demonstrate that VE-PINN better reconstructs the secondary vortices than BL-PINN, a more rigorous quantitative validation is necessary. To this end, we further evaluate the vortex-capture capability from three complementary perspectives: centerline velocity profiles, vortex-center location, and peak vorticity magnitude. Figure 14 compares the predicted centerline velocity profiles with the classical benchmark data of Ghia et al. [4] and the ground-truth CFD solution. Overall, VE-PINN shows better agreement with both references than BL-PINN, especially in the vortex-dominated regions. This improvement is particularly evident in the u-velocity profile at y = 0.5 H, where VE-PINN more accurately reproduces the profile shape and local extrema, while BL-PINN exhibits stronger deviation and excessive smoothing. To further quantify vortex localization accuracy, Table 3 reports the predicted vortex-center coordinates for the primary vortex and the bottom-right secondary vortex at R e = 1000 and R e = 10000, together with the corresponding absolute location errors. For the primary vortex, VE-PINN generally reduces the center-location error relative to BL-PINN. More importantly, the improvement is much more pronounced for the weaker secondary vortex. At R e = 10000, for example, the center-location error of the bottom-right secondary vortex decreases from (11.01%, 21.33%) in BL-PINN to (3.23%, 6.31%) in VE-PINN, showing a substantially improved prediction of the recirculation core. In addition, Table 3 also compares the peak vorticity magnitude within representative vortex regions. VE-PINN consistently provides values closer to the reference solution than BL-PINN. For the bottom-right secondary vortex at R e = 10000, the reference peak vorticity is 4.0531, compared with 3.0159 for BL-PINN and 3.7231 for VE-PINN.
Similar trends are observed at R e   =   1000 and for the primary vortex, indicating that the proposed vorticity-enhanced formulation not only improves vortex localization but also better preserves the local rotational strength. These quantitative results are consistent with the contour-based observations and further confirm that the superiority of VE-PINN is not merely qualitative. Instead, it achieves more accurate benchmark-profile reconstruction, vortex-center prediction, and vorticity-intensity preservation, particularly for secondary vortices at higher Reynolds numbers.
In Figure 14, the diamond symbols represent the reference data from Ghia, the solid lines denote the reference data from CFDBench, the dash-dot lines correspond to the predictions by BL-PINN, and the dashed lines represent the predictions by VE-PINN. For the different curves, green indicates results at R e = 1000 , and orange indicates results at R e = 10,000 .

4.3. Parameter Sensitivity Analysis

To evaluate the robustness of VE-PINN and to verify that its performance gain does not rely on a fragile set of hyperparameters, we conduct a systematic sensitivity analysis. Beyond standard hyperparameter tuning, this analysis also serves a second purpose: to examine the optimization trade-off introduced by the vorticity-enhanced term. Because the auxiliary constraint imposes additional derivative-based residuals through automatic differentiation, it may improve rotational consistency while also increasing optimization stiffness if assigned an excessively large weight. The sensitivity analysis is therefore used here to identify a practically stable operating range of the proposed physics-derived regularization.

4.3.1. Synergistic Weights

The total loss consists of several competing objectives. The main trade-off lies between boundary-condition enforcement and the residual constraints from the governing equations and vorticity transport. If the weights are not properly balanced, the model may overfit the boundary conditions while sacrificing interior physical consistency, or vice versa. To investigate this issue, Group B systematically varies the relative weights of the Navier–Stokes residual and the vorticity-transport loss. The corresponding parameter settings are summarized in Table 4. By comparing convergence speed, global MAE, and local vortex reconstruction quality under different weight combinations, we assess the synergy among the loss terms and its influence on model stability.
From the comparison of the overall RMSE of the velocity magnitude in Table 4, it can be observed that the RMSE does not vary linearly with the relative changes in the PDE and vorticity weights. Under low-Reynolds-number conditions, the performance difference among weight combinations is small; however, as the Reynolds number increases, the influence of the PDE and vorticity weights on model performance strengthens significantly, indicating that in high-Reynolds-number flows the relative strength of the physical constraints plays a crucial role in the stability and accuracy of the solution. Based on this, we further analyzed the weight sensitivity from the perspective of velocity-distribution details in different spatial regions of the flow field.
In particular, the sensitivity to λ v o r t provides insight into whether the gain of VE-PINN originates from a useful derivative-level regularization or from an overly stiff auxiliary constraint. The results indicate that a moderate vorticity-loss weight improves reconstruction quality, whereas excessively large λ v o r t values can deteriorate convergence stability and final accuracy. This supports the interpretation that the vorticity term is beneficial when used as a controlled auxiliary regularizer rather than as a dominant optimization target.
Vorticity weight ( λ v o r t ): As shown in Figure 15, we tested λ v o r t = 0.2 / 0.5 / 1.0 / 2.0 . The results indicate that a medium-sized vorticity weight achieves the optimal accuracy-stability trade-off across different Reynolds-number ranges. A weight that is too small is insufficient to suppress high-order derivative noise, while a weight that is too large excessively restricts the solution space, leading to underfitting of local structures.
PDE weight ( λ p d e ): Similarly, as shown in Figure 15, a λ p d e that is too small weakens the model’s physical consistency, causing prediction results to deviate from Navier–Stokes dynamics, while a λ p d e that is too large suppresses the model’s ability to fit supervised data, thereby increasing the final MAE. Experiments show that an appropriate PDE weight achieves the best balance between physical constraints and data-driven learning, leading to optimal overall performance.

4.3.2. Collocation Point Density

In the default configuration, the interior grid resolution is 500 points in each spatial direction, i.e., a 500 × 500 interior training grid for each Reynolds-number case. We examine the effect of collocation density to illustrate the trade-off between computational cost and the strength of physics enforcement.
As shown in Table 5, increasing the collocation density generally reduces the RMSE of the velocity magnitude, indicating that a denser interior grid can enforce the governing-equation residuals more effectively and thereby improve prediction accuracy. In the following, we further compare velocity distributions at different spatial locations to qualitatively examine how collocation density affects local flow structures and gradient resolution. As also shown by the velocity-error distribution in Figure 16, the case with the lower collocation density exhibits larger overall errors in the horizontal velocity u than the denser-grid case.

4.4. Computational Time Analysis

All experiments were performed on an NVIDIA RTX 4090D GPU (24 GB) and a 16-vCPU Intel(R) Xeon(R) Platinum 8481C CPU. Figure 17 summarizes the computational time under the tested settings.
Across cases F01–F03, adding the vorticity-enhanced component increases the training time by approximately 30% relative to the baseline PINN. This increase is expected because the auxiliary vorticity residual introduces additional derivative evaluations through automatic differentiation. However, the added cost remains moderate in relation to the observed gains. In particular, VE-PINN improves not only the final prediction accuracy but also the convergence behavior during training and the physical consistency of reconstructed vortex-dominated structures.
From a practical standpoint, this trade-off is favorable for steady parametric flow prediction because the additional training cost is substantially smaller than the cost of repeated CFD simulations and is comparable to the overhead commonly incurred when enlarging network size or tuning multiple loss-weight combinations. Therefore, the proposed formulation offers a practically attractive compromise between computational expense, optimization reliability, and physically meaningful prediction quality.
Overall, the results in Section 4 indicate that the practical advantage of VE-PINN is not limited to lower relative errors against the baseline PINN. Rather, its benefit is reflected in three connected aspects: (i) more stable optimization behavior, as evidenced by smoother and more favorable convergence of the training losses; (ii) improved physical consistency, especially in the reconstruction of shear layers, secondary vortices, vortex centers, and vorticity intensity; and (iii) a moderate computational overhead that remains acceptable for steady parametric surrogate modeling. These observations support the use of VE-PINN as a practically preferable alternative to the baseline model for the benchmark considered in this study.

5. Conclusions

This paper proposes a VE-PINN as a targeted PINN enhancement for steady two-dimensional lid-driven cavity flow across a wide Reynolds-number range. The main methodological contribution is not the introduction of a new independent physical model or governing law, but the coupled integration of two complementary components within a unified parametric PINN framework: (i) a logarithmic Reynolds-number embedding for improved cross-regime parameter conditioning and (ii) a vorticity-transport residual used as an auxiliary derivative-level regularization term to better constrain vortex-dominated flow structures. Relative to existing PINN enhancement lines, the novelty of this work therefore lies primarily in this combined design for robust multi-regime training, rather than in either component viewed in isolation.
A second, supporting contribution is the controlled benchmark and ablation protocol used to verify these design choices under matched domain settings, Reynolds-number cases, sampling rules, and baseline comparisons. Its role is to isolate and attribute the observed performance gains more clearly, rather than to serve as the main methodological novelty itself. Within this framework, the effects of parameter embedding, vorticity-based regularization, and other implementation factors can be distinguished in a more reproducible and interpretable manner.
Experiments over R e = 1000–50,000 indicate that VE-PINN provides improved convergence stability and flow-field accuracy relative to the baseline PINN, with clearer advantages at higher Reynolds numbers within the investigated steady regime. The ablation results are consistent with these observations. Compared with linear Reynolds scaling, the logarithmic embedding leads to more balanced multi-regime learning and improved conditioning across Reynolds-number cases. Compared with standard Navier–Stokes-only constraints, the vorticity-enhanced formulation better captures local vortex dynamics, secondary-flow structures, and shear-dominated regions. These findings suggest that, for the steady cavity benchmark considered here, a physically targeted inductive bias may be more beneficial than merely increasing network size.
At the same time, the present study does not interpret the vorticity equation as an independent physical law beyond the incompressible Navier–Stokes system. Instead, the vorticity-enhanced term is understood as a physics-informed derivative-level regularization, whose practical value is supported here by empirical comparisons, ablation studies, and sensitivity analysis. A rigorous theoretical characterization of why this auxiliary constraint improves optimization and approximation quality is beyond the scope of the present work and will be investigated in future research.
These observations are consistent with recent studies showing that PINN performance is sensitive to parameterization, loss construction, and optimization conditioning. They also support the broader view that tailored physical constraints can improve solution quality and training robustness. Future work may further explore integrations with structure-aware architectures, such as graph-based models, to improve scalability and representation capability.
From an application perspective, VE-PINN may serve as a practical surrogate for repeated CFD evaluation in similar steady parametric settings, such as preliminary design screening, parametric analysis, and digital-twin-assisted flow monitoring. Because the framework improves robustness without substantially increasing model complexity, it is suitable for engineering workflows that require both physical consistency and computational efficiency.
Nevertheless, several limitations remain. The current study focuses on a steady two-dimensional benchmark with fixed collocation sampling and manually tuned loss weights. The out-of-range case at R e = 100000 should be interpreted with caution. In this work, it is evaluated only against a numerically converged steady CFD reference obtained under the same steady formulation and is used to assess extrapolation toward an unseen steady solution branch. We do not claim that this result characterizes all possible flow states at that Reynolds number, since potential bifurcations, multiple steady branches, or unsteady dynamics are beyond the scope of the present study. More generally, parametric extrapolation is inherently less reliable than interpolation, and its accuracy may degrade as the test condition moves farther from the training range or approaches regime transitions.
Future work will extend VE-PINN to unsteady and three-dimensional flows, turbulent regimes, and more complex geometries. It will also investigate adaptive residual sampling, automatic loss balancing, and robustness under noisy or partially observed data. In addition, hybrid architectures, such as PINN-CNN or PINN-GNN frameworks, and multi-fidelity transfer strategies may further improve scalability and generalization in practical fluid dynamics applications.

Author Contributions

Conceptualization, F.P.; methodology, Y.Z.; validation, Z.W.; data curation, J.L.; visualization, S.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (grant number 52371343), the Naval University of Engineering Independent Research Program (grant number 202550H060), and the China Vocational Education Association Hubei Branch (grant number HBZJ2025034).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AD: Automatic Differentiation
AE-NN: Deep-Autoencoder Neural Network
BL-PINN: Baseline Physics-Informed Neural Network
CFD: Computational Fluid Dynamics
CFDbench: CFD Benchmark Dataset
CNN: Convolutional Neural Network
CRVPINN: Collocation-based Robust Variational Physics-Informed Neural Network
GNN: Graph Neural Network
LA-PINN: Loss-Attentional Physics-Informed Neural Network
LDC: Lid-Driven Cavity
LES: Large Eddy Simulation
LR: Learning Rate
LSTM: Long Short-Term Memory
MAE: Mean Absolute Error
MLP: Multi-layer Perceptron
MR-MRE: Mean Relative Error across Multiple Regimes
NAS-PINN: Neural Architecture Search-guided Physics-Informed Neural Network
NTK: Neural Tangent Kernel
PDE: Partial Differential Equation
PINN: Physics-Informed Neural Network
PIV: Particle Image Velocimetry
POD: Proper Orthogonal Decomposition
PRNN: Physics-Reinforced Neural Network
Re: Reynolds number
RMSE: Root Mean Square Error
RNN: Recurrent Neural Network
ROM: Reduced Order Model
VC-PINN: Variable Coefficient Physics-Informed Neural Network
VE-PINN: Vorticity-Enhanced Physics-Informed Neural Network

References

1. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707.
2. Raissi, M.; Yazdani, A.; Karniadakis, G.E. Hidden fluid mechanics: A Navier-Stokes informed deep learning framework for assimilating flow visualization data. arXiv 2018, arXiv:1808.04327.
3. Lawal, Z.K.; Yassin, H.; Lai, D.T.C.; Che Idris, A. Physics-informed neural network (PINN) evolution and beyond: A systematic literature review and bibliometric analysis. Big Data Cogn. Comput. 2022, 6, 140.
4. Ghia, U.; Ghia, K.N.; Shin, C.T. High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method. J. Comput. Phys. 1982, 48, 387–411.
5. Chiu, P.H.; Wong, J.C.; Ooi, C.; Dao, M.H.; Ong, Y.S. CAN-PINN: A fast physics-informed neural network based on coupled-automatic–numerical differentiation method. Comput. Methods Appl. Mech. Eng. 2022, 395, 114909.
6. Rahaman, N.; Baratin, A.; Arpit, D.; Draxler, F.; Lin, M.; Hamprecht, F.A. On the spectral bias of neural networks. arXiv 2018, arXiv:1806.08734.
7. Wang, S.; Teng, Y.; Perdikaris, P. Understanding and mitigating gradient flow pathologies in physics-informed neural networks. SIAM J. Sci. Comput. 2021, 43, A3055–A3081.
8. Wang, C.; Li, S.; He, D.; Wang, L. Is L2 Physics Informed Loss Always Suitable for Training Physics Informed Neural Network? Adv. Neural Inf. Process. Syst. 2022, 35, 8278–8290.
9. Sekar, V.; Jiang, Q.; Shu, C. Accurate near wall steady flow field prediction using Physics Informed Neural Network (PINN). arXiv 2022.
10. Ko, S.; Park, S. VS-PINN: A fast and efficient training of physics-informed neural networks using variable-scaling methods for solving PDEs with stiff behavior. J. Comput. Phys. 2025, 529, 113860.
11. Łoś, M.; Służalec, T.; Maczuga, P.; Vilkha, A.; Uriarte, C.; Paszyński, M. Collocation-based robust variational physics-informed neural networks (CRVPINNs). Comput. Struct. 2025, 316, 107839.
12. Song, Y.; Wang, H.; Yang, H. Loss-attentional physics-informed neural networks. J. Comput. Phys. 2024, 501, 111722.
13. Li, Y.; Sun, Q.; Wei, J.; Huang, C. An Improved PINN Algorithm for Shallow Water Equations Driven by Deep Learning. Symmetry 2024, 16, 1376.
14. Miao, Z.; Chen, Y. VC-PINN: Variable coefficient physics-informed neural network for forward and inverse problems of PDEs with variable coefficient. Phys. D Nonlinear Phenom. 2023, 456, 133945.
15. Wang, Y.; Zhong, L. NAS-PINN: Neural architecture search-guided physics-informed neural network for solving PDEs. J. Comput. Phys. 2024, 496, 112603.
16. Wandel, N.; Weinmann, M.; Neidlin, M.; Klein, R. Spline-PINN: Approaching PDEs without Data using Fast, Physics-Informed Hermite-Spline CNNs. arXiv 2021.
17. Bafghi, R.A.; Raissi, M. PINNs-Torch: Enhancing Speed and Usability of Physics-Informed Neural Networks with PyTorch. The Symbiosis of Deep Learning and Differential Equations III. 2023. Available online: https://openreview.net/forum?id=nl1ZzdHpab (accessed on 11 November 2023).
18. Noorizadegan, A.; Young, D.; Hon, Y.; Chen, C. Power-Enhanced Residual Network for Function Approximation and Physics-Informed Inverse Problems. Appl. Math. Comput. 2024, 480, 128910.
19. Luo, Y.; Chen, Y.; Zhang, Z. CFDBench: A Comprehensive Benchmark for Machine Learning Methods in Fluid Dynamics. arXiv 2024.
20. Iuliano, E. Towards a POD-based Surrogate Model for CFD Optimization. In Proceedings of the ECCOMAS CFD and Optimization 2011, Antalya, Turkey, 23–25 May 2011.
21. Raibaudo, C.; Piquet, T.; Schliffke, B.; Conan, B.; Perret, L. POD analysis of the wake dynamics of an offshore floating wind turbine model. J. Phys. Conf. Ser. 2022, 2265, 022085.
22. Karcher, N. POD-Based Model-Order Reduction for Discontinuous Parameters. Fluids 2022, 7, 242.
23. Li, T.; Zou, S.; Chang, X.; Zhang, L.; Deng, X. Predicting unsteady incompressible fluid dynamics with finite volume informed neural network. Phys. Fluids 2024, 36, 23.
24. Chen, J.; Hachem, E.; Viquerat, J. Graph neural networks for laminar flow prediction around random 2D shapes. arXiv 2021.
25. Gao, R.; Jaiman, R.K. Predicting fluid–structure interaction with graph neural networks. Phys. Fluids 2024, 36, 17.
26. Zeng, F.; Zeng, Y.; Zhao, P.; Liu, Z.; Li, W. Nonlinear reduced-order analysis of three-dimensional thermal stratification phenomenon in the upper plenum of a lead-bismuth cooled fast reactor based on a graph neural network. Prog. Nucl. Energy 2025, 188, 105874.
27. Shen, Y.; Alonso, J.J. Performance Evaluation of a Graph Neural Network-Augmented Multi-Fidelity Workflow for Predicting Aerodynamic Coefficients on Delta Wings at Low Speed. In Proceedings of the AIAA SCITECH 2025 Forum, Orlando, FL, USA, 6–10 January 2025.
28. Shen, Y.; Needels, J.T.; Alonso, J.J. VortexNet: A Graph Neural Network-Based Multi-Fidelity Surrogate Model for Field Predictions. In Proceedings of the AIAA SCITECH 2025 Forum, Orlando, FL, USA, 6–10 January 2025.
29. Chen, W.; Wang, Q.; Hesthaven, J.S.; Zhang, C. Physics-informed machine learning for reduced-order modeling of nonlinear problems. J. Comput. Phys. 2021, 446, 110666.
30. Wang, S.; Chen, X.; Geyer, P. Feasibility analysis of POD and deep autoencoder for reduced order modelling indoor environment CFD prediction. In Proceedings of the Building Simulation Conference Proceedings 2023, Shanghai, China, 4–9 September 2023.
31. Yan, C.; Xu, S.; Sun, Z.; Guo, D.; Ju, S.; Huang, R.; Yang, G. Exploring hidden flow structures from sparse data through deep-learning-strengthened proper orthogonal decomposition. Phys. Fluids 2023, 35, 037119.
32. Mohammadpour, J.; Li, X.; Salehi, F. Modelling Cryogenic Hydrogen Jet Dispersion: CFD-POD-ML Insights. In Proceedings of the 24th Australasian Fluid Mechanics Conference—AFMC2024, Canberra, Australia, 1–5 December 2024.
33. Geneva, N.; Zabaras, N. Modeling the dynamics of PDE systems with physics-constrained deep auto-regressive networks. J. Comput. Phys. 2020, 403, 109056.
34. McClenny, L.; Braga-Neto, U. Self-adaptive physics-informed neural networks using a soft attention mechanism. arXiv 2024, arXiv:2009.04544.
35. Shao, X.; Liu, Z.; Zhang, S.; Zhao, Z.; Hu, C. PIGNN-CFD: A physics-informed graph neural network for rapid predicting urban wind field defined on unstructured mesh. Build. Environ. 2023, 232, 110056.
36. Wang, Y.; Qiu, X.; Pei, Q.; Wang, J.; Zhang, P.; Bai, X. FAMAW-PINN: A physics-informed neural network integrating adaptive loss weighting with firefly-inspired adaptive point movement. J. Comput. Phys. 2025, 542, 114363.
37. Wu, Z.; Pan, S.; Chen, F.; Long, G.; Zhang, C.; Yu, P.S. A comprehensive survey on graph neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 4–24.
Figure 1. Framework of VE-PINN.
Figure 2. Domain definition [19].
Figure 3. Effect of Hidden Dimension and Layers.
Figure 4. Flow field contour comparison on absolute error.
Figure 5. Effect of Reynolds number parameterization on parameter-space geometry.
Figure 6. Normalization strategy comparison.
Figure 7. Profile comparison between D01, D02, and D03.
Figure 8. Training loss histories of BL-PINN and VE-PINN.
Figure 9. Effect of PDE weight on model performance.
Figure 10. Comparison of velocity-error fields under the default hybrid setting and the no-data setting (λ_data = 0) at Re = 1000 and Re = 50,000.
Figure 11. Error comparison between BL-PINN and VE-PINN.
Figure 12. Vorticity contour of BL-PINN.
Figure 13. Vorticity contour of VE-PINN.
Figure 14. Comparison of centerline velocity profiles with Ghia et al. benchmark data [4].
Figure 15. Comparison on synergistic weights.
Figure 16. Comparison of different collocation densities.
Figure 17. Computational time for each case.
Table 1. Parametric Study Design.

| Group ID | Category | Variables Tested | Values | Purpose |
|---|---|---|---|---|
| A | Network Architecture | width; depth; activation | width: 64/128/256/512; depth: 4/6/8/12/16; activation: tanh | Identify a stable MLP configuration for AD-based residuals |
| B | Loss Weights | λ_pde, λ_bc, λ_vort | λ_pde: 2.0/5.0/8.0; λ_bc: 5/10/20; λ_vort: 0.5/1.0/2.0 | Study sensitivity to loss balancing and vorticity constraint strength |
| C | Sampling/Resolution | interior grid resolution | 250 × 250 / 500 × 500 | Evaluate robustness to sampling density |
| D | Normalization Strategy | Reynolds embedding; input scaling | embedding: linear / log10 / 1/Re; scaling: 50 k / 100 k / ∞ | Improve conditioning for multi-regime learning |
| E | Optimization Strategy | optimizer; LR schedule; initialization | Adam/L-BFGS/AdamW/RMSprop; constant/step/cosine; He/Xavier | Compare optimization dynamics and convergence stability |
| F | Physics Constraints | constraint type | None / Vorticity Transport | Quantify the benefit of the vorticity transport constraint |
| G | Training Dynamics | batch size; augmentation | batch: 256/512/1024/2048; augmentation: none / Gaussian noise (σ = 0.01) / Random Fourier | Test convergence–memory trade-offs and training robustness |
| H | Supervision Ablation | λ_data | default / 0 | Distinguish the effect of supervised data from that of physics-based constraints |
Table 2. MAE comparison under the default hybrid setting and the no-data setting.

| Re | BL-PINN | VE-PINN | BL-PINN (λ_data = 0) | VE-PINN (λ_data = 0) |
|---|---|---|---|---|
| 1000 | 0.060510 | 0.046405 | 1.217626 | 0.329759 |
| 10,000 | 0.218097 | 0.109516 | 12.023779 | 3.058282 |
| 20,000 | 0.450735 | 0.219769 | 24.083371 | 6.116485 |
| 50,000 | 1.712135 | 1.589577 | 59.950204 | 14.960957 |
Table 3. Quantitative comparison of vortex-center location and peak vorticity. Centers are given as (x_c, y_c); ω_peak denotes the peak vorticity magnitude.

| Re | Vortex Region | Reference Center | BL-PINN Center | VE-PINN Center | BL Center Abs. Error (%) | VE Center Abs. Error (%) | Reference ω_peak | BL ω_peak | VE ω_peak |
|---|---|---|---|---|---|---|---|---|---|
| 1000 | Primary vortex | (0.5313, 0.5625) | (0.5301, 0.5312) | (0.5316, 0.5811) | (0.23, 5.56) | (0.06, 3.31) | 2.04968 | 1.9613 | 2.1201 |
| 1000 | Bottom-right secondary vortex | (0.8594, 0.1094) | (0.7651, 0.0814) | (0.8058, 0.1189) | (10.97, 25.59) | (6.24, 8.68) | 1.15465 | 1.0154 | 1.1843 |
| 10,000 | Primary vortex | (0.5117, 0.5333) | (0.4918, 0.5102) | (0.5202, 0.5561) | (3.89, 4.33) | (1.66, 4.28) | 1.88082 | 1.9671 | 1.9431 |
| 10,000 | Bottom-right secondary vortex | (0.7656, 0.0586) | (0.6813, 0.0711) | (0.7903, 0.0623) | (11.01, 21.33) | (3.23, 6.31) | 4.05310 | 3.0159 | 3.7231 |
Table 4. Weight Parameters of Vorticity and PDE in Group B.

| ID | λ_vort | λ_pde | Velocity-Magnitude RMSE (Re = 1000) | Velocity-Magnitude RMSE (Re = 10,000) | Velocity-Magnitude RMSE (Re = 50,000) |
|---|---|---|---|---|---|
| B01 | 0.5 | 8 | 0.075475 | 0.580357 | 2.327602 |
| B02 | 1.0 | 5 | 0.077873 | 0.417447 | 3.895379 |
| B03 | 2.0 | 2 | 0.075728 | 0.279227 | 1.883633 |
Table 5. Effect of collocation point density in group C.

| ID | Interior Grid Resolution | Velocity-Magnitude RMSE (Re = 1000) | Velocity-Magnitude RMSE (Re = 10,000) | Velocity-Magnitude RMSE (Re = 50,000) |
|---|---|---|---|---|
| C01 | 250 × 250 | 0.090725 | 0.674081 | 4.021075 |
| C02 | 500 × 500 | 0.088883 | 0.374291 | 2.180845 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
