Article

Purely Physics-Driven Neural Networks for Tracking the Spatiotemporal Evolution of Time-Dependent Flow

1 School of Physics, Northwest University, Xi’an 710127, China
2 Shaanxi Provincial Basic Science Center (Quantum Physics), Xi’an 710127, China
* Authors to whom correspondence should be addressed.
Appl. Sci. 2026, 16(5), 2294; https://doi.org/10.3390/app16052294
Submission received: 27 January 2026 / Revised: 20 February 2026 / Accepted: 25 February 2026 / Published: 27 February 2026
(This article belongs to the Section Computing and Artificial Intelligence)

Abstract

As a mesh-free solving paradigm, Physics-Informed Neural Networks (PINNs) demonstrate potential in both forward and inverse problems by embedding physical equations into the loss function. However, they still face challenges in capturing the spatiotemporal evolution of complex physical processes. When applied to time-dependent complex flows, such as high-Reynolds-number cylinder flow, they often rely on supervised data, which is frequently difficult to obtain accurately in practice. To address these issues, this paper proposes a novel unsupervised solving framework—the Adaptive Hard-Constraint Physics-Informed Neural Network (AHC-PINN). This method integrates an adaptive sampling mechanism based on partial differential equation residuals with a hard-constraint strategy. By dynamically evaluating the contribution of collocation points to the loss and incorporating analytically embedded boundary constraints, it directs the network training entirely toward solving the governing equations. Using two-dimensional unsteady cylinder flow as a validation case, experimental results show that AHC-PINN significantly improves the prediction accuracy of wake evolution under unsupervised conditions. Its performance surpasses that of traditional soft-constraint PINNs by an order of magnitude and is even superior to methods using sparse supervised data. Furthermore, through analysis of the PDE loss and gradient distribution, the study explicitly identifies the impact of large-gradient regions on PINN training stability and prediction accuracy, providing a basis for subsequent optimization.

1. Introduction

Computational fluid dynamics is a critical tool for understanding and predicting complex flow phenomena across a wide range of engineering and scientific problems, with applications from aircraft design to environmental forecasting [1,2]. Traditional numerical methods, such as the finite volume and finite element methods, have seen widespread practical success by discretizing the governing equations on computational grids [3,4,5]. However, traditional grid-based methods face high computational costs and geometric constraints when resolving high-fidelity or unsteady flows [6,7,8]. Recently, Physics-Informed Neural Networks (PINNs) have emerged as a mesh-free alternative for solving partial differential equations [9], integrating physical laws directly into the loss function and showing promise for complex geometries and data-sparse problems [10,11,12,13,14,15]. In fluid mechanics, PINNs have been applied to complex boundary flows [16], non-Newtonian viscosity modeling [17], transient pipeline flows [18], and heat convection [19].
However, fully unsupervised PINNs struggle with strongly time-dependent flows characterized by broad spatiotemporal scales and localized dynamic features, such as vortex shedding in cylinder wakes [14,20,21]. These flows exhibit sharp transitions from slowly varying regions to highly periodic vortex structures, posing challenges in accurately capturing separation, shear layers, and vortex evolution [22,23]. To enhance the performance of PINNs in such problems, various approaches have been explored, such as incorporating supervised data, employing mixed-variable schemes, or adopting enhanced temporal modeling techniques [24,25,26,27]. Due to the inherent limitations of PINNs in directly handling such strongly time-dependent and physically intense flows, existing studies often rely on supervised data—such as sparse measurement points in the wake region—to improve accuracy [28,29]. However, in actual experiments, obtaining even sparse yet precise transient data is often challenging. For instance, hot-wire or hot-film anemometers, while capable of providing flow information, are limited by their intrusive interference with the flow field [21]. Particle Image Velocimetry (PIV), although able to capture instantaneous velocity distributions in a plane or volumetric domain, may suffer from measurement bias because particles may not perfectly follow fluid parcel motion, thereby constraining its accuracy [30,31]. These issues fundamentally limit the practicality and generalizability of such hybrid methods in complex flow modeling. Therefore, developing a PINN-based approach capable of accurately modeling strongly time-dependent, complex spatiotemporally evolving flows under a fully unsupervised condition holds significant theoretical importance and practical value.
To overcome these limitations, this work proposes a fully unsupervised framework termed the Adaptive Hard-Constrained Physics-Informed Neural Network (AHC-PINN). This framework integrates an adaptive sampling scheme driven by the contribution of PDE residuals with a hard-constrained strategy. At its core lies a dynamic sampling algorithm that continuously and quantitatively assesses the contribution of each collocation point to the total loss during training, thereby dynamically prioritizing the allocation of collocation points to regions of high physical activity. Building upon this, the hard-constrained strategy further focuses the training objective on the residual of the governing equations. Validation on a two-dimensional unsteady cylinder flow problem demonstrates that AHC-PINN achieves significantly superior performance in both accuracy and training stability compared to baseline methods. Furthermore, through an analysis of PDE residuals and flow field gradients, this study explicitly identifies the adverse impact of large-gradient regions on the training stability and predictive accuracy of PINNs. This finding provides concrete guidance for subsequent methodological improvements.

2. Methodology

2.1. Navier–Stokes Equations

This work considers a two-dimensional incompressible flow governed by the Navier–Stokes equations. The governing equations are expressed as follows:
\[
\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} + \frac{1}{\rho}\nabla p - \nu \nabla^2 \mathbf{u} = 0,
\]
\[
\nabla \cdot \mathbf{u} = 0.
\]
In the equations, \( \mathbf{u} \), \( p \), \( \rho \), and \( \nu \) represent the velocity vector, fluid pressure, density, and kinematic viscosity, respectively. For incompressible flow, \( \rho \) and \( \nu \) are characteristic fluid parameters that remain constant in space and time. The dimensionless two-dimensional form is given as follows:
\[
\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + \frac{\partial p}{\partial x} - \frac{1}{Re}\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) = 0,
\]
\[
\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + \frac{\partial p}{\partial y} - \frac{1}{Re}\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) = 0,
\]
\[
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,
\]
where u and v are the velocity components in the x and y directions, respectively, and the Reynolds number \( Re = UL/\nu \) is a dimensionless parameter. Nondimensionalization, one of the simplest and most effective ways to improve model performance, converts physical quantities into dimensionless form by introducing characteristic scales, thereby simplifying the equations, reducing the number of parameters, and highlighting the dominant physical mechanisms [32]. In this study, the cylinder diameter in the flow-past-a-cylinder model is selected as the characteristic length L, the free-stream velocity as the characteristic velocity U, the characteristic time as \( T = L/U \), and the characteristic pressure as \( P = \rho U^2 \). The dimensionless variables are defined as:
\[
x^* = \frac{x}{L}, \quad y^* = \frac{y}{L}, \quad t^* = \frac{t}{T}, \quad u^* = \frac{u}{U}, \quad v^* = \frac{v}{U}, \quad p^* = \frac{p}{P}.
\]
Numerous studies have successfully approximated solutions to PDEs within dimensionless frameworks [33,34,35,36].

2.2. PINN Method

PINNs represent a novel scientific machine learning framework that embeds physical laws into the neural network training process in the form of PDE loss. The core idea is to utilize neural networks as approximating functions to directly learn solutions that satisfy specific governing physical equations, thereby unifying forward problem solving and inverse parameter identification under a single optimization-based paradigm. For a general PDE system:
\[
\mathcal{N}[u(\mathbf{x}, t); \lambda] = 0, \quad \mathbf{x} \in \Omega, \; t \in [0, T],
\]
where \( \mathcal{N} \) is the differential operator, u is the variable of the physical field to be solved, \( \lambda \) represents the physical parameters, and \( \Omega \) denotes the spatial domain. PINNs employ a feedforward neural network \( u_{\mathrm{NN}}(\mathbf{x}, t; \theta) \) to approximate the solution u. Its inputs are the spatial and temporal coordinates \( (\mathbf{x}, t) \), its output is the corresponding predicted value of the physical field, and \( \theta \) represents the weights and biases of the network. The network training is achieved by constructing a composite loss function that incorporates the residuals of the governing equations, initial conditions, boundary conditions, and data residuals:
\[
\mathcal{L}(\theta) = \omega_{\mathrm{PDE}} \mathcal{L}_{\mathrm{PDE}}(\theta) + \omega_{\mathrm{IC}} \mathcal{L}_{\mathrm{IC}}(\theta) + \omega_{\mathrm{BC}} \mathcal{L}_{\mathrm{BC}}(\theta) + \omega_{\mathrm{Data}} \mathcal{L}_{\mathrm{Data}}(\theta).
\]
Here, the physics-equation loss \( \mathcal{L}_{\mathrm{PDE}} \) enforces compliance with the governing physical equations. For the dimensionless Navier–Stokes equations considered in Equations (3)–(5), the residual on a set of collocation points \( \{(\mathbf{x}_i, t_i)\}_{i=1}^{N_{\mathrm{PDE}}} \) within the computational domain is:
\[
\mathcal{L}_{\mathrm{PDE}}(\theta) = \frac{1}{N_{\mathrm{PDE}}} \sum_{i=1}^{N_{\mathrm{PDE}}} \left( \left\| \frac{\partial \mathbf{u}_{\mathrm{NN}}}{\partial t} + (\mathbf{u}_{\mathrm{NN}} \cdot \nabla)\mathbf{u}_{\mathrm{NN}} + \nabla p_{\mathrm{NN}} - \frac{1}{Re} \nabla^2 \mathbf{u}_{\mathrm{NN}} \right\|^2 + \left\| \nabla \cdot \mathbf{u}_{\mathrm{NN}} \right\|^2 \right) \Bigg|_{(\mathbf{x}_i, t_i)}.
\]
Here, the neural network simultaneously outputs the velocity field \( \mathbf{u}_{\mathrm{NN}} = (u_{\mathrm{NN}}, v_{\mathrm{NN}}) \) and the pressure field \( p_{\mathrm{NN}} \). The partial derivatives of all orders in the equations are computed via automatic differentiation (AD). The initial and boundary condition losses, \( \mathcal{L}_{\mathrm{IC}} \) and \( \mathcal{L}_{\mathrm{BC}} \), ensure compliance with the specified initial state and boundary constraints, guaranteeing the uniqueness of the solution. The data loss \( \mathcal{L}_{\mathrm{Data}} \) forces the network predictions to align with the observed data:
\[
\mathcal{L}_{\mathrm{IC}}(\theta) = \frac{1}{N_{\mathrm{IC}}} \sum_{j=1}^{N_{\mathrm{IC}}} \left\| \mathbf{u}_{\mathrm{NN}}(\mathbf{x}_j, 0) - \mathbf{u}_0(\mathbf{x}_j) \right\|^2,
\]
\[
\mathcal{L}_{\mathrm{BC}}(\theta) = \frac{1}{N_{\mathrm{BC}}} \sum_{k=1}^{N_{\mathrm{BC}}} \left\| \mathcal{B}[\mathbf{u}_{\mathrm{NN}}](\mathbf{x}_k, t_k) - g(\mathbf{x}_k, t_k) \right\|^2,
\]
\[
\mathcal{L}_{\mathrm{Data}}(\theta) = \frac{1}{N_{\mathrm{Data}}} \sum_{l=1}^{N_{\mathrm{Data}}} \left\| \mathbf{u}_{\mathrm{NN}}(\mathbf{x}_l, t_l) - \hat{\mathbf{u}}_l \right\|^2,
\]
where \( \mathcal{B} \) is the boundary operator (e.g., for Dirichlet or Neumann conditions), \( \mathbf{u}_0 \) and \( g \) are the given initial state and boundary condition functions, respectively, and \( \hat{\mathbf{u}}_l \) denotes the observed values at the corresponding measurement points. The terms \( \omega_{\mathrm{PDE}}, \omega_{\mathrm{IC}}, \omega_{\mathrm{BC}}, \omega_{\mathrm{Data}} \) are weight hyperparameters for each loss component, typically optimized empirically or via adaptive strategies.
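The PDE-residual construction above can be sketched in PyTorch, which the paper uses for training. This is a minimal illustration of computing the dimensionless Navier–Stokes residuals via automatic differentiation, not the authors' implementation; the network architecture, random collocation points, and Reynolds number below are placeholder assumptions:

```python
import torch

# Hedged sketch of a PINN PDE residual: the network maps (x, y, t) -> (u, v, p)
# and all derivatives come from torch.autograd. Architecture and Re are
# illustrative, not the paper's exact settings.
torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 3),
)

def grad(out, inp):
    # First-order derivative of a batched scalar field w.r.t. all inputs.
    return torch.autograd.grad(out, inp, grad_outputs=torch.ones_like(out),
                               create_graph=True)[0]

def pde_residuals(xyt, Re=450.0):
    xyt = xyt.requires_grad_(True)
    u, v, p = net(xyt).unbind(dim=1)
    du, dv, dp = grad(u, xyt), grad(v, xyt), grad(p, xyt)
    u_x, u_y, u_t = du[:, 0], du[:, 1], du[:, 2]
    v_x, v_y, v_t = dv[:, 0], dv[:, 1], dv[:, 2]
    u_xx, u_yy = grad(u_x, xyt)[:, 0], grad(u_y, xyt)[:, 1]
    v_xx, v_yy = grad(v_x, xyt)[:, 0], grad(v_y, xyt)[:, 1]
    # Dimensionless momentum residuals M_x, M_y and continuity residual.
    m_x = u_t + u * u_x + v * u_y + dp[:, 0] - (u_xx + u_yy) / Re
    m_y = v_t + u * v_x + v * v_y + dp[:, 1] - (v_xx + v_yy) / Re
    cont = u_x + v_y
    return m_x, m_y, cont

pts = torch.rand(128, 3)                     # random collocation points (assumed)
m_x, m_y, cont = pde_residuals(pts)
loss_pde = (m_x**2 + m_y**2 + cont**2).mean()
```

In a full soft-constraint PINN, this `loss_pde` would be combined with the weighted IC, BC, and (optionally) data losses before backpropagation.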

2.3. AHC-PINN

Conventional Physics-Informed Neural Networks perform well when solving partial differential equations with smooth solutions or relatively slow variations, and their purely physics-driven unsupervised training paradigm demonstrates significant potential. However, when confronted with complex flow problems such as cylinder-induced Kármán vortex streets, the strong nonlinearities and multi-scale structures present in the flow fields pose pronounced difficulties for traditional PINNs in strictly satisfying conservation laws and capturing key physical mechanisms. To improve solution accuracy, existing methods often rely on high-fidelity simulation data as supervisory signals, which to a considerable extent undermines the original intent of PINNs to “learn solely from physical laws.” To address these limitations, this paper proposes a novel hard-constrained Physics-Informed Neural Network framework: AHC-PINN. By integrating analytically embedded hard boundary constraints and a residual-driven adaptive sampling strategy, this framework achieves high-accuracy solutions for transient cylinder flow without supervision from external flow field data.
In the solution process of the Navier–Stokes equations, physical quantities often undergo sharp variations in local regions, particularly near walls and in vortex shedding zones, where the velocity field exhibits high spatial non-uniformity. This characteristic stems primarily from the complex coupling and dynamic balance between the strongly nonlinear convective terms and the viscous diffusion terms within the momentum equations. In the traditional PINN framework, boundary conditions are typically introduced into the loss function as soft penalty terms. This forces the optimization process to trade off between the accurate enforcement of boundary conditions and the physical consistency of the interior flow field. For problems like cylinder flow that feature pronounced spatially heterogeneous flow structures, this soft-constraint modeling approach often makes it difficult for the neural network to simultaneously achieve high-fidelity representation of the boundaries and strict satisfaction of the internal physical constraints.
To address this issue, as shown in Figure 1, our proposed AHC-PINN first employs a hard-constraint strategy to analytically embed Dirichlet boundary conditions directly into the functional expression of the solution. The core of this strategy lies in decomposing the total physical field to be solved, denoted \( q(\mathbf{x}, t) = [u, v, p]^T \), into a particular solution field \( q_p(\mathbf{x}, t) \) that satisfies all boundary conditions, and an incremental field \( q_g(\mathbf{x}, t; \theta) \) learned by a main network (denoted \( NN_g \)) trained primarily against the PDE loss. These components are coupled via a smooth distance function \( D(\mathbf{x}) \) to ensure that the contribution of the incremental field vanishes on the boundaries. The expression for the total field is:
\[
q(\mathbf{x}, t) = q_p(\mathbf{x}, t) + D(\mathbf{x}) \cdot q_g(\mathbf{x}, t; \theta).
\]
Here, \( q_p(\mathbf{x}, t) \) is a known particular solution obtained through pre-training on all boundary conditions and remains fixed during subsequent training. It must satisfy:
\[
\mathcal{B}[q_p](\mathbf{x}, t) = g(\mathbf{x}, t), \quad \forall (\mathbf{x}, t) \in \partial \Omega_D \times [0, T].
\]
Through this strategy, the boundary conditions become an inherent property of the solution space rather than an optimization objective during training. Consequently, the boundary loss term is entirely eliminated from the training process, allowing the main network to focus solely on minimizing the residuals of the Navier–Stokes equations across the entire computational domain.
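A minimal sketch of this hard-constraint composition, using a hypothetical one-dimensional Dirichlet problem (u(0) = 1, u(1) = 0) in place of the cylinder-flow geometry; the analytic particular solution and distance function below are illustrative assumptions, not the paper's pre-trained lightweight networks:

```python
import torch

# Hedged sketch of the hard-constraint ansatz q = q_p + D(x) * q_g: because the
# distance function vanishes on the boundary, the composed field matches the
# boundary values exactly no matter what the main network outputs.
torch.manual_seed(0)
net_g = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(),
                            torch.nn.Linear(20, 1))   # main network NN_g

def q_particular(x):
    # Assumed analytic particular solution meeting u(0)=1, u(1)=0.
    return 1.0 - x

def distance(x):
    # Assumed smooth distance function, zero at both boundary points.
    return x * (1.0 - x)

def q(x):
    # Composed field: boundary conditions hold by construction.
    return q_particular(x) + distance(x) * net_g(x)

xb = torch.tensor([[0.0], [1.0]])   # boundary coordinates
boundary_vals = q(xb)               # exactly [1, 0] for any net_g parameters
```

Since the boundary values are exact by construction, the BC loss term disappears from training, leaving only the PDE (and, in this paper, IC) residuals to minimize.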
When solving the Navier–Stokes equations, the spatiotemporal distribution of physical quantities is often highly non-uniform, resulting in significant spatial heterogeneity in the PDE residuals. Regions of intense physical activity, such as vortex shedding zones, shear layers, and near-wake areas, typically exhibit high residual magnitudes. While uniform random placement of collocation points is effective for simple problems with smoothly varying solutions, it results in an imbalanced allocation of training resources for complex flow fields exhibiting such localized high residuals: most collocation points lie in areas of low residual, while the physical constraints in critical regions are insufficiently reinforced. Although the hard-constraint strategy enhances the network’s focus on fitting the governing equations themselves, if the spatial distribution of collocation points does not align with the residual distribution, the model will still struggle to accurately capture the local details where the flow evolution is most intense. To address this, we introduce a residual-driven adaptive sampling algorithm built upon the hard-constraint framework. As shown in Figure 2, this algorithm dynamically adjusts the probability distribution of collocation points based on the real-time PDE residuals calculated during training. This continuously shifts the sampling focus toward spatiotemporal regions with higher residuals, thereby enabling more efficient and precise unsupervised learning of complex flow structures.
The algorithm is executed at each adaptive cycle (e.g., every k training steps). Before training begins, a dense background grid point set \( T_h = \{(\mathbf{x}_i, t_i)\}_{i=1}^{N_b} \) covering the computational domain is established. At the start of each cycle, the current network parameters \( \theta^{(k)} \) are used to perform a forward computation on the background point set to obtain the PDE residual at each point. We construct a scalar error indicator \( E_i \), defined as the weighted sum of the squared norms of the residuals of the continuity and momentum equations:
\[
E_i = \lambda_c \left\| (\nabla \cdot \mathbf{u})_i^{(k)} \right\|^2 + \lambda_m \left( \left\| (M_x)_i \right\|^2 + \left\| (M_y)_i \right\|^2 \right),
\]
where \( M_x \) and \( M_y \) denote the residuals of the momentum equations in the x- and y-directions (Equations (3) and (4)), respectively, and \( \lambda_c \) and \( \lambda_m \) are the weighting coefficients for the continuity and momentum equations. These weights can be adjusted for different models to control the focus of the adaptation. In the present study, \( \lambda_c = 1 \) and \( \lambda_m = 1 \) are selected to avoid manual hyperparameter tuning and any reliance on empirical weight balancing. To suppress numerical noise and highlight the main error regions, the indicator is smoothed with a Gaussian filter:
\[
\tilde{E}_i = \sum_{j \in \mathcal{N}(i)} G_\sigma(\mathbf{x}_i - \mathbf{x}_j) \, E_j,
\]
where \( G_\sigma \) is a Gaussian kernel function with standard deviation \( \sigma \), and \( \mathcal{N}(i) \) denotes the spatial neighborhood of point i. The smoothed error indicator is then normalized into a probability density function (PDF), which guides the generation of new collocation points. The normalized probability \( P_i \) is calculated as follows:
\[
P_i = \frac{\tilde{E}_i^{\gamma}}{\sum_{j=1}^{N_b} \tilde{E}_j^{\gamma} + \epsilon},
\]
where \( \gamma \geq 1 \) is a sharpening exponent that controls the degree to which sampling concentrates on high-error regions, and \( \epsilon \) is a small positive constant ensuring numerical stability. Subsequently, importance sampling with replacement is performed from the background grid \( T_h \) according to this PDF to select \( N_c \) candidate points, forming the initial resampled point set \( S_{\mathrm{imp}}^{(k)} \). This step ensures that the new point set has a higher expected density in regions where the error indicator is large.
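The indicator-to-PDF pipeline (Gaussian smoothing, sharpening by the exponent, normalization, and importance sampling with replacement) can be sketched on a hypothetical one-dimensional background grid; the mock residual indicator below is an assumption for illustration only:

```python
import numpy as np

# Hedged sketch of the residual-driven sampling step: smooth a per-point error
# indicator with a Gaussian kernel, sharpen with exponent gamma, normalize to
# a PDF, and draw candidate points with replacement.
rng = np.random.default_rng(0)
xb = np.linspace(0.0, 1.0, 200)                 # background grid T_h (1-D)
E = np.exp(-((xb - 0.7) / 0.05) ** 2)           # mock error indicator E_i

def gaussian_smooth(E, x, sigma=0.02):
    # Normalized Gaussian-weighted average over spatial neighbors.
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    return (W * E[None, :]).sum(axis=1) / W.sum(axis=1)

E_s = gaussian_smooth(E, xb)                    # smoothed indicator
gamma, eps = 2.0, 1e-12
P = E_s**gamma / ((E_s**gamma).sum() + eps)     # sampling PDF P_i
idx = rng.choice(len(xb), size=64, replace=True, p=P / P.sum())
samples = xb[idx]                               # resampled point set S_imp
```

Because the mock indicator peaks at x = 0.7, the resampled points cluster there, which is exactly the intended concentration of collocation points in high-residual regions.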
To prevent the sampled points from converging into a static local distribution or causing overfitting due to a fixed point set, a controllable random perturbation is applied to each point in \( S_{\mathrm{imp}}^{(k)} \):
\[
(\mathbf{x}_i', t_i') = (\mathbf{x}_i, t_i) + (\eta_x \Delta x, \; \eta_y \Delta y, \; \eta_t \Delta t),
\]
where \( (\Delta x, \Delta y, \Delta t) \) are the local characteristic dimensions of the background grid and \( \eta_x, \eta_y, \eta_t \sim U(-0.5, 0.5) \) are uniformly distributed random variables. Since the cylinder flow problem involves an internal solid obstacle \( \Omega_{\mathrm{solid}} \), geometric validity checks and corrections are performed on the perturbed sampling points:
\[
\text{If } \mathbf{x}_i' \in \Omega_{\mathrm{solid}}, \quad \mathbf{x}_i' \leftarrow \mathrm{Proj}_{\partial \Omega_{\mathrm{solid}}}(\mathbf{x}_i') + \delta \, \mathbf{n},
\]
Specifically, points that have penetrated the solid interior are projected onto the nearest boundary point on \( \partial \Omega_{\mathrm{solid}} \) and then extrapolated by a small safety distance \( \delta \) along the outward normal direction \( \mathbf{n} \). The resulting set of adaptive collocation points, denoted \( S_{\mathrm{adaptive}}^{(k)} \), lies entirely within the fluid domain and is used in the subsequent training phase.
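A sketch of this jitter-and-project step for a circular obstacle; the cylinder center, radius, jitter scales, and safety distance below are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of the geometric validity check: resampled points are jittered
# within a local cell, and any point landing inside the solid cylinder is
# projected to the nearest boundary point and pushed a small safety distance
# delta outward along the normal.
rng = np.random.default_rng(0)
center, radius, delta = np.array([0.0, 0.0]), 0.5, 1e-3   # assumed geometry

def jitter(pts, dx=0.05, dy=0.05):
    # Uniform perturbation eta ~ U(-0.5, 0.5) scaled by local cell sizes.
    eta = rng.uniform(-0.5, 0.5, size=pts.shape)
    return pts + eta * np.array([dx, dy])

def project_outside(pts):
    r = pts - center
    dist = np.linalg.norm(r, axis=1, keepdims=True)
    inside = (dist < radius).ravel()
    n = r[inside] / dist[inside]                  # outward unit normal
    pts = pts.copy()
    pts[inside] = center + (radius + delta) * n   # project + safety offset
    return pts

pts = jitter(rng.uniform(-1.0, 1.0, size=(256, 2)))
valid = project_outside(pts)   # every point now lies in the fluid domain
```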
The procedure of Residual Driven Adaptive Sampling is shown in Algorithm 1. This algorithm operates in cycles k, allowing the collocation point set to dynamically respond to the evolution of the residual distribution during the network optimization process, thereby forming a closed-loop feedback system of “training—evaluation—resampling”.
Algorithm 1: Residual-Driven Adaptive Sampling
In this study, the flow field at a specific moment after the cylinder flow has reached a steady state is used as the initial condition for the PINN. Since this state already possesses a relatively complex flow structure, the initial condition is still incorporated into the loss function alongside the PDE loss, even under the hard-constraint strategy. The total loss J ( θ ) consists of two components:
\[
J(\theta) = \mathcal{L}_{\mathrm{PDE}}\!\left(\theta; S_{\mathrm{adaptive}}^{(k)}\right) + \omega_{\mathrm{IC}} \, \mathcal{L}_{\mathrm{IC}}(\theta).
\]
The two loss terms, which are computed using a set of adaptive sampling collocation points \( S_{\mathrm{adaptive}}^{(k)} = \{(\mathbf{x}_j, t_j)\}_{j=1}^{N_c} \) and a set of initial condition sampling points \( S_{\mathrm{ic}} = \{(\mathbf{x}_m, 0)\}_{m=1}^{N_{\mathrm{ic}}} \), respectively, are defined as follows:
\[
\mathcal{L}_{\mathrm{PDE}} = \frac{1}{N_c} \sum_{j=1}^{N_c} \Big( \underbrace{\left\| (\nabla \cdot \mathbf{u})_j \right\|^2}_{\text{continuity residual}} + \underbrace{\left\| (M_x)_j \right\|^2 + \left\| (M_y)_j \right\|^2}_{\text{momentum residual}} \Big),
\]
\[
\mathcal{L}_{\mathrm{IC}} = \frac{1}{N_{\mathrm{ic}}} \sum_{m=1}^{N_{\mathrm{ic}}} \left\| q(\mathbf{x}_m, 0; \theta) - q_0(\mathbf{x}_m) \right\|^2.
\]
During each adaptive cycle, the sampled collocation point set \( S_{\mathrm{adaptive}}^{(k)} \) is used to minimize the objective function \( J(\theta) \) and update the network parameters \( \theta \). After each cycle, the adaptive sampling algorithm is triggered: based on the current parameters \( \theta^{(k)} \), the global residuals are computed to generate a new collocation point set \( S_{\mathrm{adaptive}}^{(k+1)} \), which is then used for training in the next cycle. The training procedure of AHC-PINN is shown in Algorithm 2.
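The closed-loop cycle can be illustrated with a deliberately simplified stand-in: a one-parameter model with an analytically known residual replaces the PINN and the Navier–Stokes residuals, so this sketches only the train–evaluate–resample control flow, not the paper's solver:

```python
import numpy as np

# Hedged sketch of the "training - evaluation - resampling" loop run every
# adaptive cycle: train on the current collocation set, evaluate the residual
# indicator on a dense background grid, convert it to a PDF, and redraw the
# set. The model u(x) = theta*x fitted to the target 2*x is illustrative only.
rng = np.random.default_rng(0)
xb = np.linspace(0.0, 1.0, 400)                  # dense background grid T_h

def residual(theta, x):
    # Mock squared "PDE residual" of the one-parameter model.
    return (2.0 * x - theta * x) ** 2

theta, lr = 0.0, 0.05
S = rng.choice(xb, size=64)                      # initial collocation set
for cycle in range(5):
    for _ in range(50):                          # train on the current set
        g = np.mean(2.0 * (2.0 * S - theta * S) * (-S))
        theta -= lr * g
    E = residual(theta, xb)                      # evaluate on background grid
    P = E / (E.sum() + 1e-12)                    # residual-driven sampling PDF
    S = xb[rng.choice(len(xb), size=64, p=P / P.sum())]   # resample
```

Each pass through the outer loop corresponds to one adaptive cycle k, so the collocation set tracks the evolving residual distribution just as described for \( S_{\mathrm{adaptive}}^{(k)} \to S_{\mathrm{adaptive}}^{(k+1)} \).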
Algorithm 2: AHC-PINN Training Framework

3. Results and Discussion

In this study, the physical model considers a two-dimensional transient flow past a circular cylinder, as shown in Figure 3. To clearly illustrate the flow field structure, a rectangular region near the cylinder is highlighted for detailed analysis. The cylinder is positioned at the center of a square computational domain. After non-dimensionalization, the cylinder diameter is set to 1, and the dimensions of the computational domain are 33.4 in length and 17.5 in width. The flow boundary conditions are set as follows: the left side is a uniform inflow inlet with a velocity of 1; the static pressure at the right-side outlet is set to 0; all other walls and the cylinder surface employ a no-slip boundary condition. The specific boundary condition settings are:
\[
u = 1, \; x = 0, \; 0 \le y \le 17.5; \qquad u = 0 \text{ otherwise}; \qquad v = 0 \text{ on } \partial\Omega; \qquad p = 0, \; x = 33.4, \; 0 \le y \le 17.5.
\]
Here, \( \Omega \) denotes the interior of the geometric domain, while \( \partial\Omega \) represents its boundary. After the flow has fully developed into a stable Kármán vortex street, a specific moment is selected as the initial condition, and one full flow cycle is computed. The non-dimensionalized computation time is 1.375, and the model Reynolds number is \( Re = 450 \). During the training phase, the neural network employs the hyperbolic tangent (tanh) as its activation function, uses the Adam optimizer, and adopts a learning rate annealing strategy with an initial learning rate of 0.001.
The training effectiveness of AHC-PINN is first evaluated for the cylinder flow problem. Using a baseline network architecture of size 6 × 64 , four configurations are compared: the traditional soft-constrained PINN (sPINN), the soft-constrained PINN with adaptive sampling (A-sPINN), the data-supervised sPINN (sPINN+data), and the proposed AHC-PINN. To further assess performance under data-supervised conditions, sPINN, A-sPINN, and AHC-PINN are also tested with two different amounts of supervision data: 4 and 15 points. According to Xiang et al. [26], four supervision points constitute the minimum sensor threshold required for conventional methods to achieve initial convergence. Utilizing fewer than four points renders the model entirely incapable of capturing any wake dynamics. In contrast, 15 points correspond to a scenario where a moderately structured sensor array is available within the wake region. Finally, the influence of model capacity on solution accuracy is examined by testing each configuration with three network scales: 6 × 32 , 6 × 64 , and 6 × 128 . In the AHC-PINN framework, both the particular solution network and the distance metric network are designed as lightweight neural networks with a 4 × 20 architecture. Specifically, the particular solution network is trained for 50,000 epochs, while the distance metric network undergoes 300,000 epochs. Given their compact scale, the computational overhead is negligible compared to that of the main network.
The reference solution (ground truth, GT) for the flow field used in this work is obtained by solving the two-dimensional transient incompressible Navier–Stokes equations using the finite element method, with a time step of Δ t = 0.01 employed for the transient simulation. To satisfy the LBB stability condition, P2-P1 mixed elements are adopted for spatial discretization, while the Backward Differentiation Formula scheme is utilized for time integration. The nonlinear solver employs the Newton–Raphson method, with a stringent relative tolerance of 10 4 imposed at each time step to guarantee the accuracy of the temporal evolution. In the grid generation process, coarse, medium, and fine mesh densities were evaluated. With the relative difference between the medium and fine meshes being less than 1.5%, the fine mesh, containing 293,080 elements, was selected to ensure precise reference data for training and testing the PINN. All computational fluid dynamics simulations are performed on an Intel Xeon E5-2640 v4 CPU. The training of the deep neural networks is implemented based on the PyTorch framework and executed on an NVIDIA GeForce RTX 4090 (24 GB) GPU.
To validate the impact of the adaptive sampling algorithm on the solution accuracy of transient cylinder flow, we conducted a comparative analysis of the velocity field predictions at time t = 1.35 . Figure 4 presents the velocity contours in the wake region and their corresponding absolute errors compared to CFD results for four methods: sPINN, A-sPINN, sPINN+data, and AHC-PINN (all methods employ the 6 × 64 network size; the sPINN+data method utilizes the four true data points near the cylinder shown in Figure 3).
The results indicate that the unsupervised sPINN encounters significant difficulties in solving this flow problem, with almost no periodic Kármán vortex street structures appearing in the flow field. Combined with the error contour in Figure 4b and the velocity gradient distribution in Figure 5a, it is evident that the errors are primarily concentrated in regions with high velocity gradients behind the cylinder. This suggests that the predictive accuracy of PINNs for solving this PDE is largely constrained by the physical variations in high-gradient regions. With the A-sPINN method, the prediction results in the wake region show clear improvement compared to sPINN, and a distinct periodic flow structure becomes observable. This trend is further enhanced after incorporating the hard-constraint strategy (AHC-PINN). Furthermore, observing the PDE loss decay curve in Figure 5b, even A-sPINN, which only adjusts via loss-driven importance sampling, shows a notably faster reduction in PDE loss than the traditional sPINN method, while AHC-PINN demonstrates an even more significant advantage in lowering the PDE loss. Figure 5c and Figure 5d illustrate the loss curves for the BC and IC, respectively. Table 1 lists the quantitative mean squared errors for each physical quantity, showing a continuous improvement in numerical accuracy as loss-adaptive sampling and the hard-constraint strategy are progressively applied. Under completely unsupervised conditions, AHC-PINN achieves orders-of-magnitude accuracy improvements over conventional sPINN across all physical quantities, and its accuracy even slightly surpasses that of the supervised sPINN+data method. This fully demonstrates the advantage of AHC-PINN in unsupervised scenarios and highlights its significant practical value for real-world engineering problems where obtaining precise point data is challenging.
Although the adaptive sampling mechanism in AHC-PINN necessitates calculating the full-field residual distribution and updating sampling points, which results in a training duration approximately 2.7 times that of sPINN, the method operates as an unsupervised framework that eliminates the prohibitive cost of acquiring high-fidelity labeled flow field data. Consequently, this increase in training time is justifiable when compared to the alternatives of compromising accuracy or relying on inaccessible data.
In Figure 6, panels (a) and (b) respectively present contour plots of the PDE residual distribution across the temporal and spatial dimensions. For the sPINN method, the PDE residuals are notably concentrated in regions near the cylinder with high velocity gradients, which aligns with our previous analytical conclusions. Based on the analysis of the Navier–Stokes equations, a stream function \( \psi(x, y) \) is introduced such that \( u = \partial\psi/\partial y \) and \( v = -\partial\psi/\partial x \), automatically satisfying the continuity equation, while the velocity gradient components manifest as second-order partial derivatives of \( \psi \). The vorticity is then \( \omega = -\nabla^2 \psi \). Taking the curl of the momentum equations to eliminate the pressure term yields the vorticity transport equation:
\[
\frac{\partial \omega}{\partial t} + (\mathbf{u} \cdot \nabla)\omega = \nu \nabla^2 \omega.
\]
Substituting \( \omega = -\nabla^2 \psi \), we obtain the equation for \( \psi \):
\[
\frac{\partial (\nabla^2 \psi)}{\partial t} + \frac{\partial \psi}{\partial y} \frac{\partial (\nabla^2 \psi)}{\partial x} - \frac{\partial \psi}{\partial x} \frac{\partial (\nabla^2 \psi)}{\partial y} = \nu \nabla^4 \psi.
\]
This equation governs the evolution of the stream function ψ , which in turn dictates the evolution of the velocity gradient. The continuity equation only requires the velocity field to be divergence-free, imposing no direct constraint on the magnitude of the velocity gradient; in contrast, the vorticity equation derived from the momentum equations directly incorporates the velocity gradient and describes its convection, diffusion, and generation mechanisms. In regions with large velocity gradients, the magnitudes of the nonlinear and viscous terms in the vorticity equation increase significantly—a characteristic prominently observed in the near-wake region behind the cylinder in flow-past-body problems. The distribution of the momentum equation loss shown in Figure 6c corroborates this view: the loss is predominantly concentrated in this high-gradient region. The substantially reduced momentum equation loss for the A-sPINN method effectively demonstrates the positive impact of the loss-driven sampling strategy for PINNs when dealing with large-gradient regions, while the AHC-PINN method further reduces the overall equation loss across the entire domain. Although introducing supervised data (sPINN+data) can improve prediction accuracy, as observed in Figure 6, its equation loss remains high. This strategy essentially uses data to provide local compensation in high-gradient regions without fundamentally enhancing the PINN’s ability to fit the governing equations themselves.
Table 2 presents a performance comparison of the four strategies under different network sizes. The results show that the unsupervised method proposed in this work, AHC-PINN, performs better than or close to the data-supervised sPINN+data method in most cases. Under the baseline network size (6 × 64), AHC-PINN outperforms sPINN+data in both MSE and Relative L2 metrics, with only a slightly higher MAE. This outcome demonstrates that through the synergy of loss-adaptive sampling and the hard-constraint mechanism, AHC-PINN effectively enhances the embedding capability of physical laws. For the cylinder flow model, it can achieve accuracy comparable to or even surpassing that of data-supervised methods under unsupervised conditions. Furthermore, as the network size increases progressively from 6 × 32 to 6 × 128, the performance of AHC-PINN continues to improve (the mean squared error decreases from 0.00459 to 0.00254), indicating that the method can fully utilize increased network capacity to enhance modeling capability without showing significant performance saturation. In contrast, the performance improvement of sPINN+data at larger network sizes is unstable, limited by the influence of the data loss term. Table 3 compares the velocity errors of various methods relative to the CFD benchmarks at R e = 200 and R e = 800 . Across these Reynolds number scenarios, AHC-PINN consistently exhibits slightly superior performance compared to the sPINN+data method utilizing sparse data.
To evaluate each method comprehensively, we also compared the strategies under identical supervised-data conditions, with the corresponding errors reported in Table 4. When all methods were augmented with supervised data, both A-sPINN and AHC-PINN achieved significantly higher accuracy than sPINN, and the accuracy of both improved further as the number of supervised data points increased. However, once supervised data were introduced, the influence of network size on all methods leveled off because of the data loss term, consistent with the trend of the sPINN+data errors in Table 2. Taken together with Table 2, these results show that AHC-PINN not only attains superior accuracy in unsupervised training, making it well suited to scenarios where accurate data are difficult to obtain in practice, but also exploits increases in network scale more effectively, demonstrating favorable scalability.
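The "data loss term" referred to above enters the objective as one more weighted mean-squared residual alongside the physics terms. A minimal sketch of such a composite PINN loss follows; the function name `total_loss` and the uniform weighting are our illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def total_loss(res_pde, res_bc, res_ic, pred_data=None, obs_data=None, w_data=1.0):
    """Composite PINN objective: PDE, boundary, and initial-condition
    residuals, plus an optional supervised-data misfit term.
    w_data is a hypothetical weight, not a value taken from the paper."""
    loss = (np.mean(res_pde ** 2)
            + np.mean(res_bc ** 2)
            + np.mean(res_ic ** 2))
    if pred_data is not None:
        # Supervised-data term: penalizes mismatch at measurement points.
        loss += w_data * np.mean((np.asarray(pred_data) - np.asarray(obs_data)) ** 2)
    return loss
```

Because the data term is averaged over a fixed set of measurement points, it can dominate the gradient signal once it is small, which is one plausible reason the network-size effect flattens when supervision is added.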
To validate the generalization capability of the AHC-PINN method, this section tests the flow-field prediction performance of the four approaches (sPINN, A-sPINN, sPINN+data, and AHC-PINN) in the presence of obstacles with different shapes, as configured in Figure 7. Figure 8 and Table 5 present the velocity contours and the corresponding quantitative errors relative to the CFD results for each shape. The predictions across the different obstacle shapes are largely consistent with those for the single circular cylinder: under unsupervised conditions, AHC-PINN achieves significantly higher accuracy than sPINN, and its performance is close to, or surpasses, that of the supervised sPINN+data method. Moreover, the velocity contours in Figure 8 show that AHC-PINN exhibits better physical consistency than sPINN+data, further underscoring the value of unsupervised PINN methodologies.
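The hard-constraint idea generalizes naturally to these shapes because the wall boundary condition is enforced analytically rather than penalized. The sketch below illustrates the general construction for a circular obstacle; the function `hard_constrained_u`, the geometry values, and `u_wall` are our illustrative assumptions, not the paper's exact ansatz. The raw network output is multiplied by a distance-like function that vanishes on the wall, so the no-slip value holds exactly for any network weights:

```python
import numpy as np

def hard_constrained_u(x, y, net_out, center=(0.0, 0.0), radius=0.5, u_wall=0.0):
    """Hard-constraint ansatz for one velocity component near a circular
    obstacle: u(x, y) = u_wall + d(x, y) * N(x, y), where d is the signed
    distance to the wall and N is the raw neural-network output."""
    cx, cy = center
    d = np.sqrt((x - cx) ** 2 + (y - cy) ** 2) - radius  # zero on the wall
    return u_wall + d * net_out                          # exact BC where d == 0
```

For the non-circular obstacles in Figure 7, `d` would be replaced by a signed-distance (or other smooth indicator) function of the corresponding shape.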

4. Conclusions

Integrating the quantitative error results from Table 1, Table 2, Table 3 and Table 4 and the flow-field comparisons shown in Figure 4, the proposed AHC-PINN demonstrates superior performance when solving time-dependent cylinder flow problems, outperforming traditional soft-constraint PINNs that use supervised data even though it trains under unsupervised conditions. The method thus alleviates the bottleneck of acquiring high-precision supervised data for such problems and offers a feasible pathway for applying PINNs to more complex real-world flows. Analysis of the PDE loss and prediction results shows that PINN performance is notably constrained in regions of intense physical variation, such as the large-gradient wake region in cylinder flow; this limitation is a key reason for their often insufficient accuracy in time-dependent problems, where temporal evolution further increases gradient complexity. Although AHC-PINN achieves better results in an unsupervised setting, its optimization essentially relies on adjusting numerical loss values and does not specifically address the mechanistic difficulty of training PINNs in large-gradient regions. Future work will pursue deeper analysis and methodological development on this issue. We also plan to extend the framework to three-dimensional flows and high-Reynolds-number regimes. While the mesh-free nature of AHC-PINN theoretically supports 3D applications, more efficient residual-evaluation mechanisms will be needed to manage the increased computational cost. Similarly, addressing the multi-scale nature of turbulence may require coupling AHC-PINN with turbulence modeling (e.g., RANS) to remain computationally feasible.

Author Contributions

Conceptualization, C.Z. and G.X.; methodology, C.Z.; software, C.Z.; validation, C.Z.; formal analysis, C.Z.; investigation, C.Z. and P.N.; resources, H.Y. and G.X.; data curation, C.Z. and Y.L.; writing—original draft preparation, C.Z.; writing—review and editing, C.Z. and H.Y.; visualization, C.Z.; supervision, G.X. and H.Y.; project administration, P.N. and H.Y.; funding acquisition, H.Y. and P.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (12304346), the Scientific Research Foundation of the Education Department of Shaanxi Province, China (21JK0945), the Key Research and Development Projects of Shaanxi Province (2025CY-YBXM-073), the Donghai Laboratory (2024SSYS0091), and the Open Fund of the Beijing Key Laboratory of Advanced Optical Remote Sensing Technology (AORS202408).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to patent preparation.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AHC-PINN: Adaptive Hard-Constraint Physics-Informed Neural Network
PINN: Physics-Informed Neural Network
sPINN: Soft-Constraint Physics-Informed Neural Network
A-sPINN: Soft-Constraint PINN with Adaptive Sampling
CFD: Computational Fluid Dynamics
PDE: Partial Differential Equation
NS: Navier–Stokes (equations)
Re: Reynolds Number
AD: Automatic Differentiation
MSE: Mean Squared Error
MAE: Mean Absolute Error
L2: Relative L2 Norm
GT: Ground Truth
IC: Initial Condition
BC: Boundary Condition
PIV: Particle Image Velocimetry
PDF: Probability Density Function

Figure 1. AHC-PINN architecture.
Figure 2. Loss-based adaptive sampling.
Figure 3. Two-dimensional cylinder flow model configuration.
Figure 4. Comparison of velocity field predictions and their errors against the CFD reference solution at t = 1.35 for sPINN, A-sPINN, sPINN+data and AHC-PINN. (a) Velocity contour plots. (b) Absolute error contour plots.
Figure 5. (a) Velocity gradient magnitude from the CFD reference solution at t = 1.35. (b–d) Navier–Stokes, BC, and IC loss decay during training for sPINN, A-sPINN, sPINN+data, and AHC-PINN.
Figure 6. Spatiotemporal distribution contour plots of the Navier–Stokes equation residuals. (a) Spatiotemporal evolution of the residuals along the central axis in the y-direction. (b) Total equation residuals on the xy-plane at t = 1.35 . (c) Momentum equation residuals on the xy-plane at t = 1.35 .
Figure 7. In the flow model, obstacle structures including triangular, square, and two vertically aligned circular shapes are respectively configured.
Figure 8. Comparison of velocity contours obtained using sPINN, A-sPINN, sPINN+data, and AHC-PINN with CFD reference results for flows past various obstacles. (a) Triangular obstacle. (b) Square obstacle. (c) Two vertically aligned circular obstacles.
Table 1. Comparison of predictive accuracy and training efficiency for the sPINN, A-sPINN, sPINN+data, and AHC-PINN methods at t = 1.35, including mean squared errors for u, v, and p relative to CFD reference data, PDE residual, and training time.

| Method | MSE_u | MSE_v | MSE_p | Loss_pde | Cost Time (s) |
|---|---|---|---|---|---|
| sPINN | 0.0191 | 0.01200 | 0.00586 | 2.38 × 10⁻⁴ | 72.6 |
| A-sPINN | 0.0065 | 0.00128 | 0.00227 | 2.38 × 10⁻⁴ | 114.5 |
| sPINN+data | 0.0035 | 0.00112 | 0.00137 | 2.38 × 10⁻⁴ | 78.6 |
| AHC-PINN | 0.0028 | 0.00084 | 0.00092 | 2.38 × 10⁻⁵ | 196.9 |
Table 2. Errors of sPINN, A-sPINN, sPINN+data, and AHC-PINN against CFD reference results for different network sizes.

| Method | Network Scale | MSE | MAE | Relative L2 |
|---|---|---|---|---|
| sPINN | 6 × 32 | 0.01837 | 0.0923 | 0.2570 |
| sPINN | 6 × 64 | 0.01918 | 0.1032 | 0.2627 |
| sPINN | 6 × 128 | 0.01611 | 0.0913 | 0.2407 |
| A-sPINN | 6 × 32 | 0.00697 | 0.0724 | 0.1584 |
| A-sPINN | 6 × 64 | 0.00651 | 0.0709 | 0.1531 |
| A-sPINN | 6 × 128 | 0.00604 | 0.0681 | 0.1501 |
| sPINN+data | 6 × 32 | 0.00485 | 0.0417 | 0.1321 |
| sPINN+data | 6 × 64 | 0.00356 | 0.0343 | 0.1132 |
| sPINN+data | 6 × 128 | 0.00362 | 0.0343 | 0.1141 |
| AHC-PINN | 6 × 32 | 0.00459 | 0.0590 | 0.1286 |
| AHC-PINN | 6 × 64 | 0.00283 | 0.0441 | 0.1010 |
| AHC-PINN | 6 × 128 | 0.00254 | 0.0414 | 0.0956 |
Table 3. Error metrics comparison of different methods at Reynolds numbers Re = 200 and Re = 800.

| Reynolds Number | Method | MSE | MAE | Relative L2 |
|---|---|---|---|---|
| Re = 200 | sPINN | 0.02752 | 0.1356 | 0.3115 |
| Re = 200 | A-sPINN | 0.01258 | 0.1050 | 0.2106 |
| Re = 200 | sPINN+data | 0.00297 | 0.03305 | 0.1023 |
| Re = 200 | AHC-PINN | 0.00213 | 0.02852 | 0.0915 |
| Re = 800 | sPINN | 0.01979 | 0.1005 | 0.2649 |
| Re = 800 | A-sPINN | 0.01116 | 0.0948 | 0.1990 |
| Re = 800 | sPINN+data | 0.00462 | 0.0361 | 0.1280 |
| Re = 800 | AHC-PINN | 0.00458 | 0.0401 | 0.1275 |
Table 4. Errors of sPINN, A-sPINN, and AHC-PINN relative to CFD results with varying numbers of supervised data points.

| Method | Supervision Points | Network Scale | MSE | MAE | Relative L2 |
|---|---|---|---|---|---|
| sPINN | 4 | 6 × 32 | 0.00485 | 0.0417 | 0.1321 |
| sPINN | 4 | 6 × 64 | 0.00356 | 0.0343 | 0.1132 |
| sPINN | 4 | 6 × 128 | 0.00362 | 0.0343 | 0.1141 |
| sPINN | 15 | 6 × 32 | 0.00211 | 0.0217 | 0.0871 |
| sPINN | 15 | 6 × 64 | 0.00228 | 0.0238 | 0.0906 |
| sPINN | 15 | 6 × 128 | 0.00199 | 0.0216 | 0.0847 |
| A-sPINN | 4 | 6 × 32 | 0.00142 | 0.0248 | 0.0715 |
| A-sPINN | 4 | 6 × 64 | 0.00108 | 0.0198 | 0.0625 |
| A-sPINN | 4 | 6 × 128 | 0.00113 | 0.0197 | 0.0638 |
| A-sPINN | 15 | 6 × 32 | 0.00151 | 0.0223 | 0.0737 |
| A-sPINN | 15 | 6 × 64 | 0.00080 | 0.0166 | 0.0537 |
| A-sPINN | 15 | 6 × 128 | 0.00103 | 0.0174 | 0.0609 |
| AHC-PINN | 4 | 6 × 32 | 0.00157 | 0.0308 | 0.0753 |
| AHC-PINN | 4 | 6 × 64 | 0.00175 | 0.0342 | 0.0794 |
| AHC-PINN | 4 | 6 × 128 | 0.00152 | 0.0319 | 0.0741 |
| AHC-PINN | 15 | 6 × 32 | 0.00078 | 0.0178 | 0.0531 |
| AHC-PINN | 15 | 6 × 64 | 0.00094 | 0.0157 | 0.0583 |
| AHC-PINN | 15 | 6 × 128 | 0.00073 | 0.0146 | 0.0519 |
Table 5. Quantitative errors of sPINN, A-sPINN, sPINN+data, and AHC-PINN relative to CFD results for flows past triangular, square, and two vertically aligned circular obstacles.

| Shape | Method | MSE | MAE | Relative L2 |
|---|---|---|---|---|
| Triangle | sPINN | 0.02504 | 0.1160 | 0.3008 |
| Triangle | A-sPINN | 0.01245 | 0.0893 | 0.2122 |
| Triangle | sPINN+data | 0.00505 | 0.0436 | 0.1351 |
| Triangle | AHC-PINN | 0.00401 | 0.0337 | 0.1204 |
| Square | sPINN | 0.02932 | 0.1267 | 0.3260 |
| Square | A-sPINN | 0.00263 | 0.1396 | 0.2864 |
| Square | sPINN+data | 0.00331 | 0.0344 | 0.1095 |
| Square | AHC-PINN | 0.00721 | 0.0638 | 0.1366 |
| Stacked circles | sPINN | 0.06771 | 0.2087 | 0.5258 |
| Stacked circles | A-sPINN | 0.04337 | 0.1828 | 0.4208 |
| Stacked circles | sPINN+data | 0.00566 | 0.0729 | 0.1521 |
| Stacked circles | AHC-PINN | 0.00974 | 0.0454 | 0.1995 |

Zhou, C.; Liu, Y.; Xin, G.; Nan, P.; Yang, H. Purely Physics-Driven Neural Networks for Tracking the Spatiotemporal Evolution of Time-Dependent Flow. Appl. Sci. 2026, 16, 2294. https://doi.org/10.3390/app16052294
