Article

Hybrid Physics-Informed Neural Networks Integrating Multi-Relaxation-Time Lattice Boltzmann Method for Forward and Inverse Flow Problems

by Mengyu Feng 1, Minglei Shan 1,*, Ling Kuai 1, Chenghui Yang 1, Yu Yang 2, Cheng Yin 1 and Qingbang Han 1

1 College of Information Science and Engineering, Hohai University, Changzhou 213200, China
2 College of Information Science and Technology, Nanjing Forestry University, Nanjing 210037, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(22), 3712; https://doi.org/10.3390/math13223712
Submission received: 14 October 2025 / Revised: 11 November 2025 / Accepted: 17 November 2025 / Published: 19 November 2025

Abstract

Although physics-informed neural networks (PINNs) offer a novel, mesh-free paradigm for computational fluid dynamics (CFD), existing models often suffer from poor stability and insufficient accuracy, particularly when dealing with complex flows at high Reynolds numbers. To address this limitation, we propose PINN-MRT, a novel hybrid architecture that integrates the multi-relaxation-time lattice Boltzmann method (MRT-LBM) with PINNs. The model embeds the MRT-LBM evolution equation as a physical constraint within the loss function and employs a dual-network architecture to separately predict macroscopic conserved variables and non-equilibrium distribution functions, enabling both forward and inverse problem-solving through a composite loss function. Benchmark tests on the lid-driven cavity flow demonstrate the superior performance of PINN-MRT. In inverse problems, it remains stable at Reynolds numbers up to 5000 with parameter inversion errors below 15%, whereas standard PINN and single-relaxation-time PINN-LBM models fail at a Reynolds number of 1000 with errors exceeding 80%. In purely physics-driven forward problems, PINN-MRT also provides stable solutions at a Reynolds number of 400, while the other models completely collapse. This study confirms that incorporating mesoscopic kinetic theory into PINNs effectively overcomes the stability bottlenecks of conventional approaches, providing a more robust and accurate architecture for CFD and paving the way for solving more challenging fluid dynamics problems.

1. Introduction

Developing stable, accurate, and generalizable numerical simulation methods has long been one of the core challenges in computational fluid dynamics (CFD) [1,2,3]. Traditional numerical methods, such as the finite difference method (FDM), finite volume method (FVM), and spectral method (SM), have established the basic framework of modern CFD through discretization theory, mesh partitioning, and algebraic equation solving [1,4]. These methods have become benchmark tools for simulating physical phenomena such as fluid dynamics and heat transfer. However, when dealing with strong nonlinearity, discontinuous solutions, and complicated geometries, these traditional methods face significant limitations, including heavy reliance on refined meshes, numerical instability, and severe accuracy degradation or even failure [5,6,7].
In recent years, physics-informed neural networks (PINNs) have emerged as a novel computational method that combines the advantages of physical laws and data-driven learning [8]. Leveraging their mesh-free nature, PINNs demonstrate great potential in addressing diverse fluid problems, gradually becoming a key new paradigm for breaking through the bottlenecks of traditional numerical methods [9]. Specifically, PINNs embed physical laws directly into the loss function of neural networks and utilize parameter optimization to implicitly solve the governing equations, thus largely eliminating mesh dependence and enhancing adaptability to complex physical problems [10,11,12]. For instance, this approach has shown promising performance in shock capturing, turbulence modeling, and unsteady flow simulations [13]. Nonetheless, PINNs also suffer from challenges such as spurious solutions, convergence difficulties, and limited capability for dynamic system modeling [10,14,15]. As a result, introducing hybrid model architectures to improve robustness has become a promising research direction. Among these approaches, the integration of PINNs with the lattice Boltzmann method (LBM) represents a recently developed and physically motivated extension.
The LBM, owing to its clear evolution mechanism, highly parallel computational structure, and flexibility in handling intricate boundaries and multi-physics coupling problems, has been widely adopted for numerical simulation in CFD [16]. Its fundamental governing equation, the lattice Boltzmann equation (LBE), originates from the underlying kinetic equation, i.e., the continuous Boltzmann equation. This intrinsic physical nature rooted in kinetic theory enables the LBE to systematically preserve the essential coupling between microscopic particle dynamics and macroscopic conservation law evolution in physical modeling, thus endowing it with inherent physical consistency, theoretical closure, and high generality for fluid flow solutions [17,18]. The LBE models the mesoscopic evolution of particle distributions through collision and streaming processes, and offers a solid theoretical basis for complex flow simulations. Based on this foundation, several researchers have explored coupling the LBM with physics-informed neural networks by replacing macroscopic PDEs such as the Navier–Stokes (N–S) equations with the LBE, thereby embedding mesoscopic kinetic physics directly into the learning process [19]. Lou et al., for instance, implemented a single-relaxation-time (SRT) Bhatnagar–Gross–Krook (BGK) model within a PINN framework and demonstrated its feasibility for both forward and inverse flow reconstruction [20]. While the SRT formulation simplifies the kinetic description by applying a single relaxation time to all kinetic modes, its direct use within a PINN framework introduces new challenges that differ from those in conventional solvers. Specifically, the coupling of a single relaxation scale with the data-driven loss landscape can lead to poorly conditioned residuals and gradient imbalance, resulting in optimization stiffness and slow convergence.
To overcome the aforementioned challenges, this study incorporates the multi-relaxation-time (MRT) formulation into the PINN framework. By assigning distinct relaxation rates to different kinetic moments, the MRT model provides a more flexible and physically consistent representation of mesoscopic relaxation processes, which helps alleviate gradient imbalance and improves the conditioning of the physics-informed residuals. This enhanced formulation serves as the foundation for achieving stable and accurate training in complex flow regimes.
Building upon this idea, we develop a dual-network PINN-MRT architecture that separately learns macroscopic conserved variables and non-equilibrium distribution functions. The objective is to improve convergence stability and predictive accuracy in both forward and inverse problems, while maintaining robustness across moderate to high-Reynolds-number (Re) flows. The proposed framework is validated using the two-dimensional lid-driven cavity flow benchmark over the range Re = 100–5000, enabling a systematic comparison against PINN-SRT and baseline PINN models.
The remainder of this paper is organized as follows: Section 2 presents the theoretical foundation and design of the proposed PINN-MRT architecture, which integrates dual networks and physics-informed loss functions. Section 3 introduces the benchmark problem setup and model configurations. Section 4 demonstrates the applications of PINN-MRT to forward and inverse problems, including predictive accuracy and parameter identification at different Reynolds numbers. Finally, Section 5 summarizes the main findings and outlines future research directions.

2. Method

2.1. PINN-MRT

2.1.1. Neural Network Architecture

The neural network architecture has two sub-networks, as illustrated in Figure 1. The first sub-network, denoted NN_eq, takes the spatio-temporal coordinates (x, t) as input, outputs the macroscopic density ρ and velocity components (u, v), and then computes the equilibrium distribution function f_i^eq using Equation (1). The second sub-network, NN_neq, shares the same input as NN_eq but predicts the nine components of the non-equilibrium distribution function f_i^neq. This physically motivated separation leverages the smoothness of f_i^eq and the complexity of f_i^neq, allowing each network to focus on distinct features and thereby improving stability and accuracy [21,22,23].
Based on the predicted outputs, the total distribution function at each spatial location is composed as
f_i = f_i^{\mathrm{eq}} + f_i^{\mathrm{neq}}.
Figure 1. The presented PINN-MRT architecture with MRT-LBM physics constraints.
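The equilibrium distribution f_i^eq referenced as Equation (1) is the standard D2Q9 polynomial equilibrium; a minimal NumPy sketch of that standard form (function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

# Standard D2Q9 lattice: discrete velocities e_i and weights w_i (c_s^2 = 1/3)
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def f_eq(rho, u, v):
    """Equilibrium distribution f_i^eq for each of the 9 directions."""
    eu = E[:, 0] * u + E[:, 1] * v            # e_i . u
    usq = u**2 + v**2
    return W * rho * (1.0 + 3.0 * eu + 4.5 * eu**2 - 1.5 * usq)

feq = f_eq(rho=1.0, u=0.1, v=0.0)
# Moments recover the macroscopic fields exactly:
# sum_i f_i^eq = rho, and sum_i e_i f_i^eq = rho * (u, v)
```

The exactness of these moments is what lets NN_eq predict only (ρ, u, v) and reconstruct f_i^eq analytically, leaving NN_neq free to model the non-equilibrium part.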

2.1.2. Loss Function Design

For effective training of the PINN-MRT model, a loss function is formulated incorporating the physical residuals, boundary conditions, initial conditions, and data-driven terms, as shown in Equation (2) [24,25].
L = \beta_1 L_{\mathrm{PDE}} + \beta_2 L_{\mathrm{BC}} + \beta_3 L_{\mathrm{IC}} + \beta_4 L_{\mathrm{DATA}},
where β_i (i = 1, 2, 3, 4) denote the weight coefficients corresponding to the respective loss components. To ensure consistency and fairness in comparison with other models, this study does not perform loss weight optimization but instead adopts fixed unit coefficients (i.e., β_i = 1) for active loss terms, while setting β_i = 0 for inactive components. This design choice allows all models to share identical training settings, ensuring that any performance difference arises solely from the underlying physical residual formulation rather than from different weighting strategies.
Although adaptive weighting schemes such as GradNorm or uncertainty-based reweighting can dynamically balance multiple loss components, they were deliberately not employed in this work for two main reasons. First, the primary goal is to isolate the intrinsic influence of the physics-informed residual formulation under controlled and consistent training conditions. Incorporating adaptive mechanisms would introduce model-dependent optimization effects that could obscure this comparison. Second, fixed weights enhance reproducibility and transparency, allowing all reported results to be replicated without additional hyperparameter tuning. Notably, the proposed PINN-MRT achieves stable convergence and low errors even without adaptive balancing, indicating that the embedded MRT formulation inherently improves the conditioning of the composite loss function. Nevertheless, adaptive weighting remains a promising future extension and is explicitly noted in Section 5 as a direction for continued investigation.
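With fixed unit weights, the composite loss of Equation (2) reduces to a masked sum over active terms; a minimal sketch (the function and dictionary names are hypothetical):

```python
def total_loss(terms, betas):
    """Composite loss L = sum_k beta_k * L_k with fixed coefficients.

    Inactive components are switched off by beta_k = 0, so every model
    can share one training loop regardless of which terms it uses.
    """
    return sum(betas[k] * terms[k] for k in terms)

# Inverse-problem setting from Table 1: the IC term is inactive (beta_3 = 0)
terms = {"PDE": 0.5, "BC": 0.2, "IC": 0.9, "DATA": 0.1}   # dummy values
betas = {"PDE": 1.0, "BC": 1.0, "IC": 0.0, "DATA": 1.0}
L = total_loss(terms, betas)   # 0.5 + 0.2 + 0.1 = 0.8
```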
The physics residual loss L PDE ensures that the neural network predictions satisfy the governing dynamics derived from the MRT-LBM equations. The specific equation can be expressed as [26,27,28]
L_{\mathrm{PDE}} = \frac{1}{N_e} \sum_{i=0}^{Q-1} \sum_{j=0}^{N_t-1} \left| R_i(x_j, t_j) \right|^2,
R_i = \frac{\partial f_i}{\partial t} + \mathbf{e}_i \cdot \nabla f_i + \left[ M^{-1} \Lambda M \right]_{ij} \left( f_j - f_j^{\mathrm{eq}} \right),
where R_i denotes the residual of the MRT-LBM evolution in the i-th discrete velocity direction, Q refers to the total number of discrete velocity directions, and N_t and N_e specify the numbers of temporal and internal sampling points, respectively. The spatial coordinate for the j-th sampling point is given by x_j, with t_j representing its corresponding temporal coordinate. The collocation points used in L_PDE are drawn once from a uniform random distribution and remain fixed during training, providing a consistent basis for all compared models.
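A hedged sketch of how the residual R_i of Equation (4) could be assembled with automatic differentiation; the toy network, the placeholder M and Λ, and all names here are illustrative stand-ins rather than the paper's implementation:

```python
import torch

Q = 9
E = torch.tensor([[0., 0.], [1., 0.], [0., 1.], [-1., 0.], [0., -1.],
                  [1., 1.], [-1., 1.], [-1., -1.], [1., -1.]])
# Placeholders: in the paper, M and Lambda come from MRT theory
# (moment transform and diagonal relaxation matrix).
M = torch.eye(Q)
Lam = torch.diag(torch.linspace(0.6, 1.8, Q))
C = torch.linalg.inv(M) @ Lam @ M                    # M^{-1} Lambda M

def mrt_residual(net, feq_fn, x, y, t):
    """R_i = df_i/dt + e_i . grad f_i + sum_j [M^{-1} Lam M]_{ij} (f_j - f_j^eq)."""
    f = net(torch.stack([x, y, t], dim=1))           # (N, Q)
    feq = feq_fn(x, y, t)                            # (N, Q)
    cols = []
    for i in range(Q):
        f_t, f_x, f_y = torch.autograd.grad(
            f[:, i].sum(), (t, x, y), create_graph=True)
        adv = f_t + E[i, 0] * f_x + E[i, 1] * f_y    # transport terms
        coll = (f - feq) @ C[i, :]                   # collision term
        cols.append(adv + coll)
    return torch.stack(cols, dim=1)                  # (N, Q)

# Usage with a toy network and a zero equilibrium (illustration only)
net = torch.nn.Sequential(torch.nn.Linear(3, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, Q))
x = torch.rand(8, requires_grad=True)
y = torch.rand(8, requires_grad=True)
t = torch.rand(8, requires_grad=True)
R = mrt_residual(net, lambda x, y, t: torch.zeros(len(x), Q), x, y, t)
loss_pde = (R ** 2).mean()                           # mean-squared residual, as in L_PDE
```

Because `create_graph=True` keeps the derivative graph, `loss_pde` remains differentiable with respect to the network weights, as required for training.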
The boundary condition loss L_BC is introduced to enforce that the predicted solutions strictly satisfy the prescribed boundary conditions, thereby ensuring physical consistency at the domain boundaries. In the LBM kinetic framework, it is further divided into three components:
  • Macroscopic boundary loss L_{mBC}: Constrains the predicted macroscopic variables on the boundaries to match the prescribed physical values.
  • Distribution function consistency loss L_{fBC}: Constrains the predicted equilibrium and non-equilibrium distribution functions to match the theoretically defined boundary distributions.
  • Boundary PDE residual loss L_{eBC}: Imposes constraints on the neural network outputs by penalizing the residuals of the governing equations at the boundaries.
The equations for the three components are given by
L_{mBC} = \frac{1}{N_b} \sum_{j=0}^{N_b-1} \left| \phi_{NN}^{\mathrm{eq}}(x_{b,j}, t_j) - \phi(x_{b,j}, t_j) \right|^2,
L_{fBC} = \frac{1}{N_b} \sum_{j=0}^{N_b-1} \sum_{\xi_i \cdot n > 0} \left| f_{i,NN}^{\mathrm{neq}}(x_{b,j}, t_j) - f_i^{\mathrm{neq}}(x_{b,j}, t_j) \right|^2,
L_{eBC} = \frac{1}{N_b} \sum_{j=0}^{N_b-1} \sum_{\xi_i \cdot n < 0} \left| R_i(x_{b,j}, t_j) \right|^2.
Thus the formula for L_BC is
L_{BC} = L_{mBC} + L_{fBC} + L_{eBC}.
In Equation (5), \phi_{NN}^{eq} denotes the predicted macroscopic physical quantities (e.g., velocity or density), \phi represents the ground-truth boundary condition, N_b is the number of boundary sampling points, and x_{b,j} and t_j denote the spatial and temporal coordinates, respectively, of the j-th boundary point. In Equation (6), f_{i,NN}^{neq} represents the non-equilibrium distribution function in the i-th discrete velocity direction predicted by the neural network, f_i^{neq} is the theoretical reference value at the boundary, and \xi_i \cdot n > 0 indicates that only outgoing velocity directions are considered (from the interior to the boundary). Equation (7) indicates that the evolution equation in the incident direction remains enforced at the boundary.
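The direction masks ξ_i · n > 0 and ξ_i · n < 0 in Equations (6) and (7) can be built directly from the discrete velocity set; a small sketch for the top lid of the cavity (outward normal (0, 1)), with illustrative names:

```python
import numpy as np

E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])   # D2Q9 velocities

def split_directions(n):
    """Partition discrete velocities by the sign of e_i . n for a wall with
    outward normal n: 'outgoing' directions feed L_fBC, 'incoming' feed L_eBC."""
    dots = E @ np.asarray(n)
    return np.where(dots > 0)[0], np.where(dots < 0)[0]

# Top lid: outward normal points in +y
out_ids, in_ids = split_directions((0, 1))
# out_ids -> velocities with positive y-component: indices 2, 5, 6
# in_ids  -> velocities with negative y-component: indices 4, 7, 8
```

Directions with e_i · n = 0 (tangential to the wall) fall into neither mask, matching the strict inequalities in Equations (6) and (7).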
The initial condition loss L IC ensures that the predicted initial state strictly conforms to the known reference solution. The equation is given by
L_{\mathrm{IC}} = \frac{1}{N_i} \sum_{j=0}^{N_i-1} \left| \phi_{NN}^{\mathrm{eq}}(x_j, 0) - \phi(x_j, 0) \right|^2,
with N i representing the number of sampling points.
The data-driven loss L DATA minimizes the discrepancy between the predicted values and the experimentally measured or observed data. It is defined as
L_{\mathrm{DATA}} = \frac{1}{N_d} \sum_{j=0}^{N_d-1} \left| \hat{u}(x_u^j, t_u^j) - u(x_u^j, t_u^j) \right|^2,
where N_d denotes the total number of data points, \hat{u} is the predicted velocity, and u represents the observed velocity data; this term guides the neural network to better approximate the actual physical data.

2.2. MRT-LBM

The LBM components required by the method are summarized here, with full derivations deferred to Appendix A. On the D2Q9 lattice, the moment transform M and the diagonal relaxation matrix Λ decouple hydrodynamic and non-hydrodynamic modes, enabling independent control of physical and numerical modes through the MRT framework. This decoupling significantly improves numerical stability and accuracy compared to the SRT or BGK model, particularly in high-Re flows [17,29]. The kinematic viscosity is related to the shear-mode relaxation time τ_ν via ν = c_s^2 (τ_ν − 1/2) δ_t, while other relaxation times can be tuned to suppress numerical instabilities without affecting physical dynamics.
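The viscosity–relaxation relation above, combined with ν = UL/Re used later for the benchmark, fixes the shear relaxation time once Re is chosen; a small sketch in lattice units (c_s² = 1/3, δ_t = 1; function names are illustrative):

```python
# Lattice-unit constants for D2Q9: c_s^2 = 1/3, delta_t = 1
CS2, DT = 1.0 / 3.0, 1.0

def nu_from_re(U, L, Re):
    """Kinematic viscosity from the Reynolds number: nu = U * L / Re."""
    return U * L / Re

def tau_from_nu(nu):
    """Shear relaxation time from nu = c_s^2 * (tau - 1/2) * delta_t."""
    return nu / (CS2 * DT) + 0.5

nu = nu_from_re(U=1.0, L=1.0, Re=1000)   # 0.001
tau = tau_from_nu(nu)                    # 0.503, close to the stability bound 1/2
```

Note how high Re pushes τ_ν toward 1/2, the regime where single-relaxation-time schemes become stiff; the MRT framework leaves the remaining relaxation rates free to be tuned for stability.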

3. Benchmark Modeling

The classical lid-driven cavity flow is selected as the benchmark case to evaluate the model performance. As shown in Figure 2, the computational domain is a unit square with spatial coordinates ( x , y ) [ 0 , 1 ] 2 . A constant velocity of U = 1 , V = 0 is prescribed on the top boundary, and the remaining walls are subjected to no-slip conditions (i.e., stationary). This setup induces a characteristic steady-state vortex structure that varies with Re, making it a widely adopted benchmark for assessing the predictive capability of both forward and inverse modeling approaches.
To enable a systematic comparison of different levels of physical embedding in physics-informed neural networks, three distinct formulations are considered:
  • Standard PINN: Enforces the macroscopic incompressible N–S equations directly as PDE residuals.
  • PINN-SRT: Incorporates the SRT approximation by using a scalar relaxation matrix Λ = τ^{-1} I in the LBE.
  • PINN-MRT: Employs the MRT collision model, utilizing a full diagonal relaxation matrix Λ derived from MRT theory to better capture kinetic effects.
The macroscopic dynamics are governed by the incompressible N-S equations [30]
\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,
u \frac{\partial u}{\partial x} + v \frac{\partial u}{\partial y} = -\frac{1}{\rho} \frac{\partial p}{\partial x} + \nu \left( \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \right),
u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y} = -\frac{1}{\rho} \frac{\partial p}{\partial y} + \nu \left( \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} \right),
where ( u , v ) denote the velocity components in the x and y directions, respectively, p is the pressure, and ν represents the kinematic viscosity, which can be expressed in terms of the Re as
\nu = U L / \mathrm{Re},
where U represents the lid velocity and L denotes the characteristic length (i.e., the cavity length).
The three models share the same network architecture and training configuration to ensure a fair comparison. The detailed configurations for each model in the inverse problem are listed in Table 1. Table 2 summarizes the corresponding configurations for the forward problem [10,14,31,32].
Notably, the treatment of the lid velocity U lid differs between forward and inverse problems. In forward simulations, U lid = 1 is imposed as a fixed boundary condition, and the model predicts the full velocity field. In contrast, for inverse problems, both U lid and the kinematic viscosity ν are treated as unknown parameters to be inferred from sparse and potentially noisy observations.
The reference data for both problem types are generated using a high-resolution FDM solver. All spatial points used for collocation, boundary conditions, and observational data are uniformly sampled at the start of training and remain unchanged throughout the process, which corresponds to a static sampling strategy. No adaptive refinement or iterative resampling is applied. A fixed random seed (2341) is used to ensure reproducibility across all experiments.
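The static sampling strategy described above (one-shot uniform draws under the fixed seed 2341) might look like the following; the unit-square ranges and function name are assumptions for illustration:

```python
import numpy as np

def sample_points(seed=2341, n_col=20000, n_data=1000):
    """Static sampling: point sets are drawn once from a uniform distribution
    with a fixed seed and reused unchanged for all models and all training
    iterations (no adaptive refinement or resampling)."""
    rng = np.random.default_rng(seed)
    col = rng.uniform(0.0, 1.0, size=(n_col, 2))    # interior collocation pts
    obs = rng.uniform(0.0, 1.0, size=(n_data, 2))   # sparse observation pts
    return col, obs

# The fixed seed makes every run draw identical point sets,
# so differences between models cannot come from sampling noise.
a, _ = sample_points()
b, _ = sample_points()
```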
Spatial and temporal derivatives in the physics-informed loss are computed using automatic differentiation (AD) through the PyTorch framework (version 2.7.1 with CUDA 11.8 support). This approach provides exact gradients of the neural network outputs with respect to the input coordinates without discretization errors, unlike finite difference schemes. As a result, the loss function remains fully differentiable with respect to both network parameters and physical parameters such as ν and U lid .
Table 1. Configurations and training settings for inverse problem.
| Category | Description |
|---|---|
| Hardware | NVIDIA RTX 4070 GPU (16 GB memory) |
| Framework | PyTorch |
| Batch size | 1024 |
| Network structure | 5-layer fully connected network (1 input, 3 hidden, 1 output) |
| Hidden neurons | 60 neurons per hidden layer |
| Activation function | Tanh |
| Trainable parameters | Both ν and U_lid, randomly initialized in (0, 1) |
| Loss function | β1 = β2 = β4 = 1, β3 = 0 |
| Data source | FDM |
| Data location | Random sampling |
| Sampling distribution | 20,000 collocation pts; 5000 boundary pts; 1000 data pts |
| Optimizer | Adam (80,000 iterations, initial LR 10^-3, exponential decay) |
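One way the inverse-problem settings in Table 1 could be wired together in PyTorch: ν and U_lid as trainable scalars initialized in (0, 1), optimized jointly with the network weights by Adam under exponential LR decay. The toy network, the stand-in loss, and the decay factor `gamma` are assumptions, not values from the paper:

```python
import torch

# Trainable physical parameters, randomly initialized in (0, 1)
nu = torch.nn.Parameter(torch.rand(1))
u_lid = torch.nn.Parameter(torch.rand(1))
net = torch.nn.Sequential(torch.nn.Linear(3, 60), torch.nn.Tanh(),
                          torch.nn.Linear(60, 3))

# Physical parameters join the network weights in one optimizer
opt = torch.optim.Adam(list(net.parameters()) + [nu, u_lid], lr=1e-3)
sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=0.9999)

for step in range(5):                     # 80,000 iterations in the paper
    opt.zero_grad()
    pred = net(torch.rand(16, 3))
    # Stand-in for the composite loss of Equation (2)
    loss = (pred ** 2).mean() + nu.pow(2).sum() + u_lid.pow(2).sum()
    loss.backward()
    opt.step()
    sched.step()                          # exponential learning-rate decay
```

Because ν and U_lid carry gradients like any other parameter, Adam updates them from the same composite loss that trains the network, which is what makes the inverse problem a single joint optimization.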
Table 2. Configurations and training settings for forward problem.
| Category | Description |
|---|---|
| Hardware | NVIDIA RTX 4070 GPU (16 GB memory) |
| Framework | PyTorch |
| Batch size | 1024 |
| Network structure | 7-layer fully connected network (1 input, 5 hidden, 1 output) |
| Hidden neurons | 60 per hidden layer in NN_eq; 100 in NN_neq |
| Activation function | Tanh |
| Trainable parameters | None |
| Loss function | β1 = β2 = 1, β3 = β4 = 0 |
| Data source | None |
| Data location | None |
| Sampling distribution | 20,000 collocation pts; 5000 boundary pts |
| Optimizer | Adam (80,000 iterations, initial LR 10^-3, exponential decay) |
The complete training workflow, including data preparation, dual-network initialization, loss construction, and iterative optimization with the Adam optimizer, is illustrated in Figure 3 for the inverse and forward problems.

4. Results

4.1. Inverse Problem

The subsequent analysis of the inverse problem is divided into two parts: Flow Field Analysis, focusing on the reconstruction of physical fields, and Parameter Inversion, addressing the identification of underlying model parameters. All reported results were obtained under a fixed random seed to ensure reproducibility. As the proposed PINN-MRT framework is deterministic in both initialization and optimization, the training process converges consistently under identical conditions. Therefore, the presented solutions are representative of the model’s typical performance and can be regarded as numerically stable within the deterministic setting of the study.

4.1.1. Flow Field Analysis

The comprehensive performance comparison among the standard PINN, PINN-SRT and PINN-MRT models is conducted up to Re = 1000, since the standard PINN and PINN-SRT models become numerically unstable and fail to produce reliable results when the Re exceeds 1000. In this range, all models are systematically analyzed in terms of vortex structure reconstruction, error distribution, and velocity profile consistency. To further verify the robustness of the PINN-MRT model under more challenging flow conditions, additional evaluations at Re = 2000 and 5000 are conducted exclusively for the PINN-MRT model.
1. Vortex Structure Reconstruction and Error Distribution
At R e = 100 (Figure 4 and Figure 5), all three models recover the dominant single vortex structure, but the PINN-MRT most accurately reproduces the vortex rotation direction and centroid location. The PINN-SRT and standard PINN partially capture the primary vortex but fail to resolve the secondary corner vortices. These results highlight the advantage of incorporating MRT-LBM as a physics prior, as its decoupled relaxation mechanism improves the conditioning of the residual constraints and enables the network to better learn multiscale flow features, even at low Reynolds numbers where such structures remain weak but physically meaningful. This enhanced representational fidelity lays the foundation for reliable inverse modeling under sparse or noisy observations.
Figure 6 presents the flow field results at Re = 1000, where more complex structures appear compared to low Re. The primary vortex is mainly identified in the u velocity field as a large central rotational structure, whereas the secondary vortices are clearly observed in the v velocity field near the bottom corners. Among the models, the PINN-MRT accurately reconstructs both primary and secondary flow structures, capturing their rotational directions and spatial distributions. By comparison, the PINN-SRT partially recovers the primary vortex but fails to resolve the secondary features. The standard PINN exhibits a complete breakdown in prediction. As shown in Figure 7, the PINN-MRT maintains low error levels, with peak errors below 0.10 for u and 0.15 for v. Meanwhile, the PINN-SRT reaches peak errors of 0.30 with wider distributions, and the standard PINN yields errors exceeding 0.40, indicating clear failure.
Quantitatively, Table 3 demonstrates that the MRT model consistently yields the narrowest error bounds. The error reduction in Δ u and Δ v reaches 73.9% and 51.8% at Re = 1000 relative to PINN-SRT. These differences demonstrate the enhanced conditioning of the PINN-MRT residuals. By introducing independent relaxation rates for distinct kinetic moments, the MRT formulation mitigates the stiffness inherent in the SRT approach. The error reductions reported in Table 3 thus reflect not only numerical improvement but also a physically grounded stabilization of the optimization process.
2. Velocity Profile
Centerline velocity profiles are analyzed to evaluate the accuracy of flow reconstruction along key directions. Table 3 summarizes the absolute error ranges of the horizontal ( Δ u ) and vertical ( Δ v ) velocity components for Re = 100 and 1000, along with the error reduction percentages of PINN-MRT relative to PINN-SRT. The PINN-MRT model achieves the narrowest error bounds in both directions, with maximum velocity errors reduced by up to 74% compared to PINN-SRT, particularly at high Re. This improvement verifies that MRT residuals enhance not only accuracy but also numerical conditioning across scales.
3. High-Re Error Evaluation of PINN-MRT
Table 4 reports the relative L2 errors of the PINN-MRT model across Re = 100–5000. As the Re increases, the errors in the velocity components also rise (err_u from 9.07% to 15.28%, err_v from 11.94% to 21.26%, and err_total from 4.55% to 14.12%). This increase is physically and numerically justified. At higher Re, thin boundary layers and corner-induced secondary vortices generate strong localized gradients and small-scale flow features that are inherently difficult to resolve uniformly. Moreover, relative errors are dominated by regions with small reference magnitudes, which amplifies the local L2 ratios without compromising the overall flow fidelity.
At these higher Re, the PINN-SRT and standard PINN models fail to converge due to the coupled stiffness of shear and bulk modes, which causes gradient explosion and unstable optimization. The MRT residual decouples these modes, effectively redistributing stiffness and preserving training stability. Although the global L2 error slightly increases with Re, the flow topology and primary vortex characteristics remain correctly captured, confirming the physical consistency of the MRT formulation even under high-Re conditions.
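The relative L2 error reported in Table 4 is assumed here to be the standard norm ratio ||pred − ref||₂ / ||ref||₂ expressed in percent; a minimal sketch (names illustrative):

```python
import numpy as np

def rel_l2(pred, ref):
    """Relative L2 error in percent: 100 * ||pred - ref||_2 / ||ref||_2."""
    return 100.0 * np.linalg.norm(pred - ref) / np.linalg.norm(ref)

ref = np.array([1.0, 2.0, 2.0])
pred = ref * 1.1                 # uniform 10% overestimate
err = rel_l2(pred, ref)          # 10.0
```

Because the reference norm sits in the denominator, fields with small magnitudes (such as the v component near the cavity center) inflate the ratio, consistent with the discussion above.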
Figure 4. Inverse problem solution comparison of velocity field at Re = 100.
Figure 5. Absolute error distributions of the velocity components u and v at Re = 100.
Figure 6. Inverse problem solution comparison of velocity field at Re = 1000.
Figure 7. Absolute error distributions of the velocity components u and v at Re = 1000.
Table 3. Absolute error ranges of horizontal ( Δ u ) and vertical ( Δ v ) velocity components for all three models in inverse problem at Re = 100 and 1000. Reduction percentages are given relative to the PINN-SRT model.
| Model | Δu | Δv | Δu reduction | Δv reduction |
|---|---|---|---|---|
| **Re = 100** | | | | |
| PINN | [0.0005, 0.0763] | [0.0005, 0.0654] | – | – |
| PINN-SRT | [0.0002, 0.0531] | [0.0003, 0.0526] | – | – |
| PINN-MRT | [0.0003, 0.0447] | [0.0004, 0.0310] | 15.8% | 41.1% |
| **Re = 1000** | | | | |
| PINN | – | – | – | – |
| PINN-SRT | [0.0005, 0.2195] | [0.0001, 0.1318] | – | – |
| PINN-MRT | [0.0005, 0.0573] | [0.0001, 0.0635] | 73.9% | 51.8% |
Table 4. Relative L 2 errors (%) of PINN-MRT in inverse problem.
| PINN-MRT | err_u (%) | err_v (%) | err_total (%) |
|---|---|---|---|
| Re = 100 | 9.067 | 11.942 | 4.549 |
| Re = 1000 | 12.179 | 17.048 | 9.894 |
| Re = 2000 | 14.088 | 20.369 | 13.845 |
| Re = 5000 | 15.281 | 21.262 | 14.121 |

4.1.2. Parameter Inversion

Building on the successful flow field reconstruction, the proposed PINN-MRT model is further assessed for its capability in parameter inversion, focusing on both the kinematic viscosity coefficient ν and the lid velocity U lid .
As shown in Figure 8, the left panels present the relative L2 errors of the inferred ν and U_lid for different models, while the right panels illustrate robustness under noisy data. For the noiseless cases, both PINN and PINN-SRT exhibit rapid degradation of accuracy as Re increases, with relative errors exceeding 80% and 50%, respectively, at Re = 5000. In contrast, the PINN-MRT maintains errors below 20% for both parameters across the entire Re range, demonstrating remarkable stability and resistance to ill-conditioning at high Re.
Notably, the 20% error in the inferred U lid at high Re is physically meaningful rather than indicative of failure. This arises from the indirect inference mechanism, where U lid is determined through mesoscopic momentum consistency rather than direct boundary supervision. At high Re, strong near-wall gradients cause phase shifts and smoothing effects that magnify scalar deviations without affecting the overall flow topology or momentum balance.
Under noisy inputs, the PINN-MRT results remain stable and show only marginal degradation, confirming that the MRT residual regularization suppresses noise amplification and ensures robust parameter inference.
Figure 8. Relative L 2 errors of the inferred parameters ν (top) and U lid (bottom) under different Reynolds numbers. The (left) column compares the inversion accuracy of PINN, PINN-SRT, and PINN-MRT models, while the (right) column shows the robustness of PINN-MRT under noisy data.
The convergence behavior of all models in the inverse problem is illustrated in Appendix B (Figure A1).

4.2. Forward Problem

4.2.1. Flow Field Analysis

This section investigates the forward prediction capability of each model under known physical parameters. In this task, the networks solve the governing equations directly without observational supervision, thereby reflecting their intrinsic physical generalization ability. Comparisons are carried out at Re = 100 and 400 for all models, while higher Re tests (Re > 400) are reserved for the PINN-MRT model due to the numerical breakdown of the other two formulations.
All simulations were performed under a fixed random seed to guarantee reproducibility. Because the training process is deterministic, repeated runs under identical settings converge to the same solution, ensuring that the reported results are numerically consistent and representative.
1. Vortex Structure and Error Distribution
Unlike the inverse problem, the forward configuration tests each model’s ability to infer flow structures purely from physical constraints. Visual inspection of Figure 9 and Figure 10 shows that at R e = 100 , the PINN-MRT model reproduces smooth, symmetric streamlines consistent with the benchmark, whereas both PINN and PINN-SRT introduce noticeable asymmetry near the lid. The MRT formulation thus demonstrates a stronger tendency to preserve physical symmetry and enforce incompressibility without external data.
As the Re increases to 400 (Figure 11 and Figure 12), the separation between models becomes more pronounced. While the standard PINN and PINN-SRT fail to converge and display irregular velocity fields, the PINN-MRT maintains coherent vortex structures and smooth contours. This outcome reinforces the earlier conclusion from the inverse analysis that the decoupled relaxation mechanisms in MRT provide a well-conditioned residual system.
2. Velocity Profile and Quantitative Analysis
The quantitative behavior summarized in Table 5 highlights the same trend from a numerical perspective. At Re = 100 , both the standard PINN and PINN-SRT yield error ranges in Δ u and Δ v that are roughly twice those of the PINN-MRT. At Re = 400 , the discrepancy increases rapidly, with Δ u errors in PINN and PINN-SRT being approximately three and five times higher than those of PINN-MRT, while the Δ v errors are larger by factors of seven and six, respectively. This escalation demonstrates the limited capability of single relaxation schemes to capture the stiff and multiscale characteristics of cavity flow, whereas the MRT residual formulation preserves well-scaled gradients and maintains uniform accuracy in both directions.
3. High-Re Behavior of PINN-MRT
The relative L2 errors listed in Table 6 confirm the model’s robustness under increasingly turbulent regimes. The total error rises moderately from 12.06% at Re = 100 to 17.08% at Re = 5000, consistent with the expected difficulty of resolving thin boundary layers and corner vortices. The v component remains the most sensitive variable, reflecting the weaker magnitude of vertical velocity and its stronger dependence on secondary eddies. Nevertheless, the MRT residual maintains overall stability and correctly reproduces the global vortex topology across the entire Re range.
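The relative L2 error metric reported here is presumably the standard ratio of Euclidean norms, err = ‖pred − ref‖₂ / ‖ref‖₂, expressed in percent. A minimal sketch under that assumption (function names are illustrative, not from the paper's code):

```python
import numpy as np

def relative_l2_error(pred: np.ndarray, ref: np.ndarray) -> float:
    """Relative L2 error ||pred - ref||_2 / ||ref||_2, returned in percent."""
    return 100.0 * np.linalg.norm(pred - ref) / np.linalg.norm(ref)

def total_velocity_error(u_pred, u_ref, v_pred, v_ref) -> float:
    """Combined error over both velocity components, stacked as one vector."""
    pred = np.concatenate([u_pred.ravel(), v_pred.ravel()])
    ref = np.concatenate([u_ref.ravel(), v_ref.ravel()])
    return relative_l2_error(pred, ref)
```

Because the two components are stacked into a single vector before taking norms, the combined metric is dominated by whichever component has the larger reference magnitude, which is one plausible reading of the per-component versus total figures in Table 6.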
Figure 9. Forward problem solution comparison of velocity field at Re = 100.
Figure 10. Absolute error distributions of the velocity components u and v at Re = 100.
Figure 11. Forward problem solution comparison of velocity field at Re = 400.
Figure 12. Absolute error distributions of the velocity components u and v at Re = 400.
Table 5. Absolute error ranges of horizontal (Δu) and vertical (Δv) velocity components for all three models in the forward problem at Re = 100 and 400. Reduction percentages are given relative to the PINN-SRT model.
| Model | Δu | Δv | Δu red. | Δv red. |
|---|---|---|---|---|
| Re = 100 | | | | |
| PINN | [0.0001, 0.0765] | [0.0000, 0.0248] | | |
| PINN-SRT | [0.0004, 0.0851] | [0.0005, 0.1050] | | |
| PINN-MRT | [0.0003, 0.0447] | [0.0004, 0.0310] | 41.3% | 87.6% |
| Re = 400 | | | | |
| PINN | [0.0002, 0.3099] | [0.0059, 0.3688] | | |
| PINN-SRT | [0.0041, 0.5389] | [0.0003, 0.3197] | | |
| PINN-MRT | [0.0003, 0.1114] | [0.0004, 0.0508] | 64.5% | 86.2% |
Table 6. Relative L2 errors (%) of PINN-MRT in forward problems.
| PINN-MRT | err_u (%) | err_v (%) | err_total (%) |
|---|---|---|---|
| Re = 100 | 13.436 | 22.014 | 12.058 |
| Re = 400 | 14.328 | 22.669 | 14.145 |
| Re = 1000 | 17.086 | 22.834 | 15.910 |
| Re = 2000 | 17.152 | 22.962 | 16.691 |
| Re = 5000 | 18.003 | 23.815 | 17.076 |

4.2.2. Viscosity Sensitivity

To assess model stability against parameter perturbations, the kinematic viscosity ν was systematically varied. Figure 13 displays the corresponding relative L2 errors of the reconstructed velocity components (u, v, and total) under viscosity deviations up to 20%. Across all test cases, the PINN-MRT maintains nearly constant error levels, confirming its robustness to parameter uncertainties. Slightly higher v component errors at large Re reflect the sensitivity of vertical motion to small changes in boundary-layer thickness. These results demonstrate that the MRT formulation provides both numerical and physical stability under parametric variations.
The forward analysis complements the inverse results by demonstrating the predictive capability of the PINN-MRT model in purely physics-driven settings. Quantitative comparisons in Table 5 and Table 6 reveal that the MRT residual formulation sustains accurate, physically consistent solutions across a wide range of Re and viscosity perturbations, whereas single relaxation models collapse under identical deterministic conditions. Taken together, these findings confirm that the proposed PINN-MRT framework achieves both forward predictability and inverse consistency through the same physically grounded mechanism of multiscale relaxation.
To further assess the model’s stability across different flow regimes, convergence curves of the PINN, PINN-SRT, and PINN-MRT models, as well as the PINN-MRT under various Re, are provided in Appendix B (Figure A2).

4.3. Efficiency and Stability

The dual-network architecture incurs a higher computational cost per iteration than the standard single-network PINN. Each step requires forward and backward evaluations of two neural networks, along with the computation of mesoscopic residuals across all nine discrete velocities through automatic differentiation. This increases the number of trainable parameters and gradient computations, lengthening each iteration. As shown in Table 7, the average time per thousand iterations rises from 55 s for PINN to 67 s for PINN-MRT, leading to a total training duration of approximately 2.0 h under the full 80,000-iteration setting.
Despite this increase in per-iteration cost, the proposed PINN-MRT demonstrates exceptional training stability and robustness, achieving a 100% convergence rate across all test cases. In contrast, the baseline PINN converges in only 33.33% of cases, and PINN-SRT performs better but still fails in the most complex configurations.
This dramatic improvement in reliability means that PINN-MRT rarely requires retraining due to divergence, making it significantly more efficient in practice. The method’s ability to consistently deliver physically accurate solutions across a wide range of conditions enhances its suitability for data-free modeling tasks, where robustness is as critical as speed. Thus, the additional computational expense per iteration is well justified by the substantial gains in convergence reliability and workflow efficiency.
Table 7. Computational cost and convergence characteristics of different models (trained on an NVIDIA GeForce RTX 4070 Laptop GPU).
| Model | Network Type | Iterations | Time per 1k Iter. (s) | Total Runtime (h) | Convergence Rate (%) |
|---|---|---|---|---|---|
| PINN | Single-network | 80,000 | 55 ± 5 | 1.3 ± 0.1 | 33.33 |
| PINN-SRT | Dual-network | 80,000 | 65 ± 6 | 1.8 ± 0.1 | 55.55 |
| PINN-MRT | Dual-network | 80,000 | 67 ± 7 | 2.0 ± 0.2 | 100.00 |
Note: Convergence rate is defined as the percentage of successful runs across nine test cases: forward problems at Re = 100, 400, 1000, 2000, 5000 and inverse problems at Re = 100, 1000, 2000, 5000. A run is considered converged if the relative L2 error of the velocity fields remains below 15% after training.
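The convergence criterion in the note above amounts to a simple threshold filter over the final errors of the nine runs. A sketch of that bookkeeping; the run errors below are illustrative placeholders, not measured values from the paper:

```python
def convergence_rate(final_errors, threshold=15.0):
    """Percentage of runs whose final relative L2 velocity error (%) is below threshold."""
    converged = sum(1 for e in final_errors if e < threshold)
    return 100.0 * converged / len(final_errors)

# Hypothetical final errors for nine test cases (placeholder numbers):
# runs exceeding the 15% threshold count as failed.
hypothetical_runs = [9.8, 11.2, 13.9, 55.0, 72.4, 81.0, 60.3, 90.1, 77.7]
rate = convergence_rate(hypothetical_runs)  # 3 of 9 below 15% -> 33.33%
```

Under this definition a single diverged run drags the rate down by 1/9 ≈ 11 percentage points, which explains the coarse 33.33/55.55/100.00 values in Table 7.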

5. Conclusions

In this study, we introduced PINN-MRT, a novel hybrid architecture that integrates MRT-LBM into PINNs, demonstrating significantly enhanced stability and accuracy for CFD simulations, particularly in high-Re regimes. Our core finding is that the superiority of PINN-MRT stems from the MRT mechanism’s ability to fundamentally improve the optimization landscape by decoupling the gradients of different physical modes, which mitigates the training difficulties inherent in standard PINN formulations. The proposed dual-network architecture further aids this process by separating the learning tasks for equilibrium and non-equilibrium components. While the current work is validated on 2D steady flows with preset relaxation parameters, its robust performance highlights its considerable potential. From a fluid-mechanics standpoint, the same formulation can be naturally extended to unsteady flows by incorporating temporal derivatives into the residual equations, allowing the model to capture transient phenomena such as vortex shedding and oscillatory boundary layers. This extension mainly involves ensuring temporal stability and appropriate time-step consistency during rapid flow evolution. Furthermore, applying the method to three-dimensional flows would require resolving all three velocity components and handling secondary vortices and anisotropic shear effects, which substantially increase computational cost but remain physically compatible with the MRT framework.
Future research will focus on extending the PINN-MRT framework to more complex scenarios, including three-dimensional unsteady turbulence (e.g., LES) and developing adaptive optimization strategies for the relaxation parameters. This approach establishes a promising pathway for applying deep learning to solve challenging multi-physics problems that remain intractable for conventional methods.
Despite the demonstrated advantages, several limitations of the present study should be acknowledged. First, the proposed PINN-MRT framework incurs a relatively high computational cost due to the dual-network structure and the additional mesoscopic constraints, and its scalability to large-scale or real-time simulations has not yet been systematically assessed. Second, a formal analysis of error propagation from the MRT relaxation parameters to the macroscopic PINN outputs remains to be developed, which would provide deeper insights into the sensitivity and robustness of the coupled learning process. Third, the current validation is limited to two-dimensional, single-phase steady flows, and the extension to three-dimensional or multiphase configurations requires further investigation. Finally, the present work employs fixed weighting coefficients in the composite loss function; introducing adaptive weighting schemes could further improve convergence efficiency and balance among competing physical constraints. Addressing these aspects constitutes an important direction for future research toward a more general, efficient, and physically consistent PINN-MRT framework.

Author Contributions

Conceptualization, M.S. and Q.H.; Data curation, M.F. and L.K.; Formal analysis, C.Y. (Cheng Yin); Funding acquisition, M.S., Y.Y., and Q.H.; Investigation, M.F. and C.Y. (Chenghui Yang); Methodology, M.F. and C.Y. (Chenghui Yang); Project administration, C.Y. (Cheng Yin) and Q.H.; Software, M.S. and Y.Y.; Supervision, Y.Y. and C.Y. (Cheng Yin); Validation, L.K. and C.Y. (Chenghui Yang); Writing—original draft, M.F.; Writing—review & editing, M.S. All authors have read and agreed to the published version of the manuscript.

Funding

Financial support from the National Natural Science Foundation of China (Grant Nos. 12474453, 12174085, and 12404530) is gratefully acknowledged.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors are grateful for the valuable comments and suggestions from the respected reviewers, which have greatly enhanced the quality and significance of this work.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

The lattice Boltzmann method (LBM) is a mesoscopic numerical approach that models fluid dynamics through the evolution of particle distribution functions in discretized space, time, and velocity space. Its theoretical foundation stems from the continuous Boltzmann equation [33]
$$\frac{\partial f}{\partial t} + \boldsymbol{\xi} \cdot \nabla f = \Omega(f),$$
where f is the particle distribution function in continuous velocity space, ξ denotes the microscopic particle velocity, and Ω ( f ) is the collision operator describing the relaxation toward local thermodynamic equilibrium.
A widely used simplification is the single-relaxation-time (SRT) or Bhatnagar–Gross–Krook (BGK) approximation, which linearizes the collision term as [34]
$$\frac{\partial f}{\partial t} + \boldsymbol{\xi} \cdot \nabla f = -\frac{1}{\tau}\left(f - f^{\mathrm{eq}}\right),$$
where $\tau$ is the relaxation time, directly related to the fluid’s kinematic viscosity, and $f^{\mathrm{eq}}$ is the equilibrium distribution function, given by
$$f^{\mathrm{eq}} = \frac{\rho}{(2\pi R T)^{D/2}} \exp\left(-\frac{|\boldsymbol{\xi} - \mathbf{u}|^{2}}{2RT}\right).$$
Here, ρ and u are the macroscopic density and velocity, respectively. However, the BGK model applies a single relaxation parameter to all hydrodynamic and non-hydrodynamic modes, leading to intrinsic coupling between physical processes and limiting numerical flexibility.
To address this limitation, the multi-relaxation-time (MRT) model performs the collision operation in moment space, allowing independent relaxation rates for different physical modes. The evolution equation in MRT-LBM is expressed as [29,35]
$$\frac{\partial f}{\partial t} + \boldsymbol{\xi} \cdot \nabla f = -M^{-1}\Lambda M\left(f - f^{\mathrm{eq}}\right),$$
where M is an orthogonal transformation matrix that projects the distribution function f from velocity space to moment space, and Λ is a diagonal matrix containing the relaxation rates for each moment. In this study, the D2Q9 lattice model is adopted.
The discrete form of the evolution equation is
$$\frac{\partial f_i}{\partial t} + \mathbf{e}_i \cdot \nabla f_i = -\left(M^{-1}\Lambda M\right)_{ij}\left(f_j - f_j^{\mathrm{eq}}\right),$$
with the discrete equilibrium distribution function defined as
$$f_j^{\mathrm{eq}}(\mathbf{x}, t) = w_j \rho \left[1 + \frac{\mathbf{e}_j \cdot \mathbf{u}}{c_s^2} + \frac{(\mathbf{e}_j \cdot \mathbf{u})^2}{2 c_s^4} - \frac{\mathbf{u} \cdot \mathbf{u}}{2 c_s^2}\right],$$
where w j are the weight coefficients:
$$w_j = \begin{cases} \dfrac{4}{9}, & j = 0, \\[4pt] \dfrac{1}{9}, & j = 1, 2, 3, 4, \\[4pt] \dfrac{1}{36}, & j = 5, 6, 7, 8. \end{cases}$$
The macroscopic variables are recovered by moment summation:
$$\rho = \sum_{i=0}^{8} f_i, \qquad \rho \mathbf{u} = \sum_{i=0}^{8} \mathbf{e}_i f_i,$$
where e i are the discrete velocities:
$$\mathbf{e}_i = \begin{cases} (0, 0), & i = 0, \\[4pt] \left(\cos\dfrac{(i-1)\pi}{2},\; \sin\dfrac{(i-1)\pi}{2}\right), & i = 1, 2, 3, 4, \\[4pt] \sqrt{2}\left(\cos\dfrac{(2i-1)\pi}{4},\; \sin\dfrac{(2i-1)\pi}{4}\right), & i = 5, 6, 7, 8. \end{cases}$$
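The D2Q9 definitions above can be checked numerically: for the second-order equilibrium, the zeroth and first moment sums recover ρ and ρu exactly, since the lattice weights satisfy Σw_j = 1, Σw_j e_j = 0, and Σw_j e_jα e_jβ = c_s² δ_αβ. A self-contained sketch (variable names are ours, not from the paper's code):

```python
import numpy as np

# D2Q9 lattice: discrete velocities e_i and weights w_i, in the ordering defined above
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]], dtype=float)
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
CS2 = 1.0 / 3.0  # squared lattice sound speed, c_s^2

def f_eq(rho: float, u: np.ndarray) -> np.ndarray:
    """Second-order discrete equilibrium distribution for D2Q9."""
    eu = E @ u                    # e_j . u for each discrete direction
    uu = u @ u                    # u . u
    return W * rho * (1.0 + eu / CS2 + eu**2 / (2.0 * CS2**2) - uu / (2.0 * CS2))

# Moment recovery: rho = sum_i f_i and rho*u = sum_i e_i f_i hold exactly for f_eq
rho, u = 1.0, np.array([0.05, -0.02])
f = f_eq(rho, u)
```

This exactness is what lets the dual-network architecture predict macroscopic variables and reconstruct the equilibrium part analytically, leaving only the non-equilibrium part to the second network.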
The transformation matrix M for the D2Q9 model is [36]
$$M = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ -4 & -1 & -1 & -1 & -1 & 2 & 2 & 2 & 2 \\ 4 & -2 & -2 & -2 & -2 & 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & -1 & 0 & 1 & -1 & -1 & 1 \\ 0 & -2 & 0 & 2 & 0 & 1 & -1 & -1 & 1 \\ 0 & 0 & 1 & 0 & -1 & 1 & 1 & -1 & -1 \\ 0 & 0 & -2 & 0 & 2 & 1 & 1 & -1 & -1 \\ 0 & 1 & -1 & 1 & -1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & -1 & 1 & -1 \end{pmatrix}.$$
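The rows of this standard D2Q9 transformation matrix are mutually orthogonal (though not normalized), so M Mᵀ is diagonal and the inverse needed in the collision term is available in closed form as M⁻¹ = Mᵀ (M Mᵀ)⁻¹. A small numerical check of both properties:

```python
import numpy as np

# Standard D2Q9 MRT transformation matrix
# (rows map f to moments: rho, e, f, j_x, q_x, j_y, q_y, p_xx, p_xy)
M = np.array([
    [ 1,  1,  1,  1,  1,  1,  1,  1,  1],
    [-4, -1, -1, -1, -1,  2,  2,  2,  2],
    [ 4, -2, -2, -2, -2,  1,  1,  1,  1],
    [ 0,  1,  0, -1,  0,  1, -1, -1,  1],
    [ 0, -2,  0,  2,  0,  1, -1, -1,  1],
    [ 0,  0,  1,  0, -1,  1,  1, -1, -1],
    [ 0,  0, -2,  0,  2,  1,  1, -1, -1],
    [ 0,  1, -1,  1, -1,  0,  0,  0,  0],
    [ 0,  0,  0,  0,  0,  1, -1,  1, -1],
], dtype=float)

# Row orthogonality: G = M M^T is diagonal, so the inverse is M^T G^{-1}
G = M @ M.T
Minv = M.T @ np.diag(1.0 / np.diag(G))
```

This cheap closed-form inverse means the M⁻¹ΛM residual can be assembled without a numerical matrix inversion at every training step.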
The relaxation matrix is
$$\Lambda = \mathrm{diag}\left(\tau_\rho^{-1}, \tau_e^{-1}, \tau_f^{-1}, \tau_j^{-1}, \tau_q^{-1}, \tau_j^{-1}, \tau_q^{-1}, \tau_\nu^{-1}, \tau_\nu^{-1}\right),$$
where the subscripts denote relaxation rates for density ($\rho$), energy ($e$), square of energy ($f$), momentum flux ($j$), energy flux ($q$), and the shear stress/pressure tensor ($\nu$). In this work, we set $\tau_\rho^{-1} = \tau_j^{-1} = 1.0$, $\tau_e^{-1} = \tau_f^{-1} = 0.8$, and $\tau_q^{-1} = 1.1$, while $\tau_\nu^{-1}$ is determined by the target kinematic viscosity $\nu$ via
$$\nu = c_s^2 \left(\tau_\nu - \frac{1}{2}\right)\delta_t,$$
with $c_s = 1/\sqrt{3}$ being the lattice sound speed in D2Q9, and $\delta_t$ the time step [37,38].
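Inverting the viscosity relation gives the shear relaxation time directly, τ_ν = ν/(c_s² δ_t) + 1/2, which is how the free parameter τ_ν⁻¹ is pinned to a target Reynolds number. A sketch (the example viscosity is illustrative, corresponding to hypothetical lattice units U = 0.1, L = 1 at Re = 1000):

```python
CS2 = 1.0 / 3.0  # c_s^2 for D2Q9

def tau_nu_from_viscosity(nu: float, dt: float = 1.0) -> float:
    """Invert nu = c_s^2 (tau_nu - 1/2) dt for the shear relaxation time."""
    return nu / (CS2 * dt) + 0.5

# Illustrative example: nu = U * L / Re = 0.1 * 1 / 1000 = 1e-4 in lattice units
tau = tau_nu_from_viscosity(1e-4)  # -> 0.5003
```

Note that τ_ν approaches the stability limit 1/2 as ν → 0, which is precisely the high-Re regime where single-relaxation-time schemes become fragile and the independent MRT rates help.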

Appendix B

For the inverse problem (exemplified by Re = 1000 in Figure A1), the left panel shows that PINN-MRT and PINN-SRT exhibit smoother loss decay and lower final convergence values than PINN, demonstrating that the MRT physical constraints enhance optimization stability for inverse problems, while PINN shows more oscillatory behavior. The right panel of Figure A1 shows that the PINN-MRT curves at Re = 100, 400, 1000, 2000, and 5000 all decay monotonically, indicating good adaptability across flow regimes without oscillation or premature stagnation.
For the forward problem (exemplified by Re = 400 in Figure A2), the left panel reveals that PINN-MRT achieves the fastest loss reduction and lowest final value, followed by PINN-SRT, with PINN performing relatively poorly, validating the role of the MRT constraints in boosting optimization robustness. The right panel of Figure A2 indicates that the PINN-MRT curves at different Re decay stably, confirming its reliability for forward problems across flow conditions. In summary, the PINN-MRT model with MRT-LBM physical constraints enhances optimization robustness, enabling stable convergence across flow regimes for both inverse and forward problems, thus validating the effectiveness of integrating physical constraints with deep learning.
Figure A1. The (left) panel compares the loss convergence of three models for the inverse problem at the representative Re (Re = 1000), while the (right) panel presents the convergence curves of the PINN-MRT model under different Re.
Figure A2. The (left) panel compares the loss convergence of three models for the forward problem at the representative Re (Re = 400), while the (right) panel presents the convergence curves of the PINN-MRT model under different Re.

References

  1. Runchal, A.K. Evolution of CFD as an engineering science. A personal perspective with emphasis on the finite volume method. Comptes Rendus. Mécanique 2022, 350, 233–258. [Google Scholar] [CrossRef]
  2. Ranganathan, P.; Pandey, A.K.; Sirohi, R.; Hoang, A.T.; Kim, S.H. Recent advances in computational fluid dynamics (CFD) modelling of photobioreactors: Design and applications. Bioresour. Technol. 2022, 350, 126920. [Google Scholar] [CrossRef]
  3. Lee, S.; Zhao, Y.; Luo, J.; Zou, J.; Zhang, J.; Zheng, Y.; Zhang, Y. A review of flow control strategies for supersonic/hypersonic fluid dynamics. Aerosp. Res. Commun. 2024, 2, 13149. [Google Scholar] [CrossRef]
  4. Tu, J.; Yeoh, G.H.; Liu, C.; Tao, Y. Computational Fluid Dynamics: A Practical Approach; Elsevier: Amsterdam, The Netherlands, 2023. [Google Scholar]
  5. Hafeez, M.B.; Krawczuk, M. A Review: Applications of the Spectral Finite Element Method: MB Hafeez and M. Krawczuk. Arch. Comput. Methods Eng. 2023, 30, 3453–3465. [Google Scholar] [CrossRef]
  6. Xu, H.; Cantwell, C.D.; Monteserin, C.; Eskilsson, C.; Engsig-Karup, A.P.; Sherwin, S.J. Spectral/hp element methods: Recent developments, applications, and perspectives. J. Hydrodyn. 2018, 30, 1–22. [Google Scholar] [CrossRef]
  7. Droniou, J. Finite volume schemes for diffusion equations: Introduction to and review of modern methods. Math. Model. Methods Appl. Sci. 2014, 24, 1575–1619. [Google Scholar] [CrossRef]
  8. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  9. Rui, E.Z.; Zeng, G.Z.; Ni, Y.Q.; Chen, Z.W.; Hao, S. Time-averaged flow field reconstruction based on a multifidelity model using physics-informed neural network (PINN) and nonlinear information fusion. Int. J. Numer. Methods Heat Fluid Flow 2024, 34, 131–149. [Google Scholar] [CrossRef]
  10. Karniadakis, G.E.; Kevrekidis, I.G.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-informed machine learning. Nat. Rev. Phys. 2021, 3, 422–440. [Google Scholar] [CrossRef]
  11. Liu, B.; Wei, J.; Kang, L.; Liu, Y.; Rao, X. Physics-informed neural network (PINNs) for convection equations in polymer flooding reservoirs. Phys. Fluids 2025, 37, 036622. [Google Scholar] [CrossRef]
  12. Wong, J.C.; Gupta, A.; Ong, Y.S. Can transfer neuroevolution tractably solve your differential equations? IEEE Comput. Intell. Mag. 2021, 16, 14–30. [Google Scholar] [CrossRef]
  13. Willard, J.; Jia, X.; Xu, S.; Steinbach, M.; Kumar, V. Integrating scientific knowledge with machine learning for engineering and environmental systems. ACM Comput. Surv. 2022, 55, 1–37. [Google Scholar] [CrossRef]
  14. Lu, L.; Meng, X.; Mao, Z.; Karniadakis, G.E. DeepXDE: A deep learning library for solving differential equations. SIAM Rev. 2021, 63, 208–228. [Google Scholar] [CrossRef]
  15. Lee, J.; Shin, S.; Kim, T.; Park, B.; Choi, H.; Lee, A.; Choi, M.; Lee, S. Physics informed neural networks for fluid flow analysis with repetitive parameter initialization. Sci. Rep. 2025, 15, 16740. [Google Scholar] [CrossRef]
  16. d’Humieres, D. Generalized lattice-Boltzmann equations. In Rarefied Gas Dynamics; American Institute of Aeronautics and Astronautics, Inc.: Reston, VA, USA, 1992. [Google Scholar]
  17. Hamdi, M.; Elalimi, S.; Nasrallah, S.B. Large Eddy Simulation-Based Lattice Boltzmann Method with Different Collision Models. In Exergy for a Better Environment and Improved Sustainability 1: Fundamentals; Springer: Berlin/Heidelberg, Germany, 2018; pp. 661–683. [Google Scholar]
  18. Shams Taleghani, A.; Sheikholeslam Noori, M. Numerical investigation of coalescence phenomena, affected by surface acoustic waves. Eur. Phys. J. Plus 2022, 137, 975. [Google Scholar] [CrossRef]
  19. Chen, Z.; Liu, Y.; Sun, H. Physics-informed learning of governing equations from scarce data. Nat. Commun. 2021, 12, 6136. [Google Scholar] [CrossRef] [PubMed]
  20. Lou, Q.; Meng, X.; Karniadakis, G.E. Physics-informed neural networks for solving forward and inverse flow problems via the Boltzmann-BGK formulation. J. Comput. Phys. 2021, 447, 110676. [Google Scholar] [CrossRef]
  21. Zhao, B.; Sun, D.; Wu, H.; Qin, C.; Fei, Q. Physics-informed neural networks for solving inverse problems in phase field models. Neural Networks 2025, 190, 107665. [Google Scholar] [CrossRef]
  22. Guzella, M.d.S.; Cabezas-Gómez, L. Pseudopotential Lattice Boltzmann Method Simulation of Boiling Heat Transfer at Different Reduced Temperatures. Fluids 2025, 10, 90. [Google Scholar] [CrossRef]
  23. Liu, Z.; Li, S.; Ruan, J.; Zhang, W.; Zhou, L.; Huang, D.; Xu, J. A new multi-level grid multiple-relaxation-time lattice Boltzmann method with spatial interpolation. Mathematics 2023, 11, 1089. [Google Scholar] [CrossRef]
  24. Jagtap, A.D.; Mao, Z.; Adams, N.; Karniadakis, G.E. Physics-informed neural networks for inverse problems in supersonic flows. J. Comput. Phys. 2022, 466, 111402. [Google Scholar] [CrossRef]
  25. Jagtap, A.D.; Karniadakis, G.E. Extended physics-informed neural networks (XPINNs): A generalized space-time domain decomposition based deep learning framework for nonlinear partial differential equations. Commun. Comput. Phys. 2020, 28, 2002–2041. [Google Scholar] [CrossRef]
  26. Lu, L.; Pestourie, R.; Yao, W.; Wang, Z.; Verdugo, F.; Johnson, S.G. Physics-informed neural networks with hard constraints for inverse design. SIAM J. Sci. Comput. 2021, 43, B1105–B1132. [Google Scholar] [CrossRef]
  27. Peng, W.; Zhou, W.; Zhang, J.; Yao, W. Accelerating physics-informed neural network training with prior dictionaries. arXiv 2020, arXiv:2004.08151. [Google Scholar] [CrossRef]
  28. Yu, J.; Lu, L.; Meng, X.; Karniadakis, G.E. Gradient-enhanced physics-informed neural networks for forward and inverse PDE problems. Comput. Methods Appl. Mech. Eng. 2022, 393, 114823. [Google Scholar] [CrossRef]
  29. Krüger, T.; Kusumaatmaja, H.; Kuzmin, A.; Shardt, O.; Silva, G.; Viggen, E.M. The Lattice Boltzmann Method; Springer: Cham, Switzerland, 2017; Volume 10. [Google Scholar]
  30. Jin, X.; Cai, S.; Li, H.; Karniadakis, G.E. NSFnets (Navier-Stokes flow nets): Physics-informed neural networks for the incompressible Navier-Stokes equations. J. Comput. Phys. 2021, 426, 109951. [Google Scholar] [CrossRef]
  31. Markidis, S. The Old and the New: Can Physics-Informed Deep-Learning Replace Traditional Linear Solvers? arXiv 2021, arXiv:2103.09655. [Google Scholar] [CrossRef]
  32. Hao, Z.; Yao, J.; Su, C.; Su, H.; Wang, Z.; Lu, F.; Xia, Z.; Zhang, Y.; Liu, S.; Lu, L.; et al. PINNacle: A Comprehensive Benchmark of Physics-Informed Neural Networks for Solving PDEs. arXiv 2023, arXiv:2306.08827. [Google Scholar] [CrossRef]
  33. Zheng, J.; Zha, Y.; Feng, M.; Shan, M.; Yang, Y.; Yin, C.; Han, Q. Numerical investigation on jet-enhancement effect and interaction of out-of-phase cavitation bubbles excited by thermal nucleation. Ultrason. Sonochem. 2025, 118, 107365. [Google Scholar] [CrossRef]
  34. Chai, Z.; Shi, B.; Guo, Z. A multiple-relaxation-time lattice Boltzmann model for general nonlinear anisotropic convection–diffusion equations. J. Sci. Comput. 2016, 69, 355–390. [Google Scholar] [CrossRef]
  35. Yang, Y.; Tu, J.; Shan, M.; Zhang, Z.; Chen, C.; Li, H. Acoustic cavitation dynamics of bubble clusters near solid wall: A multiphase lattice Boltzmann approach. Ultrason. Sonochem. 2025, 114, 107261. [Google Scholar] [CrossRef]
  36. Shan, M.; Zha, Y.; Yang, Y.; Yang, C.; Yin, C.; Han, Q. Morphological characteristics and cleaning effects of collapsing cavitation bubble in fractal cracks. Phys. Fluids 2024, 36. [Google Scholar] [CrossRef]
  37. Shi, X.; Huang, X.; Zheng, Y.; Ji, T. A hybrid algorithm of lattice Boltzmann method and finite difference–based lattice Boltzmann method for viscous flows. Int. J. Numer. Methods Fluids 2017, 85, 641–661. [Google Scholar] [CrossRef]
  38. Cheng, X.; Su, R.; Shen, X.; Deng, T.; Zhang, D.; Chang, D.; Zhang, B.; Qiu, S. Modeling of indoor airflow around thermal manikins by multiple-relaxation-time lattice Boltzmann method with LES approaches. Numer. Heat Transf. Part A Appl. 2020, 77, 215–231. [Google Scholar] [CrossRef]
Figure 2. 2D computational domain and boundary conditions for the lid-driven cavity flow.
Figure 3. Training workflows for the inverse and forward problems.
Figure 13. Relative L 2 errors of reconstructed velocity components under kinematic viscosity perturbations from 0 % to 20 % for different Reynolds numbers.