Article

High-Accuracy Parallel Neural Networks with Hard Constraints for a Mixed Stokes/Darcy Model

Department of Mathematics, Jinan University, Guangzhou 510632, China
* Author to whom correspondence should be addressed.
Entropy 2025, 27(3), 275; https://doi.org/10.3390/e27030275
Submission received: 3 February 2025 / Revised: 1 March 2025 / Accepted: 4 March 2025 / Published: 6 March 2025
(This article belongs to the Special Issue Physics-Informed Neural Networks)

Abstract

In this paper, we study numerical algorithms based on Physics-Informed Neural Networks (PINNs) for solving a mixed Stokes/Darcy model that describes a fluid flow coupled with a porous media flow. A Hard Constrained Parallel PINN (HC-PPINN) is proposed for the mixed model, in which the boundary conditions are enforced by modifying the neural network architecture. Numerical experiments with different settings are conducted to demonstrate the accuracy and efficiency of our method by comparing it with methods based on vanilla PINNs for the mixed model.

1. Introduction

In the real world, there are many applications involving the interaction of different physical processes in different subdomains of the problem domain. We focus on the mixed Stokes/Darcy model, which models the motion of fluid between the surface and subsurface regions. The behavior of the fluid is characterized by different partial differential equations, with the Stokes equations describing the behavior in the surface region and Darcy’s law governing the behavior in the subsurface region [1,2,3,4]. The two regions are coupled by the appropriate interface conditions that ensure the conservation of mass and the balance of normal forces across the interface, and the Beavers–Joseph–Saffman interface condition, which states that the shear stress along the interface is proportional to the slip velocity along the interface [5,6,7,8]. The mixed Stokes/Darcy model boasts an extensive array of applications, e.g., groundwater systems [9,10], industrial filtration [11,12,13], blood flow in tumors [14,15], etc.
The traditional numerical techniques for resolving the mixed Stokes/Darcy model are extensively documented in the literature [2,3,16,17,18,19,20,21]. In general, there are two main approaches: one involves solving the coupled problem directly [22,23], while the other involves decoupling the mixed model first and then applying appropriate local solvers independently [24,25,26]. These methods can become difficult to apply in the presence of irregular regions and curved interfaces.
Over the last two decades, deep learning has achieved extraordinary success in a range of domains, such as computer vision and natural language processing [27]. Solving Partial Differential Equations (PDEs) via deep learning has recently surfaced as a promising topic known as Scientific Machine Learning (SciML) [28]. A representative class of results has been presented, including but not limited to the following. In 2018, M. Raissi et al. devised a deep-learning framework, namely, the Physics-Informed Neural Network (PINN), to solve forward and inverse problems of partial differential equations, and utilized it to study the equations of hydrodynamics and their inverse processes [29,30]. J. Sirignano et al. proposed the use of deep neural networks to solve high-dimensional partial differential equations, called the Deep Galerkin Method (DGM), and gave a theoretical analysis of the approximation performance of neural networks [31]. In [32], the Deep Ritz Method was proposed to deal with variational problems. In 2020, Y. Zang et al. proposed solving high-dimensional partial differential equations over irregular domains using weak adversarial networks [33]. In 2021, S. Dong et al. solved linear and nonlinear partial differential equations using domain decomposition and local extreme learning machines [34]. These neural network-based PDE solvers are widely popular due to the universal approximation properties of neural networks [35,36]. Compared with traditional grid-based methods, solving PDEs via deep learning is a mesh-free approach that utilizes automatic differentiation [29] and can break the curse of dimensionality [37,38].
Among these methods, PINNs are one of the most popular. There are currently many variants of PINNs, such as variational hp-VPINNs [39], conservative PINNs (cPINNs) [40], extended PINNs (XPINNs) [41], and Parallel PINNs (PPINNs). However, the boundary and initial conditions in most PINN-based methods are imposed as soft constraints. In order to strictly enforce the boundary and initial conditions, Refs. [42,43] devised PINN-based architectures that enhance both the precision and the generalization ability of the neural network.
According to the literature, most current studies on deep learning for solving PDEs address problems governed by a single set of physics equations, while there are fewer studies on the Stokes/Darcy coupled problem [44]. In 2022, R. Pu et al. investigated the steady Stokes/Darcy coupled problem using PINNs and proposed a strategy to improve the accuracy [45]. In 2023, J. Yue et al. proposed Coupled Deep Neural Networks (CDNNs) for solving the time-dependent Stokes/Darcy coupled problem [46]. Ref. [47] investigated neural network solution methods for the forward and inverse problems of the Navier–Stokes/Darcy coupled problem based on PINNs. However, the fitting accuracies of the existing studies are low, with the relative $L^2$ error remaining between $10^{-2}$ and $10^{-4}$, and these studies primarily focus on regular regions and straight-line interfaces.
In this paper, to improve neural network accuracy in solving Stokes/Darcy coupled problems, we first design a parallel physics-informed neural network, namely, Parallel PINNs (PPINNs). Then, we modify the network architecture to enforce the boundary conditions, while the governing equations and the interface conditions are incorporated into the loss as soft constraints for training; we call the resulting method HC-PPINNs. Specifically, the training of HC-PPINNs only needs to be driven by minimizing the loss of the governing equations and the interface equations and does not need to be data driven. Since no training data are required, the training cost is greatly reduced. In addition, HC-PPINNs achieve higher accuracy than methods based on vanilla PINNs, including the Parallel PINNs (PPINNs) and the CDNNs of [46]. Furthermore, HC-PPINNs also maintain good performance in both irregular regions and curved interfaces. The performance and accuracy of our method, HC-PPINNs, are demonstrated by five examples.
The structure of the paper is as follows. The coupling problem is presented in Section 2. In Section 3, we present the architecture of our PPINN and HC-PPINN. Section 4 demonstrates the performance of HC-PPINNs by five examples. The last section concludes the paper.

2. Problem Formulation

We consider a coupled fluid flow and porous media flow in a bounded domain $\Omega \subset \mathbb{R}^d$ ($d = 2$ or 3), which consists of a fluid flow in $\Omega_f$ and a porous media flow in $\Omega_p$, separated by an interface $\Gamma$ (see Figure 1), where $\overline{\Omega}_f \cup \overline{\Omega}_p = \overline{\Omega}$, $\Omega_f \cap \Omega_p = \emptyset$, and $\overline{\Omega}_f \cap \overline{\Omega}_p = \Gamma$. Let $n_f$ and $n_p$ be the unit outward normal vectors on the boundaries of $\Omega_f$ and $\Omega_p$, respectively, and $\tau_i$, $i = 1, \ldots, d-1$, the unit tangential vectors on the interface $\Gamma$. Then, we have $n_p = -n_f$ on $\Gamma$.
The Stokes equations are used to describe the motion of the fluid flow in $\Omega_f$:

$$-\nu \Delta u_f + \nabla p_f = g_f \quad \text{in } \Omega_f, \tag{1}$$

$$\nabla \cdot u_f = 0 \quad \text{in } \Omega_f, \tag{2}$$

where $u_f(x)$ is the velocity of the fluid flow in $\Omega_f$, $p_f(x)$ is the pressure, $\nu > 0$ is the kinematic viscosity, and $g_f$ is the external force.
The following equations are used to describe the motion of the porous media flow in $\Omega_p$:

$$\nabla \cdot q = g_p \quad \text{in } \Omega_p, \tag{3}$$

$$q = -K \nabla \varphi \quad \text{in } \Omega_p \ (\text{Darcy's law}), \tag{4}$$

$$u_p = q / n \quad \text{in } \Omega_p, \tag{5}$$

where $q$ is the specific discharge, defined as the volume of fluid flowing per unit time through a unit cross-sectional area normal to the direction of the flow; $\varphi(x) = z + \frac{p_p}{\rho g}$ is the piezometric head, i.e., the sum of the elevation head $z$ and the pressure head; $p_p$ is the pressure of the fluid in $\Omega_p$; $\rho$ is the density of the fluid; and $g$ is the gravitational acceleration. $u_p$ is the fluid velocity in $\Omega_p$, $K$ is the hydraulic conductivity tensor, $n$ is the volumetric porosity, and $g_p$ is the source term. For simplicity, we assume $z = 0$ and that the porous media is homogeneous, i.e., $K = \mathrm{diag}(K, \ldots, K)$ with $K \in L^\infty(\Omega_p)$, $K > 0$. Then, the continuity Equation (3) in $\Omega_p$ can be written in the following form by using Darcy's law (4):

$$-\nabla \cdot (K \nabla \varphi) = g_p \quad \text{in } \Omega_p. \tag{6}$$
The interface coupling conditions are an important part of a mixed model. For the mixed Stokes/Darcy model, the following interface conditions are used in the literature [5,6,7,8]:

$$u_f \cdot n_f + u_p \cdot n_p = 0 \quad \text{on } \Gamma, \tag{7}$$

$$p_f - \nu\, n_f \cdot \nabla u_f \cdot n_f = \rho g \varphi \quad \text{on } \Gamma, \tag{8}$$

$$-\nu\, \tau_i \cdot \nabla u_f \cdot n_f = \frac{\alpha}{\sqrt{\tau_i \cdot K \tau_i}}\, u_f \cdot \tau_i, \quad i = 1, \ldots, d-1, \quad \text{on } \Gamma, \tag{9}$$

where $\alpha > 0$ is a parameter depending on the properties of the porous medium and should be determined experimentally. The first interface condition (7) is the mass conservation across the interface $\Gamma$. Using (4) and (5), it can be rewritten as

$$u_f \cdot n_f = \frac{K}{n} \frac{\partial \varphi}{\partial n_p} \quad \text{on } \Gamma. \tag{10}$$
The second interface condition (8) is the balance of the normal forces across the interface. The third one (9), known as the Beavers–Joseph–Saffman law [5,7,17], states that the slip velocity along the interface is proportional to the shear stress along the interface.
For convenience of the following discussion, Dirichlet Boundary Conditions (BCs) are considered for the mixed model:

$$u_f = u_f^b \quad \text{on } \partial\Omega_f \setminus \Gamma, \tag{11}$$

$$\varphi = \varphi^b \quad \text{on } \partial\Omega_p \setminus \Gamma. \tag{12}$$
Besides the Dirichlet BCs, the mixed Dirichlet and Neumann BCs are set up in the later numerical examples.
In summary, the mixed Stokes/Darcy model consists of the governing equations, namely the Stokes Equations (1) and (2) and Darcy's law (6), together with the interface conditions (8)–(10) and the boundary conditions (11) and (12).

3. Methodology

In this section, the mixed Stokes/Darcy model is first solved using a parallel physics-informed neural network that is mainly based on the vanilla PINNs [29]. Then, a high-accuracy parallel neural network with hard constraints for the boundary conditions is presented for the mixed model.

3.1. Parallel Physics-Informed Neural Networks

In [41], the authors divided the solution region into many sub-regions and utilized separate networks inside each sub-region to solve nonlinear PDEs on domains with arbitrary complex geometries. Inspired by this approach, we present a parallel Physics-Informed Neural Network, called PPINN, for the mixed Stokes/Darcy model, in which there is one neural network for the fluid flow region Ω f , and another for the porous media flow region Ω p . Figure 2 displays the diagram of the PPINN architecture.
Let $\{x_f^{(i)}\}_{i=1}^{N_f}$ in $\Omega_f$, $\{x_p^{(i)}\}_{i=1}^{N_p}$ in $\Omega_p$, $\{x_\Gamma^{(i)}\}_{i=1}^{N_\Gamma}$ on $\Gamma$, and boundary points $\{x_{bf}^{(i)}\}_{i=1}^{N_{bf}}$ on $\partial\Omega_f \setminus \Gamma$ and $\{x_{bp}^{(i)}\}_{i=1}^{N_{bp}}$ on $\partial\Omega_p \setminus \Gamma$ be the sets of randomly selected collocation points. Here, $N_f$, $N_p$, and $N_\Gamma$ are the numbers of collocation points in the interior of $\Omega_f$, in the interior of $\Omega_p$, and on $\Gamma$, respectively; $N_{bf}$ and $N_{bp}$ are the numbers of collocation points on the boundaries of $\Omega_f$ and $\Omega_p$, respectively.
Let $U(x; \theta_f)$, $P(x; \theta_f)$, and $\Phi(x; \theta_p)$ be the neural network approximations of $u_f$, $p_f$, and $\varphi$, respectively, where $\theta_f$ denotes the parameters of the neural network $NN_f$ for the fluid flow and $\theta_p$ the parameters of the neural network $NN_p$ for the porous media flow. Based on the vanilla PINNs, we restrict these two neural networks to satisfy the Stokes/Darcy problem by using a PDE-informed loss function, where the boundary conditions are treated in a "soft" manner, namely soft constraints, through the loss function.
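To make the setup concrete, the following is a minimal sketch, in TensorFlow 2/Keras (the stack named in Section 4), of how the two parallel networks could be built. The helper name `make_mlp` and the default layer sizes are our own illustrative assumptions, not the authors' code.

```python
import tensorflow as tf

def make_mlp(n_in, n_out, n_hidden=3, n_neurons=32):
    """Fully connected tanh network with Xavier (Glorot) initialization,
    matching the settings reported in Section 4."""
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.InputLayer(input_shape=(n_in,)))
    for _ in range(n_hidden):
        net.add(tf.keras.layers.Dense(n_neurons, activation="tanh",
                                      kernel_initializer="glorot_normal"))
    net.add(tf.keras.layers.Dense(n_out, kernel_initializer="glorot_normal"))
    return net

# NN_f maps (x, y) -> (U1, U2, P) on the fluid region Omega_f;
# NN_p maps (x, y) -> Phi on the porous media region Omega_p.
nn_f = make_mlp(n_in=2, n_out=3)
nn_p = make_mlp(n_in=2, n_out=1)
```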
Then, the problem of PPINNs for the mixed Stokes/Darcy model can be described by the following minimization problem of the loss function $L(\theta_f, \theta_p)$ with respect to the parameters $\theta_f$ and $\theta_p$:

$$\min_{\theta_f, \theta_p} L(\theta_f, \theta_p) = \lambda_f L_{\Omega_f}(\theta_f) + \lambda_p L_{\Omega_p}(\theta_p) + \lambda_\Gamma L_\Gamma(\theta_f, \theta_p), \tag{13}$$

where

$$L_{\Omega_f}(\theta_f) = \underbrace{\frac{1}{N_f} \sum_{i=1}^{N_f} \left( \left| G_{\Omega_f,1}[U,P](x_f^{(i)}; \theta_f) \right|^2 + \left| G_{\Omega_f,2}[U](x_f^{(i)}; \theta_f) \right|^2 \right)}_{L_{\Omega_f,pde}} + \underbrace{\frac{1}{N_{bf}} \sum_{i=1}^{N_{bf}} \left| U(x_{bf}^{(i)}; \theta_f) - u_f^b(x_{bf}^{(i)}) \right|^2}_{L_{\Omega_f,bc}} + \underbrace{\frac{1}{N_{bf}} \sum_{i=1}^{N_{bf}} \left| P(x_{bf}^{(i)}; \theta_f) - p_f^{(i)} \right|^2}_{L_{\Omega_f,pressure}}, \tag{14}$$

$$L_{\Omega_p}(\theta_p) = \underbrace{\frac{1}{N_p} \sum_{i=1}^{N_p} \left| G_{\Omega_p}[\Phi](x_p^{(i)}; \theta_p) \right|^2}_{L_{\Omega_p,pde}} + \underbrace{\frac{1}{N_{bp}} \sum_{i=1}^{N_{bp}} \left| \Phi(x_{bp}^{(i)}; \theta_p) - \varphi^b(x_{bp}^{(i)}) \right|^2}_{L_{\Omega_p,bc}}, \tag{15}$$

$$L_\Gamma(\theta_f, \theta_p) = \frac{1}{N_\Gamma} \sum_{i=1}^{N_\Gamma} \left( \left| G_{\Gamma,1}[U,\Phi](x_\Gamma^{(i)}; \theta_f, \theta_p) \right|^2 + \left| G_{\Gamma,2}[U,P,\Phi](x_\Gamma^{(i)}; \theta_f, \theta_p) \right|^2 + \left| G_{\Gamma,3}[U](x_\Gamma^{(i)}; \theta_f) \right|^2 \right), \tag{16}$$

with

$$\begin{aligned}
G_{\Omega_f,1}[U,P] &= -\nu \Delta U + \nabla P - g_f, \qquad G_{\Omega_f,2}[U] = \nabla \cdot U, \\
G_{\Omega_p}[\Phi] &= -\nabla \cdot (K \nabla \Phi) - g_p, \qquad G_{\Gamma,1}[U,\Phi] = U \cdot n_f - \frac{K}{n} \frac{\partial \Phi}{\partial n_p}, \\
G_{\Gamma,2}[U,P,\Phi] &= P - \nu\, n_f \cdot \nabla U \cdot n_f - \rho g \Phi, \\
G_{\Gamma,3}[U] &= \sum_{i=1}^{d-1} \left( \nu\, \tau_i \cdot \nabla U \cdot n_f + \frac{\alpha}{\sqrt{\tau_i \cdot K \tau_i}}\, U \cdot \tau_i \right).
\end{aligned} \tag{17}$$
The total loss $L(\theta_f, \theta_p)$ includes three loss functions, $L_{\Omega_f}(\theta_f)$, $L_{\Omega_p}(\theta_p)$, and $L_\Gamma(\theta_f, \theta_p)$, from the fluid flow region $\Omega_f$, the porous media region $\Omega_p$, and the interface $\Gamma$, respectively. The positive parameters $\lambda_f$, $\lambda_p$, and $\lambda_\Gamma$ represent the weights of the loss functions in $\Omega_f$, $\Omega_p$, and $\Gamma$, respectively. These weights ensure that the different components of the loss function are balanced, which can improve the convergence of PINN-based methods [48,49]. The terms $L_{\cdot,pde}$ are the PDE-informed losses from the fluid flow region $\Omega_f$ and the porous media region $\Omega_p$, and the terms $L_{\cdot,bc}$ are the losses from the boundary conditions; each of the losses $L_{\Omega_f}(\theta_f)$ and $L_{\Omega_p}(\theta_p)$ includes both. For the Stokes equations, the pressure field can only be determined up to a constant, so additional constraints need to be introduced; these typically include fixing the pressure at a specific reference point or incorporating a regularization term that forces the mean pressure over the domain to be zero [45]. To improve the pressure approximation, we take the approach presented in [50] by adding a pressure training loss $L_{\Omega_f,pressure}$ to the loss $L_{\Omega_f}(\theta_f)$. The pressure data on the boundary, $p_f^{(i)}$, should be given by additional information about the pressure; we denote the pressure data by $p_f^b = \{p_f^{(i)}\}_{i=1}^{N_{bf}}$. Note that, in the absence of available pressure data, the pressure-related loss is omitted from the formulation, and the constraints mentioned above could be imposed instead. In our numerical experiments, if an exact solution of the coupled model exists, the pressure data are given by the exact solution, as in Examples 1, 2, 3, and 5. There is no exact solution for Example 4, where the pressure loss is omitted.
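As an illustration of how the interior residuals $G_{\Omega_f,1}$ and $G_{\Omega_f,2}$ in (17) can be evaluated with automatic differentiation, here is a sketch using nested `tf.GradientTape`s; `nn_f` is the fluid network from the sketch above, and the function and argument names are our assumptions.

```python
def stokes_residuals(xy, nn_f, nu, g_f):
    """Momentum residual G_{Omega_f,1} and mass residual G_{Omega_f,2}
    of Equation (17) at the collocation points xy (shape (N, 2))."""
    with tf.GradientTape(persistent=True) as t2:
        t2.watch(xy)
        with tf.GradientTape(persistent=True) as t1:
            t1.watch(xy)
            out = nn_f(xy)
            u1, u2, p = out[:, 0:1], out[:, 1:2], out[:, 2:3]
        # First derivatives (each gradient has shape (N, 2): d/dx, d/dy).
        u1_x, u1_y = tf.split(t1.gradient(u1, xy), 2, axis=1)
        u2_x, u2_y = tf.split(t1.gradient(u2, xy), 2, axis=1)
        p_x,  p_y  = tf.split(t1.gradient(p,  xy), 2, axis=1)
    # Second derivatives for the Laplacian.
    u1_xx = t2.gradient(u1_x, xy)[:, 0:1]
    u1_yy = t2.gradient(u1_y, xy)[:, 1:2]
    u2_xx = t2.gradient(u2_x, xy)[:, 0:1]
    u2_yy = t2.gradient(u2_y, xy)[:, 1:2]
    mom_x = -nu * (u1_xx + u1_yy) + p_x - g_f[:, 0:1]  # G_{Omega_f,1}, x-component
    mom_y = -nu * (u2_xx + u2_yy) + p_y - g_f[:, 1:2]  # G_{Omega_f,1}, y-component
    mass  = u1_x + u2_y                                # G_{Omega_f,2} (divergence)
    return mom_x, mom_y, mass
```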
The main steps of PPINNs are presented as Algorithm 1.
Algorithm 1 PPINNs for the mixed Stokes/Darcy model

- Input: Randomly selected collocation points $\{x_f^{(i)}\}_{i=1}^{N_f}$, $\{x_p^{(i)}\}_{i=1}^{N_p}$, $\{x_\Gamma^{(i)}\}_{i=1}^{N_\Gamma}$, $\{x_{bf}^{(i)}\}_{i=1}^{N_{bf}}$, and $\{x_{bp}^{(i)}\}_{i=1}^{N_{bp}}$; the pressure data $p_f^b = \{p_f^{(i)}\}_{i=1}^{N_{bf}}$, if available; the maximum number of iterations $M$; the learning rates $\alpha_f$ and $\alpha_p$.
- Output: $\theta_f^{n+1}$ and $\theta_p^{n+1}$.

1: Set $n = 1$.
2: Initialize the neural network parameters $\theta_f^n$ and $\theta_p^n$.
3: while $n \le M$ do
4:    Compute $L(\theta_f^n, \theta_p^n) = \lambda_f L_{\Omega_f}(\theta_f^n) + \lambda_p L_{\Omega_p}(\theta_p^n) + \lambda_\Gamma L_\Gamma(\theta_f^n, \theta_p^n)$;
5:    Update
          $\theta_f^{n+1} = \theta_f^n - \alpha_f \nabla_{\theta_f^n} L(\theta_f^n, \theta_p^n)$,
          $\theta_p^{n+1} = \theta_p^n - \alpha_p \nabla_{\theta_p^n} L(\theta_f^n, \theta_p^n)$;
6:    Set $n = n + 1$;
7: end while
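A minimal sketch of the gradient-descent update in Algorithm 1 might look as follows; `total_loss` is an assumed helper that assembles (13) from the residuals above together with the boundary and interface terms.

```python
alpha_f = alpha_p = 1e-4   # learning rates from Section 4
M = 10000                  # maximum number of iterations

for n in range(M):
    with tf.GradientTape(persistent=True) as tape:
        # lambda_f * L_f + lambda_p * L_p + lambda_Gamma * L_Gamma, Eq. (13)
        loss = total_loss(nn_f, nn_p)
    grads_f = tape.gradient(loss, nn_f.trainable_variables)
    grads_p = tape.gradient(loss, nn_p.trainable_variables)
    del tape
    # Plain gradient-descent updates of theta_f and theta_p.
    for w, g in zip(nn_f.trainable_variables, grads_f):
        if g is not None:
            w.assign_sub(alpha_f * g)
    for w, g in zip(nn_p.trainable_variables, grads_p):
        if g is not None:
            w.assign_sub(alpha_p * g)
```

In practice, the hand-written update could be replaced by a Keras optimizer such as Adam without changing the structure of the loop.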

3.2. Hard Constrained PPINNs

In PPINNs, the loss function $L(\theta_f, \theta_p)$ decreases during optimization and gradually approaches zero. For the known Dirichlet BCs $u_f^b$ and $\varphi^b$, and the data $p_f^b$ if available, the losses $L_{\Omega_f,bc}$, $L_{\Omega_p,bc}$, and $L_{\Omega_f,pressure}$ should theoretically be able to reach zero during optimization. However, this may not be achieved in the numerical optimization of PPINNs, which reduces their accuracy. Therefore, in order to make full use of the known data, we design a new network architecture based on PPINNs that enforces the Dirichlet BCs $u_f^b$ and $\varphi^b$, as well as the data $p_f^b$, ensuring that the boundary loss of the new network remains exactly zero. In this way, the boundary conditions and the pressure data are treated in a "hard" manner, called hard constraints [42,43].
We strictly impose the Dirichlet BCs by modifying the neural network architecture. Specifically, we construct the neural network solutions as
$$\tilde{U}(x; \tilde{\theta}_f) = u_{par}(x) + d_f(x)\, U(x; \tilde{\theta}_f), \tag{18}$$

$$\tilde{P}(x; \tilde{\theta}_f) = p_{par}(x) + d_f(x)\, P(x; \tilde{\theta}_f), \tag{19}$$

$$\tilde{\Phi}(x; \tilde{\theta}_p) = \varphi_{par}(x) + d_p(x)\, \Phi(x; \tilde{\theta}_p), \tag{20}$$
where $\tilde{U}(x; \tilde{\theta}_f)$, $\tilde{P}(x; \tilde{\theta}_f)$, and $\tilde{\Phi}(x; \tilde{\theta}_p)$ are the final outputs of the networks; see Figure 3. Here, $u_{par}(x)$ and $\varphi_{par}(x)$ are functions that satisfy the respective Dirichlet BCs, $u_{par}|_{\partial\Omega_f \setminus \Gamma} = u_f^b$ and $\varphi_{par}|_{\partial\Omega_p \setminus \Gamma} = \varphi^b$, while $p_{par}(x)$ satisfies the pressure data, $p_{par}|_{\partial\Omega_f \setminus \Gamma} = p_f^b$, if the pressure data are available. Analogous to the pressure treatment in Algorithm 1, the hard constraint on the pressure (19) is imposed only if the pressure data are available; otherwise, this constraint is omitted. Specifically, in the later numerical experiments, the hard constraint on the pressure is only imposed in Examples 1, 2, 3, and 5. $d_i(x)$ $(i = f, p)$ is a smooth distance function satisfying the following two conditions:
$$d_i(x) = 0, \ x \in \partial\Omega_i \setminus \Gamma; \qquad d_i(x) > 0, \ x \in \Omega_i, \tag{21}$$
where $d_i(x)$ $(i = f, p)$ needs to be constructed case by case. For example, when $\Omega_f = (0,1) \times (1,2)$, $\Omega_p = (0,1) \times (0,1)$, and $\Gamma = (0,1) \times \{1\}$, we can choose $d_f(x) = x(x-1)(y-2)$ and $d_p(x) = x(1-x)y$, where $x = (x, y)$. This example will be used in Section 4. For complex regions, please refer to [51] for the construction of $d_i(x)$.
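For the rectangular domains above, a sketch of the hard-constraint outputs (18)–(20) could look as follows; `u_par`, `p_par`, and `phi_par` are assumed callables implementing the particular solutions, and the distance functions are the ones just given.

```python
def hc_fluid(xy, nn_f, u_par, p_par):
    """Hard-constrained fluid outputs, Eqs. (18)-(19), for
    Omega_f = (0,1) x (1,2) with Gamma = (0,1) x {1}."""
    x, y = xy[:, 0:1], xy[:, 1:2]
    d_f = x * (x - 1.0) * (y - 2.0)      # vanishes on dOmega_f \ Gamma, positive inside
    out = nn_f(xy)
    U = u_par(xy) + d_f * out[:, 0:2]    # velocity, Eq. (18)
    P = p_par(xy) + d_f * out[:, 2:3]    # pressure, Eq. (19)
    return U, P

def hc_porous(xy, nn_p, phi_par):
    """Hard-constrained piezometric head, Eq. (20), for Omega_p = (0,1) x (0,1)."""
    x, y = xy[:, 0:1], xy[:, 1:2]
    d_p = x * (1.0 - x) * y              # vanishes on dOmega_p \ Gamma, positive inside
    return phi_par(xy) + d_p * nn_p(xy)
```

By construction, the factor `d_f` (resp. `d_p`) is zero on the Dirichlet boundary, so the outputs reproduce the boundary data exactly regardless of the network parameters.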
Then, the loss function is transformed into a form without boundary or simulation data:

$$\min_{\tilde{\theta}_f, \tilde{\theta}_p} J(\tilde{\theta}_f, \tilde{\theta}_p) = \tilde{\lambda}_f J_{\Omega_f}(\tilde{\theta}_f) + \tilde{\lambda}_p J_{\Omega_p}(\tilde{\theta}_p) + \tilde{\lambda}_\Gamma J_\Gamma(\tilde{\theta}_f, \tilde{\theta}_p), \tag{22}$$

where

$$J_{\Omega_f}(\tilde{\theta}_f) = \frac{1}{N_f} \sum_{i=1}^{N_f} \left( \left| G_{\Omega_f,1}[\tilde{U},\tilde{P}](x_f^{(i)}; \tilde{\theta}_f) \right|^2 + \left| G_{\Omega_f,2}[\tilde{U}](x_f^{(i)}; \tilde{\theta}_f) \right|^2 \right), \tag{23}$$

$$J_{\Omega_p}(\tilde{\theta}_p) = \frac{1}{N_p} \sum_{i=1}^{N_p} \left| G_{\Omega_p}[\tilde{\Phi}](x_p^{(i)}; \tilde{\theta}_p) \right|^2, \quad J_\Gamma(\tilde{\theta}_f, \tilde{\theta}_p) = \frac{1}{N_\Gamma} \sum_{i=1}^{N_\Gamma} \left( \left| G_{\Gamma,1}[\tilde{U},\tilde{\Phi}](x_\Gamma^{(i)}; \tilde{\theta}_f, \tilde{\theta}_p) \right|^2 + \left| G_{\Gamma,2}[\tilde{U},\tilde{P},\tilde{\Phi}](x_\Gamma^{(i)}; \tilde{\theta}_f, \tilde{\theta}_p) \right|^2 + \left| G_{\Gamma,3}[\tilde{U}](x_\Gamma^{(i)}; \tilde{\theta}_f) \right|^2 \right), \tag{24}$$

where the operators $G_{\Omega_f,1}[\cdot,\cdot]$, $G_{\Omega_f,2}[\cdot]$, $G_{\Omega_p}[\cdot]$, $G_{\Gamma,1}[\cdot,\cdot]$, $G_{\Gamma,2}[\cdot,\cdot,\cdot]$, and $G_{\Gamma,3}[\cdot]$ are defined in Equation (17). The positive parameters $\tilde{\lambda}_f$, $\tilde{\lambda}_p$, and $\tilde{\lambda}_\Gamma$ represent the weights of the loss terms in $\Omega_f$, $\Omega_p$, and $\Gamma$, respectively.
We call the above parallel PINNs with hard constraints HC-PPINNs, and the main steps are listed in Algorithm 2.
Algorithm 2 HC-PPINNs for the mixed Stokes/Darcy model

- Input: Randomly selected collocation points $\{x_f^{(i)}\}_{i=1}^{N_f}$, $\{x_p^{(i)}\}_{i=1}^{N_p}$, and $\{x_\Gamma^{(i)}\}_{i=1}^{N_\Gamma}$; the maximum number of iterations $\tilde{M}$; the learning rates $\tilde{\alpha}_f$ and $\tilde{\alpha}_p$.
- Output: $\tilde{\theta}_f^{n+1}$ and $\tilde{\theta}_p^{n+1}$.

1: Set $n = 1$.
2: Initialize the neural network parameters $\tilde{\theta}_f^n$ and $\tilde{\theta}_p^n$.
3: while $n \le \tilde{M}$ do
4:    Compute $J(\tilde{\theta}_f^n, \tilde{\theta}_p^n) = \tilde{\lambda}_f J_{\Omega_f}(\tilde{\theta}_f^n) + \tilde{\lambda}_p J_{\Omega_p}(\tilde{\theta}_p^n) + \tilde{\lambda}_\Gamma J_\Gamma(\tilde{\theta}_f^n, \tilde{\theta}_p^n)$;
5:    Update
          $\tilde{\theta}_f^{n+1} = \tilde{\theta}_f^n - \tilde{\alpha}_f \nabla_{\tilde{\theta}_f^n} J(\tilde{\theta}_f^n, \tilde{\theta}_p^n)$,
          $\tilde{\theta}_p^{n+1} = \tilde{\theta}_p^n - \tilde{\alpha}_p \nabla_{\tilde{\theta}_p^n} J(\tilde{\theta}_f^n, \tilde{\theta}_p^n)$;
6:    Set $n = n + 1$;
7: end while

4. Computational Results and Discussion

To illustrate the performance of the two neural networks presented above, PPINNs and HC-PPINNs, we present numerical results for five different settings of the mixed Stokes/Darcy model.
Our experiments are based on Python 3.8.19, TensorFlow 2.0.0, and Keras 2.3.1, and the computer is configured with an Intel(R) Core(TM) i5-8300H CPU. Unless otherwise specified, in the following numerical experiments the networks $NN_f$ and $NN_p$ use the same number of hidden layers and the same number of neurons in each hidden layer. We employ Xavier initialization, the tanh activation function, a learning rate of $1 \times 10^{-4}$, and $M = \tilde{M} = 1 \times 10^4$ iterations. For the collocation points, we take $N_f = N_p = 400$, $N_\Gamma = 100$, and $N_{bf} = N_{bp} = 300$. The weight parameters $\lambda_i$ and $\tilde{\lambda}_i$ $(i = f, p, \Gamma)$ are all set to 1.
The following relative $L^2$ error between a neural network approximation $U$ and the exact solution $u$ will be used in the examples:

$$E(U, u) = \sqrt{\frac{\sum_{i=1}^{N} \left| U^{(i)} - u^{(i)} \right|^2}{\sum_{i=1}^{N} \left| u^{(i)} \right|^2}}, \tag{25}$$
where $N$ represents the number of points in the test set. In the following numerical experiments, we employ a test set of 10,000 equi-spaced points in each of $\Omega_f$ and $\Omega_p$.
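For reference, the error metric (25) is straightforward to evaluate; the following NumPy helper is a sketch (the function name is ours).

```python
import numpy as np

def relative_l2_error(U_pred, u_exact):
    """Relative L2 error E(U, u) of Equation (25) over a test set."""
    diff = np.asarray(U_pred) - np.asarray(u_exact)
    return np.sqrt(np.sum(diff**2) / np.sum(np.asarray(u_exact)**2))
```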

4.1. Example 1

Assume that the computational domain is $\Omega_f = (0,1) \times (1,2)$ and $\Omega_p = (0,1) \times (0,1)$, with the interface $\Gamma = (0,1) \times \{1\}$. The physical parameters are $n = \rho = g = \nu = K = \alpha = 1$. The Dirichlet BCs and the forcing terms are given by the following exact solutions:

$$u_f(x) = \left( x^2 (y-1)^2,\ -\tfrac{2}{3} x (y-1)^3 - \sin(x) \right),$$

$$p_f(x) = xy - 0.25,$$

$$\varphi(x) = \sin(x)(y^2 - y) + x - 0.25.$$
The relative $L^2$ errors (25) of PPINNs and HC-PPINNs with three hidden layers and different numbers of neurons in each hidden layer, $N_{neuron}$, are displayed in Table 1. From Table 1, we can see that, for both HC-PPINNs and PPINNs, the relative $L^2$ error gradually decreases as $N_{neuron}$ increases. The approximate solutions obtained from HC-PPINNs are more accurate than those from PPINNs; in particular, the accuracy of HC-PPINNs with 8 neurons is higher than that of PPINNs with 32 neurons.
Figure 4 shows the loss history of PPINNs and HC-PPINNs with three hidden layers and 32 neurons in each hidden layer. We can see that the training loss of HC-PPINNs is much lower than that of PPINNs.
Figure 5 displays the predicted values by HC-PPINNs with three hidden layers and 32 neurons in each hidden layer, as well as the absolute error between the predicted values and the exact solutions. It can be observed that the predicted values approximate the exact solutions well.

4.2. Example 2

In this example, we consider the case of different values of the hydraulic conductivity $K$. The computational domain and the other physical parameters remain the same as in Example 1. The Dirichlet BCs and the forcing terms are chosen such that the exact solution of the coupled model is given by

$$u_f(x) = \left( y^2 - 2y + 1,\ x^2 - x \right),$$

$$p_f(x) = 2\nu(x + y - 1) + \frac{g n}{3K},$$

$$\varphi(x) = \frac{n}{K} \left[ x(1-x)(y-1) + \frac{y^3}{3} - y^2 + y \right] + \frac{2\nu}{g} x.$$
For HC-PPINNs, when $K = 0.001$, the weight parameters are $\tilde{\lambda}_f = 1$, $\tilde{\lambda}_p = 10$, $\tilde{\lambda}_\Gamma = 1$; when $K = 0.0001$, they are $\tilde{\lambda}_f = 10$, $\tilde{\lambda}_p = 100$, $\tilde{\lambda}_\Gamma = 1$. For PPINNs, when $K = 0.01$, the weight parameters are $\lambda_f = 1$, $\lambda_p = 1$, $\lambda_\Gamma = 10$. The relative $L^2$ errors (25) for Example 2 with varying hydraulic conductivity $K$ are displayed in Table 2, where the neural networks have three hidden layers and 16 neurons in each hidden layer. From Table 2, we can see that HC-PPINNs maintain high accuracy as the hydraulic conductivity $K$ becomes smaller, while PPINNs have lower accuracy even at $K = 0.01$. We depict the predicted values of HC-PPINNs with three hidden layers and 16 neurons in each hidden layer, as well as the contrast between the exact and approximate solutions for $K = 0.0001$, in Figure 6.

4.3. Example 3

Since PINNs are a mesh-free method for solving PDEs, in this example we consider an irregular computational domain to demonstrate that our method, HC-PPINNs, still shows good performance in this case. Let the computational domain be $\Omega_f = (0,1) \times (1,1.5)$ and $\Omega_p = (0.25, 0.75) \times (0.75, 1)$, with the interface $\Gamma = (0.25, 0.75) \times \{1\}$; see Figure 7. The physical parameters and the exact solutions remain the same as in Example 1.
Here, the boundary segments $\big( (0, 0.25) \cup (0.75, 1) \big) \times \{1\}$ lie on the same horizontal line as the interface $\Gamma = (0.25, 0.75) \times \{1\}$; it would be difficult to construct $d_f(x)$ if these segments were included in the hard constraints, so the boundary conditions on this part are incorporated into the loss in a "soft" manner.
We employ $N_f = 400$, $N_p = 100$, $N_\Gamma = 30$, and $N_{bf} = 50$. The sampling points are displayed in Figure 7. We choose $d_f(x) = x(x-1)(y-1.5)$ and $d_p(x) = (x-0.25)(0.75-x)(y-0.75)$.
Table 3 shows the relative $L^2$ errors (25) for three hidden layers and different numbers of neurons in each hidden layer, $N_{neuron}$. Figure 8 compares the predicted values of HC-PPINNs and the exact solutions, where the neural networks have three hidden layers and 32 neurons in each hidden layer. We can observe that HC-PPINNs still maintain high accuracy on an irregular computational domain.

4.4. Example 4

In this example, we consider a more complex situation, the Stokes/Darcy model with mixed boundary conditions, to demonstrate the performance of HC-PPINNs. The computational domain and the parameters remain the same as in Example 1. The setting of the BCs is shown in Figure 9. We set $g_f = g_p = 0$.
Here, for HC-PPINNs, the Dirichlet BCs are enforced in a "hard" manner through the network architecture, while the Neumann BCs are incorporated into the loss in a "soft" manner.
We consider two different interface situations: a straight-line interface $\Gamma: y + 0.4x - 1.2 = 0$ and a curved interface $\Gamma: y = 1 + 0.025 \sin(3\pi x)$. We employ $N_f = 453$, $N_p = 447$, $N_{bf} = 100$, and $N_{bp} = 175$ for the straight-line interface, and $N_f = 444$, $N_p = 456$, $N_{bf} = 100$, and $N_{bp} = 200$ for the curved interface in the training. Figures 10 and 11 show the training points and the simulation results, respectively. It can be seen that HC-PPINNs show good performance for both straight-line and curved interfaces.

4.5. Example 5

In order to demonstrate that our method, HC-PPINNs, can greatly improve the accuracy of solving the Stokes/Darcy coupled problem, we consider the non-stationary mixed Stokes/Darcy model in this example. We consider the first test with nonhomogeneous boundary conditions as described in [46], and perform comparisons among HC-PPINNs, PPINNs, and the Coupled Deep Neural Networks (CDNNs) of [46]. The computational domain is $\Omega_f = (0,1) \times (1,2)$ and $\Omega_p = (0,1) \times (0,1)$, with the interface $\Gamma = (0,1) \times \{1\}$ and $t \in (0,1]$. All the physical parameters are set to 1. The boundary data and the forcing terms are chosen such that the exact solution of the coupled model is given by

$$u_f(x, t) = \left( \left[ x^2 (y-1)^2 + y \right] \cos t,\ \left[ -\tfrac{2}{3} x (y-1)^3 + 2 - \pi \sin(\pi x) \right] \cos t \right),$$

$$p_f(x, t) = \left[ 2 - \pi \sin(\pi x) \right] \sin\!\left( \tfrac{\pi}{2} y \right) \cos t,$$

$$\varphi(x, t) = \left[ 2 - \pi \sin(\pi x) \right] \left[ 1 - y - \cos(\pi y) \right] \cos t.$$
For HC-PPINNs, we choose $d_f(x,t) = x(x-1)(y-2)t$ and $d_p(x,t) = x(1-x)yt$; thus, the initial conditions are also enforced by the network architecture. We employ $N_f = N_p = 512$, a learning rate of 0.001, and 20,000 iterations for both PPINNs and HC-PPINNs.
The relative $L^2$ errors (25) of HC-PPINNs, PPINNs, and the CDNNs of [46] for Example 5, with different numbers of hidden layers and 16 neurons in each hidden layer, are displayed in Table 4. We can see that HC-PPINNs are much more accurate than both CDNNs and PPINNs.

5. Conclusions

In this paper, we design HC-PPINNs to improve the accuracy of neural networks for solving Stokes/Darcy coupled problems. The method enforces the boundary conditions by changing the network architecture, so that only the governing equations and the interface equations need to be incorporated into the loss in a "soft" manner for training. Since no training data are required, the training cost is greatly reduced. Numerical experiments demonstrate that HC-PPINNs maintain good performance not only in regular regions but also in irregular regions and with curved interfaces. Comparisons with PPINNs and the CDNNs of [46] show that HC-PPINNs greatly improve the network's prediction accuracy. However, for the non-stationary mixed Stokes/Darcy coupled problem, our method does not have the extrapolation capability [52,53] to extend solutions to future times. This limitation arises because HC-PPINNs treat temporal and spatial variables in the same way in all cases, and further exploration is still needed to address this issue.

Author Contributions

Conceptualization, Z.L. and X.Z.; methodology, Z.L.; software, Z.L. and J.Z.; validation, Z.L. and X.Z.; formal analysis, Z.L. and X.Z.; investigation, J.Z.; resources, Z.L.; data curation, J.Z.; writing—original draft preparation, Z.L.; writing—review and editing, Z.L. and X.Z.; visualization, Z.L. and J.Z.; supervision, X.Z.; project administration, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work was partially supported by “the Fundamental Research Funds for the Central Universities”.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

All data supporting the reported results are contained within the article itself. No external datasets or repositories were used, and no new data were generated that would require separate archiving.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Bear, J. Hydraulics of Groundwater, 1st ed.; McGraw-Hill: New York, NY, USA, 1979. [Google Scholar]
  2. Discacciati, M. Domain Decomposition Methods for the Coupling of Surface and Groundwater Flows. Ph.D. Thesis, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 2008. [Google Scholar]
  3. Discacciati, M.; Miglio, E.; Quarteroni, A. Mathematical and numerical models for coupling surface and groundwater flows. Appl. Numer. Math. 2002, 43, 57–74. [Google Scholar] [CrossRef]
  4. Wood, W.L. Introduction to Numerical Methods for Water Resources, 1st ed.; Clarendon Press: Oxford, MS, USA, 1993. [Google Scholar]
  5. Beavers, G.S.; Joseph, D.D. Boundary conditions at a naturally permeable wall. J. Fluid. Mech. 1967, 30, 197–207. [Google Scholar] [CrossRef]
  6. Jones, I.P. Low reynolds number flow past a porous spherical shell. Proc. Camb. Phil. Soc. 1973, 73, 231–238. [Google Scholar] [CrossRef]
  7. Saffman, P.G. On the boundary condition at the interface of a porous medium. Stud. Appl. Math. 1971, 50, 93–101. [Google Scholar] [CrossRef]
  8. Arbogast, T.; Brunson, D.S. A computational method for approximating a Darcy-Stokes system governing a vuggy porous medium. Comput. Geosci. 2007, 11, 207–218. [Google Scholar] [CrossRef]
  9. Cao, Y.; Gunzburger, M.; Hu, X.; Hua, F.; Wang, X.; Zhao, W. Finite element approximations for Stokes-Darcy flow with Beavers-Joseph interface conditions. Siam. J. Numer. Anal. 2010, 47, 4239–4256. [Google Scholar] [CrossRef]
  10. Cao, Y.; Gunzburger, M.; Hu, X.; Hua, F.; Wang, X. Coupled Stokes-Darcy model with Beavers-Joseph interface boundary condition. Commun. Math. Sci. 2010, 8, 1–25. [Google Scholar] [CrossRef]
  11. Chidyagwai, P.; Riviére, B. Numerical modelling of coupled surface and subsurface flow systems. Adv. Water. Resour. 2010, 8, 92–105. [Google Scholar] [CrossRef]
  12. Hanspal, N.S.; Waghode, A.N.; Nassehi, V.; Wakeman, R.J. Numerical analysis of coupled Stokes/Darcy flows in industrial filtrations. Transport. Porous. Med. 2006, 64, 73–101. [Google Scholar] [CrossRef]
  13. Nassehi, V. Modelling of combined Navier-Stokes and Darcy flows in crossflow membrane filtration. Chem. Eng. Sci. 1998, 53, 1253–1265. [Google Scholar] [CrossRef]
  14. Pozrikidis, C.; Farrow, D.A. A model of fluid flow in solid tumors. Ann. Biomed. Eng. 2003, 31, 181–194. [Google Scholar] [CrossRef] [PubMed]
  15. Hanspal, N.S.; Waghode, A.N.; Nassehi, V.; Wakeman, R.J. Development of a predictive mathematical model for coupled Stokes/Darcy flows in cross-flow membrane filtration. Chem. Eng. J. 2009, 149, 132–142. [Google Scholar] [CrossRef]
  16. Discacciati, M.; Quarteroni, A. Convergence analysis of a subdomain iterative method for the finite element approximation of the coupling of Stokes and Darcy equations. Comput. Vis. Sci. 2004, 6, 93–103. [Google Scholar]
  17. Mikelic, A.; Jäger, W. On the interface boundary condition of Beavers, Joseph, and Saffman. Siam. J. Appl. Math. 2000, 60, 1111–1127. [Google Scholar] [CrossRef]
  18. Jäger, W.; Mikelic, A.; Neuss, N. Asymptotic analysis of the laminar viscous flow over a porous bed. Siam. J. Sci. Comput. 2001, 22, 2006–2028. [Google Scholar] [CrossRef]
  19. Layton, W.J.; Schieweck, F.; Yotov, I. Coupling fluid flow with porous media flow. Siam. J. Numer. Anal. 2002, 40, 2195–2218. [Google Scholar] [CrossRef]
  20. Miglio, E.; Quarteroni, A.; Saleri, F. Coupling of free surface and groundwater flows. Comput. Fluids 2003, 32, 73–83. [Google Scholar] [CrossRef]
  21. Riviére, B.; Yotov, I. Locally conservative coupling of Stokes and Darcy flows. Siam. J. Numer. Anal. 2005, 40, 1959–1977. [Google Scholar] [CrossRef]
  22. Lee, H.; Rife, K. Least squares approach for the time-dependent nonlinear Stokes-Darcy flow. Comput. Math. Appl. 2014, 67, 1806–1815. [Google Scholar] [CrossRef]
  23. Rybak, I.; Magiera, J. A multiple-time-step technique for coupled free flow and porous medium systems. J. Comput. Phys. 2014, 272, 327–342. [Google Scholar] [CrossRef]
  24. Mu, M.; Xu, J. A Two-Grid Method of a Mixed Stokes-Darcy Model for Coupling Fluid Flow with Porous Media Flow. Siam. J. Numer. Anal. 2007, 45, 1801–1813. [Google Scholar] [CrossRef]
  25. Mu, M.; Zhu, X. Decoupled schemes for a non-stationary mixed Stokes-Darcy model. Math. Comput. 2010, 79, 707–731. [Google Scholar] [CrossRef]
  26. Shan, L.; Zheng, H.; Layton, W.J. A decoupling method with different subdomain time steps for the nonstationary Stokes-Darcy model. Numer. Meth. Part. Differ. Equ. 2012, 29, 549–583. [Google Scholar] [CrossRef]
  27. LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
  28. Baker, N.; Alexander, F.; Bremer, T.; Hagberg, A.; Kevrekidis, Y.; Najm, H.; Parashar, M.; Patra, A.; Sethian, J.; Wild, S.; et al. Workshop Report on Basic Research Needs for Scientific Machine Learning: Core Technologies for Artificial Intelligence; Technical Report; USDOE Office of Science (SC): Washington, DC, USA, 2019.
  29. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
  30. Raissi, M.; Yazdani, A.; Karniadakis, G.E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science 2020, 367, 1026–1030. [Google Scholar] [CrossRef]
  31. Sirignano, J.; Spiliopoulos, K. DGM: A deep learning algorithm for solving partial differential equations. J. Comput. Phys. 2018, 375, 1339–1364. [Google Scholar] [CrossRef]
  32. E, W.; Yu, B. The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems. Commun. Math. Stat. 2018, 6, 1–12. [Google Scholar]
  33. Zang, Y.; Bao, G.; Ye, X.; Zhou, H. Weak adversarial networks for high-dimensional partial differential equations. J. Comput. Phys. 2020, 411, 109409. [Google Scholar] [CrossRef]
  34. Dong, S.; Li, Z. Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Comput. Methods Appl. Mech. Eng. 2021, 387, 114129. [Google Scholar] [CrossRef]
  35. Barron, A. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inform. Theory 1993, 39, 930–945. [Google Scholar] [CrossRef]
  36. Chen, T.; Chen, H. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Trans. Neural Netw. 1995, 6, 911–917. [Google Scholar] [CrossRef] [PubMed]
  37. Poggio, T.; Mhaskar, H.; Rosasco, L.; Miranda, B.; Liao, Q. Why and when can deep but not shallow-networks avoid the curse of dimensionality: A review. Int. J. Autom. Comput. 2017, 14, 503–519. [Google Scholar] [CrossRef]
  38. Grohs, P.; Hornung, F.; Jentzen, A.; Wurstemberger, P. A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black–Scholes partial differential equations. arXiv 2018, arXiv:1809.02362. [Google Scholar] [CrossRef]
  39. Kharazmi, E.; Zhang, Z.; Karniadakis, G.E.M. hp-VPINNs: Variational physics-informed neural networks with domain decomposition. Comput. Methods Appl. Mech. Eng. 2021, 374, 113547. [Google Scholar] [CrossRef]
  40. Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028. [Google Scholar] [CrossRef]
  41. Jagtap, A.D.; Karniadakis, G.E. Extended Physics-Informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition Based Deep Learning Framework for Nonlinear Partial Differential Equations. Commun. Comput. Phys. 2020, 28, 2002–2041. [Google Scholar] [CrossRef]
  42. Sun, L.; Gao, H.; Pan, S.; Wang, J. Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data. Comput. Methods Appl. Mech. Eng. 2020, 361, 112732. [Google Scholar] [CrossRef]
  43. Lu, L.; Pestourie, R.; Yao, W.; Wang, Z.; Verdugo, F.; Johnson, S.G. Physics-Informed Neural Networks with Hard Constraints for Inverse Design. Siam. J. Sci. Comput. 2020, 43, 1105–1132. [Google Scholar] [CrossRef]
  44. Cai, S.; Mao, Z.; Wang, Z.; Wang, Z.; Yin, M.; Karniadakis, G.E. Physics-informed neural networks (PINNs) for fluid mechanics: A review. Acta Mech. Sin. 2021, 37, 1727–1738. [Google Scholar] [CrossRef]
  45. Pu, R.; Feng, X. Physics-Informed Neural Networks for Solving Coupled Stokes-Darcy Equation. Entropy 2022, 24, 1106. [Google Scholar] [CrossRef]
  46. Yue, J.; Li, J. Efficient coupled deep neural networks for the time-dependent coupled Stokes-Darcy problems. Appl. Math. Comput. 2023, 437, 127514. [Google Scholar] [CrossRef]
  47. Zhang, Z. Neural Network Method for Solving Forward and Inverse Problems of Navier-Stokes/Darcy Coupling Model. Master’s Thesis, East China Normal University, Shanghai, China, 2023. [Google Scholar]
  48. McClenny, L.D.; Braga-Neto, U.M. Self-adaptive physics-informed neural networks. J. Comput. Phys. 2023, 474, 111722. [Google Scholar] [CrossRef]
  49. Berardi, M.; Difonzo, F.V.; Icardi, M. Inverse Physics-Informed Neural Networks for transport models in porous materials. Comput. Methods Appl. Mech. Eng. 2025, 435, 117628. [Google Scholar] [CrossRef]
  50. Farkane, A.; Ghogho, M.; Oudani, M.; Boutayeb, M. Enhancing physics informed neural networks for solving Navier-Stokes equations. Int. J. Numer. Meth. Fluids 2024, 96, 381–396. [Google Scholar] [CrossRef]
  51. Berg, J.; Nyström, K. A unified deep artificial neural network approach to partial differential equations in complex geometries. Neurocomputing 2018, 317, 28–41. [Google Scholar] [CrossRef]
  52. Ren, P.; Rao, C.; Liu, Y.; Wang, J.; Sun, H. PhyCRNet: Physics-informed convolutional-recurrent network for solving spatiotemporal PDEs. Comput. Methods Appl. Mech. Eng. 2022, 389, 114399. [Google Scholar] [CrossRef]
  53. Mavi, A.; Bekar, A.C.; Haghighat, E.; Madenci, E. An unsupervised latent/output physics-informed convolutional-LSTM network for solving partial differential equations using peridynamic differential operator. Comput. Methods Appl. Mech. Eng. 2023, 407, 115944. [Google Scholar]
Figure 1. The global domain $\Omega$ consisting of the fluid flow region $\Omega_f$ and the porous media flow region $\Omega_p$, separated by the interface $\Gamma$.
Figure 2. Diagram of the PPINNs architecture.
Figure 3. Diagram of the HC-PPINNs architecture.
Figure 4. The loss history of Example 1, where both neural networks have three hidden layers and 32 neurons in each hidden layer. (Left) Loss history of PPINNs. (Right) Loss history of HC-PPINNs.
Figure 5. Comparison of the predicted values by HC-PPINNs and the exact solutions for Example 1, where the neural networks have three hidden layers and 32 neurons in each hidden layer. (Left) The left column shows the predicted velocity $U_1$ in the $x$ direction of fluid flow, the predicted velocity $U_2$ in the $y$ direction of fluid flow, the predicted pressure $P$ of fluid flow, and the predicted $\Phi$ of porous media flow, respectively. (Right) The right column shows the corresponding absolute error between the predicted values and the exact solutions.
Figure 6. Comparison of the predicted values by HC-PPINNs and the exact solutions for Example 2 when $K = 0.0001$, where the neural networks have three hidden layers and 16 neurons in each hidden layer. (Left) The left column shows the predicted velocity $U_1$ in the $x$ direction of fluid flow, the predicted velocity $U_2$ in the $y$ direction of fluid flow, the predicted pressure $P$ of fluid flow, and the predicted $\Phi$ of porous media flow, respectively. (Right) The right column shows the corresponding absolute error between the predicted values and the exact solutions.
Figure 7. The sampling area and sampling points of Example 3. The black points '∘' are boundary points, which are treated in a "soft" manner, the red points '★' are interface points, and the rest are residual points of the equations.
Figure 8. Comparison of the predicted values by HC-PPINNs and the exact solutions for Example 3, where the neural networks have three hidden layers and 32 neurons in each hidden layer. (Left) The left column shows the predicted velocity $U_1$ in the $x$ direction of fluid flow, the predicted velocity $U_2$ in the $y$ direction of fluid flow, the predicted pressure $P$ of fluid flow, and the predicted $\Phi$ of porous media flow, respectively. (Right) The right column shows the corresponding absolute error between the predicted values and the exact solutions.
Figure 9. Boundary conditions of the Stokes/Darcy problem.
Figure 10. Sampling area and training points of Example 4, where the black and green points '∘' are Neumann BC points, the red points are interface points, and the rest are residual points of the equations. (Left) The straight-line interface. (Right) The curved interface.
Figure 11. Simulation results of HC-PPINNs for Example 4, where the neural networks have three hidden layers and 16 neurons in each hidden layer. The color bar represents the approximated pressure and the vectors represent the velocity of the fluid. (Left) The straight-line interface. (Right) The curved interface.
Table 1. The relative $L^2$ errors of PPINNs and HC-PPINNs for Example 1 with three hidden layers and different numbers of neurons in each hidden layer.

| $N_{neuron}$ | Algorithm | $u_f$ | $p_f$ | $\varphi$ |
|---|---|---|---|---|
| 8 | PPINNs | 4.97 × 10⁻³ | 6.52 × 10⁻³ | 2.71 × 10⁻³ |
| 8 | HC-PPINNs | 4.15 × 10⁻⁵ | 7.01 × 10⁻⁴ | 1.02 × 10⁻⁴ |
| 16 | PPINNs | 1.84 × 10⁻³ | 2.63 × 10⁻³ | 1.51 × 10⁻³ |
| 16 | HC-PPINNs | 2.64 × 10⁻⁵ | 2.12 × 10⁻⁴ | 1.60 × 10⁻⁵ |
| 32 | PPINNs | 1.71 × 10⁻³ | 3.17 × 10⁻³ | 1.65 × 10⁻³ |
| 32 | HC-PPINNs | 6.68 × 10⁻⁶ | 9.23 × 10⁻⁵ | 5.62 × 10⁻⁶ |
Table 2. The relative $L^2$ errors of Example 2 with varying hydraulic conductivity $K$, where the neural networks have three hidden layers and 16 neurons in each hidden layer.

| Algorithm | $K$ | $u_f$ | $p_f$ | $\varphi$ |
|---|---|---|---|---|
| HC-PPINNs | 1 | 1.69 × 10⁻⁵ | 1.93 × 10⁻⁵ | 2.50 × 10⁻⁶ |
| HC-PPINNs | 0.1 | 3.48 × 10⁻⁵ | 1.75 × 10⁻⁵ | 2.60 × 10⁻⁶ |
| HC-PPINNs | 0.01 | 4.86 × 10⁻⁵ | 6.87 × 10⁻⁶ | 4.65 × 10⁻⁶ |
| HC-PPINNs | 0.001 | 6.56 × 10⁻⁵ | 6.39 × 10⁻⁷ | 1.12 × 10⁻⁶ |
| HC-PPINNs | 0.0001 | 6.17 × 10⁻⁵ | 6.06 × 10⁻⁸ | 4.59 × 10⁻⁷ |
| PPINNs | 0.01 | 4.66 × 10⁻² | 1.22 × 10⁻³ | 4.73 × 10⁻³ |
Table 3. The relative $L^2$ errors of HC-PPINNs for Example 3, where the neural networks have three hidden layers and different numbers of neurons in each hidden layer.

| $N_{neuron}$ | $u_f$ | $p_f$ | $\varphi$ |
|---|---|---|---|
| 8 | 1.23 × 10⁻⁵ | 9.58 × 10⁻⁵ | 2.12 × 10⁻⁶ |
| 16 | 8.51 × 10⁻⁶ | 6.94 × 10⁻⁵ | 1.08 × 10⁻⁶ |
| 32 | 5.67 × 10⁻⁶ | 3.98 × 10⁻⁵ | 3.79 × 10⁻⁶ |
Table 4. The relative $L^2$ errors of HC-PPINNs, PPINNs, and the CDNNs of [46] for Example 5 with different numbers of hidden layers and 16 neurons in each hidden layer.

| Hidden Layers | Algorithm | $u_f$ | $p_f$ | $\varphi$ |
|---|---|---|---|---|
| 1 | HC-PPINNs | 2.24 × 10⁻⁵ | 1.63 × 10⁻⁴ | 1.90 × 10⁻⁵ |
| 1 | PPINNs | 3.93 × 10⁻² | 1.63 × 10⁻¹ | 2.44 × 10⁻¹ |
| 1 | CDNNs | 1.25 × 10⁻² | 1.87 × 10⁻¹ | 8.02 × 10⁻² |
| 2 | HC-PPINNs | 2.34 × 10⁻⁵ | 1.68 × 10⁻⁴ | 1.64 × 10⁻⁵ |
| 2 | PPINNs | 1.71 × 10⁻² | 6.32 × 10⁻² | 7.40 × 10⁻² |
| 2 | CDNNs | 5.00 × 10⁻⁴ | 1.64 × 10⁻² | 1.09 × 10⁻³ |
| 3 | HC-PPINNs | 1.21 × 10⁻⁵ | 1.28 × 10⁻⁴ | 8.46 × 10⁻⁶ |
| 3 | PPINNs | 1.50 × 10⁻² | 5.18 × 10⁻² | 6.24 × 10⁻² |
| 3 | CDNNs | 1.15 × 10⁻⁴ | 3.14 × 10⁻³ | 2.28 × 10⁻⁴ |