Abstract
In this paper, we study numerical algorithms based on Physics-Informed Neural Networks (PINNs) for solving a mixed Stokes/Darcy model that describes a fluid flow coupled with a porous media flow. A Hard Constrained Parallel PINN (HC-PPINN) is proposed for the mixed model, in which the boundary conditions are enforced by modifying the neural network architecture. Numerical experiments with different settings are conducted to demonstrate the accuracy and efficiency of our method by comparing it with methods based on vanilla PINNs for the mixed model.
1. Introduction
In the real world, many applications involve the interaction of different physical processes in different subdomains of the problem domain. We focus on the mixed Stokes/Darcy model, which describes the motion of fluid between the surface and subsurface regions. The fluid is governed by different partial differential equations in the two regions, with the Stokes equations describing the behavior in the surface region and Darcy's law governing the behavior in the subsurface region [,,,]. The two regions are coupled by appropriate interface conditions that ensure the conservation of mass and the balance of normal forces across the interface, together with the Beavers–Joseph–Saffman interface condition, which states that the shear stress along the interface is proportional to the slip velocity along the interface [,,,]. The mixed Stokes/Darcy model has an extensive array of applications, e.g., groundwater systems [,], industrial filtration [,,], blood flow in tumors [,], etc.
The traditional numerical techniques for resolving the mixed Stokes/Darcy model are extensively documented in the literature [,,,,,,,]. In general, there are two main approaches: one involves solving the coupled problem directly [,], while the other involves decoupling the mixed model first and then applying appropriate local solvers independently [,,]. These methods can, however, be difficult to apply to irregular regions and curved interfaces.
Over the last two decades, deep learning has achieved extraordinary success in a range of domains, such as computer vision and natural language processing []. Solving Partial Differential Equations (PDEs) via deep learning has recently surfaced as a promising topic known as Scientific Machine Learning (SciML) []. A representative class of results has been presented, including but not limited to the following. In 2018, M. Raissi et al. devised a deep-learning framework, namely, the Physics-Informed Neural Network (PINN), to solve forward and inverse problems of partial differential equations, and utilized it to study the equations of hydrodynamics and their inverse processes [,]. J. Sirignano et al. proposed the use of deep neural networks to solve high-dimensional partial differential equations, called the Deep Galerkin Method (DGM), and gave a theoretical analysis of the approximation performance of neural networks []. In [], the authors proposed the Deep Ritz Method for variational problems. In 2020, Y. Zang et al. proposed solving high-dimensional partial differential equations over irregular domains using weak adversarial networks []. In 2021, S. Dong et al. solved linear and nonlinear partial differential equations using domain decomposition and local extreme learning machines []. These neural network-based PDE solvers are widely popular due to their universal approximation properties [,]. Compared with traditional grid-based methods, solving PDEs with deep learning is a mesh-free approach that utilizes automatic differentiation [], which can break the curse of dimensionality [,].
Among these methods, PINNs are one of the most popular. There are currently many variants of PINNs, such as hp-VPINNs [], conservative PINNs (cPINNs) [], extended PINNs (XPINNs) [], and Parallel PINNs (PPINNs). However, the boundary and initial conditions in most PINN-based methods are imposed as soft constraints. In order to strictly enforce the boundary and initial conditions, Refs. [,] devised PINN-based architectures that enhance both the precision and the generalization ability of the neural network.
According to our survey of the literature, most current studies on deep learning for solving PDEs address problems governed by a single set of physical equations, while there are fewer studies on the Stokes/Darcy coupled problem []. In 2022, R. Pu et al. investigated the steady Stokes/Darcy coupled problem by using PINNs and proposed a strategy to improve the accuracy []. In 2023, J. Yue et al. proposed Coupled Deep Neural Networks (CDNNs) for solving the time-dependent Stokes/Darcy coupling problem []. Ref. [] investigated neural network solution methods for the forward and inverse problems of the Navier–Stokes/Darcy coupling problem based on PINNs. However, the fitting accuracies of the existing studies are low, with relative errors remaining between 10⁻² and 10⁻⁴, and these studies primarily focus on regular regions and straight-line interfaces.
In this paper, to improve the accuracy of neural networks in solving Stokes/Darcy coupled problems, we first design a parallel physics-informed neural network, namely, Parallel PINNs (PPINNs). Then, we modify the network architecture to enforce the boundary conditions, while incorporating the governing equations as well as the interface equations into the loss as soft constraints for training; we call the result HC-PPINNs. Specifically, the training of HC-PPINNs only needs to be driven by minimizing the loss of the governing equations and interface equations and does not need to be data-driven. Since no simulation data are required, the training cost is greatly reduced. In addition, HC-PPINNs achieve higher accuracy than methods based on vanilla PINNs, including PPINNs and the CDNNs in []. Furthermore, HC-PPINNs also maintain good performance in irregular regions and with curved interfaces. The performance and accuracy of our method are demonstrated by five examples.
2. Problem Formulation
We consider a coupled fluid flow and porous media flow in a bounded domain Ω ⊂ Rᵈ (d = 2 or 3), which consists of a fluid flow in Ω_f and a porous media flow in Ω_p, separated by an interface Γ (see Figure 1), where Ω_f ∩ Ω_p = ∅, Ω̄_f ∩ Ω̄_p = Γ, and Ω̄ = Ω̄_f ∪ Ω̄_p. Let n_f and n_p be the unit outward normal vectors on the boundaries of Ω_f and Ω_p, respectively, and τ_i, i = 1, …, d − 1, the unit tangential vectors on the interface Γ. Then, we have n_f = −n_p on Γ.
Figure 1.
The global domain Ω consisting of the fluid flow region Ω_f and the porous media flow region Ω_p, separated by the interface Γ.
The Stokes equations are used to describe the motion of the fluid flow in Ω_f:
where u_f is the velocity of the fluid flow in Ω_f, p_f is the pressure, and f_f is the external force.
The following equations are used to describe the motion of the porous media flow in Ω_p:
where q is the specific discharge, defined as the volume of the fluid flowing per unit time through a unit cross-sectional area normal to the direction of the flow. φ is the piezometric head, which is the sum of the elevation head z and the pressure head. p_p is the pressure of the fluid in Ω_p, ρ is the density of the fluid, and g is the gravitational acceleration. u_p is the fluid velocity in Ω_p, K is the hydraulic conductivity tensor, n is the volumetric porosity, and f_p is the source term. For simplicity, we assume n = 1 and that the porous medium is homogeneous, i.e., K = KI with K > 0. Then, the continuity Equation (3) in Ω_p can be written in the following form by using Darcy's law (4):
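In standard notation for this model (the symbols u_p for the Darcy velocity, φ for the piezometric head, K for the hydraulic conductivity, and f_p for the source term are assumptions about the stripped formulas), the substitution described above reads:

```latex
\nabla \cdot \mathbf{u}_p = f_p \quad \text{(continuity)}, \qquad
\mathbf{u}_p = -\mathbf{K}\nabla\phi \quad \text{(Darcy's law)}
\;\;\Longrightarrow\;\;
-\nabla \cdot (\mathbf{K}\nabla\phi) = f_p \quad \text{in } \Omega_p .
```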
The interface coupling conditions are an important part of a mixed model. For the mixed Stokes/Darcy model, the following interface conditions are used in the literature [,,,]:
where the proportionality parameter depends on the properties of the porous medium and should be determined experimentally. The first interface condition (7) is the mass conservation across the interface Γ. Using (4) and (5), it can be rewritten as
The second interface condition (8) is the balance of the normal forces across the interface. The third one (9), known as the Beavers–Joseph–Saffman law [,,], states that the slip velocity along the interface is proportional to the shear stress along the interface.
For the convenience of the following discussion, the Dirichlet Boundary Conditions (BCs) are considered for the mixed model:
Besides the Dirichlet BCs, the mixed Dirichlet and Neumann BCs are set up in the later numerical examples.
3. Methodology
In this section, the mixed Stokes/Darcy model is first solved using parallel physics-informed neural networks that are mainly based on the vanilla PINNs []. Then, a high-accuracy parallel neural network with hard constraints for the boundary conditions is presented for the mixed model.
3.1. Parallel Physics-Informed Neural Networks
In [], the authors divided the solution region into many sub-regions and utilized separate networks inside each sub-region to solve nonlinear PDEs on domains with arbitrary complex geometries. Inspired by this approach, we present a parallel Physics-Informed Neural Network, called PPINN, for the mixed Stokes/Darcy model, in which there is one neural network for the fluid flow region , and another for the porous media flow region . Figure 2 displays the diagram of the PPINN architecture.
Figure 2.
Diagram of the PPINNs architecture.
Let the set of randomly selected collocation points consist of interior points in Ω_f and in Ω_p, interface points on Γ, and boundary points on the outer boundaries of Ω_f and Ω_p. The numbers of collocation points in the interiors of Ω_f and Ω_p, on Γ, and on the two outer boundaries are chosen separately.
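As a minimal sketch (assuming the rectangular configuration used in the later examples, with the fluid region (0, 1) × (1, 2) above the porous region (0, 1) × (0, 1) and the interface at y = 1; the point counts are illustrative), the collocation points can be sampled as:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_interior(n, x_range, y_range):
    # Uniform random collocation points in the interior of a rectangle.
    x = rng.uniform(x_range[0], x_range[1], size=(n, 1))
    y = rng.uniform(y_range[0], y_range[1], size=(n, 1))
    return np.hstack([x, y])

# Fluid region (0, 1) x (1, 2) and porous region (0, 1) x (0, 1).
pts_f = sample_interior(1000, (0.0, 1.0), (1.0, 2.0))
pts_p = sample_interior(1000, (0.0, 1.0), (0.0, 1.0))

# Interface: the segment y = 1, 0 <= x <= 1.
x_if = rng.uniform(0.0, 1.0, size=(200, 1))
pts_if = np.hstack([x_if, np.ones_like(x_if)])
```

Boundary points on the outer boundaries of the two rectangles can be sampled analogously, edge by edge.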
Let the neural network outputs be the approximate solutions of the fluid velocity, the fluid pressure, and the piezometric head, respectively, where θ_f denotes the parameters of the neural network for the fluid flow and θ_p the parameters of the neural network for the porous media flow. Based on the vanilla PINNs, we restrict these two neural networks to satisfy the Stokes/Darcy problem by using a PDE-informed loss function, where the boundary conditions are treated in a "soft" manner, namely soft constraints, through the loss function.
Then, the problem of PPINNs for the mixed Stokes/Darcy model can be described by the following minimization problem of the loss function with respect to the parameters θ_f and θ_p,
where
with
The total loss includes three loss functions from the fluid flow region Ω_f, the porous media region Ω_p, and the interface Γ, respectively. The positive parameters represent the weights of the loss functions from Ω_f, Ω_p, and Γ, respectively. These weights ensure that the different components of the loss function are balanced, which can improve the convergence of PINN-based methods [,]. The losses from Ω_f and Ω_p each include a PDE-informed loss and a loss from the boundary conditions. For the Stokes equations, the pressure field can only be determined up to a constant, so additional constraints need to be introduced. These constraints typically include fixing the pressure at a specific reference point or incorporating a regularization term to ensure that the mean pressure over the domain is zero []. To improve the pressure approximation, we take the approach presented in reference [] by adding a pressure training loss function. The pressure data on the boundary should be given by additional information about the pressure. Note that, in the absence of available pressure data, the pressure-related loss function is omitted from the formulation, and the constraints mentioned above could be imposed instead. In our numerical experiments, if an exact solution of the coupled model exists, then the pressure data are given by the exact solution, as in Examples 1, 2, 3, and 5. There is no exact solution for Example 4, where the pressure loss is omitted.
The main steps of PPINNs are presented as Algorithm 1.
Algorithm 1: PPINNs for the mixed Stokes/Darcy model.
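The weighted composition of the loss described above can be sketched as follows (the residual arrays are assumed to be precomputed from the two networks; the helper names and default weights are illustrative, not the paper's notation):

```python
import numpy as np

def mse(r):
    # Mean squared residual over a set of collocation points.
    return float(np.mean(np.square(r)))

def ppinn_loss(res_f_pde, res_f_bc, res_p_pde, res_p_bc, res_interface,
               w_f=1.0, w_p=1.0, w_i=1.0):
    # Each subdomain loss combines its PDE-informed residual with its
    # boundary mismatch (soft constraints); the interface residuals couple
    # the two networks; positive weights balance the three components.
    loss_f = mse(res_f_pde) + mse(res_f_bc)
    loss_p = mse(res_p_pde) + mse(res_p_bc)
    loss_i = mse(res_interface)
    return w_f * loss_f + w_p * loss_p + w_i * loss_i

# With all residuals equal to zero, the total loss vanishes.
print(ppinn_loss(*[np.zeros(4)] * 5))  # 0.0
```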
3.2. Hard Constrained PPINNs
In PPINNs, the loss function decreases during optimization and gradually approaches zero. For the known Dirichlet BCs and the pressure data, if available, the corresponding loss terms should theoretically reach zero during optimization. However, this may not be achieved in the numerical optimization of PPINNs, which reduces their accuracy. Therefore, in order to make full use of the known data, we design a new network architecture based on PPINNs that enforces the Dirichlet BCs, as well as the pressure data, ensuring that the boundary loss of the new network remains exactly 0. In this way, the boundary conditions and the pressure data are treated in a "hard" manner, called hard constraints [,].
We strictly impose the Dirichlet BCs by modifying the neural network architecture. Specifically, we construct the neural network solutions as
where the left-hand sides are the final outputs of the networks; see Figure 3. Here, the extension functions are constructed to satisfy the Dirichlet BCs exactly on the corresponding boundaries, while the pressure extension satisfies the pressure data if the pressure data are available. Analogous to the pressure treatment employed in Algorithm 1 for the mixed Stokes/Darcy model, the hard constraint on pressure (19) is imposed only if the pressure data are available; otherwise, this constraint is omitted. Specifically, in the later numerical experiments, the hard constraint on pressure is only imposed in Examples 1, 2, 3, and 5. The factor d is a smooth distance function satisfying the following two conditions:
where d needs to be constructed case-by-case. For simple rectangular subdomains, d can be taken as a product of linear factors that vanish on the Dirichlet parts of the boundary; this construction will be used in Section 4. For complex regions, please refer to [] for the construction method of d.
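A minimal sketch of the hard-constraint construction (assuming, for illustration, a fully Dirichlet unit square with d(x, y) = x(1 − x)y(1 − y); the stand-in functions g and net are hypothetical):

```python
import numpy as np

def d_fn(x, y):
    # Smooth distance-like function: zero exactly on the boundary of the
    # unit square and positive inside (an illustrative choice of d).
    return x * (1.0 - x) * y * (1.0 - y)

def hard_constrained(x, y, g, net):
    # Final output g(x, y) + d(x, y) * N(x, y): on the boundary d = 0, so
    # the output equals g exactly, regardless of the network parameters.
    return g(x, y) + d_fn(x, y) * net(x, y)

# Hypothetical ingredients: a boundary-data extension g and a stand-in "net".
g = lambda x, y: x + y
net = lambda x, y: np.sin(3.0 * x) * np.cos(2.0 * y)

# On boundary points the Dirichlet data are reproduced exactly.
xb = np.array([0.0, 1.0, 0.3, 0.7])
yb = np.array([0.5, 0.2, 0.0, 1.0])
print(np.allclose(hard_constrained(xb, yb, g, net), g(xb, yb)))  # True
```

In the interior, d is positive, so the network output is free to fit the PDE and interface residuals.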
Figure 3.
Diagram of the HC-PPINNs architecture.
Then, the loss function here is transformed into a form without simulation data:
where
where the operators are defined as in Equation (17). The positive parameters represent the weights of the loss functions from Ω_f, Ω_p, and Γ, respectively.
We call the above parallel PINNs with hard constraints HC-PPINNs, and the main steps are listed in Algorithm 2.
Algorithm 2: HC-PPINNs for the mixed Stokes/Darcy model.
4. Computational Results and Discussion
To illustrate the performance of the two neural networks presented above, PPINNs and HC-PPINNs, we present numerical results for five different settings of the mixed Stokes/Darcy model.
Our experiments are based on Python 3.8.19, TensorFlow 2.0.0, and Keras 2.3.1, running on an Intel(R) Core(TM) i5-8300H CPU. Unless otherwise specified, in the following numerical experiments the two networks use the same number of hidden layers and the same number of neurons in each hidden layer. We employ Xavier initialization, the tanh activation function, a learning rate of 1 × 10⁻⁴, and 1 × 10⁴ iterations. For the collocation points, we take , , . The weight parameters are all set to 1.
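As a framework-agnostic sketch of this setup (the paper uses TensorFlow/Keras; plain NumPy is used here for brevity, and the layer widths are illustrative), a three-hidden-layer tanh network with Xavier initialization can be written as:

```python
import numpy as np

rng = np.random.default_rng(42)

def xavier(n_in, n_out):
    # Xavier (Glorot) uniform initialization, scaled by fan-in and fan-out.
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def make_mlp(sizes):
    # sizes = [2, 32, 32, 32, 1]: inputs (x, y), three hidden layers of
    # 32 neurons each, one output.
    return [(xavier(a, b), np.zeros(b)) for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    # tanh activations on hidden layers, linear output layer.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

params = make_mlp([2, 32, 32, 32, 1])
out = forward(params, np.zeros((5, 2)))
print(out.shape)  # (5, 1)
```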
The following relative error between the neural network approximation solution U and the exact solution u will be used in the examples:

\[
\frac{\lVert U - u \rVert}{\lVert u \rVert}
= \frac{\left(\sum_{i=1}^{N} \lvert U(x_i) - u(x_i) \rvert^{2}\right)^{1/2}}
       {\left(\sum_{i=1}^{N} \lvert u(x_i) \rvert^{2}\right)^{1/2}},
\tag{25}
\]
where N represents the number of points in the test set. We employ a test set of 10,000 equi-spaced points in Ω_f and Ω_p, respectively, in the following numerical experiments.
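A minimal sketch of how the relative error (25) can be evaluated on a test set (the perturbation used to mimic a prediction is purely illustrative):

```python
import numpy as np

def relative_error(U, u):
    # Relative L2 error over the test points:
    # sqrt(sum |U_i - u_i|^2) / sqrt(sum |u_i|^2).
    return float(np.linalg.norm(U - u) / np.linalg.norm(u))

u = np.linspace(1.0, 2.0, 10000)   # "exact" values on a 10,000-point test set
U = u + 1e-3                       # slightly perturbed "prediction"
print(relative_error(U, u) < 1e-3)  # True
```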
4.1. Example 1
Assume that the computational domain , and the interface . The physical parameter . The Dirichlet BCs and the forcing terms are given by the following exact solutions:
The relative errors (25) of PPINNs and HC-PPINNs with three hidden layers and different numbers of neurons in each hidden layer are displayed in Table 1. From Table 1, we can see that, for both HC-PPINNs and PPINNs, the relative error gradually decreases as the number of neurons increases. The approximate solutions obtained from HC-PPINNs are more accurate than those from PPINNs. In particular, the accuracy of HC-PPINNs with 8 neurons is higher than that of PPINNs with 32 neurons.
Table 1.
The relative error of PPINNs and HC-PPINNs for Example 1 with three hidden layers and different numbers of neurons in each hidden layer.
Figure 4 shows the loss history of PPINNs and HC-PPINNs with three hidden layers and 32 neurons in each hidden layer. We can see that the training loss of HC-PPINNs is much lower than that of PPINNs.
Figure 4.
The loss history of Example 1, where both the neural networks have three hidden layers and 32 neurons in each hidden layer. (Left) Loss history of PPINNs. (Right) Loss history of HC-PPINNs.
Figure 5 displays the predicted values by HC-PPINNs with three hidden layers and 32 neurons in each hidden layer, as well as the absolute error between the predicted values and the exact solutions. It can be observed that the predicted values approximate the exact solutions well.

Figure 5.
Comparison of the predicted values by HC-PPINNs and the exact solutions for Example 1, where the neural networks have three hidden layers and 32 neurons in each hidden layer. (Left) The left column shows the predicted velocity in the x direction of the fluid flow, the predicted velocity in the y direction of the fluid flow, the predicted pressure P of the fluid flow, and the predicted piezometric head of the porous media flow, respectively. (Right) The right column shows the corresponding absolute errors between the predicted values and the exact solutions.
4.2. Example 2
In this example, we consider the case with different values of the hydraulic conductivity K. The computational domain and the other physical parameters remain the same as in Example 1. The Dirichlet BCs and the forcing terms are chosen such that the exact solution of the coupled model is given by
For HC-PPINNs, when , the weight parameters , , . When , the weight parameters , , . For PPINNs, when , the weight parameters , , . The relative error (25) of Example 2 with the varying hydraulic conductivity K is displayed in Table 2, where the neural networks have three hidden layers and 16 neurons in each hidden layer. From Table 2, we can see that HC-PPINNs can keep high accuracy when the hydraulic conductivity K becomes smaller, while PPINNs have lower accuracy even at . We depict the predicted values of HC-PPINNs with three hidden layers and 16 neurons in each hidden layer, as well as the contrast between the exact solutions and the approximate solutions for in Figure 6.
Table 2.
The relative error of Example 2 with the varying hydraulic conductivity K, where the neural networks have three hidden layers and 16 neurons in each hidden layer.
Figure 6.
Comparison of the predicted values by HC-PPINNs and the exact solutions for Example 2 when , where the neural networks have three hidden layers and 16 neurons in each hidden layer. (Left) The left column shows the predicted velocity in the x direction of the fluid flow, the predicted velocity in the y direction of the fluid flow, the predicted pressure P of the fluid flow, and the predicted piezometric head of the porous media flow, respectively. (Right) The right column shows the corresponding absolute errors between the predicted values and the exact solutions.
4.3. Example 3
Since PINNs are mesh-free methods for solving PDEs, in this example we consider an irregular computational domain to demonstrate that our method, HC-PPINNs, still performs well in this case. Let the computational domain and with the interface ; see Figure 7. The physical parameters and the exact solutions remain the same as in Example 1.
Figure 7.
The sampling area and sampling points of Example 3. The black points ‘∘’ are boundary points, which will be treated in a “soft” manner, the red points ‘★’ are interface points, and the rest are residual points of equations.
Here, part of the boundary lies on the same horizontal line as the interface; it is difficult to construct d if this part of the boundary conditions is included, so these boundary conditions are incorporated into the loss in a "soft" manner.
We employ , , , and . The sampling points are displayed in Figure 7. Then, we choose and .
Table 3 shows the relative errors (25) with three hidden layers and different numbers of neurons in each hidden layer. Figure 8 depicts the comparison of the predicted values of HC-PPINNs and the exact solutions, where the neural networks have three hidden layers and 32 neurons in each hidden layer. We can observe that HC-PPINNs still maintain high accuracy in the case of an irregular computational domain.
Table 3.
The relative error of HC-PPINNs for Example 3, where the neural networks have three hidden layers and different numbers of neurons in each hidden layer.
Figure 8.
Comparison of the predicted values by HC-PPINNs and the exact solutions for Example 3, where the neural networks have three hidden layers and 32 neurons in each hidden layer. (Left) The left column shows the predicted velocity in the x direction of the fluid flow, the predicted velocity in the y direction of the fluid flow, the predicted pressure P of the fluid flow, and the predicted piezometric head of the porous media flow, respectively. (Right) The right column shows the corresponding absolute errors between the predicted values and the exact solutions.
4.4. Example 4
In this example, we consider a more complex situation, the Stokes/Darcy model with mixed boundary conditions, to demonstrate the performance of HC-PPINNs. The computational domain and the parameters remain the same as in Example 1. The setting of the BCs is shown in Figure 9. We set = = 0.
Figure 9.
Boundary conditions of Stokes/Darcy problem.
Here, for HC-PPINNs, the Dirichlet BCs are incorporated into the loss in a “hard” manner, while the Neumann BCs are incorporated into the loss in a “soft” manner.
We consider two different interface situations: one is the straight-line interface and the other is a curved interface. Figure 10 and Figure 11 show the training points and the simulation results, respectively. It can be seen that HC-PPINNs show good performance for both straight-line and curved interfaces.
Figure 10.
Sampling area and training points of Example 4, where the black and green points ‘∘’ are Neumann BCs points, the red points are interface points, and the rest are residual points of the equation. (Left) The straight-line interface. (Right) The curved interface.
Figure 11.
Simulation results of HC-PPINNs for Example 4, where the neural networks have three hidden layers and 16 neurons in each hidden layer. The color bar represents the approximated pressure and the vectors represent the velocity of the fluid. (Left) The straight-line interface. (Right) The curved interface.
4.5. Example 5
In order to demonstrate that our method, HC-PPINNs, can greatly improve the accuracy of solving the Stokes/Darcy coupling problem, we consider the non-stationary mixed Stokes/Darcy model in this example. We consider the first test with nonhomogeneous boundary conditions as described in [], and perform comparisons among HC-PPINNs, PPINNs, and Coupled Deep Neural Networks (CDNNs) in []. The computational domain is , and the interface with . All the physical parameters are set to 1. The boundary data and the forcing terms are chosen such that the exact solution of the coupled model is given by
For HC-PPINNs, we choose and . Thus, the initial conditions are also enforced in the loss function. We employ , a learning rate of 0.001, and 20,000 iterations for both PPINNs and HC-PPINNs.
The relative errors (25) of HC-PPINNs, PPINNs, and CDNNs in [] for Example 5 with different numbers of hidden layers and 16 neurons in each hidden layer are displayed in Table 4. We can see that, compared with CDNNs and PPINNs, HC-PPINNs are much more accurate.
Table 4.
The relative error of HC-PPINNs, PPINNs, and CDNNs in [] for Example 5 with different numbers of hidden layers and 16 neurons in each hidden layer.
5. Conclusions
In this paper, we design HC-PPINNs to improve the accuracy of neural networks for solving Stokes/Darcy coupled problems. The method enforces the boundary conditions by changing the network architecture, so only the governing equations and the interface equations need to be incorporated into the loss in a "soft" manner for training. Since no simulation data are required, the training cost is greatly reduced. Through numerical experiments, HC-PPINNs have demonstrated good performance not only in regular regions but also in irregular regions and with curved interfaces. Moreover, the comparison with PPINNs and the CDNNs in [] shows that HC-PPINNs greatly improve the network's prediction accuracy. However, for the non-stationary mixed Stokes/Darcy coupling problem, our method does not have the extrapolation capability [,] to extend solutions to future times. This limitation arises because temporal and spatial variables are treated identically in HC-PPINNs, and further exploration is still needed to address this issue.
Author Contributions
Conceptualization, Z.L. and X.Z.; methodology, Z.L.; software, Z.L. and J.Z.; validation, Z.L. and X.Z.; formal analysis, Z.L. and X.Z.; investigation, J.Z.; resources, Z.L.; data curation, J.Z.; writing—original draft preparation, Z.L.; writing—review and editing, Z.L. and X.Z.; visualization, Z.L. and J.Z.; supervision, X.Z.; project administration, X.Z.; funding acquisition, X.Z. All authors have read and agreed to the published version of the manuscript.
Funding
This work was partially supported by “the Fundamental Research Funds for the Central Universities”.
Institutional Review Board Statement
Not applicable.
Data Availability Statement
All data supporting the reported results are contained within the article itself. No external datasets or repositories were used, and no new data were generated that would require separate archiving.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Bear, J. Hydraulics of Groundwater, 1st ed.; McGraw-Hill: New York, NY, USA, 1979. [Google Scholar]
- Discacciati, M. Domain Decomposition Methods for the Coupling of Surface and Groundwater Flows. Ph.D. Thesis, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 2008. [Google Scholar]
- Discacciati, M.; Miglio, E.; Quarteroni, A. Mathematical and numerical models for coupling surface and groundwater flows. Appl. Numer. Math. 2002, 43, 57–74. [Google Scholar] [CrossRef]
- Wood, W.L. Introduction to Numerical Methods for Water Resources, 1st ed.; Clarendon Press: Oxford, MS, USA, 1993. [Google Scholar]
- Beavers, G.S.; Joseph, D.D. Boundary conditions at a naturally permeable wall. J. Fluid. Mech. 1967, 30, 197–207. [Google Scholar] [CrossRef]
- Jones, I.P. Low reynolds number flow past a porous spherical shell. Proc. Camb. Phil. Soc. 1973, 73, 231–238. [Google Scholar] [CrossRef]
- Saffman, P.G. On the boundary condition at the interface of a porous medium. Stud. Appl. Math. 1971, 50, 93–101. [Google Scholar] [CrossRef]
- Arbogast, T.; Brunson, D.S. A computational method for approximating a Darcy-Stokes system governing a vuggy porous medium. Comput. Geosci. 2007, 11, 207–218. [Google Scholar] [CrossRef]
- Cao, Y.; Gunzburger, M.; Hu, X.; Hua, F.; Wang, X.; Zhao, W. Finite element approximations for Stokes-Darcy flow with Beavers-Joseph interface conditions. Siam. J. Numer. Anal. 2010, 47, 4239–4256. [Google Scholar] [CrossRef]
- Cao, Y.; Gunzburger, M.; Hu, X.; Hua, F.; Wang, X. Coupled Stokes-Darcy model with Beavers-Joseph interface boundary condition. Commun. Math. Sci. 2010, 8, 1–25. [Google Scholar] [CrossRef]
- Chidyagwai, P.; Riviére, B. Numerical modelling of coupled surface and subsurface flow systems. Adv. Water. Resour. 2010, 8, 92–105. [Google Scholar] [CrossRef]
- Hanspal, N.S.; Waghode, A.N.; Nassehi, V.; Wakeman, R.J. Numerical analysis of coupled Stokes/Darcy flows in industrial filtrations. Transport. Porous. Med. 2006, 64, 73–101. [Google Scholar] [CrossRef]
- Nassehi, V. Modelling of combined Navier-Stokes and Darcy flows in crossflow membrane filtration. Chem. Eng. Sci. 1998, 53, 1253–1265. [Google Scholar] [CrossRef]
- Pozrikidis, C.; Farrow, D.A. A model of fluid flow in solid tumors. Ann. Biomed. Eng. 2003, 31, 181–194. [Google Scholar] [CrossRef] [PubMed]
- Hanspal, N.S.; Waghode, A.N.; Nassehi, V.; Wakeman, R.J. Development of a predictive mathematical model for coupled Stokes/Darcy flows in cross-flow membrane filtration. Chem. Eng. J. 2009, 149, 132–142. [Google Scholar] [CrossRef]
- Disc, M.; Quarteroni, A. Convergence analysis of a subdomain iterative method for the finite element approximation of the coupling of Stokes and Darcy equations. Comput. Vis. Sci. 2004, 6, 93–103. [Google Scholar]
- Mikelic, A.; Jäger, W. On the interface boundary condition of Beavers, Joseph, and Saffman. Siam. J. Appl. Math. 2000, 60, 1111–1127. [Google Scholar] [CrossRef]
- Jäger, W.; Mikelic, A.; Neuss, N. Asymptotic analysis of the laminar viscous flow over a porous bed. Siam. J. Sci. Comput. 2001, 22, 2006–2028. [Google Scholar] [CrossRef]
- Layton, W.J.; Schieweck, F.; Yotov, I. Coupling fluid flow with porous media flow. Siam. J. Numer. Anal. 2002, 40, 2195–2218. [Google Scholar] [CrossRef]
- Miglio, E.; Quarteroni, A.; Saleri, F. Coupling of free surface and groundwater flows. Comput. Fluids 2003, 32, 73–83. [Google Scholar] [CrossRef]
- Riviére, B.; Yotov, I. Locally conservative coupling of Stokes and Darcy flows. Siam. J. Numer. Anal. 2005, 40, 1959–1977. [Google Scholar] [CrossRef]
- Lee, H.; Rife, K. Least squares approach for the time-dependent nonlinear Stokes-Darcy flow. Comput. Math. Appl. 2014, 67, 1806–1815. [Google Scholar] [CrossRef]
- Rybak, I.; Magiera, J. A multiple-time-step technique for coupled free flow and porous medium systems. J. Comput. Phys. 2014, 272, 327–342. [Google Scholar] [CrossRef]
- Mu, M.; Xu, J. A Two-Grid Method of a Mixed Stokes-Darcy Model for Coupling Fluid Flow with Porous Media Flow. Siam. J. Numer. Anal. 2007, 45, 1801–1813. [Google Scholar] [CrossRef]
- Mu, M.; Zhu, X. Decoupled schemes for a non-stationary mixed Stokes-Darcy model. Math. Comput. 2010, 79, 707–731. [Google Scholar] [CrossRef]
- Shan, L.; Zheng, H.; Layton, W.J. A decoupling method with different subdomain time steps for the nonstationary Stokes-Darcy model. Numer. Meth. Part. Differ. Equ. 2012, 29, 549–583. [Google Scholar] [CrossRef]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef] [PubMed]
- Baker, N.; Alexander, F.; Bremer, T.; Hagberg, A.; Kevrekidis, Y.; Najm, H.; Parashar, M.; Patra, A.; Sethian, J.; Wild, S.; et al. Workshop Report on Basic Research Needs for Scientific Machine Learning: Core Technologies for Artificial Intelligence; Technical Report; USDOE Office of Science (SC): Washington, DC, USA, 2019.
- Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 2019, 378, 686–707. [Google Scholar] [CrossRef]
- Raissi, M.; Yazdani, A.; Karniadakis, G.E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science 2020, 367, 1026–1030. [Google Scholar] [CrossRef]
- Sirignano, J.; Spiliopoulos, K. DGM: A deep learning algorithm for solving partial differential equations. J. Comput. Phys. 2018, 375, 1339–1364. [Google Scholar] [CrossRef]
- Yu, B. The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems. Commun. Math. Stat. 2018, 6, 1–12. [Google Scholar]
- Zang, Y.; Bao, G.; Ye, X.; Zhou, H. Weak adversarial networks for high-dimensional partial differential equations. J. Comput. Phys. 2020, 411, 109409.
- Dong, S.; Li, Z. Local extreme learning machines and domain decomposition for solving linear and nonlinear partial differential equations. Comput. Methods Appl. Mech. Eng. 2021, 387, 114129.
- Barron, A. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inform. Theory 1993, 39, 930–945.
- Chen, T.; Chen, H. Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Trans. Neural Netw. 1995, 6, 911–917.
- Poggio, T.; Mhaskar, H.; Rosasco, L.; Miranda, B.; Liao, Q. Why and when can deep but not shallow networks avoid the curse of dimensionality: A review. Int. J. Autom. Comput. 2017, 14, 503–519.
- Grohs, P.; Hornung, F.; Jentzen, A.; Wurstemberger, P. A proof that artificial neural networks overcome the curse of dimensionality in the numerical approximation of Black–Scholes partial differential equations. arXiv 2018, arXiv:1809.02362.
- Kharazmi, E.; Zhang, Z.; Karniadakis, G.E. hp-VPINNs: Variational physics-informed neural networks with domain decomposition. Comput. Methods Appl. Mech. Eng. 2021, 374, 113547.
- Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative physics-informed neural networks on discrete domains for conservation laws: Applications to forward and inverse problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028.
- Jagtap, A.D.; Karniadakis, G.E. Extended Physics-Informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition Based Deep Learning Framework for Nonlinear Partial Differential Equations. Commun. Comput. Phys. 2020, 28, 2002–2041.
- Sun, L.; Gao, H.; Pan, S.; Wang, J. Surrogate modeling for fluid flows based on physics-constrained deep learning without simulation data. Comput. Methods Appl. Mech. Eng. 2020, 361, 112732.
- Lu, L.; Pestourie, R.; Yao, W.; Wang, Z.; Verdugo, F.; Johnson, S.G. Physics-Informed Neural Networks with Hard Constraints for Inverse Design. SIAM J. Sci. Comput. 2021, 43, B1105–B1132.
- Cai, S.; Mao, Z.; Wang, Z.; Wang, Z.; Yin, M.; Karniadakis, G.E. Physics-informed neural networks (PINNs) for fluid mechanics: A review. Acta Mech. Sin. 2021, 37, 1727–1738.
- Pu, R.; Feng, X. Physics-Informed Neural Networks for Solving Coupled Stokes–Darcy Equation. Entropy 2022, 24, 1106.
- Yue, J.; Li, J. Efficient coupled deep neural networks for the time-dependent coupled Stokes–Darcy problems. Appl. Math. Comput. 2023, 437, 127514.
- Zhang, Z. Neural Network Method for Solving Forward and Inverse Problems of Navier–Stokes/Darcy Coupling Model. Master’s Thesis, East China Normal University, Shanghai, China, 2023.
- McClenny, L.D.; Braga-Neto, U.M. Self-adaptive physics-informed neural networks. J. Comput. Phys. 2023, 474, 111722.
- Berardi, M.; Difonzo, F.V.; Icardi, M. Inverse Physics-Informed Neural Networks for transport models in porous materials. Comput. Methods Appl. Mech. Eng. 2025, 435, 117628.
- Farkane, A.; Ghogho, M.; Oudani, M.; Boutayeb, M. Enhancing physics informed neural networks for solving Navier–Stokes equations. Int. J. Numer. Meth. Fluids 2024, 96, 381–396.
- Berg, J.; Nyström, K. A unified deep artificial neural network approach to partial differential equations in complex geometries. Neurocomputing 2018, 317, 28–41.
- Ren, P.; Rao, C.; Liu, Y.; Wang, J.; Sun, H. PhyCRNet: Physics-informed convolutional-recurrent network for solving spatiotemporal PDEs. Comput. Methods Appl. Mech. Eng. 2022, 389, 114399.
- Mavi, A.; Bekar, A.C.; Haghighat, E.; Madenci, E. An unsupervised latent/output physics-informed convolutional-LSTM network for solving partial differential equations using peridynamic differential operator. Comput. Methods Appl. Mech. Eng. 2023, 407, 115944.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).