Article

Physics-Constrained Meta-Embedded Neural Network for Bottom-Hole Pressure Prediction in Radial Oil Flow Reservoirs

1
College of Petroleum Engineering, Xi’an Shiyou University, Xi’an 710065, China
2
School of Information Science and Technology, Northwest University, Xi’an 710127, China
3
School of Earth Science and Engineering, Xi’an Shiyou University, Xi’an 710065, China
*
Author to whom correspondence should be addressed.
Processes 2026, 14(1), 89; https://doi.org/10.3390/pr14010089
Submission received: 11 November 2025 / Revised: 12 December 2025 / Accepted: 23 December 2025 / Published: 26 December 2025
(This article belongs to the Topic Exploitation and Underground Storage of Oil and Gas)

Abstract

With the advancement of petroleum engineering, the increasing complexity of formations and unpredictable conditions make wellbore pressure prediction more challenging. Accurate bottom-hole pressure (BHP) prediction is crucial for the safe and stable development of oil and gas reservoirs. Solving the partial differential equations (PDEs) governing fluid flow is key to this prediction. As deep learning becomes widespread in scientific and engineering applications, physics-informed neural networks (PINNs) have emerged as powerful tools for solving PDEs. However, traditional PINNs face challenges such as insufficient fitting accuracy, large errors, and gradient explosion. This study introduces MetaPress, a novel physics-informed neural network architecture, to address inaccurate formation pressure prediction. MetaPress incorporates a meta-learning-based embedding function that integrates spatial information into the input and forget gates of Long Short-Term Memory networks. This enables the model to capture the complex spatiotemporal features of flow problems, improving its generalization and nonlinear modeling capabilities. Using the MetaPress architecture, we predicted BHP under single-phase flow conditions, achieving a relative L2 error of less than 2%. This approach offers a novel method for solving seepage equations and predicting BHP, providing new insights for subsequent studies on reservoir fluid flow processes.

1. Introduction

In oil exploitation, the complex and variable formation conditions make it difficult to compute and predict formation pressure, which poses significant obstacles to the efficient and safe development of oil and gas reservoirs. Formation pressure and bottom-hole pressure (BHP) are key parameters for well productivity evaluation and production optimization. Accurate prediction of these pressures relies on solving the governing seepage partial differential equations (PDEs). Besides traditional methods like the Finite Element Method (FEM) [1] and Finite Difference Method (FDM) [2], more researchers are beginning to use neural networks to solve PDEs.
Neural networks have been widely adopted in scientific and engineering problems [3], including petroleum engineering, due to their ability to approximate complex nonlinear relationships and reduce the computational cost of traditional numerical schemes [4].
Various studies on artificial intelligence and neural networks have been conducted in the petroleum industry. These can be divided into four main research areas: the application of artificial neural networks in exploration, drilling, production, and reservoir engineering [5]. In exploration, Ross [6] used neural networks to improve the resolution and clarity of seismic data, enabling the rapid and accurate identification of porosity in tight sandstone. However, distinguishing between shale and sandstone with differing acoustic impedances in the strata remains challenging. Ogiesoba et al. [7] applied neural networks to convert 3D seismic data into well logging data, analyzing lithology and depositional attributes to predict hydrocarbon sweet-spot distribution in the Serbin field in Texas. In drilling, Arehart [8] utilized neural networks to determine drill bit wear levels, reducing drilling costs. However, only the physical parameters of the drill bit were considered, neglecting the impact of formation parameters during drilling. Yashodhan [9] improved on previous work by using drilling and rock strength data to predict drill bit wear using neural networks. This approach optimizes drilling speed and reduces costs. In oil production, Khan [10] applied neural networks to analyze production data from oil wells and predict the optimal production rate to ensure efficient and stable production. However, the model lacked consideration of formation data, leading to lower accuracy. Guofan Luo [11] used horizontal well completion data to establish a model that links first-year production to well completion characteristics, predicting oil well output. In reservoir engineering, Elshafei et al. [12] used neural networks to predict porosity and water saturation in formations. Hamam et al. [13] developed a neural network tool to inject CO2 into natural fractures. Yuxi Yang et al. 
[14] used small datasets to train deep neural networks to predict initial production rates in tight oil horizontal wells. In addition to these applications, neural networks are used in other petroleum contexts, such as predicting flow states and fracture directions and simulating microflows.
In predicting bottom-hole and formation pressure, neural networks are valuable tools, enabling accurate predictions that help researchers understand the fluid phase characteristics in the formation, control production rates, and assess formation energy. For normal oil well production, determining the production state is crucial. Nait Amar et al. [15] proposed using Support Vector Regression (SVR) optimized by the Firefly Algorithm (FFA) to train and predict oil well BHP for 100 oil wells and reservoir datasets from multiple fields. However, the prediction accuracy was limited since the training dataset focused solely on oil well data without incorporating reservoir data. Chengkai Zhang et al. [16] proposed a hybrid neural network integrating Convolutional Neural Networks (CNNs) and Gate Recurrent Units (GRUs). This method uses a CNN to extract reservoir spatial parameters and trains the model with measured BHP to predict BHP fluctuations. However, this hybrid model is a “black box” with limited interpretability. Nwanwe et al. [17] addressed the limitations of “black-box” models by proposing a visualized mathematical (white-box) neural network model for real-time prediction of bottom-hole flowing pressure using collected real-time data points. These approaches train and predict BHP from data collected in the field, but interpolating the fluctuating states between measured data points remains difficult, and prediction accuracy is often low. Despite various improvements and hybrid models, the accuracy of oil well BHP prediction remains suboptimal. The root cause is that, for accurate BHP prediction, the formation flow state must be considered and the seepage equation must be solved. Introducing seepage physics into neural network models helps networks learn the physical relationships between formation and fluid, leading to more accurate training and predictions. Recent data-driven studies have shown that pressure behavior can be learned directly from simulator outputs.
For gas-condensate settings, neuro-adaptive models have reproduced production and pressure-drop dynamics with high fidelity, evidencing the feasibility of learning pressure trajectories from synthetic labels [18]. Complementarily, Dynamic Mode Decomposition (DMD) reconstructs reservoir pressure fields from spatiotemporal snapshots with very high accuracy on benchmark cases, highlighting an effective modal route to pressure prediction [19]. More recent dynamic mode learning further improves long-horizon tracking of (multi-phase) pressure dynamics via learned feature reduction [20]. Current studies, however, do not incorporate the seepage equation into BHP prediction models, and solving the seepage equation amounts to solving the associated PDE problem.
The advantage of using neural networks for PDE-solving lies in their ability to deal with high-dimensional and complex problems by training and testing on data without directly solving the PDEs. This approach has shown immense potential in dealing with PDEs [21]. However, when using methods like Convolutional Neural Networks (CNNs) [22], energy functional minimization networks [23], and deep neural networks [24] to solve PDEs, large real datasets are required, and the physical mechanisms are often neglected. This limitation leads to prediction inaccuracies and poor fitting when dealing with complex formation conditions and production variations. To address these challenges, Raissi et al. [25] introduced a new neural network architecture, the physics-informed neural network (PINN), to solve nonlinear PDEs. This method incorporates physical constraints into the neural network training process, optimizing the network with residuals from boundary conditions and control equations as the loss function. By incorporating physical information during training, PINNs achieve better results with less training data than traditional neural networks. Due to their flexibility and expressive power in function approximation, PINNs have been widely applied to solve various types of PDEs, such as integral differential equations [26], surface PDEs [27], and stochastic differential equations [28]. In practical applications, PINNs have been used in fluid mechanics [29], solid mechanics [30], material science [31], mechanical engineering [32], mathematical calculations [33], and electrical systems [34]. However, in the petroleum industry, PINN applications remain limited, often relying on optimized or modified architectures [35,36].
PINNs leverage physical information as hard or soft boundary constraints, which reduces the dependence on labeled data and enforces physical consistency during training. However, they can still suffer from prediction inaccuracies [37], imbalanced loss terms, and convergence difficulties [38], especially when training data are sparse and reservoir conditions are complex. These limitations motivate the development of improved physics-constrained architectures for reservoir seepage problems [39].
This study proposes a novel physics-constrained neural network architecture, MetaPress, for PDE-based BHP prediction in oil and gas production. Specifically, we derive the seepage equation for formation fluids and clarify the governing equations and boundary conditions in the seepage process. Based on this formulation, we construct a meta function that encodes spatial information and embed it into an LSTM framework, forming the MetaPress network for solving the seepage equation. The loss function is built from the fundamental PDEs and seepage equations, combining control equations, inner and outer boundary conditions, and an L2-norm penalty to improve generalization. We then perform numerical experiments on a synthetic single-well radial flow benchmark, using PEBI-simulated data as a reference, and compare MetaPress with a baseline PINN under the same settings to evaluate its performance.
The remainder of this paper is structured as follows. Section 2 primarily introduces and details the MetaPress construction process, including the derivation of the seepage equation, the MetaPress network framework, the construction of the error function, and the setting of neural network parameters. Section 3 focuses on the experimental process and results of employing MetaPress for seepage solution and pressure prediction, including a comparison of pilot models, the determination of model parameters, pressure prediction experiments, and an analysis and discussion of the results. Finally, some key conclusions are summarized in Section 4. It is important to note that the present work should be regarded as a proof-of-concept study. We focus on an idealized but classical single-phase, slightly compressible radial flow benchmark in a horizontal reservoir, and use this setting to assess the capability of MetaPress to predict bottom-hole pressure under PDE-constrained conditions. The goal is to demonstrate the methodological advantages of the MetaPress architecture in a controlled scenario, rather than to provide a fully general model for all field-scale reservoir settings.

2. Materials and Methods

2.1. Seepage Equation of Formation Fluid

The traditional general form of parameterized nonlinear partial differential equations (PDE) is given as follows:
\frac{\partial u}{\partial t} + \mathcal{N}[u; \lambda] = 0, \quad x \in \Omega, \; t \in [0, T]
where u(t, x) represents the latent (hidden) solution, \mathcal{N}[u; \lambda] is a nonlinear operator parameterized by λ, and Ω is a subset of \mathbb{R}^D. This general form encompasses a wide range of mathematical and physical models, including conservation laws and diffusion processes [39].
When λ is given in the model, the function can be defined as follows:
\frac{\partial u}{\partial t} + \mathcal{N}[u] = 0, \quad x \in \Omega, \; t \in [0, T]
Here, u(t, x) denotes the solution to the PDE, and \mathcal{N}[\cdot] is a nonlinear operator.
The continuous-time approach defines a residual function f(t, x) such that
f := \frac{\partial u}{\partial t} + \mathcal{N}[u]
In this study, the flow model adheres to the following assumptions: the fluid flow in the reservoir is single-phase oil flow, and both the fluid and the pore medium are slightly compressible. The effects of the wellbore and the near-wellbore region are also considered. The reservoir is assumed to be horizontal, so the effect of gravity is neglected [35].
Based on Darcy’s experiments on fluid flow, the relationship of fluid movement in the core can be expressed as follows:
Q = \frac{K A \, \Delta p}{\mu L}
Here, Q is the flow rate, K is the absolute permeability tensor of the porous medium, A is the cross-sectional area of the core, p is the pressure, Δp is the fluid pressure difference, μ is the fluid viscosity, and L is the core length.
Rewriting the above equation in differential form
v = -\frac{K}{\mu} \nabla p
where v denotes the Darcy velocity.
The continuity equation of unsteady active single-phase flow is as follows:
-\nabla \cdot (\rho v) = \frac{\partial}{\partial t}(\rho \phi) + \bar{q}
where ρ denotes the fluid density, ϕ denotes the porosity, and \bar{q} denotes the average volume flow rate.
Substituting Equation (5) into Equation (6), the flow equation is as follows:
\nabla \cdot \left( \frac{K}{\mu B} \nabla p \right) = \frac{\partial}{\partial t} \left( \frac{\phi}{B} \right) + q_{sc}
where B denotes the formation volume factor, and qsc denotes volume flow per unit volume per unit time under standard conditions.
Assuming constant temperature, with the fluid and pores being slightly compressible, the following relationship holds:
B = \frac{B_{ref}}{1 + C_f (p - p_{ref})}, \qquad \phi = \phi_{ref} \left[ 1 + C_r (p - p_{ref}) \right]
where B_ref is the formation volume factor under reference conditions, C_f is the fluid compression coefficient, C_r is the rock compression coefficient, p_ref is the formation pressure under reference conditions, and ϕ_ref is the porosity under reference conditions.
Substituting Equation (8) into the right-hand side of Equation (7), the equation can be written as
\frac{\partial}{\partial t} \left( \frac{\phi}{B} \right) = \frac{\partial}{\partial t} \left\{ \frac{\phi_{ref} \left[ 1 + (C_r + C_f)(p - p_{ref}) + C_r C_f (p - p_{ref})^2 \right]}{B_{ref}} \right\}
Because the product C_r C_f is very small, this term is omitted, and Equation (9) can be written as follows:
\frac{\partial}{\partial t} \left( \frac{\phi}{B} \right) = \frac{\partial}{\partial t} \left\{ \frac{\phi_{ref} \left[ 1 + (C_r + C_f)(p - p_{ref}) \right]}{B_{ref}} \right\} = \frac{\partial}{\partial p} \left\{ \frac{\phi_{ref} \left[ 1 + (C_r + C_f)(p - p_{ref}) \right]}{B_{ref}} \right\} \frac{\partial p}{\partial t} = \frac{\phi_{ref} (C_r + C_f)}{B_{ref}} \frac{\partial p}{\partial t}
Recalling the flow equation, Equation (7):
\nabla \cdot \left( \frac{K}{\mu B} \nabla p \right) = \frac{\partial}{\partial t} \left( \frac{\phi}{B} \right) + q_{sc}
Substituting Equation (10) into Equation (11), the final form of the seepage equation is as follows:
\nabla \cdot \left( \frac{K}{\mu B} \nabla p \right) = \frac{\phi_{ref} (C_r + C_f)}{B_{ref}} \frac{\partial p}{\partial t} + q_{sc}
Since the total flow rate passes through a cylindrical surface of radius r,
Q = A v, \qquad A = 2 \pi r h
The differential form of Darcy’s law in radial coordinates is as follows:
v = \frac{K}{\mu} \frac{\partial p}{\partial r}
Then, combining Equations (13) and (14) gives the following equation:
Q = 2 \pi r h \, \frac{K}{\mu} \frac{\partial p}{\partial r}
By separating the variables and integrating both sides of Equation (15), the relationship between formation pressure and radius can be expressed as follows:
Q = \frac{2 \pi h K (p_e - p_{wf})}{\mu B \ln(r_e / r_w)}
Considering the wellbore skin effect, Equation (16) can be written as
Q = \frac{2 \pi h K (p_e - p_{wf})}{\mu B \left[ \ln(r_e / r_w) + S \right]}
where p_wf denotes the BHP, h denotes the reservoir thickness, r_e denotes the equivalent outer boundary radius of the reservoir (also written as r_b), r_w denotes the actual radius of the wellbore, and S denotes the skin coefficient.
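As a quick numerical illustration of the skin-adjusted inflow relation in Equation (17), the sketch below evaluates Q in SI units. All parameter values are illustrative assumptions (permeability converted from 40 mD, thickness, viscosity, and pressures chosen for demonstration), not field data.

```python
import math

def production_rate(k, h, mu, B, p_e, p_wf, r_e, r_w, S):
    """Steady radial inflow with skin, Eq. (17):
    Q = 2*pi*h*K*(p_e - p_wf) / (mu * B * (ln(r_e/r_w) + S)). SI units."""
    return 2.0 * math.pi * h * k * (p_e - p_wf) / (mu * B * (math.log(r_e / r_w) + S))

# Illustrative (assumed) values: K = 40 mD ~ 3.95e-14 m^2, h = 10 m,
# mu = 5 mPa*s, B = 1.2, p_e = 17.7 MPa, p_wf = 15 MPa, r_e = 200 m, r_w = 0.1 m
Q_no_skin = production_rate(3.95e-14, 10.0, 5e-3, 1.2, 1.77e7, 1.5e7, 200.0, 0.1, 0.0)
Q_skin    = production_rate(3.95e-14, 10.0, 5e-3, 1.2, 1.77e7, 1.5e7, 200.0, 0.1, 2.0)
```

A positive skin factor enlarges the denominator and thus reduces the computed rate, consistent with the additional near-wellbore pressure drop introduced by S.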
At the inner boundary (r = r_w), the logarithmic term vanishes:
\ln(r_w / r_w) = 0
Then, by combining Equations (15), (17), and (18), the inner boundary conditions are as follows:
Q = \frac{2 \pi r_w h k}{\mu} \frac{\partial p}{\partial r} \bigg|_{r = r_w}, \qquad p_r - p_{wf} = \frac{Q B \mu}{2 \pi k h} S
The constant pressure boundary condition can be given as
p(t, r_e) = P_b
where p_r denotes the pressure at r = r_w, Q denotes the flow rate, r_e denotes the equivalent outer boundary radius of the reservoir, and P_b denotes the pressure at the constant pressure boundary.
These steps explicitly connect the physical assumptions (slightly compressible single-phase flow, Darcy velocity, and skin effect) to the final governing equation and the inner/outer boundary conditions in Equations (12)–(20), which are later embedded in the MetaPress loss formulation.

2.2. MetaPress Neural Network Framework

In this section, we detail the MetaPress architecture designed to solve the radial seepage PDE for reservoir flow. The key idea is to embed a physics-based meta function into an LSTM-based PINN to guide convergence toward physically consistent solutions.
Because the plain NN part struggles to converge and is prone to getting stuck in local minima, the Meta function is introduced in the hidden layers of the NN, guiding the network toward the differential equation while constraining the fitting part of the network, which improves convergence accuracy.
The solution process and network architecture of MetaPress are shown in Figure 1.
The input layer is three-dimensional, comprising r, t, and the Meta feature. Meta is a function of r, r_e, and r_w, expressed as follows:
f_\varphi\{\varphi(r)\} = e^{\varphi(r)} r_w, \qquad \varphi(r) = \frac{r}{r_e} - 1
Physically, the Meta function is constructed as a dimensionless descriptor of the radial configuration of the reservoir, Meta = f(r, rw, re), where rw and re denote the wellbore radius and the equivalent outer boundary radius, respectively. This function encodes the relative position of each point between the wellbore and the boundary and reflects the typical radial pressure shape implied by the seepage equation. In particular, Meta varies more strongly in the near-wellbore region, where pressure gradients are highest, and more smoothly in the far-field region, where pressure changes are mild. As a result, Meta acts as a physically informed feature-scaling term: it amplifies the network’s sensitivity to locations with strong pressure variation and attenuates sensitivity where the solution is slowly varying.
In the input, hidden, and output layers of the NN, physical constraints on the input elements are applied using the function f{ϕ(r)}, which is incorporated into the network architecture to guide the network during training and fitting.
Here, r represents the radial coordinate in the formation, t represents the time point, r_e denotes the equivalent outer boundary radius of the reservoir, and r_w denotes the actual radius of the wellbore.
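A minimal sketch of the Meta feature above, assuming the reconstructed form φ(r) = r/r_e − 1 with Meta = e^{φ(r)} r_w; the wellbore and boundary radii used in the example are illustrative values only.

```python
import math

def meta_feature(r, r_w, r_e):
    """Meta embedding (assumed form): phi(r) = r/r_e - 1,
    Meta = exp(phi(r)) * r_w, a radial configuration descriptor."""
    phi = r / r_e - 1.0
    return math.exp(phi) * r_w

# At the outer boundary r = r_e, phi = 0, so Meta reduces to r_w.
m_boundary = meta_feature(200.0, 0.1, 200.0)
m_wellbore = meta_feature(0.1, 0.1, 200.0)
```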
By embedding the Meta function into the Long Short-Term Memory (LSTM) network, a more general treatment of the input conditions is obtained, reducing the impact of irrelevant factors during training. It enhances the NN’s ability to capture information during training and solving, reducing the reliance on the restrictive conditional assumptions of traditional seepage equations. Rather than raw coordinate inputs, Meta provides a fuller expression of the input relationships in complex, high-dimensional, multi-phase seepage environments. For seepage problems under different conditions, the flexibility and adaptability of the Meta function make the neural network input more adaptable.
When Meta is injected into the LSTM gates, it does not simply provide an additional input channel; instead, it modulates the gating dynamics according to the physically meaningful radial structure. In the forget and input gates, Meta enters linearly before the sigmoid nonlinearity and therefore rescales the effective pre-activation as a function of radius. Near the wellbore, where Meta exhibits a larger magnitude and sharper variation, the gates become more responsive to changes in the hidden state, enabling the network to allocate more capacity to capturing rapid pressure depletion and strong gradients. In the far field, where Meta varies only slowly, the same mechanism encourages smoother temporal updates. This embedding introduces an inductive bias toward pressure fields that respect radial symmetry and physically realistic near-wellbore behavior, while simultaneously regularizing the gate activations and improving training stability.
Since the fluid flow process in reservoirs is a time series state, the fluid flow in the wellbore and the pressure in the reservoir vary over time. In solving time series problems, traditional neural networks often cannot account for the mutual effects of sequential data. Thus, LSTM is used in place of the original NN architecture. LSTM is specialized in handling sequential data and learning long-term dependencies, solving issues of gradient explosion or vanishing gradients during training. Using LSTM, the time step is incorporated into the training and simulation of formation pressure, making the results more consistent with actual pressure changes over time. In subsequent experiments, the comprehensive test loss of the PINN model trained with the DNN algorithm ranges from 0.03 to 0.04. In contrast, the comprehensive test loss of the MetaPress algorithm is below 0.02; with longer training times and larger amounts of training data, the test loss decreases further, approaching 0.01. MetaPress, coupled with LSTM, achieves lower training and test losses than the baselines. This leads to better fitting performance and predictions that are closer to the simulated formation data.
A special gating system is introduced when constructing Long Short-Term Memory (LSTM) networks, including the forget gate, input gate, and output gate. These gates manage the priority of data, deciding whether information should be remembered or forgotten, facilitating the network’s handling of temporal data and sequential dependencies [40]. The primary framework of the Meta-embedded LSTM is as follows:
Forget Gate: The first layer is the forget gate. It can decide what information will be thrown away from the cell state. It determines how much of the previous unit state Ct−1 is retained in the current unit state Ct. The meta function is injected into the gating system at the first level. The calculation is as follows:
f_t = \sigma\left(W_f \cdot [h_{t-1}, \mathrm{Meta}] + b_f\right), \qquad W_f \cdot [h_{t-1}, \mathrm{Meta}] = [W_{fh}, W_{fm}] \begin{bmatrix} h_{t-1} \\ \mathrm{Meta} \end{bmatrix} = W_{fh} h_{t-1} + W_{fm} \mathrm{Meta}
where ft is the forget gate’s threshold at time t. σ is the sigmoid activation function. Wf is the weight matrix of the forget gate. Meta denotes the input at time t. ht−1 is the output state from the previous time step t − 1. The weight matrix Wf is a combination of two different matrices: Wfh corresponds to the previous output term ht−1, and Wfm corresponds to the input term Meta.
Input Gate: The second layer is the input gate. It determines how much of the current input Meta and the previous output ht−1 should be preserved in the current cell state ct. The equation is as follows:
i_t = \sigma\left(W_i \cdot [h_{t-1}, \mathrm{Meta}] + b_i\right), \qquad \tilde{c}_t = \tanh\left(W_c \cdot [h_{t-1}, \mathrm{Meta}] + b_c\right), \qquad c_t = f_t \circ c_{t-1} + i_t \circ \tilde{c}_t
where i_t is the input gate threshold at time t, \tilde{c}_t is the intermediate (candidate) cell state at time t, c_{t−1} is the cell state at time t − 1, and c_t is the cell state at time t and the memory input to the next step. W_i and W_c are the weight matrices, and b_i and b_c are the bias terms. σ is the sigmoid activation function, tanh is the hyperbolic tangent activation function, and ∘ denotes element-wise multiplication.
Output Gate: The third layer is the output gate. It determines how much information of the current cell state ct is output as the LSTM output value ht. The calculation is as follows:
o_t = \sigma\left(W_o \cdot [h_{t-1}, \mathrm{Meta}] + b_o\right), \qquad h_t = o_t \circ \tanh(c_t)
where o_t is the output gate activation at time t, W_o is the weight matrix, and b_o is the bias term. tanh is the hyperbolic tangent activation function, h_t is the output value at time t, c_t is the cell state at time t, and ∘ denotes element-wise multiplication.
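The three gates above can be sketched in a single NumPy step. This is a minimal, self-contained illustration of how the Meta feature is concatenated with the previous hidden state and fed to every gate; the weight initialization, hidden size, and input values are placeholders, not the trained MetaPress parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def meta_lstm_step(h_prev, c_prev, meta, params):
    """One Meta-embedded LSTM cell step: the Meta feature replaces the
    raw input x_t in the forget, input, and output gates."""
    Wf, bf, Wi, bi, Wc, bc, Wo, bo = params
    z = np.concatenate([h_prev, meta])      # [h_{t-1}, Meta]
    f_t = sigmoid(Wf @ z + bf)              # forget gate
    i_t = sigmoid(Wi @ z + bi)              # input gate
    c_tilde = np.tanh(Wc @ z + bc)          # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde      # cell-state update
    o_t = sigmoid(Wo @ z + bo)              # output gate
    h_t = o_t * np.tanh(c_t)                # hidden state
    return h_t, c_t

# Tiny demo with the paper's 5-unit hidden size (random weights, for illustration)
rng = np.random.default_rng(0)
H, M = 5, 2                                 # hidden size, Meta-input size
params = tuple(p for _ in range(4)
               for p in (rng.normal(size=(H, H + M)) * 0.1, np.zeros(H)))
h, c = np.zeros(H), np.zeros(H)
h, c = meta_lstm_step(h, c, np.array([0.3, -0.5]), params)
```

Because h_t = o_t ∘ tanh(c_t) with o_t in (0, 1), every component of the hidden state is bounded in magnitude by 1, which is part of what stabilizes training.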
In the MetaPress neural network, the key parameters, the weights w and biases b, are optimized by minimizing the loss function. In this work, both MetaPress and the baseline PINN use the same compact LSTM architecture with 5 hidden layers and 5 units per layer. In this proof-of-concept study, this 5 × 5 configuration is adopted as a fixed baseline that is sufficiently expressive for the single-phase radial benchmark while keeping the computational cost moderate and comparable between the two models; it is not intended to represent a globally optimized network design. The input includes the Meta function and time t, while the output corresponds to pressure values as functions of x, y, and t. Two activation functions, tanh and sigmoid, are used in the network. The loss function is optimized using the Adam optimizer.

2.3. Loss Function Construction

The loss function is constructed for the MetaPress neural network by incorporating the control equation, initial conditions, and boundary conditions. The general form of the loss function for a PDE is expressed as
\mathrm{Loss} = \mathrm{MSE} = \mathrm{MSE}_b + \mathrm{MSE}_f + \mathrm{MSE}_u
where Loss denotes the total loss function. MSE is the mean square error, MSE_b is the residual of the boundary condition, MSE_f is the residual of the governing equation, and MSE_u is the residual between the exact solution and the network output.
The specific form of the loss function based on Equations (2) and (3) is as follows:
\mathrm{MSE}_b = \frac{1}{N_b} \sum_{i=1}^{N_b} \left| f_b\left(u^i, x_b^i\right) \right|^2, \qquad \mathrm{MSE}_f = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| f\left(t^i, x_f^i\right) \right|^2, \qquad \mathrm{MSE}_u = \frac{1}{N_u} \sum_{i=1}^{N_f} \left| u^i\left(t^i, x_f^i\right) - u^i \right|^2
where \{u^i\}_{i=1}^{N_f} are the labeled solution values of the equation, \{u^i(t^i, x_f^i)\}_{i=1}^{N_f} are the neural network outputs, N_f is the number of training samples for the control equation \{t^i, x_f^i\}_{i=1}^{N_f}, and N_b is the number of training samples on the boundary \{t^i, x_b^i\}_{i=1}^{N_b}. When the network parameters are optimized by minimizing the loss function, the MetaPress neural network output converges to the solution of the PDE.
Two types of constraints are generally used in constructing the boundary conditions: “hard boundary constraints” and “soft boundary constraints”. Soft boundary constraints require the sample error MSE_u to be computed at labeled samples, and therefore need labeled data and tend to slow the convergence of physics-informed training. Hard boundary constraints act directly through MSE_f and MSE_b, without involving MSE_u in the training of the NN, and provide more rigorous enforcement. For these complementary reasons, dual boundary conditions (hard and soft boundaries) are selected as constraints in this study [41].
Control functions, boundary conditions, and initial conditions are constructed based on Equations (7), (8), (12), (17) and (18).
The control equation function f (r, t) is given as follows:
f = \frac{\phi_{ref} (C_r + C_f)}{B_{ref}} \frac{\partial p}{\partial t} + q_{sc} - \nabla \cdot \left( \frac{K}{\mu B} \nabla p \right)
The inner boundary control function fin-b (p) is given as
f_{\mathrm{in\text{-}b}} = \frac{2 \pi r_w h k}{\mu} \frac{\partial p}{\partial r} - Q
The outer constant pressure boundary control function f_cons(t, r) is given as
f_{cons} = p(t, r_{\max}) - P_b
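As a sanity check on the governing residual f defined above, the sketch below verifies numerically that a steady-state logarithmic profile p(r) = p_wf + A ln(r/r_w) drives the residual to zero when ∂p/∂t = 0, q_sc = 0, and K/(μB) is constant, which is the expected analytical behavior for radial flow. All constants are illustrative assumptions.

```python
import math

K_MU_B = 1.0e-12                   # assumed constant K/(mu*B)
r_w, A, p_wf = 0.1, 1.0e5, 1.5e7   # illustrative constants

def p(r):
    """Steady logarithmic pressure profile p(r) = p_wf + A*ln(r/r_w)."""
    return p_wf + A * math.log(r / r_w)

def residual(r, dr=1e-3):
    """Residual f with dp/dt = 0 and q_sc = 0, so f = -div(K/(mu*B) grad p);
    the radial divergence (1/r) d/dr(r * K_MU_B * dp/dr) is approximated
    with central finite differences."""
    flux = lambda rr: rr * K_MU_B * (p(rr + dr) - p(rr - dr)) / (2.0 * dr)
    div = (flux(r + dr) - flux(r - dr)) / (2.0 * dr) / r
    return -div
```

Since r · dp/dr = A is constant for the logarithmic profile, the flux term is radius-independent and the divergence vanishes up to finite-difference error.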
Based on these control functions, the loss function is computed as follows:
\mathrm{MSE}_f = \frac{1}{N_f} \sum_{i=1}^{N_f} \left| f\left(t^i, r_f^i\right) \right|^2
\mathrm{MSE}_{cons} = \frac{1}{N_{cons}} \sum_{i=1}^{N_{cons}} \left| \max\left(0, \; p(t^i, r^i) - P_b\right) \right|^2
\mathrm{MSE}_b = \mathrm{MSE}_{in\text{-}b} + \mathrm{MSE}_{out\text{-}b} = \frac{1}{N_{in\text{-}b}} \sum_{i=1}^{N_{in\text{-}b}} \left| f_{in\text{-}b}\left(p(t^i, r_w), r^i\right) \right|^2 + \frac{1}{N_{out\text{-}b}} \sum_{i=1}^{N_{out\text{-}b}} \left| f_{out\text{-}b}\left(t^i, r_b^i\right) \right|^2
\mathrm{MSE}_u = \frac{1}{N_{in\text{-}b}} \sum_{i=1}^{N_{in\text{-}b}} \left| p\left(t^i, r_w^i\right) - p_{wf}^i \right|^2
where λ is the balancing coefficient of each loss term, N_f and N_cons denote the numbers of training samples that enter the residual f and the constant pressure constraint, N_in-b and N_out-b denote the numbers of training samples corresponding to the inner and outer boundary conditions, and p(t^i, r_w^i) denotes p(t, r_w) evaluated at t = t^i, i ∈ [1, N_cons].
In this work, these coefficients λ play the role of balancing hyperparameters that weight the contributions of the PDE residual, the inner and outer boundary conditions, and the data-fit term in the total loss. Following common practice in physics-informed neural networks, we choose these weights heuristically rather than from a formal optimization procedure. The values used in our experiments are selected to provide numerically stable training and to keep the different residual components within a comparable order of magnitude for the single-phase radial benchmark considered here. They should therefore be regarded as reasonable, problem-dependent settings for this proof-of-concept study, rather than as globally optimal choices.
The loss function, as shown in Figure 1, can be expressed as the following specific formula:
\mathrm{Loss} = \lambda_f \mathrm{MSE}_f + \lambda_{cons} \mathrm{MSE}_{cons} + \lambda_b \mathrm{MSE}_b + \lambda_u \mathrm{MSE}_u
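The weighted combination above can be sketched as follows. The residual arrays and the λ values are placeholders, since the paper selects the balancing weights heuristically; only the structure of the loss (mean-squared residuals plus the one-sided max(0, ·) constraint term) follows the formulation in this section.

```python
import numpy as np

def metapress_loss(res_f, res_cons, res_b, res_u, lam=(1.0, 1.0, 1.0, 1.0)):
    """Total loss: lambda_f*MSE_f + lambda_cons*MSE_cons + lambda_b*MSE_b
    + lambda_u*MSE_u. res_cons is penalized only where the predicted
    pressure exceeds the boundary pressure P_b, via max(0, .)."""
    lf, lc, lb, lu = lam
    mse = lambda r: float(np.mean(np.square(np.asarray(r))))
    mse_cons = float(np.mean(np.square(np.maximum(0.0, np.asarray(res_cons)))))
    return lf * mse(res_f) + lc * mse_cons + lb * mse(res_b) + lu * mse(res_u)
```

Pressures below P_b contribute nothing to the constraint term, so the penalty only pushes down violations of the constant pressure boundary.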

2.4. Geological Background and Network Parameter Settings

This experiment employs the radial flow single-phase oil seepage equation under constant pressure boundary conditions. All experiments were conducted as single-phase radial flow tests within a constant pressure circular domain featuring vertical wells, controlled by BHP. The experiments were simulated based on real data using the PEBI grid black oil model to obtain single-well radial flow production data. No field data was directly used for training or fitting.
The parameters for the seepage equation experiment are shown in Table 1. The experiment adopts constant pressure boundary conditions. The reservoir’s radius is 200 m, and the formation’s permeability is 40 mD. The experimental pressure is simulated at 1.77 × 10^7 Pa. The oil well BHP for days 0–2 was used as the training sample in the experimental study, and the data for the next 8 days were used as a validation set for the test results.
The experiments were programmed in PyCharm 2022.2.3 using Python 3.9. Data visualization was performed using MATLAB R2023a and Origin 2021 software, running on a desktop with an Intel Core i5-12400F 6-core processor at 2.5 GHz, an RTX 3080 Ti GPU, and 32 GB RAM running Windows 10.
To ensure reproducibility, we briefly summarize the sampling and normalization strategy used to construct the collocation sets. For each simulator output, we extract four point clouds corresponding to interior points X, inner boundary points X1, outer boundary points X2, and initial condition points X3, together with time stamps t0 and BHP trajectories (t, p_wf). Each set is then augmented to 500 samples by random resampling without replacement, which are used as collocation points for the PDE residual, inner and outer boundary conditions, and initial condition, respectively. Spatial coordinates are converted to the radial distance r = (x² + y²)^{1/2} and normalized to [−1, 1] via r* = 2r/(200√2) − 1, while time is similarly mapped to [−1, 1] by t* = 2t/(1.2 × 10^6) − 1. In addition, a meta feature h(r*) = exp[0.14(0.1 − r*)] is computed and concatenated with (r*, t*) at the LSTM input, and all pressure outputs are represented in pascals through an affine scaling of the network output. These normalized collocation sets define the training mesh for MetaPress and are fixed for all experiments.
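The normalization described above can be sketched directly; the 200√2 m half-diagonal and the 1.2 × 10^6 s time span are the constants stated in this section, and the function name is our own.

```python
import math

R_MAX = 200.0 * math.sqrt(2.0)  # domain half-diagonal, m
T_MAX = 1.2e6                   # simulated time span, s

def normalize_inputs(x, y, t):
    """Map (x, y, t) to the normalized LSTM inputs (r*, t*, h(r*))."""
    r = math.hypot(x, y)                     # radial distance (x^2 + y^2)^(1/2)
    r_star = 2.0 * r / R_MAX - 1.0           # r* in [-1, 1]
    t_star = 2.0 * t / T_MAX - 1.0           # t* in [-1, 1]
    meta = math.exp(0.14 * (0.1 - r_star))   # meta feature h(r*)
    return r_star, t_star, meta
```

Note that h(r*) decreases with r*, so the meta feature is largest near the wellbore, where the pressure gradients are strongest.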

3. Experimental Process and Results

3.1. Preliminary Model Comparison

The seepage equation is highly complex; without explicit solution constraints, the network is not guaranteed to satisfy it and may fail to converge. Two approaches are typically used to address this: (1) adding constraint conditions to ensure convergence, and (2) replacing the neural network type within the PINN framework with one that mitigates issues such as exploding and vanishing gradients. This study introduces MetaPress, which incorporates a Meta architecture to strengthen the physical constraints and adopts LSTM as the neural network architecture.
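To make the gate-level role of the Meta embedding concrete, the following minimal NumPy sketch shows a single LSTM cell whose input and forget gates receive an extra spatial meta feature. The weight shapes and the exact injection point are our illustrative assumptions, not the authors' published formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MetaLSTMCell:
    """Toy LSTM cell with a spatial meta feature added to the input (i)
    and forget (f) gates; a sketch of a MetaPress-style embedding."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        scale = 1.0 / np.sqrt(n_hidden)
        # One weight matrix per gate: input (i), forget (f), cell (g), output (o).
        self.W = {g: rng.normal(0.0, scale, (n_hidden, n_in + n_hidden))
                  for g in "ifgo"}
        self.b = {g: np.zeros(n_hidden) for g in "ifgo"}
        # Extra weights coupling the meta feature into the i and f gates only.
        self.w_meta = {g: rng.normal(0.0, scale, n_hidden) for g in "if"}

    def step(self, x, meta, h_prev, c_prev):
        z = np.concatenate([x, h_prev])
        i = sigmoid(self.W["i"] @ z + self.w_meta["i"] * meta + self.b["i"])
        f = sigmoid(self.W["f"] @ z + self.w_meta["f"] * meta + self.b["f"])
        g = np.tanh(self.W["g"] @ z + self.b["g"])
        o = sigmoid(self.W["o"] @ z + self.b["o"])
        c = f * c_prev + i * g          # cell state update
        h = o * np.tanh(c)              # hidden state
        return h, c

cell = MetaLSTMCell(n_in=2, n_hidden=5)            # inputs: (r*, t*)
h, c = cell.step(np.array([0.0, -1.0]), meta=1.0,
                 h_prev=np.zeros(5), c_prev=np.zeros(5))
```

Because the meta feature grows as r* decreases toward the wellbore, this injection biases the gates with spatial information in exactly the region where pressure gradients are steepest.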
As shown in Figure 2, the pressure distribution during a well’s constant boundary production process is illustrated. The pressure change is smaller closer to the pressure boundary, while it becomes larger near the wellbore. Due to the intense pressure variation near the wellbore, this area is prone to gradient explosion or vanishing during pressure prediction.
Before formal training, preliminary experiments are conducted to assess the feasibility and necessity of training, as shown in Figure 3.
During training, various errors (due to improper training operations or condition settings) can lead to a range of issues, causing the training process to fail. Figure 3a shows the prediction results for day 6 after normal training. Figure 3b indicates an error in the loss function handling, where the pressure control boundary term and output term in the loss function were set incorrectly or lost. Figure 3c represents the experimental prediction for day 7 after training. Figure 3d shows overfitting during training.
To prevent these errors, the sample point selection is first adjusted by using the radius r as the input sample coordinate instead of the raw x and y coordinates. The Meta function is then introduced during network training, and specific loss functions with balancing coefficients are established for the different loss terms. To address gradient explosion, LSTM is employed as the neural network architecture. Additionally, to prevent overfitting during training and improve generalization on unseen data, an L2 norm penalty term, the sum of the squares of the parameters in the weight vector, is added to the loss function. The training loss curves obtained with the final loss function are compared in Figure 4.
Figure 5 shows the test error over 1000 iterations of the MetaPress neural network. The red dashed line indicates how the training loss changes as the iteration count increases. The green solid line marks the loss threshold at which the training fit stabilizes, with the shaded area indicating that the fit remains unstable below 200 iterations. Once training reaches 200 steps, the error stabilizes between 0.01 and 0.02, demonstrating a small overall loss and high fitting accuracy.
Figure 6 demonstrates the effect of the learning rate on the test error during optimization with the Adam optimizer. When the initial learning rate exceeds 0.01, the test loss increases. When the initial learning rate exceeds 0.05, the test loss increases sharply. When the learning rate is 0.08, the neural network does not converge. However, the test loss reaches its minimum when the initial learning rate is 0.01.
By comparing experiments, it can be concluded that the MetaPress neural network exhibits better convergence during training. The Meta function as a constraint provides a more favorable and reasonable gradient descent direction, enabling the model to converge more quickly and obtain a physically constrained solution, making it a robust neural network structure. The model architecture and training hyperparameters are shown in Table 2.
It should be noted that the hidden layer depth and width in Table 2 are not the result of a systematic architecture search. Instead, they are chosen as a reasonably compact baseline to isolate the impact of the Meta embedding and physics-constrained loss. A more extensive sensitivity analysis with respect to network size and other hyperparameters is an important topic for future work, particularly for more complex multi-phase and heterogeneous reservoir settings.
The loss-balancing coefficients (λf, λcons, λin-b, λout-b, λu) used in this study were selected heuristically to stabilize training in the single-phase radial flow benchmark. These values were not optimized through a formal weighting strategy such as adaptive reweighting, gradient normalization, or dynamic loss-scaling. We acknowledge that the magnitude disparities (e.g., 10⁵ vs. 10⁻⁶) may influence the convergence landscape and potentially bias the contribution of different residuals during optimization. Because the goal of this work is a proof-of-concept demonstration, we limited the analysis to a fixed set of weights; however, a systematic study of weight sensitivity and its effect on convergence behavior will be conducted in future work to improve reproducibility and provide a more rigorous understanding of the role of loss balancing.
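A fixed-weight assembly of such a multi-condition loss can be sketched as below. The residual names, their ordering against the λ coefficients, and the value assumed for λin-b are placeholders, not the exact implementation:

```python
import numpy as np

def composite_loss(res_pde, res_in_b, res_out_b, res_init, res_data,
                   params, lambdas, l2_coeff=1e-4):
    """Weighted multi-condition PINN loss: mean-squared residuals of the
    PDE, inner/outer boundary conditions, initial condition, and observed
    data, plus an L2 penalty on the network weight parameters."""
    residuals = [res_pde, res_in_b, res_out_b, res_init, res_data]
    mse_terms = [np.mean(r**2) for r in residuals]
    weighted = sum(lam * m for lam, m in zip(lambdas, mse_terms))
    l2_penalty = l2_coeff * sum(np.sum(p**2) for p in params)
    return weighted + l2_penalty

# Fixed weights with the magnitudes reported in Table 2 (λin-b is assumed).
lambdas = [1e5, 1e-5, 1e-5, 1e-6, 1e-6]
```

With weights spanning eleven orders of magnitude, the PDE-residual term dominates the gradient unless the other residuals are proportionally larger, which is the balance sensitivity discussed above.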

3.2. Result

Based on the basic parameters of the reservoir seepage equation and the fundamental settings of the neural network, the seepage equation under constant pressure boundary conditions is trained. The experiment uses the 0–2 day pressure dataset for training and tests pressures from 3 to 10 days. In particular, to analyze the propagation of the pressure drop, results at 0.1 and 0.2 days (when the experiment started and when the pressure drop had just begun to propagate) are also shown and compared in the experimental results.
Figure 7 and Figure 8 show the real pressure distributions at different times before training. In Figure 7, after 0.1 day of production, a near-wellbore region of accelerated pressure depletion appears, which can be interpreted as the onset of a radial pressure–drawdown cone. As production time increases, the pressure decreases monotonically and the radius of influence of this drawdown cone expands from 0.1 to 8 days, while the bottom-hole flowing pressure continuously declines, which is consistent with single-phase radial flow theory.
Figure 8 resolves the 0–0.1 m region around the wellbore. The pressure exhibits a very steep radial gradient, with the minimum pressure at the well center (r = 0) and the gradient magnitude increasing as r decreases. This behavior reflects the singular nature of the radial seepage solution near the well and explains why this inner zone dominates the overall funnel-shaped drawdown geometry.
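This steep near-well behavior can be illustrated with the textbook steady-state single-phase radial Darcy profile, p(r) = pwf + (pe − pwf) ln(r/rw)/ln(re/rw), which is a simplification of the transient equation actually solved in this paper; the bottom-hole flowing pressure used here is hypothetical:

```python
import numpy as np

# Geometry from Table 1; p_wf is an assumed flowing pressure for illustration.
p_e = 1.77e7            # outer boundary pressure, Pa
p_wf = 1.50e7           # hypothetical bottom-hole flowing pressure, Pa
r_w, r_e = 0.1, 200.0   # well and reservoir radii, m

def p_steady(r):
    """Logarithmic steady-state radial pressure profile between r_w and r_e."""
    return p_wf + (p_e - p_wf) * np.log(r / r_w) / np.log(r_e / r_w)

r = np.array([0.1, 1.0, 10.0, 100.0, 200.0])
dp_dr = np.gradient(p_steady(r), r)   # radial pressure gradient, Pa/m
# dp_dr decays roughly as 1/r, so the gradient is orders of magnitude
# larger at the wellbore than near the outer boundary.
```

The logarithmic profile makes the funnel geometry explicit: the gradient magnitude scales as 1/r, concentrating almost the entire drawdown in the inner zone.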
Figure 9 shows the pressure distribution of the oil well on the 9th and 10th days predicted by the seepage equation (using the same color scale and coordinate limits for comparison and analysis). The pressure drop funnels predicted on the 9th and 10th days differ significantly: the pressure gradient increases markedly near the wellbore, and the region affected by the pressure gradient spreads wider over time.
Figure 10 compares the pressure distributions from the seepage equation and the MetaPress predictions using identical color scales and coordinate limits. From day 6 to day 10, the dominant pressure drop remains confined within approximately 100 m of the wellbore, while the region beyond 100 m stays close to the imposed boundary pressure, which is consistent with the constant pressure outer boundary condition. MetaPress reproduces both the correct boundary value and the spatial decay of the radial pressure gradient: discrepancies are confined to small variations within the near-wellbore zone, and the overall L2 error of the pressure field remains below 2%.
Figure 11 focuses on the interval from 0 to 0.1 m around the bottom-hole, again using the same color scale and coordinate limits. In this inner zone, the magnitude of pressure depletion and the radial gradient are significantly larger than in the 0–200 m window, which is consistent with the funnel-shaped drawdown cone observed at the reservoir scale. The MetaPress predictions accurately track both the local gradient and the minimum bottom-hole pressure, and the L2 loss in this most sensitive near-wellbore region remains below 2% relative to the simulator results.
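For reference, the relative L2 error quoted in these comparisons can be computed as follows; normalizing by the norm of the reference field is the usual convention, although the paper's exact definition may differ slightly:

```python
import numpy as np

def relative_l2_error(p_pred, p_true):
    """Relative L2 error between predicted and reference pressure fields."""
    return np.linalg.norm(p_pred - p_true) / np.linalg.norm(p_true)

# A uniform 1% overprediction gives a relative L2 error of 0.01,
# i.e., within the < 2% bound reported for MetaPress.
p_true = np.array([1.77e7, 1.70e7, 1.62e7, 1.55e7])   # illustrative values, Pa
p_pred = 1.01 * p_true
err = relative_l2_error(p_pred, p_true)
```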

3.3. Discussion

Based on the MetaPress neural network training, BHP predictions were made. Figure 12 shows the relative errors of BHP predictions using 2-day and 3-day BHP samples as training sets. When MetaPress is used to solve the seepage equation and predict BHP, the overall relative error is below 2%. Prediction accuracy increases significantly when physical information is imposed as boundary conditions together with a multi-conditional loss function. Training on the 3-day data yields a slightly lower relative error than training on the 2-day data, but the two test errors are very close: the 2-day test error is only 0.0016 higher than the maximum 3-day training error. In other words, the 2-day training achieves comparable results with less training data, so the 2-day training set is preferred to reduce the number of training steps.
To quantify the contribution of the meta-embedding, we conduct a controlled ablation in which we remove Meta from the LSTM gates while keeping all other settings identical (architecture, collocation sets, loss weights, optimizer, seeds, and data windows). We evaluate after training with 2-, 3-, and 4-day BHP and report the training error used elsewhere. As shown in Figure 13, MetaPress (with Meta) consistently yields lower error than the baseline PINN (without Meta) at all horizons: 0.015 vs. 0.032 (−53.1%), 0.0135 vs. 0.033 (−59.1%), and 0.011 vs. 0.042 (−73.8%). The comprehensive error of MetaPress ranges from 0.011 to 0.015. These results indicate that MetaPress achieves better training precision and prediction results under the studied single-phase, radial, constant pressure boundary, BHP-controlled setting. As training time and sample size increase, the error decreases significantly and prediction accuracy improves accordingly.
In this study, we used the L2 loss (mean squared error) to quantify model performance, a standard and widely used error measure. Although RMSE is also commonly used for model evaluation, it is simply the square root of the L2 loss and is therefore highly correlated with it; we thus consider the L2 loss sufficient to reflect the model's performance and do not report RMSE or other evaluation metrics separately.
In addition, although the study primarily reports L2 errors for pressure fields and BHP prediction, we recognize the value of complementary metrics such as RMSE, MAE, and relative error maps for a fuller model assessment. Future extensions of this work will include these metrics and spatially resolved error visualizations to strengthen quantitative comparisons between MetaPress and baseline PINNs.
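The direct relationship between these metrics can be seen in a small sketch (the pressure values are made up for illustration):

```python
import numpy as np

p_true = np.array([1.75e7, 1.68e7, 1.59e7])   # illustrative reference, Pa
p_pred = np.array([1.74e7, 1.69e7, 1.60e7])   # illustrative prediction, Pa

mse = np.mean((p_pred - p_true) ** 2)    # the L2 loss used in this study
rmse = np.sqrt(mse)                      # RMSE is its square root
mae = np.mean(np.abs(p_pred - p_true))   # a complementary absolute metric
```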
Figure 14 presents the pressure training and prediction results using MetaPress in this study. The pressure distribution within the 0–200 m range after 2 days of training, with predictions for 3 to 10 days, had a total error of less than 0.2%. Over time, the pressure drop funnel deepens and the affected area expands, with pressure changing more rapidly closer to the wellbore. The results indicate that BHP decreases significantly within 10 days, while the taper and overall shape of the pressure drop funnel remain essentially unchanged, because the pressure drop had already propagated to the 0.1 m range in the early stages of oil well production.

4. Conclusions

This paper presents a novel physics-constrained PDE solution method, MetaPress, for bottom-hole pressure prediction under a single-phase radial flow benchmark. MetaPress combines the Meta function as an additional input, LSTM as the neural network architecture, and multi-conditional physical information as a loss function. Within the single-well, single-phase, radial, constant boundary setting considered in this study, MetaPress is shown to be highly effective for solving the seepage equation and predicting reservoir pressure behavior. The main conclusions are as follows.
1. By embedding a physics-based meta function into an LSTM-based PINN, MetaPress effectively captures the coupled temporal–spatial characteristics of reservoir flow and stabilizes the training process.
2. The multi-condition loss, which combines governing equations, boundary/initial conditions, and L2 regularization, improves generalization and yields accurate BHP and pressure predictions with L2 errors below 2%.
3. In this study, realistic production parameters are incorporated into a PEBI grid black oil simulation model to obtain radial flow production data for a single well, which are subsequently used for MetaPress training and prediction. The results demonstrate that, under the studied single-phase, radial, constant pressure, BHP-controlled benchmark, MetaPress outperforms the traditional PINN in solving seepage problems. MetaPress achieves an L2 error of less than 2% in predicting BHP and reservoir pressure distributions, and as the training duration and dataset size increase, the comprehensive prediction error approaches 1.1%. This indicates high accuracy for BHP prediction in this specific benchmark setting.
In addition, MetaPress is applicable to single vertical oil well production processes under single-phase radial flow with constant outer boundary pressure and PEBI-simulated labels. Moreover, the LSTM architecture is kept fixed to a compact 5 × 5 configuration for both MetaPress and the baseline PINN, and no systematic sensitivity study with respect to network depth or width is performed in this paper. We do not provide a quantitative analysis of how MetaPress scales to multilayer, anisotropic, multi-phase, or transient boundary PDE systems; such scalability remains to be assessed. As discussed in the paper, real two-phase oil–water seepage is considerably more complex. Future work will extend MetaPress to two-phase oil–water flow, multilayer and anisotropic media, and more flexible boundary and well control conditions, and will systematically investigate the associated accuracy and computational cost trade-offs in these more complex PDE settings.
Although MetaPress demonstrates strong performance in the single-phase, homogeneous, radial flow benchmark considered in this study, this setting represents an idealized scenario. Real reservoirs typically exhibit heterogeneity, anisotropy, multilayer interactions, and multi-phase flow behavior, and the well control conditions may vary with time. The present architecture has not yet been tested under these more complex PDE regimes. It is therefore inappropriate to directly generalize the current results to field-scale production forecasting. Extending MetaPress to multi-phase flow, spatially variable properties, transient boundaries, and layered or fractured systems will require additional architectural adjustments, as the coupled nonlinear terms and steep spatial–temporal contrasts may challenge the stability and scalability of the current LSTM–Meta embedding. These directions constitute the main focus of our ongoing research.

Author Contributions

Conceptualization, L.Q. and Y.Y.; methodology, L.Q. and Y.Y.; software, L.Q. and Y.Y.; validation, L.Q. and Y.Y.; formal analysis, L.Q. and Y.Y.; data curation, L.Q.; writing—original draft preparation, L.Q. and Y.Y.; writing—review and editing, Y.S.; visualization, Y.C.; project administration, Y.C.; funding acquisition, Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the CNPC Innovation Found (2022DQ02-0202), the Shaanxi Provincial Natural Science Basic Research Program (2024JC-YBQN-0397), and the National Key Research and Development Program of China: Efficient Development Technology for Fractured-Porous Carbonate Gas Reservoirs (2025ZD1406406).

Data Availability Statement

The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflicts of interest.

Figure 1. MetaPress solution process network architecture.
Figure 2. Pressure distribution during the production process of a well with a constant boundary pressure.
Figure 3. The results of solving the seepage equation based on MetaPress. Due to improper training operations and constraint settings, various issues appear in the grid model. (a,c) show normal test results for the control group, while (b,d) show erroneous results due to mistakes during the training process or incorrect condition settings.
Figure 4. The comparison of training loss curves for different neural networks.
Figure 5. The test loss of the MetaPress neural network.
Figure 6. The test loss at different initial learning rates.
Figure 7. Real pressure distribution of well production before training.
Figure 8. Real pressure distribution of 0–0.1 m before training.
Figure 9. Comparison of pressure distribution predicted by seepage equation.
Figure 10. Comparison of real and predicted pressure distributions, where (a,b) share the same color scale and coordinate limits, and (c,d) share the same color scale and coordinate limits.
Figure 11. Comparison of BHP distribution for production and predicted values, where (a,b) share the same color scale and coordinate limits, and (c,d) share the same color scale and coordinate limits.
Figure 12. Predicting BHP errors with different training times.
Figure 13. Comprehensive prediction error based on MetaPress and PINN at different training durations.
Figure 14. Using MetaPress to predict 0–200 m formation pressure distributions for wells at 0–10 days (0.1–2 days for training results).
Table 1. Basic parameters of the seepage equation.
Parameter              Value                        Unit
Porosity               0.15                         /
Skin factor            1                            /
Well radius            0.1                          m
Reservoir thickness    10                           m
Permeability           40                           mD
Viscosity              0.001                        Pa·s
Initial pressure       1.77 × 10⁷                   Pa
Volume factor          1.05                         /
Well number            1                            /
Boundary and control   Constant pressure boundary   /
Fluid model            Black oil; single-phase      /
Reservoir radius       200                          m
Table 2. Model training architecture and basic hyperparameters.
Parameter           Value
Hidden layers       5
Neurons per layer   5
Activation          tanh/sigmoid
Optimizer           Adam
Learning rate       0.01
Epochs              340
Batch size          10
L2 penalty term     10⁻⁴
Random seed         random.seed(42), np.random.seed(42), torch.manual_seed(42)
λf                  10⁵
λcons               10⁻⁵
λout-b              10⁻⁶
λu                  10⁻⁶
