1. Introduction
Conventional inversion rests on theoretical foundations such as the convolution principle and geostatistics: seismic data are combined with well-logging data to establish a model and invert reservoir parameters [1,2,3]. However, under complex geological conditions and in thin-reservoir identification it suffers from insufficient accuracy and low computational efficiency, making it difficult to meet current high-precision reservoir inversion requirements [4,5,6]. Waveform-constrained artificial intelligence high-resolution reservoir inversion, by contrast, uses artificial intelligence algorithms to fully exploit seismic waveform information and integrate multiple data sources. With its high precision and strong noise resistance, it offers clear advantages under complex geological conditions and can accurately characterize reservoir properties [7,8,9].
Inversion based on the convolution principle assumes an idealized model: it neglects the loss and attenuation of seismic waves propagating through the reservoir and corresponds to a noise-free synthetic seismic record [10,11]. The basic principle can be expressed by the following formulas:

$$S = W * R \quad (1)$$

$$R = W^{-1} * S \quad (2)$$

In the equations, W represents the seismic wavelet, R represents the impulse response (reflection coefficient series), S represents the seismic record, and * denotes convolution. According to Formula (2), given an appropriate wavelet the reflection coefficient can be calculated, and the logging curve, i.e., the reservoir inversion result, can then be obtained [12,13]. However, in the actual inversion process, inter-well wavelets are usually replaced by a single fixed wavelet, which suffers from poor lateral adaptability and low vertical resolution. At the same time, lateral reservoir prediction based on training images and variogram functions is strongly ambiguous and struggles to characterize complex spatial structures [14,15,16]. Terrestrial sedimentary systems, moreover, exhibit strong heterogeneity and complex architectures, which compounds these difficulties (Figure 1).
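For illustration, the following minimal sketch implements the noise-free convolutional model of Formulas (1) and (2) with NumPy; the 30 Hz Ricker wavelet, the 2 ms sampling interval, and the sparse reflectivity series are illustrative assumptions rather than data from the study area.

```python
import numpy as np

def ricker(f, dt, length=0.128):
    """Zero-phase Ricker wavelet with dominant frequency f (Hz)."""
    t = np.arange(-length / 2, length / 2, dt)
    return (1.0 - 2.0 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)

dt = 0.002                               # 2 ms sampling interval (assumed)
r = np.zeros(500)                        # illustrative reflection-coefficient series R
r[[120, 180, 260, 400]] = [0.12, -0.08, 0.15, -0.10]
w = ricker(30.0, dt)                     # 30 Hz wavelet W (assumed)

# Formula (1): noise-free synthetic record S = W * R (linear convolution)
s = np.convolve(r, w)

# Formula (2): recover R from S given W via stabilized spectral division (deconvolution)
Wf = np.fft.rfft(w, n=len(s))
Sf = np.fft.rfft(s)
eps = 1e-3 * np.abs(Wf).max()            # "water level" to avoid division by ~0
r_est = np.fft.irfft(Sf * np.conj(Wf) / (np.abs(Wf) ** 2 + eps ** 2), n=len(s))[:len(r)]
```

Because the spectral division is stabilized rather than exact, the recovered reflectivity remains band-limited by the wavelet spectrum, which is precisely the resolution limitation that motivates the waveform-constrained approach discussed below.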
In response to the problems of conventional inversion mentioned above, waveform-constrained artificial intelligence high-resolution reservoir inversion technology provides a new solution for inversion and high-resolution imaging under complex geological conditions [17].
Seismic waveform-constrained artificial intelligence high-resolution reservoir inversion can fully utilize high-frequency seismic information. Under waveform-classification constraints, it directly establishes complex, nonlinear mapping relationships between seismic attributes and logging responses through artificial intelligence learning [18,19]. Artificial intelligence has natural advantages of robustness, parallelism, and adaptability in realizing arbitrarily complex mappings. It can effectively incorporate geological prior knowledge (such as sedimentary facies and waveform classification), so that the inversion results match the geological data better, further improving the accuracy of seismic reservoir prediction for thin sand layers [20,21].
This study extracts key characteristic parameters such as lithology, logging curves, and seismic waveforms, and constructs a big-data sample library through data integration and cleaning. Combined with deep learning algorithms, an artificial intelligence thin sand layer prediction model with seismic waveform constraints was constructed. The model performs feature extraction and pattern recognition and can deeply mine the key features, while the waveform constraint mechanism introduces seismic waveform features to improve prediction accuracy and reliability. The method, principle, and technical roadmap of this study are shown in Figure 2.
2. Field Background and Geological Setting
This study selected a block in the Daqing Oilfield as the research object. The target reservoir is a typical channel sand body, mainly composed of siltstone, with good physical-property continuity and clear sedimentary-sequence characteristics. The average single-layer thickness of the reservoir in the study area is about 2.5 m, with well-developed medium- to high-frequency sedimentary cycles, making it a typical setting for high-resolution seismic-geological fusion research.
Four wells are arranged in the study area, from north to south: W250-328, W262-332, W251-318, and W258-308. The distribution of well points covers the main sedimentary facies zones of the block, providing representative sampling of its geological features. Through fine seismic interpretation, two key seismic reflection horizons, H11 and H21, were identified in the study area. Combined with three seismic attribute volumes (amplitude, frequency, and phase; Figure 3), a multi-scale, multi-dimensional comprehensive dataset was constructed, providing the data foundation for fine characterization of reservoir architecture and extraction of deep geological information.
This study comprehensively utilized multi-parameter logging data for reservoir evaluation, mainly including key logging curves such as natural gamma ray (GR), mud content (Vsand), deep lateral resistivity (LLD), and spontaneous potential (SP). Based on reservoir fluid properties and production performance, the layers are divided into five categories: oil layer (class 0), poor oil layer (class 1), oil-water layer (class 2), dry layer (class 4), and mudstone (class 6).
The well profiles and logging responses shown in Figure 4 indicate that typical oil layers exhibit a distinct "one low, two highs" electrical signature: low natural gamma values (reflecting clean sandstone), high acoustic transit time (indicating good porosity), and high resistivity (indicating high oil saturation). In contrast, poor oil layers and oil-water layers show slightly higher natural gamma values and lower resistivity, while dry layers and mudstones show markedly high gamma and low resistivity responses. This logging response pattern provides a reliable basis for identifying reservoir fluids in the study area.
Analysis of the cross-well seismic profile shows that deposition in the study area is relatively stable, as indicated by continuous, parallel seismic events, and the lateral thickness variation of each reflection layer is small (Figure 5). Notably, clear offsets of seismic events and abrupt waveform changes are observed between wells W250-328 and W251-318, indicating the presence of fault structures in this area. This interpretation is consistent with the planar seismic attribute analysis (Figure 6), in which both the coherence and curvature attributes show obvious linear discontinuities, further confirming the existence of the fault system. The seismic response characteristics of the fault zones provide an important basis for analyzing structural evolution and evaluating reservoir connectivity in the study area.
3. Materials and Methods
3.1. Non-Destructive Time–Depth Conversion and Sequence Domain Modeling
By integrating geostatistical methods, deep learning algorithms, and seismic sedimentology theory, this study achieved high-precision conversion of time-domain seismic data to depth-domain data. A three-dimensional seismic sequence grid model based on sequence stratigraphy constraints was constructed, providing unified data and a structural framework for precise reservoir prediction. The key steps of this method are as follows:
(1) Constructing depth-domain seismic bodies based on time–depth relationships and variable step size sequence grids.
In time–depth conversion, the seismic velocity model maps seismic time data into depth space. The conventional approach is direct conversion based on an average velocity field, which is suitable for areas with simple geological structure and small velocity variation. Under complex conditions that demand high conversion accuracy, a fine layered model must first be established and combined with the average velocity model for layer-by-layer conversion to ensure accurate horizon matching. On the basis of high-resolution sequence boundaries, this study used a variable-step grid to sample the depth domain and map it back to the time domain, obtaining seismic volume data under stratified constraints [22].
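As a minimal sketch of the layered time–depth conversion and variable-step sampling described in step (1), the fragment below maps a time-domain trace onto a depth grid that is refined within an assumed target zone; the horizon times, interval velocities, and grid steps are hypothetical values, not parameters of the study area.

```python
import numpy as np

# Hypothetical two-way-time horizons (s) and interval velocities (m/s) for three layers
twt_tops = np.array([0.0, 0.80, 1.10, 1.45])     # layer-top times, including the surface
v_int    = np.array([1800.0, 2400.0, 3000.0])    # interval velocity of each layer

# Depth of each horizon obtained by accumulating interval velocities (one-way depth)
depth_tops = np.concatenate(([0.0], np.cumsum(np.diff(twt_tops) / 2.0 * v_int)))

def time_to_depth(twt):
    """Piecewise-linear time-to-depth mapping under the layered velocity model."""
    return np.interp(twt, twt_tops, depth_tops)

# Resample a time-domain trace onto a variable-step depth grid (finer in the target zone)
twt_axis = np.arange(0.0, 1.45, 0.002)           # 2 ms trace samples
trace    = np.random.randn(twt_axis.size)        # placeholder amplitudes
z_grid   = np.concatenate([np.arange(0, 900, 5.0),       # 5 m steps above the target
                           np.arange(900, 1400, 0.5)])    # 0.5 m steps in the target zone
depth_axis = time_to_depth(twt_axis)
trace_in_depth = np.interp(z_grid, depth_axis, trace)     # depth-domain seismic samples
```

The finer 0.5 m spacing in the target interval mirrors the variable-step sequence grid used later as the prediction framework.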
(2) Building a sequence grid model to achieve high consistency visualization between seismic events and stratigraphic sequences.
The sequence grid model takes the stratigraphic sequence as its core constraint, assigns specific geological attribute information to each grid cell, and uses a "sequence-constrained display algorithm" during visualization. When any sequence unit is displayed, the algorithm automatically correlates the event-tracking results in the time domain and then enhances the continuity of the same event through color labeling, transparency adjustment, and other means, highlighting the discontinuities at interfaces between different sequences. In heterogeneous sedimentary environments such as braided river deltas, this technique effectively displays the lateral distribution trend of coeval sand bodies in the depth domain, which makes it superior to traditional isochronous slicing, where changes in stratal dip cause event misalignment. Practical application shows that sequence-boundary recognition accuracy is significantly improved with this method: the horizon-matching error of the same event in the target reservoir interval is controlled within half a sampling interval, demonstrating good geological consistency and engineering operability [23].
3.2. Kriging Interpolation Algorithm Based on Global Optimization
This study adopted a Kriging interpolation method based on global optimization to improve the accuracy and geological consistency of the spatial interpolation of reservoir parameters. All well-point data were used jointly to construct the Kriging equation system, and the covariance function parameters were adjusted with a global optimization algorithm to obtain optimal unbiased estimation parameters. This method effectively suppresses local anomalies while ensuring strict consistency between the interpolation results and the well-point data, significantly improving the lateral continuity and physical plausibility of the reservoir parameters (as shown in Figure 7).
Kriging interpolation is a geostatistical method based on regionalized variable theory and is widely used in the spatial modeling of oil and gas reservoir parameters. Its basic principle is to use the spatial variability of known data points, described by the variogram, to characterize the spatial correlation between variables and thus achieve an unbiased, optimal linear estimate at unknown points. Hosseini et al. used the Kriging method for stratigraphic interpolation and combined it with generalized triangular prism modeling to construct a structurally reasonable three-dimensional stratigraphic model, verifying its applicability in complex geological settings [23].
Compared to traditional Kriging methods, introducing a global optimization algorithm to optimize the covariance function parameters can further improve the stability and accuracy of the interpolation model. Traditional methods belong to local optimization strategies, which tend to assign excessively high weights to some nodes and ignore the influence of surrounding points, resulting in abnormally high or low interpolation results near the well points. Global optimization can ensure that the covariance function parameters obtained reach the global optimum, thereby constructing a reservoir model with a more realistic spatial structure, smoother interpolation results, and smaller errors.
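A minimal sketch of this idea is given below: the nugget, sill, and range of an exponential variogram are fitted to the empirical semivariogram with a global optimizer (SciPy's differential evolution) rather than a local fit, and the resulting covariance model is used to build ordinary Kriging weights. The synthetic well data, the exponential model, and the parameter bounds are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial.distance import pdist, squareform

# Hypothetical well-point data: (x, y) locations and a reservoir property (e.g., porosity)
np.random.seed(0)
xy = np.random.uniform(0, 5000, size=(30, 2))
z  = 0.18 + 0.03 * np.sin(xy[:, 0] / 800.0) + 0.005 * np.random.randn(30)

def exp_variogram(h, nugget, sill, rng):
    """Exponential semivariogram model."""
    return nugget + sill * (1.0 - np.exp(-3.0 * h / rng))

# Empirical semivariogram computed from all data pairs
h_all = pdist(xy)
g_all = 0.5 * pdist(z[:, None], metric="sqeuclidean")
bins  = np.linspace(0, h_all.max(), 12)
idx   = np.digitize(h_all, bins)
h_emp = np.array([h_all[idx == k].mean() for k in range(1, 12) if np.any(idx == k)])
g_emp = np.array([g_all[idx == k].mean() for k in range(1, 12) if np.any(idx == k)])

# Global optimization of (nugget, sill, range) instead of a local fit
def misfit(p):
    return np.sum((exp_variogram(h_emp, *p) - g_emp) ** 2)

res = differential_evolution(misfit, bounds=[(0.0, 1e-3), (1e-5, 1e-2), (100.0, 10000.0)])
nugget, sill, rng = res.x

# Ordinary Kriging weights for one unsampled location x0 under the fitted covariance
x0 = np.array([2500.0, 2500.0])
C  = sill + nugget - exp_variogram(squareform(h_all), nugget, sill, rng)   # sample covariances
c0 = sill + nugget - exp_variogram(np.linalg.norm(xy - x0, axis=1), nugget, sill, rng)
A  = np.block([[C, np.ones((30, 1))], [np.ones((1, 30)), np.zeros((1, 1))]])  # unbiasedness row
w  = np.linalg.solve(A, np.concatenate([c0, [1.0]]))
z0 = w[:30] @ z                                                            # kriged estimate at x0
```

Because the optimizer searches the whole parameter space rather than a neighborhood of an initial guess, it is less prone to the locally biased weights described above.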
In practical implementation, Romero et al. proposed a genetic algorithm-based optimization method that simulates natural selection and genetic mechanisms, uses multiple chromosomes to characterize different reservoir parameters, and iterates on a fitness function to achieve a global search of the covariance function parameters [24]. Khademi combined finite-difference gradients (FDG) with Kriging interpolation to optimize well placement and improve geological modeling accuracy, and a parallel global optimization strategy based on the Kriging surrogate model, combined with design-domain reduction techniques, has been proposed to achieve efficient, high-precision parameter estimation [25].
In addition, when dealing with situations where both the main variable and multiple spatially correlated auxiliary variables are present, the Co-Kriging interpolation method can be introduced. This method fully utilizes the cross-covariance structure between multiple variables to significantly improve the accuracy of principal variable estimation, and is suitable for multi-source and multi-scale reservoir parameter fusion modeling scenarios.
The application of the Kriging interpolation method based on global optimization can comprehensively reflect the spatial variation characteristics of underground reservoirs, improve interpolation accuracy and geological consistency, and effectively avoid local overfitting. The linear combination of the initial and secondary variables used for collaborative Kriging estimation is as follows:

$$Z^{*}(x_0) = \sum_{i=1}^{n} \lambda_i Z(x_i) + \sum_{j=1}^{m} \delta_j Y(x_j)$$

where
$Z^{*}(x_0)$—the estimated value of the random variable Z at position $x_0$;
$Z(x_i)$—the n sample data of the initial variable;
$Y(x_j)$—the m sample data of the secondary variable;
$\lambda_i, \delta_j$—the collaborative Kriging weighting coefficients to be determined.
The estimation error can be represented by the following equation:

$$e = Z^{*}(x_0) - Z(x_0)$$

where
$Z^{*}(x_0)$—the estimated value of the random variable at position $x_0$;
$Z(x_0)$—the sampled value of the random variable at position $x_0$.
The equation system of ordinary collaborative Kriging estimation can be derived by combining the least-squares condition of Kriging estimation with the unbiasedness constraints, as follows:

$$\sum_{j=1}^{n} \lambda_j C_{ZZ}(x_i, x_j) + \sum_{k=1}^{m} \delta_k C_{ZY}(x_i, x_k) + \mu_1 = C_{ZZ}(x_i, x_0), \quad i = 1, \ldots, n$$

$$\sum_{j=1}^{n} \lambda_j C_{YZ}(x_l, x_j) + \sum_{k=1}^{m} \delta_k C_{YY}(x_l, x_k) + \mu_2 = C_{YZ}(x_l, x_0), \quad l = 1, \ldots, m$$

$$\sum_{j=1}^{n} \lambda_j = 1, \qquad \sum_{k=1}^{m} \delta_k = 0$$

where
$Z(x_j)$—the n sample data of the initial variable;
$Y(x_k)$—the m sample data of the secondary variable;
$\lambda_j, \delta_k$—the collaborative Kriging weighting coefficients;
$\mu_1, \mu_2$—the Lagrange multipliers;
$C$—the (cross-)covariance between the indicated variables and locations.
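A compact sketch of assembling and solving the ordinary collaborative Kriging system above is shown below; the covariance functions c_zz, c_yy, and c_zy (e.g., fitted as in the previous sketch) and the coordinate and sample arrays are assumed inputs, and the function name is hypothetical.

```python
import numpy as np

def cokriging_weights(xy_z, xy_y, x0, c_zz, c_yy, c_zy):
    """Solve the ordinary co-Kriging system for primary samples at xy_z and
    secondary samples at xy_y, estimating at location x0.

    c_zz, c_yy, c_zy: callables returning (cross-)covariance as a function of distance.
    Returns the primary weights lambda and the secondary weights delta.
    """
    n, m = len(xy_z), len(xy_y)
    d = lambda a, b: np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

    # Block covariance matrix between all samples (primary and secondary)
    K = np.block([[c_zz(d(xy_z, xy_z)), c_zy(d(xy_z, xy_y))],
                  [c_zy(d(xy_y, xy_z)), c_yy(d(xy_y, xy_y))]])

    # Two unbiasedness constraints: sum(lambda) = 1 and sum(delta) = 0
    ones_z = np.concatenate([np.ones(n), np.zeros(m)])
    ones_y = np.concatenate([np.zeros(n), np.ones(m)])
    A = np.block([[K, ones_z[:, None], ones_y[:, None]],
                  [ones_z[None, :], np.zeros((1, 2))],
                  [ones_y[None, :], np.zeros((1, 2))]])

    # Right-hand side: covariances between every sample and the target location x0
    b = np.concatenate([c_zz(np.linalg.norm(xy_z - x0, axis=1)),
                        c_zy(np.linalg.norm(xy_y - x0, axis=1)),
                        [1.0, 0.0]])

    w = np.linalg.solve(A, b)
    return w[:n], w[n:n + m]
```

The estimate at x0 is then lam @ z + delta @ y, with the two Lagrange rows enforcing the unbiasedness constraints of the equation system above.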
3.3. Kriging Interpolation Algorithm Based on Waveform Constraints
The Kriging interpolation algorithm based on waveform constraints is a high-precision inter-well prediction technique that integrates seismic waveform information with well-point measurement data. It can improve the prediction accuracy of the spatial distribution of reservoir parameters under sparse-well conditions. The basic principle is to introduce seismic waveform similarity as a soft constraint within the geostatistical framework of Kriging interpolation and to use the geological similarity reflected by seismic waveforms to assist attribute estimation in inter-well regions (Figure 8). This method breaks through the limitation of traditional Kriging interpolation, which relies only on hard data at well points, effectively compensating for its insufficient inter-well prediction ability.
The specific implementation process includes the following core steps:
(1) Data preparation and preprocessing: Well-logging data and 3D seismic data of the covered blocks are obtained, and the time–depth conversion and spatial registration of the data are completed.
(2) Waveform feature extraction: Waveform attributes that reflect geological features, such as dominant frequency, amplitude, phase, and instantaneous frequency, are extracted from the seismic records and normalized.
(3) Constructing a waveform similarity constraint matrix: Based on the similarity between well-point and inter-well seismic waveforms, similarity measures (such as the cross-correlation coefficient and the dynamic time warping distance) are defined and used to construct soft-constraint weight functions.
(4) Kriging modeling and prediction under waveform constraints: Waveform similarity constraints are introduced into the traditional Kriging equation system, the spatial covariance weights are adjusted accordingly, and the influence of well points on waveform-similar regions is enhanced, thereby improving the geological consistency of attribute estimation in inter-well regions.
Traditional Kriging interpolation often exhibits shortcomings such as overly smooth transitions between wells and a lack of geological constraint in areas with sparse wells or large well spacing. These limitations cause the estimated reservoir parameters in inter-well regions to appear as simple gradual transitions, making it difficult to characterize potential heterogeneity. The waveform-constrained interpolation method incorporates seismic waveform information as an auxiliary factor for spatial continuity. This approach assigns higher geological-consistency weights to areas with similar waveforms, thereby more accurately reflecting the spatial variability of inter-well reservoir properties. The method not only improves the spatial resolution of reservoir parameter interpolation but also enhances the geological plausibility of the model.
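One plausible way to realize such a soft constraint is sketched below: a normalized zero-lag cross-correlation between the seismic trace at the target location and the traces at the wells is blended into the Kriging weights and re-normalized. The blending parameter alpha and the simple linear combination are illustrative assumptions, not the exact formulation used in this study.

```python
import numpy as np

def waveform_similarity(trace_target, traces_wells):
    """Normalized zero-lag cross-correlation between the target-location trace
    and each well-location trace; negative correlations are clipped to zero."""
    t = (trace_target - trace_target.mean()) / (trace_target.std() + 1e-12)
    sims = []
    for tw in traces_wells:
        w = (tw - tw.mean()) / (tw.std() + 1e-12)
        sims.append(float(np.dot(t, w) / len(t)))
    return np.clip(np.array(sims), 0.0, 1.0)

def constrained_weights(kriging_w, similarity, alpha=0.5):
    """Blend ordinary Kriging weights with waveform-similarity weights.
    alpha (assumed parameter) controls the strength of the soft constraint."""
    sim_w = similarity / (similarity.sum() + 1e-12)
    w = (1.0 - alpha) * kriging_w + alpha * sim_w
    return w / w.sum()                 # re-normalize so the weights sum to one
```

Wells whose seismic waveforms resemble the waveform at the target location thus receive larger weights, which is the intended effect of step (4) above.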
3.4. Generation of Seismic Waveform Classification Volume
Seismic waveform classification technology is an important means of finely characterizing geological bodies by mining the lateral continuity, differences, and sedimentary response characteristics of seismic waveforms. The technology performs feature extraction and classification modeling on 3D seismic data, mapping the seismic responses of different waveform types into geologically meaningful spatial distributions and providing high-resolution geological constraints for reservoir prediction and seismic inversion (Figure 9).
Compared with conventional seismic attribute analysis, seismic waveform classification not only preserves reflection wave characteristics (e.g., amplitude and frequency) but also incorporates waveform morphology and evolutionary features. This integration makes the classification results better aligned with actual sedimentary processes. The technique effectively characterizes vertical variations in depositional environments and potential heterogeneity by revealing lateral changes in seismic waveforms. It demonstrates superior geological interpretability, particularly in complex structural zones or multi-phase superimposed depositional systems.
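As an illustrative sketch, the fragment below classifies waveforms by clustering horizon-flattened trace segments with K-means; K-means stands in here for whichever classification algorithm is used in practice, and the window length and number of classes are assumed parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

def classify_waveforms(seismic_cube, horizon, half_window=10, n_classes=8):
    """Assign a waveform class to every (inline, crossline) location.

    seismic_cube : ndarray (n_il, n_xl, n_t) amplitude volume
    horizon      : ndarray (n_il, n_xl) horizon pick in samples
    """
    n_il, n_xl, n_t = seismic_cube.shape
    segs = np.zeros((n_il * n_xl, 2 * half_window + 1))
    for k, (i, j) in enumerate(np.ndindex(n_il, n_xl)):
        t0 = int(horizon[i, j])
        lo, hi = max(t0 - half_window, 0), min(t0 + half_window + 1, n_t)
        seg = seismic_cube[i, j, lo:hi]
        segs[k, :hi - lo] = seg / (np.abs(seg).max() + 1e-12)   # amplitude-normalize
    labels = KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(segs)
    return labels.reshape(n_il, n_xl)
```

The resulting class map can then be resampled onto the sequence grid and supplied as the waveform-classification volume (S3) used as a constraint in the inversion below.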
Furthermore, the application of seismic waveform classification in seismic inversion enhances the geological plausibility of inversion models. By introducing classification results as soft constraints into the inversion workflow, this approach improves inversion stability while reinforcing geological consistency. Consequently, it achieves synergistic modeling between waveform characteristics and reservoir parameters.
In summary, seismic waveform classification technology overcomes the limitations of conventional methods that rely solely on single-attribute responses. It achieves deep integration of waveform morphology, frequency–phase characteristics, and sedimentary geological context, establishing itself as a critical tool for detailed reservoir characterization and intelligent inversion in complex reservoirs.
3.5. Artificial Neural Network Reservoir Inversion
Although the extraction of multiple seismic attributes is a common strategy to exploit information from seismic data, it invariably increases computational demands and introduces inherent drawbacks. The indiscriminate addition of attributes consumes substantial storage, prolongs processing time, and expends significant computational resources.
For specific reservoir prediction problems, most attributes among numerous options may be redundant and could even introduce interfering noise, consequently reducing prediction accuracy and reliability. Moreover, seismic attributes frequently exhibit high intercorrelation.
The artificial neural network (ANN) inversion technique integrates geological, geophysical, and other multi-dimensional data by leveraging the ANN's nonlinear mapping and adaptive learning capabilities. The core innovation of our implementation lies in the construction and training of a multi-dimensionally constrained ANN specifically designed for high-resolution reservoir prediction. To provide comprehensive constraints for the model, we designed an input feature vector that integrates seven types of information, formally expressed as W = F (S1, S2, S3, F, X, Y, Z). It includes the original (S1) and high-frequency (S2) seismic volumes, a waveform classification volume (S3) characterizing geological structures, a sedimentary facies model (F) serving as a strong geological constraint, and the spatial coordinates (X, Y, Z), which explicitly capture the spatial distribution trends and heterogeneity of the reservoir parameters.
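A minimal sketch of assembling this seven-component input vector on the sequence grid is given below; the function name is hypothetical, and it is assumed that all inputs have already been resampled onto the same grid.

```python
import numpy as np

def build_features(s1, s2, s3_class, facies, x, y, z):
    """Stack the seven constraint inputs into one feature matrix.

    All inputs are 1-D arrays sampled on the same sequence grid:
    s1, s2   -- original and high-frequency seismic amplitudes
    s3_class -- waveform-classification label at each sample
    facies   -- sedimentary-facies code at each sample
    x, y, z  -- spatial coordinates of each sample
    """
    return np.column_stack([s1, s2, s3_class, facies, x, y, z]).astype(np.float32)
```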
In the practical implementation, this study deployed a deeply optimized feedforward neural network architecture. The network consists of four hidden layers with neuron counts set to 128, 64, 32, and 16, respectively, after systematic hyperparameter tuning. This progressively tapered design aims to gradually refine and compress information while avoiding overfitting risks. Across all hidden layers, this study employed the Rectified Linear Unit (ReLU) activation function (ReLU (a) = max (0, a)) to ensure training stability and convergence speed. Network weights were initialized using the He method, optimized specifically for ReLU, while the output layer utilized a linear activation function to directly regress continuous reservoir parameter values.
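The following sketch reproduces this architecture with the Keras API (128/64/32/16 ReLU layers, He initialization, L2 weight decay of 1e-4, Dropout of 0.2, and a linear regression output); it is a schematic reimplementation based on the description in this section, not the original code of the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_ann(n_features):
    """Feedforward network with tapered hidden layers of 128/64/32/16 ReLU units,
    He initialization, L2 weight decay, Dropout, and a linear output neuron."""
    model = tf.keras.Sequential()
    model.add(tf.keras.Input(shape=(n_features,)))
    for units in (128, 64, 32, 16):
        model.add(layers.Dense(units, activation="relu",
                               kernel_initializer="he_normal",
                               kernel_regularizer=regularizers.l2(1e-4)))
        model.add(layers.Dropout(0.2))
    model.add(layers.Dense(1, activation="linear"))   # continuous reservoir parameter
    return model
```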
Xing proposed a particle swarm optimization-based neural network algorithm that achieved improved inversion results [26]. The present methodology involves data acquisition and processing, neural network construction, constraint application, and network training optimization. The training process follows a rigorous, systematic procedure whose core objective is preventing overfitting. Using all available well data, this study strictly partitioned it into training and validation/test sets at ratios of 80% and 20%, respectively. As shown in Figure 10, training was driven by the Adam optimizer with Mean Squared Error (MSE) as the loss function and an initial learning rate of 0.001. Gradient descent was performed using mini-batches of 32 samples. To enhance generalization, L2 regularization (weight decay factor of 0.0001) and Dropout layers (rate of 0.2) were applied after each hidden layer, complemented by an early stopping mechanism that halts training automatically if the validation loss shows no improvement for 50 consecutive epochs.
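Under the same assumptions, this training configuration can be sketched as follows; X and y denote the assembled feature matrix and target log values (assumed to come from the earlier sketches), and build_ann refers to the model definition sketched above.

```python
from sklearn.model_selection import train_test_split
from tensorflow.keras import callbacks, optimizers

# X, y: feature matrix (see build_features) and target log curve on the sequence grid (assumed)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = build_ann(n_features=X.shape[1])
model.compile(optimizer=optimizers.Adam(learning_rate=0.001), loss="mse")

early_stop = callbacks.EarlyStopping(monitor="val_loss", patience=50,
                                     restore_best_weights=True)

history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=1000, batch_size=32,          # mini-batches of 32 samples
                    callbacks=[early_stop], verbose=0)
```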
The ANN inversion approach fundamentally differs from conventional methods by operating directly on the pre-defined 0.5 m resolution grid framework established in Section 3.1. This enables the model to predict reservoir properties at a resolution that far exceeds the seismic tuning thickness, effectively addressing the scale gap between seismic data and logging measurements. The integration of multiple constraint types (waveform classification, sedimentary facies, spatial coordinates) ensures that the high-resolution predictions maintain geological consistency throughout the reservoir volume.
The method effectively overcomes the non-uniqueness inherent in single-data inversion, yielding more reliable reservoir parameter estimates. An AI prediction approach that incorporates seismic frequency components, waveform characteristics, and sedimentary facies information offers distinct advantages. The model's performance was rigorously quantified on a completely independent blind test set using Root Mean Square Error (RMSE), the Coefficient of Determination (R2), and Average Absolute Percentage Error (AAPE). Because it is not limited by seismic resolution, the approach enables non-interpolated, high-resolution inter-well reservoir prediction, effectively characterizes thin interbedded sand bodies, and provides a robust solution to the challenges of thin-interbed prediction.
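For reference, the three evaluation metrics named above can be computed as in the following sketch; the AAPE formula (mean absolute percentage error expressed in percent) is the standard definition and is stated here as an assumption about the exact form used.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Square Error."""
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def r2(y_true, y_pred):
    """Coefficient of Determination."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1.0 - ss_res / ss_tot)

def aape(y_true, y_pred, eps=1e-12):
    """Average Absolute Percentage Error, in percent."""
    return float(np.mean(np.abs((y_true - y_pred) / (np.abs(y_true) + eps))) * 100.0)
```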
Stochastic inversion combines well-logging and seismic data to broaden the frequency spectrum and therefore shows enhanced capability for identifying thin reservoirs. In contrast, conventional deterministic inversion retains a resolution equivalent to that of the original seismic data and operates mainly within the seismic dominant frequency band, so its vertical resolution, i.e., the ability to distinguish two adjacent geological bodies, remains constrained by the seismic frequency range.