Article

Research on Reservoir Identification of Gas Hydrates with Well Logging Data Based on Machine Learning in Marine Areas: A Case Study from IODP Expedition 311

1 Nanchang Key Laboratory of Intelligent Sensing Technology and Instruments for Geological Hazards, East China University of Technology, Nanchang 330013, China
2 Shaanxi Key Laboratory of Petroleum Accumulation Geology, Xi’an Shiyou University, Xi’an 710065, China
3 Hubei Key Laboratory of Intelligent Geo-Information Processing, China University of Geosciences, Wuhan 430078, China
4 Key Laboratory of Gas Hydrate, Guangzhou Institute of Energy Conversion, Chinese Academy of Sciences, Guangzhou 510640, China
5 School of Geophysics and Information Technology, China University of Geosciences, Beijing 100083, China
* Authors to whom correspondence should be addressed.
J. Mar. Sci. Eng. 2025, 13(7), 1208; https://doi.org/10.3390/jmse13071208
Submission received: 3 June 2025 / Revised: 18 June 2025 / Accepted: 20 June 2025 / Published: 21 June 2025

Abstract

Natural gas hydrates, as an efficient and clean energy source, are regarded as a significant pillar of the future energy sector, and their resource quantification and development have a profound impact on the transformation of the global energy structure. However, accurately identifying gas hydrate reservoirs (GHRs) remains an active research topic. This study explores a well-logging-based identification method for marine GHRs using machine learning (ML), drawing on the logging data of Integrated Ocean Drilling Program (IODP) Expedition 311. Six ML methods are selected: Gaussian process classification (GPC), support vector machine (SVM), multilayer perceptron (MLP), random forest (RF), extreme gradient boosting (XGBoost), and logistic regression (LR). The internal relationship between the logging data and the hydrate reservoirs is analyzed with these six algorithms. The results show that the constructed ML models perform well in gas hydrate reservoir identification. Among them, RF has the highest accuracy, precision, recall, and harmonic mean of precision and recall (F1 score), all of which are above 0.90. With an area under the curve (AUC) close to 1 for RF, ML technology is confirmed to be effective in this area. The research shows that ML provides an alternative method for quickly and efficiently identifying GHRs from well logging data and also offers a scientific foundation and technical support for the future prospecting and mining of natural gas hydrates.

1. Introduction

Natural gas hydrate is a crystalline compound formed by water and gas molecules under low temperature and high pressure. It has an ice-like appearance and is predominantly found within the sedimentary deposits of permafrost regions and along continental margins [1,2,3]. It is a new type of energy resource with vast reserves and widespread distribution. Owing to its distinctive physical and chemical attributes, it has emerged as a focal point for the international energy community and the geosciences in recent years [4,5,6,7,8,9,10]. The precise identification of gas hydrate reservoirs (GHRs) is of great importance, as it is a crucial prerequisite for both the assessment and the exploitation of hydrate reserves. The interplay among natural gas hydrates, the surrounding sediments, and pore fluids is intricate, which often makes the identification of GHRs more challenging [11,12,13]. To develop natural gas hydrate resources in a reasonable and feasible manner in the future, methods that can effectively identify GHRs are needed.
Over the past few decades, many experts have proposed a variety of natural gas hydrate identification methods through the comprehensive use of geology, geochemistry, geophysics, and other multidisciplinary knowledge and techniques. In the early days, scientists mainly obtained core samples through drilling and judged the existence of hydrate by naked-eye observation or by low-temperature anomalies recorded with an infrared thermal imager [14]. As technology advanced, research findings indicated a decline in the chlorine concentration or salinity of pore water within natural gas hydrate deposits; it is therefore viable to identify natural gas hydrate by analyzing the chloride ion anomaly in pore water [15]. Although these identification methods are very accurate, drilling and coring are inefficient and costly [16,17]. In addition, because hydrate decomposes easily, it is difficult to collect complete hydrate samples. Geophysical methods are relatively cost-effective and suitable for preliminary large-scale exploration. Among the many geophysical methods, seismic surveying and well logging are considered the two most effective ways to identify gas hydrates. Within seismic exploration, bottom-simulating reflectors (BSRs) are regarded as a crucial geophysical indicator for confirming the presence of marine gas hydrates, but natural gas hydrate and BSRs are not in one-to-one correspondence [18]. Geophysical logging techniques offer the advantage of directly, continuously, and economically assessing the attributes of GHRs in situ. This is of substantial importance for the qualitative characterization of GHRs as well as the quantitative computation of reservoir porosity and hydrate saturation. Tian and Liu [19] put forward an innovative attribute that integrates velocity and density to identify hydrates, which achieved notable success in the Dongsha area of the South China Sea and on the Hydrate Ridge along the Oregon continental margin. Wu et al. [20] proposed a gas hydrate reservoir identification method based on a petrophysical model and sensitive elastic parameters. The method was applied to the Mallik permafrost region in Canada and the Shenhu sea area of the South China Sea, verifying its effectiveness in GHR recognition. These methods are usually based on a rock physics volume model. The multiple micro-scale occurrence modes of natural gas hydrate complicate the micro-response mechanism of GHR logging, with the result that many theories and methods of conventional reservoir logging evaluation cannot be fully applied to hydrate exploration. Therefore, these model-driven hydrate reservoir identification methods suffer from difficult identification, low accuracy, and low processing efficiency. At the same time, they are also vulnerable to human factors and complex geological environments.
With the continuous progress of technology, artificial intelligence has penetrated various fields and had a positive and far-reaching impact. As a branch of artificial intelligence, machine learning (ML) is widely used in oil and gas reservoir identification [21,22,23,24], fluid identification [25,26], lithology identification [27,28,29,30,31,32], and other fields, and it has yielded favorable outcomes. This approach can be used in the area of natural gas hydrates as well. Lee et al. [33] proposed a K-means clustering method that used a variety of attributes to identify the distribution of potential GHRs. Zhu et al. [34] proposed an effective classification of GHRs based on K-means clustering and the AdaBoost method. Tian et al. [35] used seven ML algorithms to identify hydrates on the Oregon Hydrate Ridge from P-wave and S-wave velocities. In summary, there are relatively few studies on hydrate reservoir identification using multiple conventional logging curves with ML methods, and the existing identification methods still have shortcomings. Further research is evidently required to develop high-precision hydrate reservoir identification methods.
In this study, well logging data from four boreholes located within the accretionary prism of the Cascadia subduction zone will be utilized to select well logging curves that exhibit heightened sensitivity to gas hydrates. Subsequently, six ML algorithms, namely support vector machine (SVM), Gaussian process classification (GPC), multilayer perceptron (MLP), random forest (RF), extreme gradient boosting (XGBoost), and logistic regression (LR), will be employed to identify GHRs within the study area.

2. Geological Setting

The specific objective of Expedition 311 of the Integrated Ocean Drilling Program (IODP) is to study the enrichment and formation mechanism of natural gas hydrates in accretionary complexes [36]. The study area is located in the accretionary prism of the Cascadia subduction zone. Its tectonic setting is the accretionary wedge formed by the subduction of the Juan de Fuca plate beneath the North American plate, with the Juan de Fuca plate subducting at a rate of ~45 mm/yr [37,38]. Tectonic plate subduction conveys considerable sediment volumes from deep-sea basins to the outermost edge of the subduction–accretion prism, where the sediment gradually builds up. Favorable temperature and pressure conditions, coupled with an abundant supply of hydrocarbons, enable this region to host natural gas hydrates, making it a quintessential example of an active continental margin hydrate research area.
The IODP Expedition 311 was conducted from September to October 2005. Throughout the entire voyage, five stations were established at the edge of the Cascadia subduction zone, namely sites U1325, U1326, U1327, U1328, and U1329. As depicted in Figure 1, the five stations are placed in the front, middle, and back parts of the accretionary wedge, representing the different stages of gas hydrate evolution in the context of active continental margins [39]. The location information (longitude and latitude) of each borehole is shown in Table 1. Among them, the depths of U1326A, U1327A, and U1328A are 300 m below the seafloor (mbsf), while the depths of U1325A and U1329A are 350 mbsf and 220 mbsf, respectively [39].

3. Methodology

In this study, six algorithms are used: GPC, SVM, MLP, RF, XGBoost, and LR. For model training and testing, we used the logging data from boreholes in the study area together with labels identifying the gas hydrate reservoir intervals. The underlying theoretical principles of these algorithms and the corresponding model evaluation metrics are concisely summarized below.

3.1. Algorithm Introduction

3.1.1. Gaussian Process Classification

As an ML paradigm grounded in the Gaussian process, GPC is specifically formulated to tackle classification problems. A Gaussian process is a stochastic process in which each point x corresponds to a random variable f(x), and the joint distribution of any finite set of these variables is Gaussian, defined via the mean and covariance functions [40]. In GPC, since the target value y of the classification problem is discrete, y|x is no longer assumed to obey a Gaussian distribution; another suitable distribution, such as the Bernoulli distribution, is selected instead. To solve this problem, a latent function f is introduced and given a Gaussian prior. The observed data (x, y) are then used to infer the latent function f, whose output is passed through a response function (such as the sigmoid or probit function) and projected onto the [0, 1] range to yield the classification probability [41]. The implementation of GPC includes selecting an appropriate kernel function (such as the RBF kernel) to define the covariance function of the Gaussian process, using the training dataset (x, y) to obtain the posterior distribution of the latent function f, and finally predicting the classification label for a new input x*.
GPC performs well in dealing with nonlinear classification problems and has a strong generalization ability. However, due to its high computational complexity, approximation methods (such as Laplace approximation, expected propagation, etc.) are often used to reduce the computational complexity [42].
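As a concrete illustration of the workflow just described, the following minimal Python sketch fits a Gaussian process classifier with an RBF kernel using scikit-learn; the feature matrix and labels are synthetic stand-ins for the logging curves and hydrate labels, not the study's dataset.

```python
# Minimal GPC sketch with scikit-learn (synthetic, illustrative data).
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # 200 samples x 5 log curves
y = (X[:, 4] + 0.5 * X[:, 3] > 0).astype(int)   # toy label: 1 = hydrate, 0 = no hydrate

# The RBF kernel defines the covariance function; the latent function f is
# mapped to [0, 1] class probabilities internally (Laplace approximation).
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0), random_state=0)
gpc.fit(X, y)
print(gpc.predict_proba(X[:5]))                 # posterior class probabilities
```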

3.1.2. Support Vector Machine

SVM is a potent instrument within the domain of supervised learning, showing remarkable proficiency in both classification and regression tasks [43]. The core principle involves identifying the hyperplane that maximizes the margin separating distinct categories of sample points, thereby enabling precise classification. For linearly separable data, SVM constructs a decision boundary directly; for linearly non-separable data, the problem is solved by mapping the data into a high-dimensional feature space via a kernel function [44]. In addition, SVM introduces the concept of a soft margin to tolerate some classification errors in the presence of noise and outliers, with the penalty coefficient C controlling the penalty intensity. SVM demonstrates robust performance on high-dimensional and noisy datasets, making it highly adaptable to diverse classification tasks. It also enables nonlinear classification via kernel-induced feature mapping, extending its applicability to complex decision boundaries. Its training process involves solving a convex quadratic programming problem, and its performance is highly dependent on parameter selection, which is often optimized by grid search, cross-validation, and other methods.
With regard to nonlinear functions, the SVM algorithm unfolds its general operational procedure as detailed below:
(I) First select the kernel function: K(xi, xj).
(II) In the feature space, we construct the following optimization problem to find the optimal separation hyperplane:
$\min_{\alpha} \; \frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_i \alpha_j y_i y_j K(x_i, x_j) - \sum_{i=1}^{N} \alpha_i .$
(III) Constraints:
$\sum_{i=1}^{N} \alpha_i y_i = 0, \quad 0 \le \alpha_i \le C, \; i = 1, 2, \ldots, N ,$
where αi is the Lagrange multiplier, C is the regularization parameter controlling the penalty for classification errors, and yi ∈ {+1, −1} is the label of the data point xi.
(IV) For the purpose of resolving the above-described constrained optimization problem, the sequential minimal optimization (SMO) method, along with other convex optimization algorithms, is typically employed. After solving, we obtain the optimal Lagrange multiplier vector:
$\alpha^{*} = (\alpha_1^{*}, \alpha_2^{*}, \ldots, \alpha_N^{*})^{T} .$
(V) Form the support-vector term of the decision function:
$\sum_{i=1}^{N} \alpha_i^{*} y_i K(x, x_i) .$
(VI) Calculate the bias term by solving the following equation for a support vector xs; in practice, b* is usually taken as the average over all support vectors:
$y_s \left( \sum_{i=1}^{N} \alpha_i^{*} y_i K(x_s, x_i) + b^{*} \right) = 1 .$
(VII) Finally, we build the decision function for classification:
$f(x) = \operatorname{sign} \left( \sum_{i=1}^{N} \alpha_i^{*} y_i K(x, x_i) + b^{*} \right) .$
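The dual problem and decision function above are solved in practice by off-the-shelf SMO-based solvers. The sketch below uses scikit-learn's SVC with an RBF kernel on synthetic data; the C = 10 and γ = 1 values echo the tuned parameters reported later in Section 4.3.2, while the data themselves are illustrative.

```python
# Minimal SVM sketch with scikit-learn's SMO-based SVC (synthetic data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))                       # toy feature matrix (5 log curves)
y = np.where(X[:, 0] ** 2 + X[:, 1] > 1, 1, -1)     # labels in {+1, -1}, nonlinear

# kernel='rbf' gives K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2);
# C is the regularization parameter from the constraints above.
clf = SVC(kernel="rbf", C=10, gamma=1)
clf.fit(X, y)

# The decision function is sign(sum_i alpha_i* y_i K(x, x_i) + b*);
# support vectors are the points with nonzero alpha_i*.
print(clf.n_support_)                 # number of support vectors per class
print(clf.decision_function(X[:3]))   # signed margin values
print(clf.predict(X[:3]))             # predicted labels
```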

3.1.3. Multilayer Perceptron

MLP serves as the foundational architecture within the realm of deep learning, encompassing an input layer, one or more hidden layers, and an output layer. The input layer assumes the responsibility of capturing raw data information. The hidden layer is dedicated to conducting feature extraction and applying nonlinear mapping to the data, and the output layer is employed to yield the final prediction results [45]. The operations of linear transformation and activation function processing are performed on the data during forward propagation. Operations are performed step-by-step through the layers, leading to the data being conveyed to the output layer [46]. The loss function is employed to gauge the discrepancy between the anticipated outcomes and the true values in the dataset. Consequently, the choice of the loss function holds significant importance. Following this, the gradient is calculated through the backpropagation mechanism, and subsequently, the model’s parameters are adjusted utilizing the gradient descent approach to attain model optimization. MLP possesses a powerful ability to represent complex relationships, making it appropriate for nonlinear issues and high-dimensional datasets. Nevertheless, it entails a substantial training time and has high requirements for data preprocessing.
Figure 2 shows an MLP with a single hidden layer, where f(x) represents the activation function, a nonlinear function. Whether a neuron is activated is determined by computing the weighted sum of its inputs plus a bias; common activation functions include the Sigmoid, Tanh, and ReLU.
In a multilayer perceptron, every neural network layer comprises numerous neurons. The architectural details of an individual neuron are presented in Figure 3, which encompasses four components:
(I) Input: The neuron receives multiple input signals from the previous layer or external sources. These input signals are usually weighted values; that is, each input value is multiplied by a corresponding weight, and its vector expression is as follows:
$X = [x_1, x_2, x_3, \ldots, x_n]^{T} .$
(II) Weighted summation: w1, w2, …, wn is the weight of neurons, and its vector expression is as follows:
$W = [w_1, w_2, w_3, \ldots, w_n] .$
The neuron multiplies all input signals by their corresponding weights and adds these products to obtain a weighted sum. However, in some cases, especially in neural networks, the formula of weighted summation may be extended to include a constant term b; that is, the bias term. The relevant formula is as follows:
$z = \sum_{i=1}^{n} w_i x_i + b .$
(III) Activation Function: A nonlinear activation function is employed to transform the weighted sum, thereby resulting in the output of the neuron.
(IV) Output: The value produced by the nonlinear activation function constitutes the neuron's output, given by the following:
$y = \sigma(z) = \sigma(W X + b) .$
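The per-neuron computation above can be made concrete with a few lines of NumPy. The following sketch runs a forward pass through a one-hidden-layer MLP with a sigmoid activation; all weights and inputs are arbitrary illustrations.

```python
# Forward pass of a one-hidden-layer MLP in NumPy (illustrative values).
import numpy as np

def sigmoid(z):
    # Sigmoid activation: maps the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])         # input vector X = [x1, x2, x3]^T
W1 = np.array([[0.2, -0.4, 0.1],       # hidden-layer weights (2 neurons x 3 inputs)
               [0.7, 0.3, -0.5]])
b1 = np.array([0.1, -0.2])             # hidden-layer bias terms

h = sigmoid(W1 @ x + b1)               # z = sum_i w_i x_i + b, then y = sigma(z)

W2 = np.array([[0.6, -0.8]])           # output-layer weights
b2 = np.array([0.05])
y_hat = sigmoid(W2 @ h + b2)           # network output in (0, 1)
print(h, y_hat)
```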

3.1.4. Logistic Regression

In the statistical learning methodology system, LR plays a crucial role and has been widely applied in various classification tasks, especially in the field of binary classification, demonstrating excellent applicability [47]. Grounded in linear models, LR employs the Sigmoid function to project input features onto a probability scale spanning from 0 to 1. This allows for a quantitative assessment of the probability that samples fall into positive classes. In particular, the Sigmoid function is employed to map the output generated by a linear model onto a probabilistic scale, and the decision function sets a threshold (such as 0.5) based on this probability value to achieve sample classification judgment. In LR, the log-likelihood loss serves to assess the deviation between the forecasted outcomes and the actual labels, thereby enabling the evaluation of prediction accuracy. When it comes to the pursuit of optimal parameters for LR, gradient descent stands as a broadly utilized optimization algorithm. It computes the gradient of the loss function relative to the parameters and iteratively modifies the parameters in the direction opposite to the gradient, thus progressively attaining the minimization of the loss function.
LR is constructed upon a linear modeling framework. Within this framework, the model’s predictions of the output are achieved through a linear combination of input features. To be more specific, linear models can be characterized by the following formula:
$z = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_n x_n ,$
where β0 is the intercept term, β1, β2, …, βn are the coefficients associated with the features, and x1, x2, …, xn are the input features.
The Sigmoid function converts the output z of the linear model into the probability value p, and the formula is as follows:
$p = \dfrac{1}{1 + e^{-z}} ,$
where the probability value p represents the possibility that the sample belongs to the positive class. LR uses a decision function to convert the probability value p into a classification result. Generally, when p ≥ 0.5, the sample is classified as positive; when p < 0.5, the sample is classified as negative. This threshold (0.5) can be adjusted according to actual demand.
In the LR model system, logarithmic likelihood loss is used to accurately measure the degree of difference between predicted probabilities and true labels. For the common and important application scenario of binary classification, logarithmic likelihood loss has a specific mathematical expression, and its formula is presented as follows:
$L = -\dfrac{1}{m} \sum_{i=1}^{m} \left[ y_i \log(p_i) + (1 - y_i) \log(1 - p_i) \right] ,$
where m denotes the quantity of samples, yi represents the actual label (either 0 or 1) of the i-th sample, and pi signifies the predicted probability for the i-th sample.
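A short NumPy sketch ties these pieces together, computing z, the sigmoid probability p, the thresholded class label, and the log-likelihood loss L on a toy set of coefficients and samples (all values are illustrative).

```python
# Logistic regression pieces in NumPy: linear term, sigmoid, threshold, loss.
import numpy as np

beta0 = -0.3
beta = np.array([0.8, -0.5, 1.2])          # beta_1 ... beta_n
X = np.array([[0.2, 1.0, -0.3],
              [1.5, -0.7, 0.4],
              [-0.6, 0.3, 0.9]])
y = np.array([0, 1, 1])                    # true labels

z = beta0 + X @ beta                       # z = beta_0 + sum_j beta_j x_j
p = 1.0 / (1.0 + np.exp(-z))               # sigmoid maps z to probability p
y_pred = (p >= 0.5).astype(int)            # decision threshold at 0.5

# L = -(1/m) * sum_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ]
L = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
print(p, y_pred, L)
```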

3.1.5. Random Forest

RF is an ensemble learning method composed of multiple decision trees. Its calculation combines the prediction results of multiple decision trees to improve the accuracy and stability of the model.
RF uses bootstrap sampling to randomly draw multiple sample subsets D1, D2, …, Dn from the original training dataset D with replacement, where n is the number of decision trees. Each sample subset Di (i = 1, 2, …, n) contains the same number of samples as the original dataset, but the samples in Di may be duplicated because of sampling with replacement. Each sample subset Di is used to train a decision tree Ti independently. In constructing each decision tree Ti, the RF algorithm does not use all features to split nodes. Let the set of all available features be F; the RF algorithm randomly selects a feature subset Fi ⊆ F and then selects the best feature within it to split the node. This random feature selection further increases the diversity of the model, which helps to reduce overfitting and improves the generalization ability of the model. When a new sample x is predicted, each decision tree Ti gives a prediction result yi.
For the classification problem, the majority voting method is used to determine the final category y. If C is the set of category labels, the final category y can be determined by the following formula:
$y = \arg\max_{c \in C} \sum_{i=1}^{n} I(y_i = c) ,$
where I(·) is the indicator function, equal to 1 when yi = c and 0 otherwise.
The RF algorithm has high accuracy and robustness, reducing error by integrating the prediction results of multiple decision trees. It has strong resistance to overfitting, using bootstrap sampling and random feature selection to increase model diversity. It can process high-dimensional data and automatically select important features.
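The bootstrap-sampling and majority-voting scheme described above is implemented by scikit-learn's RandomForestClassifier, sketched below on synthetic data; the hyperparameter values mirror those reported for the tuned RF model in Section 4.3.5, used here purely for illustration.

```python
# Minimal random forest sketch with scikit-learn (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 5))                      # toy feature matrix
y = (X[:, 1] * X[:, 2] > 0).astype(int)            # toy labels

rf = RandomForestClassifier(
    n_estimators=30,        # number of trees n
    max_depth=15,           # maximum tree depth
    min_samples_split=6,    # minimum sample size for node splitting
    max_features=3,         # features randomly drawn per split (subset F_i)
    bootstrap=True,         # sampling with replacement (bootstrap subsets D_i)
    random_state=0,
)
rf.fit(X, y)
# predict() implements the majority vote arg max_c sum_i I(y_i = c)
print(rf.predict(X[:5]))
```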

3.1.6. Extreme Gradient Boosting

XGBoost is an efficient ensemble learning algorithm based on the gradient boosting decision tree framework, which gradually improves the prediction by iteratively training multiple decision trees. Each new tree focuses on correcting the prediction error of the previous trees, and the final prediction is the weighted sum of the predictions of all trees. To prevent overfitting, XGBoost introduces regularization terms to control model complexity. It also supports parallel computing, which can significantly accelerate the training process. In addition, XGBoost can automatically handle missing values in the data without cumbersome manual preprocessing. Because of its excellent performance and ease of use, XGBoost has been widely used in ML tasks such as classification and regression, especially for structured data and high-dimensional sparse data.
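A minimal sketch of this scheme with the xgboost package is shown below; the regularization-related settings are common knobs chosen for illustration, not the study's tuned configuration, and the data (including injected missing values) are synthetic.

```python
# Minimal gradient-boosted trees sketch with xgboost (synthetic data).
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5))
X[rng.random(X.shape) < 0.05] = np.nan     # XGBoost handles missing values natively
y = (np.nan_to_num(X[:, 0]) + np.nan_to_num(X[:, 4]) > 0).astype(int)

model = XGBClassifier(
    n_estimators=200,       # number of boosting rounds (trees added iteratively)
    learning_rate=0.1,      # shrinks each tree's contribution to the weighted sum
    max_depth=4,            # limits tree complexity
    reg_lambda=1.0,         # L2 regularization on leaf weights (anti-overfitting)
    n_jobs=-1,              # parallelized training
    eval_metric="logloss",
)
model.fit(X, y)
print(model.predict_proba(X[:3]))
```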

3.2. Performance Evaluation of Machine Learning Model

For this research endeavor, in order to obtain a comprehensive and accurate evaluation of the classification model’s performance, we adopted multiple evaluation indicators, including accuracy, precision, recall, the harmonic mean of precision and recall (F1 score), the receiver operating characteristic (ROC) curve, and the confusion matrix. These evaluation indexes play an important role in quantifying model accuracy, evaluating model robustness, and identifying model problems.
Accuracy, serving as a pivotal metric for assessing the overall efficacy of a model, holds significant importance in classification tasks. It offers a straightforward reflection of the model’s precision in forecasting sample categories, namely the proportion of samples correctly predicted by the model (including samples correctly identified as hydrate and non-hydrate) in the total samples:
$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN} .$
In the realm of model evaluation, precision signifies the fraction of samples genuinely belonging to the positive category within the set of model-predicted positive samples; that is, the proportion of samples predicted to be hydrates that are actually hydrates. Its essential significance resides in its capacity to significantly mitigate the risk of false positives in the model, subsequently enhancing the reliability of the prediction outcomes:
$\mathrm{Precision} = \dfrac{TP}{TP + FP} .$
The recall rate measures the coverage of positive samples, i.e., the proportion of samples that are actually hydrate that the model correctly predicts as such; a high recall means the model identifies more of the true positive instances:
$\mathrm{Recall} = \dfrac{TP}{TP + FN} .$
As a holistic evaluation indicator, the F1 score ingeniously amalgamates the performance of precision and recall. It is not a simple arithmetic mean but a harmonic average calculation that takes these two factors into account. When the F1 score is high, it indicates that the model has a well-balanced and excellent performance regarding both accuracy and recall, showcasing its ability to accurately discern and fully cover classification tasks:
$F1 = \dfrac{2 \times \frac{TP}{TP + FP} \times \frac{TP}{TP + FN}}{\frac{TP}{TP + FP} + \frac{TP}{TP + FN}} = \dfrac{2 \times TP}{2 \times TP + FP + FN} .$
The true positive rate (TPR), which is also referred to as sensitivity or recall within the realm of model evaluation, is employed to ascertain the ratio of samples that a model accurately predicts as positive out of all the samples that are truly positive. It serves as a significant metric for gauging the model’s capability to recognize positive samples:
$\mathrm{TPR} = \dfrac{TP}{TP + FN} = \mathrm{Recall} .$
The false positive rate is the percentage of samples that are really negative but are wrongly predicted as positive by the model:
$\mathrm{FPR} = \dfrac{FP}{FP + TN} .$
In the field of binary classification model evaluation, the ROC curve and the area under the curve (AUC) are indispensable tools. They intuitively reflect the model's classification performance under different threshold conditions, which is of great practical significance for optimizing model performance and making informed decisions. The ROC curve takes the true positive rate as the vertical axis and the false positive rate as the horizontal axis. By depicting the relationship between the two under different classification thresholds, it intuitively presents how model performance changes as the classification threshold is adjusted. This provides a solid basis for comprehensively evaluating model performance, screening key features, and assessing model performance on imbalanced datasets. The AUC, as the core quantitative indicator of the classification performance represented by the ROC curve, is defined as the area enclosed by the ROC curve and the coordinate axes. AUC values lie strictly between 0 and 1. An AUC value approaching 1 indicates a stronger ability to distinguish positive from negative samples, i.e., better classification performance; conversely, an AUC value closer to 0 indicates worse classification performance. The ROC curve and AUC are often used together to comprehensively evaluate classifier performance.
The AUC, in mathematical terms, is defined as the area encompassed beneath the ROC curve. The corresponding calculation formula is as follows:
$\mathrm{AUC} = \int_{0}^{1} \left( 1 - \mathrm{FPR}(t) \right) dt .$
The confusion matrix serves as an analytical tool within the ML sphere for gauging the performance of classification models and visually representing the classification outcomes. The rows and columns of the confusion matrix have distinct meanings: typically, the columns correspond to predicted classes, whereas the rows denote true classes. Diagonal entries (equal row and column indices) give the counts of correct predictions, while off-diagonal entries give the counts of incorrect predictions. From the confusion matrix, we can see not only the overall accuracy of the predictions but also the specific ways in which the model errs.
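All of the metrics defined in this section are available in scikit-learn. The following sketch computes them for a small set of illustrative labels and predicted probabilities.

```python
# Computing the evaluation metrics of Section 3.2 with scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, roc_curve, confusion_matrix)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])       # 1 = hydrate, 0 = non-hydrate
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)

print("accuracy :", accuracy_score(y_true, y_pred))     # (TP+TN)/(TP+TN+FP+FN)
print("precision:", precision_score(y_true, y_pred))    # TP/(TP+FP)
print("recall   :", recall_score(y_true, y_pred))       # TP/(TP+FN)
print("F1 score :", f1_score(y_true, y_pred))           # 2TP/(2TP+FP+FN)
print("AUC      :", roc_auc_score(y_true, y_score))     # area under the ROC curve

fpr, tpr, thresholds = roc_curve(y_true, y_score)       # points of the ROC curve
print(confusion_matrix(y_true, y_pred))                 # rows = true, cols = predicted
```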

4. Results

4.1. Data

The data in this study are mainly the logging while drilling (LWD) data of IODP Expedition 311 acquired with Schlumberger measurement tools. LWD tools provide conventional logging curves and special logging data. Conventional logging includes Density Caliper, Average (DCAV), Gamma Ray (GR), Bulk Density (RHOB), Best Thermal Neutron Porosity, Average (BPHI), Deep Resistivity Average (BDAV), medium resistivity, shallow resistivity, and P-wave velocity (Vp); the sampling interval of these logging curves is 0.1524 m. Special logging includes LWD Resistivity At Bit (RAB) image logs, with a sampling interval of 0.0305 m. As the hydrate reservoir in borehole U1329A is not clearly defined, the data of U1329A are not used in this study so as not to affect the research results. The logging data of the other four wells and the hydrate reservoir locations are shown in Table 2. According to statistics, in borehole U1325A, the interval where natural gas hydrate exists is within the depth range of 195–230 mbsf [48,49,50,51]. For borehole U1326A, scientific analysis reveals the existence of gas hydrate within the depth intervals of 73–94 mbsf and 252–261 mbsf [49,52]. In borehole U1327A, the interval containing gas hydrate is in the depth range of 120–138 mbsf [39,49,53]. In borehole U1328A, gas hydrate is distributed in the depth intervals of 20–35 mbsf and 215–222 mbsf [39,48,51,54]. We do not use sonic velocity logging data because the sonic velocity curves are missing at some depths. According to the correlation analysis in Figure 4, among the conventional logging data in the study area, five logging parameters, namely DCAV, GR, BPHI, RHOB, and BDAV, are chosen for their strong correlation and high sensitivity to gas hydrate. Therefore, these five logging curves from the four boreholes (U1325A, U1326A, U1327A, and U1328A) are selected as the input characteristic parameters, as shown in Figure 5.

Analysis of Log Curve Cross Plot

We selected the five logging curves corresponding to hydrate-bearing and hydrate-free intervals in the four boreholes and drew cross plots; the results are shown in Figure 6, where blue dots represent no hydrate and red dots represent hydrate. As seen in Figure 6, in the cross plots formed from the four curves of caliper, density, natural gamma, and porosity, the two types of data points are mixed, and points containing hydrate can hardly be distinguished from those without. However, when the resistivity curve is crossed with each of the other four logging curves, most hydrate-bearing and hydrate-free intervals are well distinguished. Nevertheless, some data points of the two types are still mixed together, which makes quick and accurate discrimination difficult. To accurately differentiate between intervals containing gas hydrates and those without, this research employs a series of ML algorithms geared toward the identification of GHRs.

4.2. Data Preprocessing, Model Training, and Optimization

The data need to be preprocessed before model training. We extracted the logging data of the intervals needed in this study and labeled the extracted data: intervals with natural gas hydrate are assigned a value of 1, while those without are given a value of 0. Finally, a total of 3753 data points were selected from the four wells, including 743 points containing gas hydrate and 3010 points without, as illustrated in Figure 7. To improve the fairness of the model, we use the synthetic minority oversampling technique (SMOTE) to increase the number of minority-class samples. For the XGBoost model, we increase the minority class to the same size as the majority class; for the other five models, we increase the minority class to 0.3 times the majority class. To optimize the model's training efficiency and overall performance, scaling the data to a standardized range and carrying out standardization are indispensable procedures.
The standardized formula, commonly referred to as the Z-score standardization formula, serves to transform the original data into a standard normal distribution, where the mean equals 0 and the standard deviation is 1. The formula is presented below:
$Z = \dfrac{X - \mu}{\sigma} ,$
where Z is the standardized value, indicating how far a data point deviates from the mean, X is the raw value of the data point, μ is the arithmetic mean of the dataset, and σ is its standard deviation.
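The following sketch illustrates these two preprocessing steps, SMOTE oversampling followed by Z-score standardization, assuming the imbalanced-learn package; the arrays are synthetic stand-ins for the labeled logging data.

```python
# Preprocessing sketch: SMOTE oversampling + Z-score standardization.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 5))                       # 5 log curves (toy values)
y = (rng.random(500) < 0.2).astype(int)             # imbalanced labels (~20% hydrate)

# sampling_strategy=0.3 grows the minority class to 0.3x the majority class,
# as used for five of the six models; 1.0 would fully balance it (XGBoost case).
X_res, y_res = SMOTE(sampling_strategy=0.3, random_state=0).fit_resample(X, y)

scaler = StandardScaler()                           # Z = (X - mu) / sigma per feature
X_std = scaler.fit_transform(X_res)
print(np.bincount(y_res), X_std.mean(axis=0).round(3), X_std.std(axis=0).round(3))
```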
Upon finishing data preprocessing, the dataset samples are randomly split into a training set and a test set in a 3:1 ratio. The training set supports model learning, enabling the model to identify the characteristics and hidden rules within the data, while the test set is used to assess how well the trained model functions, testing its generalization capability and accuracy. Because a single split of the dataset could bias the assessment of model performance, cross-validation is adopted as a more reliable evaluation approach. This approach trains and assesses the model by splitting the dataset into several subsets; each subset is used in rotation as a validation set while the others serve as training sets. In this way, every subset acts once as a validation set, enabling a more comprehensive assessment of the model's capabilities across data subsets. In the pursuit of model performance optimization, hyperparameter optimization is an indispensable step [55]. Grid search is a method for optimizing hyperparameters that systematically traverses combinations of multiple parameters and determines the best-performing parameters through cross-validation. In general, a value range is first specified for each hyperparameter; the grid search algorithm then traverses all combinations in the grid, evaluating each parameter set with cross-validation, and finally outputs the parameter combination that performs best across all cross-validation rounds. In this investigation, the grid search strategy is employed to optimize the hyperparameters of the four tuned classifiers, and 5-fold cross-validation is adopted to assess the model's performance under every parameter configuration. This strategy enhances efficiency while elevating the accuracy of model optimization, and it offers a more comprehensive and robust model evaluation framework, laying a solid foundation for subsequent research and application.
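A minimal sketch of the 3:1 split and grid search with 5-fold cross-validation follows, using the SVM parameter grid as an example; the grid ranges are illustrative assumptions, and X_std and y_res are the preprocessed arrays from the sketch above.

```python
# Train/test split (3:1) and grid search with 5-fold cross-validation.
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(
    X_std, y_res, test_size=0.25, random_state=0, stratify=y_res)  # 3:1 split

param_grid = {"C": [0.1, 1, 10, 100], "gamma": [0.01, 0.1, 1], "kernel": ["rbf"]}
search = GridSearchCV(SVC(), param_grid, cv=5, scoring="accuracy")  # 5-fold CV
search.fit(X_train, y_train)

print(search.best_params_)              # best parameter combination found
print(search.score(X_test, y_test))     # generalization check on held-out data
```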

4.3. Identification Results

In this study, we use six different models to identify marine GHRs. The results of each model are obtained from the fine-tuning optimal model using the grid search method. The best parameter selection is shown in Table 3. Table 4 presents the numerical values of the determined evaluation indicators. Figure 8 displays the ROC curves along with the corresponding AUC values for the six models.

4.3.1. GPC Identification Results

We use the logging data as input features, construct a Gaussian process to model the latent distribution of the data based on the GPC algorithm, and then classify and predict GHRs. During the construction of the GPC recognition model, the kernel sets initial values for a set of hyperparameters to start the first optimization; in subsequent iterative optimization, hyperparameter values are drawn from a preset reasonable range. As shown in Table 3, the default parameter values of the Gaussian process classifier in scikit-learn are used in this study.
Within the test set, the GPC recognition accuracy stands at 0.8988, fully attesting to its effectiveness in recognizing natural gas hydrates. The excellent performance of GPC with respect to this index indicates its capability to precisely detect the presence of natural gas hydrate based on logging data. In terms of precision, GPC also performed well, reaching 0.8925. In the task of natural gas hydrate identification, positive samples usually refer to the logging data containing natural gas hydrate. The high performance of GPC in terms of precision shows that it can effectively identify these positive samples, thus reducing the possibility of misjudgment. At the same time, its recall rate is as high as 0.8988, which means that the algorithm can identify the vast majority of actual positive samples in the test set. This result further verifies the stability and reliability of the GPC algorithm in the natural gas hydrate recognition task. The F1 score of the algorithm is 0.8926, which is between the accuracy and recall metrics. This shows that the algorithm can balance these two indicators well. In addition, the AUC value (0.9022), a key metric for evaluating the model’s classification capability, also underscores the GPC model’s excellent performance in differentiating gas hydrate and non-hydrate samples. Therefore, it can be inferred that the algorithm performs well in the task of natural gas hydrate recognition.

4.3.2. SVM Identification Results

SVM separates different types of samples by constructing the optimal hyperplane and achieves high-precision classification prediction on the logging data. In constructing an SVM model, the choice of parameters substantially influences performance. To ascertain the optimal parameters, this research employed the grid search approach combined with 5-fold cross-validation. Finally, the parameter values showing the best classification accuracy on the test set are C = 10 and γ = 1, with an RBF kernel.
Following the optimization of model parameters and their deployment on the testing dataset, a set of crucial evaluation metrics is acquired. In the evaluation, the model yields an accuracy of 0.9105, a precision of 0.9062, a recall rate of 0.9105, and an F1 score of 0.9070. Concurrently, the area under the ROC curve of the model registers at 0.8973, with the ROC curve depicted in Figure 8. In light of the preceding metrics, we ascertain that the SVM model demonstrates robust performance in discerning natural gas hydrate, boasting high accuracy, precision, recall rate, and F1 score. At the same time, the AUC value additionally indicates that the model exhibits competent classification capabilities. This implies that the model exhibits high reliability and practicability in practical application, thereby offering robust assistance in the detection and identification of natural gas hydrate.

4.3.3. MLP Identification Results

MLP learns the nonlinear characteristics of the data by constructing multiple hidden layers, and the MLP model achieves high-precision classification and prediction on the logging data. In this model, we optimize the regularization parameter, maximum number of iterations, initial learning rate, optimization algorithm, batch size, and hidden layer size. As indicated in Table 3, via the grid search approach, the regularization parameter is set to 0.0001, the maximum iteration count to 100, and the initial learning rate to 0.01; the ‘Adam’ optimization algorithm is chosen, each training batch contains 100 samples, and the hidden layer has 100 neurons. Upon deploying the tuned model on the test dataset, a comprehensive set of evaluation metrics is derived. The accuracy, precision, recall rate, and F1 score are 0.9095, 0.9052, 0.9095, and 0.9060, respectively. The AUC value of this model reaches 0.9180, and the ROC curve is shown in Figure 8. According to a comprehensive analysis of the above evaluation indexes, the MLP model shows excellent classification performance in identifying natural gas hydrate: it has high-precision prediction ability, stable evaluation indexes, and strong feature learning ability, and it performs well in handling complex nonlinear relationships and extracting data features.

4.3.4. LR Identification Results

As a classical linear classification algorithm, LR has also been applied to the task of gas hydrate recognition. Although its performance may not match that of nonlinear models (such as GPC, SVM, and MLP), the LR model can still play a role in some specific cases. Following grid search coupled with cross-validation, the parameters adopted for LR are the defaults in the scikit-learn library. Applied to the test data, the model achieved an accuracy of 0.8871, a precision of 0.8791, a recall rate of 0.8871, and an F1 score of 0.8746. Additionally, the AUC metric for the model was 0.8723, with the corresponding ROC curve depicted in Figure 8.
Among the six models, the reason for the weaker performance of LR may be that the algorithm has limitations in handling nonlinear features, which affects its performance in the natural gas hydrate recognition task; it also indicates that the model is comparatively poor at differentiating gas hydrate samples from non-hydrate samples. Even so, the LR model achieves acceptable performance in gas hydrate recognition, and in some specific cases, such as linearly separable data, it remains a choice worth considering.

4.3.5. RF Identification Results

RF builds a strong classification model by integrating multiple decision trees with a bagging strategy, and the RF model shows excellent classification performance in the gas hydrate recognition task. To determine the optimal parameters, the grid search method and 5-fold cross-validation were used during the construction of the RF model. As shown in Table 3, we set the number of randomly selected features to 3, the minimum sample size for node splitting to 6, the maximum depth of the trees to 15, and the number of trees to 30. Applying these parameters to the model yields a series of evaluation indexes: the accuracy, precision, recall rate, and F1 score are 0.9191, 0.9154, 0.9191, and 0.9153, respectively, and the AUC value reaches 0.9314. The ROC curve is shown in Figure 8. Based on these indicators, it can be clearly concluded that among the six comparison models, the RF model ranks first across the indicators; in identifying natural gas hydrate, it attains high accuracy, precision, recall rate, F1 score, and AUC. This means that the RF model, with its excellent classification performance and generalization ability, shows significant application value in practical natural gas hydrate detection and provides strong technical support for the efficient identification of potential reservoirs and the optimization of exploration decisions.

4.3.6. XGBoost Identification Results

XGBoost, as an ensemble learning algorithm based on gradient boosting decision trees, shows a certain feature learning ability in the natural gas hydrate recognition task by iteratively optimizing residuals and introducing regularization terms to prevent overfitting. The parameter settings are shown in Table 3. However, according to the evaluation results of the current model, its overall performance does not meet the expected application standard. On the test set, the accuracy, precision, recall rate, F1 score, and AUC value are 0.8860, 0.6340, 0.7736, 0.6969, and 0.9243, respectively.
Although the AUC value (0.9243) indicates that the model has a certain classification ability, the following key indicators reveal its limitations: (1) Insufficient precision. The precision (0.6340) is significantly lower than that of comparable models (such as the 0.9062 of SVM), indicating that actual positive samples account for only 63.4% of the samples the model predicts as “natural gas hydrate”; the risk of false positives is high, which may increase exploration costs. (2) A low F1 score. The F1 value (0.6969) comprehensively reflects the balance between precision and recall; an index below 0.7 indicates that the model has obvious shortcomings in distinguishing positive and negative samples and cannot meet the needs of high-precision detection.
Comprehensive evaluation conclusion: the current XGBoost model performs poorly in the natural gas hydrate recognition task, and its lack of precision, low F1 score, and low feature overlap may lead to a high misjudgment rate in practical application. Combined with the excellent performance of SVM and other comparison models, we recommend carefully evaluating the applicability of XGBoost in this scenario and giving priority to other algorithms or further improving its performance through feature engineering.

4.4. SHapley Additive exPlanations Value Analysis of Six Models

SHapley Additive exPlanations (SHAP) analysis reveals differences in feature contribution patterns and model behavior of different ML models in natural gas hydrate recognition tasks. The SHAP summary diagram of the six models of SVM, RF, LR, GPC, MLP, and XGBoost is shown in Figure 9.
As shown in Figure 9, RHOB and BDAV are the key features for identifying natural gas hydrates in the SVM model; they have significant impacts on the model output, with diverse directions of influence. The RF model shows that BDAV and RHOB are much more important than the other features, and these two features may have a positive or negative impact on the model output depending on the situation. In the LR model, BDAV dominates and mainly exhibits a positive impact, while GR mainly exhibits a negative impact, reflecting the logistic regression model's ability to capture linear relationships between features and the target variable. In the GPC model, BDAV and RHOB are equally important, and all features have diverse directions of influence, demonstrating the flexibility of GPC in handling complex nonlinear relationships. In the MLP model, the importance of BDAV and RHOB is prominent and their influence directions are diverse, reflecting the strong ability of the multilayer perceptron, as a neural network model, to capture nonlinear relationships. Finally, in the XGBoost model, BDAV is the key predictive factor, mainly with a positive impact, indicating that the XGBoost model relies heavily on the BDAV feature when identifying natural gas hydrates.
In summary, through a simple analysis of the SHAP summary graphs of six models, we have revealed the key features, feature influence directions, and differences between models in the task of identifying natural gas hydrates for each model. These pieces of information provide an important basis for understanding model behavior, optimizing model performance, and selecting more suitable models for natural gas hydrate identification.
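For reference, a SHAP summary plot of the kind shown in Figure 9 can be produced along the following lines, assuming the shap package; the snippet uses TreeExplainer for a fitted random forest (rf) and is a sketch rather than the study's exact plotting code.

```python
import shap

# TreeExplainer is efficient for tree ensembles (RF, XGBoost); for SVM, GPC,
# MLP, and LR a model-agnostic explainer (e.g., shap.KernelExplainer) can be used.
explainer = shap.TreeExplainer(rf)            # rf: a fitted RandomForestClassifier
sv = explainer.shap_values(X_test)
# Depending on the shap version, a binary classifier may return one array per
# class; keep the contributions for the positive (hydrate) class.
sv_pos = sv[1] if isinstance(sv, list) else sv

feature_names = ["DCAV", "GR", "BPHI", "RHOB", "BDAV"]
# Summary plot: feature ranking by mean |SHAP value| plus direction of influence
shap.summary_plot(sv_pos, X_test, feature_names=feature_names)
```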

5. Discussion

In this study, six ML algorithms are applied to train the logging data from four wells of the IODP Expedition 311. Through data preprocessing and hyperparameter optimization of the model, the samples with and without gas hydrate layers are identified. A range of evaluation metrics, encompassing accuracy, precision, recall, the F1 score, the AUC value, and the ROC curve, are leveraged to assess the efficacy of distinct ML models in hydrate recognition tasks. Next, model performance, parameter optimization, model comparison, and research limitations will be discussed in depth based on the experimental results.

5.1. Comparison of Six Machine Learning Methods

This study presents the evaluation metrics (Table 4), ROC curves and AUC values (Figure 8), and prediction confusion matrices (Figure 10) of each model for hydrate identification by the six ML methods. Taken together, these indicators show that all six ML models achieve a certain accuracy in the GHR identification task. Among them, the indicators of SVM, MLP, and RF all exceed 0.90, and RF's indicators are the highest, making it the best performer of the six models. This shows that, for this binary classification problem, the RF model builds a strong classifier by integrating multiple decision trees with the bagging strategy, achieving the best distinction between GHRs and non-reservoirs; its accuracy, precision, recall rate, F1 score, and AUC value rank first among the six comparison models, enabling accurate and efficient identification of GHRs. The high scores of MLP and SVM show that the MLP model, as a deep learning approach, can learn the complex mapping between logging data and hydrate reservoir characteristics through the training of a multilayer neural network and thus also performs well in recognition, while the SVM model can effectively distinguish GHRs from non-reservoirs by finding the optimal hyperplane, thereby accurately identifying GHRs. For the GPC and LR models, the evaluation indexes approach 0.90 without reaching it, and the overall recognition effect is still good. The slightly inferior GPC result may be related to parameter settings or data characteristics; the slightly inferior LR result may stem from the fact that LR, as a linear classifier, is relatively weak on nonlinear problems, which affects its ability to identify GHRs under complex geological conditions. For XGBoost, the overall performance is relatively weak compared with the other five models, which may be due to the model's insufficient ability to capture the complex nonlinear relationships in the logging data or its limited ability to distinguish GHR boundary characteristics.
The AUC value is a key indicator for evaluating classifier performance. As presented in Figure 8, the AUC values of the six models are ordered as follows: RF > XGBoost > MLP > GPC > SVM > LR. The AUC value of RF is the highest, reaching 0.9314, and GPC, at 0.9022, is slightly lower than MLP. Although the AUC value of MLP is slightly higher than that of GPC, considering that the calculation of the AUC value carries some randomness and volatility, the difference between them is not significant, and the performance of the MLP and GPC models can be considered equivalent. The AUC value of SVM is 0.8973, lower than the preceding models but still at a high level, indicating that the model has application value in the GHR identification task. The AUC value of LR is 0.8723, an acceptable level, though a gap remains compared with the other models. Although the other indicators of XGBoost are low, its AUC value is 0.9243; this high AUC indicates that XGBoost has overall classification potential that remains to be developed.
According to the analysis of the various indicators, although the XGBoost model performs well in terms of its AUC value, its accuracy and F1 score are significantly lower than those of the other models. Further analysis of the model and data characteristics suggests that the reason may be a mismatch between model complexity and data support: although the class distribution was balanced through SMOTE, the limited amount of data (especially labeled samples) is still insufficient for XGBoost to fully learn complex nonlinear patterns. Unlike the linear constraint of the LR model, the main challenge for XGBoost is to avoid overfitting while optimizing the interactive expression of features.

5.2. Advantages and Limitations of Machine Learning Methods

The traditional method of identifying GHRs from logging data is usually based on empirical formulas and the cross plot method. However, the recognition effect of these methods usually depends on the experience of logging interpretation personnel, and the recognition efficiency is low; it is commonly challenging to precisely define the nonlinear relationship between logging data and GHRs. This study is based on a data-driven method, using different ML algorithms to learn the relationship between the input parameters (logging data) and the output results (presence or absence of a hydrate layer). Except for XGBoost, which needs to be used with caution, the accuracy of the other five models is higher than 0.88. The recognition effect is good, and the discriminant classification models they construct are not subject to interference from logging interpretation personnel and offer high efficiency.
This study leverages an extensive collection of well logging data along with labeled hydrate reservoir data to form the training samples. Given the paucity of samples, the model’s training performance could be profoundly impacted, thereby leading to a reduction in its generalization capacity and prediction accuracy. Furthermore, due to the fact that the number of hydrate-bearing reservoirs is less than that of non-hydrate-bearing reservoirs in actual boreholes, the number of samples in these two categories is unbalanced, which may have a certain impact on the accuracy of ML.

5.3. Model Limitations and Transferability Across Geological Settings

The study area of this paper is located in the accretionary prism area of the Cascadia subduction zone. The geological conditions of the region (such as marine sedimentary environment and specific tectonic background) may limit the universality of the model. The limitations of the model are as follows: (1) The correlation between the response characteristics of logging curves and geological parameters may vary due to differences in sedimentary facies or lithological combinations. For example, in areas with active tectonic activity or permafrost on land, there are significant differences in the occurrence state of hydrates compared to the marine environment, leading to a decrease in the predictive ability of model input parameters. (2) Although ML models can capture nonlinear relationships, they may overlook the physical mechanisms of geological processes, leading to prediction biases in extreme geological conditions. (3) The weight or feature combination of logging curves may be optimal in the IODP Expedition 311 region, but the key parameters in other regions may be different. Consequently, the algorithm of the model developed in this research is predominantly suitable for regions with similar formation lithology as the study area. When dealing with regions that have considerable lithological differences relative to the study area, retraining the model and adjusting its parameters become necessary.

5.4. Ideas for Further Investigations

This study relies on the available logging data, whose quality and volume might impose constraints. As a recommendation for future studies, the multi-source data of geology, geophysics, geochemistry, and other disciplines can be combined to conduct a more comprehensive study on the reservoir identification method of natural gas hydrate. Moreover, the ML approach utilized in this research can be integrated with other cutting-edge techniques, including, but not limited to, deep learning and ensemble learning, to enhance the model’s recognition precision and generalization capacity.

6. Conclusions

In this study, logging data from four natural gas hydrate-related boreholes in the research zone are integrated into the training samples as feature inputs, with the presence or absence of gas hydrate in the formation as the label. Six ML algorithms are used to identify GHRs. The classification models are evaluated with a range of performance indicators: accuracy, precision, recall, the F1 score, the ROC curve, and the AUC. The following conclusions can be drawn:
(1) Except for the XGBoost model, the five remaining models achieve accuracies above 0.88, showing that they can effectively identify GHRs. Ranked by evaluation score in descending order, the models are RF, SVM, MLP, GPC, LR, and XGBoost.
(2) Compared with the traditional cross-plot method of identifying GHRs, the ML approach independently establishes a data-driven model that characterizes the nonlinear relationship between logging data and GHRs. The discriminant classification models constructed in this way effectively reduce the influence of human intervention in log interpretation and significantly improve interpretation efficiency. This introduces an innovative concept and approach for the identification of GHRs and establishes a basis for the subsequent prospecting and exploitation of natural gas hydrate resources.

Author Contributions

Conceptualization, X.H. and C.Z.; methodology, X.H. and W.L.; software, W.L.; validation, K.X.; formal analysis, W.L.; investigation, W.L.; resources, K.X. and X.H.; data curation, X.H.; writing—original draft preparation, W.L.; writing—review and editing, X.H. and W.L.; visualization, G.S. and Y.W.; supervision, X.H. and C.Z.; project administration, X.H.; funding acquisition, X.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 42404150), Open Fund of Shaanxi Key Laboratory of Petroleum Accumulation Geology (No. PAG-202403), Key Laboratory of Gas Hydrate, Guangzhou Institute of Energy Conversion, Chinese Academy of Sciences (No. E229kf18), Open Research Project of the Hubei Key Laboratory of Intelligent Geo-Information Processing (No. KLIGIP-2023-A03), the Academic and Technical Leader Training Program of Jiangxi Province (No. 20204BCJ23027), Science and Technology Research Project of Jiangxi Provincial Department of Education (No. GJJ2200747), and the Postgraduate Innovation Fund from the East China University of Technology (No. YC2025-S406 and No. YC2025-S408).

Data Availability Statement

The data underlying this article can be accessed at https://iodp.tamu.edu (accessed on 1 October 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AUC: Area under curve
BDAV: Deep Resistivity Average
BPHI: Best Thermal Neutron Porosity, Average
BSRs: Bottom-simulating reflectors
DCAV: Density Caliper, Average
F1 score: Harmonic mean of precision and recall
GHRs: Gas hydrate reservoirs
GPC: Gaussian process classification
GR: Gamma Ray
IODP: International Ocean Drilling Program
LR: Logistic regression
LWD: Logging While Drilling
mbsf: Meters below seafloor
ML: Machine learning
MLP: Multilayer perceptron
MSE: Mean-square error
RAB: Resistivity At Bit
RF: Random forest
RHOB: Bulk Density
ROC: Receiver operating characteristic
SHAP: SHapley Additive exPlanations
SMOTE: Synthetic minority oversampling technique
SVM: Support vector machine
Vp: P-wave velocity
XGBoost: Extreme gradient boosting

References

  1. Collett, T.S.; Lee, M.W.; Agena, W.F.; Miller, J.J.; Lewis, K.A.; Zyrianova, M.V.; Boswell, R.; Inks, T.L. Permafrost-associated natural gas hydrate occurrences on the Alaska North Slope. Mar. Pet. Geol. 2011, 28, 279–294.
  2. Sloan, E.D., Jr.; Koh, C.A. Clathrate Hydrates of Natural Gases, 3rd ed.; CRC Press: Boca Raton, FL, USA, 2007; p. 752.
  3. Waite, W.F.; Santamarina, J.C.; Cortes, D.D.; Dugan, B.; Espinoza, D.N.; Germaine, J.; Jang, J.; Jung, J.W.; Kneafsey, T.J.; Shin, H.; et al. Physical properties of hydrate-bearing sediments. Rev. Geophys. 2009, 47, RG4003.
  4. Gajanayake, S.M.; Gamage, R.P.; Li, X.S.; Huppert, H. Natural gas hydrates–Insights into a paradigm-shifting energy resource. Energy Rev. 2023, 2, 100013.
  5. Hu, X.D.; Zou, C.C.; Lu, Z.Q.; Yu, C.Q.; Peng, C.; Li, W.; Tang, Y.Y.; Liu, A.Q.; Kouamelan, K.S. Evaluation of gas hydrate saturation by effective medium theory in shaly sands: A case study from the Qilian Mountain permafrost, China. J. Geophys. Eng. 2019, 16, 215–228.
  6. Hu, X.D.; Zou, C.C.; Qin, Z.; Yuan, H.; Song, G.; Xiao, K. Numerical simulation of resistivity and saturation estimation of pore-type gas hydrate reservoirs in the permafrost region of the Qilian Mountains. J. Geophys. Eng. 2024, 21, 599–613.
  7. Kianoush, P.; Mesgari, F.; Jamshidi, E.; Gomar, M.; Kadkhodaie, A.; Varkouhi, S. Investigating the effect of hole size, bottom hole temperature, and composition on cement bonding quality of exploratory wells in Iran. Sci. Rep. 2024, 14, 29653.
  8. Wei, N.; Pei, J.; Li, H.; Sun, W.; Xue, J. Application of in-situ heat generation plugging removal agents in removing gas hydrate: A numerical study. Fuel 2022, 323, 124397.
  9. Xing, L.; Gao, L.; Ma, Z.; Lao, L.; Wei, W.; Han, W.; Ge, X. A permittivity-conductivity joint model for hydrate saturation quantification in clayey sediments based on measurements of time domain reflectometry. Geoenergy Sci. Eng. 2024, 237, 212798.
  10. Zhu, X.; Liu, T.; Ma, S.; Liu, X.; Li, A. Morphology identification of gas hydrate based on a machine learning method and its applications on saturation estimation. Geophys. J. Int. 2023, 234, 1307–1325.
  11. Wu, S.Y.; Liu, J.; Xu, H.N.; Liu, C.L.; Ning, F.L.; Chu, H.X.; Wu, H.R.; Wang, K. Application of frequency division inversion in the prediction of heterogeneous natural gas hydrates reservoirs in the Shenhu Area, South China Sea. China Geol. 2022, 5, 251–266.
  12. Wang, Y.; Wang, Y.F. Quantitative evaluation of gas hydrate reservoir by AVO attributes analysis based on the Brekhovskikh equation. Pet. Sci. 2023, 20, 2045–2059.
  13. Zhan, L.; Matsushima, J.; Liu, H.; Lu, H. Evaluation and modeling of velocity dispersion and frequency-dependent attenuation in gas hydrate-bearing sediments. Mar. Pet. Geol. 2025, 171, 107204.
  14. Yamamoto, K. Overview and introduction: Pressure core-sampling and analyses in the 2012–2013 MH21 offshore test of gas production from methane hydrates in the eastern Nankai Trough. Mar. Pet. Geol. 2015, 66, 296–309.
  15. Lu, S.; McMechan, G.A. Estimation of gas hydrate and free gas saturation, concentration, and distribution from seismic data. Geophysics 2002, 67, 582–593.
  16. Liu, C.; Meng, Q.; He, X.; Li, C.; Ye, Y.; Zhang, G.; Liang, J. Characterization of natural gas hydrate recovered from Pearl River Mouth basin in South China Sea. Mar. Pet. Geol. 2015, 61, 14–21.
  17. Wang, X.; Collett, T.S.; Lee, M.W.; Yang, S.; Guo, Y.; Wu, S. Geological controls on the occurrence of gas hydrate from core, downhole log, and seismic data in the Shenhu area, South China Sea. Mar. Geol. 2014, 357, 272–292.
  18. Yuan, J.; Edwards, R.N. The assessment of marine gas hydrates through electrical remote sounding: Hydrate without a BSR? Geophys. Res. Lett. 2000, 27, 2397–2400.
  19. Tian, D.; Liu, X. A new approach for the identification of gas hydrate in marine sediments. Mar. Geophys. Res. 2020, 41, 13.
  20. Wu, C.; Han, L.; Zhang, F.; Liu, J.; Chen, H.; Di, B. Gas hydrate reservoir identification based on rock physics modeling and sensitive elastic parameters. J. Geophys. Eng. 2023, 20, 117–127.
  21. Ameur-Zaimeche, O.; Kechiched, R.; Heddam, S.; Wood, D.A. Real-time porosity prediction using gas-while-drilling data and machine learning with reservoir associated gas: Case study for Hassi Messaoud field, Algeria. Mar. Pet. Geol. 2022, 140, 105631.
  22. Feng, S.; Xiong, L.; Radwan, A.E.; Xie, R.; Yin, S.; Zhou, W. Accurate identification of low-resistivity gas layer in tight sandstone gas reservoirs based on optimizable neural networks. Geoenergy Sci. Eng. 2024, 241, 213094.
  23. Eftekhari, S.H.; Memariani, M.; Maleki, Z.; Aleali, M.; Kianoush, P. Electrical facies of the Asmari Formation in the Mansouri oilfield, an application of multi-resolution graph-based and artificial neural network clustering methods. Sci. Rep. 2024, 14, 5198.
  24. Liu, J.J.; Liu, J.C. An intelligent approach for reservoir quality evaluation in tight sandstone reservoir using gradient boosting decision tree algorithm-A case study of the Yanchang Formation, mid-eastern Ordos Basin, China. Mar. Pet. Geol. 2021, 126, 104939.
  25. Hua, Y.; Gao, G.; He, D.; Wang, G.; Liu, W. Reservoir fluid identification based on multi-head attention with UMAP. Geoenergy Sci. Eng. 2024, 238, 212888.
  26. Tan, M.J.; Bai, Y.; Zhang, H.T.; Li, G.R.; Wei, X.P.; Wang, A.D. Fluid typing in tight sandstone from wireline logs using classification committee machine. Fuel 2020, 271, 117601.
  27. Hu, X.D.; Song, G.; Wang, C.M.; Xiao, K.; Yuan, H.; Leng, W.F.; Wei, Y.M. An application study of machine learning methods for lithological classification based on logging data in the Permafrost Zones of the Qilian Mountains. Processes 2025, 13, 1475.
  28. Ren, Q.; Zhang, H.; Zhang, D.; Zhao, X.; Yan, L.; Rui, J.; Zeng, F.; Zhu, X. A framework of active learning and semi-supervised learning for lithology identification based on improved naive Bayes. Expert Syst. Appl. 2022, 202, 117278.
  29. Wang, J.; Cao, J. A lithology identification approach using well logs data and convolutional long short-term memory networks. IEEE Geosci. Remote Sens. Lett. 2023, 20, 7506405.
  30. Eftekhari, S.H.; Memariani, M.; Maleki, Z.; Aleali, M.; Kianoush, P.; Shirazy, A.; Shirazi, A.; Pour, A.B. Employing statistical algorithms and clustering techniques to assess lithological facies for identifying optimal reservoir rocks: A case study of the Mansouri oilfields, SW Iran. Minerals 2024, 14, 233.
  31. Zhou, K.; Zhang, J.; Ren, Y.; Huang, Z.; Zhao, L. A gradient boosting decision tree algorithm combining synthetic minority oversampling technique for lithology identification. Geophysics 2020, 85, WA147–WA158.
  32. Zhang, J.; He, Y.; Zhang, Y.; Li, W.; Zhang, J. Well-logging-based lithology classification using machine learning methods for high-quality reservoir identification: A case study of Baikouquan formation in Mahu Area of Junggar Basin, NW China. Energies 2022, 15, 3675.
  33. Lee, J.; Byun, J.; Kim, B.; Yoo, D.G. Delineation of gas hydrate reservoirs in the Ulleung Basin using unsupervised multi-attribute clustering without well log data. J. Nat. Gas Sci. Eng. 2017, 46, 326–337.
  34. Zhu, L.; Zhou, X.; Sun, J.; Liu, Y.; Wang, J.; Wu, S. Reservoir classification and log prediction of gas hydrate occurrence in the Qiongdongnan Basin, South China Sea. Front. Mar. Sci. 2023, 10, 1055843.
  35. Tian, D.; Yang, S.; Gong, Y.; Geng, M.; Li, Y.; Hu, G. A comparative study of machine learning methods for gas hydrate identification. Geoenergy Sci. Eng. 2023, 223, 211564.
  36. Riedel, M.; Collett, T.S.; Malone, M.; Expedition 311 Scientists. Expedition 311 synthesis: Scientific findings. Proc. Integr. Ocean Drill. Program 2010, 311, 2.
  37. Riddihough, R. Recent movements of the Juan de Fuca plate system. J. Geophys. Res. Solid Earth 1984, 89, 6980–6994.
  38. Riedel, M.; Collett, T.S. Cascadia margin gas hydrates. IODP Prelim. Rep. 2005, 311, 9.
  39. Riedel, M.; Collett, T.S.; Malone, M.J.; Expedition 311 Scientists. Expedition 311 Summary. Proc. Integr. Ocean Drill. Program 2010, 311, 2–26.
  40. Rasmussen, C.E. Gaussian processes in machine learning. In Summer School on Machine Learning; Springer: Berlin/Heidelberg, Germany, 2003; Volume 3176, pp. 63–71.
  41. Liu, H.; Ong, Y.S.; Yu, Z.; Cai, J.; Shen, X. Scalable Gaussian process classification with additive noise for non-Gaussian likelihoods. IEEE Trans. Cybern. 2021, 52, 5842–5854.
  42. Seeger, M. Gaussian processes for machine learning. Int. J. Neural Syst. 2004, 14, 69–106.
  43. Vapnik, V.N. An overview of statistical learning theory. IEEE Trans. Neural Netw. 1999, 10, 988–999.
  44. Chang, C.C.; Lin, C.J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 1–27.
  45. Hassoun, M.H. Fundamentals of Artificial Neural Networks; MIT Press: Cambridge, MA, USA, 1996; Volume 84, p. 906.
  46. Rao, M.B. Feedforward Neural Network Methodology. Technometrics 2000, 42, 432–433.
  47. Cox, D.R. The regression analysis of binary sequences. J. R. Stat. Soc. Ser. B Stat. Methodol. 1958, 20, 215–232.
  48. Blanc-Valleron, M.M.; Pierre, C.; Bartier, D.; Rouchy, J.M. Data report: Bulk carbonate content of sediments and mineralogy of authigenic carbonates along an east-west transect in the northern Cascadia margin, IODP Expedition 311. Proc. IODP 2009, 311, 2.
  49. Chen, M.A.; Riedel, M.; Spence, G.D.; Hyndman, R.D. Data report: A downhole electrical resistivity study of northern Cascadia marine gas hydrate. In Integrated Ocean Drilling Program; Texas A&M University: College Station, TX, USA, 2008; Volume 311.
  50. Riedel, M.; Collett, T.S.; Malone, M.J.; Expedition 311 Scientists. Site U1325. Proc. Integr. Ocean Drill. Program 2006, 311, 202.
  51. Pierre, C.; Blanc-Valleron, M.M.; Rouchy, J.M.; Bartier, D. Data report: Stable isotope composition of authigenic carbonates from the northern Cascadia margin, IODP Expedition 311, Sites U1325–U1329. Proc. Integr. Ocean Drill. Program 2009, 311, 2.
  52. Riedel, M.; Collett, T.S.; Malone, M.J.; Expedition 311 Scientists. Site U1326. Proc. Integr. Ocean Drill. Program 2006, 311, 6.
  53. Riedel, M.; Collett, T.S.; Malone, M.J.; Expedition 311 Scientists. Site U1327. Proc. Integr. Ocean Drill. Program 2006, 311, 16–22.
  54. Riedel, M.; Collett, T.S.; Malone, M.J.; Expedition 311 Scientists. Site U1328. Proc. Integr. Ocean Drill. Program 2006, 311, 25.
  55. Hutter, F.; Kotthoff, L.; Vanschoren, J. Automated Machine Learning: Methods, Systems, Challenges; Springer Nature: Cham, Switzerland, 2019; p. 219.
Figure 1. The geographical location of the five sites (U1325, U1326, U1327, U1328, and U1329) during the IODP Expedition 311 [39].
Figure 2. MLP structure diagram.
Figure 3. Structure of a single neuron.
Figure 4. Correlation heatmap of seven parameters: a comprehensive visual analysis.
Figure 5. Logging curves of four boreholes, including DCAV, GR, BPHI, RHOB, and BDAV. Intervals containing natural gas hydrate are marked with red (YES), and intervals without natural gas hydrate are marked with blue (NO). (a) U1325A, (b) U1326A, (c) U1327A, and (d) U1328A.
Figure 6. Cross plot of five logging curves.
Figure 7. Histogram of sample size distribution of the logging dataset. NO represents the sample data without hydrate, and YES represents the data with hydrate.
Figure 8. ROC curves and AUC values of the six models.
Figure 9. SHAP value plots for the six models: (a) SVM model, (b) RF model, (c) LR model, (d) GPC model, (e) MLP model, and (f) XGBoost model.
Figure 10. Prediction confusion matrices of the six models: (a) RF model, (b) GPC model, (c) MLP model, (d) SVM model, (e) XGBoost model, and (f) LR model.
Table 1. Information on the drilling locations of boreholes U1325A, U1326A, U1327A, U1328A, and U1329A during the IODP Expedition 311 [39].

Hole | Latitude | Longitude | Total Penetration
U1325A | 48°38.691′ N | 126°58.991′ W | 350 mbsf
U1326A | 48°37.635′ N | 127°3.029′ W | 300 mbsf
U1327A | 48°41.887′ N | 126°51.921′ W | 300 mbsf
U1328A | 48°40.072′ N | 126°51.022′ W | 300 mbsf
U1329A | 48°47.369′ N | 126°40.713′ W | 220 mbsf
Table 2. Logging data information and hydrate reservoir depth range [39,50,52,53,54].

Well ID | Logging Curves | Sampling Interval (m) | Gas Hydrate Depth Range (mbsf)
U1325A | DCAV, GR, RHOB, BPHI, BDAV, Vp | 0.1524 | 195–230
U1325A | RAB | 0.0305 | 195–230
U1326A | DCAV, GR, RHOB, BPHI, BDAV, Vp | 0.1524 | 73–94, 252–261
U1326A | RAB | 0.0305 | 73–94, 252–261
U1327A | DCAV, GR, RHOB, BPHI, BDAV, Vp | 0.1524 | 120–138
U1327A | RAB | 0.0305 | 120–138
U1328A | DCAV, GR, RHOB, BPHI, BDAV, Vp | 0.1524 | 20–35, 215–222
U1328A | RAB | 0.0305 | 20–35, 215–222
Table 3. Optimal hyperparameter selection of the six models.

Model | Hyperparameter (Symbol) | Parameter Value
GPC | Default setting | Default setting
SVM | Kernel | RBF
 | Penalty coefficient (C) | 10
 | Gamma | 1
MLP | Regularization parameter (alpha) | 0.0001
 | Maximum number of iterations (max_iter) | 100
 | Initial learning rate (learning_rate_init) | 0.01
 | Optimization algorithm (solver) | Adam
 | Batch size (batch_size) | 100
 | Hidden layer configuration (hidden_layer_sizes) | 100
RF | Number of randomly selected features (max_features) | 3
 | Minimum sample size for node splitting (min_samples_split) | 6
 | Maximum tree depth (max_depth) | 15
 | Number of trees (n_estimators) | 30
XGBoost | Learning rate | 0.1
 | Number of trees (n_estimators) | 200
 | Maximum tree depth (max_depth) | 7
 | Minimum child weight (min_child_weight) | 1
 | Gamma | 0.1
 | Subsample | 0.6
 | Column subsampling ratio (colsample_bytree) | 0.8
LR | Default setting | Default setting
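Assuming a scikit-learn and XGBoost implementation, the optimal hyperparameters in Table 3 would map onto the model constructors roughly as follows (a sketch under that assumption, not the authors' verified code; GPC and LR keep library defaults, as stated in the table):

```python
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from xgboost import XGBClassifier

tuned_models = {
    "GPC": GaussianProcessClassifier(),          # default settings (Table 3)
    "SVM": SVC(kernel="rbf", C=10, gamma=1, probability=True),
    "MLP": MLPClassifier(alpha=0.0001, max_iter=100, learning_rate_init=0.01,
                         solver="adam", batch_size=100,
                         hidden_layer_sizes=(100,)),
    "RF": RandomForestClassifier(max_features=3, min_samples_split=6,
                                 max_depth=15, n_estimators=30),
    "XGBoost": XGBClassifier(learning_rate=0.1, n_estimators=200, max_depth=7,
                             min_child_weight=1, gamma=0.1, subsample=0.6,
                             colsample_bytree=0.8),
    "LR": LogisticRegression(),                  # default settings (Table 3)
}
```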
Table 4. Accuracy, precision, recall, and F1 score of the six models.

Model Type | Accuracy | Precision | Recall | F1 Score
SVM | 0.9105 | 0.9062 | 0.9105 | 0.9070
GPC | 0.8988 | 0.8925 | 0.8988 | 0.8926
MLP | 0.9095 | 0.9052 | 0.9095 | 0.9060
RF | 0.9191 | 0.9154 | 0.9191 | 0.9153
XGBoost | 0.8860 | 0.6340 | 0.7736 | 0.6969
LR | 0.8871 | 0.8791 | 0.8871 | 0.8746
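The scores in Table 4 correspond to standard classification metrics. As a minimal sketch, they can be computed for any fitted model from the earlier examples with scikit-learn; weighted averaging over the two classes is one common choice and is assumed here:

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]   # probability of the hydrate class

print("Accuracy :", accuracy_score(y_test, y_pred))
print("Precision:", precision_score(y_test, y_pred, average="weighted"))
print("Recall   :", recall_score(y_test, y_pred, average="weighted"))
print("F1 score :", f1_score(y_test, y_pred, average="weighted"))
print("AUC      :", roc_auc_score(y_test, y_prob))   # area under the ROC curve
```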
