Quantitative Interpretation of TOC in Complicated Lithology Based on Well Log Data: A Case of Majiagou Formation in the Eastern Ordos Basin, China

Abstract: Source rock evaluation plays a key role in studies of hydrocarbon accumulation and resource potential. Total organic carbon (TOC) is the basis of source rock evaluation and a key parameter influencing petroleum resource assessment. The Majiagou Formation in the eastern Ordos Basin has complicated lithology and low abundance of organic matter, and opinions differ over whether source rocks of effective scale exist there. Because laboratory TOC data in the Ordos Basin are inadequate, it is difficult to describe source rocks in the region accurately; thus, log interpretation of TOC is needed. In this study, the neural network model from the field of artificial intelligence (AI) was introduced into TOC logging interpretation. Compared with the traditional ∆logR method, sample optimization, logging correlation analysis and comparative optimization of computational methods were carried out successively using measured TOC data and logging data. Results show that the neural network model has a good prediction effect in regions of complicated lithology and can accurately identify variations of TOC in continuous strata regardless of rapid lithologic changes.


Introduction
Total organic carbon (TOC) content is an important index for evaluating source rock in oil and gas exploration. On the one hand, measured TOC data are limited: they come from laboratory analysis of cores, and the limited core samples and experimental cost lead to inadequate TOC data. On the other hand, source rock shows strong heterogeneity. There is a substantial error between TOC data gained by traditional methods and the real TOC, and such errors cannot meet the requirements of source rock evaluation.
Schmoker et al. [1] pointed out that TOC content has linear relations with density logging (Schmoker, 1979) and gamma-ray logging (Schmoker, 1981), so TOC content can be calculated from density and gamma logging. Chen et al. [2] calculated the TOC of source rock in a carbonate formation by establishing an interpretation formula between natural gamma-ray logging and TOC, which achieved a good interpretation effect. When studying the Pearl River Mouth Basin, Xu et al. [3] found that TOC interpretation based on multiple logging parameters was significantly better than that based on a single logging parameter. Du et al. [4] calculated TOC contents using both the single-parameter and multi-parameter interpretation methods. They likewise concluded that interpretation based on multiple logging parameters was better, and that the accuracy of the multi-parameter regression increased with the number of logging parameter types. Exxon and Esso proposed the method of overlapping the interval transit time and resistivity curves in 1979, that is, the ∆logR analytical method: the porosity curve and resistivity curve are overlaid, and source rocks are identified through the amplitude difference between the two curves. Passey et al. [5] proposed a calculation formula of TOC based on data statistics:

TOC = ∆logR × 10^α (1)

α = 2.297 − 0.1688 × LOM (2)

∆logR = lg(R/R_base) + K × (∆t − ∆t_base) (3)

where R_base and ∆t_base are the baseline values of resistivity and interval transit time, respectively, K is the proportionality coefficient and LOM is the maturity of organic matter. This is the ∆logR method in common use at present. Although the ∆logR method has strong applicability, it needs manual determination of the logging baselines, and baselines vary significantly among wells as well as among strata and sedimentary environments [6].
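Equations (1)-(3) can be combined into a short calculation, sketched below. The baseline values and LOM still have to be supplied by the interpreter; the default K = 0.02 is Passey's coefficient for ∆t in µs/ft and would need rescaling for µs/m.

```python
import math

def delta_log_r(rt, dt, rt_base, dt_base, k=0.02):
    """Curve separation of Eq. (3).

    rt, dt           -- resistivity (ohm.m) and interval transit time
    rt_base, dt_base -- baselines picked manually in an organic-lean interval
    k                -- proportionality coefficient (0.02 per us/ft in
                        Passey's original formulation)
    """
    return math.log10(rt / rt_base) + k * (dt - dt_base)

def toc_passey(rt, dt, rt_base, dt_base, lom, k=0.02):
    """TOC from Eqs. (1)-(2): TOC = dlogR * 10^(2.297 - 0.1688*LOM)."""
    dlr = delta_log_r(rt, dt, rt_base, dt_base, k)
    return dlr * 10 ** (2.297 - 0.1688 * lom)
```

At the baselines the separation is zero and the predicted TOC is zero, as the method intends; any curve separation above baseline yields a positive TOC that grows with resistivity.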
More and more scholars believe that the relationship between the TOC of source rock and logging parameters is not simply linear and is difficult to capture with an explicit function [7-9]. The neural network model possesses remarkable advantages in addressing such nonlinear problems, which cannot be expressed in explicit functions. Hence, neural network modelling has become a new way to carry out TOC logging interpretation of source rock [10,11]. Neural networks have obvious advantages here: the interpretation result not only conforms well to laboratory analysis data but also retains detailed changes of TOC. However, TOC logging interpretation based on neural networks rests on complicated principles and requires professional software programming, which raises the threshold of application; the operation is completed automatically by software, without any disclosure of details. As a result, TOC logging interpretation of source rock based on neural networks is difficult to popularize.
Recently, many new TOC log interpretation methods have been proposed. Li [12], Rui [13], Amosu [14] and Liu [15] carried out TOC log interpretation using SVM. Zhao [16] and Yin Mei et al. [17] introduced the optimal estimation and Bayesian discriminant into TOC log interpretation. Yu [6] and Rui [18] interpreted TOC of source rock through Gaussian regression. Although there are many TOC log interpretation methods of source rock, no method is universally applicable. The optimal TOC log interpretation method shall be chosen according to practical conditions of the study area.

Geological Background
The Ordos Basin is the second largest sedimentary basin in China, with an area of about 25 × 10^4 km^2. It is a multicycle cratonic petroliferous basin with stable sedimentation, migrating depression and obvious torsion, and a polybasic compound sedimentary basin of Meso-Cenozoic, Proterozoic and Paleozoic rocks [19].
The Ordos Basin has two sets of source rocks: the Upper Paleozoic marine-continental transitional coal series and the Lower Paleozoic marine carbonate rocks. Gas sources are sufficient [20].
The study area is the LT1 well area in the east of the Ordos Basin (Figure 1). The target strata in this study are Ma 3 and submembers 6-10 of Ma 5 (Ma 5^6, Ma 5^7, Ma 5^8, Ma 5^9 and Ma 5^10) in the Ordovician Majiagou Formation. The major lithologies include dolomite, limestone, salt rock, gypsum rock and mudstone. Ma 1 and Ma 3 are characterized by anhydrite centered on salt rocks and evaporative lacustrine deposits of dolomite. Ma 5^6, Ma 5^8 and Ma 5^10 are secondary regressive sediments, mainly gypsum-bearing dolomite and gypsum rocks; Ma 5^7 and Ma 5^9 are secondary transgressive deposits, mainly composed of carbonate rocks (Figure 2). The Majiagou Formation experienced three large-scale changes in sea level from M1 to M6. The sea level dropped and part of the water remained on the land during the deposition of M1, M3 and M5, and a large amount of gypsum salt rock was formed by evaporation and concentration.
The salinization process makes water anoxic and forms a reducing environment favorable for organic matter preservation. A large amount of organic matter is enriched and high-abundance source rocks can be formed.

Optimal Sampling
Source rocks in the Majiagou Formation in the Ordos Basin mainly comprise thin-layered dark dolomitic mudstones and argillaceous dolomite. The LT1 well has full-section coring and plenty of measured TOC data, together with relatively complete conventional logging data, including natural gamma, interval transit time, neutron porosity, density and resistivity. Therefore, the core of the LT1 well in the eastern Ordos Basin was chosen and its logging data were used as the basic data. After core depth matching and logging normalization, 309 TOC measurements and the corresponding logging data were gained.
During the sedimentary period of the Majiagou Formation in the eastern Ordos Basin, sea level changed frequently, resulting in very complicated lithology and common thin interbedded formations. As Figure 3 shows, gypsum rock and dolomite occur as thin alternating deposits, and the thickness of some lithologic strata is smaller than 1 cm, below the logging resolution. Although the core lithology changes, the logging curve shows no obvious response, causing a mismatch in the rock-electricity relations. Some thin strata have relatively high TOC, but they are too thin to generate obvious logging responses (Figure 3).

Moreover, TOC logging responses are not significant because TOC measured data of source rock in the study area are generally inadequate. In addition, external factors (e.g., lithology) strongly influence the logging response. As a result, the correlation between TOC measured data and logging data is very poor (Figure 4), and the TOC sample data must be processed.
To ensure consistent resolution between core data and logging data and to improve the correlation between measured TOC and the corresponding logging data, the laboratory TOC data were processed according to the following two principles.
1. For sections with relatively homogeneous lithology, the actual core lithology and the mean TOC were used.
2. For sections with thin, complicated lithology, a thin stratum can hardly act as an effective source rock even when its TOC content is very high. The core lithology of the section was therefore used to represent its lithology, and the TOC of the section was taken as the mean after excluding singular values.
Based on the above principles, 158 sample points were chosen as the sample data to construct the TOC quantitative log interpretation model.

Correlation Analysis of Logging Curves
The correlation analysis result is an important basis for selecting logging data for the TOC quantitative prediction model. Through correlation analysis, logging data that are highly correlated with the TOC sample data were chosen to establish the TOC quantitative prediction model. The logging parameters most sensitive to TOC changes were determined from the correlation analysis between the 158 TOC measured samples and the logging data (Figure 5).
Radioactivity of source rocks mainly comes from clay minerals and organic carbon. During deposition, radioactive elements are adsorbed by clay minerals and organic matter, increasing the radioactivity of source rocks. The reducing environment formed by kerogen in source rock is beneficial for the deposition of the radioactive element uranium (U). In addition, source rock is characterized by fine particles and a large surface area, and its organic matter strongly adsorbs radioactive elements [21,22]. Hence, organic content influences the value of natural gamma-ray logging: the radioactivity of source rock strengthens as TOC content increases, and TOC content is positively correlated with natural gamma (Figure 5a).
Organic matter in source rock has a medium to high interval transit time, so the interval transit time increases when organic matter is present. TOC content is positively correlated with interval transit time (Figure 5b).
Solid organic matter is lighter than the protolith and its density is close to that of water [23]. Moreover, the density of source rock is negatively related to maturity. As organic content increases, the density of source rock decreases (Figure 5c).
Macerals of organic matter in source rock are rich in hydrogen, which influences the neutron logging value and brings abnormally high neutron readings. Therefore, neutron porosity is positively correlated with TOC content (Figure 5d).
Since kerogen and oil and gas have relatively poor conductivity, formation resistivity increases where organic matter is enriched. In the practical data, however, resistivity is only slightly influenced by TOC because of the low TOC of source rock in the study area, which implies an insignificant correlation between resistivity and TOC (Figure 5e).
Radioactivity in source rock is mainly attributed to radioactive elements (e.g., U) adsorbed during sedimentation. Hence, natural gamma spectrometry is positively correlated with TOC content (Figure 5f-h).

The correlations between the re-selected TOC sample data and the logging curves are improved significantly. At the 99% confidence level, the T critical value is 2.3578, and the T statistic of each log is greater than this value (Table 1), which proves that the variables are significantly correlated at the 99% confidence level. According to the correlation analysis, natural gamma spectrometry (K), natural gamma-ray (GR) and density (DEN) are the logs most correlated with TOC.
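The significance test above can be reproduced with the usual t statistic for a Pearson correlation coefficient, t = r·√(n−2)/√(1−r²); with n = 158 samples (156 degrees of freedom), the quoted 99% critical value is 2.3578. A minimal sketch (the `significant` helper and its defaults are illustrative, not from the paper):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def t_statistic(r, n):
    """t = r * sqrt(n-2) / sqrt(1-r^2) for testing r against zero."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

def significant(r, n=158, t_crit=2.3578):
    """Is |t| above the 99% critical value for n samples?"""
    return abs(t_statistic(r, n)) > t_crit
```

With 158 samples, even a modest r of about 0.19 clears the 2.3578 threshold, which is why all the re-selected logs pass the test while still differing widely in how useful they are for prediction.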

Methods and Results
In this experiment, TOC was predicted using 158 groups of measured data and the corresponding logging data. The traditional ∆logR method and the neural network method were compared and optimized in the Ma 3 and Ma 5^6-Ma 5^10 layers; on this basis, it was determined whether complicated-lithology TOC interpretation in the study area is better served by a linear model or a nonlinear model. Calculation results were evaluated by the determination coefficient (R^2), mean absolute error (MAE), mean relative error (MRE) and root-mean-square error (RMSE). The calculation effect is better when R^2 approaches 1 and MAE, MRE and RMSE approach 0.

Traditional ∆logR Method
Combining Equations (1)-(3), the complete TOC calculation formula of the traditional ∆logR method is:

TOC = [lg(R/R_base) + K × (∆t − ∆t_base)] × 10^(2.297 − 0.1688 × LOM) (4)

where R_base and ∆t_base are the baseline values of resistivity and interval transit time, K is a proportionality coefficient and LOM is the maturity of organic matter. The baseline values are highly subjective since they must be determined manually, and it is very difficult to calculate maturity accurately, so the traditional ∆logR method has several disadvantages. Moreover, some parameters in the traditional ∆logR method were obtained by Passey et al. from practical experience [5]. The traditional ∆logR method is only applicable to formations with interval transit times of 260~460 µs/m [24], whereas the interval transit time of the formations in this study area is 130~220 µs/m. Therefore, the formula of the traditional ∆logR method is not applicable to the study area.
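The four evaluation indexes used throughout this section can be computed as follows (a minimal sketch; MRE assumes no measured TOC value is zero):

```python
import math

def evaluate(measured, predicted):
    """Return (R^2, MAE, MRE, RMSE) for a TOC prediction model."""
    n = len(measured)
    mean_m = sum(measured) / n
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean_m) ** 2 for m in measured)
    r2 = 1.0 - ss_res / ss_tot
    mae = sum(abs(m - p) for m, p in zip(measured, predicted)) / n
    # relative error is undefined where measured TOC is zero
    mre = sum(abs(m - p) / m for m, p in zip(measured, predicted)) / n
    rmse = math.sqrt(ss_res / n)
    return r2, mae, mre, rmse
```

A perfect prediction gives R^2 = 1 and zero errors; the worse the fit, the further R^2 falls below 1 and the larger the three error measures become.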

Improved ∆logR Method
For source rocks in the same formation in a practical working area, R_base and ∆t_base in the traditional ∆logR formula (Equation (4)) can be viewed as constant, and the maturity of the source rocks is basically consistent. The coefficient K can be obtained by fitting practical data; the original TOC formula can therefore be simplified into:

TOC = a × lg R + b × ∆t + c (5)

where a, b and c are obtained by fitting practical data from the working area. The sample data were put into Equation (5) for parameter fitting using SPSS software, and TOC log interpretation formulas corresponding to the different target formations were gained.
where R is the value of the resistivity curve and ∆t is the value of the interval transit time.
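The coefficient fitting described above (done with SPSS in the paper) is an ordinary least-squares problem and can be sketched with numpy; the function names here are illustrative:

```python
import numpy as np

def fit_improved_dlogr(rt, dt, toc):
    """Least-squares fit of TOC = a*lg(R) + b*dt + c (Eq. (5)).

    rt, dt, toc -- 1-D arrays of resistivity, interval transit time and
                   measured TOC for one target formation.
    """
    A = np.column_stack([np.log10(rt), np.asarray(dt, float),
                         np.ones(len(rt))])
    (a, b, c), *_ = np.linalg.lstsq(A, np.asarray(toc, float), rcond=None)
    return a, b, c

def predict_improved_dlogr(rt, dt, coeffs):
    """Apply the fitted Eq. (5) to new log readings."""
    a, b, c = coeffs
    return a * np.log10(rt) + b * np.asarray(dt, float) + c
```

Because Equation (5) is linear in lg R and ∆t, the fit is a closed-form least-squares solution; separate coefficient sets would be fitted per target formation, as the text indicates.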

Interpretation Effect Analysis of the ∆logR Method
A scatter plot of predicted versus measured TOC was gained from the prediction model (Figure 6). The R^2, MAE, MRE and RMSE were calculated as 0.3289, 0.0671, 33.02% and 8.406%, respectively.

The TOC of the LT1 well was interpreted by the ∆logR method, and the linear relationship between interpreted TOC and measured TOC was poor (Figure 6). Furthermore, the interpretation results were input into the software and a continuous TOC curve was plotted. The curve could not match the laboratory-measured TOC of the core well and could only reflect the general variation trend (Figure 7).

Basic Principle
The BP (back propagation) neural network decreases the error between the output data and the sample results by correcting the weights of its internal layers against existing learning samples, finally yielding a relatively well-trained neural network. The network is mainly composed of three parts (Figure 8). Part 1 inputs the original data for subsequent training and is called the input layer. Part 2 is the core of the neural network; it processes the input data and analyzes the characteristics of the data, although the specific operation of this process is kept within the network, and it is called the hidden layer. Part 3 outputs the data processed by the previous part and is called the output layer. In every part of the neural network, each layer is composed of several neural nodes, which are mutually connected into the neural network.
The training process of the BP neural network is a process of data transmission among neural nodes. After data input, data first transmit from the input layer to the output layer, during which each layer is endowed with certain weights. After the processed input data arrive at the output layer, they are compared with the expected data and the differences are calculated. If the error is within the preset permissible range, the data can be output; at this moment, neural network training is finished. If the error exceeds the permissible range, the error propagates back from the output layer to the input layer, during which the weights of each layer are modified continuously. After the correction, the processed data are compared with the expected data again and the error is recalculated. This process repeats until the error reaches the permissible range.

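The forward pass, error comparison and weight-correction loop described above can be sketched in a few lines of numpy. This is an illustrative gradient-descent sketch, not the MATLAB implementation used in the paper; the 3-9-1 structure assumes the three input logs (K, GR, DEN), nine hidden nodes and one TOC output adopted later in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 inputs, one hidden layer of 9 sigmoid nodes, 1 linear output node
W1, b1 = rng.normal(0.0, 0.5, (3, 9)), np.zeros(9)
W2, b2 = rng.normal(0.0, 0.5, (9, 1)), np.zeros(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X):
    H = sigmoid(X @ W1 + b1)     # hidden-layer activations
    return H, H @ W2 + b2        # network output

def train(X, y, lr=0.1, epochs=2000, tol=1e-4):
    """Repeat forward pass / error back-propagation until the error
    falls inside the permissible range (tol) or epochs run out."""
    global W1, b1, W2, b2
    y = np.asarray(y, float).reshape(-1, 1)
    for _ in range(epochs):
        H, out = forward(X)
        err = out - y                        # output-layer error
        if np.mean(err ** 2) < tol:          # error within tolerance: stop
            break
        # propagate the error backwards and correct the weights
        dW2 = H.T @ err / len(X)
        dH = (err @ W2.T) * H * (1.0 - H)    # sigmoid derivative
        dW1 = X.T @ dH / len(X)
        W2 -= lr * dW2; b2 -= lr * err.mean(0)
        W1 -= lr * dW1; b1 -= lr * dH.mean(0)
```

Each loop iteration is one forward transmission plus one backward error propagation; training stops when the mean squared error enters the preset permissible range, exactly the stopping rule the text describes.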

Construction of Neural Network
A BP neural network was built using MATLAB software. A large sample size ensures that the neural network learns the data features to the maximum extent and thus increases its accuracy. Constructing the BP neural network involves data processing, data input, parameter setting and algorithm selection, introduced as follows:
1. Data normalization
Since different logging data have different units that may influence training of the model, it is necessary to normalize the TOC data and logging data before they are input into the model, in order to improve the accuracy of the model and shorten the operation time. The normalization uses the range method:

X_n = (X − X_min)/(X_max − X_min) (8)

where X_n is the normalized sample value of the logging curve, X is the original logging value, X_min is the minimum and X_max is the maximum.
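The range (min-max) normalization above is a one-liner per curve; a minimal sketch:

```python
def min_max_normalize(curve):
    """Range normalization: Xn = (X - Xmin) / (Xmax - Xmin).

    Maps a logging curve onto [0, 1]; assumes the curve is not constant."""
    x_min, x_max = min(curve), max(curve)
    return [(x - x_min) / (x_max - x_min) for x in curve]
```

Each input log (and the TOC target) is normalized independently, so curves with very different units, such as GR in API and DEN in g/cm^3, contribute on the same scale during training.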

2. Data input
According to correlation analysis results, the normalized K, GR and DEN logging data were input into the neural network.
To avoid overfitting and ensure training quality, a cross-validation mode was used in training the neural network. The training set, verification set and test set were allocated randomly in proportions of 70%, 15% and 15%, respectively. Data in the training set (110 groups) were used for the neural network to learn and identify data characteristics. Data in the verification set (24 groups) were used to test whether overfitting occurs (overfitting occurs if the error on the data used for learning keeps decreasing while the error on the verification data stops decreasing). Data in the test set (24 groups) were used to test the network's ability to predict unknown samples.
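The random 70/15/15 allocation can be sketched as follows; rounding the two 15% sets first and giving the remainder to the training set reproduces the 110/24/24 partition of the 158 samples quoted above (the seed and function name are illustrative):

```python
import random

def split_samples(samples, seed=0):
    """Random split into training (~70%), verification (~15%) and
    test (~15%) sets, with the training set taking the remainder."""
    idx = list(range(len(samples)))
    random.Random(seed).shuffle(idx)
    n_val = round(0.15 * len(samples))
    n_test = round(0.15 * len(samples))
    val = [samples[i] for i in idx[:n_val]]
    test = [samples[i] for i in idx[n_val:n_val + n_test]]
    train = [samples[i] for i in idx[n_val + n_test:]]
    return train, val, test
```

Shuffling before slicing guarantees the three sets are disjoint and randomly drawn, so the verification set gives an unbiased overfitting check during training.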
3. Parameter setting of the hidden layer
Increasing the number of hidden layers can decrease the error of the neural network and increase interpretation accuracy. Nevertheless, excessive hidden layers cause more operations during data transmission in the neural network, so assigning each layer takes more time. Moreover, although additional hidden layers help explore data characteristics more thoroughly, with a limited sample size some characteristics may exist only in the learning samples and not apply to the whole dataset, so overfitting occurs. A BP neural network can still approximate any function by adjusting the number of training iterations or the number of nodes in the hidden layer, even when it contains only one hidden layer. To reduce computation while preserving accuracy, the number of hidden layers was set to 1 during construction of the BP neural network.
Setting the number of nodes in the hidden layer is crucial to constructing the neural network. If the number of nodes is too high, the computational load during data transmission increases. If it is too low, the network cannot learn the data features accurately, so the trained network cannot recognize unlearned samples. Many studies have been carried out to determine the number of nodes in the hidden layer, and many empirical formulas have been proposed.
According to the Kolmogorov theorem, when there is only one hidden layer, the number of nodes satisfies:

N_hid = 2 × N_in + 1 (9)

where N_hid is the number of nodes in the hidden layer and N_in is the number of nodes in the input layer. The number of nodes in the hidden layer proposed by Jadid et al. [25] satisfies:

N_hid = N_train/(R × (N_in + N_out)) (10)

where N_train is the number of training samples, N_out is the number of nodes in the output layer and 5 ≤ R ≤ 10. Different scholars use different methods to calculate the number of nodes in the hidden layer, and there is no uniform formula. The distribution range of the number of nodes is usually calculated from the empirical formulas, and a specific number is then verified and determined with practical data. To optimize the neural network structure and save learning time, it is suggested to use the fewest nodes that still ensure precision. Based on the above formulas, the number of nodes in the hidden layer of the neural network should be 3~13. The neural network was trained with data from the practical working area: with 9 nodes in the hidden layer, the training error was the lowest and the correlation among samples was the highest (Table 2). Therefore, the number of nodes in the hidden layer of the constructed neural network was set to 9 in this study area.
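The empirical formulas were garbled in extraction; the sketch below uses their commonly published forms (N_hid = 2·N_in + 1 for the Kolmogorov-style rule, N_hid = N_train/(R·(N_in + N_out)) for Jadid et al.), which may differ slightly from the authors' exact expressions. With the paper's values (3 input logs, 110 training samples, 1 output) they bracket a candidate range that is then scanned against training error (Table 2):

```python
def kolmogorov_nodes(n_in):
    """Kolmogorov-style rule for a single hidden layer: N_hid = 2*N_in + 1."""
    return 2 * n_in + 1

def jadid_nodes(n_train, n_in, n_out, r):
    """Empirical estimate attributed to Jadid et al.:
    N_hid = N_train / (R * (N_in + N_out)), with 5 <= R <= 10."""
    return n_train / (r * (n_in + n_out))
```

For N_in = 3 the first rule gives 7 nodes, while the second gives 2.75 to 5.5 nodes as R runs from 10 down to 5; node counts around these estimates are then trained one by one, and 9 nodes gave the lowest error in this study area.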

4. Algorithm selection
The Levenberg-Marquardt algorithm, the Bayesian regularization algorithm and the scaled conjugate gradient algorithm are three algorithms commonly used for training [26].
The Levenberg-Marquardt algorithm converges quickly but places higher demands on the computing machine. The Bayesian regularization algorithm avoids overfitting to particular characteristic variables by retaining all characteristic variables, which takes a longer operation time. The scaled conjugate gradient algorithm does not need a line search, operates quickly and requires little memory, but it is not applicable to all datasets. In practical verification, the Levenberg-Marquardt algorithm completed neural network training most quickly and accurately, so it was chosen to train the neural network in this study.

Interpretation Effect Analysis of Neural Network Method
The scatter plot between predicted TOC and measured TOC was obtained from the output results of the neural network (Figure 9). Meanwhile, R 2 , MAE, MRE and RMSE were calculated as 0.8254, 0.0322, 15.88% and 4.297%, respectively. Interpretation results were imported into the software and a continuous TOC curve was plotted (Figure 10). Clearly, the fitting degree between the predicted TOC and measured TOC is high, indicating relatively good prediction effects.
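The four evaluation indexes used above can be reproduced with a short script. The measured and predicted TOC arrays below are hypothetical placeholders for illustration, not the data of the LT1 well.

```python
import numpy as np

def toc_metrics(measured, predicted):
    """R^2, MAE, MRE (%) and RMSE between measured and predicted TOC."""
    m = np.asarray(measured, float)
    p = np.asarray(predicted, float)
    ss_res = np.sum((m - p) ** 2)
    ss_tot = np.sum((m - m.mean()) ** 2)
    return {
        "R2": 1 - ss_res / ss_tot,
        "MAE": np.mean(np.abs(m - p)),
        "MRE": np.mean(np.abs(m - p) / m) * 100,    # percent; assumes measured TOC > 0
        "RMSE": np.sqrt(np.mean((m - p) ** 2)),
    }

# Hypothetical core-measured vs. network-predicted TOC values (wt%).
measured = [0.20, 0.35, 0.50, 0.80, 1.10]
predicted = [0.22, 0.33, 0.55, 0.75, 1.05]
print(toc_metrics(measured, predicted))
```

R^2 and RMSE grade the overall fit, while MAE and MRE expose whether errors concentrate in the low-TOC samples that matter most for source rocks of low organic abundance.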

Discussion
The predicted parameters of the ∆logR method and the neural network method are summarized in Table 3. R 2 of the neural network model is closer to 1 and its RMSE is closer to 0 compared with those of the ∆logR model, indicating that the calculated results of the neural network are closer to the measured TOC. In other words, the neural network shows better TOC calculation performance in terms of mathematical indexes. Judging from the practical calculation effects of TOC (Figure 11), the neural network model is more applicable to accurate calculation of TOC against a background of quickly changing lithology. On the contrary, the ∆logR model can only reflect the overall variation trend of TOC, but cannot depict detailed changes of TOC.
The ∆logR method uses only two kinds of logging information to identify source rocks. However, GR logging and DEN logging, which are not used in the ∆logR method, can also reflect information of source rocks. This might be one reason for the poor application effect of the ∆logR method in the study area. The previous analysis of the correlation between measured TOC and logging curves also found that the correlation between TOC of source rocks and resistivity in the study area is extremely weak, with almost no correlation; this might be another reason. Moreover, the ∆logR method usually performs well in regions with stable lithology, whereas the lithology of the study area is complicated and changes quickly, which further degrades its application effect.
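For reference, the ∆logR calculation itself (the Passey formulas quoted in the Introduction) can be sketched as follows. The proportionality coefficient K = 0.02 is the value commonly paired with sonic transit time in µs/ft, and the example readings and baselines are hypothetical.

```python
import numpy as np

def delta_log_r_toc(rt, dt, rt_base, dt_base, lom, k=0.02):
    """TOC from the DlogR overlay method.

    DlogR = log10(R / R_base) + K * (dt - dt_base)
    TOC   = DlogR * 10 ** (2.297 - 0.1688 * LOM)
    """
    dlogr = np.log10(np.asarray(rt, float) / rt_base) + k * (np.asarray(dt, float) - dt_base)
    return dlogr * 10 ** (2.297 - 0.1688 * lom)

# Hypothetical readings: R = 20 ohm*m, dt = 100 us/ft against baselines of
# 10 ohm*m and 90 us/ft, with organic maturity LOM = 10.
print(delta_log_r_toc(20.0, 100.0, 10.0, 90.0, 10.0))
```

Note that at the baseline values the method returns zero TOC by construction, which is why a mis-picked baseline, or a resistivity curve uncorrelated with TOC as in this study area, propagates directly into the estimate.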
According to the research results of this study, the neural network method is significantly superior to the traditional ∆logR method in terms of TOC calculation in regions with complicated and changing lithology. The TOC distribution predicted by the neural network is closer to practical geological conditions. The neural network method is therefore of great significance for further petroleum exploration and development in the future.

Conclusions
In this study, the neural network in the AI field was introduced into TOC logging calculation. The traditional ∆logR method and the neural network method were used for TOC log interpretation of the source rock, which has low organic abundance and complicated lithology, in the Majiagou Formation of the LT1 well in the eastern Ordos Basin. Some major conclusions can be drawn:
1. The source rock in the study area has low organic abundance and the logging response to TOC is not significant. The complicated and changing lithology also weakens the influence of TOC variations on logging, resulting in generally poor correlations between logging data and measured TOC. Therefore, the ∆logR method, which calculates TOC based on a linear relationship, can hardly achieve ideal outcomes. The calculated results of the ∆logR method can only reflect the general longitudinal variation trend of TOC but cannot provide a thorough depiction of TOC changes.
2. When calculating TOC in regions with alternating lithology, the neural network model can retain details of TOC changes and thereby reflect TOC variations faithfully. Furthermore, the neural network model has higher prediction accuracy and stronger adaptability owing to its remarkable nonlinear mapping capacity and flexible network structure. It can provide more reliable references for petroleum exploration and development in the future.

Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.

Conflicts of Interest:
The authors declare no conflict of interest.