Prediction of Permeability Using Group Method of Data Handling (GMDH) Neural Network from Well Log Data

Permeability is an important petrophysical parameter that controls fluid flow within the reservoir. Estimating permeability presents several challenges because the conventional approaches of core analysis and well testing are expensive and time-consuming. By contrast, artificial intelligence has been adopted in recent years to predict reliable permeability data. Despite its shortcomings of overfitting and low convergence speed, the artificial neural network (ANN) has been the most widely used artificial intelligence method. On this basis, the present study predicted permeability using the group method of data handling (GMDH) neural network from well log data of the western arm of the East African Rift Valley. A comparative analysis of the GMDH permeability model against the ANN methods of back propagation neural network (BPNN) and radial basis function neural network (RBFNN) was further explored. The results showed that the proposed GMDH model outperformed BPNN and RBFNN, achieving R/root mean square error (RMSE) values of 0.989/0.0241 for training and 0.868/0.204 for predicting, respectively. A sensitivity analysis revealed that shale volume, standard resolution formation density, and thermal neutron porosity were the most influential well log parameters in developing the GMDH permeability model.


Introduction
Petrophysical properties such as permeability play an important role in understanding the subsurface characteristics of a reservoir and in estimating the oil and gas reserves available. To date, it has remained a difficult task to fully capture and accurately estimate the complex heterogeneity of permeability in the petroleum industry. Permeability, by definition, is the capability of a rock to convey fluid under a specific pressure difference [1][2][3][4][5], and it depends on the interaction of fluid, pore, lithology, and porosity composition [6]. Reservoir permeability is obtained either from direct measurement by core analysis and well testing or from empirical equations based on well log parameters. However, direct measurements from cores and well tests are costly and time-consuming if carried out on all wells, while empirical equations obtained from well logs cannot fully account for the heterogeneity of permeability under all reservoir conditions [7,8].
In line with this, numerous computational intelligence methods have been successfully adopted to model reservoir permeability. The most extensively applied intelligent method in permeability prediction, with improved accuracy over statistical methods, is the artificial neural network (ANN) [3,9]. This is a result of its ability to discover patterns in non-linear and complex data systems [10][11][12][13]. However, the standard ANN is known to exhibit several drawbacks, such as overfitting, convergence at local minima, and low computational speed. In addition, training neural network models requires the input of user-defined parameters: when training a standard neural network, the number of hidden neurons and hidden layers (or the spread parameter) is specified by the user through trial and error. It is necessary to note that the performance of these neural network models depends on the network structure [14][15][16].
Therefore, several attempts to improve the standard ANN have been made [17][18][19][20][21]. To address the issue of model bias faced by the ANN, the group method of data handling (GMDH) automatically synthesizes a network from the database of inputs and outputs. This process, called self-organization of input models, removes the need for the user to specify the network architecture in advance and hence removes bias from the model [22][23][24]. Reduced computational time is another advantage of the GMDH method compared with the standard ANN, made possible by the short analyses in the model-synthesizing process. The GMDH algorithms were first developed by Professor Alexey Grigorevich Ivakhnenko in 1968 with the goal of identifying the relationship between the input layer and the output layer in nonlinear systems [25][26][27]. According to Ayoub et al. [28], GMDH may also be represented as a polynomial neural network, as well as an algorithmic modeling method for defining nonlinear input-output variable relationships. Ma et al. [29] explained that the GMDH algorithm operates by linking sets of neurons through quadratic polynomials, resulting in new sets of neurons in the subsequent layer.
Over the years, the GMDH concept has been widely applied in different areas. Menad et al. [30] applied GMDH and gene expression programming (GEP) to establish reliable correlations for estimating temperature-based oil-water relative permeability. Shen et al. [31] developed a GMDH model that can detect various lithofacies using pre-processing techniques composed of dimensionality reduction (DR) and wavelet analysis (WA). Ayoub et al. [28] utilized GMDH to model oil compressibility below the bubble point pressure. Teng et al. [32] proposed a GMDH model to predict China's transport energy demand, and the proposed model showed very satisfactory predictions. Nguyen et al. [33] proposed the GMDH network for geometrically nonlinear problems of computational mechanics. Safari et al. [34] used the generalized structure of GMDH to model sediment transport in rigid-boundary open channels. Rostami et al. [35] used a GMDH to estimate the heat capacities of ionic liquids. GMDH algorithms have adopted three dissimilar types of elementary functions: probabilistic graphs, second-order polynomials, and Bayes' formulas [36][37][38].
The rapid development of GMDH theory, which now covers a comprehensive spectrum of algorithms, has led to a classification in which GMDH methods can be clustered into a parametric algorithm category and a non-parametric algorithm category [39].
Parametric algorithms can be classified according to the nature of the activation function, such as the partial descriptions (a set of characteristics by which a model can be recognized) or the type of model complexity structure. Single-layer self-organizing algorithms, also known as combinatorial algorithms, perform a fully comprehensive search among all models. Multilayer/iterative algorithms employ an iterative procedure that increases the complexity of the model, where an external criterion selects which models are carried forward to the next layer. In the case of multilayer algorithms, there is no fully comprehensive search over all candidate models; nevertheless, the computation time is shortened and the number of independent variables that can be processed becomes larger. Regarding the nature of the activation function, the algorithms of the GMDH network can be distinguished into harmonic, multiplicative-additive, fuzzy, and polynomial types [40,41].
In this study, GMDH was proposed for the prediction of permeability using well log data. To further ascertain the performance of the GMDH permeability model, its results were compared with the ANN algorithms of back propagation neural network (BPNN) and radial basis function neural network (RBFNN). Finally, a sensitivity analysis was carried out to assess the impact of each well log parameter used to develop the GMDH model.

Data Descriptions
The East African Rift Valley (Gregory Rift) comprises the Main Ethiopian Rift, which runs eastward from the Afar Triple Junction and continues southward as the Kenyan Rift Valley [42]. The western branch of the Rift Valley contains the Albertine Rift and, farther south, the valley of Lake Malawi in south-western Tanzania. North of the Afar Triple Junction, the rift follows one of two known paths: eastward to the Aden Ridge in the Gulf of Aden, and westward to the Red Sea Rift [43,44].
In the present study, we considered three wells, Mpyo 1, Mpyo 2, and Mpyo 3, from the Mpyo field located in the northern part of the Lake Albert rift basin. The field is aligned along the Eastern Graben Trend in the northern part of the Albertine Graben and extends from the Victoria Nile area towards the north-eastern extremity of the basin margin (Figure 1). The depositional environment is the Victoria Nile Delta, where sediments consist of alternating fluvial/deltaic sands and lacustrine shales. The study area has five formations: the Alluvial Sands/Semanya, which comprises alluvial sands with some claystone interbeds; the Upper Paraa Formation, which continues from the alluvial and Semanya sands; the Lower Paraa Formation, which is marked by the appearance of claystone; the Pacego Formation, whose top is marked by a coarsening-upward sand underlain predominantly by claystone with some sand/silt interbeds; and the Wangkwar Formation, which contains the reservoir sands interbedded with claystones and siltstones. The stratigraphy of the studied formations is described in Figure 2.
Well log and permeability data comprising 366 data points from the Mpyo 1 and Mpyo 2 wells were used to train the models. Afterward, the trained models were tested on 280 well log and permeability data points from the Mpyo 3 well. The well log inputs of standard resolution formation density (RHOZ), apparent resistivity focusing mode 5 (RLA5), standard gamma ray (SGR), apparent resistivity focusing mode 1 (RLA1), thermal neutron porosity (TNPH), and the rock's shale volume (VSH), as represented in Figure 3, were selected and used in the development of the GMDH and ANN models. Unlike the other well logs, VSH is a dependent log calculated from the gamma-ray log. The statistical analysis of the data used in this research is summarized in Table 1.
The entire well log and permeability dataset was normalized to the range of 0 to 1 using the min-max normalization approach to ensure that the artificial intelligence models treat the input and output data in an unbiased manner.
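The min-max scaling described above can be sketched as follows. This is an illustrative snippet, not the authors' implementation; the sample density values are hypothetical, and passing the training minima/maxima to the test data is a standard precaution the sketch assumes:

```python
import numpy as np

def min_max_normalize(x, x_min=None, x_max=None):
    """Scale an array column-wise to the [0, 1] range using min-max normalization.

    The training minima/maxima can be passed in so that test data are
    scaled with the same statistics, avoiding information leakage.
    """
    x = np.asarray(x, dtype=float)
    x_min = x.min(axis=0) if x_min is None else x_min
    x_max = x.max(axis=0) if x_max is None else x_max
    return (x - x_min) / (x_max - x_min), x_min, x_max

# Example: normalize a small synthetic bulk-density column (hypothetical values)
density = np.array([2.10, 2.35, 2.65, 2.20])
scaled, lo, hi = min_max_normalize(density)
```

The returned `lo` and `hi` would then be reused when scaling the prediction well's data.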

Group Method of Data Handling (GMDH) Model
The Group Method of Data Handling (GMDH) is a family of multi-layer algorithms in which a network of nodes and layers is generated from a number of selected inputs of the data set being assessed. The idea of this type of neural network was first presented by A. G. Ivakhnenko during the 1960s as a method of identifying nonlinear relationships between the input variables and the output variables [17,22].
The GMDH neural network algorithms combine the benefits of self-organizing principles and multilayer neural networks to choose the best associations between variables. Firstly, the GMDH algorithm automatically discovers the inter-relations between variables and selects the best model to fit the data. Secondly, the GMDH neural network, like the artificial neural network, combines the concepts of the black box, the biological neuron technique, the inductive technique, the probability concept, and many other approaches [45]. When generating a GMDH network, a set of combinatorial inputs is developed and passed in full into the first layer of the network. This layer generates outputs, which are classified and carefully selected to give another set of combinations as inputs into the succeeding layer [46]. This process persists until the results from a succeeding layer (k + 1) are no longer better than the results from its predecessor, layer (k).

The GMDH builds a general mathematical link between the input and output parameters, which can also be termed a reference function. The model corresponding to the minimum of the selection criterion is the required non-physical model, whose correctness is estimated by the criterion of the estimating error variation as follows:

E = \sqrt{\frac{\sum_{i=1}^{N} (y_i - Y_i)^2}{\sum_{i=1}^{N} (y_i - \bar{y})^2}}    (1)

where y_i is the actual parameter value, Y_i is the predicted parameter value, and \bar{y} is the mean of the parameter values; E can often be called the relative root mean square error. With a minimum value of E and with extended but otherwise identical samples, a deductive method can be applied, i.e., the best physical models can be selected by an internal criterion within each group, with the ending of the iteration entrusted to an expert [47].
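Assuming the criterion takes the relative form given in Equation (1), it can be computed as in this short sketch (the function name is illustrative):

```python
import numpy as np

def relative_rmse(y_actual, y_pred):
    """External selection criterion E of Equation (1): the squared prediction
    error normalized by the spread of the actual data about its mean."""
    y_actual = np.asarray(y_actual, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    num = np.sum((y_actual - y_pred) ** 2)
    den = np.sum((y_actual - y_actual.mean()) ** 2)
    return np.sqrt(num / den)
```

A perfect model gives E = 0, while a model that only predicts the mean gives E = 1, which is what makes this criterion convenient for comparing candidate layers.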
The network connection between the input variables and the single output of a self-organizing neural network with multiple inputs is formulated through a discrete form of the Volterra functional polynomial series, known as the Kolmogorov-Gabor (KG) polynomial [40,46]:

y = a_0 + \sum_{i=1}^{M} a_i x_i + \sum_{i=1}^{M} \sum_{j=i}^{M} a_{ij} x_i x_j + \sum_{i=1}^{M} \sum_{j=i}^{M} \sum_{k=j}^{M} a_{ijk} x_i x_j x_k + \cdots    (2)

where A = (a_0, a_i, a_{ij}, a_{ijk}, ...) is the vector of weights or coefficients, y is the corresponding output value, and X = (x_1, x_2, x_3, ..., x_M) is the vector of input variables. The system of equations over all observations can be represented in the form of a matrix X in Equation (3).
For the sake of simplicity, Equation (2) can be substituted by a system of partial polynomials of the form

y = a_0 + a_1 x_p + a_2 x_q + a_3 x_p x_q + a_4 x_p^2 + a_5 x_q^2    (4)

The GMDH neural network algorithm follows a systematic set of steps (Figure 4) to model the intrinsic connection between the input parameters and the targeted output [48]. A data sample of N observations and M independent variables is needed; the data set is divided into training data and predicting data.
A new regression polynomial of the same form as Equation (4) is created by taking all combinatorial possibilities of the independent variables of Equation (3) (the matrix X) two at a time, with p and q denoting columns of the matrix X.
The training data set A and Equation (5) are employed to calculate a set of regression constants for all partial functions by a parameter approximation technique [49]. The newly calculated regression constants are stored in a new matrix U (see Equation (6)).
Mathematically, the number of combinations c of input pairs is determined by

c = \frac{M(M - 1)}{2}    (7)

The polynomial is assessed at each of the N data points to compute a new estimate y_{i,qp}. In an iterative manner, the procedure is repeated until all pairs have been estimated, producing new regression pairs that are stored in another new matrix, termed matrix Y in Equation (10). These newly generated regression pairs can be taken as new, enhanced variables with better predictability than the original data set X (Equation (3)). The quality of these functions is evaluated objectively according to the criterion adopted, using the set of predicting data; this is done by comparing each column of the newly created matrix Y individually with the dependent variable y_i.
Therefore, this whole process is repeated as long as the error variation remains smaller than the error obtained in the previous layer. A GMDH model for the designed data sets is obtained by backtracking the entire route of polynomials that correspond to the lowest relative root mean square error in each layer. Figure 4 is a schematic diagram illustrating a self-organizing GMDH algorithm.
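A single partial descriptor of Equation (4) can be fitted by ordinary least squares, as in the sketch below. The function names are illustrative (the paper's implementation is in MATLAB):

```python
import numpy as np

def fit_partial_descriptor(xp, xq, y):
    """Least-squares fit of the GMDH partial polynomial of Equation (4):
    y = a0 + a1*xp + a2*xq + a3*xp*xq + a4*xp**2 + a5*xq**2.
    Returns the six regression coefficients (a0..a5)."""
    A = np.column_stack([np.ones_like(xp), xp, xq, xp * xq, xp**2, xq**2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def eval_partial_descriptor(coeffs, xp, xq):
    """Evaluate Equation (4) with the fitted coefficients."""
    a0, a1, a2, a3, a4, a5 = coeffs
    return a0 + a1 * xp + a2 * xq + a3 * xp * xq + a4 * xp**2 + a5 * xq**2
```

Fitting one such descriptor for every pair of columns of X, and scoring each on the predicting data, is exactly the per-layer work described above.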


Performance Indicators
Statistical parameters, namely the correlation coefficient (R) and the root mean square error (RMSE), were adopted to compare the performance of the GMDH and ANN permeability models. The mathematical expressions for R and RMSE are provided in Equations (11) and (12), respectively [50,51]:

R = \frac{\sum_{i=1}^{N} (y_i - \bar{y})(Y_i - \bar{Y})}{\sqrt{\sum_{i=1}^{N} (y_i - \bar{y})^2 \sum_{i=1}^{N} (Y_i - \bar{Y})^2}}    (11)

RMSE = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (y_i - Y_i)^2}    (12)
where y_i is the actual or measured parameter value, N represents the total number of data points, \bar{y} is the average of the measured values, Y_i denotes the predicted parameter value, and \bar{Y} is the average of the predicted values.
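Equations (11) and (12) correspond to the following straightforward computations (a NumPy sketch with illustrative function names):

```python
import numpy as np

def correlation_coefficient(y, yhat):
    """Pearson correlation coefficient R between measured and predicted
    values, as in Equation (11)."""
    y, yhat = np.asarray(y, dtype=float), np.asarray(yhat, dtype=float)
    num = np.sum((y - y.mean()) * (yhat - yhat.mean()))
    den = np.sqrt(np.sum((y - y.mean())**2) * np.sum((yhat - yhat.mean())**2))
    return num / den

def rmse(y, yhat):
    """Root mean square error, as in Equation (12)."""
    y, yhat = np.asarray(y, dtype=float), np.asarray(yhat, dtype=float)
    return np.sqrt(np.mean((y - yhat)**2))
```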

GMDH Model Development
The GMDH model was implemented in MATLAB R2016b. The developed GMDH network consisted of seven hidden layers and one output layer. Every hidden layer was made up of one hidden neuron, labelled z_1, z_2, z_3, . . ., z_7, with Y for the output layer. As explained earlier, these unitless hidden neurons are only statistically responsible for correlating the input layer and the output layer. This configuration was accomplished after a sequence of adjustments, monitoring the actual performance of the GMDH network until the optimum neural network structure was achieved. The schematic diagram in Figure 6 depicts the developed GMDH permeability model in detail. The equations for the layers of the GMDH model network that were needed to generate the permeability prediction are given in Equations (13)-(20).
The GMDH model developed is described by eight (8) equations, where x_1 is standard resolution formation density (RHOZ), x_3 is standard gamma ray (SGR), x_4 is apparent resistivity focusing mode 5 (RLA5), x_5 is thermal neutron porosity (TNPH), and x_6 is the rock's shale volume (VSH), a dependent log calculated from the gamma-ray log.

Comparing with ANN
In this section, the results of the GMDH are compared with those of other neural network algorithms. The back propagation neural network (BPNN) and radial basis function neural network (RBFNN) were subjected to the same training and prediction datasets of 366 and 280 data points, respectively. Owing to the non-linearity of the dataset, the hidden and output layers of the BPNN used the hyperbolic tangent sigmoid and linear transfer functions, respectively. The BPNN models were trained using the Levenberg-Marquardt learning algorithm [32], while the gradient descent learning algorithm was used to train the RBFNN [33]. It is important to note that the performance of BPNN and RBFNN is highly dependent on the model structure, which was determined in this study through a sequential trial and error method.
In determining permeability, the optimal BPNN model structure that obtained the highest correlation and the lowest RMSE consisted of 6 inputs, 1 hidden layer with 3 hidden nodes, and 1 output, as shown in Table 2. The optimal RBFNN consisted of 6 inputs, a hidden layer with a maximum of 21 hidden nodes and a width parameter of 1, and 1 output, as shown in Table 3.
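The sequential trial-and-error search over hidden-node counts can be sketched as below. This is only an illustration on synthetic data: scikit-learn's `MLPRegressor` does not offer the Levenberg-Marquardt algorithm used in the paper, so the LBFGS solver stands in for it here, and all variable names and values are hypothetical:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
X = rng.random((200, 6))                       # 6 synthetic "well log" inputs in [0, 1]
y = X @ rng.random(6) + 0.05 * rng.normal(size=200)  # synthetic target with noise

best = None
for n_hidden in range(1, 11):                  # sequential trial and error over node counts
    model = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation="tanh",
                         solver="lbfgs", max_iter=2000, random_state=0)
    model.fit(X, y)
    score = np.sqrt(mean_squared_error(y, model.predict(X)))
    if best is None or score < best[1]:
        best = (n_hidden, score)               # keep the structure with the lowest RMSE
```

In practice the candidate structures would be scored on held-out data, as Tables 2 and 3 do with the prediction set.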

Table 4 shows the performance indices obtained for all the developed permeability models. The R and RMSE results for both training and predicting revealed that GMDH displayed better model performance than the ANN models of BPNN and RBFNN. During training, GMDH produced R and RMSE values of 0.989 and 0.0241, respectively, which were better than those of BPNN and RBFNN, which attained R/RMSE values of 0.982/0.0313 and 0.988/0.026, respectively, as summarized in Table 4. The predicting process also saw GMDH generate significantly better results of 0.868 and 0.204 for R and RMSE, respectively (Table 4). From Table 4, we can observe that the generalization capability of the BPNN and RBFNN permeability models was comparatively worse, as they achieved R/RMSE scores of 0.823/0.2053 and 0.822/0.206, respectively.
Regarding computational speed, GMDH produced its results in a computational time of 1.44 s, while BPNN and RBFNN required 13.86 s and 6.45 s, respectively (Table 4), on a Windows machine with an AMD Ryzen 5 @ 2 GHz. The performance of the GMDH and ANN predictive models compared with the measured permeability data is presented in Figures 7 and 8.


Sensitivity Analysis for the GMDH Model
Sensitivity analysis examines the contribution made by each input well log in determining the permeability values. It was carried out for every input well log used to develop the GMDH permeability model, using Equation (21):

S_i = \frac{\text{percent change in output}}{\text{percent change in input } i} \times 100    (21)

where the percent change in output and input refers to the variation of the parameters from their minimum to their maximum values. It is significant to note that when estimating permeability, GMDH automatically selected five of the six input well logs: RHOZ, RLA5, SGR, TNPH, and VSH. It is therefore important to identify how each selected well log input contributed to the development of the GMDH permeability model. The higher the S value, the more that well log parameter affected the predicted permeability; a lower S value means the input does not heavily affect the generated permeability value. From Figure 9, RHOZ made the greatest contribution to permeability prediction, with an S value of 5.787%. VSH and TNPH similarly had large impacts of 4.14% and 2.143%, respectively. Both SGR and RLA5 had less impact on the permeability prediction model than all other parameters, with S values of 0.1% and 0.04%, respectively.
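Equation (21) can be implemented as a simple sweep of one input from its minimum to its maximum. The sketch below is illustrative: the `predict` callable is a hypothetical stand-in for the trained GMDH model, and holding the remaining inputs at their midpoints is an assumption of this sketch rather than a detail stated in the paper:

```python
import numpy as np

def sensitivity_index(predict, x_min, x_max, i):
    """Sketch of the sensitivity measure of Equation (21): the percent change
    in model output divided by the percent change in input i (times 100), as
    input i is varied from its minimum to its maximum while the remaining
    inputs are held at their midpoints. `predict` maps an input vector to a
    scalar output (a hypothetical stand-in for the GMDH model)."""
    x_mid = (x_min + x_max) / 2.0
    lo, hi = x_mid.copy(), x_mid.copy()
    lo[i], hi[i] = x_min[i], x_max[i]
    out_lo, out_hi = predict(lo), predict(hi)
    pct_out = (out_hi - out_lo) / out_lo * 100.0   # percent change in output
    pct_in = (x_max[i] - x_min[i]) / x_min[i] * 100.0  # percent change in input i
    return pct_out / pct_in * 100.0
```

Ranking the inputs by this index reproduces the kind of comparison shown in Figure 9.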

Conclusions
The purpose of this study was to explore the potential of the group method of data handling (GMDH) neural network for estimating permeability from well log data. On this basis, well log and core permeability data from the western arm of the East African Rift Valley were considered for the present research.
The optimal GMDH permeability model was identified as being composed of 6 inputs and 7 hidden layers, each having one node. The statistical performance of the developed GMDH model was better than that of the ANN algorithms of back propagation neural network (BPNN) and radial basis function neural network (RBFNN). During training, GMDH produced estimates with an R/RMSE score of 0.989/0.0241, while BPNN and RBFNN recorded 0.982/0.0313 and 0.988/0.026, respectively. Additionally, GMDH generalized better than the ANN models, generating an R/RMSE of 0.868/0.204 compared to 0.823/0.2053 and 0.822/0.206 for BPNN and RBFNN, respectively.
The outcome of the sensitivity analysis on the automatically selected well log input parameters revealed that shale volume, standard resolution formation density, and thermal neutron porosity had a high influence on the performance of the GMDH permeability model.


Figure 1 .
Figure 1. Location of the Mpyo oil field in the East African Rift Valley.

Figure 2 .
Figure 2. Stratigraphic diagram of the study area.


Figure 4 .
Figure 4. Network of self-organizing group method of data handling (GMDH) algorithm with M inputs and k layers.

In the present paper, selection of the input parameters follows the function of the GMDH-type network, which enhances its ability to filter variables. The steps involved in the process are:
Step 1. The well log and core permeability data for training and predicting were standardized. The training data are used to estimate the weights of the GMDH neurons, whereas the predicting data are used to evaluate the network architectures.
Step 2. Evaluate the regression polynomial of Equation (4) for each pair of input variables x_i and x_j with the corresponding output y, using the observations in the training data set.
Step 3. Compute the polynomial for all N observations of each individual regression. The new observations are stored in a separate matrix Y, whose remaining columns are computed in the same way. Matrix Y can be understood as a set of new, enhanced variables with better predictability than the original variables x_1, x_2, x_3, ..., x_M.
Step 4. Screen out the effective variables.
Step 5. Order the columns of Y by increasing regularity criterion, then choose the columns of Y that satisfy the regularity criterion < S, where S is a minimum residual value prescribed by the user, to substitute the corresponding original columns of matrix X.
Step 6. Repeat Steps 1 to 5; the final estimates are determined automatically when the least error is achieved, based on the metaheuristic and self-organizing nature of the GMDH algorithm. The smallest estimated error of each generation is plotted and compared with that of the current generation until the error starts to show an increasing trend.
All of the steps are summarized in the flow chart shown in Figure 5.

Figure 6 .
Figure 6. Illustration of the proposed GMDH-type neural network.


Figure 7 .
Figure 7. GMDH and artificial neural network (ANN) prediction in comparison with measured permeability data during training.


Figure 8 .
Figure 8. GMDH and ANN prediction in comparison with measured permeability data during predicting.


Table 1. Statistical analysis of the data used (columns: Wells, RHOZ, RLA1, RLA5, SGR, TNPH, VSH).

Table 2 .
Results from tuning back propagation neural network (BPNN) structure.

Table 3 .
Results from tuning radial basis function neural network (RBFNN) structure.


Table 4 .
Statistical indicators of the developed permeability models.
