Article

Advanced Machine Learning Applications to Viscous Oil-Water Multi-Phase Flow

1 Chemical Engineering, College of Engineering, King Faisal University, P.O. Box 380, Al Ahsa 31982, Saudi Arabia
2 Civil Engineering, College of Engineering, University of Bahrain, Isa Town P.O. Box 32038, Bahrain
3 Civil Engineering, College of Engineering, King Faisal University, P.O. Box 380, Al Ahsa 31982, Saudi Arabia
* Author to whom correspondence should be addressed.
Appl. Sci. 2022, 12(10), 4871; https://doi.org/10.3390/app12104871
Submission received: 8 February 2022 / Revised: 11 April 2022 / Accepted: 19 April 2022 / Published: 11 May 2022
(This article belongs to the Special Issue Modeling and Simulation with Artificial Neural Network)

Abstract

The importance of heavy oil in the world oil market has increased over the past twenty years as light oil reserves have declined steadily. The high viscosity of this kind of unconventional oil results in high energy consumption for its transportation, which significantly increases production costs. A cost-effective solution for the long-distance transport of viscous crudes could be water-lubricated flow technology. A water ring separates the viscous oil-core from the pipe wall in such a pipeline. The main challenge in using this kind of lubricated system is the need for a model that can provide reliable predictions of friction losses. An artificial neural network (ANN) was used in this study to model pressure losses based on 225 data sets from independent sources. The seven input variables used in the current ANN model are pipe diameter, average velocity, oil density, oil viscosity, water density, water viscosity, and water content. The ANN developed using the backpropagation technique with seven processing neurons or nodes in the hidden layer proved to be the optimal architecture. A comparison of the ANN with other artificial intelligence and parametric techniques shows the promising precision of the current model. After the model was validated, a sensitivity analysis determined the relative order of significance of the input parameters. Some of the input parameters had linear effects, while other parameters had polynomial effects of varying degrees on the friction losses.

1. Introduction

1.1. Background

Immiscible biphasic flow often occurs in the petrochemical and oil industries. When two liquids with different densities come into contact in a horizontal tube, they tend to segregate under gravity. The heavier phase generally stays at the bottom while the lighter phase flows as a separate layer over the top, creating a stratified flow regime. Controlled process conditions can also yield a core annular flow (CAF) regime when the difference in the densities of the fluids is not very high. The heavier liquid (usually water) forms a thin lubricating annulus that sheathes the viscous core so that the core cannot touch the pipe wall. This is an alternative pipeline transportation technology that is beneficial for highly viscous fluids like unconventional heavy oils and viscous petrochemicals. The lubricating water can reduce the pumping energy requirement significantly when compared to that for pumping the viscous fluid alone through the pipe; in fact, the requirement becomes comparable to the power consumed in pumping water alone. A considerable amount of research has been undertaken to find a reliable method for designing such multiphase pipe flows.
In practice, sufficient knowledge of pressure gradients or frictional pressure losses in pipes is needed to develop an energy-efficient transportation system (e.g., to determine the optimal size of pipes and pumps that can handle the various flow conditions expected throughout the lifetime of the field). Arney et al. [1] introduced a friction loss model for pumping heavy oils in a lab-scale horizontal pipeline with the application of an idealized CAF technology. Although this model could predict a large CAF dataset with acceptable precision, it failed to do so for the self-lubricated flow (SLF) of bitumen froth (which represented a commercial-scale application of this water-lubricated flow technology). Joseph et al. [2] investigated the SLF phenomenon to develop their own empirical model based on data generated from lab- and pilot-scale experiments. A 35-km long SLF pipeline was designed, commissioned, and operated based on this model in Athabasca by Syncrude Canada Ltd. The SLF involved intermittent water lubrication, with the oil-rich core frequently touching the pipe wall. Meanwhile, for CAF, this kind of contact was negligible, and the lubrication was continuous. Rodriguez et al. [3] applied CAF in a pilot-scale pipeline. Although proper attention was paid to eliminating wall-fouling in those experiments, fouling is a natural consequence of water lubrication and cannot be excluded from the large-scale water-lubricated pipeline transportation of viscous oils. Based on the data produced from CAF experiments, both with and without wall-fouling, a new semi-mechanistic two-parameter model was proposed to predict friction losses. The model was claimed to perform better than similar models. However, it failed to provide satisfactory results for the water-assisted flow (WAF) of unconventional heavy oils [4,5,6]. WAF refers to large-scale applications of CAF that involve wall-fouling (Figure 1). It is a commercially applicable mode of the flow technology. One of the most significant technical challenges facing the industrial application of WAF is the necessity of a model that can reliably predict frictional pressure losses. Previously proposed models for various modes of water lubrication are not necessarily applicable to WAF pipelines. Applications of existing analytical models to different WAF datasets produce unreliable results, with errors as high as 500% [5]. This is because most of these models are empirical and were developed using system-specific data. An exception is the phenomenological model proposed by McKibben et al. [7], which is probably the best analytical model for WAF systems. A concise description of the model is included in Section 3.1.

1.2. Soft Computing Approaches

Soft computing approaches are useful and powerful tools that play an essential role in analyzing and solving problems in various fields of engineering and technology. These computational approaches demonstrate superior performance by defining highly accurate hypothesis functions for approximate solutions when compared to many published analytical and empirical models [9,10,11]. Although such computational models are widely applied in the field of multiphase pipeline flow, the literature contains only a limited number of attempts to apply these soft techniques to model WAF pressure losses.
Osman and Aggour [12] proposed an artificial neural network (ANN) model to estimate pressure gradients in horizontal and near-horizontal multiphase pipes. The model was constructed and then tested on more than 450 field-derived data samples, and its accuracy was compared with the available correlations as well as mechanistic models to demonstrate the superiority of the ANN technique. Similarly, Adhikari and Jindal [13] developed an ANN to estimate the pressure losses of non-Newtonian fluid foods flowing through tubes. The proposed model was able to predict the measured pressure gradients with an average absolute error of less than 5.44%. Ozbayoglu and Yuksel [14] used an ANN instead of a traditional modeling approach to investigate the flow patterns and frictional pressure losses of a two-phase gas-liquid mixture flowing within a horizontal annulus. The outcome showed that the ANN can predict flow patterns with errors of less than ±5% and friction losses with an accuracy of ±30%. Salgado et al. [15] estimated the volume fractions of three-phase flows by combining an ANN with a nuclear (gamma-ray attenuation) technique. For the three investigated oil-water-gas flow regimes (stratified, annular, and homogeneous), the ANN model could adequately relate the measurements simulated with the MCNP-X code to the volume fraction of each component in the three-phase flow system. Nasseh et al. [16] combined ANNs with genetic algorithms to estimate the pressure drop in Venturi scrubbers based on an annular two-phase flow model. Successful implementations such as those described above strongly indicate that ANN approaches can be extended to other multiphase flow systems. Dubdub et al. [17] applied a feed-forward neural network with a backpropagation technique to model the water-lubricated flow of non-conventional crude. Even though it was a pioneering study, the authors used more than 20 nodes, so the ANN model was complex and vulnerable to overfitting.
Another soft computing approach involves the use of support vector machines (SVMs). This kind of model is commonly applied to problems related to prediction and classification. The use of SVMs is prevalent in the medical sciences for the prediction of illnesses and deficiencies [18]. SVMs can also categorize data into clusters or zones to identify problem areas. This ability has led to their application for leakage detection and monitoring in pipe networks [19]. Different SVMs have also been used in combination with ANNs in previous studies to predict pipe pressure [20]. Recently, Rushd et al. [21] utilized SVM along with other ML algorithms, including ANN, to model the pressure losses in WAF pipelines. It was a scenario-based exploratory study. Although they found ANN and SVM to perform better than the other ML models, the nonlinear nature of the dataset did not allow those artificial intelligence tools to be applied with full confidence, and they emphasized the need for further analysis. Following this, the current study aimed to employ simpler ANN and SVM models to provide cost-effective solutions for the long-distance transport of viscous crudes and to enhance the knowledge base of water-lubricated flow technology.
To better control the ANN, a trial-and-error process was used to optimize its parameters, e.g., the number of neurons in the hidden layer, the learning rate, and the momentum. ANN models are usually preferred because they are inherently more flexible than traditional analytical models and have a demonstrated record of fitting experimental measurements. Based on the success of using ANNs to solve many technical problems, we attempted to apply these models for modeling pressure gradients in biphasic WAF pipelines. This study aims to develop a model using soft computing to accurately determine the WAF pressure gradients in horizontal pipes under various flow conditions.

2. Dataset

The experimental dataset used in this study consists of 225 samples, which were collected from Shi [22] and Rushd [8], who used the data for two independent studies on WAF. The experiments were conducted using horizontal flowloops located at the Saskatchewan Research Council (SRC), Saskatoon, Canada, and Cranfield University (CU), Cranfield, England. The measured parameters were flow rate, fluid properties, pipe diameter, water fraction, and pressure gradient. PVC and steel pipes were used at CU and SRC, respectively. It should be noted that, even though PVC and steel may produce significantly different hydrodynamic roughness, the material of construction of a WAF pipeline is not likely to have an appreciable impact on the flow hydraulics. As mentioned earlier, the inner wall of such a pipeline is naturally coated or fouled with viscous oil. The hydrodynamic roughness in a WAF pipeline is, thus, controlled by the wall-coating layer of the oil, rather than the pipe's material of construction, and the equivalent sand-grain roughness produced by a layer of viscous oil is dependent on the flow properties [2,3,4,5,6,7,8,22].
A total of 169 samples were used for model training/development, while the remaining 56 samples were used for testing the model, resulting in a ratio of 3:1. The training and testing samples were chosen randomly from the available data to avoid bias. Eight parameters were either measured or estimated as part of the wet experiments. Among these parameters, the pressure gradient was considered as the output parameter. Other variables, such as pipe diameter, average velocity, respective fluid properties, and the fraction of the water in the mixture, were used as the input parameters. Some of the basic descriptive statistics related to the dataset and each experimental parameter are provided in Table 1.
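For illustration, a random 3:1 split such as the one described above can be produced as in the minimal sketch below; the file name and column names (based on the short notations of Table 1) are assumptions made for demonstration only, not artifacts of the original study.

```python
# Minimal sketch of the random 3:1 train/test split described above.
# The CSV file name and column names (short notations from Table 1) are
# assumptions made for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("waf_dataset.csv")  # 225 experimental samples
X = data[["Dia", "Vel", "ODen", "OVisc", "WDen", "WVisc", "Frac"]]
y = data["PressGrad"]

# Reserve 56 of the 225 samples (about one quarter) for testing
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=56, random_state=42)

print(len(X_train), "training samples;", len(X_test), "test samples")
```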

3. Modeling Methods

Three different types of modeling techniques were studied as part of the current investigation: multivariate linear regression (MLR), SVM-based techniques, and ANN-based techniques. Among these methods, MLR is a traditional parametric technique, while SVM and ANN are non-parametric machine learning (ML) techniques. In addition, the analytical model proposed by McKibben et al. [7] was applied to the available dataset to compare its accuracy with that of the models developed in the present study.

3.1. McKibben Model

This model was the product of extensive research on WAF carried out by the Saskatchewan Research Council (SRC), Saskatoon, SK, Canada. The experiments were conducted using flowloops comprising 25, 100, and 260 mm steel pipes. The thicknesses of the wall-fouling layers were quantified using a double-pipe heat exchanger and a hot-film probe. The ranges of oil viscosities and input water fractions were 0.62–91.6 Pa·s and 30–50%, respectively. It was demonstrated that the model could account for the most significant factors, such as inertia, gravity, water fraction, the additional shear caused by wall-fouling, and the viscosity ratio. The model's inputs are pipe diameter, average velocity, densities, viscosities, and water fraction. One of the key factors addressed by McKibben et al. [7] was the contribution of the wall-fouling layer to the effective hydrodynamic roughness of the WAF regime. The strong performance of the SRC model has been recognized by other investigators, such as Shi et al. [4] and Rushd et al. [6]. Both of these independent groups of researchers demonstrated the superiority of the McKibben model over other analytical models in predicting WAF pressure losses. Even though it is more accurate than other analytical models, its predictions still involve errors of up to ±100%. Probably the most significant limitation of the model is the ambiguous and labor-intensive trial-and-error procedure used to optimize its performance. It includes a multivariate power-law function for the friction factor (f) with five coefficients, the values of which were established without any rigorous statistical analysis. The model is concisely presented in Equation (1), while a detailed description is available in Shi [22].
(ΔP/L)WAF = f ρw V²/(2D) = 30 [V/(gD)^0.5] [0.079/Rew^0.25]^1.3 [16/Reo]^0.32 Cw^1.2 (ρw V²/D)    (1)
where f: equivalent friction factor; Re: Reynolds number; Rew: water-equivalent Reynolds number (Rew = DVρw/μw); Reo: oil-equivalent Reynolds number (Reo = DVρo/μo); ΔP/L: pressure gradient (Pa/m); ρ: density (kg/m³); V: average velocity (m/s); D: pipe internal diameter (m); g: gravitational acceleration (m/s²); Cw: water fraction (-); µ: dynamic viscosity (Pa·s); subscript w: water; subscript o: oil.
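For illustration, Equation (1) can be evaluated directly once the fluid properties and flow conditions are known. The short sketch below follows Equation (1) as written above; the function name and the example inputs (roughly the dataset averages of Table 1) are illustrative assumptions, while the numerical coefficients are those of McKibben et al. [7].

```python
# Sketch of the McKibben et al. [7] correlation as given in Equation (1).
# The function name and example inputs are illustrative assumptions.
import math

def waf_pressure_gradient(D, V, rho_w, mu_w, rho_o, mu_o, Cw, g=9.81):
    """Return the WAF frictional pressure gradient (Pa/m) per Equation (1)."""
    Re_w = D * V * rho_w / mu_w            # water-equivalent Reynolds number
    Re_o = D * V * rho_o / mu_o            # oil-equivalent Reynolds number
    froude = V / math.sqrt(g * D)          # Froude-number group V/(gD)^0.5
    friction_group = (30.0 * froude
                      * (0.079 / Re_w**0.25) ** 1.3
                      * (16.0 / Re_o) ** 0.32
                      * Cw**1.2)
    return friction_group * rho_w * V**2 / D

# Example with values close to the dataset averages (Table 1)
dp_dl = waf_pressure_gradient(D=0.091, V=0.95, rho_w=995, mu_w=8.3e-4,
                              rho_o=921, mu_o=5.5, Cw=0.37)
print(f"Predicted pressure gradient: {dp_dl:.1f} Pa/m")
```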

3.2. Multivariate Linear Regression

MLR is a curve-fitting approach that utilizes the criteria of minimizing the ordinary least square errors. The basic form of the function to predict a variable ‘Y’ can be expressed as in Equation (2).
Y = a + Σ bi xi    (2)
where a is the intercept of the equation, b is the vector of regression coefficients, and x is the vector of independent variables [23]. MLR is a statistical technique; hence, the selection of the parameters in vector x depends upon their effect on the model. A t-statistic is used for this selection process [23].
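A minimal sketch of this selection procedure with statsmodels is shown below; the data-loading step and column names are assumptions, and the 5% significance threshold mirrors the criterion described in Section 4.1.

```python
# Sketch of MLR with t-statistic (p-value) based variable selection.
# The CSV file and column names are assumed for illustration.
import pandas as pd
import statsmodels.api as sm

data = pd.read_csv("waf_dataset.csv")
predictors = ["Dia", "Vel", "ODen", "OVisc", "WDen", "WVisc", "Frac"]

full_model = sm.OLS(data["PressGrad"], sm.add_constant(data[predictors])).fit()

# Keep only the predictors whose coefficients differ from zero at the 5% level
kept = [p for p in predictors if full_model.pvalues[p] < 0.05]
reduced_model = sm.OLS(data["PressGrad"], sm.add_constant(data[kept])).fit()

print("Retained variables:", kept)
print(reduced_model.params)
```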

3.3. Support Vector Machine

SVM is a popular supervised machine learning method of AI, particularly in the field of classification. However, it is also commonly used to predict real values in regression problems [24]. This technique works by defining hyperplanes of maximum variation/margin within the datasets using a kernel function, as shown in Figure 2. The basic equation for an SVM is similar to that of any regression (as shown in Equation (2)), apart from the application of a kernel function in the regression model. Hence, the resulting model takes the form of Equation (3).
Y = σ f(x) + b    (3)
where σ and b are the weight and constant of the model, and f(x) is the function used to map the vector of input variables into a higher-dimensional feature space. The weights and constants of the model are calculated for each data point, and the points with statistically significant coefficients are considered support vectors [25,26]. The distance from the separating hyperplane to the nearest data point is referred to as the "margin". The success and accuracy of an SVM lie in maximizing this margin when selecting the hyperplane [27].
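As a concrete illustration of this formulation, the sketch below fits an ε-SVR with a radial-basis kernel using scikit-learn; the kernel choice follows Table 2, while the hyperparameter values, file name, and column names are illustrative assumptions rather than the fitted values of this study.

```python
# Sketch of support vector regression with a radial-basis (RBF) kernel.
# Hyperparameters, file name, and column names are illustrative assumptions.
import pandas as pd
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

data = pd.read_csv("waf_dataset.csv")
X = data[["Dia", "Vel", "ODen", "OVisc", "WDen", "WVisc", "Frac"]]
y = data["PressGrad"]

# Standardize the inputs so the RBF kernel treats all variables comparably
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", gamma=0.1, epsilon=0.1))
svr.fit(X, y)

print("Number of support vectors:", len(svr.named_steps["svr"].support_))
print("Sample predictions:", svr.predict(X.head(3)))
```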

3.4. Artificial Neural Networks

ANNs have gained a lot of acceptance among researchers due to their generalization capabilities, especially in prediction problems. They represent a network of multiple processing units (referred to as neurons), which estimate weights and biases for each input parameter to minimize the squared prediction error. For each neuron, the weights and coefficients are calculated for the entire dataset without the restriction of statistical significance [28]. The weights for the neurons depend upon the equation chosen as the activation function for each neuron. These neurons serve as parallel processing units and have the ability to capture unknown, complex variations in the output variables. Because of this ability, ANNs can also be used as unsupervised learning algorithms [29]. ANN models can be represented as networks, as shown in Figure 3.
In the illustration above, Yi is the output for each processing neuron, and an ANN may contain several neurons. The final output, ‘Y’, is the combination of outputs from all hidden neurons. The numbers of hidden layers and neurons were not known beforehand. These were determined by observing the accuracy of predictions for multiple combinations [30].
For the current study, seven neurons arranged in a single hidden layer were found to produce the optimum results. To achieve this result, the number of neurons in the hidden layer was varied from 1 to 10, and the effect on the MSE for the validation dataset was observed, as shown in Figure 4. The validation dataset comprises randomly selected samples from the available data and is used for determining the appropriateness of the model architecture. The model architecture is not selected on the basis of accuracy for the training dataset, so as to ensure that the model can be used robustly for unknown values. It was observed that the MSE was optimal with seven neurons. It should be noted that MSE is the default criterion for determining the weights and biases of the hidden neurons, which is why it was also used for determining the optimum number of hidden neurons. This number depends upon the complexity and nature of the modeling problem, and the trial-and-error method described above is generally used to determine the appropriate number of hidden neurons [31].
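The sketch below illustrates that search over 1 to 10 hidden neurons; scikit-learn's MLPRegressor with a tanh (hyperbolic) hidden layer is used only as an approximation of the study's network, since the conjugate-gradient backpropagation and logistic output layer reported in Table 3 are not reproduced exactly, and the file and column names are assumptions.

```python
# Sketch of the trial-and-error search for the number of hidden neurons.
# MLPRegressor approximates the study's ANN (tanh hidden layer); the exact
# BP-CG training and logistic output activation are not reproduced.
import pandas as pd
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("waf_dataset.csv")
X = data[["Dia", "Vel", "ODen", "OVisc", "WDen", "WVisc", "Frac"]]
y = data["PressGrad"]
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

val_mse = {}
for n in range(1, 11):                      # 1 to 10 hidden neurons
    ann = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(n,), activation="tanh",
                     max_iter=5000, random_state=0))
    ann.fit(X_tr, y_tr)
    val_mse[n] = mean_squared_error(y_val, ann.predict(X_val))

best_n = min(val_mse, key=val_mse.get)
print("Validation MSE by number of hidden neurons:", val_mse)
print("Selected architecture:", best_n, "hidden neurons")
```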

4. Results and Discussions

4.1. Comparative Model Outputs

As mentioned earlier, models using SVM, ANN, and MLR were tested in the current study to predict pressure gradients (Table 2, Table 3 and Table 4). The parameters for these models were fixed as per the judgement of the authors, except for the weights and coefficients of the SVM and ANN models and the number of hidden neurons for the ANN. Weights and coefficients were calculated as part of the learning process of the models. The number of hidden neurons for the ANN was determined by trial and error, comparing the accuracy attained with different numbers of neurons. Other parameters were fixed because optimizing all of them was not practically feasible for a single study. For each model, the mean square error (MSE), mean absolute percent error (MAPE), and coefficient of determination (R-square, denoted CC below) between the predicted and experimental values were calculated to assess the accuracy of the model. MSE is an indicator of the magnitude of the error, MAPE is a measure relative to the scale of the model output, and CC denotes the ability of a model to capture the variation in the trend of the data. All parameters were calculated separately for the training and test datasets to evaluate the robustness of each model when used for a new dataset. The comparison of these parameters is given in Table 5. Due to the volatile nature of MLR models, three-fold cross-validation was applied, and the results shown in Table 5 are the average of the three trials with different datasets. Table 6 shows the results of each individual trial in terms of MSE, R-square, and MAPE.
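For reference, the three accuracy measures can be computed as in the minimal sketch below; the arrays stand in for measured and predicted pressure gradients and are placeholders rather than values from this study.

```python
# Sketch of the accuracy measures used for model comparison.
# y_true and y_pred are placeholder arrays, not values from this study.
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score

y_true = np.array([0.45, 1.10, 2.30, 0.80, 3.50])   # measured (kPa/m)
y_pred = np.array([0.50, 1.00, 2.10, 0.95, 3.20])   # predicted (kPa/m)

mse = mean_squared_error(y_true, y_pred)
mape = np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0
r_square = r2_score(y_true, y_pred)

print(f"MSE = {mse:.3f}, MAPE = {mape:.1f}%, R-square = {r_square:.3f}")
```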
The weights for the support vectors of the SVM and the neurons of the ANN are provided in Appendix A. Table A1 provides the weights (constants) and coefficients for the explanatory parameters in each support vector. Table A2 provides the threshold (constant) and coefficients for the explanatory variables in the hidden neurons and the corresponding values for the output neuron. It should be noted that the general functions for these models are given in Equations (2) and (3) and Figure 3. These parameters were obtained by developing the models using the training dataset while minimizing the error functions. When the MLR model was applied to the current dataset, the fraction of water had a statistically insignificant coefficient; hence, it was not included in that model. The variables for the MLR model were filtered based on the hypothesis that their coefficients would be statistically different from 'zero' at a 5% level of significance. The model presented in Equation (4) includes only the variables that have less than a 5% chance (p-value) of the coefficient being equal to 'zero.' According to this MLR model, pipe diameter, oil density, and water viscosity have negative effects on the pressure gradient, while the other statistically significant parameters have positive impacts.
PressGrad = −97.03 − 32.06(Dia) + 0.77(Vel) − 0.01(ODen) + 0.07(OVisc) + 0.11(WDen) − 1995.36(WVisc)    (4)
The respective performances of the models developed in this study are presented in Figure 5, Figure 6 and Figure 7. The analytical model proposed by McKibben et al. [7] was also applied to the dataset, and its accuracy measures are also included in the comparison.
The comparison of accuracy measures presented in Figure 5, Figure 6 and Figure 7 demonstrates that the ANN model performs much better than the other models, providing the lowest MSE and MAPE and the highest CC. In addition, all models showed negligible differences between the training and test datasets in terms of MSE and CC values. However, the difference in MAPE was very significant for SVM and MLR, while it was very low for the ANN. This could be an indication of the better robustness of the ANN as compared to the other models. In comparison to the soft techniques investigated in the current study, the model of McKibben et al. [7] does not perform well, although it is most likely better than other analytical models for the WAF of unconventional oils [4,6,22]. This observation was confirmed for the training as well as the test datasets. The test dataset was not used for the development of the models in this study; hence, a comparison of their accuracy with the analytical model, which was developed using a different dataset, is deemed fair. This finding justifies the need to employ AI-based models for designing WAF pipeline systems. The analytical model seems to have inadequate generalization capability, although it was developed based on an in-depth analysis of the physics. As a result, the application of an analytical model, such as Equation (1), for designing a WAF system results in a high degree of uncertainty that is unfavorable to both the economic and technical feasibility of an engineering project.

4.2. Sensitivity Analysis

As the ANN was shown to be the most accurate model for predicting pressure gradients in this study, it was used to conduct a sensitivity analysis so that each input variable's relationship with the output variable could be identified. Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14 present the changes in the pressure gradient predicted by the current ANN as each variable is varied. This analysis was performed by applying the ANN model developed in this study while varying one independent variable at a time and keeping the others constant at their average values. For example, to observe the effect of velocity on the pressure gradient, all other parameters were fixed at their average values (as given in Table 1) while velocity was varied within a predetermined range. This approach to sensitivity analysis was also employed in previous studies, e.g., [17,32].
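A sketch of this one-at-a-time procedure is given below; it assumes an already-fitted model object (here called ann, for example the pipeline sketched in Section 3.4) and uses the average values of Table 1, with the swept velocity range also taken from that table.

```python
# Sketch of the one-at-a-time sensitivity sweep: vary one input over its
# range while holding the others at their average values (Table 1).
# "ann" is assumed to be an already-fitted model exposing predict().
import numpy as np
import pandas as pd

averages = {"Dia": 0.091, "Vel": 0.952, "ODen": 921.0, "OVisc": 5.50,
            "WDen": 995.0, "WVisc": 0.000829, "Frac": 0.370}

def one_at_a_time(model, variable, values):
    grid = pd.DataFrame([averages] * len(values))
    grid[variable] = values                      # vary only the chosen input
    return model.predict(grid[list(averages)])   # keep column order fixed

velocities = np.linspace(0.107, 2.0, 20)         # velocity range from Table 1
# Example (assuming "ann" is the fitted pipeline from the Section 3.4 sketch):
# print(one_at_a_time(ann, "Vel", velocities))
```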
As shown in Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14, average velocity and oil viscosity have positive effects on the pressure gradient. Specifically, increasing the magnitudes of these variables tends to increase friction losses. The strong effect of velocity is well established in fluid dynamics; it should be noted that ΔP/L is proportional to V² for single-phase pipe flow. The impact of oil viscosity is a WAF-specific phenomenon. Higher oil viscosity most likely increases the degree of wall-fouling, thereby increasing ΔP/L [4,5,7].
Water density and oil density have opposite effects on pressure losses. Varying the fluid densities tends to affect the core eccentricity. Although the effect of eccentricity in WAF is not a well-studied phenomenon [8], the current study sheds some light on the topic. Water density seems to have a linear effect, whereas oil density has an overall inverse influence on friction losses: oil density increases the pressure gradient within the lower range, while the gradient decreases as the density exceeds 900 kg/m³. Further experimental investigation is required in this area.
Water viscosity did not have a significant impact on pressure losses, as its magnitude was essentially constant (~1 mPa·s). Similar to water viscosity, water fraction was found to have a negligible effect on friction losses. A detailed analysis of the experimental measurements also demonstrated comparable results [30]. Pipe diameter had an inverse influence on the WAF pressure gradient, which was expected since there is a nearly proportional correlation between ΔP/L and D⁻¹ for single-phase flow in a pipeline.
The outcome of this sensitivity analysis highlights another advantage of using an ANN: the same model can capture relationships of varying degrees between variables. Traditional parametric and analytical models lack this ability or require prior information about the problem so that they can be customized in a specific way to fit the model. In contrast, the ANN model in the present study was developed without using any a priori information.

5. Conclusions

The current study investigated the machine learning approach to model frictional losses in a pipeline transporting a mixture of water and heavy oil. Lab- and pilot-scale data were analyzed with different machine learning algorithms and an MLR model. The results of the analysis are summarized below.
Traditional parametric or analytical models, such as the one developed by McKibben et al. [7], lack the ability to generalize, and therefore produce inferior predictions of the actual measurements when compared to the data-driven models investigated in this study (MLR, SVM, and ANN).
Among the four modeling approaches examined in this research, ANN performed the best. It produced the least MSE (~0) and the highest CC (~1), both for the training and test datasets.
In addition to predicting frictional pressure losses, the ANN model could also be used to analyze the sensitivity of the output parameter to each of the input parameters. Oil density, water viscosity, and pipe diameter were negatively related to the pressure gradient. Oil density and water viscosity caused the friction loss to increase at the lower range, while the gradient decreased as the parametric values crossed threshold limits. Oil viscosity and water density had linear effects on the output variable, whereas other parameters had polynomial effects. This kind of analysis is expected to play a significant role in operating water-assisted pipelines.
The validated AI framework developed in this study is flexible and scalable. Efforts are underway to apply it to other flow conditions.

Author Contributions

Conceptualization, S.R. and U.G.; methodology, U.G.; software, U.G.; validation, S.R. and U.G.; formal analysis, S.R. and U.G.; investigation, S.R., M.A. and U.G.; resources, S.R., M.A. and U.G.; data curation, S.R. and U.G.; writing—original draft preparation, S.R., M.A. and U.G.; writing—review and editing, S.R., M.A., H.J.Q. and U.G.; visualization, S.R. and U.G.; supervision, S.R., M.A. and U.G.; project administration, S.R., M.A. and U.G.; funding acquisition, S.R., M.A. and H.J.Q. All authors have read and agreed to the published version of the manuscript.

Funding

Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Project No. AN000246].

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data were collected from [8,22]. The data used for the current study are included in Appendix A (Table A3).

Acknowledgments

This work was supported through the Annual Funding track by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia [Project No. AN000246]. The authors would like to acknowledge the technical and instrumental support they received from King Faisal University and University of Bahrain. We also acknowledge the contributions of Saskatchewan Research Council and Cranfield University, where the data used for the current study were generated.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. SVM Full Model.
No. | Weights 1 | Dia | Vel | ODen | OVisc | WDen | WVisc | Frac
1−10.000.320.740.200.070.880.650.12
2−10.000.320.740.200.040.860.610.28
3−10.000.320.740.200.040.820.550.45
4−10.000.321.000.200.040.800.580.28
5−10.000.321.000.200.040.840.520.45
6−8.330.320.470.200.020.670.370.43
7−10.000.320.740.200.040.640.350.43
810.000.321.000.200.040.620.330.01
9−7.000.320.741.000.860.800.520.31
10−1.970.320.211.001.000.880.650.48
11−10.000.320.741.000.860.800.520.45
1210.000.321.001.000.530.590.300.16
131.690.320.211.000.680.690.390.44
14−2.000.320.740.780.070.820.550.25
15−10.000.320.470.780.100.860.610.45
16−10.000.320.740.780.090.780.500.47
17−10.000.321.000.780.070.820.550.47
182.420.320.470.780.010.310.090.36
19−10.000.000.050.330.201.001.000.41
20−10.000.000.040.330.201.001.000.56
21−10.000.000.100.330.201.001.000.73
22−10.000.000.210.330.201.001.000.93
23−10.000.000.320.330.201.001.001.00
24−10.000.000.090.330.201.001.000.22
25−10.000.000.120.330.201.001.000.40
26−10.000.000.360.330.201.001.000.87
27−10.000.000.480.330.201.001.000.94
2810.000.000.220.330.201.001.000.22
29−10.000.000.250.330.201.001.000.31
309.610.000.460.330.201.001.000.69
3110.000.000.430.330.201.001.000.45
3210.000.000.640.330.201.001.000.66
3310.000.000.750.330.201.001.000.75
3410.000.000.850.330.201.001.000.79
35−10.000.000.080.290.120.930.750.16
36−8.400.000.110.290.120.930.750.35
37−10.000.000.160.290.120.930.750.54
3810.000.000.330.290.120.930.750.24
3910.000.000.440.290.120.930.750.45
4010.000.000.670.290.120.930.750.68
4110.000.000.860.290.120.930.750.79
429.460.000.050.420.130.860.610.57
4310.000.000.060.420.130.860.610.37
4410.000.000.080.420.130.860.610.44
4510.000.000.100.420.130.860.610.56
4610.000.000.130.420.130.860.610.65
47−10.000.000.070.420.130.860.610.16
48−10.000.000.090.420.130.860.610.25
49−10.000.000.100.420.130.860.610.33
500.700.000.360.420.130.860.610.88
5110.000.000.270.420.130.860.610.02
5210.000.000.280.420.130.860.610.06
5310.000.000.300.420.130.860.610.10
5410.000.000.330.420.130.860.610.17
5510.000.000.350.420.130.860.610.25
5610.000.000.410.420.130.860.610.35
5710.000.000.460.420.130.860.610.44
5810.000.000.520.420.130.860.610.52
5910.000.000.620.420.130.860.610.62
603.040.000.050.550.481.001.000.85
617.240.000.080.550.481.001.000.68
6210.000.000.160.550.481.001.000.86
634.350.000.100.550.481.001.000.58
6410.000.000.180.550.481.001.000.80
65−10.000.000.020.330.201.001.000.33
66−1.700.110.471.000.600.640.350.27
6710.000.110.471.000.710.720.420.00
6810.000.110.471.000.420.510.240.00
6910.000.110.471.000.280.390.170.00
70−10.000.110.211.000.640.670.370.27
71−10.000.110.211.000.600.640.350.22
72−10.000.110.211.000.420.510.240.27
73−10.000.110.211.000.170.300.120.00
74−10.000.321.000.140.040.850.580.28
75−1.880.320.740.060.010.430.190.31
76−10.000.320.470.140.040.860.610.43
78−10.000.321.000.140.040.860.610.43
7910.001.000.210.140.040.850.580.26
802.781.001.000.140.040.830.550.25
8110.001.000.210.140.040.860.610.45
8210.001.001.000.140.040.850.580.40
8310.001.001.000.100.020.640.350.40
Table A2. ANN Weights.
Neuron | 2.1 | 2.2 | 2.3 | 2.4 | 2.5 | 2.6 | 2.7 | 3.1
Thresh | −0.83 | 0.17 | 0.75 | 0.23 | −0.62 | 0.69 | 0.65 | −0.44
1.1 | −0.01 | 0.04 | 0.81 | 0.14 | 1.65 | −2.05 | −5.55 |
1.2 | 1.83 | −0.86 | −0.77 | −0.34 | 0.77 | −0.52 | 1.56 |
1.3 | −0.97 | 0.31 | −0.89 | −0.27 | 3.01 | −1.59 | −0.38 |
1.4 | 0.34 | 1.22 | −0.51 | −0.31 | 2.03 | 0.27 | 0.40 |
1.5 | 0.17 | 0.81 | 0.31 | 0.73 | −0.12 | 0.66 | −0.14 |
1.6 | −0.10 | 0.68 | −0.83 | 0.44 | −1.36 | 0.05 | −0.10 |
1.7 | −0.41 | −0.41 | −0.35 | 0.68 | −0.60 | 1.59 | −1.13 |
2.1 | | | | | | | | 1.11
2.2 | | | | | | | | 0.47
2.3 | | | | | | | | 0.04
2.4 | | | | | | | | −0.33
2.5 | | | | | | | | 2.39
2.6 | | | | | | | | 1.81
2.7 | | | | | | | | 3.88
Table A3. Data set.
Reference | Nominal Diameter (inch) | ρow (-) | µow (-) | Reo (-) | Rew (-) | Cw (-) | Pressure Gradient Ratio (WAF/Heavy Oil) | Temperature (°C)
Shi [22]10.91149230.844070.391.2%12
0.91149230.842930.511.5%12
0.91149231.266220.631.2%12
0.91149232.211,6230.791.1%12
0.91149233.016,0980.841.1%12
0.91149231.264390.241.4%12
0.91149231.476270.381.3%12
0.91149231.688370.491.5%12
0.91149232.513,5180.661.1%12
0.91149233.418,1990.741.1%12
0.91149234.322,9940.801.0%12
0.91149232.211,7820.241.7%12
0.91149232.513,4040.311.2%12
0.91149233.418,1300.511.3%12
0.91149234.122,3320.601.3%12
0.91149234.926,6930.671.1%12
0.91149233.016,2580.261.2%12
0.91149233.921,1670.421.3%12
0.91149235.630,2100.581.2%12
0.91149236.534,9590.651.2%12
0.91149237.339,2520.681.2%12
0.90733761.970080.192.4%21
0.90733762.384150.342.8%21
0.90733762.910,9100.492.5%21
0.90733764.416,2990.661.9%21
0.90733765.319,5900.252.3%21
0.90733766.725,0850.422.1%21
0.90733769.736,2610.601.9%21
0.907337612.345,8700.682.0%21
0.92342701.151840.455.9%25
0.92342701.358540.515.6%25
0.92342701.672810.604.4%25
0.92342702.511,6500.753.1%25
0.92342701.465820.354.8%25
0.92342701.672520.415.1%25
0.92342701.986800.504.7%25
0.92342702.210,1070.583.7%25
0.92342702.813,0480.673.0%25
0.92342701.571650.192.8%25
0.92342701.779220.262.5%25
0.92342701.986800.332.4%25
0.92342702.210,3980.442.8%25
0.92342702.511,5920.502.9%25
0.92342703.214,6210.602.7%25
0.92342703.817,6790.672.4%25
0.92342704.420,3880.722.2%25
0.92342705.023,2130.752.1%25
0.92342703.918,1160.092.7%25
0.92342704.118,8150.112.5%25
0.92342704.219,6600.152.9%25
0.92342704.621,0580.203.1%25
0.92342704.621,2910.223.0%25
0.92342704.922,6600.262.8%25
0.92342705.525,4560.342.6%25
0.92342706.128,4270.412.2%25
0.92342706.831,6600.472.0%25
0.92342708.037,2230.551.9%25
0.92342708.740,1060.581.8%25
0.92342709.343,1350.611.7%25
0.92342709.945,7860.631.7%25
0.93611,6040.224430.463.1%11
0.93611,6040.226720.503.0%11
0.93611,6040.229460.542.7%11
0.93611,6040.337220.642.6%11
0.93611,6040.447500.732.3%11
0.93611,6040.559370.771.9%11
0.93611,6040.447040.501.9%11
0.93611,6040.557770.591.8%11
0.93611,6040.893160.741.4%11
0.93611,6040.454800.421.5%11
0.93611,6040.565530.521.6%11
0.93611,6040.679010.601.5%11
0.93611,6040.810,2750.691.3%11
0.91149230.633340.322.8%12
0.91149230.736990.403.3%12
0.91149230.844070.423.0%12
0.91149230.947720.482.2%12
Rushd [8]20.99225,6002.769,0720.090.9%32
0.99225,6002.769,0720.280.6%32
0.99323,0973.273,3150.280.6%35
0.99517,9284.580,6910.280.8%40
0.99225,6002.769,0720.071.1%32
0.99517,9284.5806910.071.4%40
0.99612,7976.786,6570.071.7%44
0.99323,9981.535,9690.280.1%34
0.99323,0971.636,6570.240.2%35
0.99323,9983.071,9380.240.8%34
0.99517,9282.240,3450.280.3%40
0.99711,3813.944,1100.170.6%45
0.99883585.445,6050.070.9%47
0.99883588.268,4080.281.5%47
40.897191026.055,2740.196.0%23
0.897169728.553,9990.288.2%22
0.898146135.657,8600.4110.3%25
0.897213047.7113,1230.197.7%24
0.897191051.9110,5480.287.3%23
0.898143174.3118,3420.4010.7%26
0.897213071.5169,6840.166.9%24
0.8981461106.7173,5790.299.6%24
0.8981399116.5181,4640.4210.8%27
0.8981431148.5236,5420.299.8%26
0.8981364162.9247,4040.4210.1%28
0.900114953.868,6110.1331.0%33
0.900104361.671,3950.3031.1%35
0.900114953.868,6110.4123.3%33
0.9001043123.3142,7900.1420.0%35
0.9011001131.0145,6020.2915.6%36
0.9001097114.8140,0050.4013.7%34
0.9011766111.4218,4030.0917.0%36
0.9011721116.5222,6410.299.3%37
0.9001808106.7214,1850.408.4%35
0.9011766148.5291,2030.0818.1%36
0.9011766148.5291,2030.308.3%36
0.9001808142.2285,5800.418.0%35
0.98930,5181.855,2740.140.9%23
Rushd [8]40.99029,7493.9115,7190.151.3%25
0.99329,2986.0177,0140.131.2%26
0.99328,8028.3241,3450.081.2%27
0.99029,7491.957,8600.311.0%25
0.99029,2984.0118,3420.310.9%26
0.99128,2596.5185,4410.310.9%28
0.99127,6719.0252,6130.260.9%29
0.99030,1541.956,5610.440.7%24
0.99028,8024.2120,9760.430.7%27
0.99128,2596.5185,4410.420.7%28
0.99127,6719.0252,6130.410.8%29
0.99324,0082.9700020.081.6%34
0.99322,1926.5145,6020.091.4%36
0.99420,17111.2226,9400.171.2%38
0.99421,20713.9296,8550.191.4%37
0.99324,0082.970,0020.141.2%34
0.99322,1926.5145,6020.171.2%36
0.99421,20710.4222,6410.280.9%37
0.99419,07916.1308,2850.321.0%39
0.99224,8392.768,6110.410.8%33
0.99323,1266.1142,7900.410.8%35
0.99322,1929.8218,4030.420.8%36
0.99419,07916.1308,2850.421.0%39
0.964319917.457,8600.253.3%25
0.964308537.0118,3420.263.9%26
0.964258167.8181,4640.264.6%27
0.965309678.7252,6130.263.4%29
0.964330416.556,5610.422.0%24
0.964319934.9115,7190.422.1%25
0.965309659.1189,4590.432.6%27
0.964258190.4241,9520.433.8%29
0.966206432.168,6110.273.9%33
0.966188571.8140,0050.255.8%34
0.9671661127.1218,4030.248.0%36
0.9662064128.5274,4460.255.8%33
0.966206432.168,6110.391.7%33
Rushd [8]40.966188571.8140,0050.395.5%34
0.9671661127.1218,4030.397.0%36
0.9662064128.5274,4460.395.6%33
0.97296992.191,8270.206.2%49
0.973896202.6186,5820.207.5%50
0.973896303.9279,8730.2010.9%50
0.9721038338.0360,9660.2012.5%48
0.97296992.191,8270.3512.4%49
0.971969184.3183,8410.356.2%49
0.970969276.4276,0400.357.8%49
0.9721038338.0360,9660.359.4%48
0.891153813.423,1440.3232.9%25
0.891153833.557,8600.309.7%25
0.891153867.0115,7190.2811.0%25
0.8901475107.1177,3470.2911.6%26
0.8901475142.7236,4630.298.2%26
0.88788429.229,1200.3142.8%36
0.88788473.1728010.3034.3%36
0.887884146.2145,6020.2919.3%36
0.887884219.3218,4030.2819.6%36
0.88664545.5331330.3346.0%43
0.886645113.782,8320.3146.8%43
0.886645227.4165,6640.3126.8%43
0.886645341.1248,4970.3120.6%43
0.886645454.8331,3290.3122.2%43
0.88543079.338,5510.3251.3%52
0.884349255.9101,0150.2987.2%55
0.884320566.8205,1840.2960.9%56
0.885430594.6289,1330.2931.3%52
0.885430792.8385,5100.2937.4%52
0.890147514.323,6460.4126.0%26
0.891153833.557,8600.409.7%25
0.891153867.0115,7190.4010.5%25
0.8911538100.5173,5790.4010.9%25
0.8911538134.0231,4380.407.9%25
0.88788429.229,1200.4129.4%36
0.88788473.172,8010.4233.2%36
0.887884146.2145,6020.4118.7%36
Rushd [8]40.887884219.3218,4030.4017.5%36
0.887884292.4291,2030.4018.2%36
0.88658452.134,3250.4333.5%45
0.886584130.285,8120.4246.0%45
0.886584260.3171,6240.4229.7%45
0.886615364.1252,8720.4221.4%44
0.886615485.5337,1630.4227.2%44
0.88540485.739,1230.4439.6%53
0.884320283.4102,5920.4267.2%56
0.884320566.8205,1840.4148.3%56
0.884320850.2307,7760.4239.2%56
0.8843491023.4404,0590.4255.9%55
100.890147591.5151,5380.2754.6%26
0.8901475183.0303,0760.2423.9%26
0.8901475274.4454,6140.2627.3%26
0.8901409391.4619,4830.2629.2%27
0.888917177.0182,8040.2779.7%35
0.887884374.8373,2370.2680.9%36
0.887884562.2559,8550.2844.6%36
0.887884749.6746,4730.2645.7%36
0.886584333.6219,9710.3275.6%45
0.886584667.3439,9410.2794.4%45
0.8865841000.9659,9120.2496.5%45
0.8865841334.6879,8820.2463.0%45
0.891153885.9148,3180.4238.4%25
0.8911538171.8296,6360.3920.8%25
0.8911538257.7444,9530.3924.5%25
0.8901475365.9606,1520.3828.1%26
0.888977163.0179,3710.4055.0%34
0.888977325.9358,7430.3870.3%34
0.888977488.9538,1140.3938.7%34
0.888917708.1731,2160.3844.8%35
0.886645291.5212,3330.4354.9%43
0.886645583.0424,6660.4082.4%43
0.886645874.4636,9980.3791.6%43
0.8866151244.5864,2860.3664.6%43

References

  1. Arney, M.; Bai, R.; Guevara, E.; Joseph, D.; Liu, K. Friction factor and holdup studies for lubricated pipelining—I. Experiments and correlations. Int. J. Multiph. Flow 1993, 19, 1061–1076. [Google Scholar] [CrossRef]
  2. Joseph, D.D.; Bai, R.; Mata, C.; Sury, K.; Grant, C. Self-lubricated transport of bitumen froth. J. Fluid Mech. 1999, 386, 127–148. [Google Scholar] [CrossRef] [Green Version]
  3. Rodriguez, O.; Bannwart, A.; de Carvalho, C. Pressure loss in core-annular flow: Modeling, experimental investigation and full-scale experiments. J. Pet. Sci. Eng. 2009, 65, 67–75. [Google Scholar] [CrossRef]
  4. Shi, J.; Lao, L.; Yeung, H. Water-lubricated transport of high-viscosity oil in horizontal pipes: The water holdup and pressure gradient. Int. J. Multiph. Flow 2017, 96, 70–85. [Google Scholar] [CrossRef] [Green Version]
  5. Rushd, S.; McKibben, M.; Sanders, R.S. A new approach to model friction losses in the water-assisted pipeline transportation of heavy oil and bitumen. Can. J. Chem. Eng. 2019, 97, 2347–2358. [Google Scholar] [CrossRef]
  6. Rushd, S.; Sultan, R.A.; Mahmud, S. Modeling Friction Losses in the Water-Assisted Pipeline Transportation of Heavy Oil. In Processing of Heavy Crude Oils: Challenges and Opportunities; Gounder, R.M., Ed.; IntechOpen: London, UK, 2019. [Google Scholar] [CrossRef] [Green Version]
  7. McKibben, M.; Gillies, R.; Sanders, S. A New Method for Predicting Friction Losses and Solids Deposition during the Water-Assisted Pipeline Transport of Heavy Oils and Co-Produced Sand. In Proceedings of the SPE Heavy Oil Conference-Canada, Calgary, AB, Canada, 11 June 2013. [Google Scholar] [CrossRef]
  8. Rushd, M.M.A.S. A New Approach to Model Friction Losses in The Water-Assisted Pipeline Transportation of Heavy Oil and Bitumen. Ph.D. Thesis, University of Alberta, Edmonton, AB, Canada, 2016. [Google Scholar] [CrossRef]
  9. Hassan, M.R.; Mamun, A.A.; Hossain, M.I.; Arifuzzaman, M. Hybrid computational intelligence and statistical measurements for moisture damage modeling in lime and chemically modified asphalt. Comput. Intell. Neurosci. 2018, 2018, 7525789. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  10. Arifuzzaman, M. Advanced ANN Prediction of Moisture Damage in CNT Modified Asphalt Binder. Soft Comput. Civ. Eng. 2017, 1, 1–11.
  11. Alam, S.; Gazder, U. Shear strength prediction of FRP reinforced concrete members using generalized regression neural network. Neural Comput. Appl. 2019, 32, 6151–6158. [Google Scholar] [CrossRef]
  12. Osman, E.-S.A.; Aggour, M.A. Artificial neural network model for accurate prediction of pressure drop in horizontal and near-horizontal-multiphase flow. Pet. Sci. Technol. 2002, 20, 1–15. [Google Scholar] [CrossRef]
  13. Adhikari, B.; Jindal, V. Artificial neural networks: A new tool for prediction of pressure drop of non-Newtonian fluid foods through tubes. J. Food Eng. 2000, 46, 43–51. [Google Scholar] [CrossRef]
  14. Ozbayoglu, M.; Yuksel, H.E. Analysis of gas—Liquid behavior in eccentric horizontal annuli with image processing and artificial intelligence techniques. J. Pet. Sci. Eng. 2012, 81, 31–40. [Google Scholar] [CrossRef]
  15. Salgado, C.M.; Pereira, C.M.; Schirru, R.; Brandão, L.E. Flow regime identification and volume fraction prediction in multiphase flows by means of gamma-ray attenuation and artificial neural networks. Prog. Nucl. Energy 2010, 52, 555–562. [Google Scholar] [CrossRef]
  16. Nasseh, S.; Mohebbi, A.; Sarrafi, A.; Taheri, M. Estimation of pressure drop in venturi scrubbers based on annular two-phase flow model, artificial neural networks and genetic algorithm. Chem. Eng. J. 2009, 150, 131–138. [Google Scholar] [CrossRef]
  17. Dubdub, I.; Rushd, S.; Al-Yaari, M.; Ahmed, E. Application of ANN to the water-lubricated flow of non-conventional crude. Chem. Eng. Commun. 2020, 209, 47–61. [Google Scholar] [CrossRef]
  18. Huang, M.-W.; Chen, C.-W.; Lin, W.-C.; Ke, S.-W.; Tsai, C.-F. SVM and SVM Ensembles in Breast Cancer Prediction. PLoS ONE 2017, 12, e0161501. [Google Scholar] [CrossRef]
  19. Panda, A.K.; Rapur, J.S.; Tiwari, R. Prediction of flow blockages and impending cavitation in centrifugal pumps using Support Vector Machine (SVM) algorithms based on vibration measurements. Measurement 2018, 130, 44–56. [Google Scholar] [CrossRef]
  20. Nasir, M.T.; Mysorewala, M.; Cheded, L.; Siddiqui, B.; Sabih, M. Measurement error sensitivity analysis for detecting and locating leak in pipeline using ANN and SVM. In Proceedings of the 2014 IEEE 11th International Multi-Conference on Systems, Signals & Devices (SSD14), Barcelona, Spain, 11–14 February 2014; pp. 1–4. [Google Scholar]
  21. Rushd, S.; Rahman, M.; Arifuzzaman; Ali, S.A.; Shalabi, F.; Aktaruzzaman. Predicting pressure losses in the water-assisted flow of unconventional crude with machine learning. Pet. Sci. Technol. 2021, 39, 926–943. [Google Scholar] [CrossRef]
  22. Shi, J. A Study on High-Viscosity Oil-Water Two-Phase Flow in Horizontal Pipes. Ph.D. Thesis, Cranfield University, Cranfield, UK, 2015. Available online: http://dspace.lib.cranfield.ac.uk/handle/1826/9654 (accessed on 1 September 2015).
  23. Alexopoulos, E.C. Introduction to multivariate regression analysis. Hippokratia 2010, 14, 23. [Google Scholar]
  24. Yu, H.; Kim, S. SVM Tutorial-Classification, Regression and Ranking. Handb. Nat. Comput. 2012, 1, 479–506. [Google Scholar]
  25. Tabari, H.; Kisi, O.; Ezani, A.; Talaee, P.H. SVM, ANFIS, regression and climate based models for reference evapotranspiration modeling using limited climatic data in a semi-arid highland environment. J. Hydrol. 2012, 444–445, 78–89. [Google Scholar] [CrossRef]
  26. Sahoo, A.; Xu, H.; Jagannathan, S. Neural network-based adaptive event-triggered control of nonlinear continuous-time systems. In Proceedings of the 2013 IEEE International Symposium on Intelligent Control (ISIC), Hyderabad, India, 28–30 August 2013; pp. 35–40. [Google Scholar]
  27. Noble, W.S. What is a support vector machine? Nat. Biotechnol. 2006, 24, 1565–1567. [Google Scholar] [CrossRef]
  28. Rezaeianzadeh, M.; Tabari, H.; Yazdi, A.A.; Isik, S.; Kalin, L. Flood flow forecasting using ANN, ANFIS and regression models. Neural Comput. Appl. 2014, 25, 25–37. [Google Scholar] [CrossRef]
  29. Yegnanarayana, B. Artificial Neural Networks; PHI Learning Pvt. Ltd.: New Delhi, India, 2009. [Google Scholar]
  30. Sobie, E.A. Parameter Sensitivity Analysis in Electrophysiological Models Using Multivariable Regression. Biophys. J. 2009, 96, 1264–1274. [Google Scholar] [CrossRef] [Green Version]
  31. Sheela, K.G.; Deepa, S.N. Review on Methods to Fix Number of Hidden Neurons in Neural Networks. Math. Probl. Eng. 2013, 2013, 425740. [Google Scholar] [CrossRef] [Green Version]
  32. Wang, J.; Li, X.; Lu, L.; Fang, F. Parameter sensitivity analysis of crop growth models based on the extended Fourier Amplitude Sensitivity Test method. Environ. Model. Softw. 2013, 48, 171–182. [Google Scholar] [CrossRef]
Figure 1. Schematic presentation of water-assisted flow regime [8].
Figure 2. Hyperplanes for SVMs.
Figure 3. Structure of ANN Process.
Figure 4. Performance of ANN with different number of hidden neurons.
Figure 5. Comparison of MSE.
Figure 6. Comparison of R-square.
Figure 7. Comparison of MAPE.
Figure 8. Effect of average velocity.
Figure 9. Effect of oil viscosity.
Figure 10. Effect of water density.
Figure 11. Effect of oil density.
Figure 12. Effect of water viscosity.
Figure 13. Effect of water fraction.
Figure 14. Effect of pipe diameter.
Table 1. Experimental Parameters.
Parameter | Value | Short Notation
Number of samples | 225 | N.A.
Pipe diameter (m) | Average: 0.091; Min: 0.026; Max: 0.265; Standard deviation: 0.070 | Dia
Average velocity (m/s) | Average: 0.952; Min: 0.107; Max: 2.000; Standard deviation: 0.591 | Vel
Oil density (kg/m³) | Average: 921; Min: 871; Max: 987; Standard deviation: 38 | ODen
Oil viscosity (Pa·s) | Average: 5.50; Min: 0.16; Max: 28.45; Standard deviation: 6.79 | OVisc
Water density (kg/m³) | Average: 995; Min: 985; Max: 999; Standard deviation: 3.43 | WDen
Water viscosity (Pa·s) × 10⁻³ | Average: 0.829; Min: 0.496; Max: 1.138; Standard deviation: 0.184 | WVisc
Water fraction | Average: 0.370; Min: 0.070; Max: 0.844; Standard deviation: 0.163 | Frac
Pressure gradient (kPa/m) * | Average: 1.19; Min: 0.04; Max: 5.37; Standard deviation: 1.26 | PressGrad
* Output parameter.
Table 2. SVM Model.
Parameter | Value/Description
Kernel type | Radial basis
Number of support vectors | 83
σ | 0.1
B | 0.14
Table 3. ANN Model.
Parameter | Value/Description
Type | MLP
Number of processing neurons | 7
Learning algorithm | BP–CG
Processing layer activation function | Hyperbolic
Output layer activation function | Logistic
Table 4. MLR Model.
* Model Parameters | Estimate | p-Value
Intercept | −97.03 | 0.01
Dia | −32.06 | 0.00
Vel | 0.77 | 0.00
ODen | −0.01 | 0.00
OVisc | 0.07 | 0.00
WDen | 0.11 | 0.00
WVisc | −1995.36 | 0.00
* See Table 1 for notations of parameters.
Table 5. Comparison of Accuracy Parameters.
Accuracy Parameter | Model | Training Dataset | Test Dataset
MSE (kPa/m) | SVM | 0.24 | 0.28
MSE (kPa/m) | ANN | 0.03 | 0.04
MSE (kPa/m) | MLR | 0.74 | 0.66
R-square | SVM | 0.83 | 0.83
R-square | ANN | 0.98 | 0.98
R-square | MLR | 0.61 | 0.53
MAPE (%) | SVM | 61 | 68
MAPE (%) | ANN | 16 | 20
MAPE (%) | MLR | 38 | 59
Table 6. Comparison of Accuracy Parameters for Cross-Validation of MLR.
Accuracy Parameter | Trial | Training Dataset | Test Dataset
MSE (kPa/m) | Trial 1 | 0.52 | 0.46
MSE (kPa/m) | Trial 2 | 0.55 | 0.44
MSE (kPa/m) | Trial 3 | 0.56 | 0.40
R-square | Trial 1 | 0.58 | 0.56
R-square | Trial 2 | 0.62 | 0.55
R-square | Trial 3 | 0.61 | 0.49
MAPE (%) | Trial 1 | 35 | 58
MAPE (%) | Trial 2 | 39 | 61
MAPE (%) | Trial 3 | 35 | 59
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

