Article

Using Committee Neural Network for Prediction of Pressure Drop in Two-Phase Microchannels

by Arman Haghighi 1,2,3, Mostafa Safdari Shadloo 4,5,*, Akbar Maleki 6 and Mohammad Yaghoub Abdollahzadeh Jamalabadi 7,8,*

1 Institute of Research and Development, Duy Tan University, Da Nang 550000, Vietnam
2 Faculty of Electrical-Electronic Engineering, Duy Tan University, Da Nang 550000, Vietnam
3 Mechanical Engineering Department, California State Polytechnic University, Pomona, CA 91768, USA
4 CORIA-CNRS (UMR6614), Normandie University, INSA of Rouen, 76000 Rouen, France
5 Institute of Chemical Process Engineering (ICVT), University of Stuttgart, 70199 Stuttgart, Germany
6 Faculty of Mechanical Engineering, Shahrood University of Technology, Shahrood 3619995161, Iran
7 Department for Management of Science and Technology Development, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam
8 Faculty of Civil Engineering, Ton Duc Thang University, Ho Chi Minh City 700000, Vietnam
* Authors to whom correspondence should be addressed.
Appl. Sci. 2020, 10(15), 5384; https://doi.org/10.3390/app10155384
Submission received: 10 July 2020 / Revised: 1 August 2020 / Accepted: 2 August 2020 / Published: 4 August 2020
(This article belongs to the Special Issue Advancement in Phase Change Material Technologies)

Abstract

Numerous studies have proposed correlations for experimental results; however, significant errors remain in those predictions. In this study, artificial neural networks (ANNs) are considered for predicting the two-phase flow pressure drop in microchannels, incorporating four network structures: multilayer perceptron (MLP), radial basis function (RBF), general regression (GR), and cascade feedforward (CF). The pressure drop prediction by ANN uses six inputs (hydraulic diameter of the channel, critical temperature of the fluid, critical pressure of the fluid, acentric factor of the fluid, mass flux, and vapor quality). According to the experimental data, the optimal number of neurons in the hidden layer of each network lies in the range of 10–11. A committee neural network (CNN) is constructed through a genetic algorithm to improve the accuracy of the predictions. Ultimately, the genetic algorithm assigns a weight to each ANN model, representing the relative contribution of that ANN to the prediction of the pressure drop of two-phase flow within a microchannel. Assessment based on statistical indexes reveals that the results differ across models; the absolute average relative deviation percent (AARD%) for MLP, CF, GR, and CNN was 10.89, 10.65, 7.63, and 5.79, respectively. The CNN approach is thus demonstrated to be superior to the individual ANN techniques, even though the committee model is a simple linear combination.

1. Introduction

During recent years, much research has been conducted on two-phase flow inside microchannels to survey transport phenomena. Processes involving two-phase flow in microchannels are plentiful: microreactors, miniature heat exchangers, fuel injection in internal combustion equipment, miniature refrigeration systems, cooling of high-powered electronic systems, cooling of plasma-facing components in fusion reactors, and fuel cells with evaporator components [1,2,3,4].
Nevertheless, two-phase flows inside microchannels come with disadvantages, too: the fluid passing through narrow channels suffers a higher pressure drop. Hence, both pressure drop and heat transfer should be taken into account simultaneously during the design of these miniaturized heat exchangers.
Numerous experimental datasets have been collected and studied by scholars to investigate two-phase flow inside microchannels [5,6,7,8]. For instance, published experimental works explore fluid characteristics inside microchannels for two-phase flows of nitrogen, R12, R32, R134a, R236ea, R245fa, R404a, R410a, and R422 at different qualities, mass fluxes, and channel hydraulic diameters [9,10,11].
Harirchian and Garimella [12] investigated boiling of the dielectric liquid Fluorinert FC-77 in parallel microchannels to study heat transfer coefficients and pressure drop. They examined the flow and boiling phenomena for a comprehensive range of channels in four distinct flow regimes: slug, annular, bubbly, and alternating churn/annular/wispy-annular.
Pan et al. [13] experimentally examined the characteristics of flow boiling of deionized water inside a microchannel consisting of 14 parallel channels of 0.15 × 0.25 mm rectangular cross-section. They identified heat flux, mass flux, and inlet temperature as significant factors in the variation of pressure drop. Based on the experimental results, they concluded from the pressure drop trend that instability increases with increasing vapor quality.
Meanwhile, researchers have tried to validate the adiabatic pressure drop correlations of two-phase flow inside microchannels for different conditions by comparing them with the comprehensive databases at hand. Some researchers [14,15,16,17,18] have studied the heat transfer coefficient for flow boiling inside oxygen-free copper microchannel heat sinks; their experimental results are in good agreement with the corresponding numerical predictions. Maher et al. [19] proposed a set of new equations to describe the two-phase pressure drop for homogeneous flow of different working fluids at different tube diameters and fluid flux values.
Although many correlations have been provided for predicting pressure drop, a comprehensive correlation covering an extensive range of factors is too difficult to obtain with linear multivariate modeling. A common way to obtain a comprehensive correlation is to use non-linear empirical modeling techniques, such as artificial neural networks (ANNs), which are developed based on the function of the brain's biological neuron network [20,21,22,23].
The artificial neural network method has been used increasingly to solve engineering problems; its successful application depends on an appropriate algorithm for training and testing the network configurations. Picanço et al. [24] suggested that the functional relations between the pertinent dimensionless numbers in nucleate boiling can be correlated to convective heat transfer through a genetic algorithm. The flow pattern of a two-phase water and air stream was presented by Mehta et al. [25]; they found predictive correlations by applying the ANN method to a circular Y-junction minichannel with a 2.1 mm diameter.
According to the literature, limited effort has focused on evaluating the pressure drop by means of the ANN technique. Therefore, in this study, in addition to using the neural networks conventional in engineering sciences, we integrate these models using the concept of a committee neural network to achieve higher accuracy. As such, different neural network models were employed, and a committee neural network was then generated, based on a genetic algorithm, to integrate them.

2. Methods

2.1. Artificial Neural Networks

An artificial neural network (ANN) is a data processing, description, prediction, and clustering system inspired by the biological nervous system [26]. The nerve cell is the smallest component of this system. Data gained from experiments or measurements are the inputs of the system. The relative importance of the inputs is assigned by weights, and the weighted input values are combined by a summing function. Information is evaluated by processing the output of the summing function through an activation unit. The relationship between artificial neural cells and biological neurons is illustrated in Figure 1; as shown, an artificial neural cell and a biological neuron process data collected from the outside world in a similar manner.
For this purpose, the input layer takes variables from sources such as experimental data and performs mathematical operations on them. These values are then sent to the hidden layers, whose number can be specified according to the complexity of the problem. The output value for all ANN structures can be estimated as follows:
$Y_{i,\mathrm{pred}} = f\left( \sum_{i=1}^{n} w_i x_i + b \right)$  (1)
where the output value $Y_{i,\mathrm{pred}}$ is computed by applying the activation function $f$ to the sum of the weighted inputs $w_i x_i$ and the bias $b$.
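As a minimal illustration of Equation (1), the following Python sketch computes the output of a single artificial neuron; the numeric values are arbitrary placeholders, not data from this study:

```python
import numpy as np

def neuron_output(x, w, b, f=np.tanh):
    """Equation (1): y = f(sum_i(w_i * x_i) + b)."""
    return f(np.dot(w, x) + b)

# A neuron with three inputs (placeholder weights and bias)
x = np.array([0.2, 0.5, 0.1])
w = np.array([0.4, -0.3, 0.8])
print(neuron_output(x, w, b=0.1))
```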
In general, ANN models predict the output parameters according to the following procedure:
  • Training trends are determined based on datasets.
  • The architecture of the neural network is defined.
  • The network parameters are determined.
  • The feedforward back-propagation program is run.
  • Data are compared and analyzed.
In this study, the pressure drop, as the output, is essentially the target, and is predicted from the test input data. Inputs 1 to 6 are the hydraulic diameter of the channel, the critical temperature of the fluid, the critical pressure of the fluid, the acentric factor of the fluid (a concept referring to the molecule's shape), the mass flux, and the quality (the mass fraction of vapor in a saturated mixture), respectively. Table 1 presents the values of these input parameters.
Various models of the artificial neural network can be utilized for the prediction and comparison of experimental data; classification of these networks depends on their application. In this paper, four specific types of neural networks are employed for the estimations: multilayer perceptron, radial basis function, cascade feedforward, and general regression.

2.1.1. Multilayer Perceptron (MLP)

The multilayer perceptron (MLP) was developed in 1986 and has become the most commonly used type of artificial neural network for solving estimation problems [27]. These networks have a number of layers between their input and output layers, and their learning algorithm is essentially a gradient descent technique, namely the back-propagation algorithm. Back-propagation networks aim to bring the network outputs close to the desired outputs by reducing the error; this error reduction is achieved by adjusting the network weight values in each subsequent iteration.
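As a sketch only, an MLP of the kind used here (six inputs, one hidden layer, one output) can be set up with scikit-learn's MLPRegressor; the placeholder data, solver choice, and seeds below are our assumptions, not the authors' implementation:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Placeholder stand-ins for the 329 data points: six inputs (hydraulic
# diameter, critical temperature, critical pressure, acentric factor,
# mass flux, vapor quality) and the pressure drop target.
rng = np.random.default_rng(0)
X, y = rng.random((329, 6)), rng.random(329)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)

# One hidden layer with 11 neurons (the optimum found in Section 3.2),
# trained with a back-propagation-style gradient solver.
mlp = MLPRegressor(hidden_layer_sizes=(11,), activation="tanh",
                   solver="adam", max_iter=5000, random_state=0)
mlp.fit(X_tr, y_tr)
print("test R2:", mlp.score(X_te, y_te))
```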

2.1.2. Radial Basis Function (RBF)

Broomhead and Lowe developed a type of feedforward neural network named the radial basis function (RBF) neural network [28]. The main difference between the MLP and RBF methodologies is that the RBF method has hidden layers containing radial basis activation functions.
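The idea is easy to express directly: a Gaussian radial basis hidden layer followed by a linear output layer fitted by least squares. The sketch below uses illustrative centers, spread, and data; it is not the configuration trained in this study:

```python
import numpy as np

def rbf_design(X, centers, spread):
    """Gaussian activations: phi_ij = exp(-||x_i - c_j||^2 / (2 * spread^2))."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * spread ** 2))

rng = np.random.default_rng(1)
X, y = rng.random((100, 6)), rng.random(100)             # placeholder data
centers = X[rng.choice(len(X), size=11, replace=False)]  # hidden-unit centers
Phi = rbf_design(X, centers, spread=0.7)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)              # linear output layer
y_hat = Phi @ w                                          # network predictions
```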

2.1.3. Cascade Feedforward (CF)

Feedforward and cascade-forward networks both make use of the back-propagation algorithm for updating weights. The main feature of a cascade-forward network is that the neurons in each layer are connected to the neurons in all preceding layers as well as to the network inputs [29], so every layer receives the original input signals in addition to the outputs of the earlier layers.
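This cascade connectivity can be sketched as a forward pass in which each layer sees the raw inputs plus the outputs of every earlier layer; the layer sizes and random weights below are purely illustrative, and training is omitted:

```python
import numpy as np

def cascade_forward(x, layers, f=np.tanh):
    """Forward pass of a cascade-forward network: each layer receives the
    original inputs concatenated with all preceding layers' outputs."""
    features = x
    for W, b in layers:                  # W has shape (units, len(features))
        h = f(W @ features + b)
        features = np.concatenate([features, h])   # cascade connection
    return features[-1]                  # final output neuron

rng = np.random.default_rng(2)
x = rng.random(6)                                        # six inputs
layers = [(rng.standard_normal((5, 6)), rng.standard_normal(5)),
          (rng.standard_normal((1, 11)), rng.standard_normal(1))]
print(cascade_forward(x, layers))
```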

2.1.4. General Regression (GR)

The general regression (GR) neural network is used for estimating a continuous variable, similar to the standard regression technique; however, GR is superior with regard to fast learning and convergence to the optimal regression [30]. GR is structured in four layers: input, pattern, output, and summation. GR can be recognized as a fully connected network, since each layer is linked to the following layer through weighting vectors between neurons. This method is used to determine nonlinear relationships between variables.
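Functionally, a GR network amounts to a kernel-weighted average of the training targets, with the pattern layer evaluating Gaussian kernels and the summation layer forming the weighted sums. A sketch with an illustrative spread value and placeholder data:

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, spread):
    """GR network as kernel regression: prediction = sum(K * y) / sum(K)."""
    d2 = ((X_query[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-d2 / (2.0 * spread ** 2))      # pattern layer
    return (K @ y_train) / K.sum(axis=1)       # summation / output layers

rng = np.random.default_rng(3)
X_train, y_train = rng.random((50, 6)), rng.random(50)   # placeholder data
print(grnn_predict(X_train, y_train, rng.random((3, 6)), spread=0.5))
```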

2.1.5. Committee Neural Network (CNN)

In general, a committee neural network is made up of a set of neural networks, and it offers the benefits of the whole ensemble by integrating the products of the individual systems; such a model can therefore perform better than the finest individual network. CNN is used as an optimization method to reach the final result through a combination of the outputs of different models [31,32]. Figure 2 shows a schematic representation of a committee neural network. A combiner can integrate the models in various ways, the most common being simple averaging of the group. Alternatively, a genetic algorithm (GA), an approximation method incorporating the principles of genetics and natural selection, can be used to properly determine the weight (contribution) of each neural network in the CNN.

2.2. Experimental Database

In this study, the databank obtained from different experimental studies [33,34,35,36,37,38] was used for the investigation of pressure drop. Available experimental results in the literature demonstrate that the pressure drop in microchannels is influenced by the vapor quality and other fluid properties. Therefore, as reported in Table 2, a total of 329 empirical data points was selected for modeling, and various neural networks were examined for data training purposes. The collected experimental data cover qualities ranging from 0.007 to 0.96, mass fluxes of 196.8–1400 kg·m−2·s−1, acentric factors from 0.152 to 0.33, critical pressures of 3.73–11.3 MPa, critical temperatures of 345–419 K, and hydraulic diameters ranging from 0.078 to 6.2 mm.

2.3. Scaling Database and Transformation

A prerequisite for employing the ANN method is that the data carry no units when training begins. The reason is that non-linear activation functions, such as the hyperbolic tangent or the logistic function, constrain the neuron output to the range [0, 1]. The data were standardized for the ANN computations using the statistical normalization rule; after the calculations were completed, the network outputs were converted back to their original representation. To improve the efficiency of the training step in the ANN calculations, Equation (2) was used to normalize the input and target values.
$V_{\mathrm{normal}} = 0.01 + \dfrac{V - V_{\min}}{V_{\max} - V_{\min}} \times (0.99 - 0.01)$  (2)
In this equation, V signifies the variables, whether dependent or independent, and Vnormal represents the normalized value. The maximum and minimum values of each variable are denoted by Vmax and Vmin, respectively.
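Equation (2) and its inverse translate directly into code; the column-wise scaling below is a sketch of how we read the procedure, not the authors' script:

```python
import numpy as np

def normalize(V):
    """Equation (2): scale each column of V into [0.01, 0.99]."""
    Vmin, Vmax = V.min(axis=0), V.max(axis=0)
    return 0.01 + (V - Vmin) / (Vmax - Vmin) * (0.99 - 0.01), Vmin, Vmax

def denormalize(Vn, Vmin, Vmax):
    """Reverse conversion of network outputs to the original units."""
    return Vmin + (Vn - 0.01) / (0.99 - 0.01) * (Vmax - Vmin)
```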
Suitable inputs for the ANN were selected by means of the Pearson correlation coefficient, a measure for rating variables. The type and intensity of the connection between pairs of variables are determined by this coefficient, which takes values between −1 and +1, corresponding to the strongest inverse and direct associations, respectively; it equals 0 if there is no association between the variables. The variables with the largest absolute coefficients are the most important ones, since they interact most closely with the target. Thus, the relationship between the independent and dependent variables is most credible when the average absolute Pearson's coefficient (AAPC) is maximized for a particular transformation of the dependent variable. The Pearson correlation coefficient values between each pair of variables are presented in Figure 3.
It is evident from this figure that the strongest direct and inverse relationships belong to the critical temperature of the fluid and the hydraulic diameter of the channel, respectively, while the mass flux has the least linearity with the pressure drop in the microchannel.
Table 3 contains the inputs and the different transformations. The table indicates that the best possible outcome for the pressure drop is obtained by raising the output to the power of 0.5. Eventually, the transformation must be reversed to recover the modeled dependent variable and enable its comparison with the actual values.
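The transformation scan summarized in Table 3 can be sketched as follows; the candidate set shown is only a subset of the 26 transformations, and the data are placeholders:

```python
import numpy as np

def aapc(X, y):
    """Average absolute Pearson coefficient between each input column and y."""
    r = [np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
    return np.mean(np.abs(r))

rng = np.random.default_rng(4)
X, y = rng.random((329, 6)), rng.random(329) + 0.1   # positive placeholder target
candidates = {"y^2": y ** 2, "y": y, "y^0.5": y ** 0.5,
              "y^0.25": y ** 0.25, "log(y)": np.log(y)}
best = max(candidates, key=lambda name: aapc(X, candidates[name]))
print("transformation with maximum AAPC:", best)
```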

3. Results and Discussion

3.1. Performance Analysis of Models

To compare the precision of the models, various statistical indexes were utilized: the absolute average relative deviation percent (AARD%), the mean square error (MSE), the regression coefficient (R2), and the root mean square error (RMSE). These indexes are defined as follows:
$\mathrm{AARD\%} = \dfrac{1}{N} \sum_{i=1}^{N} \left| \dfrac{Y_{i,\mathrm{real}} - Y_{i,\mathrm{pred}}}{Y_{i,\mathrm{real}}} \right| \times 100$  (3)
$\mathrm{MSE} = \dfrac{1}{N} \sum_{i=1}^{N} \left( Y_{i,\mathrm{real}} - Y_{i,\mathrm{pred}} \right)^2$  (4)
$R^2 = \dfrac{\sum_{i=1}^{N} (Y_{i,\mathrm{real}} - \bar{Y}_{\mathrm{real}})^2 - \sum_{i=1}^{N} (Y_{i,\mathrm{real}} - Y_{i,\mathrm{pred}})^2}{\sum_{i=1}^{N} (Y_{i,\mathrm{real}} - \bar{Y}_{\mathrm{real}})^2}$  (5)
$\mathrm{RMSE} = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} \left( Y_{i,\mathrm{real}} - Y_{i,\mathrm{pred}} \right)^2}$  (6)
where $Y_{i,\mathrm{real}}$, $\bar{Y}_{\mathrm{real}}$, and $Y_{i,\mathrm{pred}}$ represent the real values, the mean of the real values, and the predicted pressure drop values, respectively, and N is the number of data points.
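All four indexes translate directly into code, for example (the function name is ours):

```python
import numpy as np

def indexes(y_real, y_pred):
    """AARD% (Eq. 3), MSE (Eq. 4), R2 (Eq. 5), and RMSE (Eq. 6)."""
    aard = np.mean(np.abs((y_real - y_pred) / y_real)) * 100
    mse = np.mean((y_real - y_pred) ** 2)
    ss_tot = np.sum((y_real - y_real.mean()) ** 2)
    r2 = (ss_tot - np.sum((y_real - y_pred) ** 2)) / ss_tot
    return aard, mse, r2, np.sqrt(mse)
```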

3.2. Selecting the Configuration of ANNs

Following the literature, the dataset was divided into testing and training portions comprising 15% and 85% of the entire experimental dataset, respectively; this was done to confirm the consistency of the expected models. Determining the weights and topology of a network is called the optimization process and yields the optimum function when specific performance equations are utilized. Determining the optimal number of neurons in the hidden layer is of great importance; the number of inputs and outputs, the architecture of the network, and the training algorithm can all affect it. As expressed by Ayoub and Demiral [39], determining the optimum number of hidden neurons is not simple and cannot be done without training, solely by examining different numbers of neurons and estimating each generalization error. Bar and Das [40] demonstrated that a single hidden layer with a sufficient number of processing units performs acceptably; as such, we chose a single hidden layer with multiple processing neurons in this study.
The trial and error method was used to develop a successful model. The dominant statistical parameters, such as the average absolute relative error, correlation coefficient, and root mean squared errors, were examined for all of the topologies. This provided the opportunity for careful inspection and visualization of the cross-validation or generalization error of each network design.
As mentioned before, the MLP was optimized by changing the neuron count of the hidden layer while the statistical indexes were monitored. The minimum number of data points needed for training is 2–11 times the number of weights and biases. Hence, for an MLP with one dependent and six independent variables, the number of hidden neurons N was bounded as follows:
$3 \times (8N + 1) \le 279\ (\mathrm{training}) \;\Rightarrow\; N \le 11$
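This bound and the trial-and-error search over topologies can be sketched as follows; the data are placeholders, and only the MSE is monitored here, whereas the study tracked AARD%, MSE, RMSE, and R2:

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

n_max = (279 // 3 - 1) // 8        # from 3*(8N + 1) <= 279, gives N <= 11

rng = np.random.default_rng(5)
X, y = rng.random((329, 6)), rng.random(329)             # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)

results = {}
for n in range(1, n_max + 1):      # trial-and-error over hidden neuron counts
    mlp = MLPRegressor(hidden_layer_sizes=(n,), max_iter=3000, random_state=0)
    mlp.fit(X_tr, y_tr)
    results[n] = mean_squared_error(y_te, mlp.predict(X_te))
print("optimum hidden neurons:", min(results, key=results.get))
```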
The best outcomes of the MLP network for the number of hidden neurons are presented in Table 4.
Considering the minimum MSE value and the maximum R-squared value, eleven hidden neurons is the optimum for the MLP network. The efficiencies of the different types of ANNs were compared to find a suitable model for pressure drop prediction; hence, a comparison was performed between the optimum MLP network structure and the other ANN models (CF, RBF, and GR). The sensitivity analyses for finding the optimum number of hidden neurons are listed in Table 5, Table 6 and Table 7. It should be noted that the number of hidden neurons is determined in the same way for all kinds of ANNs, following rules similar to those for the MLP.
The number of hidden neurons is not the significant parameter in the GR network; instead, the spread value must be tuned. Accordingly, the statistical indicators of one hundred different GR models were compared as the spread value was varied from 0.1 to 10 in steps of 0.1.
Table 8 lists the values of AARD%, MSE, RMSE, and R2 obtained from different ANN models with the best topology.
Rationally, the AARD% values of the best models are the minimum ones. As indicated in Table 8, the best prediction of the pressure drop belongs to the GR model. Considering the statistical error of the pressure drop, the AARD% of the GR model (7.63) yielded the best prediction compared with the other ANN models: MLP (10.89), RBF (59.16), and CF (10.65).
It should be noted that the statistical indices of R2, MSE, RMSE, and AARD% served as the basis for choosing this model amongst a set of 1620 distinct models (1100 RBF models, 200 CF models, 100 GR models, 220 MLP models).

3.3. Comparison of CNN with Other Neural Networks

As mentioned before, the CNN method consists of two steps for predicting the pressure drop: first, the committee neural network methodology is applied to determine the pressure drop parameters; second, a genetic algorithm is implemented to find the share (weight) of each algorithm in building the CNN.
By joining the outputs of the three models MLP, GR, and CF, the microchannel pressure drop was predicted with greater precision. It should be noted that, owing to the high error of the RBF method, the predictions of this model were not used in the CNN model. To determine the coefficient of each individual model, the GA is merged as follows:
$Y_{\mathrm{CNN}} = a\,Y_{\mathrm{MLP}} + b\,Y_{\mathrm{GR}} + c\,Y_{\mathrm{CF}} + d$
where the calculated values of a, b, c, and d are 0.0534, 0.9281, 0.01725, and −0.1574, respectively. Figure 4 illustrates the actual outputs versus the outputs predicted by the CNN for the entire pressure drop dataset. The concentration of symbols around the solid 45° line confirms that the CNN model is remarkably capable of forecasting results for the whole dataset.
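The weighting step can be sketched with an evolutionary optimizer; here SciPy's differential evolution stands in for the paper's genetic algorithm, and the three model outputs are synthetic placeholders:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(6)
y_real = rng.random(329)                       # placeholder measured targets
y_mlp, y_gr, y_cf = [y_real + 0.05 * rng.standard_normal(329) for _ in range(3)]

def committee_mse(params):
    a, b, c, d = params
    y_cnn = a * y_mlp + b * y_gr + c * y_cf + d   # the CNN combination rule
    return np.mean((y_real - y_cnn) ** 2)

# Evolutionary search for the contribution of each network plus an offset.
res = differential_evolution(committee_mse,
                             bounds=[(-1, 1)] * 3 + [(-1, 1)], seed=0)
a, b, c, d = res.x
```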
The absolute error histogram of the CNN network for the entire dataset is presented in Figure 5 and follows an approximately normal distribution. As shown, the distribution is stable, and most of the prediction errors fall within the −200 to 200 absolute error interval.
The comparison between the suggested CNN model and the three other models (MLP, CF, and GR) is presented in Figure 6, based on the AARD% statistical index. The figure indicates the performance superiority of the CNN model compared with the other models; in other words, it depicts the capability of the CNN to increase the accuracy of the prediction.
Finally, the predicted values of the CNN model for the microchannel pressure drop were evaluated against the different experimental datasets. The AARD% values in Figure 7 compare the predicted values with the experimental datasets. As can be seen, the CNN model predicts all experimental data with great accuracy; however, its predictive precision depends slightly on the distribution pattern of the data, as expected. This is consistent with the recent results of [41].

4. Conclusions

In this study, the pressure drop was predicted using a committee neural network (CNN). A total of 329 empirical data points was selected to model the pressure drop in microchannels under various conditions. The results reveal that, despite the low dependency of the pressure drop on the mass flux, it is highly dependent on the properties of the fluid; moreover, the plot of the pressure drop as a function of the variables revealed its dependency on the hydraulic diameter of the channel and the vapor quality. The weight coefficient of each network in the CNN was determined using a genetic algorithm, and the result was compared against a simple averaging procedure for combining the MLP, GR, and CF algorithms. The finest value of AARD% (5.79) was obtained by combining all algorithms using the weighted averaging method, with GA-derived weights of 0.0534, 0.9281, and 0.01725 for MLP, GR, and CF, respectively. This study demonstrates that CNN results are more precise when various techniques are available for solving a problem; the superior results compared with the other methods therefore indicate a high potential for prediction in different fields of applied science.

Author Contributions

Writing—original draft, Formal analysis, A.H.; Methodology, Supervision, M.S.S.; Formal analysis, Investigation, A.M.; Writing—review & editing, Validation, M.Y.A.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

Nomenclature

b  Bias
f  Activation function
w  Weight
x  Input

Abbreviations

AARD%  Average absolute relative deviation percent
AI  Artificial intelligence
ANN  Artificial neural network
CF  Cascade feedforward
GR  General regression
MLP  Multilayer perceptron
MSE  Mean square error
RBF  Radial basis function
RMSE  Root mean square error
R2  Regression coefficient

Subscripts/superscripts

pred  Predicted variable
real  Real variable
max  Maximum
normal  Normalized values

References

1. Almasi, F.; Shadloo, M.S.; Hadjadj, A.; Ozbulut, M.; Tofighi, N.; Yildiz, M. Numerical simulations of multi-phase electro-hydrodynamics flows using a simple incompressible smoothed particle hydrodynamics method. Comput. Math. Appl. 2019.
2. Sadeghi, R.; Shadloo, M.S.; Hopp-Hirschler, M.; Hadjadj, A.; Nieken, U. Three-dimensional lattice Boltzmann simulations of high density ratio two-phase flows in porous media. Comput. Math. Appl. 2018, 75, 2445–2465.
3. Giannetti, N.; Redo, M.A.; Jeong, J.; Yamaguchi, S.; Saito, K.; Kim, H. Prediction of two-phase flow distribution in microchannel heat exchangers using artificial neural network. Int. J. Refrig. 2020, 111, 53–62.
4. Mahvi, A.J.; Garimella, S. Two-phase flow distribution of saturated refrigerants in microchannel heat exchanger headers. Int. J. Refrig. 2019, 104, 84–94.
5. Chachereau, Y.; Chanson, H. Free-Surface Turbulent Fluctuations and Air-Water Flow Measurements in Hydraulics Jumps with Small Inflow Froude Numbers; The University of Queensland: Queensland, Australia, 2010.
6. Chen, X.; Hou, Y.; Chen, S.; Liu, X.; Zhong, X. Characteristics of frictional pressure drop of two-phase nitrogen flow in horizontal smooth mini channels in diabatic/adiabatic conditions. Appl. Therm. Eng. 2019, 162, 114312.
7. Bahmanpouri, F. Experimental Study of Air Entrainment in Hydraulic Jump on Pebbled Rough Bed. Ph.D. Thesis, The University of Napoli Federico II, Naples, Italy, 2019.
8. Lillo, G.; Mastrullo, R.; Mauro, A.W.; Pelella, F.; Viscito, L. Experimental thermal and hydraulic characterization of R448A and comparison with R404A during flow boiling. Appl. Therm. Eng. 2019, 161, 114146.
9. Filho, E.d.S.; Aguiar, G.M.; Ribatski, G. Flow boiling heat transfer of R134a in a 500 µm ID tube. J. Braz. Soc. Mech. Sci. Eng. 2020, 42, 254.
10. Lewis, J.M.; Wang, Y. Two-phase frictional pressure drop in a thin mixed-wettability microchannel. Int. J. Heat Mass Transf. 2019, 128, 649–667.
11. Vozhakov, I.S.; Ronshin, F.V. Experimental and theoretical study of two-phase flow in wide microchannels. Int. J. Heat Mass Transf. 2019, 136, 312–323.
12. Harirchian, T.; Garimella, S.V. Flow regime-based modeling of heat transfer and pressure drop in microchannel flow boiling. Int. J. Heat Mass Transf. 2012, 55, 1246–1260.
13. Pan, L.; Yan, R.; Huang, H.; He, H.; Li, P. Experimental study on the flow boiling pressure drop characteristics in parallel multiple microchannels. Int. J. Heat Mass Transf. 2018, 116, 642–654.
14. Lee, J.; Mudawar, I. Two-phase flow in high-heat-flux micro-channel heat sink for refrigeration cooling applications: Part II—heat transfer characteristics. Int. J. Heat Mass Transf. 2005, 48, 941–955.
15. Qu, W.; Mudawar, I. Flow boiling heat transfer in two-phase micro-channel heat sinks—I. Experimental investigation and assessment of correlation methods. Int. J. Heat Mass Transf. 2003, 46, 2755–2771.
16. Lee, S.; Mudawar, I. Investigation of flow boiling in large micro-channel heat exchangers in a refrigeration loop for space applications. Int. J. Heat Mass Transf. 2016, 97, 110–129.
17. Ramesh, B.; Jayaramu, P.; Gedupudi, S. Subcooled flow boiling of water in a copper microchannel: Experimental investigation and assessment of predictive methods. Int. Commun. Heat Mass Transf. 2019, 103, 24–30.
18. Raj, S.; Pathak, M.; Khan, M.K. Effects of flow loop components in suppressing flow boiling instabilities in microchannel heat sinks. Int. J. Heat Mass Transf. 2019, 141, 1238–1251.
19. Maher, D.; Hana, A.; Habib, S. New Correlations for Two Phase Flow Pressure Drop in Homogeneous Flows Model. Therm. Eng. 2020, 67, 92–105.
20. Moayedi, H.; Aghel, B.; Vaferi, B.; Foong, L.K.; Bui, D.T. The feasibility of Levenberg–Marquardt algorithm combined with imperialist competitive computational method predicting drag reduction in crude oil pipelines. J. Pet. Sci. Eng. 2020, 185, 106634.
21. Davoudi, E.; Vaferi, B. Applying artificial neural networks for systematic estimation of degree of fouling in heat exchangers. Chem. Eng. Res. Des. 2018, 130, 138–153.
22. Aghel, B.; Rezaei, A.; Mohadesi, M. Modeling and prediction of water quality parameters using a hybrid particle swarm optimization–neural fuzzy approach. Int. J. Environ. Sci. Technol. 2018, 16, 4823–4832.
23. Moayedi, H.; Aghel, B.; Foong, L.K.; Bui, D.T. Feature validity during machine learning paradigms for predicting biodiesel purity. Fuel 2020, 262, 116498.
24. Picanço, M.A.S.; Passos, J.C.; Filho, E.P.B. Heat Transfer Coefficient Correlation for Convective Boiling Inside Plain and Microfin Tubes Using Genetic Algorithms. Heat Transf. Eng. 2009, 30, 316–323.
25. Mehta, H.B.; Pujara, M.P.; Banerjee, J. Prediction of two phase flow pattern using artificial neural network. Network 2013, 5, 6.
26. Da Silva, I.N.; Spatti, D.H.; Flauzino, R.A.; Liboni, L.; Alves, S.F.D.R. Artificial Neural Network Architectures and Training Processes. In Artificial Neural Networks; Springer: Cham, Switzerland, 2017; pp. 21–28.
27. Liu, W.; Shadloo, M.S.; Tlili, I.; Maleki, A.; Bach, Q.-V. The effect of alcohol–gasoline fuel blends on the engines' performances and emissions. Fuel 2020, 276, 117977.
28. Broomhead, D.S.; Lowe, D. Radial Basis Functions, Multi-Variable Functional Interpolation and Adaptive Networks; Royal Signals and Radar Establishment Malvern: Malvern, Worcestershire, UK, 1988.
29. Zheng, Y.; Shadloo, M.S.; Nasiri, H.; Maleki, A.; Karimipour, A.; Tlili, I. Prediction of viscosity of biodiesel blends using various artificial model and comparison with empirical correlations. Renew. Energy 2020, 153, 1296–1306.
30. Zhao, C.; Wu, G.; Li, Y. Measurement of water content of oil-water two-phase flows using dual-frequency microwave method in combination with deep neural network. Measurement 2019, 131, 92–99.
31. Bagheripour, P. Committee neural network model for rock permeability prediction. J. Appl. Geophys. 2014, 104, 142–148.
32. Barzegar, R.; Moghaddam, A.A. Combining the advantages of neural networks using the concept of committee machine in the groundwater salinity prediction. Model. Earth Syst. Environ. 2016, 2, 26.
33. Rosato, A.; Mauro, A.W.; Mastrullo, R.; Vanoli, G.P. Experiments during flow boiling of a R22 drop-in: R422D adiabatic pressure gradients. Energy Convers. Manag. 2009, 50, 2613–2621.
34. Field, B.S.; Hrnjak, P. Adiabatic Two-Phase Pressure Drop of Refrigerants in Small Channels. In Proceedings of the ASME 4th International Conference on Nanochannels, Microchannels, and Minichannels, Parts A and B, Limerick, Ireland, 19–21 June 2006; Volume 2006, pp. 1173–1180.
35. Zhang, M.; Webb, R.L. Correlation of two-phase friction for refrigerants in small-diameter tubes. Exp. Therm. Fluid Sci. 2001, 25, 131–139.
36. Hwang, Y.W.; Kim, M.S. The pressure drop in microtubes and the correlation development. Int. J. Heat Mass Transf. 2006, 49, 1804–1812.
37. Yang, C.-Y.; Webb, R.L. Friction pressure drop of R-12 in small hydraulic diameter extruded aluminum tubes with and without micro-fins. Int. J. Heat Mass Transf. 1996, 39, 801–809.
38. Cavallini, A.; Del Col, D.; Doretti, L.; Matkovič, M.; Rossetto, L.; Zilio, C. Two-phase frictional pressure gradient of R236ea, R134a and R410A inside multi-port mini-channels. Exp. Therm. Fluid Sci. 2005, 29, 861–870.
39. Ayoub, M.A.; Demiral, B.M. Application of resilient back-propagation neural networks for generating a universal pressure drop model in pipelines. Univ. Khartoum Eng. J. 2011, 1, 9–21.
40. Bar, N.; Das, S.K. Gas-non-Newtonian Liquid Flow Through Horizontal Pipe–Gas Holdup and Pressure Drop Prediction using Multilayer Perceptron. Am. J. Fluid Dyn. 2012, 2, 7–16.
41. Shadloo, M.S.; Rahmat, A.; Karimipour, A.; Wongwises, S. Estimation of pressure drop of two-phase fluid in horizontal long pipes using artificial neural networks. J. Energy Resour. Technol. 2020, 1–21.
Figure 1. Schematic structure of the artificial neural network for estimating pressure drop.
Figure 2. Structure of the committee neural network.
Figure 3. Pearson correlation coefficients between each pair of independent variables.
Figure 4. Performance of the committee neural network (CNN) for prediction.
Figure 5. Histogram of CNN for all data.
Figure 6. Evaluation between predicted values of three artificial neural network models and committee neural network.
Figure 7. Evaluation of predicted values with regards to various experimental datasets in the literature.
Table 1. Range of the input parameters.

Variable | Name | Minimum Value | Maximum Value
Hydraulic diameter of channel (mm) | Input 1 | 0.078 | 3
Critical temperature of fluid (K) | Input 2 | 345 | 419
Critical pressure of fluid (MPa) | Input 3 | 3.73 | 11.3
Acentric factor of fluid (-) | Input 4 | 0.152 | 0.37
Mass flux (kg·m−2·s−1) | Input 5 | 196.9 | 1400
Quality (%) | Input 6 | 0.007 | 0.961
Table 2. Range and conditions of various experimental datasets.

Hydraulic Diameter of Channel (mm) | Critical Temperature of Fluid (K) | Critical Pressure of Fluid (MPa) | Acentric Factor of Fluid (-) | Mass Flux (kg·m−2·s−1) | Quality (%) | Pressure Drop (kPa·m−1) | Ref.
1.4 | 345–419 | 3.78–4.9 | 0.296–0.37 | 196.8–1394.4 | 0.26–0.79 | 15.8–145 | [33]
0.078–2.1 | 345–406 | 4.07–11.3 | 0.152–0.33 | 290–450 | 0.007–0.968 | 1.4–3814.7 | [34]
2.13–6.2 | 345.2–374 | 3.73–4.99 | 0.215–0.33 | 400–1000 | 0.08–0.86 | 5.33–34.8 | [35]
2.64 | 385 | 4.14 | 0.179 | 400–1400 | 0.11–0.9 | 4.36–56.5 | [36]
0.24–0.79 | 374 | 4.07 | 0.33 | 270–950 | 0.08–0.95 | 34.1–2318.18 | [37]
3 | 352.65 | 3.9 | 0.284 | 198–350 | 0.19–0.93 | 9.72–82.4 | [38]
Table 3. Pearson's coefficient values for different transformations.

Number | Function | Input 1 | Input 2 | Input 3 | Input 4 | Input 5 | Input 6 | AAPC
1 | Output^15 | −0.0901 | 0.1866 | 0.2583 | −0.0386 | −0.0197 | 0.1399 | 0.1222
2 | Output^14 | −0.0937 | 0.1941 | 0.2687 | −0.0402 | −0.0205 | 0.1451 | 0.127
3 | Output^13 | −0.0977 | 0.2026 | 0.2805 | −0.0419 | −0.0214 | 0.151 | 0.1325
4 | Output^12 | −0.1022 | 0.2122 | 0.2938 | −0.0439 | −0.0223 | 0.1575 | 0.1387
5 | Output^11 | −0.1072 | 0.2232 | 0.3089 | −0.0461 | −0.0234 | 0.1649 | 0.1456
6 | Output^10 | −0.1128 | 0.2356 | 0.3261 | −0.0486 | −0.0245 | 0.1732 | 0.1535
7 | Output^2 | −0.2513 | 0.4708 | 0.6433 | −0.1434 | −0.0379 | 0.3129 | 0.3099
8 | Output | −0.4043 | 0.501 | 0.697 | −0.2293 | −0.0594 | 0.3419 | 0.3721
9 | Output^0.75 | −0.4757 | 0.4917 | 0.6936 | −0.2519 | −0.0744 | 0.3372 | 0.3874
10 | Output^0.5 | −0.5610 | 0.4676 | 0.6727 | −0.2643 | −0.0949 | 0.3196 | 0.3967
11 | Output^0.25 | −0.6489 | 0.4254 | 0.6274 | −0.2559 | −0.1188 | 0.2855 | 0.3937
12 | Output^0.1 | −0.6941 | 0.3919 | 0.5875 | −0.2377 | −0.1322 | 0.2584 | 0.3836
13 | Output^−0.1 | 0.7349 | −0.3412 | −0.5226 | 0.1987 | 0.1454 | −0.2189 | 0.3603
14 | Output^−0.25 | 0.7465 | −0.3021 | −0.4693 | 0.1618 | 0.15 | −0.1909 | 0.3368
15 | Output^−0.5 | 0.7298 | −0.2422 | −0.3824 | 0.097 | 0.1469 | −0.1545 | 0.2921
16 | Output^−0.75 | 0.6811 | −0.1939 | −0.3082 | 0.0407 | 0.1336 | −0.1338 | 0.2486
17 | Output^−1 | 0.6193 | −0.1578 | −0.2508 | −0.0009 | 0.1162 | −0.1255 | 0.2117
18 | Output^−2 | 0.4213 | −0.0843 | −0.1384 | −0.0663 | 0.0642 | −0.1297 | 0.1507
19 | Output^−10 | 0.1041 | 0.0198 | −0.0360 | −0.0800 | 0.0278 | −0.0967 | 0.0607
20 | Output^−11 | 0.0965 | 0.0221 | −0.0338 | −0.0797 | 0.0275 | −0.0939 | 0.0589
21 | Output^−12 | 0.0908 | 0.0239 | −0.0322 | −0.0795 | 0.0273 | −0.0917 | 0.0575
22 | Output^−13 | 0.0865 | 0.0254 | −0.0310 | −0.0793 | 0.0272 | −0.0899 | 0.0565
23 | Output^−14 | 0.0833 | 0.0265 | −0.0300 | −0.0792 | 0.0271 | −0.0886 | 0.0558
24 | Output^−15 | 0.0809 | 0.0274 | −0.0293 | −0.0791 | 0.0271 | −0.0876 | 0.0552
25 | exp(Output) | - | - | - | - | - | - | -
26 | Log(Output) | −0.7178 | 0.3671 | 0.5563 | −0.2200 | −0.1397 | 0.2387 | 0.3733
Table 4. Determining the best number of hidden neurons for the multilayer perceptron (MLP).

Hidden Neurons | Dataset | AARD% | MSE | RMSE | R2
1 | Train | 101.57 | 113,482.04 | 336.871 | 0.9004
1 | Test | 129.45 | 140,778.92 | 375.205 | 0.9111
1 | Total | 104.36 | 116,220.03 | 340.911 | 0.9017
2 | Train | 53.22 | 55,622.25 | 235.844 | 0.9502
2 | Test | 68.21 | 279,407.52 | 528.59 | 0.8965
2 | Total | 54.73 | 78,068.80 | 279.408 | 0.9362
3 | Train | 37.34 | 57,662.92 | 240.131 | 0.9519
3 | Test | 53.07 | 70,940.72 | 266.347 | 0.9407
3 | Total | 38.92 | 58,994.74 | 242.888 | 0.9507
4 | Train | 34.92 | 52,085.74 | 228.223 | 0.96
4 | Test | 28.75 | 28,003.70 | 167.343 | 0.9048
4 | Total | 34.3 | 49,670.22 | 222.868 | 0.9588
5 | Train | 27.89 | 49,789.84 | 223.136 | 0.957
5 | Test | 25.89 | 60,526.28 | 246.021 | 0.9626
5 | Total | 27.69 | 50,866.74 | 225.537 | 0.9576
6 | Train | 25 | 44,457.30 | 210.849 | 0.9624
6 | Test | 32.34 | 66,664.92 | 258.195 | 0.9532
6 | Total | 25.73 | 46,684.82 | 216.067 | 0.9612
7 | Train | 17.86 | 7611.552 | 87.244 | 0.9941
7 | Test | 39.8 | 23,686.50 | 153.904 | 0.9832
7 | Total | 20.06 | 9223.933 | 96.041 | 0.9929
8 | Train | 16.82 | 5794.544 | 76.122 | 0.995
8 | Test | 23.06 | 16,642.14 | 129.004 | 0.9919
8 | Total | 17.45 | 6882.601 | 82.961 | 0.9946
9 | Train | 12.7 | 3905.889 | 62.497 | 0.997
9 | Test | 27.03 | 7658.826 | 87.515 | 0.9892
9 | Total | 14.13 | 4282.324 | 65.439 | 0.9965
10 | Train | 12.1 | 3631.65 | 60.263 | 0.9972
10 | Test | 16.12 | 8260.039 | 90.885 | 0.9893
10 | Total | 12.5 | 4095.895 | 63.999 | 0.9966
11 | Train | 10.33 | 5773.065 | 75.981 | 0.9956
11 | Test | 15.91 | 3095.685 | 55.639 | 0.9974
11 | Total | 10.89 | 5504.513 | 74.192 | 0.9957
Table 5. Performance of the cascade feedforward (CF) model with different hidden neurons.

Hidden Neurons | Dataset | AARD% | MSE | RMSE | R2
1 | Train | 41.35 | 77,007.54 | 277.502 | 0.9338
1 | Test | 43.68 | 72,265.35 | 268.822 | 0.9546
1 | Total | 41.59 | 76,531.88 | 276.644 | 0.936
2 | Train | 36.82 | 57,041.61 | 238.834 | 0.949
2 | Test | 31.08 | 106,981.54 | 327.08 | 0.9572
2 | Total | 36.24 | 62,050.78 | 249.1 | 0.9482
3 | Train | 33.54 | 49,877.14 | 223.332 | 0.9573
3 | Test | 48.86 | 87,864.31 | 296.419 | 0.941
3 | Total | 35.08 | 53,687.40 | 231.705 | 0.9553
4 | Train | 27.2 | 52,898.02 | 229.996 | 0.9572
4 | Test | 41.79 | 31,827.33 | 178.402 | 0.9651
4 | Total | 28.66 | 50,784.55 | 225.354 | 0.9577
5 | Train | 23.85 | 33,742.42 | 183.691 | 0.9721
5 | Test | 23.06 | 54,227.72 | 232.868 | 0.9552
5 | Total | 23.77 | 35,797.18 | 189.201 | 0.9704
6 | Train | 19.92 | 10,395.52 | 101.958 | 0.9913
6 | Test | 33.18 | 25,249.53 | 158.901 | 0.9845
6 | Total | 21.25 | 11,885.43 | 109.02 | 0.9904
7 | Train | 13.88 | 5371.999 | 73.294 | 0.9961
7 | Test | 20.71 | 5675.702 | 75.337 | 0.9934
7 | Total | 14.57 | 5402.462 | 73.501 | 0.9957
8 | Train | 12.99 | 5365.915 | 73.252 | 0.9958
8 | Test | 22.3 | 4866.778 | 69.762 | 0.9946
8 | Total | 13.92 | 5315.849 | 72.91 | 0.9957
9 | Train | 11.48 | 5250.163 | 72.458 | 0.9958
9 | Test | 23.74 | 10,279.86 | 101.39 | 0.9911
9 | Total | 12.71 | 5754.662 | 75.859 | 0.9953
10 | Train | 9.78 | 2434.457 | 49.34 | 0.998
10 | Test | 18.48 | 7820.685 | 88.435 | 0.9926
10 | Total | 10.65 | 2974.717 | 54.541 | 0.9976
Table 6. Performance of the radial basis function (RBF) model with different hidden neurons.

Hidden Neurons | Spread | Dataset | AARD% | MSE | RMSE | R2
1 | 2.1212 | Train | 245.61 | 168,040.37 | 409.927 | -
1 | 2.1212 | Test | 113.14 | 235,995.95 | 485.794 | 0.8892
1 | 2.1212 | Total | 232.32 | 174,856.58 | 418.159 | 0.8624
2 | 1.0101 | Train | 127.73 | 132,611.39 | 364.158 | 0.8856
2 | 1.0101 | Test | 131.98 | 150,529.48 | 387.981 | 0.8964
2 | 1.0101 | Total | 128.15 | 134,408.64 | 366.618 | 0.8862
3 | 0.7071 | Train | 98.76 | 101,348.30 | 318.352 | 0.9176
3 | 0.7071 | Test | 141.53 | 94,193.30 | 306.909 | 0.8996
3 | 0.7071 | Total | 103.05 | 100,630.63 | 317.223 | 0.9151
4 | 0.8081 | Train | 79.53 | 95,710.78 | 309.372 | 0.9196
4 | 0.8081 | Test | 140.55 | 103,820.68 | 322.212 | 0.9242
4 | 0.8081 | Total | 85.65 | 96,524.24 | 310.684 | 0.919
5 | 0.6061 | Train | 84.45 | 73,306.36 | 270.751 | 0.9367
5 | 0.6061 | Test | 69.94 | 100,134.85 | 316.441 | 0.939
5 | 0.6061 | Total | 83 | 75,997.36 | 275.676 | 0.9368
6 | 0.6061 | Train | 84.04 | 73,687.32 | 271.454 | 0.9413
6 | 0.6061 | Test | 66.43 | 49,203.87 | 221.819 | 0.9572
6 | 0.6061 | Total | 82.28 | 71,231.53 | 266.892 | 0.9404
7 | 0.6061 | Train | 78.14 | 68,354.82 | 261.448 | 0.9443
7 | 0.6061 | Test | 58.43 | 83,310.02 | 288.635 | 0.9363
7 | 0.6061 | Total | 76.17 | 69,854.89 | 264.301 | 0.9419
8 | 0.6061 | Train | 71.69 | 63,860.08 | 252.706 | 0.9466
8 | 0.6061 | Test | 55.92 | 79,202.58 | 281.43 | 0.9464
8 | 0.6061 | Total | 70.11 | 65,398.99 | 255.732 | 0.9461
9 | 0.6061 | Train | 67.33 | 56,511.36 | 237.721 | 0.9546
9 | 0.6061 | Test | 73.26 | 122,784.88 | 350.407 | 0.921
9 | 0.6061 | Total | 67.92 | 63,158.85 | 251.314 | 0.9471
10 | 0.7071 | Train | 63.84 | 65,435.85 | 255.804 | 0.9468
10 | 0.7071 | Test | 78.67 | 88,688.61 | 297.806 | 0.9383
10 | 0.7071 | Total | 65.33 | 67,768.20 | 260.323 | 0.9431
11 | 0.7071 | Train | 57.93 | 57,530.63 | 239.855 | 0.9467
11 | 0.7071 | Test | 70.16 | 143,614.80 | 378.965 | 0.9632
11 | 0.7071 | Total | 59.16 | 66,165.21 | 257.226 | 0.9469
Table 7. Accuracy of the general regression (GR) for prediction.

Spread | Dataset | AARD% | MSE | RMSE | R2
0.0056 | Train | 2.42 | 2688.514 | 51.851 | 0.9979
0.0056 | Test | 34.47 | 129,915.53 | 360.438 | 0.9381
0.0056 | Total | 7.63 | 15,449.89 | 124.298 | 0.9881
Table 8. Comparison of the statistical values for various artificial neural network (ANN) models.

Model | Dataset | AARD% | MSE | RMSE | R2
MLP | Train | 10.33 | 5773.065 | 75.981 | 0.9956
MLP | Test | 15.91 | 3095.685 | 55.639 | 0.9974
MLP | Total | 10.89 | 5504.513 | 74.192 | 0.9957
CF | Train | 9.78 | 2434.457 | 49.34 | 0.998
CF | Test | 18.48 | 7820.685 | 88.435 | 0.9926
CF | Total | 10.65 | 2974.717 | 54.541 | 0.9976
GR | Train | 2.42 | 2688.514 | 51.851 | 0.9979
GR | Test | 34.47 | 129,915.53 | 360.438 | 0.9381
GR | Total | 7.63 | 15,449.89 | 124.298 | 0.9881
RBF | Train | 57.93 | 57,530.63 | 239.855 | 0.9467
RBF | Test | 70.16 | 143,614.80 | 378.965 | 0.9632
RBF | Total | 59.16 | 66,165.21 | 257.226 | 0.9469
