Article

Machine Learning for Prediction of Heat Pipe Effectiveness

1 Mechanical Engineering, Kalasalingam Academy of Research and Education, Krishnankoil 626126, India
2 Automobile Engineering, Kalasalingam Academy of Research and Education, Krishnankoil 626126, India
3 School of Mechanical Engineering, Lovely Professional University, Phagwara 144411, India
4 Peter the Great St. Petersburg Polytechnic University, 195251 Saint Petersburg, Russia
5 Division of Research & Innovation, Uttaranchal University, Dehradun 248007, India
6 Saint-Petersburg University of Aerospace Instrumentation, 190000 Saint Petersburg, Russia
7 Department of Mechanical Engineering, K. R. Mangalam University, Gurgaon 122103, India
* Author to whom correspondence should be addressed.
Energies 2022, 15(9), 3276; https://doi.org/10.3390/en15093276
Submission received: 25 March 2022 / Revised: 26 April 2022 / Accepted: 27 April 2022 / Published: 29 April 2022
(This article belongs to the Special Issue Enhanced Two-Phase Heat Transfer)

Abstract: This paper details the selection of machine learning models for predicting the effectiveness of a heat pipe system in a concentric tube exchanger. Heat exchanger experiments with methanol as the working fluid were conducted. The angle was varied from 0° to 90°, the temperature from 50 °C to 70 °C, and the flow rate from 40 to 120 litres per hour. Experiments were conducted at each combination of the input parameters and the effectiveness was measured for each trial. Multiple machine learning algorithms were considered for the prediction. The experimental data were divided into subsets and the performance of the machine learning model was analysed for each subset. For the overall analysis, which included all three parameters, the random forest algorithm returned the best results, with a mean absolute error of 1.176 and a root-mean-square error of 1.542.

1. Introduction

Heat pipes are utilized in various products, such as electronics, solar collectors, and heat exchangers, to remove/transfer heat away from the system. Heat pipe techniques have been successfully implemented in most industry applications [1,2]. Heat pipe systems have also been successfully implemented in automobiles for exhaust gas recovery [3]. Usage of heat pipes with environmentally friendly refrigerants has also been reported [4]. The loop heat pipe was used for spacecraft applications and heat transmissibility was considerably increased in this experimentation [5].
Kempers et al. [6] analysed screen mesh wicks in copper heat pipes and reported that increasing the number of mesh layers improved performance. Lower thermal resistance using deionised water was reported in [7]. Filling ratio, inclination angle, number of turns and heat input were tested in various articles, and it was reported that a 50% filling ratio gives maximum performance [8,9].
The performance of heat pipes in satellite applications was investigated and it was reported that minimum thermal resistance was observed when ammonia was used [10]. Numerical studies were reported on various models, and they have shown good agreement with the experimental results [11,12]. Numerical studies using a Navier–Stokes equation have also been successfully used for the modelling of heat pipe performance [13]. The thermal energy storage system was analysed by inserting numerous heat pipes among a heat-carrying fluid [14].
Studies of the gravitational orientation, wick structure and working fluid of heat pipes have reported improved performance [15,16,17,18,19]. Heat pipes have been studied with various inclination angles, wick constructions and working fluids [20,21,22,23]. In a similar study, the highest heat transfer coefficient was reported at 60° and a 50% fill ratio [24]. In studies involving a pulsating heat pipe with DI water, the lowest thermal resistance of 0.077 K/W was achieved at inclined angles [25].
Traditionally, statistical and heuristic techniques have been used by researchers to develop such prediction models. Traditional methods rely on generating a single relational equation, and such a model may not fit all the data points correctly, leading to non-uniform prediction. Machine learning is a newer technique that allows us to obtain a better representation of the process, as it helps to identify subtle correlations that may exist within the dataset. ML methods learn patterns for prediction rather than fitting a single equation, which leads to much greater flexibility in prediction. The development of prediction and optimization models is a valuable way of understanding any mechanical system, as it serves as a reference for future researchers and industry experts. Machine learning (ML) methods and techniques have been reported in various areas of manufacturing in the drive towards Industry 4.0 [26]. ML has also been used as a tool for manufacturing diagnostics [27] and as an advanced data analytics solution [28]. ML applications have also been reported in many thermal applications, such as predicting the performance of fins [29] and the air injection effect [30] in heat exchangers. A detailed review [31] shows how ML methods have been widely adopted in heat exchanger processes for the prediction of different performance indicators.
Based on the literature it was observed that ML modelling for heat pipe exchangers with methanol as a working fluid is an area which needs attention. Hence, this article depicts the process for the development of an ML model that can model the effectiveness of the heat exchanger process. Multiple models are developed, and they are further compared to select the best among them. The algorithms are implemented through the WEKA open-source software, which contains the algorithms for various ML methods [32,33].

2. Materials and Methods

2.1. Fabrication

The heat pipe is fabricated and substitutes for the concentric tube of a traditional heat exchanger, as shown in Figure 1. Copper is used for the heat pipe, and galvanized iron is used for the shell. The total pipe length is 1000 mm, of which 700 mm is inserted inside the evaporator shell and 300 mm inside the condenser shell. The outer and inner diameters of the heat pipe are 19 mm and 17 mm, respectively. The evaporator and condenser shells have diameters of 50 mm and 35 mm, respectively. The total lengths of the evaporator and condenser shell sections are 1000 mm and 300 mm, respectively. Two fluid tanks are fabricated for the hot and cold fluid circuits. The hot fluid tank (5 L) has an immersion electric heater with a 2000 W capacity. Two rotameters with a capacity of 3 LPM are used for flow measurement and control. Temperatures are measured using thermocouples at all points of the heat pipe heat exchanger.

2.2. Experimental Procedure

Methanol is used as the working fluid for both investigations and is charged at a fill ratio of 50% of the evaporator zone volume. Thermophysical properties of the working fluid are given in Table 1 [28]. The tilt angle is varied from a minimum of 0° (horizontal) to a maximum of 90° (vertical) in increments of 10°. The mass flow rate is set at a minimum of 40 L per hour and a maximum of 120 L per hour, with intermediate values of 60, 80 and 100 L per hour. Similarly, the set temperature ranges from a minimum of 50 °C to a maximum of 70 °C, with intermediate values of 55 °C, 60 °C and 65 °C. The parametric levels chosen for the experiment are given in Table 2. Experiments are carried out at each combination of the parameter levels, giving a total of 250 experiments. The effectiveness at each setting is measured, and this database is used to build the ML model.
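The full factorial design described above can be enumerated in a few lines. The sketch below (variable names are illustrative, not from the original study) takes the level lists quoted in the text and confirms the experiment count:

```python
from itertools import product

# Parameter levels described in the experimental procedure
angles = list(range(0, 91, 10))        # 0° to 90° in 10° steps: 10 levels
flow_rates = [40, 60, 80, 100, 120]    # L per hour: 5 levels
temperatures = [50, 55, 60, 65, 70]    # °C: 5 levels

# One experiment per combination of the three factors (full factorial design)
trials = list(product(angles, flow_rates, temperatures))
print(len(trials))  # 10 * 5 * 5 = 250 experiments
```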

2.3. Machine Learning Model

The prediction of the effectiveness of the heat pipe system employing methanol is discussed in this section. Effectiveness depends upon the angle, mass flow rate and temperature, so these three input factors are considered in the ML model. The objective of this analysis is to predict the effectiveness through various ML algorithms and to identify the best-performing algorithm among them. ML refers to the ability of a computer program to learn from and adapt to new data without human intervention; an ML model estimates the relationship between the input factors and the output. The ML methods used here fall into five classes: functions, lazy learning algorithms, meta-learning algorithms, rule-based algorithms and tree-based learning algorithms. The steps involved in the ML process are depicted in Figure 2.

2.4. Identification and Pre-Processing of the Dataset

Pre-processing is the process of cleaning raw data and handling missing values. The data gathered in real time are converted into a clean dataset. Pre-processing is also used to normalize the data: since the various factors fall in different ranges, they must be brought to a uniform scale for better interpretation by the machine learning algorithms.
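The paper does not state which normalization scheme was applied; min-max scaling is one common choice and illustrates the idea of bringing factors measured in different units onto a uniform scale:

```python
def min_max_normalize(values):
    """Rescale a sequence to [0, 1] so factors measured on different
    scales (degrees, L/h, °C) become directly comparable."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant column: nothing to scale
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

temperatures = [50, 55, 60, 65, 70]
print(min_max_normalize(temperatures))  # [0.0, 0.25, 0.5, 0.75, 1.0]
```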

2.5. Separation, Training and Testing

The dataset is separated into subsets in order to determine the best subset, i.e., the set of input features that demonstrates the best prediction accuracy. In principle, the best subset can be found by evaluating every possible subset. Training is carried out by analysing every subset under 30 algorithms, with each algorithm applied to each subset. After training, the output predictions for each subset are tested, and the mean absolute errors and root-mean-square errors of all subsets are recorded.
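The train-then-score loop can be sketched as follows. This is not the WEKA implementation; two toy regressors (a ZeroR-style mean predictor and a minimal 1-nearest-neighbour, in the spirit of WEKA's IBk) stand in for the 30 algorithms, and the data are made up for illustration:

```python
import math

def zero_r(train_X, train_y):
    """ZeroR-style baseline: ignore the inputs, predict the training mean."""
    mean = sum(train_y) / len(train_y)
    return lambda x: mean

def one_nn(train_X, train_y):
    """Minimal 1-nearest-neighbour regressor (in the spirit of WEKA's IBk)."""
    def predict(x):
        i = min(range(len(train_X)),
                key=lambda j: sum((a - b) ** 2 for a, b in zip(train_X[j], x)))
        return train_y[i]
    return predict

def evaluate(fit, train_X, train_y, test_X, test_y):
    """Fit on the training split; return (MAE, RMSE) on the test split."""
    predict = fit(train_X, train_y)
    errs = [predict(x) - y for x, y in zip(test_X, test_y)]
    mae = sum(abs(e) for e in errs) / len(errs)
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    return mae, rmse

# Toy data: effectiveness rising with angle (values are illustrative only)
train_X, train_y = [(0,), (30,), (60,), (90,)], [7.0, 15.0, 25.0, 30.0]
test_X, test_y = [(30,), (60,)], [15.0, 25.0]
for name, fit in [("ZeroR", zero_r), ("1-NN", one_nn)]:
    print(name, evaluate(fit, train_X, train_y, test_X, test_y))
```

In the study, this loop runs over every subset of Table 3 and every algorithm of Table 4, recording the two error indices each time.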

2.6. Evaluation of Our Model

The subset and regression method with the lowest mean absolute error (MAE) and root-mean-square error (RMSE) are taken as the best subset and best regression method, and their output predictions are taken as the best predictions for the dataset.

2.7. Dataset Description

The collection of information obtained through experimentation is known as the dataset. The dataset is arranged as a set of values representing the input and output factors. The dataset used in this study, which comprises instances, attributes, input factors and a target variable, was gathered over a period of several months. Figure 3 shows the scatter plot of the various factors against the effectiveness.

2.8. Dataset Separation

Here, the dataset is divided into several subsets. The separate datasets are then saved in CSV format. After separation, the loading of each subset is completed. After loading, basic statistics such as minimum values, maximum values, mean and standard deviation are calculated. The various subsets that can be generated in this experiment are listed in Table 3. For each of the subsets, the prediction model is trained and tested on all possible regression algorithms available. In similar ways, all the subsets are tested on all regression methods.
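The seven subsets of Table 3 are simply all non-empty combinations of the three input factors, which can be generated directly:

```python
from itertools import combinations

factors = ["A", "MF", "T"]   # angle, mass flow rate, temperature

# Every non-empty subset of the three input factors, as listed in Table 3
subsets = [c for r in range(1, len(factors) + 1)
             for c in combinations(factors, r)]
for s in subsets:
    print("-".join(s))
# A, MF, T, A-MF, A-T, MF-T, A-MF-T: seven subsets in total
```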
In machine-learning-based regression analysis, a key aim is to model the interrelation between the target and the input factors. Regression analysis shows how changes in the independent variables affect the dependent variable in a process. The regression algorithms used here are classified into functions, lazy learning algorithms, meta-learning algorithms, rule-based algorithms and tree-based algorithms. The full forms of all algorithms are given in Table 4.

2.9. Precision of Prediction

The prediction precision of every machine learning regression strategy is used to assess the difference between the actual and predicted values. The prediction precision is assessed through indices such as the mean absolute error (MAE) and root-mean-square error (RMSE). An error is defined as the difference between the experimental and predicted value. The mean absolute error (MAE) is the average of the absolute errors, as shown in Equation (1).
MAE = (|a1 − c1| + |a2 − c2| + … + |an − cn|) / n, where ai is the actual value, ci is the predicted value and n is the number of observations.
Root-mean-square Error (RMSE) is also often utilised for determining the closeness of the predicted value with the actual value, and the formula used is shown in Equation (2).
RMSE = √( (1/n) Σ_{i=1}^{n} (a_i − c_i)² )
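Equations (1) and (2) translate directly into code. The values below are made-up illustrations, not measurements from the study:

```python
import math

def mae(actual, predicted):
    """Mean absolute error, as in Equation (1)."""
    return sum(abs(a - c) for a, c in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root-mean-square error, as in Equation (2)."""
    return math.sqrt(sum((a - c) ** 2 for a, c in zip(actual, predicted)) / len(actual))

actual = [10.0, 12.0, 14.0]     # experimental effectiveness values (illustrative)
predicted = [11.0, 12.0, 12.0]  # model predictions (illustrative)
print(mae(actual, predicted))   # (1 + 0 + 2) / 3 = 1.0
print(rmse(actual, predicted))  # sqrt((1 + 0 + 4) / 3) ≈ 1.291
```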

3. Results and Discussion

The best subset is selected by analysing the mean values of all tested algorithms. The best algorithm is found by analysing all of the mean absolute errors and root-mean-square errors. Table 5 shows the results of the machine learning model when only one parameter is taken into consideration. The three parameters are run separately and the results are tabulated. It can be observed that angle has the least error, with an MAE of 3.671 and an RMSE of 4.417; these lowest errors are obtained through the random forest algorithm. The next model is executed with two-parameter subsets. The three different combinations of the input parameters were run separately and the results are reported in Table 6. It can be observed that the angle–temperature combination has the least error, with an MAE of 2.373 and an RMSE of 2.921; these lowest errors are obtained through the additive regression method.
A box plot for the RMSE is also developed to better understand the variation in the data. Figure 4 shows the box plot of single-parameter subset performance. Angle displayed the least RMSE with a value of 4.415; hence, it was selected as the best performing subset. Similarly, Figure 5 shows the box plot of two-parameter subset performance. The minimum RMSE value of 2.921 was obtained for the angle–temperature subset and thus was selected as the best-modelled subset for two parameters.
Finally, the model is run with three-parameter subsets and the entire combination of factors is taken into consideration. Here, there is only a single combination of all the parameters. The results from the various machine learning algorithms are listed below in Table 7. The random forest algorithm provided the least errors as the values reported for MAE and RMSE were 1.1755 and 1.5422, respectively.
From the subset analysis, further classification is carried out by extracting the best models from each subset. The subsets with the least errors are listed below in Table 8. Hence, the table represents the best models in each of the subset categories. The mean of the MAE and the RMSE errors for each of the categories is taken and it is observed that the lowest errors are obtained through the random forest algorithm. The MAE and the RMSE values obtained are 2.491 and 3.004, respectively.
Figure 6 shows the interface of the Weka software, and it denotes the various statistics such as time taken to build the model, the number of iterations, correlation coefficient, mean absolute error and the root-mean-square error.
The scatter plots of the predicted versus the experimental values for the various analyses are shown in Figure 7, Figure 8 and Figure 9. Figure 7 shows the scatter plot of predicted versus actual values for the one-parameter subset; since only one parameter is considered, a larger amount of scatter can be observed. When two-parameter subsets are considered, the scatter plot is more even, with less deviation overall, as depicted in Figure 8. In Figure 9, all the parameters are considered and the scatter plot is the most even. It can be concluded that the machine learning model performs best when all the factors are considered in the analysis, which also indicates that all the factors have a significant effect on the output.
The performances of the different algorithms described in this study are best illustrated in Figure 10. The figure shows that the random forest algorithm has the best performance when all three factors are considered together.
From the above analysis, we can conclude that random forest is the best regression strategy for predicting the effectiveness of a heat pipe system when methanol is used as the working fluid. Random forest builds multiple decision trees and merges their outputs to create a more accurate and stable prediction. Predicted and actual values are compared by taking 15 to 20 random values from the respective outputs of their best regression methods. Figure 11 plots the predicted versus the experimental values, and the marked green area indicates that the deviation is acceptable. A correlation coefficient of 0.9729, an MAE of 1.1755 and an RMSE of 1.5422 were obtained from the analysis.
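The bootstrap-and-average principle behind random forest can be sketched as below. This is an illustration of the general bagging idea, not WEKA's random forest: the base learner is a placeholder mean predictor (a real forest would grow decision trees on random feature subsets), and the data are made up:

```python
import random

def mean_learner(X, y):
    """Placeholder base learner: predicts the mean of its training targets."""
    mean = sum(y) / len(y)
    return lambda x: mean

def fit_bagged_ensemble(X, y, base_learner, n_models=10, seed=0):
    """Random-forest-style ensemble: train each base model on a bootstrap
    sample (drawn with replacement) and average their predictions."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        models.append(base_learner([X[i] for i in idx], [y[i] for i in idx]))
    return lambda x: sum(m(x) for m in models) / len(models)

# Illustrative (angle, flow rate, temperature) -> effectiveness data
X = [(0, 40, 50), (30, 80, 60), (60, 100, 65), (90, 120, 70)]
y = [7.0, 15.0, 25.0, 30.0]
forest = fit_bagged_ensemble(X, y, mean_learner)
print(forest((45, 80, 60)))  # averaged prediction, lies within the range of y
```

Averaging over many models trained on resampled data is what gives the ensemble its stability relative to any single tree.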

4. Conclusions

The identification of the best machine learning model for a heat exchanger process is discussed in this article. Heat exchanger experiments with methanol as the working fluid were conducted, considering the angle, temperature and mass flow rate as factors, and the effectiveness of each experiment was measured. The angle was varied from 0° to 90° in increments of 10°, the temperature from 50 °C to 70 °C in increments of 5 °C, and the mass flow rate from 40 to 120 L per hour in increments of 20 L per hour. The experiments were conducted for each combination of the input parameters and the effectiveness was measured for each trial. From the experimental data, a machine learning model was developed to identify the algorithm which best fits the experiment. Thirty algorithms were taken into consideration and the experimental values were analysed for each algorithm. The experimental data were divided into subsets and the performance of the machine learning model was analysed for each subset. Single-parameter subset analysis revealed that angle had the strongest correlation with effectiveness, with the minimum MAE (3.671) and RMSE (4.417); for this analysis, the random forest algorithm was found to be the best fit. Similarly, for the two-parameter subsets, the angle–temperature combination had the strongest correlation with effectiveness, with an MAE of 2.373 and an RMSE of 2.921; here, the additive regression method was identified as the best machine learning model. For the overall analysis that included all three parameters, the random forest algorithm returned the best results, with an MAE of 1.176 and an RMSE of 1.542. The results show that machine learning models can successfully represent physical experiments in a numerical model.
With the presence of increasing databases, more such studies can be conducted in the future to create robust databases that best depict the process.

Author Contributions

Conceptualization, A.N., R.P., S.D. and S.M.; methodology, C.P., S.D., A.N. and R.P.; software, S.D., C.P., K.E. and K.K.; validation, S.D. and S.M.; formal analysis, S.D. and G.M.; investigation, S.D. and S.M.; resources, S.D.; data curation, S.D.; writing—original draft preparation, S.D., G.M., K.E., K.K. and N.I.V.; writing—review and editing, S.D., R.P., A.N., S.M., G.M. and N.I.V.; visualization, S.D., R.P., A.N., S.M., G.M., K.E., K.K. and N.I.V.; supervision, S.D., R.P., A.N., S.M., G.M., K.E., K.K. and N.I.V.; project administration, C.P.; funding acquisition, S.D., G.M. and N.I.V. All authors have read and agreed to the published version of the manuscript.

Funding

The research is partially funded by the Ministry of Science and Higher Education of the Russian Federation under the strategic academic leadership program “Priority 2030” (Agreement 075-15-2021-1333 dated 30 September 2021).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Noie-Baghban, S.H.; Majideian, G.R. Waste heat recovery using heat pipe heat exchanger (HPHE) for surgery rooms in hospitals. Appl. Therm. Eng. 2000, 20, 1271–1282. [Google Scholar] [CrossRef]
  2. Vasiliev, L.L. Heat pipes in modern heat exchangers. Appl. Therm. Eng. 2005, 25, 1–19. [Google Scholar] [CrossRef]
  3. Yang, F.; Yuan, X.; Lin, G. Waste heat recovery using heat pipe heat exchanger for heating automobile using exhaust gas. Appl. Therm. Eng. 2003, 23, 367–372. [Google Scholar] [CrossRef]
  4. Longo, G.A.; Righetti, G.; Zilio, C.; Bertolo, F. Experimental and theoretical analysis of a heat pipe heat exchanger operating with a low global warming potential refrigerant. Appl. Therm. Eng. 2014, 65, 361–368. [Google Scholar] [CrossRef]
  5. Wang, L.; Miao, J.; Gong, M.; Zhou, Q.; Liu, C.; Zhang, H.; Fan, H. Research on the Heat Transfer Characteristics of a Loop Heat Pipe Used as Mainline Heat Transfer Mode for Spacecraft. J. Therm. Sci. 2019, 28, 736–744. [Google Scholar] [CrossRef]
  6. Kempers, R.; Ewing, D.; Ching, C.Y. Effect of number of mesh layers and fluid loading on the performance of screen mesh wicked heat pipes. Appl. Therm. Eng. 2006, 26, 589–595. [Google Scholar] [CrossRef]
  7. Bastakoti, D.; Zhang, H.; Cai, W.; Li, F. An experimental investigation of thermal performance of pulsating heat pipe with alcohols and surfactant solutions. Int. J. Heat Mass Transf. 2018, 117, 1032–1040. [Google Scholar] [CrossRef]
  8. Patel, V.M.; Mehta, H.B. Experimental Investigations on the Effect of Influencing Parameters on Operating Regime of a Closed Loop Pulsating Heat Pipe. J. Enhanc. Heat Transf. 2019, 26, 333–344. [Google Scholar] [CrossRef]
  9. Jia, H.; Jia, L.; Tan, Z. An experimental investigation on heat transfer performance of nanofluid pulsating heat pipe. J. Therm. Sci. 2013, 22, 484–490. [Google Scholar] [CrossRef]
  10. Patel, V.K. An efficient optimization and comparative analysis of ammonia and methanol heat pipe for satellite application. Energy Convers. Manag. 2018, 165, 382–395. [Google Scholar] [CrossRef]
  11. Han, C.; Zou, L. Study on the heat transfer characteristics of a moderate-temperature heat pipe heat exchanger. Int. J. Heat Mass Transf. 2015, 91, 302–310. [Google Scholar] [CrossRef]
  12. Zhang, D.; Li, G.; Liu, Y.; Tian, X. Simulation and experimental studies of R134a flow condensation characteristics in a pump-assisted separate heat pipe. Int. J. Heat Mass Transf. 2018, 126, 1020–1030. [Google Scholar] [CrossRef]
  13. Lian, W.; Han, T. Flow and heat transfer in a rotating heat pipe with a conical condenser. Int. Commun. Heat Mass Transf. 2019, 101, 70–75. [Google Scholar] [CrossRef]
  14. Shabgard, H.; Bergman, T.L.; Sharifi, N.; Faghri, A. High temperature latent heat thermal energy storage using heat pipes. Int. J. Heat Mass Transf. 2010, 53, 2979–2988. [Google Scholar] [CrossRef]
  15. Savino, R.; Abe, Y.; Fortezza, R. Comparative study of heat pipes with different working fluids under normal gravity and microgravity conditions. Acta Astronaut. 2008, 63, 24–34. [Google Scholar] [CrossRef]
  16. Said, S.A.; Akash, B.A. Experimental performance of a heat pipe. Int. Commun. Heat Mass Transf. 1999, 26, 679–684. [Google Scholar] [CrossRef]
  17. Dixit, S. Study of factors affecting the performance of construction projects in AEC industry. Organization. Technol. Manag. Constr. 2020, 12, 2275–2282. [Google Scholar] [CrossRef]
  17. Dixit, S. Study of factors affecting the performance of construction projects in AEC industry. Organ. Technol. Manag. Constr. 2020, 12, 2275–2282. [Google Scholar] [CrossRef]
  19. Dixit, S. Analysing the Impact of Productivity in Indian Transport Infra Projects. IOP Conf. Ser. Mater. Sci. Eng. 2022, 1218, 12059. [Google Scholar] [CrossRef]
  20. Dixit, S.; Singh, P. Investigating the disposal of E-Waste as in architectural engineering and construction industry. Mater. Today Proc. 2022, 56, 1891–1895. [Google Scholar] [CrossRef]
  21. Dixit, S.; Stefańska, A. Digitisation of contemporary fabrication processes in the AEC sector. Mater. Today Proc. 2022, 56, 1882–1885. [Google Scholar] [CrossRef]
  22. Rahimi, M.; Asgary, K.; Jesri, S. Thermal characteristics of a resurfaced condenser and evaporator closed two-phase thermosyphon. Int. Commun. Heat Mass Transf. 2010, 37, 703–710. [Google Scholar] [CrossRef]
  23. Venkatachalapathy, S.; Kumaresan, G.; Suresh, S. Performance analysis of cylindrical heat pipe using nanofluids–An experimental study. Int. J. Multiph. Flow 2015, 72, 188–197. [Google Scholar] [CrossRef]
  24. Charoensawan, P.; Khandekar, S.; Groll, M.; Terdtoon, P. Closed loop pulsating heat pipes: Part A: Parametric experimental investigations. Appl. Therm. Eng. 2003, 23, 2009–2020. [Google Scholar] [CrossRef]
  25. Shang, F.; Fan, S.; Yang, Q.; Liu, J. An experimental investigation on heat transfer performance of pulsating heat pipe. J. Mech. Sci. Technol. 2020, 34, 425–433. [Google Scholar] [CrossRef]
  26. Wang, J.; Ma, Y.; Zhang, L.; Gao, R.X.; Wu, D. Deep learning for smart manufacturing: Methods and applications. J. Manuf. Syst. 2018, 48, 144–156. [Google Scholar] [CrossRef]
  27. Ademujimi, T.T.; Brundage, M.P.; Prabhu, V.V. A Review of Current Machine Learning Techniques Used in Manufacturing Diagnosis BT-Advances in Production Management Systems. In The Path to Intelligent, Collaborative and Sustainable Manufacturing; Lödding, H., Riedel, R., Thoben, K.-D., von Cieminski, G., Kiritsis, D., Eds.; Springer International Publishing: Cham, Switzerland, 2017; pp. 407–415. [Google Scholar]
  28. Zacarias, A.G.V.; Reimann, P.; Mitschang, B. A framework to guide the selection and configuration of machine-learning-based data analytics solutions in manufacturing. Procedia CIRP 2018, 72, 153–158. [Google Scholar] [CrossRef]
  29. Krishnayatra, G.; Tokas, S.; Kumar, R. Numerical heat transfer analysis & predicting thermal performance of fins for a novel heat exchanger using machine learning. Case Stud. Therm. Eng. 2020, 21, 100706. [Google Scholar]
  30. El-Said, E.M.S.; Elaziz, M.A.; Elsheikh, A.H. Machine learning algorithms for improving the prediction of air injection effect on the thermohydraulic performance of shell and tube heat exchanger. Appl. Therm. Eng. 2021, 185, 116471. [Google Scholar] [CrossRef]
  31. Wang, Z.; Zhao, X.; Han, Z.; Luo, L.; Xiang, J.; Zheng, S.; Liu, G.; Yu, M.; Cui, Y.; Shittu, S.; et al. Advanced big-data/machine-learning techniques for optimization and performance enhancement of the heat pipe technology—A review and prospective study. Appl. Energy 2021, 294, 116969. [Google Scholar] [CrossRef]
  32. Frank, E.; Hall, M.A.; Witten, I.H. The WEKA Workbench. Online Appendix for Data Mining: Practical Machine Learning Tools and Techniques, 4th ed.; Morgan Kaufmann: Burlington, VT, USA, 2016. [Google Scholar]
  33. Holman, J.P. Experimental Methods for Engineers, 8th ed.; McGraw-Hill’s: New York, NY, USA, 2012; p. 800. [Google Scholar]
Figure 1. Schema of the concentric tube heat pipe heat exchanger.
Figure 2. Steps involved in machine learning.
Figure 3. Scatter plot of parameters and responses.
Figure 4. Box plot for RMSE in single-parameter subset performance.
Figure 5. Box plot for RMSE in two-parameter subset performance.
Figure 6. Random forest algorithm.
Figure 7. Plot of predicted and actual values for one-parameter subset.
Figure 8. Plot of predicted and actual values for two-parameter subset.
Figure 9. Plot of predicted and actual values for three-parameter subset.
Figure 10. MAE and RMSE of the different machine learning models.
Figure 11. Actual vs. predicted effectiveness.
Table 1. Thermo-physical properties of working fluid.
Property | Methanol
Boiling point | 65 °C
Melting point | −97.9 °C
Latent heat of evaporation (λ) | 1055 kJ/kg
Density of liquid (ρl) | 792 kg/m³
Density of vapour (ρv) | 1.47 kg/m³
Thermal conductivity of liquid (kl) | 0.201 W/m·°C
Vapour pressure (at 293 K) | 12.87 kPa
Viscosity of liquid (μl) | 0.314 × 10⁻³ N·s/m²
Surface tension of liquid (σ) | 1.85 × 10⁻² N/m
Molecular weight (M) | 32 g/mol
Specific heat ratio (νv) | 1.33
Table 2. Parametric levels used in experiment.
S. No | Factor | Minimum | Maximum | Mean | Std-Dev
1 | Angle (A) | 0 | 90 | 45 | 28.78
2 | Mass flow rate (MF) | 40 | 120 | 80 | 28.341
3 | Temperature (T) | 50 | 70 | 60 | 7.085
4 | Effectiveness (Methanol) | 6.84 | 38.98 | 20.13 | 6.177
Table 3. Determination of the subsets.
S. No. | Subset | A | MF | T
1 | A | 1 | 0 | 0
2 | MF | 0 | 1 | 0
3 | T | 0 | 0 | 1
4 | A-MF | 1 | 1 | 0
5 | A-T | 1 | 0 | 1
6 | MF-T | 0 | 1 | 1
7 | A-MF-T | 1 | 1 | 1
Table 4. Full forms of all algorithms.
Category | Algorithm | Full Form
Functions | SLR | Simple Linear Regression
Functions | LMs | Least Median Square
Functions | GP | Gaussian Processes
Functions | MLP | Multilayer Perceptron
Functions | RBFN | Radial Basis Function Network
Functions | RBFR | Radial Basis Function Regressor
Functions | SMOREG | Support Vector Machine Optimizer Regression
Lazy | IBK | Instance Based Learner K
Lazy | KStar | K Star
Lazy | LWL | Locally Weighted Learning
Meta | AR | Additive Regression
Meta | BREP | Bagging Reduced Error Pruning
Meta | MS | Multi Scheme
Meta | RC | Random Committee
Meta | RFC | Random Filtered Classifier
Meta | RSS | Random Subspace
Meta | RBD | Random By Discretization
Meta | STACKING | Stacking
Meta | VOTE | Vote
Meta | WIHW | Weighted Instances Handled Wrapper
Rules | DT | Decision Table
Rules | M5R | M5R
Rules | ZEROR | Zero R
Trees | DS | Decision Stump
Trees | M5P | M5P
Trees | RF | Random Forest
Trees | RT | Random Tree
Trees | REP TREE | Reduced Error Pruning
Misc. | IMC | Instance Mapped Classifier
Table 5. Subsets with one parameter—Performance.
SUBSETSAMFT
CategoriesALGORITHMSMAERMSEMAERMSEMAERMSE
FunctionsSLR4.9726.1395.0756.2124.4565.660
LR4.9726.1395.0536.1994.4565.660
LMs4.9746.4215.0806.2294.4325.686
MLP4.9866.1265.1796.2804.3725.654
GP5.0636.1805.0686.2034.7766.023
RBFN4.9146.0615.0586.1954.4625.601
RBFR4.1004.9514.7615.8474.1355.271
SMOREG4.9836.2485.0936.2634.4055.704
IBK3.6734.4154.7615.8474.1355.271
Kstar4.2085.1664.7905.8824.2085.304
LWL3.9004.6914.8155.9034.1365.274
MetaAR3.7284.4754.7645.8514.1315.268
BREP3.7014.4584.7935.8784.1255.284
MS5.0536.1995.0536.1995.0536.199
RC3.6734.4154.7615.8474.1355.271
RFC3.6734.4154.7395.8474.1355.271
RSS3.7074.4464.7835.8714.1315.274
RBD3.7024.4104.8645.9444.1515.228
STACKING5.0536.1995.0536.1995.0536.199
VOTE5.0536.1995.0536.1995.0536.199
WIHW5.0536.1995.0536.1995.0536.199
RulesDT3.6734.4154.7615.8474.1355.271
M5R3.7214.4854.8516.0004.1025.229
ZEROR5.0536.1995.0536.1995.0536.199
TreesDS4.5675.6134.9796.0654.1525.317
M5P3.8434.6324.8735.9974.1155.229
RF3.6714.4174.7705.8594.1365.271
RT3.6734.4154.7615.8474.1355.271
REP TREE3.6834.4274.8185.8764.1955.333
IMC5.0536.1995.0536.1995.0536.199
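The two error measures reported throughout Tables 5–8 are the mean absolute error (MAE) and root-mean-square error (RMSE) between measured and predicted effectiveness. A small self-contained sketch of both metrics; the effectiveness values below are hypothetical, not taken from the experiment:

```python
import math

def mae(actual, predicted):
    """Mean absolute error: average magnitude of the prediction errors."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root-mean-square error: penalises large errors more heavily than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical measured vs. predicted effectiveness values.
actual = [20.1, 15.3, 30.2, 8.7]
predicted = [19.5, 16.0, 28.9, 9.1]
print(round(mae(actual, predicted), 3), round(rmse(actual, predicted), 3))
# → 0.75 0.822
```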
Table 6. Subsets with two parameters—Performance.

| Category | Algorithm | A-MF: MAE | A-MF: RMSE | A-T: MAE | A-T: RMSE | MF-T: MAE | MF-T: RMSE |
|---|---|---|---|---|---|---|---|
| Functions | SLR | 4.972 | 6.139 | 4.456 | 5.660 | 4.456 | 5.660 |
| | LR | 4.972 | 6.139 | 4.338 | 5.589 | 4.456 | 5.660 |
| | LMs | 4.998 | 6.460 | 4.301 | 6.044 | 4.442 | 5.698 |
| | MLP | 5.003 | 6.141 | 4.281 | 5.597 | 4.471 | 5.729 |
| | GP | 5.061 | 6.187 | 4.315 | 5.606 | 4.571 | 5.827 |
| | RBFN | 5.101 | 6.216 | 4.488 | 5.636 | 4.787 | 5.871 |
| | RBFR | 4.557 | 5.608 | 3.676 | 4.791 | 3.883 | 4.971 |
| | SMOREG | 5.020 | 6.297 | 4.240 | 5.789 | 4.434 | 5.739 |
| Lazy | IBK | 3.971 | 4.451 | 2.631 | 3.062 | 3.877 | 5.081 |
| | Kstar | 4.125 | 4.917 | 3.236 | 4.116 | 3.963 | 5.024 |
| | LWL | 4.016 | 4.814 | 3.336 | 4.259 | 4.086 | 5.233 |
| Meta | AR | 3.463 | 3.961 | 2.373 | 2.921 | 3.693 | 4.842 |
| | BREP | 3.783 | 4.225 | 2.498 | 2.935 | 3.911 | 5.100 |
| | MS | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| | RC | 3.971 | 4.451 | 2.631 | 3.062 | 3.877 | 5.081 |
| | RFC | 3.971 | 4.451 | 2.655 | 3.152 | 3.877 | 5.081 |
| | RSS | 3.986 | 4.786 | 3.331 | 4.175 | 4.287 | 5.319 |
| | RBD | 3.766 | 4.306 | 2.578 | 3.075 | 3.761 | 4.951 |
| | STACKING | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| | VOTE | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| | WIHW | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| Rules | DT | 3.798 | 4.508 | 2.631 | 3.062 | 3.877 | 5.081 |
| | M5R | 3.732 | 4.409 | 2.606 | 3.178 | 4.064 | 5.195 |
| | ZEROR | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| Trees | DS | 4.567 | 5.613 | 4.152 | 5.317 | 4.152 | 5.317 |
| | M5P | 3.840 | 4.559 | 2.927 | 3.666 | 4.056 | 5.168 |
| | RF | 3.956 | 4.419 | 2.627 | 3.052 | 3.877 | 5.075 |
| | RT | 3.971 | 4.451 | 2.631 | 3.062 | 3.877 | 5.081 |
| | REP TREE | 3.689 | 4.286 | 2.600 | 3.120 | 4.073 | 5.221 |
| Misc. | IMC | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
Table 7. Subset with all three parameters (A-MF-T)—Performance.

| S. No. | Algorithm | MAE | RMSE |
|---|---|---|---|
| 1 | SLR | 4.4564 | 5.6602 |
| 2 | LR | 4.3383 | 5.5886 |
| 3 | LMs | 4.2658 | 5.9911 |
| 4 | MLP | 4.2088 | 5.691 |
| 5 | GP | 4.3014 | 5.5754 |
| 6 | RBFN | 4.8086 | 5.8918 |
| 7 | RBFR | 3.7402 | 4.8737 |
| 8 | SMOREG | 4.2041 | 5.7579 |
| 9 | IBK | 4.3151 | 6.6209 |
| 10 | Kstar | 3.2171 | 4.2561 |
| 11 | LWL | 3.5059 | 4.4872 |
| 12 | AR | 1.6308 | 2.0891 |
| 13 | BREP | 1.5975 | 2.0988 |
| 14 | MS | 5.053 | 6.1989 |
| 15 | RC | 1.3252 | 1.7054 |
| 16 | RFC | 4.6006 | 6.9647 |
| 17 | RSS | 2.5109 | 3.1017 |
| 18 | RBD | 1.9612 | 2.4249 |
| 19 | STACKING | 5.0530 | 6.1989 |
| 20 | VOTE | 5.0530 | 6.1989 |
| 21 | WIHW | 5.0530 | 6.1989 |
| 22 | DT | 2.6309 | 3.0623 |
| 23 | M5R | 2.7984 | 3.6392 |
| 24 | ZEROR | 5.0530 | 6.1989 |
| 25 | DS | 4.1520 | 5.3165 |
| 26 | M5P | 2.9438 | 3.7524 |
| 27 | RF | 1.1755 | 1.5422 |
| 28 | RT | 1.8456 | 2.4296 |
| 29 | REP TREE | 1.9963 | 2.5834 |
| 30 | IMC | 5.0530 | 6.1989 |
Table 8. Selection of the best performances from the different subsets.

| Category | Algorithm | A: MAE | A: RMSE | A-T: MAE | A-T: RMSE | A-MF-T: MAE | A-MF-T: RMSE | Mean MAE | Mean RMSE |
|---|---|---|---|---|---|---|---|---|---|
| Functions | SLR | 4.972 | 6.139 | 4.456 | 5.660 | 4.456 | 5.660 | 4.628 | 5.820 |
| | LR | 4.972 | 6.139 | 4.338 | 5.589 | 4.338 | 5.589 | 4.549 | 5.772 |
| | LMs | 4.974 | 6.421 | 4.301 | 6.044 | 4.266 | 5.991 | 4.514 | 6.152 |
| | MLP | 4.986 | 6.126 | 4.281 | 5.597 | 4.209 | 5.691 | 4.492 | 5.805 |
| | GP | 5.063 | 6.180 | 4.315 | 5.606 | 4.301 | 5.575 | 4.560 | 5.787 |
| | RBFN | 4.914 | 6.061 | 4.488 | 5.636 | 4.809 | 5.892 | 4.737 | 5.863 |
| | RBFR | 4.100 | 4.951 | 3.676 | 4.791 | 3.740 | 4.874 | 3.839 | 4.872 |
| | SMOREG | 4.983 | 6.248 | 4.240 | 5.789 | 4.204 | 5.758 | 4.476 | 5.931 |
| Lazy | IBK | 3.673 | 4.415 | 2.631 | 3.062 | 4.315 | 6.621 | 3.540 | 4.699 |
| | Kstar | 4.208 | 5.166 | 3.236 | 4.116 | 3.217 | 4.256 | 3.554 | 4.513 |
| | LWL | 3.900 | 4.691 | 3.336 | 4.259 | 3.506 | 4.487 | 3.581 | 4.479 |
| Meta | AR | 3.728 | 4.475 | 2.373 | 2.921 | 1.631 | 2.089 | 2.577 | 3.161 |
| | BREP | 3.701 | 4.458 | 2.498 | 2.935 | 1.598 | 2.099 | 2.599 | 3.164 |
| | MS | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| | RC | 3.673 | 4.415 | 2.631 | 3.062 | 1.325 | 1.705 | 2.543 | 3.061 |
| | RFC | 3.673 | 4.415 | 2.655 | 3.152 | 4.601 | 6.965 | 3.643 | 4.844 |
| | RSS | 3.707 | 4.446 | 3.331 | 4.175 | 2.511 | 3.102 | 3.183 | 3.908 |
| | RBD | 3.702 | 4.410 | 2.578 | 3.075 | 1.961 | 2.425 | 2.747 | 3.303 |
| | STACKING | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| | VOTE | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| | WIHW | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| Rules | DT | 3.673 | 4.415 | 2.631 | 3.062 | 2.631 | 3.062 | 2.978 | 3.513 |
| | M5R | 3.721 | 4.485 | 2.606 | 3.178 | 2.798 | 3.639 | 3.042 | 3.768 |
| | ZEROR | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
| Trees | DS | 4.567 | 5.613 | 4.152 | 5.317 | 4.152 | 5.317 | 4.290 | 5.415 |
| | M5P | 3.843 | 4.632 | 2.927 | 3.666 | 2.944 | 3.752 | 3.238 | 4.017 |
| | RF | 3.671 | 4.417 | 2.627 | 3.052 | 1.176 | 1.542 | 2.491 | 3.004 |
| | RT | 3.673 | 4.415 | 2.631 | 3.062 | 1.846 | 2.430 | 2.717 | 3.302 |
| | REP TREE | 3.683 | 4.427 | 2.600 | 3.120 | 1.996 | 2.583 | 2.760 | 3.377 |
| Misc. | IMC | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 | 5.053 | 6.199 |
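Random forest gives the lowest errors on the full A-MF-T subset (MAE 1.176, RMSE 1.542 in Table 8). The excerpt does not specify the toolchain (the algorithm names suggest a Weka-style suite), so the following is only an illustrative sketch of the evaluation procedure using scikit-learn's `RandomForestRegressor` on synthetic data shaped like the experiment; the response function and all constants are assumptions, not the measured data:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the experimental inputs: angle (0-90 deg),
# mass flow rate (40-120 l/min), temperature (50-70 deg C).
n = 200
X = np.column_stack([
    rng.uniform(0, 90, n),
    rng.uniform(40, 120, n),
    rng.uniform(50, 70, n),
])
# Hypothetical effectiveness response with noise (NOT the real measurements).
y = 0.1 * X[:, 0] + 0.05 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

mae = mean_absolute_error(y_test, pred)
rmse = mean_squared_error(y_test, pred) ** 0.5  # RMSE is the square root of MSE
print(f"MAE={mae:.3f}, RMSE={rmse:.3f}")
```

The same fit/predict/score loop, repeated per algorithm and per parameter subset, would reproduce the structure of Tables 5–8.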

Nair, A.; P., R.; Mahadevan, S.; Prakash, C.; Dixit, S.; Murali, G.; Vatin, N.I.; Epifantsev, K.; Kumar, K. Machine Learning for Prediction of Heat Pipe Effectiveness. Energies 2022, 15, 3276. https://doi.org/10.3390/en15093276