Machine Learning for Prediction of Heat Pipe Effectiveness

This paper details the selection of machine learning models for predicting the effectiveness of a heat pipe system in a concentric tube heat exchanger. Heat exchanger experiments with methanol as the working fluid were conducted. The tilt angle was varied from 0° to 90°, the temperature from 50 °C to 70 °C, and the flow rate from 40 to 120 litres per hour. Multiple experiments were conducted at different combinations of the input parameters, and the effectiveness was measured for each trial. Multiple machine learning algorithms were considered for the prediction. The experimental data were divided into subsets and the performance of the machine learning models was analysed for each subset. For the overall analysis, which included all three parameters, the random forest algorithm returned the best results, with a mean absolute error of 1.176 and a root-mean-square error of 1.542.


Introduction
Heat pipes are utilized in various products, such as electronics, solar collectors, and heat exchangers, to remove or transfer heat away from the system. Heat pipe techniques have been successfully implemented in many industrial applications [1,2]. Heat pipe systems have also been successfully implemented in automobiles for exhaust gas heat recovery [3]. The usage of heat pipes with environmentally friendly refrigerants has also been reported [4]. A loop heat pipe was used for spacecraft applications, and heat transmissibility was considerably increased in that study [5].
Kempers et al. [6] analysed mesh-type wicks in copper heat pipes and reported that increasing the number of mesh layers led to better performance. Lower thermal resistance using deionised water was reported in [7]. Filling ratio, inclination angle, number of turns and heat input were tested in various articles, and it was reported that a 50% filling ratio gives maximum performance [8,9].
The performance of heat pipes in satellite applications was investigated, and it was reported that minimum thermal resistance was observed when ammonia was used [10]. Numerical studies were reported on various models, and they have shown good agreement with the experimental results [11,12]. Numerical studies using the Navier-Stokes equations have also been carried out.

Experimental Setup
The heat pipe is fabricated and substitutes the concentric tube of the traditional heat exchanger, as shown in Figure 1. Copper is employed for the heat pipe, and galvanized iron is employed for the shell. The total pipe length is 1000 mm, of which 700 mm is inserted inside the shell side of the evaporator section and 300 mm at the condenser section. The outer and inner diameters of the heat pipe are 19 mm and 17 mm, respectively. The evaporator and condenser shells have diameters of 50 mm and 35 mm, respectively. The total lengths of the evaporator and condenser shell sections are 1000 mm and 300 mm, respectively. Two fluid tanks are fabricated for the hot and cold fluid sections. The hot fluid tank (5 L) has an immersion electric heater with a 2000 W capacity. Two rotameters with a capacity of 3 LPM are used for measurement and flow control. The temperatures are measured using thermocouples at all points of the heat pipe heat exchanger.

Experimental Procedure
Methanol is used as the working fluid for the investigations and is charged at a fill ratio of 50% of the evaporator zone volume. The thermophysical properties of the working fluid are described in Table 1 [28]. The minimum tilt angle is set at 0° (horizontal) and the maximum at 90° (vertical), with the angle varied in increments of 10°. The mass flow rate is set at a minimum of 40 L per hour and a maximum of 120 L per hour, with intermediate values of 60, 80 and 100 L per hour. Similarly, the set temperature ranges from a minimum of 50 °C to a maximum of 70 °C, with intermediate values of 55 °C, 60 °C and 65 °C. The parametric levels chosen for the experiment are described in Table 2. The experiments are carried out for each combination of the parameter levels, leading to a total of 250 experiments. The effectiveness at each setting is measured, and the ML models are executed on this database.
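The full-factorial design described above can be sketched programmatically. The level values are taken from the text; the variable names are illustrative:

```python
from itertools import product

# Parameter levels as described in the text (assumed full-factorial design)
angles = list(range(0, 91, 10))        # 0° to 90° in steps of 10° -> 10 levels
flow_rates = list(range(40, 121, 20))  # 40 to 120 L/h in steps of 20 -> 5 levels
temperatures = list(range(50, 71, 5))  # 50 to 70 °C in steps of 5 °C -> 5 levels

# Every combination of the three factors gives one experimental run
runs = list(product(angles, flow_rates, temperatures))
print(len(runs))  # 10 * 5 * 5 = 250 experiments
```

This confirms that testing every combination of the three factor levels yields exactly the 250 trials reported.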

Machine Learning Model
The prediction of the effectiveness of the heat pipe system employing methanol is discussed in this section. The effectiveness depends on the angle, mass flow rate and temperature. The objective of this analysis is to predict the effectiveness through various ML algorithms and to identify the best-performing algorithm among them. Angle, mass flow rate and temperature are the inputs that affect the effectiveness of the process, and these are the factors considered in the ML model. ML refers to the ability of a computer program to learn from and adapt to new data without human intervention. ML estimates the relationship between the input factors and the output. The ML methods used here fall into five classes: functions, lazy learning algorithms, meta-learning algorithms, rule-based algorithms and tree-based learning algorithms. The steps involved in the ML process are depicted in Figure 2.
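The steps of the ML process can be sketched as a minimal pipeline. The data values and the mean-predictor "model" below are placeholders for illustration, not the dataset or algorithms used in the study:

```python
import statistics

# 1. Identify the dataset: (angle, mass_flow, temperature) -> effectiveness
data = [((0, 40, 50), 22.1), ((30, 80, 60), 35.4), ((60, 100, 65), 40.2),
        ((90, 120, 70), 38.7), ((10, 60, 55), 27.9), ((50, 80, 60), 37.1)]

# 2. Pre-process (nothing to clean here) and 3. split into train/test sets
train, test = data[:4], data[4:]

# 4. Train: a trivial baseline that predicts the mean training effectiveness
prediction = statistics.mean(y for _, y in train)

# 5. Evaluate with the mean absolute error on the held-out trials
mae = statistics.mean(abs(y - prediction) for _, y in test)
print(round(mae, 3))  # 4.6
```

A real run replaces step 4 with one of the thirty regression algorithms compared in this study.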

Identification and Pre-Processing of the Dataset
Pre-processing is the process of cleaning missing or raw data. The data are gathered in real time and converted into a clean dataset. Pre-processing is used to normalize the data: the various inputs fall in different ranges, and uniformity among them is needed for better interpretation by the machine learning algorithms.
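One common normalization that creates the uniformity described above is min-max scaling, which maps each factor to the [0, 1] range. This sketch uses illustrative values; the paper does not state which normalization was applied:

```python
def min_max_normalize(values):
    """Scale a list of numbers to the [0, 1] range (min-max normalization)."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant column: map everything to 0
    return [(v - lo) / (hi - lo) for v in values]

flows = [40, 60, 80, 100, 120]           # mass flow rate levels, L/h
print(min_max_normalize(flows))           # [0.0, 0.25, 0.5, 0.75, 1.0]
```

After scaling, angle (0-90), flow rate (40-120) and temperature (50-70) all occupy the same numeric range, so no single factor dominates distance- or gradient-based learners.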

Separation, Training and Testing
Separation of the dataset is performed to determine the best subset. The best subset is the feature set that demonstrates the best prediction accuracy. In principle, the best subset can be found by evaluating every possible subset. Training is carried out by analysing every subset under 30 algorithms; each algorithm is applied to each subset. After training each subset with each algorithm, the output predictions are tested and recorded, and the mean absolute error and root-mean-square error of every subset are noted.
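The scoring loop described above can be sketched as follows. The two baseline "candidates" stand in for the thirty algorithms actually compared, and the numbers are illustrative:

```python
import math
import statistics

def mae(actual, predicted):
    """Mean absolute error between actual and predicted values."""
    return statistics.mean(abs(a - p) for a, p in zip(actual, predicted))

def rmse(actual, predicted):
    """Root-mean-square error between actual and predicted values."""
    return math.sqrt(statistics.mean((a - p) ** 2 for a, p in zip(actual, predicted)))

# Toy effectiveness values for one subset's test split (illustrative numbers)
actual = [30.0, 35.0, 40.0, 45.0]
candidates = {
    "mean_baseline":   [37.5, 37.5, 37.5, 37.5],  # predicts the overall mean
    "biased_baseline": [32.0, 36.0, 41.0, 44.0],  # a slightly better guess
}

# Score every candidate "algorithm" and keep the one with the lowest RMSE
scores = {name: (mae(actual, p), rmse(actual, p)) for name, p in candidates.items()}
best = min(scores, key=lambda name: scores[name][1])
print(best, scores[best])
```

Repeating this loop over every subset and every algorithm produces the error tables discussed in the Results section.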

Evaluation of Our Model
The subset and regression method with the lowest MAE (mean absolute error) and RMSE (root-mean-square error) are taken as the best subset and best regression method, and their output predictions are taken as the best predictions for the dataset.



Dataset Description
A dataset is the collection of data obtained through experimentation. The data are arranged such that there are sets of values representing the input and output factors. The dataset utilized in this study, which comprises instances, attributes, input variables and a target variable, was gathered over a multi-month period. Figure 3 shows the scatter plot of the various factors against the effectiveness.


Dataset Separation
Here, the dataset is divided into several subsets, which are saved in CSV format. After separation, each subset is loaded, and basic statistics such as the minimum, maximum, mean and standard deviation are calculated. The various subsets that can be generated in this experiment are listed in Table 3. For each subset, the prediction model is trained and tested on all of the available regression algorithms; in the same way, every subset is tested on every regression method.
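With three input factors, the candidate subsets are the seven non-empty factor combinations. This sketch enumerates them and computes the basic statistics mentioned above; the rows are hypothetical examples, not the experimental data:

```python
from itertools import combinations
import statistics

# Hypothetical experimental rows: factor values plus measured effectiveness
rows = [
    {"angle": 0,  "mass_flow": 40,  "temperature": 50, "effectiveness": 21.3},
    {"angle": 30, "mass_flow": 80,  "temperature": 60, "effectiveness": 34.9},
    {"angle": 60, "mass_flow": 120, "temperature": 65, "effectiveness": 41.2},
    {"angle": 90, "mass_flow": 100, "temperature": 70, "effectiveness": 38.5},
]
factors = ["angle", "mass_flow", "temperature"]

# Every non-empty factor combination defines one subset: 2^3 - 1 = 7
subsets = [c for r in range(1, len(factors) + 1) for c in combinations(factors, r)]
print(len(subsets))  # 7

# Basic statistics computed after loading each column
for col in factors:
    values = [row[col] for row in rows]
    print(col, min(values), max(values), statistics.mean(values),
          round(statistics.stdev(values), 2))
```

Each of the seven subsets would then be written to its own CSV file and fed to every regression algorithm.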
Table 3. Determination of the subsets.

Figure 3. Scatter plot of parameters and responses.
In machine learning-based regression analysis, a significant step is to model the interrelation between the target and the input factors. Regression analysis reveals how changes in the independent variables affect the dependent variable in a process. The regression algorithms are classified into functions, lazy learning algorithms, meta-learning algorithms, rule-based algorithms and tree-based algorithms. The full names of all algorithms are shown in Table 4.

Precision of Prediction
The prediction precision of every machine learning regression strategy is used to assess the difference between the real and predicted values. It is assessed through indices such as the mean absolute error (MAE) and the root-mean-square error (RMSE). An error is defined as the difference between the experimental and predicted values. The MAE is calculated by taking the average of the absolute errors, as shown in Equation (1). The RMSE is also often utilised for determining the closeness of the predicted value to the actual value, and its formula is shown in Equation (2).
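In standard form, with $y_i$ the experimental value, $\hat{y}_i$ the predicted value and $n$ the number of trials, the indices referenced as Equations (1) and (2) are (the paper's own notation is not reproduced here, so the symbols are the conventional ones):

```latex
\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left| y_i - \hat{y}_i \right| \quad (1)

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left( y_i - \hat{y}_i \right)^2} \quad (2)
```

Because the RMSE squares each error before averaging, it penalizes large individual deviations more heavily than the MAE.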

Results and Discussion
The best subset is selected by analysing the mean values across all tested algorithms, and the best algorithm is found by analysing all of the mean absolute errors and root-mean-square errors. Table 5 shows the results of the machine learning models when only one parameter is taken into consideration. The three parameters are run separately and the results are tabulated. It can be observed that angle has the least error, with an MAE of 3.671 and an RMSE of 4.417; these lowest errors are obtained through the random forest algorithm. The next model is executed with two-parameter subsets. The three different combinations of the input parameters were run separately and the results are reported in Table 6. It can be observed that the angle-temperature combination has the least error, as the MAE value is 2.373 and the RMSE value is 2.921. The lowest errors are obtained through the additive regression method.
A box plot for the RMSE is also developed to better understand the variation in the data. Figure 4 shows the box plot of the single-parameter subset performance. Angle displayed the least RMSE with a value of 4.415; hence, it was selected as the best-performing subset. Similarly, Figure 5 shows the box plot of the two-parameter subset performance. The minimum RMSE value of 2.921 was obtained for the angle-temperature subset, which was thus selected as the best-modelled subset for two parameters. Finally, the model is run with the three-parameter subset, in which the entire combination of factors is taken into consideration; here, there is only a single combination of all the parameters. The results from the various machine learning algorithms are listed in Table 7. The random forest algorithm provided the least errors, as the values reported for the MAE and RMSE were 1.1755 and 1.5422, respectively.
From the subset analysis, further classification is carried out by extracting the best models from each subset. The subsets with the least errors are listed in Table 8; the table thus represents the best model in each of the subset categories. The mean of the MAE and RMSE errors for each category is taken, and it is observed that the lowest errors are obtained through the random forest algorithm, with MAE and RMSE values of 2.491 and 3.004, respectively. Figure 6 shows the interface of the Weka software, which reports statistics such as the time taken to build the model, the number of iterations, the correlation coefficient, the mean absolute error and the root-mean-square error. The scatter plots of the predicted versus the experimental values for the various analyses are shown in Figures 7-9. Figure 7 represents the scatter plot of predicted versus actual values for the one-parameter subset; since only one parameter is considered, a larger amount of scatter can be observed. When two-parameter subsets are considered, the scatter plot is more even, with less deviation overall, as depicted in Figure 8. In Figure 9, all the parameters are considered, and the scatter plot is more even still. It can be concluded that when all the factors are considered, the machine learning model provides a much better fit. Hence, it can be confirmed that the machine learning model provides the best solution when all the factors are considered in the analysis.
This also indicates that all the factors have a significant effect on the output.
The performances of the different algorithms described in this study are best illustrated in Figure 10. The figure shows that the random forest algorithm has the best performance when the system comprising all three factors is considered as a whole.
From the above analysis, we can understand that random forest is the best regression strategy (best algorithm) for predicting the effectiveness of a heat pipe system when methanol is used as the working fluid. Random forest builds multiple decision trees and merges them to create a more accurate and stable prediction. Predicted vs. actual values are compared by taking 15 to 20 random values from the respective outputs of their best regression methods. Figure 11 plots the predicted vs. the experimental values, and the marked green area indicates that the deviation is acceptable. Here, the best regression method for effectiveness is random forest. A correlation coefficient of 0.9729, an MAE of 1.1755 and an RMSE of 1.5422 were obtained as the result of the analysis.
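The random forest approach can be sketched with scikit-learn; note that the study itself used Weka, and the data below are synthetic, with effectiveness generated from a made-up function of the three factors solely to make the example runnable:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the 250-trial dataset: effectiveness as an assumed
# smooth function of angle, mass flow rate and temperature, plus noise
angle = rng.uniform(0, 90, 250)
flow = rng.uniform(40, 120, 250)
temp = rng.uniform(50, 70, 250)
effectiveness = 20 + 0.1 * angle + 0.05 * flow + 0.2 * temp + rng.normal(0, 1, 250)

X = np.column_stack([angle, flow, temp])
X_train, X_test, y_train, y_test = train_test_split(
    X, effectiveness, test_size=0.2, random_state=0)

# Random forest: an ensemble of decision trees whose predictions are averaged
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
mae = mean_absolute_error(y_test, model.predict(X_test))
print(round(mae, 3))
```

Averaging over many decorrelated trees is what gives random forest its stability on small, noisy experimental datasets such as this one.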


Conclusions
The identification of the best machine learning model for a heat exchanger process is discussed in this article. Heat exchanger experiments with methanol as the working fluid are conducted with consideration of various factors, namely angle, temperature and mass flow rate, and the effectiveness of each experiment is measured. The angle is varied from 0° to 90° in increments of 10°, the temperature from 50 °C to 70 °C in increments of 5 °C, and the mass flow rate from 40 to 120 L per hour in increments of 20 L per hour. The experiments are conducted for each combination of the input parameters and the effectiveness is measured for each trial. From the experimental data, a machine learning model is developed to identify the algorithm which best fits the experiment. Thirty algorithms were taken into consideration and the experimental values were analysed for each algorithm. The experimental data were divided into subsets and the performance of the machine learning model was analysed for each subset. Single-parameter subset analysis revealed that angle had the strongest correlation with effectiveness, as its MAE (3.671) and RMSE (4.417) were the minimum; for this analysis, the random forest algorithm was found to be the best fit. Similarly, for the two-parameter subsets, the angle-temperature combination had the strongest correlation with effectiveness, with an MAE of 2.373 and an RMSE of 2.921; here, the additive regression method was identified as the best machine learning model. For the overall analysis that included all three parameters, the random forest algorithm returned the best results, with an MAE of 1.176 and an RMSE of 1.542. The results show that machine learning models can be successfully used for representing physical experiments in a numerical model.
With the presence of increasing databases, more such studies can be conducted in the future to create robust databases that best depict the process.