Article

Evaluating the Thermohydraulic Performance of Microchannel Gas Coolers: A Machine Learning Approach

1
KNU Institute of Engineering Design Technology (IEDT), Kyungpook National University, Daegu 41566, Republic of Korea
2
School of Mechanical Engineering, College of Engineering, Kyungpook National University, Daegu 41566, Republic of Korea
*
Author to whom correspondence should be addressed.
Energies 2025, 18(12), 3007; https://doi.org/10.3390/en18123007
Submission received: 2 May 2025 / Revised: 1 June 2025 / Accepted: 4 June 2025 / Published: 6 June 2025
(This article belongs to the Special Issue Heat Transfer Analysis: Recent Challenges and Applications)

Abstract

In this study, a numerical model of a microchannel gas cooler was developed using a segment-by-segment approach for thermohydraulic performance evaluation. State-of-the-art heat transfer and pressure drop correlations were used to determine the air- and refrigerant-side heat transfer coefficients and friction factors. The developed model was validated against a wide range of experimental data and was found to accurately predict the gas cooler capacity (Q) and pressure drop (ΔP) within an acceptable margin of error. Furthermore, advanced machine learning algorithms such as extreme gradient boosting (XGB), random forest (RF), support vector regression (SVR), k-nearest neighbors (KNNs), and artificial neural networks (ANNs) were employed to analyze their predictive capability. Over 11,000 data points from the numerical model were used, with 80% of the data for training and 20% for testing. The evaluation metrics, such as the coefficient of determination (R2, 0.99841–0.99836) and mean squared error values (0.09918–0.10639), demonstrated high predictive efficacy and accuracy, with only slight variations among the models. All models accurately predict Q, with the XGB and ANN models showing superior performance in ΔP prediction. Notably, the ANN model emerges as the most accurate method for predicting the refrigerant and air outlet temperatures. These findings highlight the potential of machine learning as a robust tool for optimizing thermal system performance and guiding the design of energy-efficient heat exchange technologies.

1. Introduction

In recent years, the global adoption of new energy vehicles (NEVs), particularly electric vehicles (EVs), has surged due to increasing environmental concerns and the transition toward sustainable transportation. Governments and manufacturers worldwide are promoting EV technologies to reduce greenhouse gas emissions and urban noise. Among these vehicles, battery electric vehicles (BEVs) dominate the market [1]. As a result, the efficiency of vehicle thermal management systems, especially for cabin air conditioning, has become a critical factor affecting overall energy consumption and driving range.
With the growing global demand for EVs, energy-efficient and eco-friendly cooling strategies must be developed for these vehicles. Transcritical CO2 microchannel gas coolers (MCGCs) are one of the promising solutions, offering compactness, high thermal efficiency, and reduced environmental impact compared to conventional cooling systems that employ refrigerants with high global warming potential. The unique feature of CO2 is its low critical temperature (31.1 °C), which differentiates it considerably from other refrigerants. Above this critical point, condensation can no longer take place effectively, so the condenser is replaced with a specialized heat-rejection device, the gas cooler. In a gas cooler, the conventional condensation process is replaced with a unique gas cooling process in which the temperature of the refrigerant decreases significantly, while pressure changes remain minimal. The gas cooling process operates within the supercritical region, where temperature and pressure are decoupled; consequently, the enthalpy of CO2 at the outlet of a gas cooler is influenced by both temperature and pressure [2]. The performance of these gas coolers is important for the overall efficiency and effectiveness of the EV’s air conditioning system [3]. Strategies for improving the performance of transcritical CO2 systems primarily focus on two key aspects: optimizing system components and improving system design. System component optimization mainly involves optimizing the performance of the gas cooler, evaporator, or compressor [4,5].
Accurate prediction of the thermohydraulic performance of MCGCs for EV applications is essential for their design and optimization. Conventional methods such as empirical correlations and numerical simulations have been extensively used in the field of heat transfer for evaluating the performance of MCGCs. These well-established techniques rely on physical principles and consider specific system details, including geometry, materials, and working fluid properties. Empirical correlations are derived from experimental data and are often tailored to specific geometries and flow regimes. Although these correlations offer a quick and straightforward solution, their accuracy may be limited, especially when applied to novel designs or nonstandard operating conditions. In contrast, numerical simulations offer a more comprehensive evaluation of MCGC performance. Computational fluid dynamics (CFD) techniques such as finite element and finite volume methods are utilized to numerically solve the governing equations of fluid flow and heat transfer. These methods account for the intricate geometry of the microchannels as well as the properties and behavior of the working fluid. By discretizing the domain and iteratively solving the equations, CFD simulations can provide insights into the thermohydraulic performance of MCGCs under various operating conditions. Both empirical correlations and numerical simulations have their advantages and limitations. Empirical correlations are relatively simple and can provide quick estimates; however, they may lack accuracy for complex designs or under unconventional operating conditions. Conversely, numerical simulations offer a more detailed understanding of the underlying physical processes but require significant computational resources and expertise. Both approaches have been widely used, with the choice depending on the specific details of the system being modeled.
In addition, emerging techniques such as neural networks offer promising options for more accurate, cost effective, and quick estimation of MCGC performance [6,7,8]. Neural networks can learn complex relationships between input parameters (e.g., geometrical and operational variables) and the corresponding thermohydraulic performance. By training the neural network with a diverse dataset generated from experimental or simulation data, MCGC performance can be accurately predicted under various operating conditions. ML algorithms are faster than numerical simulations, allowing for more rapid optimization of air conditioning system design. In addition, they are more flexible than numerical simulations, enabling performance prediction under a wider range of operating conditions and cooler designs. ML techniques have been extensively studied in heat transfer and fluid mechanics [9,10].
Saeed et al. [11] performed 3D Reynolds-averaged Navier–Stokes simulations to calculate heat sink performance with various fin configurations. Subsequently, these data were employed to train six ML regression techniques to identify the most accurate method for predicting heat transfer coefficients and ΔP values. The selected ML model was combined with a multi-objective genetic algorithm to determine the ideal heat sink geometry. The multilayer perceptron method, adapted into a deep neural network model, effectively predicted the heat transfer coefficients and ΔP using the available data. The performance of the optimized channel geometry increased 2.1 times compared to that of the best existing channel configuration, with a 14% higher heat transfer coefficient and a fivefold lower ΔP. Arman et al. employed a committee neural network (CNN) to estimate the ΔP in microchannels under varying conditions and demonstrated that incorporating multiple techniques into the CNN increased the accuracy of the results, suggesting its strong predictive potential across diverse fields [12]. Kim et al. developed general ML models using power law regression and a database comprising 906 data points from 15 different sources. The ML models were found to have mean absolute errors of 7.5–10.9%, an approximately fivefold improvement in prediction accuracy compared with existing regression correlations [13]. Yu et al. performed numerical simulations of the flow and heat transfer process in an elliptical pin-fin microchannel heat sink, and ANNs were used to predict the average temperature, temperature nonuniformity, and ΔP of the microchannel [14]. Sikirica et al. introduced a framework using CFD, ML-based surrogate modeling, and multi-objective optimization to optimize microchannel heat sink designs. The optimized designs showed enhanced performance while reducing the computational time compared with traditional methods. The generated designs achieved temperatures more than 10% lower than those of a typical microchannel design under the same pressure limits; when limited by temperature, ΔP decreased by more than 25% [15].
Recent advancements underscore the growing role of machine learning (ML) and deep learning (DL) in optimizing thermal system performance, particularly in complex geometries and multiphase flow conditions. For example, Zohora et al. [16] applied multi-layer perceptron and XGBoost models to CFD data from pin-fin microchannel heat sinks, achieving over 95% accuracy in thermal and fluid flow predictions. Efatinasab et al. [17] developed ANN and CNN models trained on a large experimental dataset to predict heat transfer and ΔP in micro-finned tube heat exchangers, achieving mean absolute errors under 4.5% and leveraging SHAP analysis for interpretability. Similar trends are evident in vortex generator optimization [18], multimodal data fusion for boiling heat sinks [19], and hybrid physics-informed frameworks for pool boiling on structured surfaces [20]. Other works demonstrate the effectiveness of LSTM and Transformer-based models in capturing complex flow dynamics in plate heat exchangers [21,22], while XGBoost and SGBoost models have proven to be reliable surrogates for CFD in nanofluid systems [23,24]. Collectively, these studies highlight the current shift toward data-driven modeling strategies in thermal engineering, emphasizing high accuracy, physical interpretability, and substantial reductions in computational cost.
To meet the growing demand for compact, energy-efficient, and eco-friendly thermal systems in electric vehicles, transcritical CO2 microchannel gas coolers (MCGCs) have emerged as a promising solution due to their high heat transfer efficiency and low environmental impact. However, their design and operation involve significant challenges, such as complex two-phase flow, maldistribution, high-pressure conditions, and performance sensitivity to varying parameters. While experimental and CFD-based approaches have been used to investigate these systems, they often have high computational and resource costs, particularly when analyzing a wide range of conditions. In this context, machine learning (ML) offers a powerful and efficient alternative for predicting thermohydraulic performance by capturing nonlinear interactions among design and operating variables. This study aims to develop and validate ML-based models to provide fast and accurate predictions for MCGCs, supporting advanced design and optimization efforts.
In this work, we aim to investigate and compare the ability of ML algorithms in predicting the thermohydraulic performance of an MCGC for mobile air conditioning applications. Utilizing over 11,000 data points from an experimentally validated numerical model, we developed and optimized predictive models using five ML techniques: XGB, RF, SVR, KNNs, and ANNs. By carefully tuning hyperparameters and evaluating performance metrics such as coefficient of determination (R2) and mean squared error (MSE), our study provides insights into the unique abilities of each model, offering enhancement of the energy efficiency and design optimization of complex thermal systems in automotive applications.

2. Numerical Model Development of MCHX

2.1. Geometric and Operating Conditions

In this study, a numerical model of a single-slab MCHX was developed in MATLAB R2024b. The baseline geometry consists of 34 tubes arranged in three passes (13-11-10), each tube with 11 circular ports, as illustrated in Figure 1. CO2 refrigerant flows through the microchannel tubes, while air flows across the louvered fins. The geometric specifications and operating conditions are provided in Table 1 and Table 2, respectively.

2.2. Modeling Approach

A segment-by-segment modeling approach was used due to the strong sensitivity of CO2 properties near the critical point. Each tube was divided into five segments, each modeled as an independent crossflow heat exchanger. Heat transfer and ΔP were calculated using the ε-NTU method. Properties of CO2 and air were calculated using REFPROP (v10.0). The model assumes steady-state, one-dimensional flow, and uses empirical correlations for heat transfer and ΔP, which may introduce deviations under highly transient conditions or at the extremes of operating parameters (e.g., extremely low or high mass flow rates, very large channel aspect ratios, or non-uniform refrigerant distribution). The schematic flow chart of the numerical methodology is shown in Figure 2.
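The segment-by-segment ε-NTU approach described above can be sketched as follows. This is a minimal illustration, not the authors' MATLAB code: the crossflow effectiveness relation (both fluids unmixed) is the standard textbook approximation, and all numerical values (inlet temperatures, heat capacity rates, UA) are hypothetical placeholders.

```python
import math

def eps_crossflow(ntu, cr):
    """Effectiveness of a crossflow exchanger, both fluids unmixed
    (standard epsilon-NTU approximation)."""
    if cr == 0:
        return 1.0 - math.exp(-ntu)
    return 1.0 - math.exp((1.0 / cr) * ntu**0.22 * (math.exp(-cr * ntu**0.78) - 1.0))

def segment_heat_transfer(th_in, tc_in, c_hot, c_cold, ua):
    """Heat duty of one tube segment treated as an independent crossflow HX."""
    c_min, c_max = min(c_hot, c_cold), max(c_hot, c_cold)
    eps = eps_crossflow(ua / c_min, c_min / c_max)
    q = eps * c_min * (th_in - tc_in)   # segment heat duty, W
    th_out = th_in - q / c_hot          # refrigerant-side outlet temperature
    tc_out = tc_in + q / c_cold         # air-side outlet temperature
    return q, th_out, tc_out

# March the refrigerant through a tube divided into five segments;
# each segment sees fresh inlet air (tc stays at the ambient value).
th, tc = 120.0, 45.0                    # hypothetical inlet temperatures, degC
q_total = 0.0
for _ in range(5):
    q, th, _ = segment_heat_transfer(th, tc, c_hot=25.0, c_cold=60.0, ua=12.0)
    q_total += q
```

In the full model this loop would be repeated over every tube and pass, with CO2 and air properties refreshed from REFPROP at each segment inlet state.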

2.3. Heat Transfer and Pressure Drop Correlations

Appropriate correlations for heat transfer and ΔP are essential for model accuracy. The convective heat transfer coefficient on the air side is selected based on the fin configuration. In the present model, the Colburn j factor and friction f factor correlations for louvered fins, as developed by Kim and Bullard [25], were employed. These correlations are widely used for flat-tube heat exchangers with louvered fins and have been validated in the literature under comparable operating conditions, making them well-suited for the geometry and application considered in this study.
$$j = Re_{Lp}^{-0.487}\left(\frac{L_\alpha}{90}\right)^{0.257}\left(\frac{F_p}{L_p}\right)^{-0.13}\left(\frac{H}{L_p}\right)^{-0.29}\left(\frac{F_d}{L_p}\right)^{-0.235}\left(\frac{L_l}{L_p}\right)^{0.68}\left(\frac{T_p}{L_p}\right)^{-0.279}\left(\frac{\delta_f}{L_p}\right)^{-0.05} \quad (1)$$
$$f = Re_{Lp}^{-0.781}\left(\frac{L_\alpha}{90}\right)^{0.444}\left(\frac{F_p}{L_p}\right)^{-1.682}\left(\frac{H}{L_p}\right)^{-1.22}\left(\frac{F_d}{L_p}\right)^{0.818}\left(\frac{L_l}{L_p}\right)^{1.97} \quad (2)$$
The refrigerant-side heat transfer coefficient is calculated using the Gnielinski correlation [26], valid in the range $3000 \le Re \le 5\times10^6$ and $0.5 \le Pr \le 2000$. The Darcy friction factor $f_D$ is obtained from Equation (4), developed by Petukhov [26], a single correlation that spans a wide range of Reynolds numbers, $3000 \le Re \le 5\times10^6$.
$$Nu = \frac{(f_D/8)\,(Re - 1000)\,Pr}{1 + 12.7\,(f_D/8)^{1/2}\left(Pr^{2/3} - 1\right)} \quad (3)$$
$$f_D = \left(0.79\ln(Re) - 1.64\right)^{-2} \quad (4)$$
The ΔP is calculated from friction along ports as follows:
$$\Delta P = \frac{L_t}{D_{port}}\, f_D\, \frac{G_r^2}{2\rho_r} \quad (5)$$
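Equations (3)–(5) chain together naturally in code: Petukhov's explicit friction factor feeds both the Gnielinski Nusselt number and the port pressure drop. A minimal sketch (units: SI throughout; the inputs in the usage line are hypothetical):

```python
import math

def darcy_friction(re):
    """Petukhov explicit Darcy friction factor, 3000 <= Re <= 5e6 (Eq. 4)."""
    return (0.79 * math.log(re) - 1.64)**-2

def gnielinski_nu(re, pr):
    """Gnielinski Nusselt number for turbulent in-tube flow (Eq. 3)."""
    fd = darcy_friction(re)
    return ((fd / 8.0) * (re - 1000.0) * pr
            / (1.0 + 12.7 * math.sqrt(fd / 8.0) * (pr**(2.0 / 3.0) - 1.0)))

def port_pressure_drop(lt, d_port, re, g_r, rho_r):
    """Frictional pressure drop along the ports (Eq. 5), Pa."""
    return (lt / d_port) * darcy_friction(re) * g_r**2 / (2.0 * rho_r)

# Example: Re = 1e4, Pr = 1 gives Nu near the classical value of ~35
nu = gnielinski_nu(1.0e4, 1.0)
dp = port_pressure_drop(lt=0.5, d_port=0.8e-3, re=1.0e4, g_r=400.0, rho_r=300.0)
```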

2.4. Numerical Model Validation

The numerical model predictions of Q and ΔP were validated against experimental data from Yin and Bullard [27]. The results showed strong agreement, with maximum errors within ±3.79% for capacity and ±10.24% for ΔP, both within the acceptable range, as shown in Figure 3.

3. Predictive Modeling of MCGC Using Machine Learning Approach

The methodology for predicting MCGC performance in mobile air conditioning systems (MACSs) is built upon a robust machine learning (ML) framework that incorporates five advanced algorithms: XGB, RF, SVR, KNN, and ANNs. Each algorithm was selected to represent a diverse range of modeling approaches, ensuring a comprehensive evaluation of their individual capabilities, from XGB’s robustness in handling tabular data to RF’s resilience against overfitting, SVR’s efficiency in capturing non-linear patterns, KNN’s interpretability for local relationships, and ANN’s capacity to learn intricate, non-linear interactions. By comparing their performance on the same dataset, the framework facilitates a comparative analysis of their performance, highlighting both their general-purpose utility and scenario-specific strengths. All algorithms were implemented using the scikit-learn library, with configurations tailored to optimize predictive accuracy while minimizing the risk of overfitting. This systematic and well-rounded approach provides valuable insights into the strengths and limitations of each algorithm, enabling informed decision-making for performance prediction in MACSs.

3.1. Problem Formulation

The task of performance prediction is framed as a supervised learning problem. Given a dataset containing input features $X = \{x_1, x_2, \ldots, x_n\} \in \mathbb{R}^n$, where $n$ is the number of features representing geometric and operational parameters, and output variables $Y = \{y_1, y_2, \ldots, y_m\} \in \mathbb{R}^m$, where $m$ is the number of output variables representing system performance metrics, the objective is to learn the functional relationship:
$$Y = f(X) + \varepsilon \quad (6)$$
where $f(X)$ is the true mapping function and $\varepsilon$ is the error term. Each algorithm separately models this relationship, predicting the outputs $\hat{Y}$. To evaluate model performance, two metrics were used: mean squared error (MSE) and the coefficient of determination (R2). The MSE is defined as follows:
$$MSE = \frac{1}{N}\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2 \quad (7)$$
which quantifies the average squared difference between the predicted $\hat{y}_i$ and true $y_i$ values. The R2 metric is defined as follows:
$$R^2 = 1 - \frac{\sum_{i=1}^{N}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{N}\left(y_i - \bar{y}\right)^2} \quad (8)$$
which measures the proportion of variance in the dependent variable that is predictable from the independent variables.
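Equations (7) and (8) translate directly into a few lines of NumPy; this sketch mirrors what `sklearn.metrics.mean_squared_error` and `r2_score` compute:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error (Eq. 7)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def r2(y_true, y_pred):
    """Coefficient of determination (Eq. 8)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)          # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return float(1.0 - ss_res / ss_tot)
```

A perfect prediction gives MSE = 0 and R2 = 1; predicting the mean of the targets gives R2 = 0.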
The overall workflow of the system is depicted in Figure 4, providing a clear representation of the sequential processes involved. Each component of the workflow is meticulously detailed in the subsequent sections, offering an in-depth explanation of their roles and functionality within the system.

3.2. Data Preparation

The dataset used for machine learning model development was generated from a validated numerical model of the MCGC [27,28]. By systematically varying key geometric and operational parameters such as number of passes (3–5), number of tubes (18–60), number of circular ports (9–21), ambient temperature (42–56 °C), inlet air flow rate (0.45–0.70 kg/s), and refrigerant flow rate (0.018–0.057 kg/s), a total of approximately 13,000 data points was generated. Each data point includes twenty-three input parameters and four output performance indicators. While the numerical model exhibits some error relative to experimental data, the ML framework was trained and tested exclusively on the simulated outputs, so this bias does not affect the reported ML accuracy.
The dataset was thoroughly examined for missing, constant, and duplicate values, followed by necessary preprocessing steps to ensure its integrity and readiness for analysis. A rigorous approach was also applied for outlier removal to maintain the quality of the regression modeling. To enhance predictive accuracy and model efficiency, a systematic feature selection process was carried out on the preprocessed dataset. Statistical techniques and correlation analysis were used to identify the most influential variables affecting the target outputs, thereby reducing the original 27 features to an optimized set of 13 (nine input features and four output features). Features with constant values such as heat exchanger volume, tube pitches, fin pitch, tube thickness, and fin height were removed due to their lack of variance. Redundant features were excluded based on high correlation and engineering relevance. For example, among airflow-related parameters (velocity, mass flow rate, and volume rate), only the air mass flow rate was retained. Similarly, the number of tubes and number of ports were chosen over the more dependent geometric dimensions like height and width. A single categorical feature “passes” was used to represent the variation in refrigerant flow configurations. This refined feature set not only reduces model complexity and the risk of overfitting but also ensures better interpretability and relevance to the thermal and hydraulic behavior of CO2 gas coolers.
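The two pruning steps described above (dropping zero-variance features, then one member of each highly correlated pair) can be sketched with pandas. This is an illustrative helper, not the authors' pipeline; the function name `prune_features` and the 0.95 correlation threshold are assumptions for the example:

```python
import pandas as pd

def prune_features(df, targets, corr_threshold=0.95):
    """Drop constant feature columns, then drop one member of each highly
    correlated feature pair (|r| > corr_threshold), keeping the first seen."""
    x = df.drop(columns=targets)
    x = x.loc[:, x.nunique() > 1]          # remove zero-variance features
    corr = x.corr().abs()                  # pairwise |Pearson r|
    keep = []
    for col in x.columns:
        if all(corr.loc[col, k] <= corr_threshold for k in keep):
            keep.append(col)
    return df[keep + targets]
```

Applied to the full dataset, a procedure of this kind would reduce the 27 raw columns to the 13 retained features (e.g., keeping air mass flow rate and dropping the redundant air velocity and volume rate).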
The input features include air flow rate (ṁca), air inlet temperature (Tcai), refrigerant flow rate (ṁr), refrigerant inlet pressure (Pcri), refrigerant inlet temperature (Tcri), heat exchanger length, number of tubes, ports, and passes. Output variables are Q, ∆P, and air and refrigerant outlet temperatures (Tao and Tro). Ranges of geometric and operating conditions for machine learning models are given in Table 3.

3.3. Train Test Split Technique

In ML, evaluating a model’s performance on unseen data is crucial for the assessment of its generalizability. To achieve this, the dataset is commonly divided into two subsets: a training set and a testing set. The training set is used to train the model, enabling it to learn the underlying patterns and relationships between the input features and output values. On the other hand, the testing set is employed to assess the model’s performance on new, unseen data, which helps determine how effectively the model can generalize its predictions to novel data points. In this study, the testing set size was defined as 0.2, indicating that 20% of the dataset was dedicated to testing, while the remaining 80% was allocated for the training set. Moreover, the random state parameter was set to 1, ensuring that the data split was reproducible across multiple runs. By setting this parameter, we obtained consistent results when evaluating the models. Consequently, the resulting training set, represented as Xtrain (input features) and ytrain (output values), was employed to train various models, and the testing set, denoted as Xtest (input features) and ytest (output values), was used to evaluate the performances of these models. Using this systematic approach, we effectively determined the capabilities and limitations of the different models under consideration.
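The split described above maps onto a single scikit-learn call; the arrays here are random stand-ins with the study's dimensionality (nine inputs, four outputs):

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((100, 9))   # nine input features (stand-in data)
y = rng.random((100, 4))   # four outputs: Q, dP, Tao, Tro

# 80/20 split, reproducible across runs via random_state=1
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=1)
```

Fixing `random_state` guarantees that every model in the comparison sees exactly the same training and testing rows.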

3.4. Model Training

As described earlier, five different machine learning algorithms, i.e., XGB, RF, SVR, KNN, and ANNs, were independently used to train multiple models. Each algorithm was optimized using hyperparameter tuning to ensure accurate predictions and robust performance across the dataset. To prevent overfitting, early stopping techniques were employed during model training.

3.4.1. Algorithms for MCGC Performance Predictions

These algorithms were implemented using Python 3.11, and utilized a suite of powerful open-source libraries. For seamless data handling and preprocessing, pandas enabled efficient loading and manipulation of tabular datasets, while NumPy facilitated complex mathematical computations. To visualize insights and trends, matplotlib provided dynamic and intuitive plotting capabilities. Key machine learning operations, including data transformation, model implementation, and evaluation, were powered by scikit-learn. Specifically, sklearn.linear_model supported the implementation of various regression models, sklearn.preprocessing ensured the data was appropriately transformed, sklearn.model_selection streamlined data splitting and cross-validation, and sklearn.metrics allowed for comprehensive performance evaluation using diverse metrics. Together, these tools not only enabled precise coding of the models but also ensured their performance could be rigorously assessed. Each algorithm’s underlying principles were carefully considered to match the dataset’s complexity and diversity, creating a framework that combines analytical rigor with computational efficiency.

The Extreme Gradient Boosting Algorithm

XGB, introduced by Chen [29], is a powerful ML algorithm that belongs to the class of ensemble methods. It builds an ensemble of weak learners (decision trees) in a sequential, additive manner while minimizing a given loss function to form a strong predictive model. XGB employs regularization techniques to control model complexity and reduce overfitting. The objective function of XGB can be expressed as follows:
$$Obj(\theta) = \sum_{i=1}^{n} l\left(y_i,\; \hat{y}_i^{(t-1)} + f_t(x_i)\right) + \Omega(f_t) \quad (9)$$
where $\theta$ denotes the model parameters, $l$ is the loss function, $y_i$ is the true label of the $i$th observation, $\hat{y}_i^{(t-1)}$ is the predicted label of the $i$th observation at the $(t-1)$th iteration, $f_t(x_i)$ is the $t$th weak learner (i.e., decision tree) that maps the features $x_i$ to a predicted label, and $\Omega$ is a regularization term that penalizes complex models. The objective function comprises two parts: the loss function and the regularization term. The loss function measures the difference between the predicted and true labels, whereas the regularization term penalizes complex models to avoid overfitting. Figure 5 shows the schematic of the XGB model.
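The additive structure of Equation (9) can be illustrated with a minimal gradient-boosting loop: for squared loss, the negative gradient is simply the residual, so each round fits a shallow tree to the residuals of the current ensemble. This is a pedagogical sketch using scikit-learn trees, without XGB's explicit regularization term $\Omega$ or its second-order approximations:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_rounds=50, lr=0.1, max_depth=3):
    """Sequential additive boosting with squared loss (cf. Eq. 9):
    each round adds a weak learner f_t fit to the current residuals."""
    base = float(y.mean())
    pred = np.full(len(y), base)
    trees = []
    for _ in range(n_rounds):
        tree = DecisionTreeRegressor(max_depth=max_depth, random_state=0)
        tree.fit(X, y - pred)          # residuals = negative gradient of l
        pred += lr * tree.predict(X)   # shrinkage lr tempers each step
        trees.append(tree)
    return base, trees

def predict_gb(base, trees, X, lr=0.1):
    return base + lr * sum(t.predict(X) for t in trees)

# Fit a smooth 1-D target to show the ensemble converging
X_demo = np.linspace(0.0, 1.0, 200).reshape(-1, 1)
y_demo = np.sin(4.0 * X_demo).ravel()
base, trees = fit_gradient_boosting(X_demo, y_demo)
```

The production XGB library adds the $\Omega$ penalty, second-order gradients, and column subsampling on top of this same additive skeleton.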

Random Forest Algorithm

RF is an ensemble learning algorithm developed by Breiman and Leo [30]. It combines multiple decision trees to improve the accuracy and robustness of the model. The mathematical equation for the RF model is expressed in Equation (10) as follows:
$$f(x) = \sum_j b_j \, I(x \in R_j) \quad (10)$$
where $f(x)$ is the predicted value for a new observation with feature vector $x$. The decision tree comprises a set of regions $R_j$ that divide the feature space into mutually exclusive and exhaustive regions. Each region $R_j$ is associated with a predicted value $b_j$. $I(x \in R_j)$ is an indicator function that takes the value 1 if the new observation $x$ belongs to region $R_j$ and 0 otherwise. Therefore, $\sum_j b_j \, I(x \in R_j)$ computes the prediction by summing the values $b_j$ of all regions to which the new observation $x$ belongs:
$$\hat{y} = \mathrm{mode}\left(f_1(x), f_2(x), \ldots, f_n(x)\right) \quad (11)$$
In Equation (11), $\hat{y}$ is the predicted value for the new observation with feature vector $x$. The forest comprises $n$ decision trees, each trained on a randomly selected subset of the training data and a randomly selected subset of the features. Each decision tree produces a predicted value $f_i(x)$ for the new observation $x$. The mode function returns the most common value among its arguments, so Equation (11) takes the majority vote of the predicted values of all decision trees; for regression tasks such as the present one, the mean of the tree outputs is used in place of the mode. The schematic of the RF model is shown in Figure 6.
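A compact scikit-learn usage sketch on synthetic data (the linear target here is purely illustrative); note that `RandomForestRegressor` averages the trees' outputs, since the mode in Equation (11) applies to classification:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((300, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.0]) + 0.01 * rng.standard_normal(300)

# Each of the 100 trees sees a bootstrap sample; predictions are averaged.
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
score = rf.score(X, y)   # R^2 on the training data
```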

Support Vector Regression Algorithm

SVR is an ML algorithm developed by Vapnik and fellows [31] for regression tasks. It is a variant of support vector machine (SVM) and is particularly useful when dealing with nonlinear and complex data using different kernel functions. SVR aims to find the best-fitting hyperplane that maximizes the margin while minimizing the prediction error. SVR can be expressed as follows:
$$y = b + \sum_i \alpha_i \, K(x_i, x) \quad (12)$$
where $y$ is the target variable, $b$ is the bias term, and $\alpha_i$ are the Lagrange multipliers (support vector weights) associated with each data point. $K(x_i, x)$ is the kernel function that measures the similarity between the input data points $x_i$ and $x$ in a higher-dimensional feature space.
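A brief SVR usage sketch with an RBF kernel on a synthetic nonlinear target (the sinc function is purely illustrative). Feature scaling is included because SVR with distance-based kernels is sensitive to input scale:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, (200, 1))
y = np.sinc(X).ravel()   # nonlinear target: sin(pi x) / (pi x)

# RBF kernel realizes K(x_i, x) from Eq. (12); C and epsilon control
# the fit/flatness trade-off and the insensitive tube width.
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
svr.fit(X, y)
```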

k-Nearest Neighbors Algorithm

In ML, a KNN is a nonparametric algorithm that predicts the value of a target variable by considering the k-nearest training examples in the feature space. To determine the nearest neighbors, the KNN algorithm uses a distance metric to measure the distance between each pair of observations in the feature space. The most used distance metric is the Euclidean distance, which can be described by Equation (13):
$$d(x_i, x_j) = \sqrt{\sum_k \left(x_{ik} - x_{jk}\right)^2} \quad (13)$$
where $x_{ik}$ is the $k$th feature of observation $i$, and $x_{jk}$ is the $k$th feature of observation $j$.
Once the distance between each pair of observations is calculated, the algorithm selects the KNNs for the new observation. The value of k is a hyperparameter set before training the algorithm. Then, the KNN algorithm assigns the label of the new observation based on the majority class of its KNNs, as described by Equation (14):
$$\hat{y} = \arg\max_y \sum_i I(y_i = y) \quad (14)$$
where $\hat{y}$ is the predicted label for the new observation, $y_i$ is the label of the $i$th nearest neighbor, and $I(y_i = y)$ is an indicator function that equals 1 if $y_i = y$ and 0 otherwise. The argmax function returns the label $y$ with the highest frequency among the k nearest neighbors. For regression tasks such as the present one, the target values of the k nearest neighbors are averaged instead of voted on.
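A tiny worked example with `KNeighborsRegressor` makes the neighbor averaging concrete (the 1-D data are illustrative):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

# Euclidean distance (Eq. 13) picks the k closest training points;
# for regression their targets are averaged.
knn = KNeighborsRegressor(n_neighbors=2, metric="euclidean").fit(X, y)
pred = knn.predict([[1.6]])[0]   # nearest neighbors: x=2 (d=0.4), x=1 (d=0.6)
```

For the query point 1.6, the two nearest targets are 1 and 2, so the prediction is their mean, 1.5.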

Artificial Neural Networks Model

ANNs are computational models inspired by biological neural networks and are used in various ML applications. They comprise interconnected layers of neurons, including the input, hidden, and output layers as shown in Figure 7. ANNs learn through backpropagation, adjusting their weights to minimize the error between the predicted and actual outputs. Activation functions introduce nonlinearity, enabling the network to learn complex relationships, as described by Equation (15):
$$Y_i^h = \sum_{j=1}^{n_{h-1}} W_{ij}^h \, a_j^{h-1} + b_i^h \quad (15)$$
where $Y_i^h$ represents the preactivation value of neuron $i$ in hidden layer $h$, $W_{ij}^h$ denotes the weight connecting neuron $j$ in layer $(h-1)$ to neuron $i$ in layer $h$, $a_j^{h-1}$ represents the activation value of neuron $j$ in layer $(h-1)$, $b_i^h$ represents the bias term associated with neuron $i$ in layer $h$, and $n_{h-1}$ is the number of neurons in layer $(h-1)$.
The topology of the ANN employed in predicting the performance of an MCGC comprised an input layer, three hidden layers, and an output layer. In total, 9 features were considered, with model outputs including Tao, Tro, Q, and ∆P.
To prevent overfitting and enhance generalization, the dropout technique is utilized in neural networks. This technique selectively deactivates neurons in hidden layers during training, thus improving the network’s ability to generalize unseen data. Through dropout regularization, the architecture of the neural network is adjusted, contributing to more robust model performance.
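The forward pass of Equation (15), the 9-16-…-4 style topology, and inverted dropout can be sketched in plain NumPy. The layer widths, tanh activation, and weight scale here are illustrative assumptions; only the input (9) and output (4) dimensions come from the paper:

```python
import numpy as np

def dense_layer(a_prev, W, b, activation=np.tanh):
    """One layer: Y_i = sum_j W_ij * a_j + b_i (Eq. 15), then a nonlinearity."""
    return activation(W @ a_prev + b)

rng = np.random.default_rng(0)
sizes = [9, 16, 16, 16, 4]   # 9 inputs, three hidden layers, 4 outputs
params = [(rng.standard_normal((n_out, n_in)) * 0.1, np.zeros(n_out))
          for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x, params, drop_rate=0.0, rng=None):
    a = np.asarray(x, float)
    for W, b in params[:-1]:
        a = dense_layer(a, W, b)
        if drop_rate > 0.0 and rng is not None:
            # Inverted dropout: randomly silence neurons during training,
            # rescaling the survivors so the expected activation is unchanged.
            mask = rng.random(a.shape) >= drop_rate
            a = a * mask / (1.0 - drop_rate)
    W, b = params[-1]
    return W @ a + b   # linear output layer for regression

out = forward(np.ones(9), params)
```

At inference time the dropout mask is disabled (`drop_rate=0.0`), so the full network is used.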

3.5. Hyperparameter Tuning

The predictive capability of an ML model heavily relies on fine-tuning its hyperparameters. Hyperparameters are configuration settings that are not directly learned from the data but influence the learning process. Optimizing these hyperparameters is crucial to achieve the best model performance. In ML, various methods are commonly employed to fine-tune a model’s hyperparameters. The manual search involves manually tweaking the hyperparameters based on prior knowledge and intuition. The grid search systematically explores a predefined grid of hyperparameter combinations and evaluates the model’s performance for each configuration. Moreover, the random search employs a randomized search strategy by sampling hyperparameters from predefined distributions. In this study, the RandomizedSearchCV method is used for parameter tuning. RandomizedSearchCV combines the benefits of random search and cross-validation. It randomly samples hyperparameters from the defined distributions and evaluates the model’s performance using cross-validation techniques. This approach allows for a more efficient exploration of the hyperparameter space, enabling the identification of promising combinations that yield improved model performance. Once the hyperparameters are fine-tuned using RandomizedSearchCV, the model is deemed ready for training, testing, and prediction. With the optimized hyperparameters, the model can be trained on the training data, allowing it to learn the underlying patterns and relationships. Subsequently, the model’s performance is assessed on the testing set to evaluate its ability to make accurate predictions on unseen data. Finally, with the trained and evaluated model, predictions can be made on new, unseen data to facilitate decision-making and inference tasks.
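The RandomizedSearchCV workflow described above looks like this in scikit-learn. The estimator, candidate values, `n_iter`, and fold count are illustrative stand-ins, kept small so the sketch runs quickly:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = X[:, 0] * 3.0 + X[:, 1] ** 2   # synthetic target

# Randomly sample 5 hyperparameter combinations from the candidate lists
# and score each with 3-fold cross-validation (R^2 by default).
search = RandomizedSearchCV(
    RandomForestRegressor(random_state=0),
    param_distributions={"n_estimators": [50, 100, 150],
                         "max_depth": [3, 5, 8, None]},
    n_iter=5, cv=3, random_state=0)
search.fit(X, y)
best = search.best_params_   # winning combination, refit on all data
```

After the search, `search.best_estimator_` is the refit model ready for evaluation on the held-out test set.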

4. Results and Discussion

4.1. Feature Importance

The SHAP analysis provides a detailed interpretation of feature contributions to the machine learning model’s predictions for both Q and ΔP in the gas cooler system. As shown in Figure 8a, for the output variable Q, the refrigerant mass flow rate (ṁr) emerges as the most influential parameter, as indicated by consistently high SHAP values, reflecting the direct thermodynamic relationship between increased refrigerant flow and greater heat rejection. The air flow rate (ṁca) also contributes positively to Q, although with slightly less variability than ṁr: a higher air flow rate enhances convective heat transfer on the air side, thereby increasing overall heat transfer. The refrigerant inlet temperature (Tcri) shows a positive impact on Q at higher values, indicating that a warmer refrigerant inlet improves the energy transfer potential. In contrast, the air inlet temperature (Tcai) displays a negative relationship with Q: lower Tcai values produce larger temperature differentials between air and refrigerant, enhancing heat transfer and increasing Q. The refrigerant inlet pressure (Pcri) has a minimal effect, as evidenced by SHAP values clustered near zero, suggesting a negligible marginal contribution under the modeled conditions.
Regarding the geometric parameters, tube length demonstrates a significant positive influence on Q: a longer tube provides an extended heat transfer surface area and contact time, enhancing thermal exchange. The number of passes also positively affects Q, as it increases the effective refrigerant path length. However, the numbers of tubes and ports show a negative impact on Q, which may be attributed to the division of flow among a larger number of channels, reducing the velocity and convective heat transfer effectiveness in individual tubes or ports. Overall, the operating conditions exert a more dominant influence on Q than the geometric features.
In contrast, the SHAP analysis for ΔP, as illustrated in Figure 8b, reveals a shift in the dominant contributing features. Tube length again shows the highest impact, with longer lengths correlating with increased ΔP due to higher frictional losses over a longer path. Similarly, an increased number of passes leads to higher ΔP, as the flow is redirected more frequently, increasing turbulence and head loss. The number of tubes has a negative contribution: more tubes reduce ΔP by providing a greater number of parallel flow paths, thereby reducing the velocity and the associated frictional losses per tube. ṁr has a strong positive impact on ΔP, as higher flow rates increase velocity and, consequently, the frictional pressure loss. The number of ports shows a negative influence, possibly due to greater flow distribution and reduced local velocities. Notably, the SHAP values for operating conditions such as Tcri, ṁca, Tcai, and Pcri are relatively low, indicating minimal effect on ΔP compared to the geometric parameters. Overall, geometric features dominate the contributions to ΔP, while operating parameters have less impact.
These findings highlight the importance of distinguishing the roles of thermodynamic and geometric parameters in influencing different aspects of gas cooler performance. The SHAP analysis not only quantifies each feature’s impact but also aligns with engineering principles, offering interpretable and physically consistent insights into the model’s decision-making process.
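The study performs this attribution with SHAP; as a dependency-light analogue that conveys the same idea, the sketch below uses scikit-learn’s permutation importance on toy data whose structure loosely mimics the inputs above (the feature roles and coefficients are assumptions for illustration only):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
# Toy data: column 0 mimics ṁr (strong driver of Q), column 1 mimics ṁca
# (weaker positive driver), column 2 is an irrelevant feature.
X = rng.uniform(size=(400, 3))
Q = 5.0 * X[:, 0] + 2.0 * X[:, 1] + 0.05 * rng.standard_normal(400)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, Q)
result = permutation_importance(model, X, Q, n_repeats=10, random_state=0)

# Features ranked by mean importance drop; the ṁr analogue should rank first.
ranking = np.argsort(result.importances_mean)[::-1]
```

Permutation importance measures the global score drop when a feature is shuffled, whereas SHAP additionally attributes each individual prediction to its features; both should agree on which inputs dominate.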

4.2. Gas Cooler Capacity Predictions

In Figure 9a–e, the predicted Q of an MCGC is visually represented using advanced ML models such as (a) XGB, (b) RF, (c) SVR, (d) KNNs, and (e) ANNs. Two key metrics, the coefficient of determination (R2) and the mean squared error (MSE), were used to evaluate the performance and accuracy of each model. The R2 values, which reflect the proportions of predictable variance, showcase excellent predictive performance across all models, with XGB at 0.99841, RF at 0.99818, SVR at 0.99844, KNNs at 0.99825, and ANNs at 0.99836. Simultaneously, the MSE values, which indicate the average squared differences between the predicted and actual values, exhibit noteworthy accuracy, with XGB at 0.09945, RF at 0.10639, SVR at 0.09858, KNNs at 0.10536, and ANNs at 0.09918.
Possible reasons for these results lie in the inherent strengths of each algorithm. XGB, with its ensemble boosting techniques, achieves high accuracy and efficiency in capturing complex patterns in the data. RF, which aggregates multiple decision trees, provides robustness in handling diverse features and reduces overfitting. SVR, based on identifying optimal hyperplanes, excels at handling nonlinear relationships in the data. KNNs rely on proximity and are effective at capturing local patterns and dependencies. Lastly, ANNs, inspired by the functioning of the human brain, show strong performance in capturing intricate relationships in large datasets. The combined use of these models allows a comprehensive evaluation, providing valuable insights into Q predictions and aiding the selection of the most suitable model for practical applications.
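The multi-model comparison can be sketched as a single evaluation loop. To keep the example self-contained with scikit-learn only, XGB and the ANN are omitted and the data are synthetic; the 80/20 split and the R²/MSE metrics mirror the paper’s protocol:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Synthetic stand-in for the ~11,000-point numerical dataset.
X, y = make_regression(n_samples=2000, n_features=9, noise=5.0, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

models = {
    "RF": RandomForestRegressor(random_state=1),
    "SVR": make_pipeline(StandardScaler(), SVR(C=100.0)),
    "KNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=5)),
}

scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    scores[name] = (r2_score(y_te, pred), mean_squared_error(y_te, pred))
```

Scale-sensitive models (SVR, KNN) are wrapped in a pipeline with StandardScaler so that standardization is fitted on the training fold only.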

4.3. Gas Cooler Pressure Drop Predictions

The graphical results shown in Figure 10a–e represent ΔP predictions using XGB, RF, SVR, KNNs, and ANNs. As with the capacity prediction, R2 and MSE were used to assess model reliability, showing consistently high accuracy across all approaches. Notably, XGB stands out with an exceptional R2 value of 0.99398, highlighting its robust capability in predicting ΔP. SVR and ANNs also exhibit significant R2 values of 0.99166 and 0.98954, respectively, emphasizing their accuracy. RF maintains a notable R2 value of 0.96809, whereas the KNN model, with its focus on proximity-based predictions, displays a somewhat lower R2 value of 0.86713. This is primarily due to the local nature of KNN, which performs well in dense regions but lacks a global modeling structure, making it less suitable for capturing complex ΔP patterns. In contrast, ANN excels by modeling intricate non-linear dependencies, while XGB and SVR effectively manage high variance through boosting and margin optimization, respectively.
At the same time, the MSE values, which quantify the average squared differences between the predicted and actual values, confirm the accuracy of the models. XGB demonstrates exceptional precision with a low MSE of 0.12172, indicating minimal discrepancies in its ΔP predictions. SVR and ANN exhibit reliable performance with MSE values of 0.14033 and 0.13358, respectively. RF, despite its robust R2 value, presents a higher MSE of 0.25647, indicating some variability in prediction accuracy, while the KNN model, which relies on local patterns, shows the highest MSE of 0.45952, pointing to difficulty in capturing complex relationships within the data. These variations emphasize the importance of matching the algorithm to the data characteristics and the complexity of the prediction task. Among all the models, XGB stands out as the most effective predictor, producing reliable ΔP predictions across the full range. The RF model exhibits slight dispersion in its predictions. The SVR model shows more nuanced behavior, with dispersed predictions at lower ΔP but enhanced accuracy at higher ΔP; this distinction indicates the SVR model’s sensitivity to the ΔP range and its potential applicability in high-ΔP scenarios. In contrast, the KNN model delivers highly dispersed predictions that deviate significantly from the acceptable range, raising concerns about its suitability for ΔP analysis. The ANN model likewise achieves excellent ΔP predictions, approaching XGB in accuracy; this strength can be attributed to its deep learning architecture, which effectively captures complex patterns and dependencies in the ΔP data.
In summary, XGB shows robust predictive capabilities, RF exhibits promise with slight fluctuation, and SVR reveals distinct behavior across ΔP ranges. While KNN’s highly dispersed predictions raise concerns about its reliability, XGB and ANN stand out for their precision and accuracy in ΔP predictions, substantiating the power of boosting and deep learning techniques, respectively.

4.4. Refrigerant (CO2) Outlet Temperature Predictions

Figure 11 illustrates that all models reveal remarkable accuracy in their predictions of refrigerant (CO2) outlet temperature. Among the models, the ANN demonstrates superior performance, providing the most accurate predictions of outlet temperature. This high level of precision can be attributed to its deep learning architecture, which effectively captures complex patterns and dependencies within the data, resulting in highly reliable outcomes. Furthermore, the SVR model notably avoids overfitting in this specific case. This indicates that SVR’s margin-based learning is well-tuned for smooth continuous outputs like temperature, though its performance may still be constrained in the presence of high-dimensional feature interactions, where ANN has a natural advantage.

4.5. Air Outlet Temperature Predictions

Our detailed assessment of predictive models for outlet air temperature forecasting revealed the notable potential of all models to achieve accurate predictions. Among them, the ANN model exhibits the highest precision and reliability in its predictions, as shown in Figure 12. Its superior performance can be attributed to its utilization of sophisticated DL techniques, which enables it to discern intricate patterns and dependencies within the data. The XGB and SVR models closely follow the ANN model in proficiency, demonstrating commendable predictive capabilities. Their competitive performance attests to their capacity to capture meaningful insights from the data, although they fall short of the exceptional results achieved using the ANN model. Moreover, the RF and KNN models offer noteworthy outcomes, though RF may have oversmoothed predictions due to averaging across trees, and KNN’s instance-based learning is more sensitive to local variations, limiting its generalization across broader operating conditions.

4.6. Comparative Analysis of Numerical Model and ANN

Figure 13a evaluates the percentage error in predicting Q by comparing experimental data with numerical simulation results, and numerical results with ANN predictions. The error between experimental and numerical results averages 3.63%, indicating that the numerical model effectively captures the thermal behavior of the gas cooler. This level of agreement suggests that the numerical model accurately incorporates the key thermophysical properties of the working fluid, heat transfer mechanisms, and flow distribution effects. On the other hand, the ANN model, trained on numerical data, demonstrates a slightly higher average error of 4.14%. This reflects the ANN’s capacity to generalize from training data and estimate thermal capacity under varying input conditions with reasonable accuracy. Despite the black-box nature of ANNs, their predictive reliability in this context highlights their suitability for quick performance estimation, especially when rapid analysis is needed in parametric studies or optimization routines.
Figure 13b analyzes the percentage error in predicting gas cooler ΔP, again comparing experimental data with numerical results, and numerical data with ANN predictions. ΔP prediction typically involves more sensitivity to geometric features, flow regime transitions, and localized effects such as entrance losses or maldistribution. The numerical model shows an average error of 6.05% when compared to experimental values, suggesting that while it captures the general trend, it may have limitations in modeling detailed flow resistance or minor losses. In contrast, the ANN model achieves a lower average error of 3.73% compared to the numerical results, indicating effective learning of the complex nonlinear mapping between inputs (e.g., geometric and operating conditions) and output (ΔP). Moreover, once trained, the ANN model produces predictions almost instantaneously, offering a significant computational advantage over simulation-based methods, which are typically resource-intensive and time-consuming. This speed makes ANN particularly attractive for real-time control, iterative design, and optimization applications.
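The percentage-error comparison above reduces to a simple metric; a sketch of how such averages can be computed is shown below (the sample values are illustrative only, not the paper’s data):

```python
import numpy as np

def mean_percentage_error(reference, predicted):
    """Average absolute percentage deviation of `predicted` from `reference`,
    as used to compare experimental vs. numerical and numerical vs. ANN results."""
    reference = np.asarray(reference, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs(predicted - reference) / np.abs(reference)) * 100.0)

# Illustrative capacity values in kW (hypothetical, for demonstration only):
experimental_Q = [5.2, 6.1, 7.4]
numerical_Q = [5.0, 6.3, 7.2]
avg_error = mean_percentage_error(experimental_Q, numerical_Q)
```

The same function applies unchanged to the ΔP comparison by swapping in pressure drop values.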

5. Conclusions

This study employed various machine learning algorithms (XGB, RF, SVR, KNN, and ANN) to predict the thermohydraulic performance of a microchannel gas cooler (MCGC). The models were trained on high-fidelity data generated from a validated numerical model, and their hyperparameters were carefully optimized to ensure robust performance. SHAP plots were used to interpret models and identify the influence of key operational and geometric parameters.
Among the tested models, ANN consistently delivered the highest accuracy across all performance metrics, including Q, ΔP, and outlet temperatures. XGB and SVR also performed well in certain cases, while KNN showed limitations in consistency. SHAP-based interpretability helped uncover important trends, such as the significant effect of flow rates and air-side temperatures.
This work adds novelty by focusing specifically on transcritical CO2 MCGCs, a relatively less-explored application in ML-based thermal system modeling. The integration of explainable ML techniques (e.g., SHAP) further enhances its practical relevance, offering insights that can support the design and optimization of advanced cooling systems, particularly in automotive and energy applications.
From a comparative standpoint, the findings highlight the importance of aligning model choice with data characteristics. ANN’s deep learning framework excels at capturing intricate nonlinear relationships in complex thermofluidic data, while XGB’s ensemble boosting effectively reduces bias and variance. SVR demonstrates strength in handling smooth trends without overfitting, and RF provides stability through averaging, albeit sometimes at the cost of precision. KNN, despite being intuitive and easy to implement, is more sensitive to local noise and less suited for capturing global patterns. These distinctions offer guidance for model selection in future applications of ML in heat exchanger performance prediction.

6. Practical Implications and Design Recommendations

The findings of this study offer practical guidance for the design and optimization of CO2 gas coolers in electric vehicle (EV) thermal management systems. The integration of machine learning (ML) models with SHAP-based interpretability highlights the most influential operating and geometric parameters affecting thermal performance. Specifically, refrigerant and air mass flow rates, as well as refrigerant inlet temperature, emerged as key factors influencing cooling capacity. This indicates that strategies such as adjusting refrigerant flow rates, enhancing air-side heat transfer via fan speed or duct design, and managing inlet conditions can yield significant improvements. Additionally, geometric factors such as tube length, number of passes, and number of ports should be carefully balanced to maximize heat transfer while avoiding flow maldistribution. The developed ML models, coupled with SHAP insights, not only support accurate performance prediction but also offer a robust framework for real-time control, early-stage design, and integration into digital twins, thereby translating data-driven methods into actionable engineering solutions.

Author Contributions

Conceptualization, S.I. and N.U.; Methodology, S.I. and N.U.; Validation, S.I. and N.U.; Formal analysis, S.I. and N.U.; Investigation, S.I. and N.U.; Data curation, S.I., N.U. and S.C.; Writing—original draft, S.I. and N.U.; Writing—review & editing, S.I., N.U., S.C. and M.-H.K.; Supervision, S.C. and M.-H.K.; Project administration, M.-H.K.; Funding acquisition, M.-H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The original contributions presented in the study are included in the article, further inquiries can be directed to the corresponding author.

Acknowledgments

This work was partly supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government (MOTIE) (Project number: RS-2025-02313376).

Conflicts of Interest

The authors declare no conflict of interest.

Nomenclature

ṁca	Air flow rate, kg/s
ΔP	Gas cooler pressure drop, kPa
ṁr	Refrigerant mass flow rate, kg/s
Pcri	Refrigerant inlet pressure, kPa
Q	Gas cooler capacity, kW
Tcai	Air inlet temperature, °C
Tao	Air outlet temperature, °C
Tcri	Refrigerant inlet temperature, °C
Tro	Refrigerant outlet temperature, °C

Abbreviations

CFD	Computational fluid dynamics
DL	Deep learning
EVs	Electric vehicles
ML	Machine learning
MCGC	Microchannel gas cooler
MACS	Mobile air-conditioning system
SHAP	Shapley additive explanations
XGB	XGBoost (extreme gradient boosting)
RF	Random forest
SVR	Support vector regression
KNNs	k-nearest neighbors
ANNs	Artificial neural networks
R2	Coefficient of determination
MSE	Mean squared error

Figure 1. Microchannel tube cross-section.
Figure 2. Flow chart of a numerical model for transcritical CO2 gas cooler.
Figure 3. (a) Comparison of experimental vs. simulated Q. (b) Comparison of experimental vs. simulated gas cooler ΔP.
Figure 4. Schematic flow chart of the proposed framework.
Figure 5. Schematic of the XGB model.
Figure 6. Schematic of the RF model.
Figure 7. (a,b) Schematic of the ANN model.
Figure 8. Impacts of various input parameters on (a) SHAP feature impact on output Q, and (b) SHAP feature impact on ΔP.
Figure 9. Comparison of the actual and predicted gas cooler capacities using (a) XGB, (b) RF, (c) SVR, (d) KNNs, and (e) ANNs.
Figure 10. Comparison of the actual and predicted gas cooler ΔPs using (a) XGB, (b) RF, (c) SVR, (d) KNNs, and (e) ANNs.
Figure 11. Comparison of the actual and predicted refrigerant outlet temperatures using (a) XGB, (b) RF, (c) SVR, (d) KNNs, and (e) ANNs.
Figure 12. Comparison of the actual and predicted air outlet temperatures using (a) XGB, (b) RF, (c) SVR, (d) KNNs, and (e) ANNs.
Figure 13. (a) Comparison of percentage difference in Q predictions (experimental vs. numerical) and (numerical vs. ANN). (b) Comparison of percentage difference in gas cooler ΔP predictions (experimental vs. numerical) and (numerical vs. ANN).
Table 1. Geometric specifications.

Number of tubes: 34
Tube length (mm): 545
Number of ports: 11
Port diameter (mm): 0.79
Fin type: Louvered fins
Fin height (mm): 8.89
Fin pitch (mm): 2.5
Fin width (mm): 16
Table 2. Operating conditions.

S.No | Tri [°C] | Pri [kPa] | Tai [°C] | ṁr [g/s] | ṁa [g/s]
I17-1 | 106.8 | 9833 | 43.6 | 20.78 | 451
I17-2 | 111.7 | 10,355 | 43.6 | 19.8 | 451
I17-3 | 115.8 | 10,888 | 43.6 | 19.02 | 452
I17-4 | 119.7 | 11,388 | 43.6 | 18.45 | 452
I17-5 | 123 | 11,854 | 43.6 | 17.96 | 452
I6-1 | 115.8 | 12,464 | 55.1 | 26.39 | 457
I6-2 | 118 | 12,672 | 55 | 25.91 | 457
I6-3 | 119.2 | 12,855 | 55 | 25.61 | 457
I6-4 | 120.5 | 12,960 | 54.9 | 25.26 | 456
I6-5 | 125 | 13,335 | 54.9 | 24.47 | 456
I6-6 | 126.6 | 13,592 | 54.9 | 23.94 | 456
M03-1 | 124.7 | 10,937 | 42.7 | 37.84 | 537
M03-2 | 124.3 | 10,950 | 42.8 | 38.05 | 537
M03-3 | 125 | 10,974 | 42.9 | 37.75 | 537
M03-4 | 124.7 | 10,975 | 42.9 | 37.93 | 537
H03-1 | 129.3 | 10,338 | 43.6 | 56.39 | 701
H03-2 | 129.5 | 10,351 | 43.9 | 56.39 | 700
H03-3 | 138.6 | 10,792 | 43.5 | 56.36 | 701
H03-4 | 142.6 | 11,025 | 43.7 | 54.83 | 700
H03-5 | 148.9 | 11,756 | 43.5 | 50.13 | 700
Table 3. Ranges of geometric and operating conditions used for machine learning models.

Parameter | Range/Values
Passes | 3–5
Tube numbers | 18–60
Number of circular ports | 9–21
MCGC dimensions | Length: 545 mm, Depth: 16.51 mm, Height: 1.65 mm
Ambient temperature | 42–56 °C
Inlet air flow rate | 0.45–0.70 kg/s
Refrigerant flow rate | 0.018–0.057 kg/s
The dataset was preprocessed to ensure compatibility with the ML algorithms. Input features were standardized using the StandardScaler function to achieve zero mean and unit variance.
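This preprocessing step can be sketched as follows, on placeholder data (the real feature matrix holds the Table 3 parameters). The scaler is fitted on the 80% training split only, so no test-set statistics leak into preprocessing:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.uniform(size=(1000, 9))   # placeholder feature matrix (9 inputs)
y = X @ rng.uniform(size=9)       # placeholder target vector

# 80/20 split first, then standardize using training statistics only.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=7)
scaler = StandardScaler().fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)
```

After transformation, each training-set feature has zero mean and unit variance; the test set is transformed with the same fitted statistics.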
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Ishaque, S.; Ullah, N.; Choi, S.; Kim, M.-H. Evaluating the Thermohydraulic Performance of Microchannel Gas Coolers: A Machine Learning Approach. Energies 2025, 18, 3007. https://doi.org/10.3390/en18123007