Article

Comparative Assessment of Supervised Machine Learning Models for Predicting Water Uptake in Sorption-Based Thermal Energy Storage

1 Automation and Energy Systems, Saarland University, D-66123 Saarbrücken, Germany
2 Industrial Security Lab, ZeMA—Center for Mechatronics and Automation Technology, D-66121 Saarbrücken, Germany
3 Department of Mechanical Engineering, Ulsan National Institute of Science and Technology, Ulsan 44919, Republic of Korea
* Author to whom correspondence should be addressed.
Energies 2026, 19(7), 1619; https://doi.org/10.3390/en19071619
Submission received: 25 February 2026 / Revised: 20 March 2026 / Accepted: 23 March 2026 / Published: 25 March 2026

Abstract

In this study, supervised machine learning (ML) regression models are employed to predict water uptake during the sorption process in a sorption reactor for thermal energy storage applications. Two main methods are used to study sorption storage systems: experimental studies and numerical simulations. Experimental studies involve physical testing and measurements but are often costly and time-consuming. Numerical simulations are more flexible and cost-effective, though they can require significant computational resources for large or complex systems. To address these challenges, researchers are increasingly employing various machine learning techniques, which offer strong potential for data analysis and predictive modeling. In this study, CFD-based sorption simulations are integrated with machine learning models to predict the spatiotemporal evolution of water uptake. Several ML techniques, including support vector regression (SVR), Random Forest, XGBoost, CatBoost (gradient boosting decision trees), and multilayer perceptron neural networks (MLPs), are evaluated and compared. A fixed-bed reactor equipped with fins and tubes is considered within a closed adsorption thermal storage system. Numerical simulations are conducted for three different fin lengths (10 mm, 25 mm, and 35 mm) to generate a comprehensive dataset for training the ML models and capturing the complex temporal evolution of water uptake, thereby enabling predictions for unseen fin geometries. The results indicate that neural network-based models achieve superior predictive performance compared to the other methods. For water uptake training, the mean absolute error (MAE), root mean squared error (RMSE), and coefficient of determination $R^2$ are approximately 2.83, 4.37, and 0.91, respectively. The predicted water uptake shows close agreement with the numerical simulation results. For the prediction cases, the MAE, MSE, and $R^2$ values are approximately 1.13, 1.2, and 0.8, respectively.
Overall, the study demonstrates that machine learning models can accurately predict water uptake beyond the training dataset, indicating strong generalization capability and significant potential for improving thermal management system design. Additionally, the proposed approach reduces simulation time and computational cost while providing an efficient and reliable framework for modeling complex sorption processes in thermal energy storage systems.

1. Introduction

The building sector plays a significant role in global energy consumption and CO2 emissions, contributing to climate change and the depletion of natural resources [1,2]. A significant share of building energy use is associated with heating and cooling, and projections indicate that by 2050, two-thirds of residential buildings will require air conditioning [3]. Under these circumstances, increasing the integration of renewable energy sources in buildings has become essential. However, the intermittent and variable nature of renewable energy presents a key obstacle to maintaining a stable and reliable energy supply. Energy storage technologies provide an effective solution to this challenge by enabling excess energy generated during periods of high renewable production to be stored and later released when demand is high or generation is low. Thermal energy storage (TES) systems, classified into sensible heat storage (SHS), latent heat storage (LHS), and thermochemical energy storage (TCES), offer valuable means of mitigating fluctuations in renewable energy availability and reducing dependence on fossil fuels.
Sorption energy storage has emerged as a promising thermal energy storage (TES) technology due to its high energy density and negligible heat losses during long-term storage, outperforming conventional sensible and latent heat storage systems [4,5]. In thermal energy storage systems, adsorption refers to the process by which a gas (adsorbate) adheres to the surface of a solid material (adsorbent) [6,7]. Heat storage and release in adsorption-based systems rely on reversible interactions between the adsorbent and adsorbate, typically involving three key stages: charging, storage, and discharging [8]. Common adsorbent materials such as zeolite, silica gel, and activated carbon are widely used for building applications, while water is typically employed as the adsorbate due to its low cost, availability, and non-toxic nature [9,10]. Köll et al. [11] experimentally examined a sorption storage system designed to supply domestic hot water and space heating for a single-family residence. The system comprised two reactors filled with zeolite 13XBF and was charged using evacuated tube solar collectors. Padamurthy et al. [12] experimentally investigated a sorption energy storage system using zeolites for low-temperature heat storage applications. They conducted both desorption and adsorption experiments and analyzed the performance characteristics to assess the system’s suitability for the targeted application. Ji et al. [13] introduced a metal mesh net-packing technique to enhance the energy storage density and thermal efficiency of a thermochemical energy storage system. They designed three mesh geometries with two different packing arrangements and evaluated key reactor parameters. Their findings indicated that the cube-shaped aluminum mesh net configuration provided the best overall performance. Gaeini et al. [14] developed and evaluated a sorption storage system employing the zeolite 13X–water pair for domestic hot water production in residential buildings. 
Their results showed that, by using one of the system’s segments, 100 L of water could be heated to 75 °C, demonstrating the feasibility of producing domestic hot water with this type of sorption-based system. Palo et al. [15] studied the performance of a novel adsorption thermal storage system based on steam vapor and zeolite 13X through both experimental and numerical approaches. Their results showed that the system could achieve charging efficiencies above 80% and discharging efficiencies exceeding 50%, demonstrating strong potential for practical thermal energy storage applications. Gao et al. [16] conducted a parametric investigation to examine how different charging conditions—specifically temperature, humidity, and volumetric flow rate—affect the performance of an open sorption system, with a particular focus on the coefficient of performance (COP). Their findings revealed that approximately 60% of the input thermal energy was lost during the charging phase due to the direct release of hot outlet air from the reactor into the surrounding environment.
Although exergy-based analysis has been widely used as a powerful tool for evaluating the thermodynamic performance of energy systems [17], two primary approaches are commonly employed to investigate sorption storage systems: experimental studies and numerical simulations, each with distinct advantages and limitations. Experimental studies provide accurate and reliable insights into system behavior through direct physical testing, but they are often costly and time-consuming due to the need for specialized equipment and detailed data acquisition. In contrast, numerical simulations offer a more flexible and cost-effective alternative by modeling sorption processes and heat transfer phenomena, allowing the evaluation of a wide range of operating conditions. However, for large or complex systems, such simulations can still require significant computational time and resources. To tackle these challenges, methodologies such as Design of Experiments (DOE) and Response Surface Methodology (RSM) are commonly employed, as they provide powerful tools for analyzing and optimizing complex systems [18]. DOE offers a systematic framework for examining the influence of multiple variables with minimal experimental effort, while RSM extends this approach by using regression-based models to characterize relationships and identify optimal operating conditions. These methods are particularly useful in controlled experimental settings with limited data and where understanding variable interactions is essential. However, their ability to capture complex or strongly nonlinear behavior is limited, especially in systems governed by intricate physical interactions. To overcome these limitations, researchers increasingly turn to machine learning techniques, which have shown strong potential for analyzing and predicting heat transfer processes in recent studies [19,20].
Machine learning is particularly effective at handling large, complex datasets and capturing non-linear relationships, making it well suited for problems involving high-dimensional inputs and outputs. Its application has expanded across numerous fields, including fluid mechanics, heat exchangers, and fluid flow systems [21,22]. A variety of machine learning approaches such as gradient boosting decision trees, multilayer perceptron neural networks, support vector regression, Random Forest, and XGBoost have been applied to capture these relationships without the need for detailed, system-specific information. These methods are particularly effective for the modeling and optimization of sorption storage systems, as they can handle complex system behavior, lower computational costs, and provide rapid, adaptable, and accurate predictions. By training machine learning models using data from experimental studies or numerical simulations, researchers can create predictive frameworks that efficiently estimate system performance under new operating conditions without the need for extensive testing. Combining experimental techniques, numerical modeling, and machine learning therefore shortens development time, reduces resource consumption, and enables new opportunities for system optimization. Balakrishnan et al. [23] employed several machine learning approaches, including Random Forest, extreme gradient boosting, CatBoost, SVM, and ANNs, to predict the thermal behavior of PCM based on experimental measurements. Their work focused on accurately modeling the charging and discharging cycles in a heat exchanger and demonstrated that machine learning can improve the design of thermal energy storage systems. Amudhalapalli et al. [24] investigated shell-and-tube heat exchangers incorporating phase change materials, enhanced with copper metal foams, to predict the melt fraction using machine learning approaches.
Several ML models were assessed, including Linear Regression, Support Vector Regression, XGBoost, and K-Nearest Neighbors, with these methods achieving the highest prediction accuracy for melt fraction during both melting and solidification processes. The results underscore the effectiveness of machine learning techniques for modeling transient heat transfer behavior. Zhao and Alshehri [25] integrated numerical simulations with machine learning regression techniques to model an ozonation process coupled with membrane separation, with particular emphasis on mass transfer and the distribution of ozone concentration in the liquid phase. Using data generated from CFD simulations, ozone concentration was predicted employing Deep Neural Networks (DNNs), Gaussian Process Regression (GPR), and Extreme Gradient Boosting models. The results indicate that the GPR and DNN approaches achieved high predictive accuracy, whereas the XGBoost model exhibited comparatively lower performance. Delmarre et al. [26] applied an ML model to simulate a sorption heat storage system as an alternative to conventional physical modeling. The neural network was trained using experimental data to assess its performance under real operating conditions. Their findings showed that a recurrent neural network (RNN) achieved high accuracy, with predicted outcomes closely matching the experimental results. Scapino et al. [27] explored the capability of neural network models to predict the behavior of a sorption thermal energy storage system. Their ANN was trained using simulated data generated from a physics-based model. The results demonstrated that the network could accurately reproduce and forecast the system’s dynamic performance, achieving mean squared error values below 2 × 10⁻³. Skrobek et al. [28] evaluated several neural network architectures—including LSTM, BiLSTM, and GRU—to identify the most effective model for predicting mass variations within a sorption bed.
The dataset incorporated the fluidized state of the adsorption bed under reduced pressure. Their numerical investigation focused on mass prediction using these algorithms for silica gel sorbents enhanced with copper, aluminum, and carbon-nanotube additives.
Although many researchers have explored the use of ML and numerical simulations in thermal and sorption systems, relatively few studies have examined the capability of ML methods to predict system behavior beyond the conditions included in the training dataset, thereby reducing computational effort and resource consumption. Even fewer works have focused specifically on forecasting the temporal evolution of sorption water uptake. This study aims to address these research gaps by evaluating the ability of different ML methods to predict the dynamic water uptake in a sorption system housed within an enclosure with different fin lengths, thereby contributing new insights to this under-explored area. Numerical simulations are first conducted for three fin lengths (10 mm, 25 mm, and 35 mm). The corresponding coordinate data for water uptake at various time steps are then extracted, and different ML models, such as CatBoost, MLP, SVR, Random Forest, and XGBoost, are trained on the resulting dataset. These models were chosen because they represent widely used and well-established regression approaches in thermal engineering and energy system modeling, including both neural networks and ensemble learning techniques. Based on the metrics, the best model is selected and, after training, applied to predict water uptake for two additional fin lengths (20 mm and 30 mm), which are not included in the training dataset; the ML models thus interpolate adsorption behavior within the investigated geometric range rather than extrapolating beyond it. The predicted results are compared against separate numerical simulations to evaluate the model’s predictive accuracy.

2. Materials and Methods

Figure 1 shows the geometric model used for numerical simulation. In this study, a cylindrical fixed-bed reactor is selected as the research subject within a closed sorption heat storage system. The computational domain, depicted in Figure 1, consists of the zeolite 13X-filled bed, the tube containing the heat transfer fluid (HTF), and the attached fins. The inner and outer tube radii ($r_i$ and $r_{out}$), the heat exchanger length ($L_{bed}$), and the wall thickness ($t$) are 7 mm, 58 mm, 64 mm, and 1 mm, respectively. Copper fins are uniformly distributed along the tube surface. The thermophysical properties of the water vapor, fins, and zeolite, as well as geometric and other parameters, are provided in reference [29]. Inside the enclosure, two fins are placed, each with a thickness of 0.86 mm. The fin length, denoted as $L$, varies in simulations, with values of 10 mm, 25 mm, and 35 mm considered. The outcomes of these simulations are utilized to train the ML model by supplying spatial data of the water uptake, allowing the model to capture the dynamic water uptake of the reactor during the discharge process under various conditions. Once trained, the best model is selected and applied to predict the water uptake characteristics for intermediate fin lengths of 20 mm and 30 mm, which were excluded from the training dataset. The predicted results for these intermediate cases are then thoroughly validated against the corresponding numerical simulation data to evaluate the model’s accuracy and reliability.
Figure 2 presents the workflow diagram of the study. As illustrated, ML models are employed to improve the efficiency of simulations related to water uptake. Initially, numerical simulations are conducted for the sorption reactor with fin lengths of 10 mm, 25 mm, and 35 mm to obtain contour data of water uptake. Subsequently, the X and Y coordinate data are extracted from these contours at different time steps and used to train the ML models. Using the metric-based evaluation, the most suitable machine learning model for the dataset is identified. The selected model is then trained and applied to predict the coordinate data for additional fin lengths of 20 mm and 30 mm. The predicted results are compared and validated against the corresponding numerical simulation outcomes. If the ML predictions show sufficient accuracy and reliability, the model is considered effective in significantly reducing computational time while accurately predicting results for untested fin lengths. In cases where the predictive performance is unsatisfactory, the model is refined through systematic hyperparameter tuning: critical hyperparameters are optimized using a structured search strategy with cross-validation to improve accuracy and generalization.

2.1. Governing Equations

The finite element method (FEM)-based computational fluid dynamics (CFD) software, COMSOL Multiphysics 6.1, is employed to numerically solve the interconnected mass and heat transfer equations. Custom user-defined functions were developed by Abohamzeh et al. [29] and used to incorporate expressions for reaction kinetics and thermophysical properties. The mesh is constructed by dividing the three distinct domains, heat transfer fluid (HTF), heat transfer tube, and adsorbent bed (porous region), into very fine elements to achieve superior accuracy in the simulation outcomes. All equations are discretized via a quadratic second-order scheme. To minimize computational errors, an extremely fine grid resolution is employed near the boundaries between the porous medium and the solid phase. Simulations were conducted with three mesh densities, 22,950, 54,879, and 87,606 triangular elements, and the resulting water uptake quantities were compared. The findings from the finest mesh (87,606 elements) showed deviations of no more than 0.02% relative to the 54,879-element case. Grid independence validation is presented in Figure 3.
The governing equations used to model this process are outlined below.

2.1.1. Adsorption Model

The adsorption rate and corresponding adsorbed mass are calculated using the linear driving force (LDF) model [30]:
$\frac{\partial X}{\partial t} = K_{LDF}\left(X_{EQ} - X\right)$
where $X_{EQ}$ is the equilibrium adsorption capacity, $X$ is the instantaneous adsorbed amount, and $K_{LDF}$ is the lumped mass-transfer coefficient. This coefficient characterizes the ease with which vapor diffuses from the adsorbent surface into the interior of a particle and is defined as [30]:
$K_{LDF} = \frac{15}{r_p^2}\, D_0 \exp\left(-\frac{E_a}{RT}\right)$
Here, $r_p$ is the radius of the zeolite particle, $D_0$ is the pre-exponential diffusivity, and $E_a$ is the activation energy. To describe equilibrium adsorption behavior within the bed, an adsorption equilibrium model is required to relate local thermodynamic conditions (namely temperature and vapor pressure) to the equilibrium water uptake. In this study, Dubinin’s theory [31] is used for this purpose. This model is appropriate for materials with pore sizes smaller than 2 nm and assumes that adsorption is controlled primarily by micropore volume rather than surface area. Based on the Dubinin–Astakhov formulation, the equilibrium uptake $X_{EQ}$ for a zeolite 13X–water pair is given by [32,33]:
$X_{EQ} = X_0 \exp\left[-B\left(\frac{T}{T_{sat}} - 1\right)^{n}\right]$
where $X_0$ is the maximum adsorption capacity and $B$ and $n$ are D–A fitting parameters specific to the zeolite–water system [34].
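To make the kinetic and equilibrium relations above concrete, the sketch below integrates the LDF equation with an explicit Euler step. All parameter values ($r_p$, $D_0$, $E_a$, $X_0$, $B$, $n$) are illustrative assumptions for demonstration only, not the fitted zeolite 13X–water constants of refs. [30,32,33,34].

```python
import math

# Illustrative parameter values (assumed, not the fitted constants):
r_p = 1.0e-3      # particle radius (m)
D_0 = 5.8e-9      # pre-exponential diffusivity (m^2/s)
E_a = 2.33e4      # activation energy (J/mol)
R   = 8.314       # universal gas constant (J/(mol K))
X_0 = 0.32        # maximum uptake capacity (kg/kg)
B, n = 5.36, 1.73 # Dubinin-Astakhov fitting parameters

def k_ldf(T):
    """Lumped mass-transfer coefficient K_LDF = (15/r_p^2) D_0 exp(-E_a/RT)."""
    return (15.0 / r_p**2) * D_0 * math.exp(-E_a / (R * T))

def x_eq(T, T_sat):
    """Dubinin-Astakhov equilibrium uptake X_EQ = X_0 exp[-B (T/T_sat - 1)^n]."""
    return X_0 * math.exp(-B * (T / T_sat - 1.0) ** n)

def step_uptake(X, T, T_sat, dt):
    """One explicit Euler step of dX/dt = K_LDF * (X_EQ - X)."""
    return X + dt * k_ldf(T) * (x_eq(T, T_sat) - X)

# Uptake relaxes monotonically toward equilibrium during discharge
X = 0.05
for _ in range(600):              # 600 s with dt = 1 s
    X = step_uptake(X, T=330.0, T_sat=298.0, dt=1.0)
print(round(X, 5))
```

Because $K_{LDF}$ multiplies the departure from equilibrium, the uptake approaches $X_{EQ}$ exponentially; a larger particle radius or activation energy slows the relaxation.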

2.1.2. Vapor Transport in Bed

$\varepsilon_{eff}\,\frac{\partial C}{\partial t} - D_{eff}\,\nabla^2 C + \nabla \cdot (C\,\mathbf{u}) = R_s$
$\varepsilon_{eff} = \varepsilon_{bed} + \varepsilon_p\left(1 - \varepsilon_{bed}\right)$
$R_s = -\left(1 - \varepsilon_{eff}\right)\frac{\rho_s}{M_w}\,\frac{\partial X}{\partial t}$
In this model, $D_{eff}$ represents the effective gas diffusivity within the porous medium, $C$ is the water vapor concentration, $\rho_s$ is the adsorbent density, $M_w$ is the molar mass of water vapor, and $\varepsilon_{eff}$ denotes the effective porosity; $\varepsilon_p$ and $\varepsilon_{bed}$ correspond to the particle porosity and the bed porosity of the zeolite, respectively.
Because the system operates under very low pressure, the water vapor behaves as an ideal gas, allowing its density to be calculated using the ideal gas law. Water vapor transport through the porous adsorbent occurs via both diffusion and advection. Pressure gradients drive the bulk movement of vapor, while the local velocity in both radial and axial directions is described using the Darcy equation, which is appropriate for the low gas velocities typically encountered in porous media such as adsorption beds [35].
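The ideal-gas density and the Darcy velocity described above can be evaluated directly; the small sketch below does so with assumed property values (pressure, permeability, viscosity, and pressure gradient are illustrative, not taken from the study).

```python
# Ideal-gas vapor density: rho = p * M_w / (R * T)
R = 8.314        # universal gas constant (J/(mol K))
M_w = 0.018      # molar mass of water (kg/mol)
p = 1200.0       # vapor pressure (Pa), assumed low-pressure operation
T = 300.0        # temperature (K), assumed
rho = p * M_w / (R * T)

# Darcy velocity: u = -(kappa / mu) * dp/dr
kappa = 1.0e-10  # bed permeability (m^2), assumed
mu = 9.8e-6      # water-vapor dynamic viscosity (Pa s), assumed
dp_dr = -50.0    # radial pressure gradient (Pa/m), assumed
u_r = -(kappa / mu) * dp_dr   # vapor flows down the pressure gradient

print(round(rho, 6), u_r)
```

A negative radial pressure gradient yields a positive (outward) Darcy velocity, consistent with vapor being drawn into the bed as adsorption lowers the local pressure.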

2.1.3. Heat Transfer

Three domains are considered: the adsorbent bed, the fins and heat transfer tube (HTT), and the heat transfer fluid (HTF).
Adsorbent bed:
$(\rho C_p)_{eff}\,\frac{\partial T_s}{\partial t} + \nabla \cdot \left(\rho_f C_{p,f}\,\mathbf{u}\,T_s\right) - k_{eff}\,\nabla^2 T_s + h_{s,co}\,\frac{A_{s,co}}{V_s}\left(T_s - T_{co}\right) = \left(1 - \varepsilon_{eff}\right)\rho_s\,\frac{\partial X}{\partial t}\,\Delta H$
HTT:
$\rho_{co} C_{p,co}\,\frac{\partial T_{co}}{\partial t} - k_{co}\,\nabla^2 T_{co} + h_{ti}\,\frac{A_{ti}}{V_t}\left(T_{co} - T_{htf}\right) + h_{s,co}\,\frac{A_{s,co}}{V_t}\left(T_{co} - T_s\right) = 0$
HTF:
$\rho_{htf} C_{p,htf}\,\frac{\partial T_{htf}}{\partial t} + \rho_f C_{p,f}\,\mathbf{u} \cdot \nabla T_{htf} - k_{htf}\,\nabla^2 T_{htf} + \frac{2}{r_t}\,h_{ti}\left(T_{htf}\big|_{r = r_t} - T_{co}\right) = 0$
where $C_{p,f}$ and $\rho_f$ represent the specific heat capacity and density of the water vapor, respectively. The term $(\rho C_p)_{eff}$ denotes the effective volumetric heat capacity of the porous medium. Additionally, $k_{co}$, $C_{p,co}$, and $\rho_{co}$ correspond to the thermal conductivity, specific heat capacity, and density of the copper fins, and $h_{ti}$ is the heat transfer coefficient between the heat transfer fluid and the tube.

2.2. Validation of the Simulation Results

To confirm the accuracy of the simulation, the present model was validated by Abohamzeh et al. [29] against the experiments of Wu et al. [36] for the desorption process in a closed system. Good agreement was observed between the model predictions and the results reported in the literature, demonstrating the reliability and accuracy of the model for further numerical calculations.

2.3. Machine Learning

In this study, the water uptake of the sorption reactor under varying fin lengths is simulated to analyze the discharge behavior and to generate spatial data for training with different ML models. The geometric influence is primarily characterized by the fin length parameter, which directly impacts the heat transfer behavior of the reactor. All other geometric parameters, including reactor dimensions and fin thickness, are held constant to isolate the effect of fin length on overall system performance. The extracted water uptake data is represented using coordinate values. As shown in Figure 4, these coordinates are recorded along the reactor domain from the starting point $(R_{min}, Z_{min})$ to the endpoint $(R_{max}, Z_{max})$, providing a detailed spatial representation of the discharge process. For each coordinate $(R, Z)$ within the domain, the corresponding values of water uptake are recorded and organized into three columns: the R column, Z column, and Value column (water uptake). Consequently, a distinct dataset is generated for the training stage.
For training, fin lengths of 10 mm, 25 mm, and 35 mm are selected to represent a broad range of operating conditions. For each fin length, the water uptake data is recorded at multiple time intervals. Because significant variations occur at the beginning of the discharge process, data recording starts at 10 min, with intervals of 10 min up to 50 min. Beyond this point, the interval is increased to 50 min, continuing until 250 min. This approach yields a comprehensive dataset encompassing water uptake distribution across different times and fin lengths. At each interval, coordinate data for water uptake are collected, providing a robust dataset for ML training. Thermophysical parameters such as effective diffusivity, adsorption constants, and heat transfer coefficients were not directly used as input features because they remain constant throughout the simulations and are already embedded in the CFD model that generated the dataset. Such detailed data enables the ML models to effectively learn and model the relationships between variables, allowing for accurate prediction of water uptake distribution under varying conditions. Consequently, the trained model contributes to improved design and optimization of sorption heat storage systems. Figure 5 presents a schematic overview of the proposed methodology, structured into different phases and steps, and illustrates the algorithms and tools incorporated into the framework. To select the most suitable type of model, widely used data-driven machine learning models in the field of thermal storage were considered. These include CatBoost (gradient boosting decision trees), multilayer perceptron neural networks (MLPs), support vector regression (SVR), Random Forest, and XGBoost. Table 1 shows a summary of the supervised learning models reviewed and their comparison.
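The assembly of such a coordinate table can be sketched as follows; the random arrays below merely stand in for the exported CFD contour data, and the feature layout (R, Z, time, fin length) is an assumption based on the description above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the exported CFD snapshots: each (fin_length_mm, time_min)
# key maps to arrays of (R, Z, uptake) samples. In the study these come
# from COMSOL contour exports, not random numbers.
times_min = [10, 20, 30, 40, 50, 100, 150, 200, 250]
snapshots = {}
for fin in (10, 25, 35):                       # training fin lengths (mm)
    for t in times_min:
        R = rng.uniform(7e-3, 58e-3, 200)      # radial coordinate (m)
        Z = rng.uniform(0.0, 64e-3, 200)       # axial coordinate (m)
        uptake = rng.uniform(0.05, 0.30, 200)  # placeholder uptake values
        snapshots[(fin, t)] = (R, Z, uptake)

# Flatten into a supervised-learning table: X = [R, Z, t, fin], y = uptake
rows, targets = [], []
for (fin, t), (R, Z, uptake) in snapshots.items():
    for r, z, u in zip(R, Z, uptake):
        rows.append([r, z, float(t), float(fin)])
        targets.append(u)

X = np.asarray(rows)
y = np.asarray(targets)
print(X.shape, y.shape)   # one row per spatial sample
```

Because fin length enters the table as an ordinary feature, a model trained on 10, 25, and 35 mm can be queried at 20 or 30 mm without rerunning the CFD solver.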
CatBoost is a gradient boosting decision tree (GBDT) algorithm that constructs an additive ensemble of decision trees. It employs ordered boosting and symmetric tree structures to reduce prediction bias and overfitting.
The predicted output for the $i$-th sample is expressed as:
$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in \mathcal{F}$
where $f_k$ is the $k$-th regression tree, $K$ is the number of trees, and $\mathcal{F}$ is the space of regression trees.
The regularized training objective is:
$\mathcal{L} = \sum_{i=1}^{n} l\left(y_i, \hat{y}_i\right) + \sum_{k=1}^{K} \Omega\left(f_k\right)$
where $\Omega(\cdot)$ penalizes tree complexity (e.g., depth, leaf weights). CatBoost’s symmetric trees improve stability and generalization for coupled thermophysical predictors.
An MLP is a feed-forward neural network composed of $L$ layers with nonlinear activation functions. Let $h^{(0)} = x$. For hidden layers $l = 1, \dots, L-1$:
$h^{(l)} = \sigma\left(W^{(l)} h^{(l-1)} + b^{(l)}\right)$
and the output layer (scalar regression) is:
$\hat{y} = f_\theta(x) = W^{(L)} h^{(L-1)} + b^{(L)}$
The parameters are $\theta = \{W^{(l)}, b^{(l)}\}_{l=1}^{L}$, and $\sigma(\cdot)$ may be ReLU or tanh.
A common choice is the squared loss with $\ell_2$ regularization:
$\min_\theta \; \frac{1}{n}\sum_{i=1}^{n} \left(y_i - f_\theta(x_i)\right)^2 + \lambda \sum_{l=1}^{L} \left\lVert W^{(l)} \right\rVert_2^2$
where $\lambda > 0$ controls weight decay. MLPs can approximate complex nonlinear sorption dynamics but require regularization (and often early stopping) to mitigate overfitting.
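The layer recursion above amounts to a few matrix products; a minimal numpy sketch of the forward pass is shown below (the layer sizes and random weights are illustrative, not the trained network of this study).

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(z):
    """Elementwise ReLU activation sigma(z) = max(z, 0)."""
    return np.maximum(z, 0.0)

def mlp_forward(x, weights, biases):
    """Forward pass h^(l) = sigma(W^(l) h^(l-1) + b^(l)); linear output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = relu(W @ h + b)
    return weights[-1] @ h + biases[-1]   # scalar regression output

# Small illustrative network: 4 inputs -> 16 -> 16 -> 1
sizes = [4, 16, 16, 1]
weights = [rng.normal(0.0, 0.5, (m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

x = np.array([0.02, 0.03, 50.0, 25.0])    # e.g. (R, Z, time, fin length)
y_hat = mlp_forward(x, weights, biases)
print(y_hat.shape)
```

The $\ell_2$ penalty in the objective corresponds to adding $\lambda \sum_l \lVert W^{(l)} \rVert_2^2$ to the training loss, which shrinks the weight matrices and discourages overfitting.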
Support Vector Regression constructs a maximum-margin regression function with an $\varepsilon$-insensitive loss. The model is
$f_\theta(x) = w^\top \phi(x) + b$
where $\phi(\cdot)$ maps inputs into a feature space and $\theta = (w, b)$.
The primal optimization problem is:
$\min_{w, b, \xi_i, \xi_i^*} \; \frac{1}{2}\left\lVert w \right\rVert_2^2 + C \sum_{i=1}^{n} \left(\xi_i + \xi_i^*\right)$
subject to:
$y_i - \left(w^\top \phi(x_i) + b\right) \le \varepsilon + \xi_i$
$\left(w^\top \phi(x_i) + b\right) - y_i \le \varepsilon + \xi_i^*$
$\xi_i, \xi_i^* \ge 0, \quad i = 1, \dots, n$
Using a kernel $K(x, x') = \phi(x)^\top \phi(x')$, the predictor can be written in dual form:
$f_\theta(x) = \sum_{i=1}^{n} \left(\alpha_i - \alpha_i^*\right) K(x_i, x) + b$
where $\alpha_i, \alpha_i^*$ are dual variables. SVR is often effective for small-to-medium datasets and provides strong generalization when nonlinear kernels are appropriate.
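The dual-form predictor is easy to evaluate once the dual variables are known; the sketch below does so with an RBF kernel and hand-picked support vectors and coefficients (purely illustrative, since in practice these come from solving the $\varepsilon$-insensitive quadratic program).

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    """RBF kernel K(x, x') = exp(-gamma * ||x - x'||^2)."""
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def svr_predict(x, X_sv, coef, b, gamma=1.0):
    """Dual-form predictor f(x) = sum_i (alpha_i - alpha_i*) K(x_i, x) + b."""
    return np.dot(coef, rbf_kernel(X_sv, x, gamma)) + b

# Illustrative support vectors and dual coefficients (alpha_i - alpha_i*)
X_sv = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
coef = np.array([0.5, -0.2, 0.3])
b = 0.1

pred = svr_predict(np.array([0.5, 0.5]), X_sv, coef, b)
print(pred)
```

Far from all support vectors the kernel terms vanish, so the prediction decays toward the bias $b$; this locality is why kernel choice matters for generalization.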
Random Forest is an ensemble method that averages predictions from M regression trees trained on bootstrap samples with randomized feature selection at splits.
The prediction is:
$\hat{y} = f_\theta(x) = \frac{1}{M} \sum_{m=1}^{M} T_m(x)$
where $T_m(\cdot)$ is the $m$-th regression tree. Each tree is trained by minimizing squared errors on its bootstrap dataset $D_m \subset D$:
$\min_{T_m} \sum_{(x_i, y_i) \in D_m} \left(y_i - T_m(x_i)\right)^2$
Random Forest reduces variance through bagging and is robust to noise in experimental measurements, making it suitable for sorption datasets.
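The bagging idea above can be demonstrated with a hand-rolled sketch that averages depth-1 regression stumps fit on bootstrap resamples; the stumps are a deliberately minimal stand-in for full CART trees, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_stump(X, y):
    """Depth-1 regression tree: best single-feature threshold split
    minimizing squared error (a minimal stand-in for a full CART tree)."""
    best = (np.inf, 0, 0.0, y.mean(), y.mean())
    for j in range(X.shape[1]):
        for thr in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left, right = X[:, j] <= thr, X[:, j] > thr
            if not left.any() or not right.any():
                continue
            mu_l, mu_r = y[left].mean(), y[right].mean()
            sse = ((y[left] - mu_l) ** 2).sum() + ((y[right] - mu_r) ** 2).sum()
            if sse < best[0]:
                best = (sse, j, thr, mu_l, mu_r)
    return best[1:]

def predict_stump(stump, X):
    j, thr, mu_l, mu_r = stump
    return np.where(X[:, j] <= thr, mu_l, mu_r)

# Synthetic data: target depends on the first feature only
X = rng.uniform(0.0, 1.0, (200, 2))
y = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, 200)

# Bagged ensemble: average M stumps, each fit on a bootstrap resample
M = 25
forest = [fit_stump(X[idx], y[idx])
          for idx in (rng.integers(0, len(X), len(X)) for _ in range(M))]
y_hat = np.mean([predict_stump(s, X) for s in forest], axis=0)
print(np.corrcoef(y, y_hat)[0, 1])
```

Averaging over bootstrap resamples smooths the individual step functions and reduces variance, which is the mechanism that makes Random Forest robust to measurement noise.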
XGBoost is a regularized gradient boosting method that constructs an additive model of $K$ trees, similar in form to the CatBoost ensemble above:
$\hat{y}_i = \sum_{k=1}^{K} f_k(x_i), \quad f_k \in \mathcal{F}$
At boosting iteration $t$, using a second-order Taylor expansion of the loss around $\hat{y}_i^{(t-1)}$, the objective for the next tree $f_t$ is approximated as:
$\mathcal{L}^{(t)} \approx \sum_{i=1}^{n} \left[ g_i\, f_t(x_i) + \frac{1}{2}\, h_i\, f_t(x_i)^2 \right] + \Omega\left(f_t\right)$
where the first- and second-order derivatives are:
$g_i = \left.\frac{\partial\, l\left(y_i, \hat{y}\right)}{\partial \hat{y}}\right|_{\hat{y} = \hat{y}_i^{(t-1)}}, \quad h_i = \left.\frac{\partial^2 l\left(y_i, \hat{y}\right)}{\partial \hat{y}^2}\right|_{\hat{y} = \hat{y}_i^{(t-1)}}$
A commonly used regularizer is:
$\Omega\left(f_t\right) = \gamma\left|L_t\right| + \frac{\lambda}{2} \sum_{j \in L_t} w_j^2$
where $L_t$ is the set of leaves in tree $t$ and $w_j$ are the leaf scores. XGBoost is typically effective for tabular thermophysical datasets with strong nonlinear interactions and provides high predictive accuracy with explicit regularization.
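For the squared loss $l(y, \hat{y}) = \frac{1}{2}(y - \hat{y})^2$, the derivatives reduce to $g_i = \hat{y}_i - y_i$ and $h_i = 1$, and minimizing the second-order objective gives each leaf the closed-form score $w_j^* = -\sum_{i \in I_j} g_i \,/\, (\sum_{i \in I_j} h_i + \lambda)$. A minimal sketch of that computation:

```python
import numpy as np

def leaf_weight(y, y_hat, idx, lam=1.0):
    """Optimal leaf score w* = -sum(g) / (sum(h) + lambda) for squared loss,
    where g_i = y_hat_i - y_i and h_i = 1."""
    g = y_hat[idx] - y[idx]
    h = np.ones(len(idx))
    return -g.sum() / (h.sum() + lam)

y = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = np.zeros(4)               # predictions before the next tree

# A single leaf holding all samples: the new tree nudges the predictions
# toward the residual mean, shrunk by the regularization term lambda.
w = leaf_weight(y, y_hat, np.arange(4), lam=1.0)
print(w)
```

With $\lambda = 0$ the leaf score would equal the mean residual (2.5 here); the regularization term shrinks it, which is how $\lambda$ trades fit against overfitting.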
Table 1 presents a structured qualitative comparison of the supervised learning models used in this study according to four evaluation criteria: paradigm, nonlinearity, data efficiency, and interpretability. Paradigm refers to the underlying learning principle or algorithmic family to which each model belongs (e.g., neural networks, kernel-based methods, bagging, or boosting tree ensembles). Nonlinearity indicates the model’s capacity to represent complex nonlinear relationships between input features and water uptake response. Data efficiency describes the ability of a model to achieve strong generalization performance with limited training data, reflecting its inductive bias and built-in regularization mechanisms. Models with strong inductive biases (e.g., tree ensembles or margin-based methods) are typically more data-efficient than highly parameterized neural networks, which often require larger datasets to reach optimal performance. Interpretability refers to the extent to which the model’s predictions and internal decision structure can be understood and explained in physically meaningful terms.
A consistent data preprocessing and validation protocol was applied across all supervised learning models to ensure fair comparison and reproducibility.
Data Cleaning and Preprocessing: Missing values in numerical features were handled using either mean or median imputation, depending on feature skewness. Outliers were identified through interquartile range (IQR) analysis and retained unless clear measurement errors were detected, as extreme values may carry physical significance in sorption processes.
Feature Scaling: For models sensitive to feature scale (SVR and MLP), input features were standardized using:
$x_{ij}^{scaled} = \frac{x_{ij} - \mu_j}{\sigma_j}$
where $\mu_j$ and $\sigma_j$ denote the mean and standard deviation of feature $j$, computed from the training set only. Tree-based models (Random Forest, CatBoost, XGBoost) were trained on unscaled features.
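The leakage-free standardization described above can be sketched in a few lines of numpy; the random data is a placeholder for the actual feature table.

```python
import numpy as np

rng = np.random.default_rng(3)
X_train = rng.normal(5.0, 2.0, (100, 3))   # placeholder training features
X_test = rng.normal(5.0, 2.0, (20, 3))     # placeholder test features

# Statistics are computed on the training set only and then reused on the
# test set, so no information from the test data leaks into the transform.
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)

X_train_s = (X_train - mu) / sigma
X_test_s = (X_test - mu) / sigma

print(X_train_s.mean(axis=0), X_train_s.std(axis=0))
```

After the transform, each training feature has zero mean and unit variance, while the test features are merely close to that, as expected for a held-out sample.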
Train–Test Split and Cross-Validation: The dataset was partitioned into training and test sets using an 80/20 split:
$$\mathcal{D}_{\mathrm{train}} \cup \mathcal{D}_{\mathrm{test}} = \mathcal{D}, \qquad \mathcal{D}_{\mathrm{train}} \cap \mathcal{D}_{\mathrm{test}} = \varnothing.$$
Model selection and hyperparameter tuning were performed exclusively on $\mathcal{D}_{\mathrm{train}}$ using $k$-fold cross-validation (typically $k = 5$). Final performance was reported on the held-out test set to estimate generalization performance.
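The partitioning protocol can be sketched as follows (illustrative sample count and a fixed seed for reproducibility; the folds are built from the training indices only, so the held-out test set never influences model selection):

```python
import numpy as np

rng = np.random.default_rng(42)  # fixed seed for reproducibility
n_samples = 100
indices = rng.permutation(n_samples)

# 80/20 disjoint train-test partition
n_train = int(0.8 * n_samples)
train_idx, test_idx = indices[:n_train], indices[n_train:]

# k-fold cross-validation folds drawn from the training indices only
k = 5
folds = np.array_split(train_idx, k)
```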
Evaluation Metrics
Model performance was evaluated using standard metrics appropriate to the prediction task, ensuring comparability across algorithms. For continuous water uptake prediction, the following metrics were employed:
Mean Absolute Error (MAE):
$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$
Root Mean Squared Error (RMSE):
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$$
Coefficient of Determination ($R^2$):
$$R^2 = 1 - \frac{\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}{\sum_{i=1}^{n} \left( y_i - \bar{y} \right)^2}$$
where $\bar{y}$ denotes the mean of the observed targets.
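These three metrics follow directly from their definitions; a minimal NumPy sketch with illustrative target and prediction arrays:

```python
import numpy as np

def mae(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Mean absolute error."""
    return float(np.mean(np.abs(y - y_hat)))

def rmse(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

def r2(y: np.ndarray, y_hat: np.ndarray) -> float:
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

y = np.array([1.0, 2.0, 3.0])      # illustrative observed targets
y_hat = np.array([1.1, 1.9, 3.2])  # illustrative predictions
```

Note that $R^2$ can be negative when the model performs worse than predicting the mean, as seen for SVR in Table 2.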

3. Results

3.1. Performance Metrics of the ML Models

Table 2 presents the performance of the five algorithms and shows that the MLP delivers the highest predictive accuracy, with an R2 of 0.9148 and the lowest RMSE and MAE. The neural network also demonstrates robust learning behavior: training and validation losses both reach low values and converge, signifying effective learning. The consistently low MAE and RMSE values and the high R2 across the training, validation, and test sets indicate that the model generalizes effectively to unseen data and shows no signs of overfitting. The MLP achieved the highest accuracy because it can capture the complex nonlinear relationships in high-dimensional spatial-temporal datasets, whereas tree-based ensembles such as Random Forest, CatBoost, and XGBoost typically perform well on tabular data but may struggle with the highly nonlinear spatial patterns present in adsorption processes [37].
Hyperparameter tuning for the MLP model was conducted using randomized search in combination with 3-fold cross-validation, ensuring a reliable and computationally efficient optimization process. The model employs a three-layer architecture with decreasing layer widths, which was found to capture the nonlinear relationships in the data while maintaining generalization performance. To mitigate overfitting, early stopping was employed during training using an internal validation split (15% of the training data), and L2 regularization ($\alpha = 1 \times 10^{-4}$) was applied to further improve generalization. The combination of early stopping and L2 regularization proved sufficient to control overfitting in our experiments. This configuration achieved the best cross-validated performance and demonstrated strong generalization on unseen data (Table 3).
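A sketch of the final configuration, assuming the model corresponds to scikit-learn's MLPRegressor (the parameter names below follow that API; the values are taken from Table 3):

```python
from sklearn.neural_network import MLPRegressor

# Reconstruction of the tuned configuration reported in Table 3.
# Assumes scikit-learn's MLPRegressor; parameter names follow its API.
mlp = MLPRegressor(
    hidden_layer_sizes=(512, 256, 128),  # decreasing layer widths
    activation="relu",
    solver="adam",
    learning_rate_init=0.0005,
    batch_size=32,
    alpha=1e-4,                 # L2 regularization strength
    max_iter=5000,
    early_stopping=True,        # stop when the internal validation score stalls
    validation_fraction=0.15,   # 15% of the training data held out internally
    random_state=42,
)
```

With `early_stopping=True`, the estimator carves the validation split out of whatever training data is passed to `fit`, so the held-out test set remains untouched.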

3.2. Training Results for Water Uptake

Figure 6, Figure 7 and Figure 8 compare the water uptake distributions obtained from numerical simulations with those predicted by the MLP model (green line) for fin lengths of 10 mm, 25 mm, and 35 mm. The results are presented in a grid layout, with each panel corresponding to a specific time step. These images illustrate the evolution of the sorption process; three representative water uptake values, X = 0.17, 0.19, and 0.22 kg_w/kg_zeolite, are selected, corresponding to approximately 65%, 73%, and 85% of the zeolite's total adsorption capacity.
The contours show strong water uptake gradients at initial stages, which gradually diminish as the system approaches thermal equilibrium. As expected, the sorption process proceeds more rapidly with increasing fin length, as longer fins enhance heat transfer by extracting more heat from the zeolite. Consequently, the complete adsorption time (CAT) is shortest (approximately 130 min) for a fin length of 35 mm. Overall, the numerical simulation results strongly agree with the ML predictions, indicating that the model effectively captures the process evolution and generalizes well across time. Minor discrepancies are observed at initial and final stages, where ML predictions slightly deviate from the simulations, likely due to training limitations or complex transient behavior. Despite these localized differences, the ML model demonstrates strong predictive performance and robustness in handling complex thermal dynamics. Furthermore, the water uptake front exhibits a more complex profile due to adsorption occurring near the fin walls compared with regions farther from the fins.
The fin geometry and its impact on local heat transfer generate intricate water uptake patterns that are challenging for the MLP model to predict with high precision. As a result, the training error increases slightly, as evidenced by minor deviations between the predicted curves and the simulated (black) lines. Nevertheless, the model successfully captures the overall trend and general shape of the water uptake front.

3.3. ML Prediction Results

The trained ML model is then applied to predict the evolution of the water uptake for fin lengths of 20 mm and 30 mm (Figure 9 and Figure 10). Although full numerical simulations for these cases take approximately 30 min of computational time (excluding geometry creation, meshing, and related steps), the ML model produces predictions in just a few seconds, a substantial reduction in computational cost. Shen et al. [38] reported similar findings, demonstrating the efficiency and accuracy of ANN models in reproducing numerical simulation results while significantly reducing simulation time. In addition to this efficiency gain, the model demonstrates good predictive accuracy: Figure 9 and Figure 10 show close agreement between the ML predictions and the numerical simulation results across all time intervals, highlighting the robustness of the model and its ability to predict system behavior rapidly and accurately under new conditions. As in the training cases, slightly higher prediction errors are observed during the early stages due to the complex evolution of the loading near the fin walls; for instance, a small portion of the contour line associated with X = 0.22 appears inside the fin region, which is attributable to prediction error. Despite this initial discrepancy, the overall predictive performance remains strong. In the initial adsorption stage, the loading curve has a two-wave shape due to its proximity to the fins; over time, it evolves toward a more uniform spatial distribution and thus a smoother trend (X = 0.19, for example). It is evident that increasing the fin length has a significant impact on the adsorption rate.
A comparison of the two figures shows that the adsorption process is faster for a fin length of 30 mm. Overall, the prediction results demonstrate higher accuracy for slower adsorption processes; consequently, lower prediction errors are observed for cases with shorter fin lengths.
Another notable observation is that the adsorption rate on the left side of the domain is higher than that on the right side. This behavior is clearly illustrated in the figures and becomes more pronounced as time progresses. The primary reason for this phenomenon is the direction of the HTF flow within the pipe, which enters from left to right. As a result, a temperature gradient develops along the pipe, with lower temperatures at the inlet and higher temperatures at the outlet. The cooler inlet region enhances the adsorption process, leading to a higher adsorption rate on the left side of the domain. This spatial behavior is successfully captured by machine learning models, as evidenced by the predicted contours, which accurately reproduce the observed adsorption patterns.

4. Conclusions

In this study, numerical simulations were conducted to analyze water uptake within an enclosure containing two fins. The simulation results describe the temporal evolution of water uptake for three different fin lengths (10 mm, 25 mm, and 35 mm), generating a comprehensive dataset for training ML models and capturing the complex dynamics of the sorption process. Several ML models, including CatBoost, MLP, SVR, Random Forest, and XGBoost, were selected, and a comparative analysis was conducted among them; the best-performing model was chosen on the basis of performance metrics. The main findings of the research are summarized as follows:
  • Among the evaluated machine learning models, the MLP neural network achieved the highest predictive accuracy, with an R2 value of 0.9148 and the lowest MAE and RMSE values.
  • The trained MLP model successfully predicted water uptake for unseen fin geometries (20 mm and 30 mm) with good agreement compared to numerical simulations.
  • The machine learning approach significantly reduces computational time, producing predictions in seconds compared with approximately 30 min required for CFD simulations.
  • The results demonstrate that ML models provide an efficient alternative for rapid prediction and optimization of sorption-based thermal energy storage systems.
In summary, the MLP model represents a powerful complementary approach to conventional numerical simulations, reducing the computational effort required to predict the dynamic loading behavior of zeolite. Its high accuracy and strong generalization capability make it a valuable tool in thermal engineering, providing significant advantages in terms of cost efficiency, time savings, and design flexibility. Therefore, it constitutes a promising avenue for future research and practical applications in thermal management and related fields. Future work will focus on extending the ML framework to different adsorbent materials, applying advanced ML models such as Gaussian Process Regression, deep neural networks, or recurrent neural networks (RNN/LSTM) to improve prediction performance, and expanding the dataset to include wider geometric variations.

Author Contributions

Conceptualization, formal analysis, software, investigation, methodology, visualization, and writing—original draft preparation, M.T.J.; data curation, resources, and writing—review and editing, E.A.; formal analysis, methodology, software, and writing—review and editing, D.M.M.; validation and writing—review and editing, S.K.; validation and writing—review and editing, D.K.; validation and writing—review and editing, A.Y.; funding acquisition, validation, project administration, supervision, and writing—review and editing, G.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financially supported by the Saarland Ministry of Finance and for Science in the project EnFoSaar.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANNs: Artificial neural networks
CAT: Complete adsorption time
CFD: Computational fluid dynamics
COP: Coefficient of performance
DNNs: Deep neural networks
DOE: Design of experiments
FEM: Finite element method
GBDT: Gradient boosting decision tree
GPR: Gaussian process regression
HTF: Heat transfer fluid
IQR: Interquartile range
LDF: Linear driving force
LHS: Latent heat storage
MAE: Mean absolute error
ML: Machine learning
MLPs: Multilayer perceptron neural networks
RMSE: Root mean squared error
RNN: Recurrent neural network
RSM: Response surface methodology
SHS: Sensible heat storage
SVR: Support vector regression
TCES: Thermochemical energy storage
TES: Thermal energy storage

References

  1. Huo, T.; Ma, Y.; Cai, W.; Liu, B.; Mu, L. Will the urbanization process influence the peak of carbon emissions in the building sector? A dynamic scenario simulation. Energy Build. 2021, 232, 110590. [Google Scholar] [CrossRef]
  2. Cabeza, L.F.; Chafer, M. Technological options and strategies towards zero energy buildings contributing to climate change mitigation: A systematic review. Energy Build. 2020, 219, 110009. [Google Scholar] [CrossRef]
  3. International Energy Agency. The Future of Cooling. 2022. Available online: https://www.iea.org/reports/the-future-of-cooling (accessed on 15 May 2022).
  4. Solé, A.; Martorell, I.; Cabeza, L.F. State of the art on gas–solid thermochemical energy storage systems and reactors for building applications. Renew. Sustain. Energy Rev. 2015, 47, 386–398. [Google Scholar] [CrossRef]
  5. Gbenou, T.R.S.; Fopah-Lele, A.; Wang, K. Recent status and prospects on thermochemical heat storage processes and applications. Entropy 2021, 23, 953. [Google Scholar] [CrossRef] [PubMed]
  6. Zhang, Y.; Wang, R. Sorption thermal energy storage: Concept, process, applications and perspectives. Energy Storage Mater. 2020, 27, 352–369. [Google Scholar] [CrossRef]
  7. N’Tsoukpoe, K.E.; Kuznik, F. A reality check on long-term thermochemical heat storage for household applications. Renew. Sustain. Energy Rev. 2021, 139, 110683. [Google Scholar] [CrossRef]
  8. Zbair, M.; Bennici, S. Survey summary on salts hydrates and composites used in thermochemical sorption heat storage: A review. Energies 2021, 14, 3105. [Google Scholar] [CrossRef]
  9. Jarimi, H.; Aydin, D.; Zhang, Y.; Ozankaya, G.; Chen, X.; Riffat, S. Review on the recent progress of thermochemical materials and processes for solar thermal energy storage and industrial waste heat recovery. Int. J. Low-Carbon Technol. 2019, 14, 44–69. [Google Scholar] [CrossRef]
  10. Zhang, Y.N.; Wang, R.Z.; Li, T.X. Experimental investigation on an open sorption thermal storage system for space heating. Energy 2017, 141, 2421–2433. [Google Scholar] [CrossRef]
  11. Köll, R.; Van Helden, W.; Engel, G.; Wagner, W.; Dang, B.; Jänchen, J.; Kerskes, H.; Badenhop, T.; Herzog, T. An experimental investigation of a realistic-scale seasonal solar adsorption storage system for buildings. Sol. Energy 2017, 155, 388–397. [Google Scholar] [CrossRef]
  12. Padamurthy, A.; Nandanavanam, J.; Rajagopalan, P. Sustainable and open sorption system for low-temperature heat storage applications. Int. J. Energy Res. 2022, 1, 17. [Google Scholar] [CrossRef]
  13. Ji, W.; Zhang, H.; Liu, S.; Li, Y.; Wang, Z.; Deng, S. A metal mesh net-packed method for improving thermochemical energy storage reactor performance by increasing the void fraction. Appl. Therm. Eng. 2023, 225, 120248. [Google Scholar] [CrossRef]
  14. Gaeini, M.; Van Alebeek, R.; Scapino, L.; Zondag, H.A.; Rindt, C.C.M. Hot tap water production by a 4 kW sorption segmented reactor in household scale for seasonal heat storage. J. Energy Storage 2018, 17, 118–128. [Google Scholar] [CrossRef]
  15. Palo, M.D.; Sabatelli, V.; Buzzi, F.; Gabbrielli, R. Experimental and numerical assessment of a novel all-in-one adsorption thermal storage with zeolite for thermal solar applications. Appl. Sci. 2020, 10, 8517. [Google Scholar] [CrossRef]
  16. Gao, S.; Wang, S.; Sun, Y.; Wang, J.; Hu, P.; Shang, J.; Ma, Z.; Liang, Y. Effect of charging operating conditions on open zeolite/water vapor sorption thermal energy storage system. Renew. Energy 2023, 215, 119033. [Google Scholar] [CrossRef]
  17. Bayrak, F.; Abu-Hamdeh, N.; Alnefaie, K.A.; Öztop, H.F. A review on exergy analysis of solar electricity production. Renew. Sustain. Energy Rev. 2017, 74, 755–770. [Google Scholar] [CrossRef]
  18. Hassani, F.; Kouhkord, A.; Golshani, A.; Amirmahani, M.; Moghanlou, F.S.; Naserifar, N.; Beris, A.T. Micro-electro-mechanical acousto fluidic mixing system: A response surface-metaheuristic machine learning fusion framework. Expert Syst. Appl. 2024, 249, 123638. [Google Scholar] [CrossRef]
  19. Rashidi, S. Applications of machine learning techniques in energy systems integrated with phase change materials—A concise review. Eng. Anal. Bound. Elem. 2023, 150, 237–245. [Google Scholar] [CrossRef]
  20. Zhou, Y.; Zheng, S.; Liu, Z.; Wen, T.; Ding, Z.; Yan, J.; Zhang, G. Passive and active phase change materials integrated building energy systems with advanced machine-learning based climate-adaptive designs, intelligent operations, uncertainty-based analysis and optimisations: A state-of-the-art review. Renew. Sustain. Energy Rev. 2020, 130, 109889. [Google Scholar] [CrossRef]
  21. Sammil, S.; Sridharan, M. Employing ensemble machine learning techniques for predicting the thermohydraulic performance of double pipe heat exchanger with and without turbulators. Therm. Sci. Eng. Prog. 2024, 47, 102337. [Google Scholar] [CrossRef]
  22. Kouhkord, A.; Hassani, F.; Amirmahani, M.; Golshani, A.; Naserifar, N.; Moghanlou, F.S.; Beris, A.T. Controllable microfluidic system through intelligent framework: Data-driven modeling, machine learning energy analysis, comparative multi-objective optimization, and experimental study. Ind. Eng. Chem. Res. 2024, 63, 13326–13344. [Google Scholar] [CrossRef]
  23. Balakrishnan, V.K.V.; Kumaresan, K. Thermal analysis of PCM magnesium chloride hexahydrate using various machine learning and deep learning models. Eng. Appl. Artif. Intell. 2023, 126, 107159. [Google Scholar] [CrossRef]
  24. Amudhalapalli, G.K.; Devanuri, J.K. Prediction of transient melt fraction in metal foam-nanoparticle enhanced PCM hybrid shell and tube heat exchanger: A machine learning approach. Therm. Sci. Eng. Prog. 2023, 46, 102241. [Google Scholar] [CrossRef]
  25. Zhao, H.; Alshehri, S. Development of advanced hybrid mechanistic-artificial intelligence computational model for learning of numerical data of flow in porous membranes. Eng. Appl. Artif. Intell. 2023, 126, 106910. [Google Scholar] [CrossRef]
  26. Delmarre, C.; Resmond, M.-A.; Kuznik, F.; Obrecht, C.; Chen, B.; Johannes, K. Artificial neural network simulation of energetic performance for sorption thermal energy storage reactors. Energies 2021, 14, 3294. [Google Scholar] [CrossRef]
  27. Scapino, L.; Zondag, H.A.; Diriken, J.; Rindt, C.C.; Van Bael, J.; Sciacovelli, A. Modeling the performance of a sorption thermal energy storage reactor using artificial neural networks. Appl. Energy 2019, 253, 113525. [Google Scholar] [CrossRef]
  28. Skrobek, D.; Krzywanski, J.; Sosnowski, M.; Kulakowska, A.; Zylka, A.; Grabowska, K.; Ciesielska, K.; Nowak, W. Implementation of deep learning methods in prediction of adsorption processes. Adv. Eng. Softw. 2022, 173, 103190. [Google Scholar] [CrossRef]
  29. Abohamzeh, E.; Hosseinizadeh, S.E.; Frey, G. Numerical investigation and response surface optimization of a sorption heat storage systems performance using Y-shaped fins. J. Energy Storage 2024, 84, 110803. [Google Scholar] [CrossRef]
  30. Glueckauf, E. Theory of chromatography. Part 10.—Formulæ for diffusion into spheres and their application to chromatography. Trans. Faraday Soc. 1955, 51, 1540–1551. [Google Scholar] [CrossRef]
  31. Bering, B.P.; Dubinin, M.M.; Serpinsky, V.V. Theory of volume filling for vapor adsorption. J. Colloid Interface Sci. 1966, 21, 378–393. [Google Scholar] [CrossRef]
  32. Critoph, R.E.; Turner, H.L. Performance of ammonia-activated carbon and ammonia zeolite heat pump adsorption cycles. Appl. Therm. Eng. 1996, 16, 419–427. [Google Scholar] [CrossRef]
  33. Solmuş, I.; Yamalı, C.; Kaftanoğlu, B.; Baker, D.; Çağlar, A. Adsorption properties of a natural zeolite–water pair for use in adsorption cooling cycles. Appl. Energy 2010, 87, 2062–2067. [Google Scholar] [CrossRef]
  34. Wang, L.W.; Wang, R.Z.; Oliveira, R.G. A review on adsorption working pairs for refrigeration. Renew. Sustain. Energy Rev. 2009, 13, 518–534. [Google Scholar] [CrossRef]
  35. Latrille, C.; Zoia, A. Estimating apparent diffusion coefficient and tortuosity in packed sand columns by tracers experiments. J. Porous Media 2011, 14, 507–520. [Google Scholar] [CrossRef]
  36. Wu, J.W.; Biggs, M.J.; Hu, E.J. Dynamic model for the optimisation of adsorption-based desalination processes. Appl. Therm. Eng. 2014, 66, 464–473. [Google Scholar] [CrossRef]
  37. Nwaila, G.T.; Zhang, S.E.; Bourdeau, J.E.; Frimmel, H.E.; Ghorbani, Y. Spatial Interpolation Using Machine Learning: From Patterns and Regularities to Block Models. Nat. Resour. Res. 2024, 33, 129–161. [Google Scholar] [CrossRef]
  38. Shen, S.; Wu, C.; Duan, F. Machine learning for predicting the PCM melting process in a rectangular enclosure energy storage. AI Therm. Fluids 2025, 1, 100001. [Google Scholar] [CrossRef]
Figure 1. (a) The adsorbent bed; (b) the geometric model of the problem and the geometry of the fins.
Figure 2. The workflow diagram of simulation and machine learning.
Figure 3. Grid independence test.
Figure 4. The spatial data used in the ML models.
Figure 5. Algorithmic components and tools employed in the proposed framework.
Figure 6. Comparison between simulation (black lines) and MLP training (colored lines) when the fin length is 10 mm.
Figure 7. Comparison between simulation (black lines) and MLP training (colored lines) when the fin length is 25 mm.
Figure 8. Comparison between simulation (black lines) and MLP training (colored lines) when the fin length is 35 mm.
Figure 9. Comparison between simulation (black lines) and MLP prediction (colored lines) when the fin length is 20 mm.
Figure 10. Comparison between simulation (black lines) and MLP prediction (colored lines) when the fin length is 30 mm.
Table 1. Comparative summary of supervised ML models for water uptake prediction.

Model           Paradigm                     Nonlinearity              Data Efficiency   Interpretability
CatBoost        Gradient boosting trees      Very high                 High              Medium
MLP             Neural networks              Very high                 Medium            Low
SVR             Kernel regression            High (kernel-dependent)   High              Medium
Random Forest   Bagging trees                High                      High              High
XGBoost         Regularized boosting trees   Very high                 Medium-High       Medium
Table 2. Performance comparison of five algorithms.

Model           MAE (mm)   RMSE (mm)   R2
CatBoost        6.7592     9.0051      0.6397
MLP             2.8311     4.3785      0.9148
SVR             13.5053    15.5059     −0.0680
Random Forest   6.9690     9.0638      0.6350
XGBoost         7.1103     9.1468      0.6283
Table 3. Hyperparameters for the best-performing MLP.

Hyperparameter          Optimized Value
Hidden layer sizes      (512, 256, 128)
Activation function     ReLU
Solver                  Adam
Learning rate           0.0005
Batch size              32
L2 regularization (α)   0.0001
Maximum iterations      5000
Early stopping          Yes
Validation fraction     0.15
Random seed             42

Share and Cite

MDPI and ACS Style

Tajik Jamalabad, M.; Abohamzeh, E.; Minhas, D.M.; Kim, S.; Kim, D.; Yoon, A.; Frey, G. Comparative Assessment of Supervised Machine Learning Models for Predicting Water Uptake in Sorption-Based Thermal Energy Storage. Energies 2026, 19, 1619. https://doi.org/10.3390/en19071619

