Review

Data-Driven Prediction of Li-Ion Battery Thermal Behavior: Advances and Applications in Thermal Management

by Weijia Qian 1, Wenda Fang 2, Yongjun Tian 3,4,*, Guangwu Dai 3,4, Tao Yan 3,4, Siheng Yang 5 and Ping Wang 1

1 Institute for Energy Research, Jiangsu University, Zhenjiang 212013, China
2 School of Energy and Power Engineering, Jiangsu University, Zhenjiang 212013, China
3 TCATARC Automotive Test Center (Changzhou) Company Limited, Changzhou 213000, China
4 Jiangsu Provincial Engineering Technology Research Center for Integrated Photovoltaic, Energy Storage, Charging, and Testing, Changzhou 213000, China
5 School of Energy Engineering, Zhejiang University, Hangzhou 310013, China
* Author to whom correspondence should be addressed.
Processes 2025, 13(9), 2769; https://doi.org/10.3390/pr13092769
Submission received: 12 August 2025 / Revised: 25 August 2025 / Accepted: 28 August 2025 / Published: 29 August 2025
(This article belongs to the Section Energy Systems)

Abstract

Lithium-ion batteries (LIBs) are critical for various applications, and effective thermal management is important for their safety, performance, and lifespan. Traditional physics-based modeling of battery thermal behavior is computationally complex and requires detailed parameters. Using data-driven modeling to predict thermal characteristics of batteries offers a promising alternative. This review comprehensively examines the utilization of data-driven methods in predicting LIB thermal behavior and designing battery thermal management systems. It explores commonly used data-driven techniques and focuses on their applications in predicting heat generation, temperature distribution, and cooling performance. Specific data-driven models for battery thermal prediction are presented, with a comparative analysis of their strengths and weaknesses. The review concludes that data-driven models can effectively predict battery thermal behavior, offering computational efficiency compared to physics-based simulations. Future research directions include hybrid data-driven/physical modeling, ensemble modeling, and incorporating explainable artificial intelligence techniques to enhance model interpretability. These advancements will lead to more accurate and interpretable models, contributing to the safe and efficient applications of LIB systems.

1. Introduction

LIBs offer high energy density, low self-discharge, long cycle life, and no memory effect, making them widely used in consumer electronics and electric vehicles [1]. LIB-based energy-storage systems are also key facilities for the large-scale utilization of renewable energy sources [2]. Because the LIB is an electrochemical system, its operating temperature has a direct influence on performance and safety. Beyond its effect during charging and discharging, prolonged exposure to excessively high or low temperatures can degrade the battery and shorten its lifespan [3]. Exposure to excessively high temperatures can also lead to thermal runaway, resulting in fires or even explosions [4]. Therefore, understanding, predicting, and optimizing the thermal behavior, heat-transfer processes, and temperature of LIBs are crucial for ensuring their safe and efficient operation [5].
Understanding the fundamental principles of heat generation in LIBs is a prerequisite for predicting and controlling battery temperature. During operation, internal heat is generated in LIBs, and this heat can be classified into four main sources according to its origin: reversible heat generation, irreversible heat generation, heat due to side reactions, and heat due to mixing [6]. Reversible heat generation originates from the entropy changes of the electrochemical reactions that occur during battery charging and discharging, while irreversible heat generation comes from ohmic and kinetic losses [7]. While side reactions in LIBs, such as the chemical reactions between the electrolyte and the active materials of the cathode or anode, are generally insignificant at normal operating temperatures, they become increasingly pronounced at elevated temperatures. The heat generated by these exothermic side reactions can contribute to a further rise in battery temperature. This self-accelerating thermal feedback loop, in which increased temperature promotes more side reactions, which in turn generate more heat, is referred to as thermal runaway (TR) [8]. This is a critical safety concern for LIB systems. Detailed triggers and underlying mechanisms of TR are well documented in the extant literature [9,10,11]. Lastly, the heat of mixing is the thermal energy released by changes in the local composition and energy of electrode particles as lithium ions diffuse under concentration gradients; it becomes significant at high charge/discharge rates but is often negligible at low rates [12]. Fully understanding the heat-generation mechanisms of LIBs and establishing predictive models of heat-generation rates under different operating conditions for various types of batteries is a critical issue in designing battery thermal management systems.
The existing heat-generation rate (HGR) models for LIBs are based on detailed heat-generation mechanisms. Based on their assumptions and level of simplification, HGR models can be categorized into isothermal models and coupled electrochemical–thermal models [6]. Isothermal models do not consider the effect of temperature changes on the heat-generation rate [13,14]. The coupled electrochemical–thermal model integrates both electrochemical reactions and heat balance equations to simultaneously predict the battery’s electrochemical performance and temperature behavior [15,16]; it is therefore more accurate but requires much higher computational power.
Because heat is generated during operation, the temperature of a lithium-ion battery rises. LIB systems and devices therefore rely on battery thermal management systems (BTMSs) to predict temperature and control cooling, ensuring that batteries operate at optimal temperatures with minimal temperature differences [17]. Efficient thermal modeling and prediction capabilities are thus needed for both batteries and their associated management systems. Battery system modeling is inherently complex, requiring consideration of electrochemical kinetics, fluid flow dynamics, heat-transfer phenomena governed by partial differential equations, and the effect of varying environmental conditions [18,19]. This inherent complexity creates a significant challenge for the development and use of physics-based models. Data-driven modeling methodologies offer an alternative, made possible by advances in computational power and numerical algorithms, allowing them to exploit the extensive datasets generated from battery testing and operation.
Data-driven approaches in battery modeling are methods that rely on statistical analysis and machine learning techniques to construct models, rather than traditional physical equations or mechanistic models [20]. Data-driven approaches can learn the dependency between input and output parameters from large amounts of historical data [21], thereby enabling predictions, optimization, and decision-making regarding system behavior. Data-driven methods are particularly suitable for modeling complex and nonlinear systems, achieving high-precision results without requiring knowledge of the system’s underlying mechanisms [22]. In engineering fields including battery systems, data-driven approaches have demonstrated significant advantages. A large number of studies have utilized data-driven modeling methods to design, simulate, predict, control, optimize, and diagnose lithium-ion battery systems [23,24,25,26,27,28].
Considering the critical importance of battery temperature for safe and efficient operation, and the rapid advancements in data-driven technologies, this paper reviews recent applications of data-driven modeling methods in the context of LIB thermal phenomena. The scope of this review encompasses heat generation, temperature field characteristics, and the cooling processes employed within LIB systems. The remainder of this review is structured as follows: it begins with an overview of data-driven method categories and commonly used methods; it then reviews the application of these methods in heat-generation rate modeling, battery temperature prediction, and battery thermal management system design; finally, it summarizes the work and provides a brief discussion and outlook.

2. Overview of Data-Driven Modeling Methods

Data-driven modeling offers a powerful means of constructing prediction, classification, and optimization models by leveraging large amounts of historical data, thereby reducing dependence on prior theories or assumptions. A significant advantage of data-driven modeling over traditional theory-based approaches is its ability to “learn” directly from the data, eliminating the need for potentially biased or inaccurate assumptions. Data-driven modeling methods can be classified into three major categories according to their underlying principles and modeling techniques: statistical modeling methods, machine learning methods, and hybrid methods.

2.1. Statistical Modeling Methods

Based on classical statistical theories such as linear regression (LR), principal component analysis (PCA), and grey relational analysis (GRA), these methods analyze the statistical features of data to identify relationships between variables [29]. Statistical modeling methods offer a trade-off: they are computationally efficient and well suited to small datasets, but their reliance on specific assumptions about the data limits their ability to generalize, making them less suitable for scenarios where the underlying relationship is highly nonlinear.

2.1.1. Linear Regression

Linear regression is the simplest and a commonly used statistical analysis technique that models the dependence relationship between a dependent variable and one or more independent variables [30]. The general expression of linear regression is given by:
$\hat{y} = w_0 + w_1 x_1 + w_2 x_2 + \cdots + w_N x_N$
where $\hat{y}$ is the estimated value of the true value $y$, $w_0$ is the intercept, and $w_1, w_2, \ldots, w_N$ are the regression coefficients. The aim of linear regression is to determine the optimal values of $w_0, w_1, \ldots, w_N$ that minimize the error between the true value $y$ and the estimated value $\hat{y}$. It assumes a linear correlation among the variables and is frequently used to predict the value of $y$ from the values of the independent variables [31].
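As a minimal, illustrative sketch of the least-squares fitting described above (the data and feature values below are hypothetical), the intercept and regression coefficients can be estimated in Python as follows:

```python
import numpy as np

# Illustrative data: rows are samples, columns are the independent variables x1..xN
X = np.array([[0.2, 25.0], [0.5, 30.0], [0.8, 35.0], [1.0, 40.0]])
y = np.array([1.1, 1.9, 3.2, 4.0])

# Prepend a column of ones so the intercept w0 is fitted jointly with w1..wN
X_aug = np.hstack([np.ones((X.shape[0], 1)), X])

# Ordinary least squares: minimize ||y - X_aug @ w||^2
w, *_ = np.linalg.lstsq(X_aug, y, rcond=None)

y_hat = X_aug @ w  # estimated values
print("intercept w0:", w[0], "coefficients:", w[1:])
```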

2.1.2. Grey Relational Analysis

This is a statistical method particularly useful when dealing with systems where information is incomplete or uncertain (i.e., “grey” systems) [32]. It quantifies the correlation between a system’s target parameter and multiple influencing features, providing a ranked order of features based on their degree of correlation. This allows for the identification of the features most strongly related to the system’s target parameter. By ranking features based on their grey relational grade (GRG), GRA enables the selection of the most important features for constructing predictive models [33].
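The sketch below illustrates one common formulation of the grey relational grade (Deng's formulation with the conventional distinguishing coefficient of 0.5); the target series and candidate features are synthetic stand-ins:

```python
import numpy as np

def grey_relational_grades(reference, features, rho=0.5):
    """Compute grey relational grades (GRGs) of candidate features against a reference series.
    rho is the distinguishing coefficient, conventionally set to 0.5."""
    # Min-max normalize each series to [0, 1] to remove the effect of differing units
    def norm(a):
        return (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0) + 1e-12)

    x0 = norm(np.asarray(reference, dtype=float))
    xi = norm(np.asarray(features, dtype=float))

    # Absolute deviation sequences between the reference and each feature
    delta = np.abs(xi - x0[:, None])
    dmin, dmax = delta.min(), delta.max()

    # Grey relational coefficients, averaged over samples to give the grade per feature
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff.mean(axis=0)

# Example: correlate a temperature series with three hypothetical features
rng = np.random.default_rng(0)
T = np.linspace(25, 45, 50)
feats = np.column_stack([T + rng.normal(0, 1, 50),        # strongly related feature
                         np.sqrt(T) + rng.normal(0, 1, 50),
                         rng.normal(0, 10, 50)])          # unrelated feature
print(grey_relational_grades(T, feats))                   # higher grade = stronger relation
```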

2.2. Traditional Machine Learning Methods

Machine learning methods include traditional machine learning and deep learning [34]. Commonly used traditional learning methods include support vector machine (SVM), Gaussian process regression (GPR), and random forest (RF) [34]. These methods rely on well-defined mathematical optimization and statistical inference, allowing them to effectively generalize from limited data and handle high-dimensional datasets.

2.2.1. Support Vector Machine

The support vector machine analysis can be utilized for classification and regression, and was first proposed by Vladimir Vapnik [35]. The basic principle of support vector regression (SVR) is to find a function that most fits the given data while maintaining a margin of tolerance for errors [36]. Unlike classification, where the goal is to separate data into categories, SVR aims to predict output values.
In SVR, the model tries to find a function $f(x)$ that deviates from the actual data points by at most a specified amount $\epsilon$, while also being as flat as possible. Here, the function $f(x)$ is defined as:
$f(x) = w \cdot x + b$
where $w$ is the weight vector and $b$ is the bias term. The key idea is to minimize the complexity of the function, represented by $\|w\|$, while allowing a margin of error $\epsilon$ for the data points.
The process involves finding the best balance between a flat function (minimizing the weight vector norm $\|w\|$) and the tolerance for errors. If data points fall outside the tolerance margin, a penalty is applied, controlled by a parameter $C$, the box constraint, which balances the trade-off between function flatness and error minimization [37].
To handle nonlinear relationships, SVR “uses a kernel function to transform the data into a higher-dimensional space” [36], then linear regression can be utilized. The kernel function enables SVR to model complex and nonlinear relationships without explicitly computing the high-dimensional feature space. The value of the box constraint C and the choice of kernel function are crucial for the learning and generalization ability of the SVR [38]. Overall, the SVR seeks to find a function that provides good generalization by minimizing both the error and the complexity of the model, while allowing for some flexibility in fitting the data.
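As a brief illustration, the scikit-learn sketch below fits an SVR with an RBF kernel on synthetic data; the kernel choice and the values of the box constraint C and tolerance epsilon are illustrative assumptions, not recommendations:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical inputs (e.g., two operating variables) and a nonlinear target
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(200)

# The RBF kernel handles the nonlinear mapping; C is the box constraint, epsilon the tolerance
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)
print(model.predict(X[:5]))
```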

2.2.2. Gaussian Process Regression

This is a non-parametric Bayesian method to conduct regression using a Gaussian process prior [39]. It assumes that any finite set of data points follow a joint Gaussian distribution. GPR utilizes a kernel function to define the covariance between data points. The resulting posterior distribution over functions allows for both point predictions and uncertainty estimates, making GPR a valuable tool for tasks requiring robust and reliable predictions.
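A minimal scikit-learn sketch of GPR on synthetic one-dimensional data is shown below; the kernel is an illustrative choice, and the key point is that the prediction returns both a mean and a standard deviation (uncertainty estimate):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Noisy observations of an underlying smooth function (synthetic)
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 30).reshape(-1, 1)
y = np.sin(4 * X).ravel() + 0.05 * rng.standard_normal(30)

# The kernel defines the covariance between points; WhiteKernel models observation noise
kernel = RBF(length_scale=0.2) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

mean, std = gpr.predict(np.array([[0.5]]), return_std=True)
print("prediction:", mean[0], "uncertainty (std):", std[0])
```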

2.2.3. Decision Tree and Random Forest

Decision tree (DT) is a non-parametric, supervised method. It can be used to create a predictive model by learning decision rules [40]. The visualization of a decision tree typically takes the form of a tree diagram, starting from a root node. Each internal node of the DT represents a decision point corresponding to a specific feature, with branches being the results of node decisions. Lastly, leaf nodes are the model’s final predictions. The DT algorithm recursively partitions the data based on the feature that best separates the target variable, using metrics such as Gini impurity or information gain. Splitting continues until a stopping criterion is met, such as a maximum depth or a minimum number of samples in a leaf node.
Random forest is an ensemble learning method that utilizes a collection of DTs [41,42]. Each DT is trained on a random subset of the training data (bagging) and a random subset of the features (random subspace method). The final prediction of the RF is determined from the predictions of all individual DTs. This randomization reduces overfitting and improves generalization performance [43,44].
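The sketch below shows how bagging and random feature subsets are configured in a typical random forest regressor (scikit-learn); the synthetic data and hyperparameter values are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic dataset with four features and a nonlinear target
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = 10 * X[:, 0] + np.sin(5 * X[:, 1]) + 0.1 * rng.standard_normal(500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Each of the 200 trees sees a bootstrap sample (bagging) and a random feature subset per split
rf = RandomForestRegressor(n_estimators=200, max_features="sqrt", random_state=0)
rf.fit(X_tr, y_tr)
print("R^2 on held-out data:", rf.score(X_te, y_te))
```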

2.3. Deep Learning Methods

Traditional machine learning methods demand significant manual effort for feature design and extraction. In contrast, deep learning utilizes multi-layer neural networks to automatically learn effective feature representations directly from the data [45], leading to improved performance [46]. However, this advantage comes with trade-offs. Traditional methods require fewer computational resources and generally offer greater model interpretability, while deep learning demands substantial computational power and often results in ‘black box’ models with limited transparency [47]. Common deep learning methods include the multi-layer perceptron (MLP), convolutional neural network (CNN), and recurrent neural network (RNN). These techniques have found widespread applications in diverse fields, including computer vision, smart agriculture, robotics, speech recognition, and food safety [48,49].

2.3.1. Multi-Layer Perceptron

The multi-layer perceptron is a foundational type of feedforward artificial neural network characterized by multiple layers of interconnected neurons. At a minimum, an “MLP consists of an input layer, one or more hidden layers, and an output layer” [50]. Each neuron is connected to all neurons in the following layer [50], forming a fully connected architecture.
The fundamental principle of an MLP involves propagating input signals through the network, where each neuron calculates a weighted sum of all its inputs, applies a nonlinear activation function (e.g., sigmoid, ReLU, or tanh), and passes the result to the next layer. The hidden layers enable the MLP to learn complex relationships between the input and output. During training, the network’s connection weights are iteratively adjusted using algorithms such as backpropagation to minimize a predefined loss function, thereby tuning the network to accurately map inputs to outputs.
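A compact PyTorch sketch of such a network and its backpropagation training loop is given below; the layer sizes, activation function, learning rate, and synthetic data are illustrative assumptions:

```python
import torch
import torch.nn as nn

# A small MLP: input layer -> two hidden layers with ReLU -> output layer
model = nn.Sequential(
    nn.Linear(4, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

# Synthetic data: 4 input features, 1 regression target
X = torch.rand(256, 4)
y = 2 * X[:, :1] + X[:, 1:2] ** 2

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Training loop: forward pass, compute loss, backpropagate gradients, update weights
for epoch in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print("final training loss:", loss.item())
```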

2.3.2. Convolutional Neural Network

While MLPs laid the foundation, deep neural networks (DNNs) distinguish themselves through their significantly increased depth, typically comprising many more hidden layers, enabling the learning of more intricate and abstract representations [45,51]. CNNs are a specialized type of DNN that is well suited for processing data with a matrix-like structure, such as images and videos. We note that, in the context of LIB thermal processes, CNNs can be utilized by organizing the input variables (e.g., temperature, current, voltage, charging/discharging rate) into a matrix-like structure; this is achieved by treating the temporal sequences of these variables as channels, similar to the color channels (red, green, blue) of an image. Unlike traditional feedforward networks, CNNs use convolutional layers, pooling layers, and fully connected layers to extract spatial hierarchies of features. The core principle of a CNN is the convolution operation, as shown for a typical CNN in Figure 1: learnable filters slide across the input, performing element-wise multiplication and summation to generate feature maps that represent the presence of specific local patterns. Pooling operations, such as max pooling or average pooling, are then used to reduce the spatial dimensions of the feature maps, thereby decreasing computational complexity and increasing robustness to variations in position and orientation. Multiple convolutional and pooling layers are typically stacked to learn increasingly complex features. Finally, fully connected layers are often used to output classification or regression values based on the learned feature representations. The weight sharing inherent in convolutional layers significantly reduces the number of model parameters compared to MLPs, making CNNs more efficient and less prone to overfitting, especially when dealing with high-dimensional data. CNNs have achieved state-of-the-art performance in a wide range of applications, including image recognition, object detection, and image analysis [52,53].
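To make the channel-based arrangement described above concrete, the sketch below defines a small one-dimensional CNN in PyTorch that treats hypothetical current/voltage/temperature histories as input channels; the architecture, window length, and dimensions are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Input shape: (batch, channels, time); channels could be current, voltage, and temperature
class Battery1DCNN(nn.Module):
    def __init__(self, n_channels=3, seq_len=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                       # pooling halves the temporal dimension
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Linear(32 * (seq_len // 4), 1)  # fully connected regression head

    def forward(self, x):
        z = self.features(x)            # convolution + pooling extract local patterns
        return self.head(z.flatten(1))  # flatten feature maps and regress a single value

model = Battery1DCNN()
x = torch.rand(8, 3, 64)    # a batch of 8 hypothetical measurement windows
print(model(x).shape)       # -> torch.Size([8, 1])
```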

2.3.3. Recurrent Neural Network and Long Short-Term Memory Networks

Recurrent neural network is a class of neural network designed to process sequential data by maintaining a hidden state that stores information about past inputs. Unlike feedforward networks, RNNs use recurrent connections to allow information to persist across time steps. As shown in Figure 2, at time $t$, the RNN updates the hidden state $h_t$ based on the current input $x_t$ and the previous hidden state $h_{t-1}$. The updated hidden state is then used as input for the next time step $t+1$. This process enables the network to model temporal dependencies within the sequence. The core principle of an RNN is to apply the same function recursively to each element of the sequence, effectively creating a memory of past inputs. This memory allows the network to learn patterns and relationships that span multiple time steps. While traditional RNNs are theoretically capable of capturing long-range dependencies, they often suffer from the vanishing or exploding gradient problem, which limits their ability to effectively learn from long sequences [55].
The inherent vanishing gradient problem of RNNs is addressed by long short-term memory (LSTM) networks. LSTMs can effectively learn long-range dependencies in sequential data. The architecture of a typical LSTM neural network is shown in Figure 3. LSTMs use memory cells ($C_{t-1}$ and $C_t$) as accumulators of information over time and three gating mechanisms (input $i_t$, output $o_t$, forget $f_t$) to control the flow of information into and out of the cell; $h_t$ is the state of the hidden layer. The forget gate determines which information from the previous cell state is discarded, allowing the network to selectively forget irrelevant or outdated information. The input gate controls the addition of new information from the current input, enabling the network to learn and incorporate relevant information. Finally, the output gate regulates the output of information from the cell, allowing the network to selectively output relevant information at each time step. By carefully controlling the flow of information through these gates, LSTMs can maintain and access information over long horizons, which makes them particularly well suited for time series analysis. The ability to learn and remember long-range dependencies has made LSTMs a basis for many sequence-modeling applications [57].
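A minimal PyTorch sketch of an LSTM that maps a window of past measurements to a next-step prediction is shown below; the feature set, hidden size, and the use of the final hidden state are illustrative choices:

```python
import torch
import torch.nn as nn

# LSTM that maps a window of past measurements to the next value of a target (e.g., temperature)
class SequenceLSTM(nn.Module):
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, 1)

    def forward(self, x):                # x: (batch, time, features)
        out, (h_n, c_n) = self.lstm(x)   # h_n: final hidden state, c_n: final cell state
        return self.fc(out[:, -1, :])    # regress from the last time step's hidden state

model = SequenceLSTM()
x = torch.rand(16, 30, 3)   # 16 sequences, 30 time steps, 3 features
print(model(x).shape)       # -> torch.Size([16, 1])
```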

2.4. Hybrid Methods

Beyond statistical and machine learning methods, hybrid methods have become a hot research topic. They typically combine data-driven methods with traditional physical models to form physics-informed neural networks (PINNs) or hybrid models [59].
PINNs use neural networks to approximate solutions to physical systems governed by partial differential equations (PDEs) [59,60]. The core principle of a PINN is embedding the governing PDEs as a regularization term within the neural network’s loss function. Specifically, automatic differentiation is used to compute derivatives of output parameters with respect to their inputs (e.g., spatial coordinates and time), which are then substituted into the PDE to calculate a residual. This residual, along with a data-driven loss term that quantifies the deviation between the network’s predictions and available data, forms the total loss function. By minimizing this combined loss, PINNs train the neural network to simultaneously satisfy the governing physical laws and fit the observed data, realizing accurate and physically consistent predictions.
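The sketch below illustrates this principle for a simple one-dimensional transient heat equation with constant, illustrative coefficients; the network architecture, collocation points, and training data are assumptions made purely for demonstration:

```python
import torch
import torch.nn as nn

# Network approximating T(x, t); assumed physics: 1-D heat equation T_t = alpha * T_xx + q
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
alpha, q = 0.01, 1.0   # illustrative constant diffusivity and source term

def pde_residual(x, t):
    x.requires_grad_(True); t.requires_grad_(True)
    T = net(torch.cat([x, t], dim=1))
    # Automatic differentiation gives derivatives of the output with respect to the inputs
    T_t = torch.autograd.grad(T, t, torch.ones_like(T), create_graph=True)[0]
    T_x = torch.autograd.grad(T, x, torch.ones_like(T), create_graph=True)[0]
    T_xx = torch.autograd.grad(T_x, x, torch.ones_like(T_x), create_graph=True)[0]
    return T_t - alpha * T_xx - q

# Total loss = data mismatch + PDE residual at collocation points (both sets synthetic here)
x_data, t_data, T_data = torch.rand(50, 1), torch.rand(50, 1), torch.rand(50, 1)
x_col, t_col = torch.rand(200, 1), torch.rand(200, 1)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(1000):
    opt.zero_grad()
    loss_data = ((net(torch.cat([x_data, t_data], dim=1)) - T_data) ** 2).mean()
    loss_pde = (pde_residual(x_col, t_col) ** 2).mean()
    (loss_data + loss_pde).backward()
    opt.step()
```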
PINN methods are applicable to complex engineering systems (e.g., battery systems) for thermal management, state estimation, and optimization design. The main challenge of applying these methods lies in how to effectively integrate physical knowledge and data features, achieving a good balance between model complexity and generalization.
The statistical, machine learning, and hybrid methods summarized above are not typically used in isolation. Practical implementations often combine several methodologies. For instance, traditional machine learning methods are often paired with statistical learning methods for feature selection and preprocessing (e.g., using PCA for dimensionality reduction or GRA for feature screening before applying SVM or GPR for modeling [61]). Additionally, hybrid methods extend the application boundaries of purely data-driven methods, achieving better results than a single approach by merging theoretical models with data-driven models.

2.5. Reinforcement Learning

Reinforcement learning (RL) is a machine learning paradigm distinct from both supervised and unsupervised learning. Unlike supervised learning, which relies on labeled datasets to derive a predictive function (where each training instance includes both input and desired output), RL algorithms learn through interaction with an environment, receiving reward or penalty signals as feedback. In contrast to unsupervised learning, which aims to infer conclusions from unlabeled data (a prime example being clustering analysis used to discover hidden patterns or group data in exploratory data analysis), RL’s objective is to learn a policy that maximizes cumulative rewards. The environment represents the world the agent interacts with, providing states and responding to the agent’s actions. The agent is the learner and decision-maker, aiming to learn the optimal policy [62].
The RL framework is built upon several core components: state, representing the current situation of the environment or system; action, the operation an agent can take in a specific state; reward, a numerical feedback quantifying the environment’s response to an agent’s action; and policy, a mapping function guiding the agent on which action to take in a given state. An RL algorithm essentially defines a policy that depends on the entire history of states, actions, and rewards, and selects the next action to take based on this history. Through trial and error, the agent learns which actions yield the highest rewards in specific states, ultimately developing an optimal policy to guide its action selection.
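The tabular Q-learning sketch below illustrates these components (state, action, reward, policy) on a toy environment; the state discretization, action set, and reward are hypothetical and not drawn from any cited study:

```python
import numpy as np

# States could be discretized temperature bands, actions discrete cooling levels (toy example)
n_states, n_actions = 5, 3
Q = np.zeros((n_states, n_actions))        # action-value table
alpha, gamma, epsilon = 0.1, 0.95, 0.1     # learning rate, discount factor, exploration rate
rng = np.random.default_rng(0)

def step(state, action):
    """Placeholder environment: in practice this would be a simulator or the real system."""
    next_state = min(n_states - 1, max(0, state + int(rng.integers(-1, 2)) - (action - 1)))
    reward = -abs(next_state - 2)          # reward is highest when the state stays near a target band
    return next_state, reward

state = 2
for t in range(5000):
    # Epsilon-greedy policy: mostly exploit the current Q-table, occasionally explore
    action = int(rng.integers(n_actions)) if rng.random() < epsilon else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # Q-learning update toward the bootstrapped target r + gamma * max_a' Q(s', a')
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
print(Q)
```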

3. Prediction and Modeling of Thermal Properties

The temperature field of an LIB is governed by the transient heat conduction equation based on Fourier’s law [6]:
$\rho c_p \frac{\partial T}{\partial t} = \nabla \cdot (k \nabla T) + \dot{q}$
where $T$ is the temperature, $\rho$ is the density, $c_p$ is the specific heat capacity, $k$ is the thermal conductivity, and $\dot{q}$ is the volumetric heat source term, i.e., the HGR of the LIB. Thus, the temperature distribution depends on the HGR, $k$, and the boundary conditions.
The HGR $\dot{q}$ and thermal conductivity $k$ of an LIB are both crucial thermal properties for the design of BTMSs. This section discusses the modeling and prediction methods for these properties.

3.1. Heat-Generation Rate

Accurately predicting the HGR of LIBs with different battery materials or at different charging/discharging C-rates is essential for designing and optimizing the BTMS of LIB systems to ensure battery performance and safety [63,64]. The HGR is influenced by parameters such as the battery type, temperature, state of charge (SOC)/depth of discharge (DOD), and charge/discharge C-rate. Physics-based models can accurately calculate the HGR of batteries [6], but their computational cost is prohibitive for real-time applications, whereas data-driven models can rapidly estimate the HGR once they have captured the relationships between the HGR and the relevant parameters. Numerous data-driven modeling methods have been employed to predict the HGR.
Arora et al. [65] utilized a feedforward neural network (as shown in Figure 4) with a hidden layer to predict the HGR under different nominal capacities, ambient temperatures, discharge rates, and DOD conditions. The neural network has four input variables, namely, DOD, T, nominal capacity and discharge rate. The prediction results were generally consistent with experimental data, although there were some discrepancies at certain conditions.
Regarding different modeling methods, Cao et al. [66] compared three common data-driven modeling methods—MLP, SVM, and GPR—for predicting the HGR of LIBs. The results showed that the relative prediction error of all three models was within 5%, with the MLP exhibiting the best performance. All three models performed better under interpolation conditions than under extrapolation conditions; the authors thus recommended avoiding extrapolation in practical applications. In the work of Yalçın et al. [67], a CNN was combined with the artificial bee colony (ABC) algorithm, which optimizes the weights of the CNN, to predict the HGR. This proposed method was compared with other modeling techniques, including linear regression, multiple linear regression, DT, RF, SVM, MLP, LSTM, and a basic CNN. The comparison clearly demonstrated that the developed CNN-ABC approach outperformed the other methods, increasing the coefficient of determination ($R^2$) from 0.9876 for the basic CNN to 0.9972.
Beyond purely data-driven methods, hybrid approaches can also be used to predict the HGR. Legala and Li [68] utilized a data-driven hybrid approach in which an extended Kalman filter (EKF) predicted the DOD, as shown in Figure 5. The predicted DOD, along with the discharge current $I$, output voltage $V$, ambient temperature $T_a$, and surface temperature, then served as input features for an artificial neural network (ANN), which output the HGR prediction.
Because the heat-generation mechanisms of LIBs and the factors influencing the HGR are relatively well understood, the selection of appropriate input parameters significantly impacts the accuracy of HGR prediction. Cao et al. [66] examined the impact of different model inputs, including DOD, ambient temperature $T_a$, discharge voltage $V$, and discharge current $I$. The results indicate that including the discharge voltage $V$ as an input parameter alongside DOD, $I$, and $T_a$ has a negligible impact on the prediction accuracy of the ANN, because the discharge voltage is directly correlated with DOD. The input parameters and modeling methods used in different HGR prediction models are summarized in Table 1. The inclusion of adequate and suitable input parameters is key to achieving accurate HGR predictions. The most critical parameters are current, voltage (or DOD/SOC), and temperature, which can be acquired either through direct sensor measurements or indirectly through calculations or estimations based on other measured parameters [69,70].
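As an illustration of such an input–output mapping, the sketch below trains a small neural network (scikit-learn) with DOD, ambient temperature, current, and C-rate as inputs and HGR as output; the data-generating function is a synthetic stand-in for calorimetry measurements, and the network size is an arbitrary choice:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic training set: columns are DOD, ambient temperature (°C), current (A), C-rate;
# the target is the heat-generation rate (W). Real data would come from experiments.
rng = np.random.default_rng(1)
X = np.column_stack([rng.uniform(0, 1, 400),      # DOD
                     rng.uniform(0, 45, 400),     # ambient temperature
                     rng.uniform(1, 10, 400),     # current
                     rng.uniform(0.5, 3, 400)])   # C-rate
q_gen = 0.05 * X[:, 2] ** 2 + 0.2 * X[:, 3] + 0.01 * (45 - X[:, 1])   # synthetic stand-in

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=1))
model.fit(X, q_gen)
print(model.predict([[0.5, 25.0, 5.0, 1.0]]))     # HGR estimate for one operating point
```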
Liu et al. [71] estimated the non-uniform detailed HGR within different regions of a battery based on surface temperature measurements, employing a particle swarm optimization (PSO) algorithm that considered the multi-factor influence of SOC, T a , and discharge rate. They further analyzed the battery’s temperature and HGR distribution characteristics under these multi-factor conditions. Finally, they developed a three-dimensional (3D) thermal model in COMSOL, utilizing the estimated regional heat-generation rates as non-uniform heat sources and incorporating the battery’s 3D heat transfer mechanisms, to simulate the detailed and time-varying temperature distribution under variable operating conditions, determining the local temperature at any point within the LIB. The model’s effectiveness and accuracy were experimentally validated. In contrast to earlier research that modeled the HGR as a single lumped value, Liu et al.’s work accounted for the spatially varying, non-uniform distribution of HGR within the battery.

3.2. Heat Conductivity

The internal temperature distribution within a battery is governed by heat conduction processes. However, heat conduction in batteries is significantly more complex than in simple, homogeneous materials. Batteries contain both solid materials and liquid electrolytes, and the electrode materials and separators are typically porous, which influences the heat-transfer process because electrolyte is present within their pores. Furthermore, the jelly roll structure of wound batteries (as shown in Figure 6) results in highly anisotropic thermal conductivity. The in-plane (parallel to the layers) thermal conductivity can be as high as 160 W/mK, while the through-plane (perpendicular to the layers) thermal conductivity is significantly lower, ranging from 0.2 to 8 W/mK [72]. Finally, thermal conductivity is temperature-dependent, and the non-uniform temperature distribution and HGR within a battery during operation lead to spatial variations in thermal conductivity [73]. Thermal conductivity has also been found to depend on SOC [74].
The heat conductivity within a battery is influenced by its materials, construction, and operating states [75]. While bottom-up models, based on the temperature-dependent material properties of the battery components, can be used to predict this heat conductivity [73], they often suffer from high computational complexity. Data-driven models can therefore serve as an alternative to detailed physics-based models for efficiently predicting accurate, anisotropic thermal conductivity values. It is important to acknowledge that, to the best of our knowledge, only a limited number of published studies directly employ data-driven methods to predict anisotropic thermal conductivity in LIBs. However, the inherent complexity and computational cost of physics-based models, coupled with the demonstrated success of data-driven approaches in related areas such as HGR prediction, strongly motivate the exploration of this alternative. This approach has the potential to significantly improve the accuracy and efficiency of battery thermal simulations.
In summary, data-driven approaches have demonstrated promising capabilities in predicting the HGR of LIBs. These models offer a valuable alternative to computationally intensive electrochemical–thermal simulations, particularly for specific operating conditions. However, a significant challenge remains in developing models capable of accurately predicting both HGR and thermal conductivity across a wider range of operating conditions and battery states of health (SOHs). Future research should develop adaptable models that can account for the influence of varying C-rates, ambient temperatures, and battery degradation on both HGR and thermal conductivity. Furthermore, the development of methods for rapidly generating detailed HGR and thermal conductivity distributions within the battery cell is crucial. Such detailed information would significantly enhance the predictive power of detailed numerical simulations of LIB thermal behavior.

4. Prediction of Battery Temperature

The conventional methods for monitoring the temperature of LIBs involve using thermocouples or thermistors to measure the surface temperature of the LIB. In addition to sensor-based temperature-measurement methods, there are also sensorless methods based on techniques such as electrochemical impedance spectroscopy (EIS) [76]. However, temperature-measurement methods based on EIS data rely on corresponding advanced testing instruments, which makes this approach impractical for battery modules containing a large number of cells. Data-driven modeling offers an alternative approach for battery temperature acquisition. Previous research has proposed various state-prediction models for batteries, including SOC [77,78], SOH [79,80], state of power (SOP) [81], remaining useful life (RUL) [79,82], temperature, and combinations of multiple state parameters [83], based on rapidly evolving data-driven models. These models treat the LIB as a “black box”, using data-driven approaches to capture the complex interdependencies between numerous operational parameters and battery state parameters. This section discusses the application of data-driven modeling to LIB temperature prediction.
The multi-layer perceptron is a foundational model in deep learning. In early work, Jaliliantabar et al. [84] used an MLP to predict the battery temperature under different discharge rates, cooling materials, and operating times. The results showed that the ANN is able to predict battery temperature. However, this prediction did not consider the dynamic working process of the battery, which limits the approach. In the research by Hussein et al. [85], a two-layer feedforward neural network (FNN) was used to predict the battery temperature. They compared two approaches: one predicted the voltage based on the battery’s temperature, SOC, and current, and then predicted the temperature; the other additionally fed the battery temperature from the previous time step into the network. The latter approach, which accounts for the impact of the battery temperature on the subsequent temperature, showed significantly better accuracy in testing. Battery temperature depends on its previous temperature and heat generation, so considering the battery’s temperature at previous time steps in the prediction model can improve prediction accuracy. Xiong et al. [86] trained an MLP based on the temperature from a single temperature sensor and the battery’s thermal model to determine the temperature of the other cells within the module. This approach helps reduce the cost associated with using a large number of temperature sensors in a battery module.
The temperature of the battery is a dynamic feature that changes over time. Therefore, incorporating historical information into the model can improve prediction accuracy. As described in Section 2.3.3, RNNs retain historical information within the network, making them well suited for battery temperature prediction. Yravedra and Li [87] trained and validated an RNN model to predict the temperature variation of LIBs, exploiting the RNN’s advantage in handling time series data. However, RNNs suffer from gradient vanishing, which is addressed by LSTMs built on the RNN architecture. LSTMs overcome the long-term dependency and gradient vanishing issues of regular RNNs by introducing forget, input, and output gates. In principle, the memory units inside an LSTM can selectively retain or update historical information through the gating mechanism, making it well suited for capturing dynamic features of time-varying LIB conditions, such as capacity-degradation trends and temperature variation curves. Furthermore, LIB temperature is influenced not only by the current operating conditions but also by the temperature the LIB has already reached, a factor that LSTM’s memory units are well equipped to handle. Naguib et al. [88] tested this by comparing FNN and LSTM models for predicting the surface temperature of cylindrical batteries. Their results revealed that LSTMs achieved higher prediction accuracy than FNNs, with the RMSE (Root Mean Square Error) decreasing from 2.0 to 0.7. Interestingly, they also demonstrated that the performance of FNNs could be substantially improved by incorporating external low-pass filters, reducing the RMSE from 2.0 to 1.3. This indicates that the high-frequency components of the signals are dominated by errors, and removing them can improve the performance of the neural network. Han et al. [89] utilized an LSTM model to predict the temperature variation of LIBs under dynamic current conditions, demonstrating high accuracy in LIB temperature prediction. Zhu et al. [90] also used LSTMs to predict battery temperature as time series data. As shown in Figure 7, their main innovation was distinguishing between different LIB heat sources, including reversible heat, irreversible heat, and environmental heat transfer. Their work can be utilized to analyze the thermal behavior of LIBs. In Yi et al.’s work [91], the grey wolf optimizer algorithm was used to optimize the hyperparameters of the LSTM neural network, aiming to improve its performance in predicting battery temperature.
Due to the inherent advantages of LSTMs, a number of studies have focused on combining LSTMs with other mechanisms. Zhao et al. [92] developed a hybrid neural network combining CNN and LSTM networks to predict LIB temperature, shown in Figure 8. The architecture benefits from the spatial feature-extraction ability of CNNs and the temporal feature-extraction capabilities of LSTMs. They also designed a residual monitor to detect the deviation between measured and predicted values, using a deviation threshold to determine whether thermal runaway has occurred in the battery. Zafar et al. [93] constructed a DeepTimeNet model by combining CNN, ResNet blocks, inception modules, bidirectional LSTM, and gated recurrent unit (GRU) layers to predict the temperature changes of batteries based on the current, voltage, and current temperature of the battery. This work integrates multiple different artificial intelligence algorithms, allowing it to capture the more complex dynamic temperature variations of the battery.
The nonlinear autoregressive neural network with exogenous inputs (NARX) is a type of dynamic neural network that considers both historical input and output values, enabling it to capture dynamic characteristics and nonlinear relationships in time series data [94]. Sharing a similar architecture with conventional ANNs, the NARX network includes input, hidden, and output layers. However, NARX utilizes past time series data by incorporating lagged input and output values, as visualized in Figure 9. A tapped delay line (TDL) unit is employed to store a predefined number of present and historical time-series values. Hasan et al. [95] used a NARX neural network to develop a prediction model for battery temperature in an energy-storage system. The model’s prediction accuracy was season-dependent, reflecting the influence of the environmental temperature on the LIB’s temperature. The ability to predict battery core temperature in real time using NARX is of significant importance for optimizing BTMSs and improving the performance of electric vehicles [96].
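A minimal sketch of the series-parallel (open-loop) NARX idea is shown below: lagged exogenous inputs and lagged measured outputs are assembled into a feature vector and fed to a small regressor. The lag order, synthetic dynamics, and feature choices are illustrative assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_narx_features(u, y, n_lags=3):
    """Series-parallel NARX training set: regressors are lagged inputs u and lagged
    measured outputs y (the tapped delay lines); the target is y at the next step."""
    X, target = [], []
    for k in range(n_lags, len(y) - 1):
        X.append(np.concatenate([u[k - n_lags:k + 1].ravel(), y[k - n_lags:k + 1]]))
        target.append(y[k + 1])
    return np.array(X), np.array(target)

# Synthetic series: u could be (current, ambient temperature), y the battery temperature
rng = np.random.default_rng(0)
u = rng.uniform(0, 1, (500, 2))
y = np.zeros(500)
for k in range(1, 500):                       # simple first-order dynamics as a stand-in
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k, 0] + 0.1 * u[k, 1]

X, target = make_narx_features(u, y)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, target)
print(model.predict(X[:3]), target[:3])
```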
There are different modeling methods that can establish the dependency between battery-operating parameters and temperature, making it necessary to compare the effectiveness of these methods. Tran et al. [97] compared the performance of four models—linear regression, k-nearest neighbors, RF, and DT—in predicting the surface temperature of batteries. The predictions used environmental temperature, battery capacity, current, voltage, and temperature as inputs to forecast the subsequent temperature and voltage of the battery. The performance comparison showed that the decision tree model had the best predictive performance. Pathmanaban et al. [98] compared four machine learning models (RF, LSTM, MLP, and SVM) in predicting the battery temperature of electric bicycles under different speeds, with or without passengers, and day/night charging conditions. Their results indicated that the RF model exhibited superior performance compared to the other models, achieving the highest R-squared value of 0.9228. Zhang et al. [56] conducted a comparison of various RNN models, and detailed the performance and complexity of RNN, LSTM, and GRU models. The RMSE (Root Mean Square Error) and MAE (Mean Absolute Error) of all three models were around 0.15 °C, indicating that these models exhibit good prediction performance and robustness.
Real-world application scenarios provide a better basis for comparing data-driven modeling methods and highlighting their differences. Naguib et al. [99] trained and compared the performance of FNN and LSTM models in predicting battery surface temperature on real microprocessors. The results showed that the execution speed of the FNN model was significantly faster than that of the LSTM. With approximately 3000 model parameters, the execution time of FNN was 0.8 milliseconds, while that of LSTM was 2.5 milliseconds. In addition, the FNN model consumed less random access memory (RAM), requiring only 0.4 kilobytes compared to 1 kilobyte for LSTM. These two models can serve as alternatives to physical temperature sensors or as redundant monitoring systems to assist in detecting thermal faults and short-circuit failures in batteries.
Integrating data-driven modeling with experimental data, such as the EIS of the LIB, can improve prediction accuracy. Since EIS is related to the average temperature of the battery and does not require internal or external measurement devices, it has gained widespread attention [100]. To predict battery state parameters from EIS data, Ströbel et al. [101] used an ANN to establish the relationship between the battery’s EIS and its states (SOC, SOH, T), achieving high prediction accuracy for battery temperature. Yuan et al. [58] developed a battery core temperature-estimation method based on fusing a physical model with an LSTM. This method combines the advantages of physical models, EIS models, and data-driven approaches. The workflow of this method is shown in Figure 10: first, a numerical relationship between the average volume temperature of the battery and the core temperature is established using the physical model, where the average volume temperature is predicted from EIS and the thermal parameters are predicted using an LSTM neural network, finally yielding the battery core temperature.
All the models discussed above are purely data-driven, requiring a large amount of data for training. The accuracy of the data and the potential uncertainty they may contain can impact the model’s precision. Therefore, some studies have attempted to incorporate physical models into the data-driven models to improve their performance [95,102]. Cho et al. [103] incorporated the lumped capacitance battery thermal model which describes the energy balance as physical information into the loss function of the artificial neural network, realizing a PINN model. This work demonstrated the advantage of PINN over fully connected networks (FCNs) in terms of prediction accuracy for battery temperature. Another approach is to integrate neural networks as units into physical models. For instance, Kuzhiyil et al. [104] incorporated a neural network into a thermal equivalent circuit model with diffusion dynamics, resulting in the Neural-TECMD battery model, as shown in Figure 11, where two individual neural networks are used to predict HGR and voltage. Data-driven modeling can also be combined with multiple physical models of varying complexity to form hybrid models. By combining a neural network with an equivalent circuit model and an electrochemical model, Feng et al. [105] developed an electrochemical–thermal–neural network (ETNN) model for predicting battery SOC and temperature, as shown in Figure 12. A neural network is implemented to adapt the voltage output, especially when operating at high C-rates and across a wide spectrum of temperatures. Corresponding tests conducted under wide temperature and high current conditions demonstrated that the ETNN model has better performance than physical models in terms of accuracy and generalization.
Common data-driven modeling techniques can be integrated to achieve higher prediction accuracy. Yang et al. [106] proposed a novel extreme learning machines thermal model (ELMT model) that combines a physical thermal model with extreme learning machines (ELMs) to predict the temperature variation of LIBs in electric vehicles under external short-circuit conditions. This model outperforms traditional lumped thermal models and ELM models in both computational efficiency and prediction accuracy, offering higher physical significance and scalability. He et al. [107] constructed a composite network containing three DNNs, with each DNN used for parameter identification, heat-generation rate prediction, and temperature prediction, respectively. Physical information was embedded in each DNN, as shown in Figure 13, realizing the concept of PINN. This was the first attempt to retain and incorporate known physical information into a composite DNN for LIB’s thermal modeling, referred to by the authors as a physics-reserved spatiotemporal model.
In practical applications, thermal runaway can be detected by comparing the model’s predicted temperature with the actual measured temperature. The larger the deviation between the temperature predicted by a reliable model and the actual temperature, the higher the probability of a TR event [108,109]. The development of various data-driven battery temperature-prediction models thus provides support for early warning systems for lithium-ion batteries.
In summary, while physics-based models can accurately determine the thermal behavior of LIBs, their practical application is often hampered by the challenges of accurately identifying system structures and parameters. Data-driven models, on the other hand, demonstrate promising predictive capabilities but can suffer from limited generalization due to the wide range of operating conditions, battery aging states, and environmental factors. A promising solution therefore lies in the integration of these two approaches, utilizing the strengths of data-driven techniques to address the parameter-identification limitations of mechanistic models. This hybrid approach offers the potential to significantly enhance both the accuracy and robustness of battery temperature prediction.
Several promising directions for future research warrant exploration. The integration of diverse prediction methodologies through techniques like Kalman filtering, ensemble learning, and meta-bagging holds significant potential for improving prediction accuracy and robustness. Furthermore, the incorporation of emerging explainable machine learning techniques, like SHAP (Shapley Additive exPlanations) and partial dependence plots, into temperature-prediction models represents a crucial step towards enhancing model interpretability. Lastly, most of the existing work focuses on average, surface, or core battery temperatures. However, the non-uniformity and detailed distribution of temperature within the battery are often more critical for assessing performance degradation and safety risks. Future research is suggested to develop data-driven models capable of accurately predicting these temperature gradients and distributions. Developing explainable and spatially resolved models across a wide range of operating conditions is essential to accelerate their real-world applications.

5. Design of Thermal Management System

The design of BTMSs is crucial to controlling the temperature and temperature difference of the LIB (pack) [110]. Previous studies have shown that, for LIBs, the optimal operating temperature is 25–40 °C, and the temperature difference should not exceed 5 °C [111]. To achieve this goal, various types of BTMSs have been developed. Commonly used cooling techniques include air, liquid, heat pipe (HP), and phase change material (PCM) cooling. Hybrid techniques are also seen in practical applications [112].
Different BTMS cooling systems operate on different principles and have specific characteristics. These cooling systems involve numerous structural and operating parameters that directly determine system performance and thus require design optimization [113]. The optimization design of BTMSs aims to simultaneously minimize the maximum battery temperature during operation and reduce the temperature gradient within the battery. In the optimization process, general optimization algorithms typically require the dependency relationship between optimization parameters and performance metrics. This relationship can be obtained through experiments or simulations, but establishing the complex nonlinear dependencies among multiple parameters in this way is costly and difficult. Data-driven methods can play a key role here, as surrogate models of the optimization objective can be built using statistical modeling methods, traditional machine learning methods, and deep learning methods. These surrogate models can then be applied in the optimization process. We refer to this approach as surrogate-based optimization (SBO). The applications of common optimization algorithms, including ant colony optimization, genetic algorithm, PSO, and non-dominated sorting genetic algorithm II (NSGA-II) [114,115,116,117], in design parameter optimization have been thoroughly reviewed by Fayaz et al. [118], who also categorized these algorithms according to the various BTMSs.
We begin by exploring the application of data-driven models in the development of surrogate models. Common statistical modeling methods include linear regression, polynomial regression, response surface method, and Kriging method. For instance, Shrinet et al. [119] addressed the optimization problem of the serpentine channel in the 18650-type battery pack by using the response surface method to establish quadratic regression equations between thermal performance and cooling system design parameters, which were then used as surrogate models in the multi-objective optimization by the NSGA-II algorithm. Figure 14 provides a visual representation of the serpentine channel cooling structure for the 18650-type LIB pack.
The Kriging model is a regression-based statistical method that spatially models and interpolates random processes or fields using a covariance function, leveraging the spatial correlation between known data points to estimate values at unobserved locations. Monika et al. [120] employed a Kriging model to establish the dependency relationships between the design parameters of a battery cooling plate with the hexagonal channel and thermal performance. The cooling plate structure shown in Figure 15 is designated for pouch batteries. Their workflow combines Latin hypercube sampling, surrogate model, and multi-objective optimization. When compared to multivariate linear regression, response surface approximation, and support vector machine methods, the Kriging model demonstrated higher prediction accuracy. Additionally, they utilized the SHAP technique to reveal the influence of design and operational variables on the objective function, enhancing the model’s interpretability.
Traditional machine learning methods, such as SVR, have also been widely applied. Tang et al. [121] used the SVR method to establish the dependency relationships between the compressor speed, ambient temperature, and air flow velocity of the external heat exchanger with the cooling capacity and coefficient of performance of the liquid cooling system. This SVR model can be used to analyze the effects of various factors on the performances of liquid-cooling BTMSs comprehensively.
Among deep learning algorithms, feedforward ANNs are the most commonly used type. Numerous studies have employed ANNs to establish relationships between cooling system performance and structural/operational parameters. These ANNs then serve as surrogate models used in conjunction with optimization algorithms to enhance cooling performance. A summary of relevant studies is presented in Table 2. Based on this summary, the common outputs of the ANNs are typically the maximum temperature $T_{max}$, the maximum temperature difference $\Delta T_{max}$, and the pressure difference $\Delta P$. The thermal performance of the cooling system is a function of $T_{max}$ and $\Delta P$, while the maximum temperature difference is indicative of temperature non-uniformity. In these studies, the structural parameters of the BTMS are treated as discrete variables, and their values during the optimization process are discrete. However, in practical design these structural parameters are continuous, leading to a certain gap between research and real-world applications. To address this issue, Feng et al. [122] compared the optimization performance, computational complexity, and manufacturing cost of using discrete versus continuous variables in BTMS optimization. The results showed that both approaches achieved similar thermal performance, but the continuous-variable method involved higher computational complexity. In the context of optimization design for BTMSs, the artificial neural networks that have been extensively used and evaluated encompass FNN, DNN, Elman neural network, CNN, NARX, and LSTM [123,124]. Although ANN models perform well in predicting battery temperature, their internal mechanisms lack physical interpretation, which may hinder in-depth understanding and optimization of the model results.
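The surrogate-based optimization workflow can be sketched as follows: an ANN surrogate is fitted to sampled design–response pairs (here a synthetic stand-in for CFD data) and then minimized with a global optimizer. The design variables, bounds, and single-objective formulation are simplifying assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.neural_network import MLPRegressor

# Stand-in for CFD/experimental samples: design variables could be channel width and coolant
# flow rate; the response is the pack's maximum temperature. All values are illustrative.
rng = np.random.default_rng(0)
designs = rng.uniform([1.0, 0.1], [5.0, 1.0], size=(200, 2))
t_max = 45 - 2.0 * designs[:, 0] - 8.0 * designs[:, 1] + 0.5 * designs[:, 0] * designs[:, 1]

# Step 1: fit the ANN surrogate that replaces expensive simulations during optimization
surrogate = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0)
surrogate.fit(designs, t_max)

# Step 2: minimize the surrogate prediction over the design space with a global optimizer
result = differential_evolution(lambda x: float(surrogate.predict(x.reshape(1, -1))[0]),
                                bounds=[(1.0, 5.0), (0.1, 1.0)], seed=0)
print("optimal design:", result.x, "predicted T_max:", result.fun)
```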
Bayesian neural networks (BNNs) extend traditional neural networks by learning probability distributions over the network’s weights, rather than single point estimates. Qian et al. [137] used CFD simulations to train a BNN that predicts battery temperature as a function of battery spacing under air cooling, as shown in Figure 16. They then optimized the LIB pack’s thermal performance by optimizing the battery spacing using the trained BNN.
In contrast to the neural-network surrogates of cooling system performance used in the aforementioned studies, Zhou et al. [138] built a Gaussian process surrogate model for the air-cooling system of an electric bicycle battery. Compared to neural networks, Gaussian process modeling has the advantage of quantifying the uncertainty of its predictions.
The studies discussed above all adopt a surrogate-based optimization strategy: data-driven methods are used to model the performance of the cooling system, and the system is then optimized by combining the rapid inference of the surrogate with classical optimization algorithms such as PSO and NSGA-II. An alternative and increasingly popular approach is deep reinforcement learning (DRL), which automates the optimization process by learning optimal actions through interaction with a simulated or real-world environment [139]. In DRL, an agent observes the environment’s state and, based on its current policy, selects an action. This action alters the environment, producing a new state and a reward signal. The agent then refines its policy based on this reward, striving to maximize the total reward accumulated across all steps until a terminal state is reached [140,141]. In a recent study, Cheng et al. [142] utilized DRL to optimize the layout of a battery-cooling system integrating PCM and liquid-cooling pipes, as shown in Figure 17. Their findings indicated superior performance compared to conventional methods such as NSGA-II: the DRL-optimized system exhibited a 1.3% lower temperature difference and a 0.2% lower maximum temperature than the NSGA-II solution. The core of their approach was a DRL framework based on the double deep Q-network (DDQN) algorithm. This framework achieved multi-objective optimization of the BTMS by carefully defining the state space (T_max, ΔT_max, average PCM temperature, and the current action), the action space (five discrete actions representing adjustments to structural parameters), and the reward function (based on incremental improvements in the optimization objectives). To address the challenge of local optima, the training process incorporated an ε-greedy exploration strategy and an experience replay buffer, enabling the agent to converge towards a globally optimal solution.
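The essential ingredients of such a DDQN-based framework, namely a Q-network over discrete structural adjustments, ε-greedy exploration, an experience replay buffer, a target network, and the double-Q update, are outlined in the generic skeleton below. The environment is a random stub standing in for the thermal simulation, and the state/action dimensions are assumptions mirroring, but not reproducing, the setup of Cheng et al. [142].

```python
# Generic DDQN skeleton: Q-network over five discrete structural adjustments,
# epsilon-greedy exploration, experience replay, target network, double-Q update.
# The environment is a random stub standing in for the thermal simulation.
import random
from collections import deque
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 5   # assumed: (T_max, dT_max, average PCM T, last action); 5 adjustments

def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))

q_net, target_net = make_qnet(), make_qnet()
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
replay = deque(maxlen=10_000)
gamma, epsilon = 0.99, 0.2

def env_step(state, action):
    """Placeholder for the thermal model: returns next state and reward."""
    next_state = state + 0.01 * torch.randn(STATE_DIM)
    reward = float(-next_state[0])          # e.g., reward lower (normalized) T_max
    return next_state, reward

state = torch.zeros(STATE_DIM)
for step in range(500):
    # epsilon-greedy action selection
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = int(q_net(state).argmax())
    next_state, reward = env_step(state, action)
    replay.append((state, action, reward, next_state))
    state = next_state

    if len(replay) >= 64:
        batch = random.sample(replay, 64)
        s = torch.stack([b[0] for b in batch])
        a = torch.tensor([b[1] for b in batch])
        r = torch.tensor([b[2] for b in batch])
        s2 = torch.stack([b[3] for b in batch])
        # Double DQN target: online net selects the action, target net evaluates it
        with torch.no_grad():
            best_a = q_net(s2).argmax(dim=1, keepdim=True)
            target = r + gamma * target_net(s2).gather(1, best_a).squeeze(1)
        q_pred = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        loss = nn.functional.mse_loss(q_pred, target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    if step % 50 == 0:                      # periodic target-network synchronization
        target_net.load_state_dict(q_net.state_dict())
```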
In the works discussed so far, data-driven models map structural and operational parameters directly to cooling-system performance parameters. In contrast, Ebbs-Picken et al. [143] employed a deep encoder–decoder hierarchical (DeepEDH) convolutional neural network to predict the temperature, velocity, and pressure fields in cooling systems, and then combined this field-prediction model with optimization algorithms to achieve multi-objective optimization. Figure 18 outlines the workflow of this work. Compared to optimization based on performance-parameter surrogate models, this approach demonstrates better stability and physical interpretability.
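A minimal convolutional encoder–decoder of the kind used for field prediction is sketched below. It is far simpler than the DeepEDH architecture and only illustrates the input/output structure: geometry and boundary-condition maps enter as input channels, and predicted temperature, velocity, and pressure fields leave as output channels (the grid size and channel choices are assumptions).

```python
# Minimal convolutional encoder-decoder for field prediction (illustrative only;
# much simpler than the DeepEDH model of Ref. [143]).
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, in_ch=2, out_ch=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, stride=2, padding=1), nn.ReLU(),        # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, out_ch, 4, stride=2, padding=1),         # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Example: a batch of 8 cold-plate cases on a 64x64 grid
# (channel 0: solid/fluid mask, channel 1: heat-flux map from the cells)
inputs = torch.rand(8, 2, 64, 64)
fields = EncoderDecoder()(inputs)          # (8, 3, 64, 64): temperature, velocity magnitude, pressure
print(fields.shape)
```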
In summary, data-driven approaches, particularly DRL, have demonstrated superior performance compared to surrogate-based optimization methods in the design of BTMSs. DRL’s ability to learn optimal design parameters directly from data makes it a promising tool for various BTMS configurations. However, current research heavily relies on computationally expensive CFD simulations for training data. Future research should explore integrating reduced-order modeling techniques or other efficient modeling approaches to accelerate CFD calculations, which will enable the application of data-driven methods to large-scale LIB systems.

6. Discussion and Outlook

This review has presented a comprehensive overview of the application of data-driven modeling techniques to the prediction of LIB thermal behavior and the design of effective BTMSs. By outlining the fundamentals of data-driven methodologies, LIB thermal characteristics, and BTMS principles, this work aims to equip readers with the knowledge needed to navigate this rapidly evolving field. Through the examination of diverse data-driven modeling approaches and the analysis of their respective strengths and limitations, the review highlights the potential of these techniques to address critical challenges in LIB thermal management.
The key conclusion drawn from this review is that data-driven models, trained on either experimental or simulation data, can effectively predict battery heat generation, temperature profiles, and cooling performance, resulting in readily deployable predictive tools. These models offer a computationally efficient alternative to traditional physics-based simulations, enabling faster design iterations and real-time control strategies.
The rapid advancements in data-driven modeling technologies present significant opportunities for further innovation in the LIB field. Several promising future research directions are as follows:
(1) Advanced experimental techniques and sensors for data acquisition: The accuracy and reliability of data-driven models are fundamentally dependent on the quality and comprehensiveness of the training data. Emerging experimental techniques and sensors offer the potential to provide more detailed and accurate data for model development. This includes techniques such as EIS-based internal temperature measurement, fiber optic sensors, novel material-based thermocouples (e.g., carbon black-based), and temperature-distribution measurement using infrared cameras or thermochromic liquid crystals. Integrating data from these advanced sensors can significantly enhance the performance and applicability of data-driven models in the LIB field.
(2) Hybrid modeling: Combining data-driven models with physics-based insights and/or models can leverage the strengths of both approaches, leading to improved accuracy and generalizability. This includes incorporating physical constraints into the model-training process or using data-driven models to parameterize existing physics-based models (a minimal illustration of a physics-informed loss term is sketched after this list).
(3) Ensemble modeling: Integrating multiple data-driven models, each trained on different datasets or using different algorithms, can enhance prediction accuracy and reduce model uncertainty. Ensemble methods can also improve the model’s ability to handle diverse operating conditions and battery aging effects.
(4) Explainable AI: Incorporating explainability techniques into data-driven models is crucial for understanding the underlying mechanisms driving battery thermal behavior. Explainable AI methods can provide insights into the key factors influencing heat generation and temperature distribution, facilitating more informed design decisions.
(5) Deep reinforcement learning for active thermal management: DRL algorithms can be implemented to develop intelligent and adaptive thermal management strategies. DRL can learn optimal control policies for cooling systems by interacting with a simulated or real-world battery pack environment, dynamically adjusting cooling parameters (e.g., coolant flow rate, fan speed) to minimize energy consumption, maintain an optimal temperature distribution, and extend LIB lifespan.
(6) Implementation challenges and deployment strategies: The practical implementation of data-driven or hybrid models within realistic BTMSs faces limitations related to on-board computational resources, including microprocessor RAM and real-time latency constraints. It is therefore necessary to develop implementation techniques such as model reduction, memory optimization, and algorithm acceleration to balance model complexity with deployment feasibility. Alternatively, real-time inference on edge AI hardware offers a promising avenue for deploying complex models within the resource-constrained environment of a BTMS.
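As an illustration of the hybrid-modeling direction in item (2), the sketch below augments a small neural temperature model with a physics penalty derived from a lumped thermal balance, m c_p d(ΔT)/dt = Q_gen − hA ΔT, where ΔT is the temperature rise above ambient. The parameter values, network size, and synthetic data are assumptions chosen only to show how data and physics losses can be combined.

```python
# Hedged sketch: data loss + physics penalty from a lumped thermal balance.
# All parameter values and data are illustrative assumptions.
import torch
import torch.nn as nn

m_cp, hA = 900.0, 1.5                      # assumed lumped parameters [J/K], [W/K]
net = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))  # (t, Q_gen) -> dT(t)

def hybrid_loss(t, q_gen, dT_meas, weight=0.1):
    t = t.requires_grad_(True)
    dT_pred = net(torch.cat([t, q_gen], dim=1))
    data_loss = nn.functional.mse_loss(dT_pred, dT_meas)
    # Physics residual: d(dT)/dt - (Q_gen - hA*dT) / (m*c_p)
    ddT_dt = torch.autograd.grad(dT_pred.sum(), t, create_graph=True)[0]
    residual = ddT_dt - (q_gen - hA * dT_pred) / m_cp
    return data_loss + weight * residual.pow(2).mean()

# Synthetic "measurements": step heat input and the corresponding first-order response
t = torch.linspace(0.0, 600.0, 50).unsqueeze(1)           # time [s]
q_gen = torch.full_like(t, 5.0)                           # heat-generation rate [W]
dT_meas = (5.0 / hA) * (1.0 - torch.exp(-hA * t / m_cp))  # temperature rise [K]

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(300):
    opt.zero_grad()
    loss = hybrid_loss(t, q_gen, dT_meas)
    loss.backward()
    opt.step()
print(f"Final combined loss: {loss.item():.4f}")
```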
By pursuing these research directions, the field of data-driven battery thermal management can move towards more accurate and interpretable models, contributing to the development of safer and longer-lasting LIB systems.

Author Contributions

Conceptualization, Y.T. and P.W.; data curation, S.Y.; formal analysis, T.Y.; writing—original draft preparation, W.F. and W.Q.; writing—review and editing, W.Q. and T.Y.; visualization, G.D.; supervision, T.Y.; project administration, W.Q.; funding acquisition, Y.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Changzhou Science and Technology Program (International Science and Technology Cooperation) under grant No. CZ20240007 and Natural Science Foundation of Jiangsu Province under grant No. BK20230532.

Data Availability Statement

No data were used in this review.

Acknowledgments

The financial support from the Changzhou Science and Technology Program (International Science and Technology Cooperation, No. CZ20240007) and the Natural Science Foundation of Jiangsu Province (Grant No. BK20230532) is gratefully acknowledged.

Conflicts of Interest

Yongjun Tian, Guangwu Dai, and Tao Yan are employees of TCATARC Automotive Test Center (Changzhou) Company Limited. The other authors declare no conflicts of interest.

Nomenclature

ANN     artificial neural network
BTMS    battery thermal management system
CFD     computational fluid dynamics
CNN     convolutional neural network
DNN     deep neural network
DOD     depth of discharge
DRL     deep reinforcement learning
DT      decision tree
EIS     electrochemical impedance spectroscopy
FNN     feedforward neural network
GPR     Gaussian process regression
GRA     grey relational analysis
GRG     grey relational grade
HGR     heat-generation rate
LIB     lithium-ion battery
LR      linear regression
LSTM    long short-term memory
MLP     multi-layer perceptron
PINN    physics-informed neural network
PDE     partial differential equation
PSO     particle swarm optimization
PCA     principal component analysis
RF      random forest
RUL     remaining useful life
RMSE    root mean square error
RL      reinforcement learning
RNN     recurrent neural network
SVM     support vector machine
SVR     support vector regression
SOC     state of charge
SOH     state of health
TR      thermal runaway

References

  1. Han, J.; Wang, F. Design and testing of a small orchard tractor driven by a power battery. Eng. Agric. 2023, 2, e20220195. [Google Scholar] [CrossRef]
  2. Huang, Y.; Li, J. Key Challenges for Grid-Scale Lithium-Ion Battery Energy Storage. Adv. Energy Mater. 2022, 12, 2202197. [Google Scholar] [CrossRef]
  3. Yang, S.; Ling, C.; Fan, Y.; Yang, Y.; Tan, X.; Dong, H. A Review of Lithium-Ion Battery Thermal Management System Strategies and the Evaluate Criteria. Int. J. Electrochem. Sci. 2019, 14, 6077–6107. [Google Scholar] [CrossRef]
  4. Feng, X.; Zheng, S.; Ren, D.; He, X.; Wang, L.; Cui, H.; Liu, X.; Jin, C.; Zhang, F.; Xu, C.; et al. Investigating the thermal runaway mechanisms of lithium-ion batteries based on thermal analysis database. Appl. Energy 2019, 246, 53–64. [Google Scholar] [CrossRef]
  5. Gharebaghi, M.; Rezaei, O.; Li, C.; Wang, Z.; Tang, Y. A Survey on Using Second-Life Batteries in Stationary Energy Storage Applications. Energies 2025, 18, 42. [Google Scholar] [CrossRef]
  6. Zeng, Y.; Chalise, D.; Lubner, S.D.; Kaur, S.; Prasher, R.S. A review of thermal physics and management inside lithium-ion batteries for high energy density and fast charging. Energy Storage Mater. 2021, 41, 264–288. [Google Scholar] [CrossRef]
  7. Smith, K.; Wang, C.Y. Power and thermal characterization of a lithium-ion battery pack for hybrid-electric vehicles. J. Power Sources 2006, 160, 662–673. [Google Scholar] [CrossRef]
  8. Wang, Q.; Ping, P.; Zhao, X.; Chu, G.; Sun, J.; Chen, C. Thermal runaway caused fire and explosion of lithium ion battery. J. Power Sources 2012, 208, 210–224. [Google Scholar] [CrossRef]
  9. Feng, X.; Ouyang, M.; Liu, X.; Lu, L.; Xia, Y.; He, X. Thermal runaway mechanism of lithium ion battery for electric vehicles: A review. Energy Storage Mater. 2018, 10, 246–267. [Google Scholar] [CrossRef]
  10. Tran, M.K.; Mevawalla, A.; Aziz, A.; Panchal, S.; Xie, Y.; Fowler, M. A Review of Lithium-Ion Battery Thermal Runaway Modeling and Diagnosis Approaches. Processes 2022, 10, 1192. [Google Scholar] [CrossRef]
  11. Pal, S.; Kashyap, P.; Panda, B.; Gao, L.; Garg, A. A comprehensive review of thermal runaway for batteries: Experimental; modelling, challenges and proposed framework. Int. J. Green Energy 2025, 22, 2399–2422. [Google Scholar] [CrossRef]
  12. Chalise, D.; Lu, W.; Srinivasan, V.; Prasher, R. Heat of Mixing During Fast Charge/Discharge of a Li-Ion Cell: A Study on NMC523 Cathode. J. Electrochem. Soc. 2020, 167, 090560. [Google Scholar] [CrossRef]
  13. An, K.; Barai, P.; Smith, K.; Mukherjee, P.P. Probing the Thermal Implications in Mechanical Degradation of Lithium-Ion Battery Electrodes. J. Electrochem. Soc. 2014, 161, A1058. [Google Scholar] [CrossRef]
  14. Li, X.; Chang, X.; Feng, Y.; Dai, Z.; Zheng, L. Investigation on the heat generation and heat sources of cylindrical NCM811 lithium-ion batteries. Appl. Therm. Eng. 2024, 241, 122403. [Google Scholar] [CrossRef]
  15. Wu, B.; Yufit, V.; Marinescu, M.; Offer, G.J.; Martinez-Botas, R.F.; Brandon, N.P. Coupled thermal–electrochemical modelling of uneven heat generation in lithium-ion battery packs. J. Power Sources 2013, 243, 544–554. [Google Scholar] [CrossRef]
  16. Wang, H.; Li, P.; Wang, K.; Zhang, H. Coupled Electrochemical-Thermal-Mechanical Modeling and Simulation of Multi-Scale Heterogeneous Lithium-Ion Batteries. Adv. Theory Simul. 2025, e00250. [Google Scholar] [CrossRef]
  17. Zhao, G.; Wang, X.; Negnevitsky, M.; Zhang, H. A review of air-cooling battery thermal management systems for electric and hybrid electric vehicles. J. Power Sources 2021, 501, 230001. [Google Scholar] [CrossRef]
  18. Northrop, P.W.C.; Suthar, B.; Ramadesigan, V.; Santhanagopalan, S.; Braatz, R.D.; Subramanian, V.R. Efficient Simulation and Reformulation of Lithium-Ion Battery Models for Enabling Electric Transportation. J. Electrochem. Soc. 2014, 161, E3149. [Google Scholar] [CrossRef]
  19. Le Houx, J.; Kramer, D. Physics based modelling of porous lithium ion battery electrodes—A review. Energy Rep. 2020, 6, 1–9. [Google Scholar] [CrossRef]
  20. Habib, M.K.; Ayankoso, S.A.; Nagata, F. Data-Driven Modeling: Concept, Techniques, Challenges and a Case Study. In Proceedings of the 2021 IEEE International Conference on Mechatronics and Automation (ICMA), Takamatsu, Japan, 8–11 August 2021; pp. 1000–1007. [Google Scholar] [CrossRef]
  21. Li, M.; Dong, C.; Xiong, B.; Mu, Y.; Yu, X.; Xiao, Q.; Jia, H. STTEWS: A sequential-transformer thermal early warning system for lithium-ion battery safety. Appl. Energy 2022, 328, 119965. [Google Scholar] [CrossRef]
  22. Shahrabi, A.; Nikpanjeh, F.; Hamounian, A.; Mohebbi, H.; Shekari, M.; Parvandi, Z.; Asoudeh, M.; Tabar, M.R.R. Data-driven stability analysis of complex systems with higher-order interactions. Commun. Phys. 2025, 8, 239. [Google Scholar] [CrossRef]
  23. Al Miaari, A.; Ali, H.M. Batteries temperature prediction and thermal management using machine learning: An overview. Energy Rep. 2023, 10, 2277–2305. [Google Scholar] [CrossRef]
  24. Haghi, S.; Hidalgo, M.F.V.; Niri, M.F.; Daub, R.; Marco, J. Machine Learning in Lithium-Ion Battery Cell Production: A Comprehensive Mapping Study. Batter. Supercaps 2023, 6, e202300046. [Google Scholar] [CrossRef]
  25. Wang, S.; Zhou, R.; Ren, Y.; Jiao, M.; Liu, H.; Lian, C. Advanced data-driven techniques in AI for predicting lithium-ion battery remaining useful life: A comprehensive review. Green Chem. Eng. 2025, 6, 139–153. [Google Scholar] [CrossRef]
  26. Jaime-Barquero, E.; Bekaert, E.; Olarte, J.; Zulueta, E.; Lopez-Guede, J.M. Artificial Intelligence Opportunities to Diagnose Degradation Modes for Safety Operation in Lithium Batteries. Batteries 2023, 9, 388. [Google Scholar] [CrossRef]
  27. Duquesnoy, M.; Liu, C.; Dominguez, D.Z.; Kumar, V.; Ayerbe, E.; Franco, A.A. Machine learning-assisted multi-objective optimization of battery manufacturing from synthetic data generated by physics-based simulations. Energy Storage Mater. 2023, 56, 50–61. [Google Scholar] [CrossRef]
  28. Tagade, P.; Hariharan, K.S.; Ramachandran, S.; Khandelwal, A.; Naha, A.; Kolake, S.M.; Han, S.H. Deep Gaussian process regression for lithium-ion battery health prognosis and degradation mode diagnosis. J. Power Sources 2020, 445, 227281. [Google Scholar] [CrossRef]
  29. Wu, X.; Zhu, J.; Wu, B.; Zhao, C.; Sun, J.; Dai, C. Discrimination of Chinese Liquors Based on Electronic Nose and Fuzzy Discriminant Principal Component Analysis. Foods 2019, 8, 38. [Google Scholar] [CrossRef] [PubMed]
  30. Geng, W.; Haruna, S.A.; Li, H.; Kademi, H.I.; Chen, Q. A Novel Colorimetric Sensor Array Coupled Multivariate Calibration Analysis for Predicting Freshness in Chicken Meat: A Comparison of Linear and Nonlinear Regression Algorithms. Foods 2023, 12, 720. [Google Scholar] [CrossRef]
  31. Zhang, Y.; Ma, H.; Wang, B.; Qu, W.; Wali, A.; Zhou, C. Relationships between the structure of wheat gluten and ACE inhibitory activity of hydrolysate: Stepwise multiple linear regression analysis. J. Sci. Food Agric. 2016, 96, 3313–3320. [Google Scholar] [CrossRef]
  32. Huang, Y.; Shen, L.; Liu, H. Grey relational analysis, principal component analysis and forecasting of carbon emissions based on long short-term memory in China. J. Clean. Prod. 2019, 209, 415–423. [Google Scholar] [CrossRef]
  33. Du, Z.; Hu, Y.; Buttar, N.A. Analysis of mechanical properties for tea stem using grey relational analysis coupled with multiple linear regression. Sci. Hortic. 2020, 260, 108886. [Google Scholar] [CrossRef]
  34. Zhang, H.; Liu, Y.; Zhang, C.; Li, N. Machine Learning Methods for Weather Forecasting: A Survey. Atmosphere 2025, 16, 82. [Google Scholar] [CrossRef]
  35. Vapnik, V.N. The Nature of Statistical Learning Theory, 2nd ed.; Springer: New York, NY, USA, 2000. [Google Scholar]
  36. Wong, K.K.L. Support Vector Machine. In Cybernetical Intelligence; John Wiley & Sons, Ltd.: Hoboken, NJ, USA, 2023; Chapter 8; pp. 149–176. [Google Scholar] [CrossRef]
  37. Huang, W.; Zhu, W.; Ma, C.; Guo, Y.; Chen, C. Identification of group-housed pigs based on Gabor and Local Binary Pattern features. Biosyst. Eng. 2018, 166, 90–100. [Google Scholar] [CrossRef]
  38. Fan, C.; Liu, Y.; Cui, T.; Qiao, M.; Yu, Y.; Xie, W.; Huang, Y. Quantitative Prediction of Protein Content in Corn Kernel Based on Near-Infrared Spectroscopy. Foods 2024, 13, 4173. [Google Scholar] [CrossRef] [PubMed]
  39. Rasmussen, C.E.; Williams, C.K.I. Gaussian Processes for Machine Learning; MIT Press: Cambridge, MA, USA, 2005. [Google Scholar]
  40. Lian, Y.; Chen, J.; Guan, Z.; Song, J. Development of a monitoring system for grain loss of paddy rice based on a decision tree algorithm. Int. J. Agric. Biol. Eng. 2021, 14, 224–229. [Google Scholar] [CrossRef]
  41. Louppe, G. Understanding Random Forests: From Theory to Practice. Ph.D. Thesis, University of Liege, Liege, Belgium, 2014. [Google Scholar] [CrossRef]
  42. Yu, J.; Zhangzhong, L.; Lan, R.; Zhang, X.; Xu, L.; Li, J. Ensemble Learning Simulation Method for Hydraulic Characteristic Parameters of Emitters Driven by Limited Data. Agronomy 2023, 13, 986. [Google Scholar] [CrossRef]
  43. Elbeltagi, A.; Srivastava, A.; Deng, J.; Li, Z.; Raza, A.; Khadke, L.; Yu, Z.; El-Rawy, M. Forecasting vapor pressure deficit for agricultural water management using machine learning in semi-arid environments. Agric. Water Manag. 2023, 283, 108302. [Google Scholar] [CrossRef]
  44. Qian, W.; Hui, X.; Wang, B.; Kronenburg, A.; Sung, C.J.; Lin, Y. An investigation into oxidation-induced fragmentation of soot aggregates by Langevin dynamics simulations. Fuel 2023, 334, 126547. [Google Scholar] [CrossRef]
  45. Qian, W.; Yang, S.; Liu, W.; Xu, Q.; Zhu, W. Research on Flow Field Prediction in a Multi-Swirl Combustor Using Artificial Neural Network. Processes 2024, 12, 2435. [Google Scholar] [CrossRef]
  46. Suk, H.I. An Introduction to Neural Networks and Deep Learning. In Deep Learning for Medical Image Analysis; Zhou, S.K., Greenspan, H., Shen, D., Eds.; Academic Press: Cambridge, MA, USA, 2017; pp. 3–24. [Google Scholar] [CrossRef]
  47. Sarker, I.H. Deep Learning: A Comprehensive Overview on Techniques, Taxonomy, Applications and Research Directions. SN Comput. Sci. 2021, 2, 420. [Google Scholar] [CrossRef]
  48. Zhou, X.; Sun, J.; Tian, Y.; Lu, B.; Hang, Y.; Chen, Q. Hyperspectral technique combined with deep learning algorithm for detection of compound heavy metals in lettuce. Food Chem. 2020, 321, 126503. [Google Scholar] [CrossRef] [PubMed]
  49. Chen, C.; Zhu, W.; Steibel, J.; Siegford, J.; Han, J.; Norton, T. Classification of drinking and drinker-playing in pigs by a video-based deep learning method. Biosyst. Eng. 2020, 196, 1–14. [Google Scholar] [CrossRef]
  50. Maffezzoni, D.; Barbierato, E.; Gatti, A. Data-Driven Diagnostics for Pediatric Appendicitis: Machine Learning to Minimize Misdiagnoses and Unnecessary Surgeries. Future Internet 2025, 17, 147. [Google Scholar] [CrossRef]
  51. Wang, Z.; Gong, K.; Fan, W.; Li, C.; Qian, W. Prediction of swirling flow field in combustor based on deep learning. Acta Astronaut. 2022, 201, 302–316. [Google Scholar] [CrossRef]
  52. Zhang, T.; Zhou, J.; Liu, W.; Yue, R.; Shi, J.; Zhou, C.; Hu, J. SN-CNN: A Lightweight and Accurate Line Extraction Algorithm for Seedling Navigation in Ridge-Planted Vegetables. Agriculture 2024, 14, 1446. [Google Scholar] [CrossRef]
  53. Qiu, D.; Guo, T.; Yu, S.; Liu, W.; Li, L.; Sun, Z.; Peng, H.; Hu, D. Classification of Apple Color and Deformity Using Machine Vision Combined with CNN. Agriculture 2024, 14, 978. [Google Scholar] [CrossRef]
  54. Shen, S.; Sadoughi, M.; Chen, X.; Hong, M.; Hu, C. A deep learning method for online capacity estimation of lithium-ion batteries. J. Energy Storage 2019, 25, 100817. [Google Scholar] [CrossRef]
  55. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  56. Zhang, X.; Xiang, H.; Xiong, X.; Wang, Y.; Chen, Z. Benchmarking core temperature forecasting for lithium-ion battery using typical recurrent neural networks. Appl. Therm. Eng. 2024, 248, 123257. [Google Scholar] [CrossRef]
  57. Taha, M.F.; Mao, H.; Mousa, S.; Zhou, L.; Wang, Y.; Elmasry, G.; Al-Rejaie, S.; Elwakeel, A.E.; Wei, Y.; Qiu, Z. Deep Learning-Enabled Dynamic Model for Nutrient Status Detection of Aquaponically Grown Plants. Agronomy 2024, 14, 2290. [Google Scholar] [CrossRef]
  58. Yuan, A.; Cai, T.; Luo, H.; Song, Z.; Wei, B. Core temperature estimation of lithium-ion battery based on numerical model fusion deep learning. J. Energy Storage 2024, 102, 114148. [Google Scholar] [CrossRef]
  59. Raissi, M.; Yazdani, A.; Karniadakis, G.E. Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations. Science 2020, 367, 1026–1030. [Google Scholar] [CrossRef]
  60. Qian, W.; Hui, X.; Wang, B.; Zhang, Z.; Lin, Y.; Yang, S. Physics-Informed neural network for inverse heat conduction problem. Heat Transf. Res. 2023, 54, 65–76. [Google Scholar] [CrossRef]
  61. Guo, P.; Cheng, Z.; Yang, L. A data-driven remaining capacity estimation approach for lithium-ion batteries based on charging health feature extraction. J. Power Sources 2019, 412, 442–450. [Google Scholar] [CrossRef]
  62. Chen, Y.; Yu, Z.; Han, Z.; Sun, W.; He, L. A Decision-Making System for Cotton Irrigation Based on Reinforcement Learning Strategy. Agronomy 2024, 14, 11. [Google Scholar] [CrossRef]
  63. Feng, X.; Ren, D.; He, X.; Ouyang, M. Mitigating Thermal Runaway of Lithium-Ion Batteries. Joule 2020, 4, 743–770. [Google Scholar] [CrossRef]
  64. Feng, X.; Fang, M.; He, X.; Ouyang, M.; Lu, L.; Wang, H.; Zhang, M. Thermal runaway features of large format prismatic lithium ion battery using extended volume accelerating rate calorimetry. J. Power Sources 2014, 255, 294–301. [Google Scholar] [CrossRef]
  65. Arora, S.; Shen, W.; Kapoor, A. Neural network based computational model for estimation of heat generation in LiFePO4 pouch cells of different nominal capacities. Comput. Chem. Eng. 2017, 101, 81–94. [Google Scholar] [CrossRef]
  66. Cao, R.; Zhang, X.; Yang, H. Prediction of the heat-generation rate of lithium-ion batteries based on three machine learning algorithms. Batteries 2023, 9, 165. [Google Scholar] [CrossRef]
  67. Yalçın, S.; Panchal, S.; Herdem, M.S. A CNN-ABC model for estimation and optimization of heat generation rate and voltage distributions of lithium-ion batteries for electric vehicles. Int. J. Heat Mass Transf. 2022, 199, 123486. [Google Scholar] [CrossRef]
  68. Legala, A.; Li, X. Hybrid data-based modeling for the prediction and diagnostics of Li-ion battery thermal behaviors. Energy AI 2022, 10, 100194. [Google Scholar] [CrossRef]
  69. Cho, G.; Kim, Y.; Kwon, J.; Su, W.; Wang, M. Impact of Data Sampling Methods on the Performance of Data-driven Parameter Identification for Lithium ion Batteries. IFAC-PapersOnLine 2021, 54, 534–539. [Google Scholar] [CrossRef]
  70. Li, W.; Zhang, J.; Ringbeck, F.; Jöst, D.; Zhang, L.; Wei, Z.; Sauer, D.U. Physics-informed neural networks for electrode-level state estimation in lithium-ion batteries. J. Power Sources 2021, 506, 230034. [Google Scholar] [CrossRef]
  71. Liu, S.; Zhang, T.; Zhang, C.; Yuan, L.; Xu, Z.; Jin, L. Non-uniform heat generation model of pouch lithium-ion battery based on regional heat-generation rate. J. Energy Storage 2023, 63, 107074. [Google Scholar] [CrossRef]
  72. Koo, B.; Goli, P.; Sumant, A.V.; dos Santos Claro, P.C.; Rajh, T.; Johnson, C.S.; Balandin, A.A.; Shevchenko, E.V. Toward Lithium Ion Batteries with Enhanced Thermal Conductivity. ACS Nano 2014, 8, 7202–7207. [Google Scholar] [CrossRef] [PubMed]
  73. Werner, D.; Loges, A.; Becker, D.J.; Wetzel, T. Thermal conductivity of Li-ion batteries and their electrode configurations—A novel combination of modelling and experimental approach. J. Power Sources 2017, 364, 72–83. [Google Scholar] [CrossRef]
  74. Bazinski, S.J.; Wang, X. Experimental study on the influence of temperature and state-of-charge on the thermophysical properties of an LFP pouch cell. J. Power Sources 2015, 293, 283–291. [Google Scholar] [CrossRef]
  75. Wei, L.; Lu, Z.; Cao, F.; Zhang, L.; Yang, X.; Yu, X.; Jin, L. A comprehensive study on thermal conductivity of the lithium-ion battery. Int. J. Energy Res. 2020, 44, 9466–9478. [Google Scholar] [CrossRef]
  76. Leng, X.; Li, Y.; Xu, G.; Xiong, W.; Xiao, S.; Li, C.; Chen, J.; Yang, M.; Li, S.; Chen, Y.; et al. Prediction of lithium-ion battery internal temperature using the imaginary part of electrochemical impedance spectroscopy. Int. J. Heat Mass Transf. 2025, 240, 126664. [Google Scholar] [CrossRef]
  77. Choi, C.; Park, S.; Kim, J. Uniqueness of multilayer perceptron-based capacity prediction for contributing state-of-charge estimation in a lithium primary battery. Ain Shams Eng. J. 2023, 14, 101936. [Google Scholar] [CrossRef]
  78. Zheng, Y.; Cui, Y.; Han, X.; Ouyang, M. A capacity prediction framework for lithium-ion batteries using fusion prediction of empirical model and data-driven method. Energy 2021, 237, 121556. [Google Scholar] [CrossRef]
  79. Li, P.; Zhang, Z.; Xiong, Q.; Ding, B.; Hou, J.; Luo, D.; Rong, Y.; Li, S. State-of-health estimation and remaining useful life prediction for the lithium-ion battery based on a variant long short term memory neural network. J. Power Sources 2020, 459, 228069. [Google Scholar] [CrossRef]
  80. Lin, M.; Zeng, X.; Wu, J. State of health estimation of lithium-ion battery based on an adaptive tunable hybrid radial basis function network. J. Power Sources 2021, 504, 230063. [Google Scholar] [CrossRef]
  81. Hu, X.; Jiang, H.; Feng, F.; Liu, B. An enhanced multi-state estimation hierarchy for advanced lithium-ion battery management. Appl. Energy 2020, 257, 114019. [Google Scholar] [CrossRef]
  82. Hasib, S.A.; Islam, S.; Ali, M.F.; Sarker, S.K.; Li, L.; Hasan, M.M.; Saha, D.K. Enhancing prediction accuracy of Remaining Useful Life in lithium-ion batteries: A deep learning approach with Bat optimizer. Future Batter. 2024, 2, 100003. [Google Scholar] [CrossRef]
  83. Qian, C.; Guan, H.; Xu, B.; Xia, Q.; Sun, B.; Ren, Y.; Wang, Z. A CNN-SAM-LSTM hybrid neural network for multi-state estimation of lithium-ion batteries under dynamical operating conditions. Energy 2024, 294, 130764. [Google Scholar] [CrossRef]
  84. Jaliliantabar, F.; Mamat, R.; Kumarasamy, S. Prediction of lithium-ion battery temperature in different operating conditions equipped with passive battery thermal management system by artificial neural networks. Mater. Today Proc. 2022, 48, 1796–1804. [Google Scholar] [CrossRef]
  85. Hussein, A.A.; Chehade, A.A. Robust Artificial Neural Network-Based Models for Accurate Surface Temperature Estimation of Batteries. IEEE Trans. Ind. Appl. 2020, 56, 5269–5278. [Google Scholar] [CrossRef]
  86. Xiong, R.; Li, X.; Li, H.; Zhu, B.; Avelin, A. Neural network and physical enable one sensor to estimate the temperature for all cells in the battery pack. J. Energy Storage 2024, 80, 110387. [Google Scholar] [CrossRef]
  87. Yravedra, F.A.; Li, Z. A complete machine learning approach for predicting lithium-ion cell combustion. Electr. J. 2021, 34, 106887. [Google Scholar] [CrossRef]
  88. Naguib, M.; Kollmeyer, P.; Vidal, C.; Emadi, A. Accurate Surface Temperature Estimation of Lithium-Ion Batteries Using Feedforward and Recurrent Artificial Neural Networks. In Proceedings of the 2021 IEEE Transportation Electrification Conference & Expo (ITEC), Chicago, IL, USA, 21–25 June 2021; pp. 52–57. [Google Scholar] [CrossRef]
  89. Han, J.; Seo, J.; Kim, J.; Koo, Y.; Ryu, M.; Lee, B.J. Predicting temperature of a Li-ion battery under dynamic current using long short-term memory. Case Stud. Therm. Eng. 2024, 63, 105246. [Google Scholar] [CrossRef]
  90. Zhu, S.; He, C.; Zhao, N.; Sha, J. Data-driven analysis on thermal effects and temperature changes of lithium-ion battery. J. Power Sources 2021, 482, 228983. [Google Scholar] [CrossRef]
  91. Yi, Y.; Xia, C.; Feng, C.; Zhang, W.; Fu, C.; Qian, L.; Chen, S. Digital twin-long short-term memory (LSTM) neural network based real-time temperature prediction and degradation model analysis for lithium-ion battery. J. Energy Storage 2023, 64, 107203. [Google Scholar] [CrossRef]
  92. Zhao, H.; Chen, Z.; Shu, X.; Xiao, R.; Shen, J.; Liu, Y.; Liu, Y. Online surface temperature prediction and abnormal diagnosis of lithium-ion batteries based on hybrid neural network and fault threshold optimization. Reliab. Eng. Syst. Saf. 2024, 243, 109798. [Google Scholar] [CrossRef]
  93. Zafar, M.H.; Bukhari, S.M.S.; Houran, M.A.; Mansoor, M.; Khan, N.M.; Sanfilippo, F. DeepTimeNet: A novel architecture for precise surface temperature estimation of lithium-ion batteries across diverse ambient conditions. Case Stud. Therm. Eng. 2024, 61, 105002. [Google Scholar] [CrossRef]
  94. Kleiner, J.; Stuckenberger, M.; Komsiyska, L.; Endisch, C. Advanced Monitoring and Prediction of the Thermal State of Intelligent Battery Cells in Electric Vehicles by Physics-Based and Data-Driven Modeling. Batteries 2021, 7, 31. [Google Scholar] [CrossRef]
  95. Hasan, M.M.; Ali Pourmousavi, S.; Jahanbani Ardakani, A.; Saha, T.K. A data-driven approach to estimate battery cell temperature using a nonlinear autoregressive exogenous neural network model. J. Energy Storage 2020, 32, 101879. [Google Scholar] [CrossRef]
  96. Kleiner, J.; Stuckenberger, M.; Komsiyska, L.; Endisch, C. Real-time core temperature prediction of prismatic automotive lithium-ion battery cells based on artificial neural networks. J. Energy Storage 2021, 39, 102588. [Google Scholar] [CrossRef]
  97. Tran, M.K.; Panchal, S.; Chauhan, V.; Brahmbhatt, N.; Mevawalla, A.; Fraser, R.; Fowler, M. Python-based scikit-learn machine learning models for thermal and electrical performance prediction of high-capacity lithium-ion battery. Int. J. Energy Res. 2022, 46, 786–794. [Google Scholar] [CrossRef]
  98. Pathmanaban, P.; Arulraj, P.; Raju, M.; Hariharan, C. Optimizing electric bike battery management: Machine learning predictions of LiFePO4 temperature under varied conditions. J. Energy Storage 2024, 99, 113217. [Google Scholar] [CrossRef]
  99. Naguib, M.; Kollmeyer, P.; Emadi, A. Application of Deep Neural Networks for Lithium-Ion Battery Surface Temperature Estimation Under Driving and Fast Charge Conditions. IEEE Trans. Transp. Electrif. 2023, 9, 1153–1165. [Google Scholar] [CrossRef]
  100. Wang, L.; Lu, D.; Song, M.; Zhao, X.; Li, G. Instantaneous estimation of internal temperature in lithium-ion battery by impedance measurement. Int. J. Energy Res. 2020, 44, 3082–3097. [Google Scholar] [CrossRef]
  101. Ströbel, M.; Pross-Brakhage, J.; Kopp, M.; Birke, K.P. Impedance Based Temperature Estimation of Lithium Ion Cells Using Artificial Neural Networks. Batteries 2021, 7, 85. [Google Scholar] [CrossRef]
  102. Amiri, M.N.; Håkansson, A.; Burheim, O.S.; Lamb, J.J. Lithium-ion battery digitalization: Combining physics-based models and machine learning. Renew. Sustain. Energy Rev. 2024, 200, 114577. [Google Scholar] [CrossRef]
  103. Cho, G.; Zhu, D.; Campbell, J.J.; Wang, M. An LSTM-PINN Hybrid Method to Estimate Lithium-Ion Battery Pack Temperature. IEEE Access 2022, 10, 100594–100604. [Google Scholar] [CrossRef]
  104. Kuzhiyil, J.A.; Damoulas, T.; Widanage, W.D. Neural equivalent circuit models: Universal differential equations for battery modelling. Appl. Energy 2024, 371, 123692. [Google Scholar] [CrossRef]
  105. Feng, F.; Teng, S.; Liu, K.; Xie, J.; Xie, Y.; Liu, B.; Li, K. Co-estimation of lithium-ion battery state of charge and state of temperature based on a hybrid electrochemical–thermal–neural-network model. J. Power Sources 2020, 455, 227935. [Google Scholar] [CrossRef]
  106. Yang, R.; Xiong, R.; Shen, W.; Lin, X. Extreme Learning Machine-Based Thermal Model for Lithium-Ion Batteries of Electric Vehicles under External Short Circuit. Engineering 2021, 7, 395–405. [Google Scholar] [CrossRef]
  107. He, Y.B.; Wang, B.C.; Deng, H.P.; Li, H.X. Physics-reserved spatiotemporal modeling of battery thermal process: Temperature prediction, parameter identification, and heat-generation rate estimation. J. Energy Storage 2024, 75, 109604. [Google Scholar] [CrossRef]
  108. Yang, R.; Xiong, R.; Ma, S.; Lin, X. Characterization of external short circuit faults in electric vehicle Li-ion battery packs and prediction using artificial neural networks. Appl. Energy 2020, 260, 114253. [Google Scholar] [CrossRef]
  109. Li, M.; Dong, C.; Yu, X.; Xiao, Q.; Jia, H. Multi-step ahead thermal warning network for energy storage system based on the core temperature detection. Sci. Rep. 2021, 11, 15332. [Google Scholar] [CrossRef]
  110. Sarchami, A.; Tousi, M.; Darab, M.; Kiani, M.; Najafi, M.; Houshfar, E. Novel AgO-based nanofluid for efficient thermal management of 21700-type lithium-ion battery. Sustain. Energy Technol. Assess. 2024, 70, 103934. [Google Scholar] [CrossRef]
  111. Lin, J.; Liu, X.; Li, S.; Zhang, C.; Yang, S. A review on recent progress, challenges and perspective of battery thermal management system. Int. J. Heat Mass Transf. 2021, 167, 120834. [Google Scholar] [CrossRef]
  112. Jindal, P.; Kumar, B.S.; Bhattacharya, J. Coupled electrochemical-abuse-heat-transfer model to predict thermal runaway propagation and mitigation strategy for an EV battery module. J. Energy Storage 2021, 39, 102619. [Google Scholar] [CrossRef]
  113. Sadeh, M.; Tousi, M.; Sarchami, A.; Sanaie, R.; Kiani, M.; Ashjaee, M.; Houshfar, E. A novel hybrid liquid-cooled battery thermal management system for electric vehicles in highway fuel-economy condition. J. Energy Storage 2024, 86, 111195. [Google Scholar] [CrossRef]
  114. Jiang, H.; Xu, W.; Chen, Q. Evaluating aroma quality of black tea by an olfactory visualization system: Selection of feature sensor using particle swarm optimization. Food Res. Int. 2019, 126, 108605. [Google Scholar] [CrossRef] [PubMed]
  115. Pang, Y.; Li, H.; Tang, P.; Chen, C. Synchronization Optimization of Pipe Diameter and Operation Frequency in a Pressurized Irrigation Network Based on the Genetic Algorithm. Agriculture 2022, 12, 673. [Google Scholar] [CrossRef]
  116. Pang, Y.; Tang, P.; Li, H.; Marinello, F.; Chen, C. Optimization of sprinkler irrigation scheduling scenarios for reducing irrigation energy consumption. Irrig. Drain. 2024, 73, 1329–1343. [Google Scholar] [CrossRef]
  117. Li, H.; Pan, X.; Jiang, Y.; Zhou, X. Optimising and analysis of the hydraulic performance of a water dispersion needle sprinkler using RF-NSGA II and CFD. Biosyst. Eng. 2025, 254, 104113. [Google Scholar] [CrossRef]
  118. Fayaz, H.; Afzal, A.; Samee, A.M.; Soudagar, M.E.M.; Akram, N.; Mujtaba, M.; Jilte, R.; Islam, M.T.; Ağbulut, Ü.; Saleel, C.A. Optimization of thermal and structural design in lithium-ion batteries to obtain energy efficient battery thermal management system (BTMS): A critical review. Arch. Comput. Methods Eng. 2022, 29, 129–194. [Google Scholar] [CrossRef]
  119. Shrinet, E.S.; Akula, R.; Kumar, L. Improvement in serpentine channel based battery thermal management system geometry using variable contact area and its multi-objective design optimization. J. Energy Storage 2024, 96, 112726. [Google Scholar] [CrossRef]
  120. Monika, K.; Punnoose, E.M.; Datta, S.P. Multi-objective optimization of cooling plate with hexagonal channel design for thermal management of Li-ion battery module. Appl. Energy 2024, 368, 123423. [Google Scholar] [CrossRef]
  121. Tang, X.; Guo, Q.; Li, M.; Wei, C.; Pan, Z.; Wang, Y. Performance analysis on liquid-cooled battery thermal management for electric vehicles based on machine learning. J. Power Sources 2021, 494, 229727. [Google Scholar] [CrossRef]
  122. Feng, X.H.; Li, Z.Z.; Gu, F.S.; Zhang, M.L. Structural design and optimization of air-cooled thermal management system for lithium-ion batteries based on discrete and continuous variables. J. Energy Storage 2024, 86, 111202. [Google Scholar] [CrossRef]
  123. Yuan, L.; Li, W.; Deng, W.; Sun, W.; Huang, M.; Liu, Z. Cell temperature prediction in the refrigerant direct cooling thermal management system using artificial neural network. Appl. Therm. Eng. 2024, 254, 123852. [Google Scholar] [CrossRef]
  124. Billert, A.M.; Erschen, S.; Frey, M.; Gauterin, F. Predictive battery thermal management using quantile convolutional neural networks. Transp. Eng. 2022, 10, 100150. [Google Scholar] [CrossRef]
  125. Mokhtari Mehmandoosti, M.; Kowsary, F. Artificial neural network-based multi-objective optimization of cooling of lithium-ion batteries used in electric vehicles utilizing pulsating coolant flow. Appl. Therm. Eng. 2023, 219, 119385. [Google Scholar] [CrossRef]
  126. Yetik, O.; Karakoc, T.H. Estimation of thermal effect of different busbars materials on prismatic Li-ion batteries based on artificial neural networks. J. Energy Storage 2021, 38, 102543. [Google Scholar] [CrossRef]
  127. Oyewola, O.M.; Idowu, E.T. Numerical and artificial neural network inspired study on step-like-plenum battery thermal management system. Int. J. Thermofluids 2024, 24, 100897. [Google Scholar] [CrossRef]
  128. Li, A.; Yuen, A.C.Y.; Wang, W.; Chen, T.B.Y.; Lai, C.S.; Yang, W.; Wu, W.; Chan, Q.N.; Kook, S.; Yeoh, G.H. Integration of Computational Fluid Dynamics and Artificial Neural Network for Optimization Design of Battery Thermal Management System. Batteries 2022, 8, 69. [Google Scholar] [CrossRef]
  129. Najafi Khaboshan, H.; Jaliliantabar, F.; Abdullah, A.A.; Panchal, S.; Azarinia, A. Parametric investigation of battery thermal management system with phase change material, metal foam, and fins; utilizing CFD and ANN models. Appl. Therm. Eng. 2024, 247, 123080. [Google Scholar] [CrossRef]
  130. Jin, L.; Xi, H. Multi-objective parameter optimization of the Z-type air-cooling system based on artificial neural network. J. Energy Storage 2024, 86, 111284. [Google Scholar] [CrossRef]
  131. Zheng, A.; Gao, H.; Jia, X.; Cai, Y.; Yang, X.; Zhu, Q.; Jiang, H. Deep learning-assisted design for battery liquid cooling plate with bionic leaf structure considering non-uniform heat generation. Appl. Energy 2024, 373, 123898. [Google Scholar] [CrossRef]
  132. Fini, A.T.; Fattahi, A.; Musavi, S. Machine learning prediction and multiobjective optimization for cooling enhancement of a plate battery using the chaotic water-microencapsulated PCM fluid flows. J. Taiwan Inst. Chem. Eng. 2023, 148, 104680. [Google Scholar] [CrossRef]
  133. Ye, L.; Li, C.; Wang, C.; Zheng, J.; Zhong, K.; Wu, T. A multi-objective optimization approach for battery thermal management system based on the combination of BP neural network prediction and NSGA-II algorithm. J. Energy Storage 2024, 99, 113212. [Google Scholar] [CrossRef]
  134. Zhang, N.; Zhang, Z.; Li, J.; Cao, X. Performance analysis and prediction of hybrid battery thermal management system integrating PCM with air cooling based on machine learning algorithm. Appl. Therm. Eng. 2024, 257, 124474. [Google Scholar] [CrossRef]
  135. Li, W.; Li, A.; Yin Yuen, A.C.; Chen, Q.; Yuan Chen, T.B.; De Cachinho Cordeiro, I.M.; Lin, P. Optimisation of PCM passive cooling efficiency on lithium-ion batteries based on coupled CFD and ANN techniques. Appl. Therm. Eng. 2025, 259, 124874. [Google Scholar] [CrossRef]
  136. Guo, Z.; Wang, Y.; Zhao, S.; Zhao, T.; Ni, M. Modeling and optimization of micro heat pipe cooling battery thermal management system via deep learning and multi-objective genetic algorithms. Int. J. Heat Mass Transf. 2023, 207, 124024. [Google Scholar] [CrossRef]
  137. Qian, X.; Xuan, D.; Zhao, X.; Shi, Z. Heat dissipation optimization of lithium-ion battery pack based on neural networks. Appl. Therm. Eng. 2019, 162, 114289. [Google Scholar] [CrossRef]
  138. Zhou, X.; Guo, W.; Shi, X.; She, C.; Zheng, Z.; Zhou, J.; Zhu, Y. Machine learning assisted multi-objective design optimization for battery thermal management system. Appl. Therm. Eng. 2024, 253, 123826. [Google Scholar] [CrossRef]
  139. Zhao, J.; Fan, S.; Zhang, B.; Wang, A.; Zhang, L.; Zhu, Q. Research Status and Development Trends of Deep Reinforcement Learning in the Intelligent Transformation of Agricultural Machinery. Agriculture 2025, 15, 1223. [Google Scholar] [CrossRef]
  140. He, L.; Tan, L.; Liu, Z.; Zhang, Y.; Wu, L.; Feng, Y.; Tong, B. Optimization of thermal management performance of direct-cooled power battery based on backpropagation neural network and deep reinforcement learning. Appl. Therm. Eng. 2025, 258, 124661. [Google Scholar] [CrossRef]
  141. Xie, F.; Guo, Z.; Li, T.; Feng, Q.; Zhao, C. Dynamic Task Planning for Multi-Arm Harvesting Robots Under Multiple Constraints Using Deep Reinforcement Learning. Horticulturae 2025, 11, 88. [Google Scholar] [CrossRef]
  142. Cheng, H.; Jung, S.; Kim, Y.B. Battery thermal management system optimization using Deep reinforced learning algorithm. Appl. Therm. Eng. 2024, 236, 121759. [Google Scholar] [CrossRef]
  143. Ebbs-Picken, T.; Da Silva, C.M.; Amon, C.H. Hierarchical thermal modeling and surrogate-model-based design optimization framework for cold plates used in battery thermal management systems. Appl. Therm. Eng. 2024, 253, 123599. [Google Scholar] [CrossRef]
Figure 1. Architecture of a deep convolutional neural network [54]. Reprinted with permission from Ref. [54]. Copyright 2019 Elsevier.
Figure 2. Architecture of an RNN [56]. Reprinted with permission from Ref. [56]. Copyright 2024 Elsevier.
Figure 3. Architecture of the LSTM neural network [58]. Reprinted from Ref. [58], under a CC-BY license.
Figure 4. Architecture of the feedforward neural network for HGR prediction [65]. Reprinted with permission from Ref. [65]. Copyright 2017 Elsevier.
Figure 5. Schematic of the ANN integrated with an extended Kalman filter for HGR prediction [68]. Reprinted from Ref. [68], under a CC-BY license.
Figure 6. Wound structure of an 18650-type battery [75]. Reprinted with permission from Ref. [75]. Copyright 2020 John Wiley and Sons.
Figure 7. LSTM-based temperature prediction and heat source analysis for LIB [90]. Reprinted with permission from Ref. [90]. Copyright 2021 Elsevier.
Figure 8. Structure of the CNN-LSTM model [92]. Reprinted with permission from Ref. [92]. Copyright 2024 Elsevier.
Figure 9. Structure of the NARX model [95]. Reprinted with permission from Ref. [95]. Copyright 2020 Elsevier.
Figure 10. Hybrid architecture for predicting battery core temperature [58]. Reprinted from Ref. [58], under a CC-BY license.
Figure 11. Architecture of the Neural-TECMD method [104]. Reprinted from Ref. [104], under a CC-BY-NC-ND license.
Figure 12. Architecture of the electrochemical–thermal–neural network model [105]. Reprinted with permission from Ref. [105]. Copyright 2020 Elsevier.
Figure 13. Architecture of the physics-reserved spatiotemporal model [107]. Reprinted with permission from Ref. [107]. Copyright 2024 Elsevier.
Figure 14. The serpentine channel liquid-cooling structure for the 18650 battery pack [119]. Reprinted with permission from Ref. [119]. Copyright 2024 Elsevier.
Figure 15. Liquid cooling plate for pouch batteries [120]. (a) assembly of the battery module, (b) exploded illustration of the battery module, and (c) detailed view of a cooling plate highlighting its geometric characteristics. Reprinted with permission from Ref. [120]. Copyright 2024 Elsevier.
Figure 16. The 18650 battery pack under air cooling [137]. Reprinted with permission from Ref. [137]. Copyright 2019 Elsevier.
Figure 17. Structure of a battery-cooling system integrating PCM and liquid cooling pipes [142]. Reprinted with permission from Ref. [142]. Copyright 2024 Elsevier.
Figure 18. Workflow of the DeepEDH model and its surrogate-based optimization [143]. Reprinted with permission from Ref. [143]. Copyright 2024 Elsevier.
Table 1. Input parameters and modeling methods of different HGR models.
Work | Input Parameters | Model
Arora et al. [65] | DOD, T, capacity, discharge rate | FNN
Cao et al. [66] | DOD, I, T_a | FNN
Yalçın et al. [67] | I, V, T | CNN
Legala and Li [68] | I, V, surface T, T_a, DOD | FNN
Table 2. Summary of literature on ANN-based cooling system optimization.
Work | Cooling System Type | Input | Output
Mehmandoosti and Kowsary [125] | Pulsating liquid cooling | Frequency and amplitude of pulsating flow | T_max, ΔT_max
Yetik and Karakoc [126] | Air cooling | Busbar material, C-rate, T_air,inlet, V_air, t | T_max
Oyewola and Idowu [127] | Air cooling | Structural parameters, T_air,inlet, V_air | T_max, T_min, ΔT_max, ΔP
Li et al. [128] | Air cooling | Structural parameters, T_air,inlet, V_air | T_max, ΔT_max
Khaboshan et al. [129] | PCM | Structural parameters | T
Jin and Xi [130] | Air cooling | Structural parameters, V_air | T_max, ΔT_max, ΔP
Zheng et al. [131] | Liquid cooling | Structural parameters | T_max, ΔT_max, ΔP
Fini et al. [132] | PCM liquid flow | Re, volume fraction | Nu, ΔP
Ye et al. [133] | PCM and liquid cooling | Structural parameters, V_inlet | T_max, ΔT_max, ΔP
Zhang et al. [134] | PCM and air cooling | Structural parameters, SOC, C-rate, T_amb | T_max, ΔT_max
Li et al. [135] | PCM and immersion cooling | Property and configuration parameters | T_max, ΔT_max
Guo et al. [136] | Micro heat pipe | Structural parameters | T_max, ΔT_max
