Review

Physics-Informed Neural Networks for Advanced Thermal Management in Electronics and Battery Systems: A Review of Recent Developments and Future Prospects

1 Ansys, Inc., Austin, TX 78758, USA
2 Nancy E. and Peter C. Meinig School of Biomedical Engineering, Cornell University, Ithaca, NY 14853, USA
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Batteries 2025, 11(6), 204; https://doi.org/10.3390/batteries11060204
Submission received: 26 April 2025 / Revised: 16 May 2025 / Accepted: 20 May 2025 / Published: 22 May 2025
(This article belongs to the Special Issue Machine Learning for Advanced Battery Systems)

Abstract

The growing complexity, power density, and cooling demands of modern electronic systems and batteries—such as three-dimensional integrated circuit chip packaging, printed circuit board assemblies, and electronics enclosures—have made efficient and dynamic thermal management strategies increasingly urgent. Traditional numerical methods like computational fluid dynamics (CFD) and the finite element method (FEM) are computationally impractical for large-scale or real-time thermal analysis, especially when dealing with complex geometries, temperature-dependent material properties, and rapidly changing boundary conditions. These approaches typically require extensive meshing and repeated simulations for each new scenario, making them inefficient for design exploration or optimization tasks. Physics-informed neural networks (PINNs) have emerged as a powerful alternative that incorporates physical principles, such as mass and energy conservation equations, into deep learning models. This approach delivers rapid and adaptable solutions to the partial differential equations that govern heat transfer and fluid dynamics. This review examines the basic principles of PINNs and their role in thermal management for electronics and batteries, from the small unit scale to the system scale. We highlight recent advancements in PINNs, particularly their superior performance compared to traditional CFD methods. For example, studies have shown that PINNs can be up to 300,000 times faster than conventional CFD solvers, with temperature prediction differences of less than 0.1 K in chip thermal models. Beyond speed, we explore the potential of PINNs in enabling efficient design space exploration and predicting outcomes for previously unseen scenarios. However, challenges such as training convergence in fine-grained or large-scale applications remain.
Notably, research combining PINNs with LSTM networks for battery thermal management at a 2.0 C charging rate has achieved impressive results—an R² of 0.9863, a mean absolute error (MAE) of 0.2875 °C, and a root mean square error (RMSE) of 0.3306 °C—demonstrating high predictive accuracy. Finally, we propose future research directions that emphasize the integration of PINNs with advanced hardware and hybrid modeling techniques to advance thermal management solutions for next-generation electronics and battery systems.

1. Introduction

Electronics and battery systems, which stand out as the two primary contributors to power consumption and heat generation in modern devices, drive the need for advanced thermal management solutions [1,2]. Effective thermal management is essential for maintaining the long-term performance and reliability of electronics and battery systems, particularly as modern devices continue to shrink in size while increasing in power, presenting significant challenges. Consequently, evaluating the thermal performance of these systems during the early design phase is critical, as pre-screening design parameters and manufacturing processes can demand substantial resources [3]. Achieving rapid and precise thermal predictions for various design parameter combinations is thus a key priority. Historically, the finite element method (FEM) and computational fluid dynamics (CFD) have been employed, leveraging computational resources to conduct numerical analyses of temperature profiles and fluid flow based on the fundamental equations governing mass, momentum, and energy conservation [4,5,6,7]. The finite volume method or finite element method, which are common numerical techniques, address partial differential equations (PDEs) by dividing the domain into discrete control volumes or elements and integrating the governing equations across them [8]. This process requires a deep understanding of the underlying physics and the creation of control volumes or meshes, which is critical for obtaining accurate results while managing computational resource demands. However, this approach is often slow and inefficient, particularly considering the evolution of electronic systems from individual chips to packages and full systems and from single battery cells to battery packs and large-scale battery systems featuring billions of transistors and high thermal design power. 
Power densities in next-generation systems can exceed 100 W/cm² [9,10,11]; thus, CFD is becoming increasingly impractical for addressing the thermal simulation needs of today’s advanced and rapidly evolving technologies.
As machine learning becomes more popular, benefiting from advancements in algorithms and GPU-based parallel solvers, its integration with thermal management has emerged as a growing trend [12,13]. Typically, machine learning has found widespread application in areas like generative AI [14,15], image processing [16,17,18,19], and content creation [20,21,22]. However, traditional machine learning (ML) and deep learning (DL) methods face several important limitations in scientific and engineering contexts. First, they typically require large volumes of labeled data to train accurate models—a resource that can be expensive or impractical to obtain, especially for intricate physical systems. Often, generating these datasets relies on extensive, resource-intensive simulations such as CFD, further limiting scalability [23,24]. Second, conventional ML models are highly sensitive to imperfect data, such as noise, sparsity, or bias in training datasets, which can significantly degrade their predictive performance and robustness. This issue is especially problematic in real-world engineering applications, where experimental or simulation data may be limited or of variable quality [25]. Third, most ML models generally function as “black boxes”, producing outputs without offering a clear physical interpretation of the internal parameters [26,27]. This lack of transparency makes it difficult to ensure that model predictions are physically realistic or to trust their extrapolation to unseen scenarios. As a result, purely data-driven models may fail to generalize when faced with new boundary conditions, rare events, or out-of-distribution cases that are common in thermal management.
Physics-informed neural networks (PINNs), on the other hand, overcome this limitation by embedding governing physical principles, typically in the form of PDEs, directly into the training process [28]. The physics-informed technique can participate in stages of initialization, loss function calculation, and architecture design or be hybridized with conventional DL models. This approach enables PINNs to deliver accurate predictions even with limited data, making them well-suited for thermal management tasks governed by established physical laws, such as Fourier’s law of heat conduction or the Navier–Stokes equations for fluid dynamics [29,30,31,32]. Unlike conventional neural networks, PINNs are trained not only on observational data but also on the underlying physics, which serves as a regularization mechanism, narrowing the range of possible solutions and improving their ability to generalize. This makes PINNs particularly advantageous in situations in which traditional numerical methods are too slow or computationally demanding, such as real-time thermal regulation or solving inverse problems like parameter estimation.
Recent research has demonstrated the versatility of PINNs across various thermal management domains. For instance, in building thermal modeling, PINNs have been used to develop control-oriented models that combine the interpretability of physical laws with the expressive power of neural networks, as seen in studies like Gokhale et al.’s work on control-oriented thermal models for buildings [33]. Similarly, Wang et al.’s research on heat transfer in porous media using PINNs demonstrated their ability to accurately predict temperature and heat flux fields without labeled data, achieving computation accelerations five orders of magnitude greater than those of numerical methods [34]. More examples are discussed in Section 2 and Section 3. These efforts suggest that PINNs could revolutionize thermal management in electronics, although the field is still emerging, with fewer direct studies compared to other areas.
In this review paper, we will discuss the applications of PINNs in electronics and battery systems. We will begin by explaining the foundational working principles of PINNs and their variants in Section 2. In Section 3, we will explore PINN research conducted in electronics thermal management at different scales, from chips to boards to systems. In Section 4, following the same scale categories, we describe PINN research conducted in battery thermal management, from single cells to battery packs to battery systems. Within these sections, drawing on the challenges of thermal management at each scale in electronics and batteries, we will evaluate the integration of PINNs with other machine learning techniques and explore variations of the PINN framework. Lastly, we will outline potential future opportunities and prospects for leveraging PINNs in these domains.

2. PINNs and Variations

PINNs represent a significant paradigm shift in scientific machine learning, particularly in solving PDEs governing heat transfer, fluid dynamics, and multiphysics problems. Unlike conventional deep learning models, which require extensive labeled datasets, PINNs embed physical laws—such as energy conservation and material properties—directly into the training process. This hybrid approach enables PINNs to generate accurate, physically consistent solutions, even with limited experimental or simulated data, making them highly valuable for thermal modeling in electronics and battery systems.

2.1. Mathematical Formulation of PINNs

PINNs approximate the solutions of PDEs, which can be generally expressed as follows:
$$\mathcal{N}[u(x, t)] = 0, \quad x \in \Omega, \; t \in [0, T] \tag{1}$$
where $\mathcal{N}$ represents a differential operator that encodes the governing physics, $\Omega$ is the computational domain, and $u(x, t)$ is the solution of the PDE, with $x$ as the spatial coordinate and $t$ as the time of the physical field.
Heat transfer in electronics and battery systems is primarily governed by conduction and convection, with radiation typically being negligible due to small surface areas, moderate temperatures, and low-emissivity materials [35]. In heat conduction simulation, N is defined according to Fourier’s law as follows [36]:
$$\mathcal{N} = \nabla \cdot (k \nabla T) + \dot{q} - \rho c \frac{\partial T}{\partial t} \tag{2}$$
where $T$ is the temperature, $k$ is the thermal conductivity, $\rho$ is the density, $c$ is the specific heat capacity, and $\dot{q}$ is the energy generated per unit volume.
In convection simulation, 3D incompressible flow dynamics are usually considered, where $\mathcal{N}$ consists of two components based on the Navier–Stokes equations, as follows [29]:
$$\mathcal{N}_1 = \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} + \nabla p - \frac{1}{Re} \nabla^2 \mathbf{u}, \qquad \mathcal{N}_2 = \nabla \cdot \mathbf{u} \tag{3}$$
where $\mathbf{u}$, $t$, and $p$ are the fluid velocity, time, and pressure, respectively, and $Re$ is the Reynolds number.
Based on these equations, a forced convection problem, which is a common heat management scenario, can be modeled by coupling the heat conduction and the fluid dynamics. Hence, the PDEs will be defined as follows [37]:
$$\mathcal{N}_1 = \frac{\partial T}{\partial t} + (\mathbf{u} \cdot \nabla) T - \frac{1}{Pe} \nabla^2 T, \qquad \mathcal{N}_2 = \frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u} \cdot \nabla)\mathbf{u} + \nabla p - \frac{1}{Re} \nabla^2 \mathbf{u} + Ri \, T, \qquad \mathcal{N}_3 = \nabla \cdot \mathbf{u} \tag{4}$$
where $Pe$ and $Ri$ denote the Peclet and Richardson numbers, respectively.
Meanwhile, PDE problems usually include boundary and initial conditions, as follows:
$$u(x, 0) = h(x), \quad x \in \Omega; \qquad u(x, t) = g(x, t), \quad x \in \partial\Omega, \; t \in [0, T] \tag{5}$$
where $\Omega$ and $\partial\Omega$ are the computational domain and its boundary, respectively, $h(x)$ is the initial condition, and $g(x, t)$ represents the boundary conditions. The backbone of the PINN is usually a deep neural network (DNN), which takes $x$ and $t$ as inputs and outputs the approximated solution of the PDE.
To calculate the solution, the PINN minimizes the non-negative residual error associated with Equations (1) and (5).
$$\mathcal{L}_N = \int_{[0,T] \times \Omega} \left\| \mathcal{N}[u(x, t)] \right\|^2 dt \, dx, \quad \mathcal{L}_I = \int_{\Omega} \left\| u(x, 0) - h(x) \right\|^2 dx, \quad \mathcal{L}_B = \int_{[0,T] \times \partial\Omega} \left\| u(x, t) - g(x, t) \right\|^2 dt \, dx \tag{6}$$
PINNs leverage automatic differentiation (AD) from deep learning frameworks to compute the derivatives required for the residual calculation. Unlike finite difference or finite element methods, AD avoids numerical discretization errors. Since evaluating the full integral in residual loss can be computationally expensive, PINNs often employ Monte Carlo sampling [38]. A subset of points ( x i , t i ) is randomly sampled from the computational domain, and the mini-batch training algorithm is used to iteratively update the neural network parameters [39].
$$\mathcal{L}_N = \frac{1}{N_N} \sum_{i=1}^{N_N} \left\| \mathcal{N}[u(x_i, t_i)] \right\|^2, \quad \mathcal{L}_I = \frac{1}{N_I} \sum_{i=1}^{N_I} \left\| u(x_i, 0) - h(x_i) \right\|^2, \quad \mathcal{L}_B = \frac{1}{N_B} \sum_{i=1}^{N_B} \left\| u(x_i, t_i) - g(x_i, t_i) \right\|^2 \tag{7}$$
If some ground truth data are available in the computational domain, an additional data-based loss term can also be included.
$$\mathcal{L}_{Data} = \frac{1}{N_{Data}} \sum_{i=1}^{N_{Data}} \left\| u(x_i, t_i) - u_{Data}(x_i, t_i) \right\|^2 \tag{8}$$
Hence, the loss function of the PINN can be defined as the combination of the following residuals:
$$\mathcal{L} = \mathcal{L}_N + \mathcal{L}_I + \mathcal{L}_B + \mathcal{L}_{Data} \tag{9}$$
This approach allows PINNs to generalize beyond the training data and maintain consistency with underlying physics, making them particularly effective for thermal management problems with limited experimental data or unknown boundary conditions [29,37]. For a comprehensive overview of PINN methodologies—including neural network architectures, strategies for integrating physical laws, and applications across fields such as fluid dynamics, materials science, energy, and medicine—see the recent review by Farea et al. [40].
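To make the loss construction above concrete, the following minimal sketch evaluates the sampled residual loss, boundary loss, and their composite for steady 1D heat conduction ($k u'' + \dot{q} = 0$ with zero Dirichlet boundaries). It is an illustrative toy, not any cited implementation: the tiny untrained network, the central-difference derivatives standing in for automatic differentiation, and all numerical values are our own assumptions for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny multilayer perceptron u_theta(x); untrained illustrative weights.
W1, b1 = rng.normal(size=(16, 1)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def u(x):
    """Network approximation of the 1D temperature field at points x."""
    h = np.tanh(W1 @ x[None, :] + b1[:, None])
    return (W2 @ h + b2[:, None]).ravel()

def pde_residual(x, k=1.0, q_dot=1.0, eps=1e-4):
    """Residual of steady conduction k*u'' + q_dot = 0; central
    differences stand in for automatic differentiation here."""
    u_xx = (u(x + eps) - 2.0 * u(x) + u(x - eps)) / eps**2
    return k * u_xx + q_dot

# Monte Carlo collocation points sampled from the domain (0, 1)
x_col = rng.uniform(0.0, 1.0, size=128)
L_N = np.mean(pde_residual(x_col) ** 2)    # PDE residual loss
bc = np.array([0.0, 1.0])                  # Dirichlet boundary: u = 0
L_B = np.mean(u(bc) ** 2)                  # boundary condition loss
L_total = L_N + L_B                        # composite PINN loss
```

In a real PINN, gradient descent would then drive `L_total` toward zero by updating the network weights, with automatic differentiation supplying exact derivatives instead of finite differences.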

2.2. PINN Variants

While the standard PINN framework offers a powerful tool for solving PDE-governed problems with limited data, it still faces notable challenges when applied to complex real-world scenarios—such as those encountered in electronics and battery thermal management. These challenges include (1) difficulty in accurately enforcing boundary conditions, which can degrade solution fidelity; (2) computational inefficiency when scaling to high-dimensional or multiscale problems; (3) poor convergence and susceptibility to local minima, particularly in stiff systems or extrapolation tasks; and (4) limited ability to capture complex physical phenomena like turbulence or multiphysics interactions [41]. To address these challenges, several variants of PINNs have been developed, each focusing on different limitations, which are reviewed in detail in the following subsections.

2.2.1. Balancing Residual and Boundary Losses

One of the most critical challenges in training PINNs is the imbalance between the PDE residual loss and the boundary condition loss, which can significantly hinder convergence and accuracy. This issue often arises because the residual loss, which stems from the PDE constraints across the entire domain, can dominate the boundary loss by several orders of magnitude, leading to the poor satisfaction of boundary conditions. Wang et al. analyzed this phenomenon as a gradient pathology, showing that conventional training dynamics result in vanishing boundary loss gradients, thereby biasing the model toward interior solutions that violate boundary constraints. To address this issue, they proposed a learning rate annealing algorithm and a novel PINN architecture to rebalance gradient flows during training, which led to 50–100× improvements in predictive accuracy across various benchmark problems [42]. Building on this, Yao et al. introduced MultiAdam, a scale-invariant optimizer that adaptively rescales gradients using parameter-wise second-moment statistics. Unlike manual or static reweighting, MultiAdam automatically harmonizes the contributions of loss terms at different scales, maintaining consistent convergence across complex PDE domains. This method improved solution accuracy by 1–2 orders of magnitude across diverse physics scenarios, demonstrating robust performance in multiscale PINN training [43]. Together, these approaches mark a significant step forward in developing reliable and physically consistent PINNs, particularly for thermal management tasks that demand precision at boundary interfaces.
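As a concrete illustration of loss rebalancing, the sketch below shows one way a boundary-loss weight can be annealed toward the ratio of gradient magnitudes, in the spirit of Wang et al.'s learning rate annealing [42]. The update rule, the moving-average coefficient, and the toy gradient norms are our own simplifications, not the authors' exact algorithm.

```python
def annealed_weight(lam, grad_res_norm, grad_bc_norm, alpha=0.9):
    """One annealing step: pull the boundary-loss weight toward the
    ratio of residual-to-boundary gradient magnitudes, so both loss
    terms contribute comparably to parameter updates."""
    lam_hat = grad_res_norm / (grad_bc_norm + 1e-12)
    return alpha * lam + (1.0 - alpha) * lam_hat

# Toy scenario: residual gradients dominate boundary gradients ~1000x,
# the pathology described above.
lam = 1.0
for _ in range(50):
    lam = annealed_weight(lam, grad_res_norm=1.0e3, grad_bc_norm=1.0)
# lam approaches 1000, restoring the boundary term's influence.
```

The weighted loss would then be computed as `L = L_res + lam * L_bc`, with `lam` re-estimated periodically from the actual gradient norms during training.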

2.2.2. Adaptive Sampling Strategies

Sampling strategies play a pivotal role in the training dynamics and accuracy of PINNs, as the locations of residual collocation directly influence how well the solution captures the complex features of the PDE. Uniform sampling, while simple and widely adopted, often results in poor performance in regions with steep gradients or localized phenomena. To address this issue, several adaptive sampling strategies have been proposed. Nabian et al. introduced an importance sampling approach that selects collocation points proportional to the loss value, effectively focusing training on regions with greater residuals and accelerating convergence without additional hyperparameters [39]. Wu et al. conducted a comprehensive study and proposed RAD and RAR-D, which dynamically adjust point distributions based on residual magnitudes and outperform traditional strategies with fewer collocation points [44]. Tang et al. advanced this solution further with the DAS-PINN, a generative model-based method that learns the residual distribution, yielding strong results for high-dimensional or irregular PDEs [45]. More recently, Yu et al. proposed MCMC-PINNs, which use a modified Markov Chain Monte Carlo method to sample collocation points according to a canonical distribution based on PDE residuals. This method adapts the proposal distribution to domain geometry and ensures more thorough exploration of complex solution landscapes while maintaining convergence guarantees [46]. Together, these innovations in adaptive sampling significantly enhance both the efficiency and precision of PINNs, particularly in multiscale or unbounded domain problems.
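A minimal sketch of residual-based adaptive sampling (in the spirit of RAD [44]) follows; the stand-in residual function, the exponent k = 2, and the pool sizes are illustrative assumptions, not parameters from the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)

def residual(x):
    """Stand-in PDE residual with a sharp feature near x = 0.5."""
    return np.exp(-((x - 0.5) ** 2) / 0.001)

# Residual-based adaptive step: draw a dense candidate pool, then
# resample collocation points with probability proportional to the
# residual magnitude raised to a power k (k = 2 here).
candidates = rng.uniform(0.0, 1.0, size=10_000)
weights = np.abs(residual(candidates)) ** 2
probs = weights / weights.sum()
collocation = rng.choice(candidates, size=256, p=probs, replace=False)
# Collocation points now cluster around the steep region at x = 0.5.
```

In practice, this resampling is interleaved with training: after some number of optimizer steps, the residual field is re-evaluated and the collocation set is refreshed to chase the regions the network currently fits worst.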

2.2.3. Variational Formulations in PINNs

While traditional PINNs enforce PDEs through their strong form—minimizing residuals at discrete collocation points (Equations (6) and (7))—this approach often suffers from instability, especially when high-order derivatives are involved or when the solution lacks smoothness. Variational formulations offer a powerful alternative by expressing the PDE in its weak form, where the equation is satisfied in an integrated sense against test functions. This allows for smoother losses, reduced differentiation order (via integration by parts), and improved stability, especially in complex or high-dimensional settings. The evolution of variational PINNs reflects growing recognition of these benefits. The earliest prominent example is the Deep Ritz Method by E and Yu, which recasts the PDE solution as the minimizer of a variational energy functional. This method constructs the trial solution using a deep neural network and optimizes an integral form of the PDE’s energy without enforcing pointwise residuals, thus avoiding high-order derivative computations and enabling efficient training, even in high dimensions [47]. Following this, the VPINN framework by Kharazmi et al. generalized the approach by embedding PINNs in a Petrov-Galerkin setting. In VPINNs, the trial space consists of neural networks, while the test space is constructed using classical polynomial bases (e.g., Legendre polynomials). This variational residual reduces differential order via integration by parts and replaces dense collocation with sparse quadrature, leading to improved numerical stability and efficiency [48]. Building on VPINNs, the hp-VPINNs introduced adaptive domain decomposition and hierarchical polynomial refinement, enabling localized learning and better handling of sharp gradients or singularities in the solution [49]. Around the same time, VarNet proposed a variational training strategy that is fully discretization-free and operates over space–time volumes rather than isolated points. 
By training on integral residuals and using adaptive sampling driven by residual feedback, VarNet enables smoother, more sample-efficient learning and is particularly well-suited for parametric and control applications [50]. Collectively, variational approaches provide a powerful and more physically grounded alternative to classic PINNs, making them especially well-suited for problems with irregular domains, lower solution regularity, or high computational complexity.

2.2.4. Domain Decomposition PINNs

As PINNs are extended to large-scale or multiphysics systems, they often suffer from high computational costs and poor convergence, especially when trying to capture complex or discontinuous physical phenomena [41]. To address these limitations, domain decomposition techniques have been incorporated into the PINN framework, giving rise to variations such as Conservative PINNs (cPINNs) [51] and eXtended PINNs (XPINNs) [52]. These architectures divide the computational domain into smaller subdomains, within which localized PINNs are trained. This strategy not only enhances scalability and parallelizability but also enables tailored network architectures for different subregions of the problem domain.
The cPINN framework focuses on conservation laws, enforcing continuity in both the solution and flux across the boundaries of decomposed subdomains. Each subdomain employs a separate neural network, with interface conditions ensuring physical consistency by stitching local solutions together [51]. This includes enforcing average solution continuity and flux conservation at shared interfaces, which is a critical step for solving hyperbolic PDEs like the Euler equations (Figure 1). Based on this approach, XPINNs further generalize the domain decomposition concept beyond conservation laws, supporting arbitrary space–time decompositions for any type of PDEs. This includes both convex and non-convex geometries, time-dependent or time-independent problems, and even cases with moving interfaces [52]. Each subdomain is governed by its own neural network and optimized independently. XPINNs introduced interface conditions such as residual continuity and average solution enforcement, allowing seamless stitching across irregular domains. In order to fully leverage the advantages of the cPINN and XPINN, a parallelized implementation has also been proposed based on a hybrid MPI + X programming model (where X can be CPUs or GPUs). This enables the efficient training of PINNs on distributed hardware. The parallel framework supports both weak and strong scaling and significantly reduces training time by exploiting localized computation within subdomains.
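The interface stitching idea can be sketched as follows: two untrained toy networks, one per subdomain, are penalized for disagreeing at the shared interface. The network shapes and the value-continuity-only penalty are our own simplifications; cPINNs additionally enforce flux conservation across the interface, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_net():
    """Build a tiny untrained subdomain network x -> u(x)."""
    W1, b1 = rng.normal(size=(8, 1)), np.zeros((8, 1))
    W2 = rng.normal(size=(1, 8))
    def net(x):
        return (W2 @ np.tanh(W1 @ x[None, :] + b1)).ravel()
    return net

# One local network per subdomain: [0, 0.5] and [0.5, 1]
u_left, u_right = make_net(), make_net()

# Interface condition at x = 0.5: the stitched solution must agree in
# value across subdomains (average-solution continuity).
x_iface = np.array([0.5])
mismatch = u_left(x_iface) - u_right(x_iface)
L_iface = float(np.mean(mismatch ** 2))  # added to each subdomain's loss
```

Because each subdomain network only ever evaluates points in its own region plus the shared interface, the two losses can be minimized on separate workers, which is what makes the MPI + X parallelization described above possible.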
The physics-informed neural network (PINN) has emerged as a powerful modeling framework for electronics thermal management (ETM), offering a compelling alternative to conventional numerical solvers. By embedding physical laws—such as Fourier’s law and the Navier–Stokes equations (Equations (2)–(4))—directly into the training process, PINNs eliminate the need for large datasets and deliver fast, accurate solutions to partial differential equations, even in data-sparse or complex settings. As summarized in Table 1, the adaptability of PINNs is further strengthened by a rich ecosystem of variants designed to overcome key training and scalability challenges. For instance, advanced optimizers like MultiAdam and reweighting strategies mitigate the imbalance between PDE residual and boundary losses, improving solution fidelity. Adaptive sampling methods, such as importance sampling, DAS-PINNs, and MCMC-PINNs, dynamically allocate collocation points where they are most needed, boosting convergence and efficiency in multiscale and sharp-gradient regions. Variational formulations—including the Deep Ritz Method, VPINNs, and VarNet—reformulate PINNs in a weak form, reducing the order of derivatives and enhancing stability for irregular geometries. Domain decomposition approaches like cPINNs and XPINNs enable parallelization and localized learning, making PINNs scalable to large or heterogeneous ETM problems. Among these variants, domain decomposition methods (such as cPINNs and XPINNs) are particularly promising for electronics thermal management, where modeling large-scale, multi-material systems and capturing localized hot spots are critical. Their main advantage lies in improved scalability and parallelizability; however, their practical implementation can be complex, especially when managing interface conditions between subdomains. 
Loss balancing strategies are also valuable for both electronics and battery applications, as they improve the accuracy of boundary and interface temperature predictions, which is essential for device reliability and safety. The main challenge with these approaches is that optimal weighting can be problem-dependent and may require extensive hyperparameter tuning. In battery systems, adaptive sampling approaches show strong potential, as they can efficiently target regions with rapid thermal changes or sparse data, which are common issues in battery packs and large-format cells. Ultimately, the choice of variant depends on the complexity and scale of the system being modeled, the available data, and computational resources.
While many machine learning approaches are fundamentally limited by the size and quality of available datasets, a key advantage of PINNs is their ability to leverage governing physical laws, thereby reducing reliance on large experimental or simulated datasets. In PINN frameworks, the physical equations themselves guide model learning, enabling accurate solutions even when data are sparse or partially missing. However, the effectiveness of PINNs can still be influenced by data-related factors such as measurement noise, limited boundary condition data, or incomplete coverage of operating regimes. Thus, while PINNs are more robust to data scarcity than purely data-driven methods, careful attention to the available data—however limited—remains important, particularly in complex or ill-posed real-world scenarios.

3. Applications of PINNs in Electronics Thermal Management

For general heat transfer physics, there are three modes of heat transfer—conduction, convection, and radiation—depending on the heat transfer medium [54]. Conduction is governed by Fourier’s Law: heat flux is proportional to the temperature gradient, and the problem is linear when thermal conductivity is constant in a steady-state scenario; in transient cases, the response also depends on density and heat capacity. Temperature-dependent thermal conductivity, density, or heat capacity introduces non-linear physical characteristics into this scenario [55,56]. Convection is governed by Newton’s Law of cooling, and the heat transfer coefficient depends on fluid properties, such as temperature, fluid velocity, and flow regime (laminar or turbulent) [57]. Radiation is governed by the Stefan–Boltzmann Law and is non-linear due to the fourth-power dependence on temperature [58]. Given power densities of varying intensities and different cooling requirement scales, thermal management systems employed in electronics can generally be categorized into passive cooling and active cooling [59]. Passive cooling dissipates heat without any active energy input; techniques in this category include heat sinks, thermal pads/interfaces, heat pipes, and vapor chambers, whereas active cooling requires additional power input, such as fans, liquid cooling, or jet impingement cooling. Besides these traditional cooling mechanisms, two-phase liquid cooling is increasingly standard in data centers, while immersion cooling is gaining traction for its efficiency [60,61,62].
The advancement of cutting-edge information and digital technologies—including 5G, artificial intelligence, cloud computing, autonomous vehicles, and data centers—has greatly increased the need for functional electronics, driving up their power densities and reliability requirements [63,64]. Maintaining a safe operating temperature is essential for the proper functioning of each unit, making thermal management systems critical. Depending on the physical size of the system, thermal management can be classified into three levels: chip, board, and system [35]. The thermal management challenges vary across the three levels. As depicted in Figure 2, at the chip level (Figure 2a), densely packed power tiles represent each functional unit, with conduction as the dominant heat transfer mode and boundary conditions set by the heat transfer coefficient. Material properties may be anisotropic and temperature-dependent, varying based on design accuracy requirements. At the board level (Figure 2b), where many components are involved, such as electronic control modules, cooling solutions, and the PCB itself, heat transfer becomes more complex as various media—such as air or dielectric liquids—come into play. In addition, multiple components with varying power inputs are present, which can introduce factors like direct current internal resistance losses or electromagnetic power losses involving multiphysics considerations. Despite this increased complexity and the larger number of design parameters, board-level challenges are not necessarily greater than those at the chip level, largely because most passive electrical and heat-generating components exhibit isotropic characteristics. Finally, at the system level (Figure 2c), the intricate geometry of components not only increases the volume of data but also lengthens both the training and forecasting periods.
Following this scaling methodology, this section analyzes selected PINN applications at the chip, board, and system levels of electronics thermal management and aims to bridge the connections between the different scales.

3.1. Chip Thermal Management

For chip thermal management, heat is generated at the nanoscale with intensive density and is mainly conducted through the die before it reaches the chip surface, is cooled through external multilayer package and thermal interface materials, and is eventually dissipated through external conduction or convection [35]. Therefore, conduction is the dominant heat transfer mode at the chip level, while external conduction or convection can be simplified and simulated with heat transfer coefficient boundary condition settings. Radiation is negligible due to the small surface area and low emissivity (polished surface) [35]. According to Fourier’s Law, conduction heat transfer is linear when thermal conductivity is constant. There are two main challenges associated with chip thermal management, namely, temperature-dependent material properties (thermal conductivity, density, and heat capacity) and high-dimensional power densities due to the hierarchical structure or advanced 3D IC structure [67].
Chip technology has become very dense with power tiles, requiring a very fine resolution for each power tile to be captured correctly, which is not possible with the current finite element analysis (FEA) approach, as the discretization of the full chip is extremely time-consuming and resource-intensive. Machine learning approaches have also been designed to accelerate the solution of these high-dimensional, nonlinear PDEs. Traditional machine learning approaches to chip thermal management require large data inputs. For example, Sadiqbatcha et al. built a long short-term memory (LSTM) model trained on experimental infrared thermal images to estimate representative spatial features of 2D heatmaps with similar accuracy and high efficiency [68]. Chen et al. [69] adopted graph convolutional networks (GCNs) with global features, skip connections, edge-based attention, and principal neighborhood aggregation to efficiently estimate thermal maps of 2.5D chiplet-based systems while demonstrating strong generalization to unseen datasets.
On the other hand, PINNs demonstrate good potential for overcoming the data-intensive limitations of traditional machine learning. Liu et al. applied a physics-aware operator learning method (DeepONet) to a 21 × 21 × 11 mesh grid-based single-cuboid geometry with a 2D power map and constant HTC boundary conditions [70]. In the study, two families of design configurations, namely, the boundary conditions for each individual surface and the locations and intensities of external or internal heat sources, were encoded as input functions and fed into separate “branch nets”, while the sampled coordinates were fed into another sub-network, the “trunk net”. The outputs of the k branch nets and the single trunk net were combined via a Hadamard (elementwise) product and summed to represent the predicted temperature field. The framework was then trained as a multi-input DeepONet (Figure 3) [71], with the total loss minimized using gradient descent based on automatic differentiation. The result was 300,000 times faster than the commercial Celsius 3D solver, with a max/min temperature difference of less than 0.1 K. However, this method has limitations in scalability and generalization, as it does not include orthotropic or temperature-dependent material properties.
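The branch/trunk combination described above can be made concrete with a minimal sketch. Layer sizes, the choice of two branch inputs, and all weights below are hypothetical illustrations, not the cited study’s architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, W1, W2):
    """Tiny two-layer net: tanh hidden layer, linear output (illustrative)."""
    return np.tanh(x @ W1) @ W2

p = 16  # latent feature dimension shared by all sub-networks
W = {name: (rng.normal(size=(d, 32)) * 0.1, rng.normal(size=(32, p)) * 0.1)
     for name, d in [("bc", 6), ("power", 64), ("trunk", 3)]}

def deeponet_predict(bc_vec, power_vec, coords):
    """Feed each design-configuration family to its own branch net and the
    sampled (x, y, z) coordinates to the trunk net, then combine all outputs
    via a Hadamard (elementwise) product and sum over the latent dimension."""
    b_bc = mlp(bc_vec[None, :], *W["bc"])          # (1, p)
    b_pw = mlp(power_vec[None, :], *W["power"])    # (1, p)
    trunk = mlp(coords, *W["trunk"])               # (n_points, p)
    return np.sum(b_bc * b_pw * trunk, axis=1)     # predicted T at each point

T_pred = deeponet_predict(rng.normal(size=6), rng.normal(size=64),
                          rng.normal(size=(5, 3)))
```

In training, the weights of all sub-networks are updated jointly so that the summed product reproduces the temperature field across sampled design configurations.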
Another study conducted by Chen et al. introduced an enhanced PINN approach, ThermPINN, designed for rapid and accurate full-chip thermal analysis of Very Large-Scale Integration (VLSI) chips [72]. Standard PINNs, while innovative, suffer from slow training convergence. In Chen’s work, the temperature distribution is expressed by separation of variables along the x and y directions, forming two cosine vectors with coefficients Cpq. The authors also considered that both the thermal conductivity and the leakage power vary with temperature and used appropriate models to describe their temperature dependence across a specific range. First, the spatial variables are separated into cosine vectors, forming a Q × P matrix whose inner product with the Cpq matrix yields the temperature value through a discrete cosine neural network. Second, the effective convection coefficient, m, and the ambient temperature, T0, are mapped to Cpq through a multi-layer perceptron (MLP), whose parameters are learned via backpropagation. The thermal equation is then coupled with a loss function, and an unsupervised learning method is used to train the networks. The authors also applied a plain PINN as a benchmark, in which the position, the effective convection coefficient, and the ambient temperature are directly parameterized to train the MLP.
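The cosine-series construction can be illustrated with a short sketch; the mode counts, die dimensions, and coefficient values below are hypothetical, not those of ThermPINN:

```python
import numpy as np

P, Q, Lx, Ly = 4, 4, 1.0, 1.0  # illustrative mode counts and die dimensions

def temperature(x, y, C):
    """Evaluate T(x, y) as the inner product of a Q x P cosine basis matrix
    with the coefficient matrix C_pq (separation of variables)."""
    cx = np.cos(np.arange(P) * np.pi * x / Lx)   # cosine vector along x
    cy = np.cos(np.arange(Q) * np.pi * y / Ly)   # cosine vector along y
    return float(np.sum(np.outer(cy, cx) * C))   # inner product with C_pq

# A field with only the constant mode active is uniform everywhere:
C = np.zeros((Q, P))
C[0, 0] = 300.0
T_sample = temperature(0.3, 0.7, C)
```

In the ThermPINN setting, a neural network maps the parameterized inputs (m, T0) to the Cpq coefficients rather than fixing them by hand as above.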
The findings indicate that ThermPINN’s accuracy was slightly below that of the plain PINN, but within the same order of magnitude. However, it demonstrated strong potential for faster training, making it more practical for EDA applications, with both PINNs outperforming traditional FEM solutions in terms of speed. The model parameterizes key variables (the ambient temperature and the effective convection coefficient), enabling efficient design space exploration and uncertainty quantification (UQ).

3.2. Board Thermal Management

In board thermal management, the electrical components hosted on a printed circuit board, such as resistors, capacitors, memory cards, integrated circuits (ICs), heatsinks, and solder joints, vary widely by application and industry, with some being passive and others producing high-density, dynamic thermal power under different operating conditions [73]. Although techniques like air cooling, cold plates, jet impingement, and immersion cooling are well established, thermal management remains a significant challenge: addressing the diverse safe temperature limits, space constraints, and power demands across components and product generations complicates redesign and simulation efforts. Simplified simulations of board thermal management using black-box reduced-order models (e.g., linear time-invariant [74], linear parameter-varying [75], and singular value decomposition [76]) are possible but often lack field-specific accuracy or require extensive training data, whereas PINNs offer a promising approach for prescreening and estimating untested scenarios.
On the board, power chips with different layers may be soldered to baseplates or printed circuit boards (PCBs). In a study conducted by Yang et al., four SiC MOSFET chips were mounted on DBC AlN substrates with integrated deionized-water pin-fin cooling channels on top. Five parameters, including the initial temperature, heat flux, total chip power, baseplate thickness, and pin-fin height, define the explored design space [77]. Because the SiC power module comprises layers with different material properties, each representing a separate physical domain, the authors applied a total of nine Fourier neural networks with soft coupling constraints in the PINN for rapid design exploration, outperforming traditional FEM solvers in thermal profile prediction speed. Seven Fourier networks represent and fit the thermal equations for each physical domain, whereas two address the fluid flow and convective heat transfer PDEs. Random sampling points are generated uniformly across the domain and boundaries to evaluate a loss function that integrates PDE residuals, boundary condition errors, and interface continuity penalties, balanced using adaptive weights. The model undergoes iterative training via backpropagation with the Adam optimizer and an exponentially decaying learning rate, minimizing the loss below a threshold (10−5) for synchronized convergence across networks. Once trained, the PINN model enables the rapid inference of thermal fields for any input parameter combination without retraining, offering an efficient alternative to traditional numerical methods. The authors report that, in terms of scalability, the PINN delivered performance similar to that of COMSOL for 100 simulations.
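A minimal sketch of such a composite loss is shown below. The adaptive-weight update is one common heuristic assumed for illustration (rescaling each term toward the largest mean gradient magnitude), not necessarily the cited paper’s exact rule:

```python
import numpy as np

def composite_loss(pde_res, bc_err, iface_jump, weights):
    """PINN loss across coupled domains: mean-squared PDE residuals,
    boundary-condition errors, and interface-continuity penalties,
    each balanced by a per-term weight."""
    terms = [np.mean(t ** 2) for t in (pde_res, bc_err, iface_jump)]
    return sum(w * t for w, t in zip(weights, terms))

def update_weights(weights, term_grads, alpha=0.9):
    """Illustrative adaptive weighting (an assumption, not the study's rule):
    nudge each weight toward the ratio of the largest mean gradient
    magnitude to its own, so no loss term dominates training."""
    g = np.array([np.mean(np.abs(gi)) for gi in term_grads])
    target = g.max() / np.maximum(g, 1e-12)
    return alpha * np.asarray(weights, dtype=float) + (1 - alpha) * target

# Evaluate the loss on toy residual samples:
loss = composite_loss(np.array([0.1, -0.2]), np.array([0.05]),
                      np.array([0.0]), [1.0, 10.0, 10.0])
```

Each of the nine sub-networks contributes its own residual terms to this single scalar, which is why the adaptive balancing matters for synchronized convergence.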
However, when expanding to a significantly larger set of simulations, such as 10,000 cases, the PINN-based approach demonstrated substantially higher simulation efficiency, particularly for exploring expansive design spaces. It should be noted, though, that the PINNs were trained on GPUs while the benchmark COMSOL simulations ran on CPUs, which is not a fair comparison given the parallel computation capabilities of GPUs. In addition, because the authors introduced geometrical parameters, mesh-dependent accuracy is a potential risk when scaling.
Another application involves a cooling component on the board, the thermoelectric cooler (TEC), which consists of a series of N-type and P-type semiconductor materials [78]. When an electric current passes through these materials, heat is absorbed from one side (the cold side) and released on the opposite side (the hot side), a phenomenon called the Peltier effect. Simulating and designing TECs can be challenging due to their complex physics and computational demands: (1) TECs operate based on intertwined thermal and electrical phenomena, governed by nonlinear PDEs and highly influenced by temperature-dependent material properties, including the thermal conductivity, the Seebeck coefficient, and the electrical conductivity; (2) 3D FEA simulation requires fine spatial discretization, resulting in large systems of equations and high demands on computing resources; and (3) identifying optimized parameters escalates the complexity. The study conducted by Chen et al. introduced a surrogate model that reduces the 3D TEC geometry to a 1D problem incorporating key parameters such as current density and thermal boundary conditions [79]. The implicit physics-constrained neural network (IPCNN) framework proposed by the authors employs a two-stage training process. First, temperature-dependent material properties are approximated using an extreme learning machine (ELM); a PINN then enforces TEC-specific PDEs in the second stage. This method achieves an 8.5× speedup over traditional COMSOL simulations and enhances stability compared to conventional PINNs. Additionally, a hybrid finite element neural network (FENN) method integrates the surrogate model into COMSOL, yielding a 5.1× speedup and a 5.4× memory reduction for VLSI chip thermal analysis. The authors demonstrated that the IPCNN approach converged to a loss of 10−6, versus 10−1 for traditional PINNs, over smaller length ranges.
It avoids the slow convergence and large errors of one-step PINN training by separating the modeling of material properties and PDE solutions, thereby reducing the optimization search space. However, a decrease in accuracy was observed as the parameter ranges widened (e.g., length from 0.05 to 1.2 mm), suggesting scalability limitations, in addition to the increased complexity of implementing a two-stage process compared to a single-step PINN. The scalability of broader parameter ranges based on this study can potentially be achieved by employing multiple neural networks for subregions, as suggested in the text. Further refinement could also involve adaptive learning rates or advanced network architectures to maintain accuracy across diverse conditions. Integrating more physics constraints or exploring alternative approximators beyond ELM could also boost performance. Overall, this study underscores IPCNN’s potential as a transformative tool in TEC modeling, balancing efficiency and precision while identifying pathways for future enhancements in board thermal management.
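The first, property-fitting stage of such a two-stage scheme can be sketched as follows. The conductivity model, hidden-layer size, and data here are hypothetical stand-ins, not the cited study’s materials or parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_fit(T, y, n_hidden=50):
    """Extreme learning machine: a random, fixed hidden layer whose output
    weights are solved in closed form by least squares (no iterative
    training), used here to fit a temperature-dependent property."""
    mu, sd = T.mean(), T.std()  # normalize inputs so tanh does not saturate
    W, b = rng.normal(size=(1, n_hidden)), rng.normal(size=n_hidden)
    H = np.tanh(((T - mu) / sd)[:, None] @ W + b)
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return lambda t: np.tanh(((np.atleast_1d(t) - mu) / sd)[:, None] @ W + b) @ beta

# Illustrative silicon-like conductivity curve (a placeholder model):
T_grid = np.linspace(300.0, 400.0, 200)
k_true = 150.0 * (300.0 / T_grid) ** 1.3
k_hat = elm_fit(T_grid, k_true)
max_err = np.max(np.abs(k_hat(T_grid) - k_true))
```

Because the property fit is cheap and closed-form, the second-stage PINN only has to search over the PDE solution, which is the mechanism behind the reduced optimization search space described above.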
Another study, conducted by Farrag et al., involved the soldering reflow process (SRP) [80], which occurs during PCB manufacturing. In SRP, solder paste is melted and then solidified to connect electronic components to PCBs, and precise temperature control is critical to ensure PCB quality. Due to the escalating complexity of PCBs and the growing number of different electrical components, monitoring and controlling the SRP is extremely difficult. The SRP-PINN model proposed by Farrag et al. leverages PINNs to predict the temperature distribution across PCBs, ensuring that solder joints meet manufacturer-specified thermal profiles for quality assurance. Unlike traditional CFD approaches, which are computationally intensive, the SRP-PINN integrates PDEs into a DNN to achieve accurate predictions with limited experimental data. The study uses a 1D heat transfer model along the PCB length, trains the PINN with sparse data from one recipe, and demonstrates its generalizability across different PCB designs and soldering recipes. The experiments, conducted using a Heller 1707 W reflow oven and SAC305 solder paste, show the model’s effectiveness, achieving 98% accuracy compared to 97% for a hybrid physics–ML benchmark, with potential applications in real-time manufacturing optimization.

3.3. System Thermal Management

When it comes to the system scale, in addition to the above-mentioned chip- and board-level challenges, intricate geometrical configurations pose a challenge due to the complexity of CAD geometries, as in electronics enclosures and large-scale cooling components [81,82,83,84]. To precisely capture the fluid flow and temperature profiles, setting non-conformal meshes in different regions to balance calculation accuracy against computing resources requires significant human instruction and experience, in addition to the time needed to set up a simulation [85,86]. A PINN can be very useful in resolving such repetitive work and saving running time when only a few parameters need to be modified for a new design.
In the data center field, energy consumption is long-term and intensive, and heat generation is significant due to the computing requirements. The primary cooling source is the heating, ventilation, and air conditioning (HVAC) system, while heat is generated by racks, power supplies, and power generators. Chen et al. demonstrated notable performance in a six-month case study on data center thermal modeling using an adaptive physically consistent neural network (A-PCNN) (Figure 4) [66]. The approach leveraged an NN with Softplus activation functions, replacing traditionally preset and fixed coefficients to reduce trial-and-error costs and increase flexibility. Specifically, it reduced the mean absolute error by 17.3% for a 15 min forecast and by 79.2% over a 7-day period.
Tanaka et al. developed an alternative modeling approach for system-level thermal analysis by employing data compression through proper orthogonal decomposition combined with physics-informed machine learning (PIML). To evaluate the effectiveness of the PIML technique, two distinct thermal mathematical frameworks were developed to explore how the training dataset size and model complexity influenced prediction precision. Furthermore, the study contrasted the prediction accuracy and computational training expenses of traditional data-driven machine learning with those of the PIML approach. The results demonstrated that this method accurately forecasted the temperature across both models in diverse heat input scenarios while also reducing the overall training cost by 26% to 81% relative to data-driven machine learning techniques, showcasing the advantages of physics-informed strategies [87].
Zhang et al. investigated the application of PINNs in simulating fluid flow and heat transfer in manifold microchannel (MMC) heat sinks designed for cooling high-power Insulated Gate Bipolar Transistors (IGBTs), which are critical components in power electronics [88]. The study developed a PINN model with two sub-networks—one for flow dynamics and another for thermal behavior—each employing a DNN with a sine activation function to capture high-order derivatives and mitigate vanishing gradient issues. Compared to traditional CFD simulations, PINNs show similar trends, such as increased pressure drops and decreased temperatures with higher inlet velocities, although discrepancies occur in regions with rapid flow changes and maximum temperature predictions. The mesh-free nature of PINNs and their ability to embed physical laws into the loss function enable the efficient simulation of complex geometries with fewer data than purely data-driven approaches. Additionally, the paper explores PINNs’ potential in solving inverse problems, like estimating kinematic viscosity and thermal diffusivity, highlighting their versatility. Despite computational expense and sensitivity to geometry and hyperparameters, PINNs emerge as a promising alternative for thermal management in engineering applications.
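The advantage of a sine activation for derivative computation can be illustrated with a minimal one-hidden-layer sketch (sizes are hypothetical). A PINN would obtain this derivative by automatic differentiation; here the analytic chain-rule derivative is checked against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(1, 32)), rng.normal(size=32)
W2 = rng.normal(size=(32, 1))

def u(x):
    """One-hidden-layer network with a sine activation: the derivative of
    sin is cos, a shifted sine, so higher-order derivatives keep the same
    magnitude instead of vanishing like saturating activations' tails."""
    return np.sin(x @ W1 + b1) @ W2

def du_dx(x):
    """Exact first derivative via the chain rule."""
    return (np.cos(x @ W1 + b1) * W1) @ W2

x = np.array([[0.3]])
h = 1e-6
fd = (u(x + h) - u(x - h)) / (2 * h)   # central finite-difference check
analytic = du_dx(x)
```

This is why sine activations help PINNs that must evaluate second- and higher-order PDE residuals, as in the flow and thermal sub-networks described above.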
All the system thermal management benefits mentioned above stem from the implementation of the PINN framework. With emerging technologies and fast turn-around requests in high-tech consumer electronics, data centers, and electric vehicles, PINNs have more potential when computational requests escalate.
The key methods for electronics thermal management at different scales, along with highlights for each scale, are summarized in Table 2 below:

4. Applications of PINNs in Battery Thermal Management

Battery systems involve multiple physical phenomena, such as electrochemical reactions, heat transfer, and mechanical stress, making simulation and analysis complex [89,90]. Uses for PINNs include state estimation [91], degradation [92,93], and aging estimation [93], demonstrating the interest in and potential of PINNs for battery systems. Finegan et al. outlined a perspective, discussed the challenges related to scarce data for analysis, and recommended physics-based learning for predicting battery failure more accurately and reliably [94]. Physics integration can include physics-based datasets, physics-informed training constraints, and physics-guided algorithm structures, which are hybrid approaches that merge data-driven and physics-based methods.
Thermal management in batteries is critical, as thermal runaway (TR) is one of the primary safety concerns in batteries [95,96]. It occurs when an increase in temperature accelerates internal chemical reactions, generating more heat and potentially causing an uncontrollable chain reaction that may result in a fire or explosion. TR in a single LIB can rapidly propagate from the root cell to all adjacent cells, thus resulting in catastrophic accidents in large-scale battery packs and systems. Therefore, efficient battery thermal management systems at different scales are critical for ensuring the safe temperature of each cell and mitigating the TR phenomenon overall.
Battery thermal management simulation presents a range of challenges across different levels, from the cell to the pack to the system (Figure 5). At the cell level, the need to capture non-linear thermal behavior and electrochemical–thermal multiphysics interactions makes modeling complex, especially for the accurate real-time prediction of thermal runaway events. Moving to the pack level, cell-to-cell variability, non-uniform temperature distribution, and the impact of cell arrangement and manufacturing deviations complicate the prediction of thermal behavior and require detailed spatial resolution. At the system level, the challenges expand to modeling under wide operational conditions, ensuring effective heat dissipation coordination across components, and maintaining a level of model sophistication that balances accuracy and computational efficiency. These challenges are not confined to a single scale; they can play a role across different scales. In addition, unlike electronics thermal management, where the heat source comprises power losses from electrical components, traces, or frequency-varying processes, the heat source in batteries is mainly introduced through multiple endothermic and exothermic electrochemical processes and changes dynamically with the charging state, which makes capturing the exact source term and modeling battery thermal behavior even more challenging.
The interconnections among the challenges related to battery thermal management simulation across the cell, pack, and system levels are deeply intertwined, as issues at one scale directly influence the others, creating a complex multiphysics problem. The complexity of precise cell thermal management propagates to the pack level, where cell-to-cell variability and non-uniform temperature distribution, which are driven by manufacturing deviations and cell arrangement, lead to uneven heat generation and dissipation, further complicating thermal predictions. At the system level, these challenges culminate in the need to model the entire battery pack under diverse operating conditions, where the cumulative effects of cell- and pack-level thermal inconsistencies must be addressed through coordinated heat dissipation strategies, all while balancing computational efficiency with model accuracy. Unlike electronics thermal management, where heat sources are primarily power losses from electrical components, BTMS must account for the dynamic electrochemical heat sources, making the accurate capture of source terms a cross-scale challenge that links the cell, pack, and system levels in a unified modeling framework. Therefore, this section will divide the PINN research into battery cells, battery packs, and battery systems, following the same escalating order as Section 3.

4.1. Battery Cell Thermal Management

For battery thermal profile prediction, the source term is generally simplified as a uniform heat generation source or a thermal resistance network [101,102]. Many researchers have simplified the PDEs to create lumped-parameter models. Research can, however, be conducted using a realistic 3D battery thermal model at different charging states.
Deng et al. applied a PINN to integrate the electric–thermal mechanism of the battery with data information through a weight-adaptive function [103]. This was achieved by integrating transient thermal equations, with heat generation uniformly calculated from the relationship between current and voltage. The thermal equation is relatively easy to train due to the constant material properties, and the battery heat generation rate is determined simply by Joule heating, without considering the various reactions and their exothermic rates. Wang et al. proposed a battery-informed neural network (BINN) that selects features (voltage, current, experiment time, etc.) to obtain the cell surface temperature, open circuit voltage, and internal resistance, which reflect aging [98].
Kim et al. applied a multiphysics-informed neural network (MPINN) to the thermal runaway analysis of a simple cylindrical lithium-ion cell, without any charge or discharge processes [97]. The MPINN embeds physical laws—such as the energy balance equation and the Arrhenius law—into the neural network, enabling it to estimate time- and space-dependent temperature and concentration profiles more effectively than purely data-driven models like standard artificial neural networks (ANNs). The study achieves significant advancements by demonstrating that the MPINN outperforms ANNs in accuracy across various data availability scenarios (Figure 6). With fully labeled data, the MPINN reduces the mean absolute error (MAE) and the root mean squared error (RMSE) compared to ANNs (e.g., MAE of 0.46 vs. 1.17 for temperature). In semi-supervised settings with limited labeled data, the MPINN’s errors remain low (MAE of 0.08 vs. 47.46 for ANN), and it can even predict TR without labeled data for positive electrode decomposition, leveraging its physics-informed framework. This capability positions the MPINN as a promising surrogate model for real-time TR prediction and battery safety optimization, which is validated through comparisons with high-fidelity COMSOL simulations. Looking forward, the paper suggests promising improvements, including expanding the MPINN to model additional TR mechanisms like negative electrode and solid electrolyte interphase (SEI) layer decomposition for a more comprehensive representation. Reducing the computational cost of training, currently a bottleneck, would enhance its practicality. Additionally, applying the MPINN to complex battery geometries or multi-cell systems could broaden its real-world applicability, thereby advancing battery management systems and safety designs. These enhancements could solidify the MPINN’s role in improving LIB reliability and safety.
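The Arrhenius-type decomposition term that such a network embeds in its residuals can be sketched as follows. All parameter values are illustrative placeholders, not those of the cited study:

```python
import numpy as np

R_GAS = 8.314                        # universal gas constant, J/(mol K)
A, Ea, dH = 1.0e13, 1.35e5, 1.0e3    # placeholder frequency factor,
                                     # activation energy, and reaction heat

def reaction_rate(c, T):
    """First-order decomposition: dc/dt = -A * c * exp(-Ea / (R T))."""
    return -A * c * np.exp(-Ea / (R_GAS * T))

def heat_source(c, T):
    """Exothermic heat released as the reactant is consumed; this enters
    the energy balance residual of the physics-informed loss."""
    return -dH * reaction_rate(c, T)

# The steep growth of the rate with temperature is the positive feedback
# loop behind thermal runaway:
ratio = heat_source(1.0, 450.0) / heat_source(1.0, 350.0)
```

Because the network must satisfy both the concentration equation and the coupled energy balance, it can infer runaway onset even where labeled temperature data are sparse, which is the behavior reported above.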

4.2. Battery Pack Thermal Management

Cho et al. explore the use of a PINN to tackle significant challenges associated with managing lithium-ion battery packs, particularly the accurate prediction of temperature distributions [104]. The PINN approach addresses these issues by integrating physical laws, such as energy balance equations, directly into the neural network’s loss function. This hybrid method combines the strengths of physics-based and data-driven models, reducing the need for large datasets and extensive parameter tuning while maintaining accuracy, especially in scenarios with limited data or unknown initial conditions. The paper demonstrates notable success in applying the PINN method to improve temperature prediction accuracy within lithium-ion battery packs (Figure 7). By embedding physical constraints into the neural network, the PINN achieves a root mean square error (RMSE) of 0.57 °C for the Direct Current Fast Charge (DCFC) test profile and 0.52 °C for the Grade Load (GL) 100 test profile. These results mark a significant improvement over traditional methods, offering a more efficient and reliable way to monitor battery temperatures. Enhanced accuracy is particularly valuable for preventing thermal runaway and extending battery life, which are key concerns in battery management systems. The study also refines the PINN’s performance by incorporating additional inputs, such as chamber temperature, and optimizing the neural network architecture. This hybrid approach not only outperforms conventional models in terms of precision but also reduces computational overhead, making it a practical solution for real-time temperature monitoring in battery packs.
Looking ahead, the paper suggests several avenues to further enhance the PINN approach for battery pack applications. One promising improvement is the integration of more comprehensive physical models into the neural network, such as those accounting for varying thermal properties or complex heat transfer mechanisms within the battery pack. This could lead to even greater prediction accuracy by capturing a broader range of physical phenomena. Another potential advancement lies in optimizing the neural network architecture, possibly by exploring hybrid models or alternative network types to better handle the nonlinear dynamics of battery systems. Additionally, expanding the dataset to encompass a wider variety of operating conditions and battery types could improve the model’s generalizability, making it adaptable to diverse real-world scenarios and battery configurations. These enhancements could solidify PINN’s role as a cornerstone in battery management systems, driving further improvements in safety, efficiency, and performance for lithium-ion battery applications.

4.3. Battery System Thermal Management

When battery packs are embedded in large systems, such as electronics and electric vehicles, simulation becomes more challenging.
Shen et al. applied PINNs to address the challenge of accurately estimating temperature distributions in large-format lithium-ion blade batteries, which is a critical aspect of thermal management in electric vehicle battery packs [105]. Effective temperature control is vital because it directly impacts battery performance, safety, and lifespan, with excessive heat potentially leading to thermal runaway. Traditional physics-based models, while detailed, are computationally demanding and require extensive manual parameter tuning, making them impractical for real-time use. Conversely, data-driven models depend on large datasets, which may not always be available, especially under diverse operating conditions. The paper introduces a PINN model that integrates a simplified multi-node thermal model into an LSTM neural network, combining physical laws with machine learning (Figure 8). This hybrid approach reduces the need for extensive data and calibration by embedding heat transfer equations into the network’s loss function, enabling real-time temperature predictions with improved accuracy and interpretability, particularly for large-format batteries with non-uniform heat generation.
In this study, experimental data were gathered using a test bench equipped with a Nebula Charging and Discharging testing system, a GDBELL high- and low-temperature test chamber, three K-type thermocouples, and an upper computer. A series of battery thermal characteristic experiments, including discharging, cooling, charging, resting, and discharging, were performed under varying current rates and environmental conditions to capture the real-time temperature profiles of the blade battery. To develop the framework, a 1D thermal model was established along the length direction of the large-format blade battery, as heat transfer in the width and thickness directions was negligible. The model incorporates thermal resistance between nodes, heat generation, Joule heating, and convective heat transfer, and is simplified as follows:
f = dT2/dt − (Q + (T1 − T2)/Rx12 + (T3 − T2)/Rx23 + (Tamb − T2)/Rh2)/(M2c)
Here, Tx denotes the temperature at different nodes, while Rh and Rx represent the thermal resistances between nodes, comprising convective heat transfer resistance (Rh) and equivalent internal resistance (Rx). The Rh values were determined from experimentally measured data via the recursive least squares (RLS) method, whereas the Rx values were derived from open circuit voltage (OCV) testing combined with the Joule heating formula. The transient temperature data, equivalent resistance curves, experimental data, and thermal model were integrated into the PINN model, which includes an LSTM layer to handle long-sequence time-series data, thereby preserving and updating critical temperature information. The model is trained by minimizing the loss function, ultimately outputting the transient central temperature of the battery as a temperature curve.
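The residual for the middle node can be transcribed directly into code; the sanity-check values below are illustrative, not the study’s measured parameters:

```python
def node2_residual(dT2_dt, T1, T2, T3, T_amb, Q, Rx12, Rx23, Rh2, M2, c):
    """Residual of the lumped energy balance at node 2:
    f = dT2/dt - (Q + (T1-T2)/Rx12 + (T3-T2)/Rx23 + (T_amb-T2)/Rh2) / (M2*c).
    The physics-informed loss drives f toward zero at sampled times."""
    net_heat = Q + (T1 - T2) / Rx12 + (T3 - T2) / Rx23 + (T_amb - T2) / Rh2
    return dT2_dt - net_heat / (M2 * c)

# In steady state with a uniform temperature and no heat generation,
# the residual vanishes (placeholder resistances, mass, and heat capacity):
f0 = node2_residual(0.0, 25.0, 25.0, 25.0, 25.0, 0.0,
                    0.5, 0.5, 2.0, 0.8, 1100.0)
```

In the full framework, dT2/dt comes from the LSTM’s temperature prediction, and the resistances are either measured (Rh, via RLS) or learned during training (Rx).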
The paper demonstrates significant achievements through the development and testing of the PINN model for battery temperature estimation. By incorporating a one-dimensional thermal model with three nodes—accounting for heat generation, transfer, and dissipation—the PINN leverages LSTM to capture the time-series nature of temperature changes. Tested under various charging conditions (0.5 C, 1.0 C, and 2.0 C), the model outperforms traditional approaches like backpropagation neural networks (BP-NN) and standalone LSTMs. For instance, at a 2.0 C charging rate, it achieves an R2 of 0.9863, a mean absolute error (MAE) of 0.2875 °C, and a root mean square error (RMSE) of 0.3306 °C, showcasing high predictive accuracy. A notable advantage is its ability to learn parameters, such as equivalent internal resistance, automatically via neural network training, thereby eliminating manual calibration. These results, validated through a realistic experimental setup with a battery test bench, highlight the PINN’s effectiveness in providing precise, real-time temperature estimates and enhancing battery management systems’ ability to prevent thermal issues.

The paper outlines several promising directions for enhancing the PINN model’s capabilities. One key improvement is extending temperature estimation to the battery module level, predicting the overall temperature field across multiple cells in a pack. This would offer a more comprehensive thermal profile for practical applications but would require addressing increased complexity in modeling inter-cell interactions. Another avenue is refining the thermal model by incorporating additional physical phenomena, such as detailed heat transfer mechanisms or variable thermal properties, to further improve accuracy. Expanding the dataset to include a broader range of operating conditions (e.g., different charge rates, ambient temperatures, and battery aging states) could enhance the model’s generalizability and robustness.
Additionally, optimizing the neural network architecture—potentially by exploring advanced hybrid designs—could improve its ability to handle complex, nonlinear thermal dynamics. These advancements would strengthen the PINN’s role in real-time thermal management, supporting safer and more efficient battery pack operations in electric vehicles.
The applications of PINNs in battery thermal management discussed in this section are summarized in Table 3. However, this area is not yet well understood due to the complexity of the electrochemical–thermal multiphysics, which makes it difficult to identify the thermal source and the corresponding PDEs, especially at different states of charge (SOC). Data-driven experimental identification is the current approach, which deviates from the intention of PINNs, since they are meant to be less reliant on data.

5. Conclusions

In this review paper, we first discussed the working principles of PINNs and their variants, as well as their general applications in the physics simulation world, in Section 2. Based on this understanding, we then discussed the challenges associated with both electronics and battery thermal management at three scale levels and how PINNs can tackle these challenges accordingly. Subsequently, in Section 3, we reviewed studies conducted at the chip, board, and system levels of electronics thermal management, where PINNs are being used to solve different analysis challenges at different scales. In Section 4, we reviewed research on battery thermal management using the same scale categorization, from the cell level to the pack level to large-format systems. As the two most heat-intensive components in the high-tech, automobile, and industrial fields, the thermal management of electronics and batteries should be considered together, and gaps clearly exist in the connections between these two components. In current research, PINNs have addressed the challenges of data scarcity and prediction efficiency, and the critical task is to identify the correct PDEs and adapt the training algorithms to various operating conditions. Despite the promising advantages that PINNs offer over traditional machine learning, as well as their flexibility to combine with different techniques to enhance performance, the application of PINNs is not without challenges. They can be computationally intensive and sensitive to the correct formulation of the governing equations and the neural network architecture, and they may face convergence issues or require extensive hyperparameter tuning when dealing with stiff or highly nonlinear systems, making their deployment in real-time or large-scale industrial settings more challenging.
Therefore, this review highlights both the strengths and the limitations of applying PINNs, with the aim of guiding future work that uses advanced machine learning approaches to address system-level electronics and battery thermal management.
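To make the composite loss that underlies all of the reviewed approaches concrete, the sketch below evaluates a PINN-style loss (PDE residual + boundary + optional data terms) for the 1D transient heat equation on a grid using finite differences. This is a simplified illustration under stated assumptions, not an implementation from any reviewed paper: a real PINN evaluates these same terms via automatic differentiation of a neural network's outputs at scattered collocation points.

```python
import numpy as np

ALPHA = 1.0  # thermal diffusivity (illustrative, nondimensional)

def pinn_style_loss(u, x, t, u_data=None, data_idx=None):
    """Composite PINN-style loss for u_t = ALPHA * u_xx on a grid.

    u has shape (len(t), len(x)). The physics term penalizes the PDE
    residual, the boundary term enforces Dirichlet u = 0 at both ends,
    and the optional data term fits sparse "sensor" measurements.
    """
    dx, dt = x[1] - x[0], t[1] - t[0]
    u_t = (u[1:, 1:-1] - u[:-1, 1:-1]) / dt                      # forward diff in time
    u_xx = (u[:-1, 2:] - 2 * u[:-1, 1:-1] + u[:-1, :-2]) / dx**2  # central diff in space
    pde_loss = np.mean((u_t - ALPHA * u_xx) ** 2)                 # physics residual
    bc_loss = np.mean(u[:, 0] ** 2) + np.mean(u[:, -1] ** 2)      # boundary conditions
    data_loss = 0.0 if u_data is None else np.mean((u[data_idx] - u_data) ** 2)
    return pde_loss + bc_loss + data_loss

# The exact solution u = exp(-ALPHA * pi^2 * t) * sin(pi * x) should
# yield a loss near zero (limited only by finite-difference error).
x = np.linspace(0.0, 1.0, 101)
t = np.linspace(0.0, 0.1, 201)
T, X = np.meshgrid(t, x, indexing="ij")
u_exact = np.exp(-ALPHA * np.pi**2 * T) * np.sin(np.pi * X)
loss = pinn_style_loss(u_exact, x, t)
```

Training a PINN amounts to minimizing exactly this kind of weighted sum with respect to network parameters; the loss-balancing and sampling variants in Table 1 differ mainly in how the terms are weighted and where they are evaluated.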

Author Contributions

Z.D.: Conceptualization, investigation, formal analysis, methodology, project administration, resources, supervision, validation, visualization, Writing—original draft, Writing—review & editing. R.L.: Investigation, literature review, formal analysis, methodology, visualization, Writing—original draft, validation, resources, Writing—review & editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

Author Zichen Du is employed by Ansys, Inc. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

  1. Pistoia, G. Battery Operated Devices and Systems; Elsevier: Amsterdam, The Netherlands, 2009; ISBN 978-0-444-53214-5. [Google Scholar]
  2. Wang, Q.; Ping, P.; Zhao, X.; Chu, G.; Sun, J.; Chen, C. Thermal Runaway Caused Fire and Explosion of Lithium Ion Battery. J. Power Sources 2012, 208, 210–224. [Google Scholar] [CrossRef]
  3. De Bock, H.P.; Huitink, D.; Shamberger, P.; Lundh, J.S.; Choi, S.; Niedbalski, N.; Boteler, L. A System to Package Perspective on Transient Thermal Management of Electronics. J. Electron. Packag. 2020, 142, 041111. [Google Scholar] [CrossRef]
  4. Falcone, M.; Palka Bayard De Volo, E.; Hellany, A.; Rossi, C.; Pulvirenti, B. Lithium-Ion Battery Thermal Management Systems: A Survey and New CFD Results. Batteries 2021, 7, 86. [Google Scholar] [CrossRef]
  5. Li, X.; He, F.; Ma, L. Thermal Management of Cylindrical Batteries Investigated Using Wind Tunnel Testing and Computational Fluid Dynamics Simulation. J. Power Sources 2013, 238, 395–402. [Google Scholar] [CrossRef]
  6. Kim, G.-H.; Pesaran, A. Battery Thermal Management Design Modeling. World Electr. Veh. J. 2007, 1, 126–133. [Google Scholar] [CrossRef]
  7. Chen, W.; Hou, S.; Shi, J.; Han, P.; Liu, B.; Wu, B.; Lin, X. Numerical Analysis of Novel Air-Based Li-Ion Battery Thermal Management. Batteries 2022, 8, 128. [Google Scholar] [CrossRef]
  8. Eymard, R.; Gallouët, T.; Herbin, R. Finite Volume Methods. In Handbook of Numerical Analysis; Elsevier: Amsterdam, The Netherlands, 2000; Volume 7, pp. 713–1018. ISBN 978-0-444-50350-3. [Google Scholar]
  9. Birbarah, P.; Gebrael, T.; Foulkes, T.; Stillwell, A.; Moore, A.; Pilawa-Podgurski, R.; Miljkovic, N. Water Immersion Cooling of High Power Density Electronics. Int. J. Heat Mass Transf. 2020, 147, 118918. [Google Scholar] [CrossRef]
  10. American Society of Mechanical Engineers (Ed.) Proceedings of the ASME International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems-2018: Heterogeneous Integration: Microsystems with Diverse Functionality: Servers of the Future, IoT, and Edge to Cloud: Structural and Physical Health Monitoring: Power Electronics, Energy Conversion, and Storage: Autonomous, Hybrid, and Electric Vehicles: Presented at ASME 2018 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, 27–30 August 2018, San Francisco, CA, USA; The American Society of Mechanical Engineers: New York, NY, USA, 2019; ISBN 978-0-7918-5192-0. [Google Scholar]
  11. Shuai, S.; Du, Z.; Ma, B.; Shan, L.; Dogruoz, B.; Agonafer, D. Numerical Investigation of Shape Effect on Microdroplet Evaporation. In Proceedings of the ASME 2018 International Technical Conference and Exhibition on Packaging and Integration of Electronic and Photonic Microsystems, San Francisco, CA, USA, 27–30 August 2018; American Society of Mechanical Engineers: San Francisco, CA, USA, 2018; p. V001T04A010. [Google Scholar]
  12. Al Miaari, A.; Ali, H.M. Batteries Temperature Prediction and Thermal Management Using Machine Learning: An Overview. Energy Rep. 2023, 10, 2277–2305. [Google Scholar] [CrossRef]
  13. Abhijith, M.S.; Soman, K.P. Machine Learning Methods for Modeling Nanofluid Flows: A Comprehensive Review with Emphasis on Compact Heat Transfer Devices for Electronic Device Cooling. J. Therm. Anal. Calorim. 2024, 149, 5843–5869. [Google Scholar] [CrossRef]
  14. Floridi, L.; Chiriatti, M. GPT-3: Its Nature, Scope, Limits, and Consequences. Minds Mach. 2020, 30, 681–694. [Google Scholar] [CrossRef]
  15. OpenAI; Achiam, J.; Adler, S.; Agarwal, S.; Ahmad, L.; Akkaya, I.; Aleman, F.L.; Almeida, D.; Altenschmidt, J.; Altman, S.; et al. GPT-4 Technical Report. arXiv 2023, arXiv:2303.08774. [Google Scholar]
  16. Decencière, E.; Cazuguel, G.; Zhang, X.; Thibault, G.; Klein, J.-C.; Meyer, F.; Marcotegui, B.; Quellec, G.; Lamard, M.; Danno, R.; et al. TeleOphta: Machine Learning and Image Processing Methods for Teleophthalmology. IRBM 2013, 34, 196–203. [Google Scholar] [CrossRef]
  17. Munawar, H.S.; Hammad, A.W.A.; Waller, S.T. A Review on Flood Management Technologies Related to Image Processing and Machine Learning. Autom. Constr. 2021, 132, 103916. [Google Scholar] [CrossRef]
  18. Lu, R. Complex Wavelet Mutual Information Loss: A Multi-Scale Loss Function for Semantic Segmentation. arXiv 2025, arXiv:2502.00563. [Google Scholar]
  19. Lu, R. Steerable Pyramid Weighted Loss: Multi-Scale Adaptive Weighting for Semantic Segmentation. arXiv 2025, arXiv:2503.06604. [Google Scholar]
  20. Summerville, A.; Snodgrass, S.; Guzdial, M.; Holmgard, C.; Hoover, A.K.; Isaksen, A.; Nealen, A.; Togelius, J. Procedural Content Generation via Machine Learning (PCGML). IEEE Trans. Games 2018, 10, 257–270. [Google Scholar] [CrossRef]
  21. Justesen, N.; Bontrager, P.; Togelius, J.; Risi, S. Deep Learning for Video Game Playing. IEEE Trans. Games 2020, 12, 1–20. [Google Scholar] [CrossRef]
  22. Xu, M.; Maddage, N.C.; Xu, C.; Kankanhalli, M.; Tian, Q. Creating Audio Keywords for Event Detection in Soccer Video. In Proceedings of the 2003 International Conference on Multimedia and Expo. ICME 03. Proceedings (Cat. No.03TH8698), Baltimore, MD, USA, 6–9 July 2003; IEEE: Piscataway, NJ, USA; p. II–281. [Google Scholar]
  23. Huang, B.; Wang, J. Applications of Physics-Informed Neural Networks in Power Systems—A Review. IEEE Trans. Power Syst. 2023, 38, 572–588. [Google Scholar] [CrossRef]
  24. Li, A.; Yuen, A.C.Y.; Wang, W.; Chen, T.B.Y.; Lai, C.S.; Yang, W.; Wu, W.; Chan, Q.N.; Kook, S.; Yeoh, G.H. Integration of Computational Fluid Dynamics and Artificial Neural Network for Optimization Design of Battery Thermal Management System. Batteries 2022, 8, 69. [Google Scholar] [CrossRef]
  25. Karniadakis, G.E.; Kevrekidis, I.G.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-Informed Machine Learning. Nat. Rev. Phys. 2021, 3, 422–440. [Google Scholar] [CrossRef]
  26. Li, J.; Lopez, S.A. A Look Inside the Black Box of Machine Learning Photodynamics Simulations. Acc. Chem. Res. 2022, 55, 1972–1984. [Google Scholar] [CrossRef] [PubMed]
  27. Liu, H.-H.; Zhang, J.; Liang, F.; Temizel, C.; Basri, M.A.; Mesdour, R. Incorporation of Physics into Machine Learning for Production Prediction from Unconventional Reservoirs: A Brief Review of the Gray-Box Approach. SPE Reserv. Eval. Eng. 2021, 24, 847–858. [Google Scholar] [CrossRef]
  28. Raissi, M.; Perdikaris, P.; Karniadakis, G.E. Physics Informed Deep Learning (Part I): Data-Driven Solutions of Nonlinear Partial Differential Equations. arXiv 2017, arXiv:1711.10561. [Google Scholar]
  29. Cai, S.; Mao, Z.; Wang, Z.; Yin, M.; Karniadakis, G.E. Physics-Informed Neural Networks (PINNs) for Fluid Mechanics: A Review. Acta Mech. Sin. 2021, 37, 1727–1738. [Google Scholar] [CrossRef]
  30. Wang, H.; Cao, Y.; Huang, Z.; Liu, Y.; Hu, P.; Luo, X.; Song, Z.; Zhao, W.; Liu, J.; Sun, J.; et al. Recent Advances on Machine Learning for Computational Fluid Dynamics: A Survey. arXiv 2024, arXiv:2408.12171. [Google Scholar]
  31. Arzani, A.; Wang, J.-X.; D’Souza, R.M. Uncovering Near-Wall Blood Flow from Sparse Data with Physics-Informed Neural Networks. Phys. Fluids 2021, 33, 071905. [Google Scholar] [CrossRef]
  32. Zhou, W.; Miwa, S.; Okamoto, K. Advancing Fluid Dynamics Simulations: A Comprehensive Approach to Optimizing Physics-Informed Neural Networks. Phys. Fluids 2024, 36, 013615. [Google Scholar] [CrossRef]
  33. Gokhale, G.; Claessens, B.; Develder, C. Physics Informed Neural Networks for Control Oriented Thermal Modeling of Buildings. Appl. Energy 2022, 314, 118852. [Google Scholar] [CrossRef]
  34. Xu, J.; Wei, H.; Bao, H. Physics-Informed Neural Networks for Studying Heat Transfer in Porous Media. Int. J. Heat Mass Transf. 2023, 217, 124671. [Google Scholar] [CrossRef]
  35. Li, Z.; Luo, H.; Jiang, Y.; Liu, H.; Xu, L.; Cao, K.; Wu, H.; Gao, P.; Liu, H. Comprehensive Review and Future Prospects on Chip-Scale Thermal Management: Core of Data Center’s Thermal Management. Appl. Therm. Eng. 2024, 251, 123612. [Google Scholar] [CrossRef]
  36. Xia, Y.; Meng, Y. Physics-Informed Neural Network (PINN) for Solving Frictional Contact Temperature and Inversely Evaluating Relevant Input Parameters. Lubricants 2024, 12, 62. [Google Scholar] [CrossRef]
  37. Cai, S.; Wang, Z.; Wang, S.; Perdikaris, P.; Karniadakis, G.E. Physics-Informed Neural Networks for Heat Transfer Problems. J. Heat Transf. 2021, 143, 060801. [Google Scholar] [CrossRef]
  38. Daw, A.; Bu, J.; Wang, S.; Perdikaris, P.; Karpatne, A. Mitigating Propagation Failures in Physics-Informed Neural Networks Using Retain-Resample-Release (R3) Sampling. arXiv 2022, arXiv:2207.02338. [Google Scholar]
  39. Nabian, M.A.; Gladstone, R.J.; Meidani, H. Efficient Training of Physics-informed Neural Networks via Importance Sampling. Comput. Civ. Infrastruct. Eng. 2021, 36, 962–977. [Google Scholar] [CrossRef]
  40. Farea, A.; Yli-Harja, O.; Emmert-Streib, F. Understanding Physics-Informed Neural Networks: Techniques, Applications, Trends, and Challenges. AI 2024, 5, 1534–1557. [Google Scholar] [CrossRef]
  41. Cuomo, S.; Di Cola, V.S.; Giampaolo, F.; Rozza, G.; Raissi, M.; Piccialli, F. Scientific Machine Learning Through Physics–Informed Neural Networks: Where We Are and What’s Next. J. Sci. Comput. 2022, 92, 88. [Google Scholar] [CrossRef]
  42. Wang, S.; Teng, Y.; Perdikaris, P. Understanding and Mitigating Gradient Flow Pathologies in Physics-Informed Neural Networks. SIAM J. Sci. Comput. 2021, 43, A3055–A3081. [Google Scholar] [CrossRef]
  43. Yao, J.; Su, C.; Hao, Z.; Liu, S.; Su, H.; Zhu, J. Multiadam: Parameter-Wise Scale-Invariant Optimizer for Multiscale Training of Physics-Informed Neural Networks. In Proceedings of the International Conference on Machine Learning, PMLR, Honolulu, HI, USA, 23–29 July 2023; pp. 39702–39721. [Google Scholar]
  44. Wu, C.; Zhu, M.; Tan, Q.; Kartha, Y.; Lu, L. A Comprehensive Study of Non-Adaptive and Residual-Based Adaptive Sampling for Physics-Informed Neural Networks. Comput. Methods Appl. Mech. Eng. 2023, 403, 115671. [Google Scholar] [CrossRef]
  45. Tang, K.; Wan, X.; Yang, C. DAS-PINNs: A Deep Adaptive Sampling Method for Solving High-Dimensional Partial Differential Equations. J. Comput. Phys. 2023, 476, 111868. [Google Scholar] [CrossRef]
  46. Yu, T.; Yong, H.; Liu, L.; Wang, H.; Chen, H. MCMC-PINNs: A Modified Markov Chain Monte-Carlo Method for Sampling Collocation Points of PINNs Adaptively. Authorea Preprint 2023. [Google Scholar] [CrossRef]
  47. Yu, B.; E, W. The Deep Ritz Method: A Deep Learning-Based Numerical Algorithm for Solving Variational Problems. Commun. Math. Stat. 2018, 6, 1–12. [Google Scholar]
  48. Kharazmi, E.; Zhang, Z.; Karniadakis, G.E. Variational Physics-Informed Neural Networks for Solving Partial Differential Equations. arXiv 2019, arXiv:1912.00873. [Google Scholar]
  49. Kharazmi, E.; Zhang, Z.; Karniadakis, G.E. Hp-VPINNs: Variational Physics-Informed Neural Networks with Domain Decomposition. Comput. Methods Appl. Mech. Eng. 2021, 374, 113547. [Google Scholar] [CrossRef]
  50. Khodayi-Mehr, R.; Zavlanos, M. VarNet: Variational Neural Networks for the Solution of Partial Differential Equations. In Proceedings of the Learning for Dynamics and Control, PMLR, Berkeley, CA, USA, 10–11 June 2020; pp. 298–307. [Google Scholar]
  51. Jagtap, A.D.; Kharazmi, E.; Karniadakis, G.E. Conservative Physics-Informed Neural Networks on Discrete Domains for Conservation Laws: Applications to Forward and Inverse Problems. Comput. Methods Appl. Mech. Eng. 2020, 365, 113028. [Google Scholar] [CrossRef]
  52. Jagtap, A.D.; Karniadakis, G.E. Extended Physics-Informed Neural Networks (XPINNs): A Generalized Space-Time Domain Decomposition Based Deep Learning Framework for Nonlinear Partial Differential Equations. Commun. Comput. Phys. 2020, 28, 2002–2041. [Google Scholar] [CrossRef]
  53. Jeon, J.; Lee, J.; Vinuesa, R.; Kim, S.J. Residual-Based Physics-Informed Transfer Learning: A Hybrid Method for Accelerating Long-Term CFD Simulations via Deep Learning. Int. J. Heat Mass Transf. 2024, 220, 124900. [Google Scholar] [CrossRef]
  54. Incropera, F.P.; DeWitt, D.P.; Bergman, T.L.; Lavine, A.S. (Eds.) Fundamentals of Heat and Mass Transfer, 6th ed.; Wiley: Hoboken, NJ, USA, 2007; ISBN 978-0-471-45728-2. [Google Scholar]
  55. Liaw, S.P.; Yeh, R.H. Fins with Temperature Dependent Surface Heat Flux—I. Single Heat Transfer Mode. Int. J. Heat Mass Transf. 1994, 37, 1509–1515. [Google Scholar] [CrossRef]
  56. Das, S.K.; Putra, N.; Thiesen, P.; Roetzel, W. Temperature Dependence of Thermal Conductivity Enhancement for Nanofluids. J. Heat Transf. 2003, 125, 567–574. [Google Scholar] [CrossRef]
  57. Bejan, A. Convection Heat Transfer, 4th ed.; Wiley: Hoboken, NJ, USA, 2013; ISBN 978-0-470-90037-6. [Google Scholar]
  58. Kaviany, M. Heat Transfer Physics, 2nd ed.; Cambridge University Press: Cambridge, UK, 2014; ISBN 978-1-107-04178-3. [Google Scholar]
  59. Bernardo, C.; Johann, W.K. Energietechnische Gesellschaft, Ed.; Proceedings/CIPS 2012, 7th International Conference on Integrated Power Electronics Systems: 6–8 March 2012, Nuremberg, Germany; Incl. CD-ROM; ETG-Fachbericht; VDE-Verl: Berlin, Germany, 2012; ISBN 978-3-8007-3414-6. [Google Scholar]
  60. Ohadi, M.M.; Dessiatoun, S.V.; Choo, K.; Pecht, M.; Lawler, J.V. A Comparison Analysis of Air, Liquid, and Two-Phase Cooling of Data Centers. In Proceedings of the 2012 28th Annual IEEE Semiconductor Thermal Measurement and Management Symposium (SEMI-THERM), San Jose, CA, USA, 18–22 March 2012; IEEE: Piscataway, NJ, USA; pp. 58–63. [Google Scholar]
  61. Gong, Y.; Zhou, F.; Ma, G.; Liu, S. Advancements on Mechanically Driven Two-Phase Cooling Loop Systems for Data Center Free Cooling. Int. J. Refrig. 2022, 138, 84–96. [Google Scholar] [CrossRef]
  62. Yuan, X.; Zhou, X.; Pan, Y.; Kosonen, R.; Cai, H.; Gao, Y.; Wang, Y. Phase Change Cooling in Data Centers: A Review. Energy Build. 2021, 236, 110764. [Google Scholar] [CrossRef]
  63. Abro, G.E.M.; Zulkifli, S.A.B.M.; Kumar, K.; El Ouanjli, N.; Asirvadam, V.S.; Mossa, M.A. Comprehensive Review of Recent Advancements in Battery Technology, Propulsion, Power Interfaces, and Vehicle Network Systems for Intelligent Autonomous and Connected Electric Vehicles. Energies 2023, 16, 2925. [Google Scholar] [CrossRef]
  64. Zhang, Y.; Udrea, F.; Wang, H. Multidimensional Device Architectures for Efficient Power Electronics. Nat. Electron. 2022, 5, 723–734. [Google Scholar] [CrossRef]
  65. Shan, L.; Bu, C.; Su, Y.; Wu, J.; Wang, Y.; Shen, L.; Xie, J. Towards Feasible Thermal Management Design of Electronic Control Module for Variable Frequency Air Conditioner Function in Extremely High Ambient Temperatures. Electronics 2025, 14, 1595. [Google Scholar] [CrossRef]
  66. Chen, D.; Chui, C.-K.; Lee, P.S. Adaptive Physically Consistent Neural Networks for Data Center Thermal Dynamics Modeling. Appl. Energy 2025, 377, 124637. [Google Scholar] [CrossRef]
  67. Ding, B.; Zhang, Z.-H.; Gong, L.; Xu, M.-H.; Huang, Z.-Q. A Novel Thermal Management Scheme for 3D-IC Chips with Multi-Cores and High Power Density. Appl. Therm. Eng. 2020, 168, 114832. [Google Scholar] [CrossRef]
  68. Sadiqbatcha, S.I.; Zhang, J.; Amrouch, H.; Tan, S.X.-D. Real-Time Full-Chip Thermal Tracking: A Post-Silicon, Machine Learning Perspective. IEEE Trans. Comput. 2021, 71, 1411–1424. [Google Scholar] [CrossRef]
  69. Chen, L.; Jin, W.; Tan, S.X.-D. Fast Thermal Analysis for Chiplet Design Based on Graph Convolution Networks. In Proceedings of the 2022 27th Asia and South Pacific Design Automation Conference (ASP-DAC), Taipei, Taiwan, 17 January 2022; IEEE: Piscataway, NJ, USA; pp. 485–492. [Google Scholar]
  70. Liu, Z.; Li, Y.; Hu, J.; Yu, X.; Shiau, S.; Ai, X.; Zeng, Z.; Zhang, Z. DeepOHeat: Operator Learning-Based Ultra-Fast Thermal Simulation in 3D-IC Design. In Proceedings of the 2023 60th ACM/IEEE Design Automation Conference (DAC), San Francisco, CA, USA, 9 July 2023; IEEE: Piscataway, NJ, USA; pp. 1–6. [Google Scholar]
  71. Jin, P.; Meng, S.; Lu, L. MIONet: Learning Multiple-Input Operators via Tensor Product. SIAM J. Sci. Comput. 2022, 44, A3490–A3514. [Google Scholar] [CrossRef]
  72. Chen, L.; Lu, J.; Jin, W.; Tan, S.X.-D. Fast Full-Chip Parametric Thermal Analysis Based on Enhanced Physics Enforced Neural Networks. In Proceedings of the 2023 IEEE/ACM International Conference on Computer Aided Design (ICCAD), San Francisco, CA, USA, 28 October 2023; IEEE: Piscataway, NJ, USA; pp. 1–8. [Google Scholar]
  73. Garimella, S.V.; Persoons, T.; Weibel, J.A.; Gektin, V. Electronics Thermal Management in Information and Communications Technologies: Challenges and Future Directions. IEEE Trans. Compon. Packag. Manuf. Technol. 2017, 7, 1191–1205. [Google Scholar] [CrossRef]
  74. Asgari, S.; Hu, X.; Tsuk, M.; Kaushik, S. Application of POD plus LTI ROM to Battery Thermal Modeling: SISO Case. SAE Int. J. Commer. Veh. 2014, 7, 278–285. [Google Scholar] [CrossRef]
  75. Hu, X.; Asgari, S.; Lin, S.; Stanton, S.; Lian, W. A Linear Parameter-Varying Model for HEV/EV Battery Thermal Modeling. In Proceedings of the 2012 IEEE Energy Conversion Congress and Exposition (ECCE), Raleigh, NC, USA, 15–20 September 2012; IEEE: Piscataway, NJ, USA; pp. 1643–1649. [Google Scholar]
  76. Hu, X.; Asgari, S.; Yavuz, I.; Stanton, S.; Hsu, C.-C.; Shi, Z.; Wang, B.; Chu, H.-K. A Transient Reduced Order Model for Battery Thermal Management Based on Singular Value Decomposition. In Proceedings of the 2014 IEEE Energy Conversion Congress and Exposition (ECCE), Pittsburgh, PA, USA, 14–18 September 2014; IEEE: Piscataway, NJ, USA; pp. 3971–3976. [Google Scholar]
  77. Yang, Y.; Wang, Z.; Liao, Y.; Kong, W.; Shi, X.; Hu, R.; Yao, Y. A Parameterized Thermal Simulation Method Based on Physics-Informed Neural Networks for Fast Power Module Thermal Design. IEEE Trans. Power Electron. 2025, 40, 9200–9210. [Google Scholar] [CrossRef]
  78. Hamid Elsheikh, M.; Shnawah, D.A.; Sabri, M.F.M.; Said, S.B.M.; Haji Hassan, M.; Ali Bashir, M.B.; Mohamad, M. A Review on Thermoelectric Renewable Energy: Principle Parameters That Affect Their Performance. Renew. Sustain. Energy Rev. 2014, 30, 337–355. [Google Scholar] [CrossRef]
  79. Chen, L.; Jin, W.; Zhang, J.; Tan, S.X.-D. Thermoelectric Cooler Modeling and Optimization via Surrogate Modeling Using Implicit Physics-Constrained Neural Networks. IEEE Trans. Comput.-Aided Des. Integr. Circuits Syst. 2023, 42, 4090–4101. [Google Scholar] [CrossRef]
  80. Farrag, A.; Kataoka, J.; Yoon, S.W.; Won, D.; Jin, Y. SRP-PINN: A Physics-Informed Neural Network Model for Simulating Thermal Profile of Soldering Reflow Process. IEEE Trans. Compon. Packag. Manuf. Technol. 2024, 14, 1098–1105. [Google Scholar] [CrossRef]
  81. Liu, H.; Wen, M.; Yang, H.; Yue, Z.; Yao, M. A Review of Thermal Management System and Control Strategy for Automotive Engines. J. Energy Eng. 2021, 147, 03121001. [Google Scholar] [CrossRef]
  82. Du, D.; Darkwa, J.; Kokogiannakis, G. Thermal Management Systems for Photovoltaics (PV) Installations: A Critical Review. Sol. Energy 2013, 97, 238–254. [Google Scholar] [CrossRef]
  83. Nadjahi, C.; Louahlia, H.; Lemasson, S. A Review of Thermal Management and Innovative Cooling Strategies for Data Center. Sustain. Comput. Inform. Syst. 2018, 19, 14–28. [Google Scholar] [CrossRef]
  84. Zhang, K.; Zhang, Y.; Liu, J.; Niu, X. Recent Advancements on Thermal Management and Evaluation for Data Centers. Appl. Therm. Eng. 2018, 142, 215–231. [Google Scholar] [CrossRef]
  85. Pogorelskiy, S.; Kocsis, I. BIM and Computational Fluid Dynamics Analysis for Thermal Management Improvement in Data Centres. Buildings 2023, 13, 2636. [Google Scholar] [CrossRef]
  86. Schmidt, R.R.; Cruz, E.E.; Iyengar, M. Challenges of Data Center Thermal Management. IBM J. Res. Dev. 2005, 49, 709–723. [Google Scholar] [CrossRef]
  87. Tanaka, H.; Nagai, H. Thermal Surrogate Model for Spacecraft Systems Using Physics-Informed Machine Learning with POD Data Reduction. Int. J. Heat Mass Transf. 2023, 213, 124336. [Google Scholar] [CrossRef]
  88. Zhang, X.; Tu, C.; Yan, Y. Physics-Informed Neural Network Simulation of Conjugate Heat Transfer in Manifold Microchannel Heat Sinks for High-Power IGBT Cooling. Int. Commun. Heat Mass Transf. 2024, 159, 108036. [Google Scholar] [CrossRef]
  89. Jordan, S.M.; Schreiber, C.O.; Parhizi, M.; Shah, K. A New Multiphysics Modeling Framework to Simulate Coupled Electrochemical-Thermal-Electrical Phenomena in Li-Ion Battery Packs. Appl. Energy 2024, 360, 122746. [Google Scholar] [CrossRef]
  90. Grazioli, D.; Magri, M.; Salvadori, A. Computational Modeling of Li-Ion Batteries. Comput. Mech. 2016, 58, 889–909. [Google Scholar] [CrossRef]
  91. Wang, F.; Zhai, Z.; Zhao, Z.; Di, Y.; Chen, X. Physics-Informed Neural Network for Lithium-Ion Battery Degradation Stable Modeling and Prognosis. Nat. Commun. 2024, 15, 4332. [Google Scholar] [CrossRef]
  92. Wen, P.; Ye, Z.-S.; Li, Y.; Chen, S.; Xie, P.; Zhao, S. Physics-Informed Neural Networks for Prognostics and Health Management of Lithium-Ion Batteries. IEEE Trans. Intell. Veh. 2024, 9, 2276–2289. [Google Scholar] [CrossRef]
  93. Navidi, S.; Thelen, A.; Li, T.; Hu, C. Physics-Informed Machine Learning for Battery Degradation Diagnostics: A Comparison of State-of-the-Art Methods. Energy Storage Mater. 2024, 68, 103343. [Google Scholar] [CrossRef]
  94. Finegan, D.P.; Zhu, J.; Feng, X.; Keyser, M.; Ulmefors, M.; Li, W.; Bazant, M.Z.; Cooper, S.J. The Application of Data-Driven Methods and Physics-Based Learning for Improving Battery Safety. Joule 2021, 5, 316–329. [Google Scholar] [CrossRef]
  95. Feng, X.; Ouyang, M.; Liu, X.; Lu, L.; Xia, Y.; He, X. Thermal Runaway Mechanism of Lithium Ion Battery for Electric Vehicles: A Review. Energy Storage Mater. 2018, 10, 246–267. [Google Scholar] [CrossRef]
  96. Xu, B.; Lee, J.; Kwon, D.; Kong, L.; Pecht, M. Mitigation Strategies for Li-Ion Battery Thermal Runaway: A Review. Renew. Sustain. Energy Rev. 2021, 150, 111437. [Google Scholar] [CrossRef]
  97. Kim, S.W.; Kwak, E.; Kim, J.-H.; Oh, K.-Y.; Lee, S. Modeling and Prediction of Lithium-Ion Battery Thermal Runaway via Multiphysics-Informed Neural Network. J. Energy Storage 2023, 60, 106654. [Google Scholar] [CrossRef]
  98. Wang, Y.; Xiong, C.; Wang, Y.; Xu, P.; Ju, C.; Shi, J.; Yang, G.; Chu, J. Temperature State Prediction for Lithium-Ion Batteries Based on Improved Physics Informed Neural Networks. J. Energy Storage 2023, 73, 108863. [Google Scholar] [CrossRef]
  99. Chen, K.; Song, M.; Wei, W.; Wang, S. Design of the Structure of Battery Pack in Parallel Air-Cooled Battery Thermal Management System for Cooling Efficiency Improvement. Int. J. Heat Mass Transf. 2019, 132, 309–321. [Google Scholar] [CrossRef]
  100. Liu, H.; Wei, Z.; He, W.; Zhao, J. Thermal Issues about Li-Ion Batteries and Recent Progress in Battery Thermal Management Systems: A Review. Energy Convers. Manag. 2017, 150, 304–330. [Google Scholar] [CrossRef]
  101. Gümüşsu, E.; Ekici, Ö.; Köksal, M. 3-D CFD Modeling and Experimental Testing of Thermal Behavior of a Li-Ion Battery. Appl. Therm. Eng. 2017, 120, 484–495. [Google Scholar] [CrossRef]
  102. Esmaeili, J.; Jannesari, H. Developing Heat Source Term Including Heat Generation at Rest Condition for Lithium-Ion Battery Pack by up Scaling Information from Cell Scale. Energy Convers. Manag. 2017, 139, 194–205. [Google Scholar] [CrossRef]
  103. Deng, H.-P.; He, Y.-B.; Wang, B.-C.; Li, H.-X. Physics-Dominated Neural Network for Spatiotemporal Modeling of Battery Thermal Process. IEEE Trans. Ind. Inform. 2024, 20, 452–460. [Google Scholar] [CrossRef]
  104. Cho, G.; Zhu, D.; Campbell, J.J.; Wang, M. An LSTM-PINN Hybrid Method to Estimate Lithium-Ion Battery Pack Temperature. IEEE Access 2022, 10, 100594–100604. [Google Scholar] [CrossRef]
  105. Shen, K.; Xu, W.; Lai, X.; Li, D.; Meng, X.; Zheng, Y.; Feng, X. Physics-Informed Machine Learning Estimation of the Temperature of Large-Format Lithium-Ion Batteries under Various Operating Conditions. Appl. Therm. Eng. 2025, 269, 126200. [Google Scholar] [CrossRef]
Figure 1. Schematics of PINN and its variations. (a) Schematic of the conservative physics-informed neural network (cPINN) architecture [51]. The addition of interface loss distinguishes cPINNs from traditional PINNs, enabling improved performance on conservation laws and problems with sharp gradients. (b) The residual-based physics-informed transfer learning (RePIT) strategy as a representative hybrid PINN approach [53]. The workflow alternates between conventional CFD computation and neural network-based predictions. The figure has been republished with permission from each indicated reference (ref. [51] for (a), ref. [53] for (b)).
Figure 2. Electronics thermal management challenges at the (a) chip, (b) board, and (c) system levels. Figure (b) was reproduced from [65] with permission under the terms of the Creative Commons license. Figure (c) has been republished with permission from [66].
Figure 3. DeepOHeat framework. Figure has been republished from [71] with permission under the terms of the Creative Commons.
Figure 4. Adaptive physically consistent neural network framework. The figure has been republished with permission from [66].
Figure 5. Battery thermal management challenges. Figure (a) was reproduced from [97] with permission under the terms of the Creative Commons license. Figures (b–d) have been republished with permission from each indicated reference (ref. [98] for (b), ref. [99] for (c), ref. [100] for (d)).
Figure 6. Multiphysics-informed neural network framework. The figure has been republished from [97] with permission under the terms of the Creative Commons.
Figure 7. LSTM-PINN hybrid framework. The figure has been republished from [104] with permission under the terms of the Creative Commons.
Figure 8. PINN-based real-time battery temperature estimation framework. The figure has been republished with permission from [105].
Table 1. Summary of major PINN variants and their core innovations.
Category | Key Methods | Highlights
Loss Balancing | MultiAdam [43], Gradient Reweight [42] | Adaptive optimizer, rebalancing loss terms
Sampling Strategies | Importance sampling [39], RAD [44], DAS-PINNs [45], MCMC-PINNs [46] | Adaptive and probabilistic point selection
Variational Form | Deep Ritz [47], VPINNs [48,49], VarNet [50] | Weak-form enforcement, lower derivative order
Domain Decomposition | cPINNs [51], XPINNs [52] | Local networks, interface stitching
Table 2. Summary of applications of PINNs in electronics thermal management.
Scale | Key Methods | Highlights
Chip | DeepOHeat [70], ThermPINN [72] | Temperature-dependent properties, DeepONet, separation of variables
Board | PINN with SiC design exploration [77], IPCNN [79], SRP-PINN [80] | Design space exploration, multiphysics
System | A-PCNN [66], PIML [87], PINN with MMC and IGBTs [88] | CAD complexities, data size
Table 3. Summary of applications of PINNs in battery thermal management.
Scale | Key Methods | Highlights
Cell | PINN with electric–thermal mechanism [103], BINN [98], MPINN [97] | Electrochemical–thermal multiphysics, real-time TR prediction
Pack | LSTM-PINN [104] | Data size, temperature uniformity, cell arrangement
System | PINN-LSTM [105] | Large format, wide operational range, 1D simplification
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
