Review

Neural Architectures and Learning Strategies for State-of-Health Estimation of Lithium-Ion Batteries: A Critical Review

Department of Mechanical Engineering, Dong-A University, 37 Nakdong-Daero 550, Saha-gu, Busan 49315, Republic of Korea
*
Author to whom correspondence should be addressed.
Batteries 2026, 12(2), 76; https://doi.org/10.3390/batteries12020076
Submission received: 12 January 2026 / Revised: 6 February 2026 / Accepted: 17 February 2026 / Published: 19 February 2026

Abstract

Accurate state-of-health (SOH) estimation is a cornerstone of safe, reliable, and cost-effective operation of lithium-ion batteries (LIBs) in electric vehicles and energy storage systems. In recent years, rapid advances in artificial intelligence technology have led to the widespread adoption of neural-network-based SOH estimation methods, offering strong nonlinear modeling capability and improved adaptability compared with traditional model-based approaches. However, the growing diversity of neural architectures and learning strategies has led to fragmented development and inconsistent evaluation, hindering their practical deployment. This paper presents a critical and systematic review of the most recent representative studies on neural-network-based SOH estimation for LIBs between 2024 and 2025. A unified taxonomy is introduced to distinguish neural architectures from learning strategies. The neural architectures include artificial neural networks, convolutional and recurrent networks, attention-based models, Transformers, and physics-informed neural networks. The learning strategies encompass transfer learning, physics-constrained/physics-informed learning, robustness-oriented training, and efficiency-aware design. The reviewed methods are analyzed in terms of modeling capability, generalization across operating conditions and chemistries, data efficiency, interpretability, and deployability within battery management systems. Key challenges, including nonlinear degradation, degradation diversity, data scarcity, and limited observability, are critically examined. The role of architecture-strategy co-design in addressing these issues is highlighted. Finally, open research directions and practical recommendations are discussed to guide the development of reliable, scalable, and physically consistent SOH estimation frameworks.
This review provides a structured reference for researchers and practitioners seeking to advance data-driven battery health monitoring toward real-world applications.

1. Introduction

Climate change and global warming are among the most pressing global challenges of the 21st century, driven primarily by anthropogenic greenhouse gas emissions [1,2]. Among various sectors, transportation remains a major contributor to carbon dioxide (CO2) emissions due to its heavy reliance on fossil fuels [3,4]. Recent assessments indicate that the transport sector accounts for over 20% of global greenhouse gas emissions, with road transportation being the dominant source [5]. These emissions not only accelerate global warming but also contribute to air pollution, public health risks, and long-term ecological degradation [6]. Consequently, decarbonizing the transport sector has become a critical priority in global climate mitigation strategies.
In response to these environmental concerns, electric vehicles (EVs) have emerged as a promising alternative to conventional internal combustion engine vehicles [7,8]. By eliminating tailpipe emissions, EVs offer a pathway toward low-carbon transportation, particularly when coupled with renewable electricity generation [9,10]. In addition to zero local emissions, EVs exhibit higher energy conversion efficiency, lower operating noise, reduced maintenance requirements, and improved driving performance [11,12]. With continued advances in charging infrastructure, power electronics, and energy storage technologies, EVs are increasingly regarded as a cornerstone of sustainable mobility and an essential component of future intelligent transportation systems [13,14].
Lithium-ion batteries (LIBs) currently serve as the primary energy storage technology for EVs due to their favorable electrochemical characteristics [15,16]. Compared to alternative battery chemistries, LIBs offer high gravimetric and volumetric energy densities, elevated operating voltages, long cycle lives, low self-discharge rates, and relatively mature manufacturing processes [17,18]. These advantages enable extended driving range, compact battery pack design, and reliable long-term operation [19,20]. As a result, LIBs have been widely adopted across passenger EVs, commercial vehicles, and energy storage systems [21,22]. However, their performance and safety are strongly influenced by temperature, aging, and degradation processes that evolve over time, necessitating accurate and reliable battery health monitoring [23,24].
State-of-health (SOH) estimation is a fundamental function of battery management systems (BMS), as it quantifies the degradation level of a battery relative to its initial condition [25]. Accurate SOH estimation plays a critical role in ensuring operational safety, reliability, and optimal lifetime utilization of LIBs [26]. It directly affects driving range prediction, energy management strategies, thermal and safety control, warranty assessment, and decisions related to second-life applications such as stationary energy storage. Conversely, inaccurate SOH estimation can lead to overly conservative operation, resulting in underutilization of battery capacity, or overly aggressive usage that increases safety risks and accelerates degradation [27]. Therefore, robust and precise SOH estimation is essential for both performance optimization and risk mitigation in EVs and energy storage systems.
Despite its importance, practical SOH estimation remains a challenging task due to the complex and coupled nature of battery degradation [28]. LIBs exhibit highly nonlinear and non-stationary aging behavior influenced by operating conditions, usage history, and environmental factors [29]. Significant variability exists across cells, even within the same batch, and degradation mechanisms differ across battery chemistries [30]. In real-world applications, SOH estimation must often rely on partial, irregular, and noisy operational data rather than complete charge–discharge cycles. Additional challenges arise from capacity regeneration effects, variable load profiles, and changing temperature conditions. Furthermore, the internal electrochemical states governing degradation are only indirectly observable through external measurements, such as voltage, current, and temperature, leading to inherent limitations in observability [31]. These factors collectively complicate the development of accurate, generalizable, and reliable SOH estimation methods.
Traditional SOH estimation approaches are primarily based on electrochemical models or equivalent circuit models (ECMs), which offer strong physical interpretability and theoretical grounding [32]. However, these model-based methods often require extensive parameter identification, rely on simplifying assumptions, and struggle to capture complex degradation dynamics under diverse real-world operating conditions [33]. With the increasing availability of battery monitoring data, data-driven methods, particularly machine learning and deep learning techniques, have gained significant attention due to their ability to learn nonlinear relationships directly from data [34,35]. While purely data-driven approaches have demonstrated impressive accuracy under controlled conditions, they often suffer from limited generalization, poor interpretability, and high data dependence. To address these limitations, hybrid and physics-informed learning frameworks have emerged, aiming to combine the flexibility of neural networks with the robustness and consistency of physical knowledge [36,37].
Recent advances in neural architectures, including recurrent neural networks (RNNs), convolutional neural networks (CNNs), Transformer-based models, and physics-informed neural networks (PINNs), have significantly expanded the methodological landscape for SOH estimation [38,39,40]. These architectures enable the modeling of temporal dependencies, automatic feature extraction, learning of long-range degradation trends, and integration of physical constraints [41]. However, recent studies indicate that improvements in model architecture alone are inadequate for ensuring reliable SOH estimation. Learning strategies such as transfer learning for cross-domain generalization, robustness-oriented training for non-stationary and noisy data, uncertainty-aware modeling for trustworthy decision-making, and efficiency-aware design for embedded BMS deployment play an equally critical role [42]. Consequently, effective SOH estimation requires the co-design of neural architectures and learning strategies to balance accuracy, generalization, interpretability, and practical applicability.
In recent years, a growing number of review articles have surveyed SOH estimation methods for LIBs, reflecting the rapid expansion of data-driven and machine-learning-based approaches as presented in Table 1. However, a critical examination of the existing literature reveals several notable limitations. First, most reviews adopt a method-centric or accuracy-centric perspective in which neural networks are treated as a homogeneous category within data-driven methods. Fine-grained distinctions among modern neural architectures such as recurrent networks, convolutional-recurrent hybrids, attention mechanisms, Transformer-based models and emerging efficient sequence learners are rarely analyzed in a structured and comparative manner. Second, learning strategies are largely underemphasized in existing reviews. While architectural choices are often cataloged, critical aspects such as transfer learning, domain adaptation, robustness to non-stationary degradation, uncertainty-aware learning, test-time adaptation, and efficiency-aware training are typically discussed only at a conceptual level or omitted altogether. Third, although physics-informed and hybrid approaches are increasingly recognized as promising, most reviews treat them in a coarse-grained manner, without systematically distinguishing how physical knowledge is incorporated, whether through physics-constrained loss functions, parameter embedding, system identification, or multi-task coupling. Consequently, the relationship between physical consistency, interpretability, and learning performance is not clearly articulated. Fourth, robustness and deployability considerations, including non-stationarity, capacity regeneration, adversarial vulnerability, partial and irregular data availability, and real-time constraints, are often addressed descriptively rather than analytically.
Few reviews explicitly link methodological choices to deployment-relevant metrics such as inference latency, memory footprint, edge-cloud collaboration, or reliability under real-world operating variability. Finally, existing reviews generally lack a unified taxonomy that connects neural architectures, learning strategies, and practical SOH estimation challenges within a coherent analytical framework. As a result, the literature remains fragmented, making it difficult to extract design principles for developing generalizable, interpretable, and BMS-ready SOH estimation models.
To address the above limitations, the current review makes the following contributions. Firstly, this article presents a critical and systematic synthesis of neural architectures and learning strategies for LIB SOH estimation, with an explicit focus on generalization, robustness, and practical applicability. Whereas previous reviews primarily categorize methods by estimation paradigm or performance metrics, this work adopts a dual-axis perspective that jointly examines neural architectural design and learning strategy selection as co-determinants of SOH estimation performance. Secondly, existing SOH estimation studies have predominantly emphasized the development of increasingly complex neural architectures, while the role of learning strategies in ensuring robustness, generalization, and practical deployability has received comparatively limited systematic attention. This review examines transfer learning, robustness-aware training, uncertainty handling, physics-guided learning, and efficiency-aware design as essential components of modern SOH estimation frameworks, demonstrating that architectural sophistication alone is insufficient without appropriately designed learning strategies. Thirdly, in practical BMSs, SOH estimation is further complicated by non-stationary degradation behavior, capacity regeneration phenomena, partial and irregular operational data, adversarial perturbations, and stringent real-time computational constraints, which are often insufficiently addressed in existing reviews that focus primarily on controlled laboratory conditions. This review systematically examines how these practical challenges affect SOH estimation performance in recent learning frameworks, within the context of realistic BMS deployment rather than idealized laboratory settings. In sum, the objective of this review is to provide a comprehensive overview of existing methods and to guide the development of reliable, generalizable, and deployable SOH estimation solutions.
To achieve this, a total of 154 studies were surveyed, from which 51 representative studies published between 2024 and 2025, with an emphasis on next-generation BMS, were selected for concise review to cover the most recent advances across data-driven, physics-informed, and emerging paradigms. In addition, this work seeks to bridge the gap between algorithmic innovation and practical battery health management by unifying neural architectures and learning strategies within a single analytical framework.
This review is organized to progressively bridge methodological development and practical deployment of SOH estimation models. Section 2 introduces the problem formulation, data characteristics, and evaluation challenges in SOH estimation. Section 3 reviews foundational learning paradigms and neural architectures, including data-driven, physics-informed, and hybrid approaches. Section 4 provides a systematic review of recent neural architectures and learning strategies for SOH estimation, with emphasis on attention-based, efficient sequence, and robustness-oriented methods. Building on this review, Section 5 synthesizes cross-cutting insights, critically examines validation realism and architectural complexity together with their failure modes, and presents decision-oriented design guidelines and author perspectives to support practical BMS deployment.

2. Battery SOH Overview and Fundamentals of SOH Estimation

2.1. Definitions of SOH

SOH is generally used to quantify the degree of aging of a LIB relative to its initial condition [48]. It characterizes the degradation of battery performance caused by cycling and calendar aging and is commonly defined from different performance perspectives, including capacity, internal resistance, and power capability. Capacity- and resistance-based SOH definitions are most commonly adopted and are expressed in Equations (1) and (2) [45]:
$$\mathrm{SOH} = \frac{C_{cur}}{C_{int}} \times 100\% \tag{1}$$
$$\mathrm{SOH} = \frac{R_{eol} - R_{cur}}{R_{eol} - R_{int}} \times 100\% \tag{2}$$
where $C_{cur}$ denotes the current available capacity and $C_{int}$ the initial nominal capacity of the battery; $R_{eol}$ denotes the battery internal resistance at the end of life (EOL), $R_{int}$ the initial battery internal resistance, and $R_{cur}$ the current battery internal resistance.
The capacity-based SOH defined in Equation (1) quantifies battery aging by normalizing the currently available capacity with respect to the initial nominal capacity. Because capacity fade directly reflects the loss of cyclable lithium and electrochemically active material during aging, this definition provides an intuitive, physically meaningful, and widely adopted measure of battery health. The resistance-based SOH in Equation (2) characterizes aging through the increase in internal resistance over time. Internal resistance generally increases as a result of aging mechanisms, such as the growth of a solid electrolyte interphase (SEI) layer, electrode degradation, current collector corrosion, and an increase in contact resistance. Consequently, resistance-based SOH effectively reflects performance deterioration, particularly under high-rate operating conditions where resistive losses dominate [43,45]. In addition to capacity- and resistance-based definitions, SOH can also be interpreted from a power or performance perspective. As aging progresses, the increase in internal resistance results in reduced power capability, decreased energy efficiency, and increased heat generation during charge and discharge. These effects ultimately limit the battery’s ability to satisfy high-power demands, even when a substantial fraction of its nominal capacity remains available.
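As a concrete illustration, the two definitions translate directly into code. The numerical values below are hypothetical, chosen only to exercise Equations (1) and (2):

```python
def soh_capacity(c_cur: float, c_int: float) -> float:
    """Capacity-based SOH, Equation (1): current capacity over initial nominal capacity."""
    return c_cur / c_int * 100.0

def soh_resistance(r_cur: float, r_int: float, r_eol: float) -> float:
    """Resistance-based SOH, Equation (2): remaining resistance margin to end of life."""
    return (r_eol - r_cur) / (r_eol - r_int) * 100.0

# Hypothetical cell: 2.0 Ah nominal, currently delivering 1.7 Ah
print(soh_capacity(1.7, 2.0))                # ≈ 85 %
# Internal resistance grown from 50 mΩ toward a 100 mΩ end-of-life limit
print(soh_resistance(70e-3, 50e-3, 100e-3))  # ≈ 60 %
```

Note that the two definitions need not agree on the same cell: a battery at 85% capacity-based SOH may sit at 60% resistance-based SOH, which is one reason the choice of definition matters for downstream decisions.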
It should be noted that, despite their clear physical interpretation, capacity- and resistance-based SOH definitions are not directly measurable during normal operation and typically require controlled testing conditions. In practical BMS, SOH must therefore be inferred indirectly from observable signals such as voltage, current, temperature, and their derived features. This indirect observability motivates the development of advanced estimation methods, including data-driven, physics-informed, and hybrid learning frameworks, which aim to map measurable operational data to the underlying health state.

2.2. Data Sources for SOH Estimation

SOH estimation relies on measurable signals and diagnostic data that reflect battery aging characteristics. Commonly used data sources include cycling data, electrochemical impedance spectroscopy (EIS), incremental capacity (IC) and differential voltage (DV) analysis, as well as onboard signals measured by BMS.

2.2.1. Cycling Data

During charge–discharge cycling experiments, key operational parameters, such as terminal voltage, current, and temperature, can be continuously measured and recorded [49]. Among these signals, voltage is one of the most informative indicators for SOH estimation, as the voltage response of a LIB evolves noticeably with aging. As degradation progresses, the voltage profiles during both charging and discharging exhibit increased polarization and steeper slopes, reflecting the growth of internal resistance and altered electrochemical kinetics [50]. Consequently, variations in voltage trajectories provide valuable insight into the battery’s health state.
Temperature-related data typically include the battery’s surface or internal temperature, as well as the ambient temperature [51]. Temperature plays a critical role in battery aging, as it directly influences reaction rates and degradation mechanisms. Elevated temperatures accelerate electrolyte decomposition, side reactions, and SEI growth, resulting in rapid capacity fade and increased resistance. In contrast, low-temperature operation increases the risk of lithium plating and dendrite formation, which can cause irreversible capacity loss and internal short circuits. For this reason, temperature features are frequently incorporated into SOH estimation models to capture thermal effects on degradation behavior [52].
In addition to voltage and temperature, charge–discharge capacity degradation provides a direct and intuitive measure of SOH evolution, as the rate of capacity fade reflects the progression of aging over time [53]. Similarly, the increase in internal resistance is widely regarded as a key signature of battery degradation. As aging advances, internal resistance growth leads to higher polarization losses, reduced energy efficiency, and increased heat generation during operation, further accelerating performance deterioration [54]. Accordingly, both capacity fade and internal resistance growth are commonly adopted as fundamental health indicators in SOH estimation studies.
It is worth noting that cycling data used for SOH estimation may originate from either complete charge–discharge cycles under laboratory conditions or partial and irregular cycling profiles encountered in real-world operation. While full-cycle data facilitates accurate extraction of degradation features, practical BMS often rely on partial charging or discharging segments, posing additional challenges for feature extraction and model generalization. This distinction has important implications for the design of SOH estimation algorithms and serves as a key motivation for the advanced learning-based approaches discussed in later sections.

2.2.2. Electrochemical Impedance Spectroscopy (EIS)

EIS is an analytical technique used to investigate the internal electrochemical dynamics of LIBs from a frequency-response perspective [55]. Unlike conventional cycling tests, which primarily capture macroscopic performance degradation, EIS enables a more detailed characterization of the transport processes and interfacial reactions occurring within the cell. Through impedance analysis, key electrochemical properties, including resistive losses, reaction kinetics, and interphase characteristics, can be indirectly quantified. As these properties evolve with aging, EIS serves as an effective complementary data source for battery health assessment [56].
During an EIS measurement, a battery is excited with a small-amplitude sinusoidal electrical signal, applied either as a voltage or current perturbation, while the corresponding response is recorded. The ratio between the applied signal and the measured response defines the impedance at a given excitation frequency. The complex impedance can be expressed as in Equation (3) [57]:
$$Z(t) = \frac{U(t)}{I(t)} = \mathrm{Re}(Z) + j\,\mathrm{Im}(Z) \tag{3}$$
where $Z(t)$ denotes the impedance, while $U(t)$ and $I(t)$ correspond to the voltage and current signals, respectively. The real part of the impedance corresponds to dissipative processes such as ohmic and charge-transfer resistances, while the imaginary part reflects energy storage behavior associated with capacitive effects and polarization phenomena.
By sweeping the excitation frequency over a wide range, EIS probes electrochemical processes characterized by different time constants. High-frequency responses are typically associated with bulk electrolyte resistance and contact resistance, whereas intermediate- and low-frequency regions reflect charge-transfer processes and interfacial phenomena, including solid electrolyte interphase (SEI) formation and diffusion-related effects [58]. Despite its ability to provide rich diagnostic information, the practical deployment of EIS for online SOH estimation remains challenging due to extended measurement durations, sensitivity to temperature and state-of-charge variations, and the complexity of the hardware. Consequently, EIS is often used in combination with data-driven or hybrid modeling approaches rather than as a standalone health monitoring technique [59,60]. Recent studies have therefore explored the integration of EIS-derived features with machine learning and deep learning models, leveraging the high interpretability of impedance characteristics while mitigating practical limitations through data-driven inference.
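The single-frequency principle behind Equation (3) can be sketched numerically: excite at one frequency, project both signals onto a complex reference, and take the ratio of the resulting phasors. The series R-C cell below is a hypothetical test signal, not a fitted battery model:

```python
import numpy as np

def impedance_at(f_exc, t, u, i):
    """Lock-in estimate of complex impedance at the excitation frequency:
    project voltage and current onto exp(-j*2*pi*f*t) and take the phasor
    ratio, i.e. Z = U / I as in Equation (3)."""
    ref = np.exp(-2j * np.pi * f_exc * t)
    return np.mean(u * ref) / np.mean(i * ref)

# Synthetic check (hypothetical cell): series R-C with R = 50 mΩ, C = 1 F, probed at 10 Hz
f, R, C = 10.0, 0.05, 1.0
z_true = R - 1j / (2 * np.pi * f * C)                 # impedance of the series R-C
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)    # exactly 10 excitation periods
i_sig = 0.1 * np.cos(2 * np.pi * f * t)               # small-amplitude current perturbation
u_sig = 0.1 * abs(z_true) * np.cos(2 * np.pi * f * t + np.angle(z_true))
z_est = impedance_at(f, t, u_sig, i_sig)              # recovers z_true
```

Sweeping `f` over a frequency grid and repeating this estimate yields the Nyquist-type spectra described above; the negative imaginary part here reflects the capacitive (polarization) contribution.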

2.2.3. Incremental Capacity (IC) and Differential Voltage (DV) Analysis

IC and DV analysis are widely used techniques for extracting degradation-sensitive features from voltage-capacity curves obtained during battery charging or discharging [61,62]. By amplifying subtle variations in voltage plateaus, IC and DV curves reveal electrochemical changes that are often obscured in the original voltage-capacity profiles.
IC is conventionally defined as the derivative of capacity with respect to voltage, as presented in Equation (4) [45]:
$$\frac{dQ}{dV} \approx \frac{\Delta Q}{\Delta V} = \frac{Q_k - Q_{k-1}}{V_k - V_{k-1}} \tag{4}$$
where $Q_k$, $Q_{k-1}$, $V_k$, and $V_{k-1}$ represent the battery capacity and terminal voltage at the $k$-th and $(k-1)$-th time intervals, respectively, while DV analysis is expressed as $dV/dQ$. These differential representations enhance the detection of small variations in the voltage-capacity relationship, allowing for clearer identification of aging-related changes. As the SOH decreases, characteristic peaks in IC and DV curves tend to shift, broaden, or diminish, reflecting underlying degradation mechanisms such as loss of lithium inventory and loss of electrochemically active material. Consequently, peak-related descriptors, including peak position, magnitude, width, and slope, are widely adopted as informative health indicators for SOH estimation [63]. In practice, IC/DV analysis is typically performed under quasi-constant-current conditions and is highly sensitive to measurement noise and voltage resolution. As a result, smoothing, filtering, and peak extraction procedures are often required, which may introduce uncertainty and limit robustness under real-world operating conditions. These challenges have motivated the integration of IC/DV features with data-driven and learning-based SOH estimation methods, which aim to automatically extract degradation-relevant patterns while mitigating noise sensitivity [64,65].
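A minimal implementation of Equation (4) with the smoothing step discussed above might look as follows; the moving-average window and the plateau threshold are illustrative choices, not recommended values:

```python
import numpy as np

def incremental_capacity(q, v, window=15):
    """Smoothed IC curve dQ/dV per Equation (4). A moving-average filter
    is applied to both signals first, since raw finite differences
    strongly amplify voltage quantization and sensor noise."""
    kernel = np.ones(window) / window
    q_s = np.convolve(q, kernel, mode="valid")
    v_s = np.convolve(v, kernel, mode="valid")
    dq, dv = np.diff(q_s), np.diff(v_s)
    mask = np.abs(dv) > 1e-6          # skip flat voltage plateaus (dV ≈ 0)
    return v_s[1:][mask], dq[mask] / dv[mask]

# Sanity check: a noiseless linear Q(V) segment recovers the constant slope
v = np.linspace(3.0, 4.2, 500)        # hypothetical charging voltage (V)
q = 2.0 * (v - 3.0)                   # hypothetical capacity curve (Ah)
v_ic, ic = incremental_capacity(q, v) # ic ≈ 2.0 everywhere
```

On real charging data, peak position, height, and width would then be extracted from `ic` as the degradation-sensitive descriptors discussed above.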

2.2.4. Onboard BMS Signals

For practical applications, onboard signals measured by the BMS constitute the most accessible and widely used data source for SOH estimation [66]. Core signals, including terminal voltage, current, and temperature, are continuously monitored during battery operation and can be exploited for online health assessment without requiring additional sensing hardware [67]. This makes BMS-based SOH estimation particularly attractive for real-world EV and energy storage applications. Beyond raw measurements, a variety of derived features can be extracted from onboard signals, such as charging and discharging durations, voltage rise or drop characteristics, cumulative capacity and energy, statistical descriptors, and temperature gradients. These features encode degradation-related information while remaining compatible with the limited sensing and computational capabilities of embedded BMS platforms. Compared with discharging data, charging data, especially under constant-current or controlled charging protocols, are often more stable and less affected by external disturbances such as variable load demand and user driving behavior [68]. Consequently, many practical SOH estimation methods emphasize feature extraction from charging processes, including partial charging segments encountered during routine operation.
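As a sketch of such feature extraction, the function below computes a few of the derived quantities mentioned above from a charging segment. The feature set, names, and units are illustrative assumptions rather than a standard BMS interface:

```python
import numpy as np

def charging_features(t, v, i, temp):
    """Illustrative health-indicator features from a (possibly partial)
    charging segment: t in seconds, v in volts, i in amperes, temp in °C.
    The chosen features mirror common choices, not a fixed standard."""
    dt = np.diff(t)
    return {
        "duration_s": float(t[-1] - t[0]),
        "voltage_rise_v": float(v[-1] - v[0]),
        "charge_ah": float(np.sum(i[:-1] * dt) / 3600.0),  # coulomb counting
        "v_mean": float(np.mean(v)),
        "v_std": float(np.std(v)),
        "temp_rise_c": float(temp[-1] - temp[0]),
    }

# Hypothetical constant-current segment: 1 h at 2 A with a slow voltage ramp
t = np.linspace(0.0, 3600.0, 3601)
feats = charging_features(t, 3.6 + 1e-4 * t, np.full_like(t, 2.0), np.full_like(t, 25.0))
# feats["charge_ah"] ≈ 2.0 Ah, feats["duration_s"] = 3600 s
```

In a deployed pipeline, vectors of such features, computed per charging event, would form the model inputs for the learning-based estimators reviewed in later sections.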
However, SOH estimation based solely on onboard signals also presents significant challenges. In real-world scenarios, available data are often partial, irregular, and heterogeneous, with varying depths of charge, operating temperatures, and usage patterns. Sensor noise, limited sampling resolution, and the indirect observability of internal degradation states further complicate the inference of reliable health states. These constraints limit the effectiveness of purely physics-based approaches, motivating the adoption of data-driven and hybrid learning frameworks that can extract degradation-relevant patterns from noisy and incomplete measurements. As a result, recent research has increasingly focused on leveraging onboard BMS signals in conjunction with advanced neural architectures and learning strategies, such as sequence modeling, attention mechanisms, physics-informed learning, and robustness-aware training. These approaches aim to bridge the gap between limited onboard observability and the need for accurate, generalizable, and deployable SOH estimation in practical BMS.

2.3. Key Challenges: Nonlinearity, Degradation Diversity, and Data Scarcity

Despite extensive research efforts, accurate and reliable SOH estimation remains a challenging problem due to the nonlinear nature of battery degradation, strong dependency on operating conditions, and limited availability of representative and well-labeled data. These challenges fundamentally constrain the performance, generalization, and practical applicability of SOH estimation methods.

2.3.1. Nonlinear Degradation Behavior

Battery aging is inherently nonlinear throughout its lifetime, and the degradation rate is not constant across charge–discharge cycles [69]. The underlying physical and chemical processes that drive capacity fade and internal resistance growth evolve over time, with usage history and environmental conditions influencing their progression. Moreover, dominant aging mechanisms may shift as the battery transitions from early life to midlife and late life stages. As a result, the relationship between measurable health indicators, such as voltage response, impedance-related features, and temperature behavior, and SOH is often stage-dependent. This nonlinearity complicates the construction of a single, stable estimation model that remains valid throughout the entire battery lifecycle [70]. The challenge becomes particularly pronounced near the EOL, where degradation can accelerate rapidly, and uncertainty in prediction typically increases.

2.3.2. Degradation Diversity and Operating-Condition Dependency

Degradation diversity introduces a substantial generalization challenge for SOH estimation. Battery aging trajectories are strongly influenced by operating conditions, including ambient temperature, charge and discharge rates, depth of discharge, voltage window, and rest patterns [71]. In practical applications, batteries are often subjected to mixed charging protocols, sporadic charging events, and dynamic load profiles that differ significantly from those used in controlled laboratory testing procedures. In addition, intrinsic cell-to-cell variability and differences in chemistry, form factor, and manufacturing tolerances contribute to heterogeneous aging behavior even under nominally similar operating conditions [72]. Consequently, SOH estimation models trained on data collected under a limited set of conditions may exhibit degraded performance when deployed to unseen operating regimes, unless robustness and transferability are explicitly addressed.

2.3.3. Data Scarcity and Labeling Limitations

Data scarcity and labeling cost remain major barriers to scalable SOH estimation, particularly for data-driven and learning-based approaches [73]. Full-lifecycle degradation datasets are expensive and time-consuming to obtain, while operational data collected in real-world applications may be incomplete, noisy, or proprietary. Reliable ground-truth SOH labels typically require reference performance tests, such as full capacity measurements, which may interrupt normal operation and increase experimental burden. Furthermore, available datasets often provide limited coverage of late-life behavior and may be imbalanced across life stages, with comparatively fewer samples near the EOL [74]. These limitations increase the risk of overfitting, reduce model robustness, and hinder the estimation of reliable SOH under diverse and realistic operating conditions.

3. Neural Architectures and Learning Strategies Overview

In this review, approaches for LIB SOH estimation are systematically categorized into neural architectures and learning strategies. Neural architectures define the structural design of the model and its capability to represent and map complex features, whereas learning strategies govern how models are trained, adapted, and regularized to enhance robustness, generalization, and practical applicability. This distinction is essential, as model architecture alone does not guarantee reliable SOH estimation under diverse operating conditions without the inclusion of appropriate learning mechanisms.

3.1. Neural Architectures

Neural architectures refer to models explicitly constructed using artificial neural networks and trained through gradient-based optimization. These architectures are responsible for learning nonlinear relationships directly from battery data and form the foundation of modern data-driven SOH estimation methods.

3.1.1. Artificial Neural Networks (ANNs)

A typical ANN architecture comprises an input layer, one or more hidden layers, and an output layer, as shown in Figure 1. Each neuron receives signals from neurons in the preceding layer, multiplies them by corresponding weights, adds a bias term, and applies a nonlinear activation function to generate its output [75,76,77].
ANNs have been widely applied to battery SOH estimation due to their strong nonlinear approximation capability and data-driven nature. In battery applications, measurable signals such as voltage, current, temperature and engineered health indicators exhibit complex, coupled and non-stationary relationships with SOH. However, ANNs can model these complex relationships without requiring explicit physical equations.
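As a minimal illustration of this mapping, the per-neuron computation described above (weighted sum, bias, nonlinear activation) can be sketched in a few lines; the feature values, layer sizes, weights, and the choice of tanh below are hypothetical and not taken from any reviewed study.

```python
import math

def forward(x, weights, biases):
    """One fully connected layer pass: a = tanh(W x + b), element-wise per neuron."""
    return [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(weights, biases)]

# Hypothetical normalized health features: charge time, mean temperature, IC peak height
x = [0.8, 0.5, 0.6]

# One hidden layer with two neurons, followed by a single linear output neuron for SOH
hidden = forward(x, weights=[[0.4, -0.2, 0.1], [0.3, 0.5, -0.1]], biases=[0.05, -0.05])
soh = sum(w * h for w, h in zip([0.7, 0.6], hidden)) + 0.5  # linear output layer
print(round(soh, 3))
```

In practice the weights and biases would be fitted by gradient-based optimization against measured capacity labels rather than chosen by hand.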
In practice, ANNs are most effective when informative features are available and operating conditions are relatively consistent. Their primary limitation lies in the lack of explicit mechanisms to exploit temporal dependencies and local correlations inherent in battery degradation processes. As a result, ANN-based SOH estimators often rely heavily on feature engineering and preprocessing to compensate for their lack of sequence awareness. In addition, ANNs may generalize poorly under distribution shifts or in partial-observation scenarios.
Despite these limitations, ANNs remain an important baseline and conceptual foundation for more advanced architectures, including convolutional and recurrent neural networks. These architectures extend ANN principles to better capture local structure and temporal evolution in battery aging data.

3.1.2. Convolutional Neural Networks (CNNs)

A standard CNN architecture typically consists of convolutional layers, pooling layers, and fully connected layers, as indicated in Figure 2. Convolutional layers apply learnable kernels to local regions of the input signal to extract salient features, while pooling layers reduce the spatial or temporal resolution of feature maps, thereby improving computational efficiency and robustness to noise. Fully connected layers subsequently aggregate the extracted features to generate the final output [79,80].
CNNs are widely used in battery SOH estimation for their ability to extract localized degradation-relevant patterns from voltage, current, temperature and derived signals. In particular, one-dimensional CNNs are well suited to battery time-series data, where local temporal structures reflect electrochemical phenomena such as voltage plateau shifts, polarization effects and transient responses during cycling [82].
A key advantage of CNN-based SOH models is their capacity to learn informative representations directly from raw or minimally processed signals, thereby reducing dependence on handcrafted features. By stacking convolutional layers, CNNs can capture degradation characteristics at multiple temporal scales and provide effective noise suppression, which is beneficial under non-ideal sensing conditions [83].
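The local feature extraction described above can be sketched with a hand-rolled valid 1D convolution followed by max pooling; the voltage snippet and the slope-sensitive kernel are illustrative assumptions, not a reviewed model.

```python
def conv1d(signal, kernel):
    """Valid 1D convolution (cross-correlation, as in deep learning frameworks)."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def max_pool(feature_map, size=2):
    """Non-overlapping max pooling to downsample the feature map."""
    return [max(feature_map[i:i + size])
            for i in range(0, len(feature_map) - size + 1, size)]

# Hypothetical charging-voltage snippet (V); a [-1, 0, 1] kernel responds to the local
# slope, i.e. the voltage rise rate whose shape shifts as the cell ages
voltage = [3.60, 3.62, 3.66, 3.72, 3.80, 3.86, 3.90, 3.92]
features = conv1d(voltage, [-1.0, 0.0, 1.0])
pooled = max_pool(features)
print(pooled)
```

In a trained CNN the kernel values are learned, and many kernels are stacked across layers so that deeper feature maps respond to progressively larger temporal structures.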
However, CNNs primarily model local dependencies and therefore have limited ability to capture long-term degradation trends across extended lifecycles. As a result, standalone CNNs may underperform when long-horizon temporal correlations or partial-observation scenarios dominate. This limitation has motivated the widespread adoption of hybrid architectures that combine CNN-based local feature extraction with recurrent or attention-based modules to jointly model short-term signal structure and long-term aging dynamics [84].

3.1.3. Recurrent Neural Networks (RNNs)

RNNs are a class of neural network architectures specifically designed for modeling sequential data [85,86]. In an RNN, an input sequence is processed sequentially, and the network maintains an internal hidden state that serves as a memory of past information. At each time step, the current input is combined with the hidden state from the previous time step, enabling the model to learn temporal correlations and cumulative effects over time [87]. A typical RNN architecture is illustrated in Figure 3.
RNNs are used for battery SOH estimation because they can capture temporal dependencies and cumulative degradation across charge–discharge cycles. By maintaining an internal state that evolves with sequential inputs, RNNs can directly model the progression of battery aging and have therefore been widely adopted for sequence-based SOH estimation and degradation trajectory learning [89].
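The recurrent update that carries degradation memory across cycles can be sketched for a scalar hidden state; the per-cycle feature sequence and the weight values below are hypothetical.

```python
import math

def rnn_step(x_t, h_prev, w_x, w_h, b):
    """Single recurrent update: h_t = tanh(w_x * x_t + w_h * h_{t-1} + b)."""
    return math.tanh(w_x * x_t + w_h * h_prev + b)

# Hypothetical per-cycle health feature (e.g. normalized charge time) drifting with age
sequence = [0.95, 0.93, 0.90, 0.86, 0.81]
h = 0.0  # initial hidden state
for x_t in sequence:
    h = rnn_step(x_t, h, w_x=0.8, w_h=0.5, b=-0.1)
print(round(h, 3))  # final state summarizes the whole degradation sequence
```

The dependence of h on every earlier input is exactly what makes long sequences difficult: repeated multiplication through w_h is the source of the vanishing/exploding-gradient behavior discussed below, which gated variants (LSTM, GRU) alleviate.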
In practice, RNN-based SOH models are effective when degradation evolves smoothly and training data are well curated. However, conventional RNNs are prone to vanishing or exploding gradients when applied to long battery lifecycles, limiting their ability to reliably learn long-horizon dependencies. They are also sensitive to shifts in operating conditions and to validation leakage when sequences from the same cell appear in both the training and test sets.
These limitations have motivated the extensive use of gated recurrent architectures, such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, which improve gradient flow and robustness to long sequences. As a result, LSTM- and GRU-based models have become the dominant recurrent baselines in SOH estimation, while standard RNNs primarily serve as a conceptual reference for recurrent modeling in battery aging analysis [90].

3.1.4. Physics-Informed Neural Networks (PINNs)

PINNs are learning frameworks that integrate prior physical knowledge into neural network training in order to constrain model behavior beyond pure data fitting [91]. Rather than relying exclusively on observational data, PINNs incorporate governing relationships, such as differential equations, physical laws, or known functional dependencies, directly into the learning objective [92]. The overall schematic of a PINN is shown in Figure 4. In battery SOH estimation, this approach aims to improve physical plausibility, interpretability and generalization, particularly under limited data or extrapolative operating conditions [93].
In existing SOH applications, physics constraints are typically introduced either through explicit degradation laws derived from electrochemical or semi-empirical models, or through structured residual formulations that regulate the learned SOH evolution. By restricting the solution space to physically admissible trajectories, PINNs can mitigate non-physical predictions and reduce reliance on large and fully labeled datasets [95].
However, the effectiveness of PINNs is highly sensitive to the correctness and appropriateness of the imposed physical constraints. Oversimplified or misspecified degradation models may bias the learning process toward incorrect aging trends, suppress data-driven evidence and degrade performance under distribution shift. Moreover, joint optimization of data-fitting and physics-based residuals can introduce numerical stiffness and stability issues, particularly in long-horizon or online deployment scenarios. These issues highlight the need for careful residual design, constraint weighting and validation against unconstrained baselines.
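One common soft-constraint formulation, a data misfit plus a weighted physics residual, can be sketched as follows. Here the residual penalizes non-physical SOH "recovery" under an assumed monotonic capacity fade; the residual form, the weight lam, and the trajectories are illustrative assumptions rather than a specific reviewed method.

```python
def pinn_loss(pred, target, lam=0.5):
    """Composite objective: mean-squared data misfit plus a physics residual that
    penalizes any predicted SOH increase over consecutive cycles (capacity fade
    is assumed monotonically non-increasing)."""
    data = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    physics = sum(max(0.0, pred[i + 1] - pred[i]) ** 2
                  for i in range(len(pred) - 1)) / (len(pred) - 1)
    return data + lam * physics

target = [1.00, 0.98, 0.97, 0.95]
physical = [0.99, 0.98, 0.96, 0.95]      # monotone trajectory: residual is zero
nonphysical = [0.99, 1.01, 0.96, 0.95]   # transient SOH "recovery": penalized
print(pinn_loss(physical, target) < pinn_loss(nonphysical, target))
```

Note that this sketch also illustrates the misspecification risk discussed above: chemistries exhibiting genuine capacity recovery after rest would be penalized by this residual, biasing the learner.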
Overall, PINNs represent a promising but nontrivial extension of data-driven SOH estimation. Their practical utility depends not only on the availability of reliable physical priors but also on robust validation under varying operating conditions, making them best suited for hybrid or carefully stress-tested deployment settings rather than as universally applicable solutions.

3.2. Learning Strategies

Learning strategies define how neural architectures are trained, adapted, or constrained to address practical challenges in SOH estimation, including data scarcity, diversity of degradation, and domain mismatch. While neural architectures determine the representational capacity of a model, learning strategies play a critical role in enhancing generalization, robustness, and applicability under realistic operating conditions.

3.2.1. Transfer Learning (TL)

TL is a learning paradigm that aims to improve model performance on a target task by leveraging knowledge acquired from a related source task. Rather than training a model from scratch using only target-domain data, TL enables the reuse of feature representations or model parameters learned from a source domain, thereby reducing training cost and improving generalization, particularly when labeled data in the target domain are limited [96].
The fundamental assumption underlying transfer learning is that different tasks or datasets may share common latent structures or degradation-related features. In battery SOH estimation, degradation behaviors across different batteries, operating conditions, or usage scenarios often exhibit similar temporal patterns and electrochemical trends. By exploiting these shared characteristics, TL facilitates the transfer of useful information from a source battery dataset to a target battery dataset, even when the two domains differ in chemistry, temperature, or cycling protocol [97].
A typical transfer learning workflow involves two main stages. In the first stage, a neural model is trained on a source domain using sufficient historical data to learn general representations relevant to degradation. In the second stage, the pre-trained model is adapted to the target domain by applying it directly or by further fine-tuning selected layers using a limited amount of target-domain data [98]. This process allows the model to retain general knowledge from the source domain while adapting to domain-specific characteristics of the target batteries.
During transfer, the parameters learned from source-domain training are commonly used to initialize the model weights in the target-domain task. Such initialization provides a strong starting point for optimization, as the parameters encode meaningful representations related to degradation rather than random values. Consequently, TL often leads to faster convergence and improved predictive performance compared with models trained solely on target-domain data [99].
Recent studies have increasingly explored transfer learning for battery SOH estimation across different cells, chemistries, temperatures, and operating profiles. By pre-training models on large or diverse datasets and transferring them to data-scarce target scenarios, TL effectively enhances robustness and reduces reliance on extensive labeled datasets [100,101]. As a result, transfer learning has become a widely adopted and promising strategy for improving the generalization capability of learning-based SOH estimation methods in practical BMS.

3.2.2. Physics-Constrained and Physics-Informed Machine Learning (PCML/PIML)

PCML/PIML represent a class of hybrid learning strategies that integrate physical knowledge with data-driven models to improve interpretability, robustness, and generalization [102]. Unlike purely data-driven approaches, PCML/PIML frameworks incorporate system mechanisms, physical laws, or prior domain knowledge into the learning process, thereby addressing fundamental limitations related to data scarcity, extrapolation, and physical inconsistency [103].
Physics-constrained machine learning (PCML) focuses on explicitly restricting model behavior to satisfy known physical principles. Such constraints may be imposed through physics-guided loss functions, architectural restrictions, or parameter bounds that enforce physically admissible solutions. By embedding mechanistic knowledge directly into the optimization process, PCML reduces reliance on large-scale labeled datasets and mitigates overfitting, which is particularly important in applications where data collection is costly or limited. PCML has been successfully applied in various scientific and engineering domains, including materials science, energy systems, and battery state estimation, where preserving physical consistency is critical [104].
In contrast, physics-informed machine learning (PIML) emphasizes the use of physical knowledge as guiding information rather than strict constraints. In PIML frameworks, physical laws, empirical relationships, or engineering insights are typically incorporated as soft regularization terms or auxiliary objectives within the loss function. This formulation allows limited deviations from physical rules during training while encouraging the learned model to align with known system behavior. As a result, PIML provides a flexible balance between data-driven learning and physics-based reasoning, enabling improved generalization when models are deployed under unseen operating conditions or degradation regimes [45].
From a methodological perspective, PCML and PIML can be viewed as complementary strategies within a unified physics-guided learning paradigm. Existing studies commonly classify these approaches according to the stage at which physical information is introduced into the machine learning pipeline. Broadly, four representative categories can be identified: (i) physics-guided data or feature-space manipulation, where physical insights are used to preprocess signals or engineer degradation-sensitive features; (ii) physics-inspired architectural design, in which neural network structures or parameters are assigned explicit physical interpretations; (iii) physics-constrained or physics-informed loss formulation, where physical consistency is enforced through hard constraints or regularization terms; and (iv) hybrid physics-machine learning models, which directly couple physical models with neural networks, allowing one component to inform or constrain the other [105,106].
Hybrid physics-machine learning models are particularly attractive for battery SOH estimation, as they enable the simultaneous modeling of long-term degradation trends governed by physical processes and short-term variations captured through data-driven representations. By combining mechanistic models, such as equivalent circuit or reduced-order electrochemical models, with neural networks, these approaches exploit the complementary strengths of physical interpretability and learning flexibility [107]. Consequently, PCML/PIML strategies have become increasingly important for battery SOH estimation, where robustness under limited data, adaptability to varying operating conditions, and physically meaningful predictions are essential.
Despite these advantages, physics-informed approaches also introduce important limitations that are often under-discussed, particularly in the context of long-term and online BMS deployment. Specifically, PINNs/PCML/PIML can improve physical plausibility, but their benefits depend on the correctness of imposed physics. If degradation laws are oversimplified or misspecified, constraints can bias the learner toward an incorrect aging trajectory, masking data evidence and reducing cross-condition validity. This sensitivity is inherent because PINNs often embed a predefined degradation law, while PCML can enforce strict admissibility via loss/parameter bounds. Practically, constraint weighting and residual design can also introduce numerical stiffness, instability and scaling challenges for long-horizon/online deployment. Therefore, physics guidance should be validated (ablation with unconstrained baselines), stress-tested under distribution shift and accompanied by stability/compute reporting.
In summary, neural architectures determine the representational capacity of SOH estimation models, while learning strategies govern their ability to generalize, remain robust, and operate under realistic constraints. Therefore, the effectiveness of SOH estimation depends on the coordinated design of both components. The following section presents a systematic review of recent studies through this combined architectural and strategic lens.

4. Review of Neural Architectures and Learning Strategies for SOH Estimation

4.1. Sequential Deep Learning Architectures

Sequential deep learning architectures represent the earliest and most widely adopted data-driven paradigm for SOH estimation. This subsection reviews their evolution from canonical CNN-RNN models to increasingly robust, data-efficient, and optimized frameworks.
Jose and Shrivastava presented an analytical study focusing on the architecture of CNN-LSTM hybrid models for estimating the SOH of LIBs, incorporating bibliometric analysis, physics-based simulation, and data-driven validation, as depicted in Figure 5. Unlike works focused solely on accuracy, this paper systematically compares four integrated CNN-LSTM strategies: CNN-LSTM, LSTM-CNN, parallel CNN-LSTM, and attenuated CNN-LSTM, using the NASA battery aging dataset, supplemented by a 1D COMSOL electrochemical model and low-cost hardware-in-the-loop experiments. The results show that the canonical CNN-LSTM architecture consistently outperforms the alternatives, achieving an RMSE below 0.008 and an R2 above 0.99, while more complex variants suffer from instability and reduced accuracy [108].
Meng et al. proposed an Extended Long Short-Term Memory (xLSTM)-based SOH estimation framework designed to remain accurate under diverse charging strategies, addressing a key limitation of conventional LSTM- and CNN-based approaches, as demonstrated in Figure 6. This method integrates physics-informed feature engineering, focusing on incremental capacity (IC) curve characteristics, with voltage- and current-derived features from both charging and discharging phases. Feature relevance is rigorously screened using Spearman rank correlation, ensuring robustness to nonlinear relationships and measurement noise. The xLSTM architecture combines sLSTM (scalar LSTM) for numerical stability and noise suppression with mLSTM (matrix LSTM) for capturing complex multiplicative degradation interactions. Through systematic architectural ablation, the three-layer [‘s’, ‘m’, ‘s’] configuration was determined to be optimal, achieving an average MAPE of 0.20%, RMSE of 0.27%, and R2 of 0.997 on 124 MIT LFP cells with 72 fast-charging strategies, while maintaining comparable performance across NCM batteries, demonstrating strong cross-chemistry generalization. Notably, the model maintained reasonable accuracy even with fragmented IC data, highlighting its practical applicability [109].
Guo et al. proposed a dual dimensionality reduction and feature-weighting integrated transfer-learning framework (DDRFW-TL) for LIB SOH estimation, clearly addressing feature redundancy and cross-domain distribution mismatch. The study first constructed a comprehensive aging feature library from voltage, current, and energy signals across CC, CV, CC-CV, and voltage-window segments. It then applied a two-stage feature selection strategy that combined Lasso regression and Pearson correlation analysis. This process reduced the initial 132 features to only 7, while ensuring strong and consistent relevance to SOH in both source and target domains. The selected features were processed by an LSTM-based temporal model augmented with a dynamic feature-weighting module and an MMD-based domain adaptation layer, enabling the simultaneous prioritization of informative features and alignment of cross-domain distributions. Verification on the XJTU (NCM) and SNL (LFP) datasets under different temperatures and discharge rates demonstrated robust performance, achieving an average RMSE ranging from 0.7 to 1.5% and a MAPE lower than 1.3% with only 20 target domain samples, consistently outperforming the basic CNN-BiGRU, TCN, fine-tuning, and adversarial TL baselines [110].
Li et al. proposed a combined patch learning (PL) model for assessing the SOH of LIBs, in which a global GRU model is combined with CNN-BiLSTM patch models to correct for high-error local segments. A practical contribution is limiting the input to only constant-charge voltage curve characteristics and validating the relationship between the characteristics and SOH through correlation screening. Across all tested cells, the proposed framework consistently achieved low prediction errors, with RMSE, MAE, and MAPE values remaining below 0.72%, 0.59%, and 0.66%, respectively. These results demonstrate superior estimation accuracy and stronger generalizability compared to typical data-driven methods, including GRU, CNN-BiLSTM, LSTM, SVR, and ELM models [111].
He and Gong proposed a hybrid CNN-based SOH estimator with a Multi-scale Convolutional Attention (MCA) mechanism and residual learning, specifically targeting fast-charging variability that undermines feature consistency in conventional data-driven models. The MCA integrates multi-branch convolutions of 3 × 3, 5 × 5, and 7 × 7 with sequential channel and spatial attention, enabling adaptive emphasis on SOH-relevant features under different data regimes. A residual module further stabilizes learning and mitigates overfitting in small-sample settings. Validated primarily on the MIT fast-charging dataset and further on CALCE batteries, the proposed Res-MCACNN achieves strong accuracy with an average RMSE of 0.55% and MAE of 0.49% in limited sample cases, while maintaining errors within 1% across diverse charging strategies. More importantly, attention-weight analysis revealed that the model dynamically reallocates receptive fields in response to data availability, providing rare insight into how deep models adapt to training conditions [112].
Wang et al. presented a feature-centric hybrid deep learning framework that integrates Kolmogorov-Arnold Networks (KAN) with recurrent models for LIB SOH estimation, with particular emphasis on the systematic feature extraction and selection from the constant-voltage (CV) charging phase. The study extracts a large pool of 107 aging features spanning direct, evolutionary, and statistical descriptors of CV-phase current, capacity, and energy signals, and applies SVR-RFE to identify compact, physically meaningful subsets. The core methodological contribution is the RNN-KAN hybrid, where KAN replaces the conventional fully connected output layer to enhance nonlinear approximation capability while retaining temporal modeling strengths of RNNs. Extensive validation across multiple cathode chemistries, including NCA, NCM, and NCA + NCM, as well as various operating conditions, demonstrated that KAN integration consistently improved accuracy compared to standalone RNN, LSTM, and GRU models. This improvement achieved an average RMSE of less than 0.5% and reduced errors by approximately 80% in some cases. The study also demonstrated promising early-life prediction performance using only 20–30% of the cycle data [113].
Qu et al. presented a semi-supervised co-training framework for estimating the SOH of LIBs, aiming to address the scarcity of labeled aging data by simultaneously exploiting unlabeled charging data and sparse SOH labels. The proposed architecture combines an Extreme Learning Machine (ELM) trained on handcrafted charging features with a Bi-GRU network operating on deep features extracted via encoder–decoder. Through mutual pseudo-label exchange, the two models iteratively enhance each other’s predictions. The label selection mechanism filters out unreliable pseudo-labels, and dynamic masking retains the deep features most relevant to SOH. Extensive validation across the Oxford, CALCE CX2, and Tongji NCA datasets demonstrates that this framework achieves high accuracy, even with only 10% of the data labeled from a single battery, yielding average RMSE values of 0.34–1.35% and outperforming established semi-supervised methods, such as SVR-KNN and Dual-NARX [114].
Yuan et al. proposed a security-aware SOH estimation framework that explicitly addresses false data injection attacks (FDIA) in BMS by combining LOF-RANSAC-based data cleaning with a CNN-BiLSTM-Attention predictor. The key contribution lies in the multi-stage defense pipeline, where local outliers are first detected via density deviation (LOF) and then corrected by RANSAC with median fallback, significantly improving data integrity prior to learning. This design directly targets a real but underexplored vulnerability in data-driven SOH models. From a modeling perspective, integrating a CNN for spatial feature extraction, a BiLSTM for learning bidirectional temporal dependencies, and attention mechanisms for feature prioritization enables a robust mapping between incremental capacity-based health factors and SOH. Experiments on the Oxford and CALCE datasets demonstrate high accuracy, with RMSE and MAE values below 0.5%, and stable tracking capability even under injected anomalies. Additionally, reduced training and inference times are achieved, highlighting the potential for deployment [115].
Yu et al. proposed a cross-domain SOH estimation framework that integrates a CNN-BiLSTM with attention (CNN-BiLSTM-AM) and Similarity Network Fusion (SNF) to address limited labeled data and domain mismatch across LIBs. This approach combines multidimensional health indicators, such as incremental capacity (IC) curves, constant current charge time (CCCT), constant voltage charge time (CVCT), and cycle index, filtered through Pearson correlation analysis, allowing for a more informative feature representation than using only raw voltage/current data. When sufficient training data is available, CNN-BiLSTM-AM achieves extremely low error rates, with an RMSE of 1.04 × 10−7. In contrast, in data-scarce multi-domain scenarios, SNF-CNN-BiLSTM-AM maintains robust performance, with an RMSE of less than 2.96 × 10−2 and an R2 of 0.9736 on the NASA and CALCE datasets [116].
Zhang et al. proposed a CNN-LSTM hybrid model combined with transfer learning (CNN-LSTM-TL) for LIB SOH estimation. The model utilizes the charging capacity sequence extracted from the partial charging region with 3.0–3.5 V before 80% SOC as the input, which is normalized using min-max standardization. A CNN is employed to automatically extract SOH-related features, while an LSTM network captures the temporal degradation characteristics. The SOH estimation performance is evaluated using MAE and RMSE. With transfer learning applied using only the first 20% of target-domain data, the CNN-LSTM-TL model achieves an average MAE of 0.0217 and RMSE of 0.0283, corresponding to 21.4% and 19.6% reductions, respectively, compared with the CNN-LSTM model without transfer learning. The most significant improvement is observed for short-life batteries, indicating the effectiveness of transfer learning in mitigating data distribution discrepancies caused by different charging strategies [82].
Yu and Hu proposed an enhanced Whale Optimization Algorithm (EWOA)-optimized CNN-BiLSTM-Attention model for LIB SOH estimation. Seven health indicators (HIs) combining static peak features from incremental capacity (IC) curves and dynamic operating features, including terminal voltage and discharge current, are extracted and validated using grey relational analysis, showing correlation coefficients of 0.65–0.96 with SOH. The proposed method integrates dual-strategy population initialization, adaptive parameter tuning, and hybrid mutation mechanisms into WOA to simultaneously optimize network hyperparameters and weights, while a squeeze-and-excitation (SE) attention mechanism dynamically emphasizes degradation-sensitive features. SOH estimation performance is evaluated using MAE, RMSE, and R2, achieving an overall MAE of 0.0054, RMSE of 0.0069, and R2 of 0.9924, while maintaining R2 above 0.96 under 50% incomplete charging scenarios, demonstrating strong robustness and generalization capability [117].
Li et al. proposed a novel ACLA hybrid model that combined an attention-CNN-LSTM-augmented neural ordinary differential equation (ANODE) model for LIB SOH estimation by modeling battery degradation as a continuous-time dynamical process. The method leverages features extracted from constant-current charging curves, where an attention mechanism emphasizes degradation-sensitive regions, CNN layers capture local voltage-time characteristics, and an LSTM learns long-term temporal dependencies across cycles. The resulting representations are then embedded into an ANODE framework to constrain SOH evolution. Model performance was evaluated using RMSE for SOH estimation and absolute error for end-of-life prediction. Experimental results showed that the proposed model achieved SOH estimation errors of approximately 1.01–1.04% on the TJU dataset and around 2.24% on the more challenging HUST dataset, outperforming conventional NODE- and ANODE-based approaches and demonstrating strong generalization capability across diverse battery chemistries and complex degradation behaviors [118].
Chen et al. proposed a data-driven SOH estimation framework for LIBs based on fragmented charging data, in which an improved gated recurrent unit (GRU) recurrent neural network optimized by the whale optimization algorithm (WOA) is employed to achieve accurate SOH estimation with limited training data. The SOH estimator utilizes the charging time within two specific voltage ranges during the constant-current charging stage, namely 3.55–3.75 V and 3.80–4.15 V, as health features, which are directly extracted without differential processing to ensure robustness against noise interference. The extracted health features are normalized and fed into the GRU network, whose hyperparameters, including learning rate, number of neurons, and iteration number, are optimized using WOA to minimize the mean squared error (MSE). The SOH estimation performance is evaluated using R2, MAE, AAE, RMSE, and maximum error, achieving an RMSE of 0.189% and a maximum error of 0.56% when only the first 15% of the lifecycle data is used for training, while maintaining estimation errors below 1% across different cells and battery types, demonstrating high accuracy and strong generalization capability [119].
Liu et al. proposed a SOH prediction method for LIBs based on Incremental Capacity Analysis (ICA) and an Adaptive Genetic Algorithm optimized Elman neural network (AGA-Elman), where IC curves are extracted from current, voltage, capacity, and time data and denoised using a Savitzky-Golay filter to enhance degradation-sensitive features. Pearson correlation analysis and aging-level variation analysis are employed to select IC features around peak 1 position as health indicators, which are then used as input to the Elman neural network to capture temporal degradation characteristics. The AGA dynamically adjusts crossover and mutation probabilities to optimize the initial weights and thresholds of the Elman neural network, thereby enhancing its global search capability and convergence speed. The SOH estimation performance is evaluated using MAE, MAPE, RMSE, and R2. Experimental validation on four 18650 LIBs demonstrates that, when trained with 50% of the data, the proposed AGA-Elman model achieves MAE, MAPE, and RMSE values all below 2.43%, with R2 exceeding 0.96, significantly outperforming BP, Elman, ELM, RBF, RF, SVM, GRU, and GA-Elman models, while maintaining robust prediction accuracy under limited training data conditions [120].
Mao et al. introduced an advanced method for predicting LIB SOH by integrating a two-dimensional convolutional neural network (2DCNN) and a bidirectional long short-term memory (BiLSTM) network with the Gramian Angular Field (GAF) technique. Health indicators were derived from the incremental capacity and charging voltage curves. The recommended inputs to the CNN-based estimator were the complete CC-V time series values and the incremental capacity sequences corresponding to the 3.90 to 4.10 voltage interval in each cycle, to capture nonlinear degradation patterns. The estimation performance was evaluated using multiple metrics, including RMSE, MAE, MAPE, MSE, and absolute error, and Bayesian optimization was employed to determine the optimal hyperparameters for the 2DCNN-BiLSTM architecture. The results demonstrated superior accuracy and strong generalization across datasets, achieving an average RMSE of 0.0112 and an average MAE of 0.0087, confirming the method's practical value for BMS operation at different temperatures [121].
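The GAF encoding that turns 1-D sequences into 2-D CNN inputs has a compact closed form. The sketch below computes the Gramian Angular Summation Field; the min-max rescaling choice is an assumption, not necessarily the paper's exact variant.

```python
import numpy as np

def gasf(series):
    """Gramian Angular Summation Field image of a 1-D sequence.

    Sketch of the GAF idea: rescale to [-1, 1], interpret values as
    cosines of angles, and form the matrix of angle-sum cosines.
    """
    x = np.asarray(series, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0   # rescale to [-1, 1]
    x = np.clip(x, -1.0, 1.0)
    s = np.sqrt(1.0 - x ** 2)                             # sin(arccos(x))
    return np.outer(x, x) - np.outer(s, s)                # cos(phi_i + phi_j)
```

Each curve thus becomes an n x n image whose pixel (i, j) encodes the temporal correlation between samples i and j, which is what lets a 2-D CNN exploit standard image-style convolutions.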
Qian et al. developed a CNN-SAM-LSTM hybrid neural network for the simultaneous estimation of state of charge (SOC), state of energy (SOE), and SOH in LIBs under dynamic operating conditions, explicitly addressing the strong coupling and time-scale mismatch between these state variables. The core architectural innovation is the Customized Gate Control (CGC) layer, which combines multiple shared expert networks and task-specific experts (CNN + self-attention) with gate networks to selectively fuse shared and specialized features for each state. This design enables effective multi-task learning while mitigating negative transfer between SOC, SOE, and SOH estimation. Another contribution is a shared loss function weighted by homoscedastic uncertainty, which automatically balances learning difficulty across the three tasks. Extensive validation across UDDS, BBDST, and CC discharge datasets demonstrates robust performance, with the RMSE of SOH generally below 1%, significantly outperforming individual- and dual-state estimators [122].
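Homoscedastic-uncertainty weighting of multi-task losses follows a standard general form, shown below as a numpy sketch (not the paper's implementation): each task loss is scaled by exp(-s) and penalised by s, where s = log(sigma^2) would be learned jointly with the network.

```python
import numpy as np

def uncertainty_weighted_loss(task_losses, log_vars):
    """Homoscedastic-uncertainty weighting of multi-task losses.

    Sketch of the standard weighting rule: tasks with high learned
    variance are down-weighted, and the log-variance term prevents
    the trivial solution of infinite uncertainty.
    """
    total = 0.0
    for L_i, s_i in zip(task_losses, log_vars):
        total += np.exp(-s_i) * L_i + s_i
    return total
```

With all log-variances at zero the rule reduces to a plain sum of task losses, so the learned weights only depart from uniformity when the tasks genuinely differ in difficulty.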
Li et al. proposed a feature-fusion-driven hybrid deep learning framework for LIB SOH prediction, integrating multidimensional health indicators with temporal modeling and automated hyperparameter optimization. This approach constructs a three-dimensional synthetic feature space by fusing Q-V, T-V, and dQ/dV-V characteristics, thereby enabling a richer representation of degradation dynamics compared to methods that use only a single feature. Four parallel Temporal Convolutional Networks (TCNs) independently extract feature-specific temporal patterns, whose outputs are adaptively weighted via an attention mechanism and sequentially modeled using a BiGRU. Model parameters are optimized using the Beluga Whale Optimization (BWO) algorithm, minimizing manual tuning and improving robustness. Extended experiments on the MIT dataset demonstrated high accuracy, with an RMSE of 0.32% and a MAPE of 0.26%. Transfer learning with fine-tuning enabled effective cross-material generalization to the Oxford dataset, using only 10% of the target data with an RMSE of 0.20%. Compared to the basic TCN, BiGRU, and SVM models, the proposed model consistently exhibited a tighter error distribution and higher stability [123].
Sequential deep learning architectures, particularly RNN-based and CNN-RNN hybrid models, represent the earliest and most extensively explored data-driven paradigm for LIB SOH estimation. The reviewed studies demonstrate that such architectures are effective in capturing nonlinear temporal degradation patterns under controlled cycling conditions, and their performance can be further enhanced through architectural refinements, semi-supervised learning, robustness-oriented designs, and automated hyperparameter optimization. However, despite continuous improvements in accuracy, these models generally remain sensitive to data distribution shifts, charging protocols, and operating conditions, owing to their reliance on large, well-curated training datasets and the absence of explicit physical constraints. These limitations motivate the subsequent development of attention-based, physics-informed, and adaptive learning frameworks, which aim to improve generalization, interpretability, and practical deployability beyond what purely sequential deep learning models can offer.

4.2. Attention-Based and Transformer Architectures

Building upon sequential deep learning models, attention-based and Transformer architectures have emerged as powerful alternatives for SOH estimation, owing to their ability to capture long-range dependencies, selectively emphasize degradation-relevant features, and adapt to complex, non-stationary battery aging behaviors.
Liu et al. introduced a two-layer SOH estimation framework designed to address the well-known “paradoxical triad” of BMS requirements: accuracy, computational efficiency, and generalizability. The architecture combines lightweight multi-scale dynamic convolution (MSDC) to extract local degradation characteristics with an enhanced Retention Network (RetNet) that captures long-term aging trends with linear O(n) complexity, making it particularly suitable for embedded implementations. A notable methodological contribution is the multidimensional clustering group sampler, which groups samples by sequence length, cycle index, statistical charge characteristics, and battery ID, reducing padding rates by over 70% and improving batch homogeneity. The study further enhances generalizability through a hybrid PSO-BOA hyperparameter optimization strategy, effectively balancing global exploration and local refinement on heterogeneous datasets. Extensive evaluations on the NASA and CALCE datasets revealed consistently high accuracy, with a minimum RMSE of approximately 0.0069 and a maximum R2 of about 0.9986, as well as robust performance under small-sample and capacity-regeneration conditions. Importantly, detailed analyses of latency, FLOPs, and memory showed inference times well within automotive BMS constraints, reinforcing the model’s feasibility for deployment [124].
Zhou et al. proposed a multi-level feature fusion-driven deep learning framework for LIB SOH estimation under short-time working conditions (STWC), aiming to bridge the gap between laboratory CC-CV tests and real-world operating profiles of EVs. The study introduces an MR-MLP-based feature fusion scheme that links short-term discharge characteristics, including the number of cycles, voltage, cumulative capacity, and energy, with long-term degradation descriptions. This is followed by an LSTM-Transformer hybrid model to capture both sequential dependence and global feature interaction. Validation across multiple chemistry types, including NMC, NCA, and LCO, with capacities of 2–3 Ah, demonstrated high accuracy, with RMSE generally below 0.5% and significantly outperforming the basic LSTM, Transformer, and CNN-LSTM models [125].
Li et al. proposed a transfer learning-enhanced hybrid neural network (TL-LSTM-MHDA-iTransformer) for jointly estimating the SOH and state of charge (SOC) of LIBs. The model takes as inputs discharge voltage-drop time features related to SOH, together with voltage and current features related to SOC, with feature selection guided by Pearson correlation analysis. A hybrid architecture combining LSTM and an improved iTransformer with multi-head differential attention (MHDA) is employed to capture long-term temporal dependencies while suppressing noise. Transfer learning is applied using 10–20% of target-domain lifecycle data, and the estimation performance is evaluated using MAE and RMSE. The proposed model achieves SOH RMSE values of 2.09%, 2.32%, and 0.62%, with corresponding MAE values of 1.36%, 2.26%, and 0.25% for NCA, NCM, and Kokam batteries, respectively. For SOC estimation, RMSE values are maintained within 1.52%, 1.55%, and 1.29%, while the corresponding MAE values are 0.25%, 0.30%, and 0.05%, demonstrating robust estimation performance under different temperatures, battery chemistries, and noise conditions [126].
Yan et al. proposed a novel hybrid neural network-based SOH estimation method with multi-feature extraction for LIBs. The method extracts 12 health features (HFs) from charge/discharge curves and incremental capacity (IC) curves, including CC-CV charging time, Equal Voltage Drop Time (EVDT), average charge/discharge voltage and current, and IC peak height, position, and area, with feature relevance verified using the Pearson correlation coefficient. A hybrid architecture integrating dilated residual denoising convolution (DRDC), rotary position embedding (RoPE), Transformer, and channel attention mechanism is developed to achieve noise suppression, model long-range dependencies, and dynamically weight features. SOH estimation performance is evaluated using MAE, RMSE, and R2, achieving on the NASA dataset an MAE of 0.0072, RMSE of 0.0097, and R2 of 0.9896 for the best-performing test battery, while all test batteries maintain R2 values above 0.95 across both NASA and CALCE datasets, demonstrating robust estimation performance across different batteries [127].
Zhan et al. proposed a Physics-Informed Neural Network integrated with a Transformer (PI-TNet) for LIB SOH prognostics, in which the Verhulst physical degradation model is embedded into a Transformer architecture (ViT) as learnable parameters to impose physically meaningful constraints on the deep learning framework, as shown in Figure 7. The proposed SOH estimation approach utilizes a Convolutional Data Processor to extract multidimensional electrochemical features, employing voltage, current, temperature, and time-series cycling data as input parameters derived from NASA and CALCE LIB aging datasets. The SOH is defined based on capacity degradation and predicted through long-sequence temporal modeling. The estimation performance is evaluated using MAE, RMSE, and R2, demonstrating superior accuracy and generalization capability compared to conventional data-driven and physics-informed models. Specifically, the PI-TNet achieves RMSE values as low as 0.00223 (NASA B0007) with R2 up to 0.97844 on the NASA dataset, while maintaining robust performance on cross-dataset validation with the CALCE dataset, confirming its effectiveness in modeling long-term battery degradation dynamics [128].
Yuan et al. proposed a VMD-CNN-Transformer hybrid framework for LIB SOH prediction that explicitly targets capacity regeneration, measurement noise, and long-term dependency learning. The core idea is to apply Variational Mode Decomposition (VMD) to smooth raw capacity trajectories, separating the global degradation trend from high-frequency fluctuations induced by regeneration and noise. A 1D CNN is then used to extract local features, while the Transformer encoder captures long-range temporal dependencies through self-attention and positional encoding. The iterative (recursive) prediction strategy enables full-lifecycle SOH forecasting using only 5–6% of early-cycle data, addressing data scarcity in early diagnosis. Verification on the CALCE and NASA datasets showed that the proposed model consistently outperformed CNN, LSTM, GRU, Transformer, and CNN-Transformer baselines, achieving an average MAE of 0.0116 and RMSE of 0.0151 on CALCE, with similarly strong generalization on NASA cells [129].
Liu et al. proposed a segmented hidden Markov model-Transformer-BiGRU (SHMM-Transformer-BiGRU) hybrid framework for data-driven SOH prediction, targeting non-stationary degradation and feature redundancy issues in LIBs. The main novelty lies in introducing SHMM-based temporal segmentation, which converts raw multi-dimensional time-series features into stage-level latent state sequences, thereby enabling the explicit modeling of degradation phase transitions. These semantically compressed features are then enhanced by the Transformer to capture long-range dependencies and processed by the BiGRU to learn bidirectional temporal correlations. A differential evolution (DE) algorithm is further used to optimize the global hyperparameters. Two novel charge-based temporal indicators, VRT and VDT, were designed and combined with static voltage features; gray relational analysis confirmed their strong correlation with SOH. Verification on NASA and CALCE datasets showed superior accuracy with RMSE of 1.25% and 3.28%, respectively, and favorable accuracy-efficiency trade-offs compared with LSTM, CNN-LSTM, CNN-Transformer, and parallel Transformer-GRU baselines [130].
Zhang et al. proposed a CNN-Transformer framework with Test-Time Training (TTT) for battery pack-level SOH estimation under real-world operating conditions, explicitly addressing distribution drift and temporal degradation adaptation. The method integrates three tightly coupled ideas. First, interpretable feature engineering: eight health indicators, including energy, capacity, time, and temperature from CC/CV charging between 50–100% SOC, are selected via gray relational analysis, with all correlations exceeding 0.65. Second, multi-task learning: SOH regression and self-supervised charging-voltage reconstruction are jointly optimized, improving representation learning without additional labels. Third, selective TTT: only the Transformer attention layers and the reconstruction head are updated online with unlabeled test data, enabling continuous adaptation while avoiding catastrophic forgetting. Validated on four real EV datasets, the approach achieves very low MAE values, ranging from 0.0029 to 0.0036, for smooth LiFePO4 degradation and maintains strong robustness for non-monotonic NCM aging with capacity recovery, outperforming CNN-Transformer, auxiliary-only, and static TTT baselines [131].
As summarized in Table 2, reported Transformer gains should be interpreted jointly with input sequence length (n) and on-device computational cost. Since self-attention scales as O(n²) in time and memory, high reported accuracy is deployment-relevant only when long-horizon modeling is achieved with bounded latency and footprint. Among the reviewed studies, only efficiency-aware designs explicitly address this limitation. In particular, RetNet reduces complexity to O(n) and reports latency, FLOPs and memory within automotive BMS constraints, while maintaining high accuracy with RMSE of 0.0069 and R2 of 0.9986. Hybrid strategies partially mitigate overhead. Specifically, feature-engineered CNN-Transformer pipelines reduce training and memory demands relative to end-to-end Transformers, while test-time training that updates only attention layers achieves stable MAE values of 0.0029–0.0036 on real EV data. Overall, Table 2 highlights that architectural novelty alone is insufficient, underscoring the need for efficiency-aware design and transparent cost reporting to ensure practical relevance.
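The quadratic-versus-linear trade-off can be made concrete with a back-of-envelope cost model. The two functions below count only the dominant multiply-accumulate terms and ignore projections and normalization; they are an illustrative approximation, not a measured benchmark.

```python
def attention_macs(n, d):
    """Multiply-accumulate count of the QK^T and AV products in one
    self-attention layer: both scale as n^2 * d (projections ignored)."""
    return 2 * n * n * d

def retention_macs(n, d):
    """A retention-style linear recurrence instead scales as n * d^2."""
    return n * d * d
```

Doubling the sequence length quadruples the attention cost but only doubles the recurrence cost, which is why long-horizon SOH histories favor O(n) designs on embedded BMS hardware.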
Attention-based and Transformer architectures significantly extend the capability of conventional sequential models by enabling selective feature weighting and long-horizon dependency modeling in LIB SOH estimation. The reviewed studies demonstrate that these architectures are particularly effective in handling complex degradation patterns, non-stationary behaviors, and heterogeneous operating conditions, especially when combined with hybrid sequential modeling, physics-informed constraints, or signal decomposition techniques. Moreover, recent advances incorporating transfer learning, test-time adaptation, and degradation-stage segmentation highlight a growing emphasis on robustness and real-world applicability. Nevertheless, the improved accuracy and adaptability of Transformer-based models often come at the cost of increased architectural complexity and computational demand, underscoring the need for careful co-design of efficiency, interpretability, and generalization in practical BMS implementations.

4.3. Physics-Informed and Hybrid Neural Architectures

To overcome the limited interpretability and generalization of purely data-driven models, physics-informed and hybrid neural architectures incorporate mechanistic battery knowledge into the learning process, enabling physically consistent SOH estimation across varying operating conditions, chemistries, and data availability.
Luo et al. proposed a physics-informed hybrid neural network (PIHNN) that tightly integrates a multiphysics electrochemical-thermal-mechanical-side-reaction (ETMS) aging model with deep learning to enhance the SOH estimation of LIBs. A key improvement is the inclusion of membrane resistance as a physically significant health indicator, motivated by its strong monotonic relationship with capacity degradation arising from SEI layer growth and lithium plating. This relationship is explicitly embedded in the learning process through a physics-constrained loss function, ensuring physically consistent SOH trajectories. The data-driven component utilizes a CNN-GRU-Attention architecture to simultaneously capture local degradation patterns, long-term aging dynamics, and salient temporal features, while Bayesian optimization is employed to efficiently tune hyperparameters. Extensive validation across various C-rates (0.5C, 1C, and 2C) and on the public Oxford and MIT datasets demonstrates that PIHNN consistently outperforms the basic CNN-GRU, GRU, and attention models, achieving MAE and RMSE values typically below 0.5%, with particularly strong robustness under varying operating conditions [132].
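A physics-constrained loss of this kind can be sketched in a few lines. The version below adds a hinge penalty on any predicted SOH increase between consecutive cycles; the hinge form and the weight `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def physics_constrained_loss(soh_pred, soh_true, lam=0.1):
    """MSE data loss plus a hinge penalty on predicted SOH increases.

    Sketch of the physics-constrained loss idea: SOH should not grow
    with cycling, so upward jumps in the prediction are penalised.
    """
    soh_pred = np.asarray(soh_pred, dtype=float)
    soh_true = np.asarray(soh_true, dtype=float)
    data_loss = np.mean((soh_pred - soh_true) ** 2)
    # penalise any predicted SOH increase between consecutive cycles
    increments = np.diff(soh_pred)
    physics_loss = np.mean(np.maximum(increments, 0.0) ** 2)
    return data_loss + lam * physics_loss
```

In a real deployment the monotonicity constraint would be relaxed or windowed to accommodate genuine capacity-regeneration events, but the sketch shows how physical plausibility enters the objective alongside data fit.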
Wang et al. proposed an end-to-end SOH estimation framework that combines polarization-equilibrium analysis with an Adaptive Sampling Deep Neural Network (ASDNN) to enable accurate SOH inference from partial charging data, addressing a critical gap between laboratory protocols and real-world EV operation. A key methodological improvement is the transformation of charge data from conventional time-voltage (t-V) space to voltage-differential-voltage (V-ΔV) coordinates, eliminating dependence on absolute time alignment and avoiding the heavy filtering typically required for IC-based methods. The study further introduces an Independent Indicator (II) to verify the monotonic correlation between ΔV and SOH across most voltage ranges, providing a clear physical rationale for learning in the transformed space. The ASDNN utilizes fixed-window adaptive sampling to handle variable-length partial charging segments and is trained exclusively on full CC-CV data. Experiments on LG INR21700 M50E cells, conducted on segmented charging profiles that included polarization transients, demonstrated strong accuracy, with an MAE of approximately 0.0145, an RMSE of 0.0185, and an extremely low inference latency of approximately 0.0117 ms, outperforming basic CNN, RNN, attention, SVM, and GPR models, especially in the presence of polarization effects [133].
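The coordinate transformation can be sketched as follows, reading ΔV as the voltage rise over a fixed look-ahead interval and resampling it onto a voltage grid. The look-ahead `dt`, the grid size, and the interpolation scheme are illustrative assumptions; the voltage is assumed monotonically increasing over the segment.

```python
import numpy as np

def v_dv_transform(t, v, dt=30.0, n_grid=50):
    """Re-express a charging curve from (t, V) to (V, dV) coordinates,
    with dV(t) = V(t + dt) - V(t) resampled on a uniform voltage grid.

    Sketch of the transformation idea: the result depends only on local
    voltage dynamics, not on absolute time alignment.
    """
    t = np.asarray(t, dtype=float)
    v = np.asarray(v, dtype=float)
    v_ahead = np.interp(t + dt, t, v)        # voltage dt seconds later
    dv = v_ahead - v
    v_grid = np.linspace(v.min(), v.max(), n_grid)
    return v_grid, np.interp(v_grid, v, dv)
```

Because the output is indexed by voltage rather than time, partial segments recorded at arbitrary starting points map onto the same feature axis, which is what makes the representation compatible with fragmented field data.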
Li et al. proposed a unified physics-informed PBSID-Transformer framework for LIB SOH estimation that explicitly decouples SOC-SOH coupling by integrating reduced-order electrochemical modeling, predictor-based subspace identification (PBSID), and Transformer learning. A simplified electrochemical model is first derived via volume-averaging and linearization, yielding a low-order LTI state-space representation with clear physical meaning. PBSID is then employed to identify degradation-sensitive parameters across the full lifecycle, generating compact and interpretable features, especially those related to input gain and resistance. These features are finally processed by a Transformer to capture long-range aging dependencies. Validation on proprietary LiFePO4 batteries and the public CALCE LiCoO2 datasets revealed high accuracy, with a joint-cell RMSE of 0.34–0.38%, a cross-cell maximum error of 2.06%, and robust generalizability, while reducing training time and memory by orders of magnitude compared to end-to-end Transformers and LSTM baselines [134].
Wang et al. proposed a generalizable physics-informed neural network (PINN-LTB) for LIB SOH estimation using partial charging segments. The method extracts statistical, time-frequency, and physics-informed features from the 25–75% SOC charging voltage window, avoiding reliance on full charge–discharge cycles. A hybrid PINN-Lasso-Transformer-BiLSTM architecture is developed, where Lasso regression enables sparse feature selection and a nonlinear empirical degradation model is embedded as a learnable physical constraint within a dynamically weighted composite loss. The SOH estimation performance is evaluated using MAE, RMSE, and R2, achieving an RMSE generally below 0.6%, an MAE below 0.5%, and an R2 consistently above 0.98 across four battery chemistries. This approach maintains strong generalization across chemistries, temperatures, and C-rates, including transfer learning and zero-shot scenarios [135].
Tian et al. proposed a generic physics-informed neural network (PINN) framework for LIB SOH estimation, integrating SEI film growth kinetics reconstructed via Universal Differential Equations (UDE) with data-driven learning. The model adopts a dual-branch architecture, where the first MLP branch maps extracted features and cycle number to preliminary SOH, while the second MLP branch embeds the SEI growth degradation model and maps the predicted SEI film volume to SOH through a learned nonlinear relationship. The input features consist of 13 health indicators extracted from a 71–90% SOC charging voltage window, including statistical descriptors of the charging voltage sequence, charging capacity sequence, and charging time. Model performance is evaluated using MAE, RMSE, and MAPE, achieving an average MAPE of approximately 0.97%, with the RMSE mostly ranging from around 0.5% to 1%, depending on the dataset. The framework demonstrates strong generalization performance across various chemistries, including LFP, NCM, and NCA, as well as different charging protocols, small-sample conditions, and transfer learning scenarios [136].
Zhang et al. proposed a Time Series Physics-Informed Neural Network (TS-PINN) for LIB SOH estimation by jointly modeling time-series degradation behavior and physics-based capacity attenuation dynamics. The model utilizes incremental capacity (IC) features extracted from constant-current charging curves, along with raw cycling data. Temporal dependencies are captured via LSTM, and battery degradation is constrained through a PDE-based physical loss model with adaptive weight adjustment. SOH estimation performance was evaluated using MAPE and RMSE. On the TJU and XJTU datasets, the proposed TS-PINN achieved MAPE values ranging from 0.35% to 1.22% and RMSE values from 0.0052 to 0.0150, depending on the battery batch. On the NASA dataset, TS-PINN achieved MAPE values in the range of 1.36% to 1.61% and RMSE values between 0.0130 and 0.0174, demonstrating strong generalization performance across different battery chemistries and operating conditions [95].
Shaosen et al. proposed a physics-constrained machine learning (PCML) framework for real-time LIB SOH estimation using CV charging data, where the discrete state equations of a second-order RC equivalent circuit model are embedded into a recurrent neural network architecture to enable interpretable parameter identification and SOH prediction. The PCML-based SOH estimation utilizes CV-phase terminal voltage, charging current, open-circuit voltage, and SOC information derived from the equivalent circuit model as input features, allowing SOH to be inferred through the identification of internal resistance correlated with capacity degradation. The estimation performance is evaluated using the MSE between calculated and measured current profiles, demonstrating superior accuracy compared with particle swarm optimization, recursive least squares, and first-order RC-based PCML approaches. The proposed method achieves a minimum MSE of 0.0000886 and exhibits a strong negative correlation coefficient of −0.9362 between the estimated internal resistance and capacity-defined SOH, validating its effectiveness for real-time and mechanistically interpretable battery health monitoring [104].
Yang et al. proposed a physics-informed multi-task health management framework for LIBs that enables the co-estimation of SOH, remaining useful life (RUL), and short-term degradation path (S-DP) by integrating a Customized Gate Control (CGC) model with PINN. The proposed approach employs simplified time-series Transformer and BiLSTM architectures to extract temporal and feature information from 20 statistical features derived from voltage and current curves surrounding the CV charging phase, including cycle number, accumulated charge, mean value, standard deviation, duration, kurtosis, skewness, curve slope, curve entropy, and cross-time deviation features. Physical consistency is enhanced by incorporating partial derivatives of SOH and RUL outputs with respect to input features into the loss function, forming a deep hidden physics model without explicitly defining degradation equations. The SOH estimation performance is evaluated using MAPE and RMSE, achieving a MAPE of 0.75% for NCM batteries, 1.08% for NCA batteries, and 0.85% for LFP batteries, while the proposed framework also attains an average deviation of approximately 0.01 for short-term degradation path prediction and an MAE of 104 cycles for RUL prediction, demonstrating superior accuracy, generalization capability, and robustness across different battery chemistries and operating conditions [137].
Chen et al. proposed a SOH estimation method for LIBs based on a Semi-empirical Physics-Informed Neural Network (SPINN), in which a semi-empirical battery capacity degradation model derived from the Logistic empirical equation and Arrhenius law is embedded into a neural network as physical constraints. The SOH estimation framework integrates a surrogate neural network with a physical dynamic model, where the input features consist of conventional health factors extracted from charge/discharge curves, including peak values of the incremental capacity curve during CC charging, charging duration, voltage-related point features, cycle number, and temperature. The SPINN training process incorporates a composite loss function composed of data fitting loss, physical residual loss, and derivative residual loss, further enhanced by a Huber loss function and adaptive loss weighting strategy. The SOH estimation performance is evaluated using MAE, MAPE, RMSE, RMSPE, and R2 across the MIT-Stanford, NASA, and CALCE datasets, where the proposed method achieves a maximum RMSPE below 0.65% under extremely small-sample training conditions of 10% training data and maintains R2 values above 96.9% for different battery chemistries and degradation patterns [138].
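The Logistic-plus-Arrhenius structure of such a semi-empirical degradation model can be written compactly. In the sketch below, an Arrhenius factor scales the fade rate with temperature and a logistic curve describes the decline of SOH with cycle number; every parameter value is assumed for illustration, not taken from the paper's fitted constants.

```python
import numpy as np

def semi_empirical_soh(cycle, temp_K, k0=0.005, Ea=3.0e4, n0=1000.0,
                       R=8.314, Tref=298.15):
    """Logistic capacity-fade curve whose rate follows an Arrhenius law.

    Illustrative form only: k0 is a reference fade rate, Ea an activation
    energy, n0 the cycle at which SOH crosses 0.5 at Tref.
    """
    # Arrhenius temperature scaling of the fade rate
    k = k0 * np.exp(-(Ea / R) * (1.0 / temp_K - 1.0 / Tref))
    # logistic decline of SOH with cycle number
    return 1.0 / (1.0 + np.exp(k * (cycle - n0)))
```

Embedding such a closed-form trajectory as a residual term in the loss gives the network a temperature-aware prior on the shape of degradation, which is what enables accurate fits from as little as 10% of the training data.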
Lin et al. proposed a lightweight two-stage physics-informed neural network (TSPINN) for LIB SOH estimation that explicitly balances accuracy, interpretability, and deployability. The framework integrates explicit physics-informed data augmentation and implicit physics-informed loss constraints, distinguishing it from prior PINN approaches that rely on only one of these mechanisms. In the first stage, hybrid health characteristics are constructed by fusing ECM parameters extracted from post-charge voltage relaxation with incremental capacity (IC) peak features from the charging phase. These characteristics exhibit a clear monotonic relationship with SOH and maintain physical interpretability across different temperatures and chemical compositions. In the second stage, physical knowledge, specifically the monotonic constraint between IC peak value and SOH, is embedded directly into the loss function via gradient-based regularization, thereby suppressing non-physical predictions and enhancing robustness. Extensive validation on NCA and NCM cells at 25–45 °C demonstrated robust performance, with an average MAE of 0.68%, outperforming baselines based on IC-only, ECM-only, LSTM, CNN, GPR, and SVM. Notably, TSPINN shows superior cross-temperature and cross-chemistry generalization, even when trained on different materials, highlighting the transferability of physics-based features [139].
Wang et al. proposed a transferable multi-state estimation framework (SOH/SOC/SOE) that tightly integrates sparse electrochemical parameter identification, hybrid deep learning, and transfer learning, addressing both physical consistency and data scarcity in practical BMS implementations. Instead of determining parameters at every cycle, the study extracts seven key electrochemical parameters at a few representative aging points and reconstructs physically constrained pseudo-parameter trajectories over the full lifecycle. These trajectories serve as interpretable inputs to a CNN-BiLSTM model for SOH estimation, achieving a minimum MAE of 0.288% and R2 up to 0.994 on the Tongji dataset [140].
Cao et al. presented a general, end-to-end framework for LIB SOH estimation, explicitly linking laboratory aging tests with real-world machine-learning deployment, with a strong emphasis on data irregularity and cross-domain transferability, as depicted in Figure 8. The core methodological novelty lies in transforming raw current-voltage-temperature time series of arbitrary length into fixed-dimension 3D histogram representations using a varying sliding-window strategy. This enables the 3D CNN to process irregular and incomplete operational data while preserving degradation-relevant stress distributions, rather than relying on precise temporal alignment. The framework was validated on an unusually large and diverse laboratory dataset with 24 Nissan Leaf Gen3 pouch batteries cycled under varying DoD, C-rate, and temperature levels, covering degradation from beginning of life (BOL) to below 20% SOH, and further evaluated via transfer learning on an external cylindrical-cell dataset. The results showed stable performance, with a test RMSE of 0.0445, an MAE of 0.0172, and an R2 of 0.87. Additionally, the application of transfer learning resulted in more than a 10-fold reduction in retraining time, demonstrating practical scalability. Importantly, resource analysis confirmed a very low inference latency of 0.03 ms/sample and reduced memory usage, supporting both BMS and cloud-edge deployment scenarios [141].
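The fixed-dimension histogram idea can be sketched directly with numpy. The function below bins an I-V-T time series of any length into a normalised 3-D occupancy grid; the bin count and the choice to normalise by total count are illustrative assumptions.

```python
import numpy as np

def histogram3d_features(current, voltage, temp, bins=8, ranges=None):
    """Fixed-size, normalised 3-D I-V-T occupancy histogram computed
    from a variable-length time series (sketch of the fixed-dimension
    input idea behind the 3D CNN)."""
    sample = np.column_stack([current, voltage, temp])
    H, _ = np.histogramdd(sample, bins=bins, range=ranges)
    return H / H.sum()       # length-invariant: occupancy fractions
```

Because the output shape is independent of the recording length, short field snippets and long laboratory cycles feed the same network input layer, which is precisely what makes irregular operational data usable.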
Physics-informed and hybrid neural architectures represent a major step toward reliable and interpretable SOH estimation by embedding electrochemical principles, degradation kinetics, or system-level constraints into data-driven learning frameworks. The reviewed studies demonstrate that incorporating physical knowledge, whether through constrained loss functions, mechanistic feature extraction, subspace identification, or semi-empirical degradation models, can significantly improve robustness, data efficiency, and cross-condition generalization compared with purely black-box approaches. Moreover, recent developments emphasize scalability and deployability, including partial-charging compatibility, sparse parameter identification, and cloud-edge co-design. Nevertheless, the effectiveness of physics-informed models depends strongly on the validity and availability of underlying physical assumptions, highlighting an ongoing trade-off between modeling fidelity, computational complexity, and practical applicability in real-world BMS.

4.4. Probabilistic, Classical ML, and Feature-Centric Models

In contrast to deep neural architectures, probabilistic, classical machine-learning, and feature-centric approaches emphasize interpretability, statistical rigor, and data efficiency, offering valuable insights into SOH estimation under limited data and stringent evaluation constraints.
Sedlařík et al. presented a systematic comparative study of classical machine learning techniques for estimating the SOH of LIBs, focusing on SVR, GPR, FFNN, and ANFIS under controlled CCCV charge/discharge cycling of Samsung INR18650-35E batteries. The study systematically analyzed sensitivity to dataset size using 1/3 and 2/3 training splits, overfitting behavior, cross-battery transferability, and fine-tuning effects, rather than reporting accuracy at a single operating point. Feature extraction was performed transparently and grounded in physical principles, utilizing time- and voltage-based indices derived from both charge and discharge curves. Pearson correlation analysis and an exhaustive search were employed to screen out weakly informative inputs. As illustrated in Figure 9, SVR provided the most stable performance and the best resistance to overfitting, while ANFIS achieved a very low RMSE of 0.15% when optimally configured, despite high variability across different dataset lengths. FFNN demonstrated strong adaptability in fine-tuning and cross-battery scenarios, while GPR was sensitive to dataset size and degradation regimes, resulting in less robust performance under data-limited conditions [142].
Mawassi et al. proposed a data-driven hybrid learning framework for the co-estimation of LIB SOH and SOC, which fits the discharge voltage curve of each cycle with a fifth-degree polynomial regression, selects highly correlated health factors, and fuses them with optimized feed-forward neural network (FNN) and Gaussian process regression (GPR) models. The inputs to the SOH estimator consist of the area under the discharge voltage curve and the four retained polynomial coefficients of the fifth-degree regression, after excluding the weakly correlated a0 and a1 terms. The SOC strategy uses the estimated SOH of the previous cycle to select a health-informed training database with 5% SOH interval grouping, and additionally employs discharge voltage, time, and temperature as inputs to the SOC estimator. The outputs are continuous SOH estimates and SOH-informed SOC values, evaluated using MAE, RMSE, MSE, and R2. The framework was validated on the Oxford Battery Dataset and the NASA Battery Dataset. Experimental results show that the optimized FNN provides the best overall SOH accuracy, achieving an R2 of 0.999, an MAE of 0.06%, and an RMSE of 0.17% on the Oxford dataset, and an MAE of 0.12%, an RMSE of 0.18%, and an R2 of 0.999 on the NASA dataset. Moreover, the SOC strategy based on previous-cycle SOH yields an average SOC MAE of 0.08% with a maximum SOC error below 2%, confirming that the integration of discharge-voltage-based health factors with an optimized FNN architecture is effective and robust for battery health monitoring [143].
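As a minimal sketch of this feature-construction step, the following fits the fifth-degree polynomial and returns the area under the curve together with the retained higher-order coefficients; the toy discharge curve is illustrative, not the authors' data or code:

```python
import numpy as np

def discharge_health_factors(t, v):
    """Fit a fifth-degree polynomial to one discharge voltage curve and
    return the area under the curve plus the higher-order coefficients
    a2..a5, mirroring the exclusion of the weakly correlated a0 and a1
    terms (illustrative sketch, not the authors' implementation)."""
    coeffs = np.polyfit(t, v, deg=5)                         # [a5, a4, a3, a2, a1, a0]
    area = float(np.sum((v[1:] + v[:-1]) / 2 * np.diff(t)))  # trapezoid rule
    return area, coeffs[:4]

t = np.linspace(0.0, 1.0, 200)
v = 4.2 - 0.8 * t - 0.3 * t**5          # toy discharge curve
area, hf = discharge_health_factors(t, v)
print(round(area, 3), hf.shape)         # -> 3.75 (4,)
```

The returned scalar and coefficient vector would then serve as inputs to the FNN and GPR health estimators.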
Oyewole et al. proposed a two-stage probabilistic framework (CGMM-RNN) integrating a Conditional Gaussian Mixture Model for SOH estimation with a GRU-based RNN for RUL prediction. A key contribution lies in explicitly addressing cell-to-cell heterogeneity and multimodal degradation via mixture modeling, while enabling uncertainty quantification through Bayesian conditioning. Physically interpretable health features are extracted from charging curves and selected using gray relational analysis, thereby strengthening the interpretability of the model compared with purely end-to-end deep models. Extensive validation across three publicly available datasets covering LFP, NCA, and NCM cells revealed an SOH error of less than 1% and reliable confidence intervals across different charging protocols, temperatures, and chemistries [144].
Pandit and Ahlawat presented a standardized comparative framework for LIB SOH estimation that prioritizes methodological rigor and reproducibility over proposing new model architectures, as demonstrated in Figure 10. The key contribution is the introduction of a one-to-many cross-battery validation protocol, where Battery 5 is used exclusively for training, and Batteries 6, 7, and 18 from the NASA dataset are reserved for testing. This design directly addresses a common flaw in previous studies, namely data leakage due to mixed-cycle splitting, and better reflects real-world BMS deployment scenarios that require cell-to-cell generalization. Within this unified framework, three representative machine learning models, namely Extreme Gradient Boosting (XGBoost), Random Forest (RF), and Support Vector Machine (SVM), were evaluated using identical preprocessing, feature sets, and grid-search-based hyperparameter optimization with k-fold cross-validation. The results showed that tree-based ensemble methods significantly outperformed SVM, with XGBoost achieving the best overall performance, yielding a minimum MAE of 0.016 and an MSE of 3.47 × 10⁻⁴ on Battery 18, as illustrated in Figure 11. Although RF exhibited comparable accuracy, XGBoost was preferred due to its shorter training time, smaller memory footprint, built-in regularization, and explainability, making it more suitable for embedded or cloud-based BMS applications [145].
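The one-to-many, cell-disjoint protocol can be expressed compactly; the battery IDs and random arrays below are placeholders for the NASA features, not the actual data:

```python
import numpy as np

# Hypothetical per-cell (features, SOH) arrays keyed by battery ID.
rng = np.random.default_rng(1)
cells = {bid: (rng.standard_normal((50, 3)), rng.random(50))
         for bid in ["B5", "B6", "B7", "B18"]}

def one_to_many_split(cells, train_id):
    """Train on a single cell; every other cell is an unseen test cell,
    avoiding the mixed-cycle leakage of random per-sample splits."""
    X_tr, y_tr = cells[train_id]
    test_sets = {bid: xy for bid, xy in cells.items() if bid != train_id}
    return (X_tr, y_tr), test_sets

(train_X, train_y), test_sets = one_to_many_split(cells, "B5")
print(sorted(test_sets))  # -> ['B18', 'B6', 'B7']
```

Because no cycle of a test battery is ever seen during training, any reported accuracy reflects genuine cell-to-cell generalization.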
Li et al. proposed a three-stage cross-material SOH estimation framework that explicitly targets two practical challenges: material-induced degradation heterogeneity and fragmented, partial charging data. The approach operates on arbitrary charging segments with a ΔSOC of at least 25% by extracting six physically interpretable health characteristics from overlapping SOC sub-intervals between 20% and 85%, ensuring robustness for random charge start/end points. Transferability is achieved through a staged strategy combining global statistical alignment (Maximum Mean Discrepancy-MMD), feature-level adversarial alignment with Simplified Mixed Domain Adaptation (SMDA), and lightweight fine-tuning using only 1–5% labeled target data. Extensive experiments on four cathode materials, including LCO, NCA, NMC, and Hybrid, demonstrated strong generalizability, with an average RMSE of 0.75% for single interval and 0.84% for full-range transfer under 5% fine-tuning, consistently outperforming representative transfer-learning baselines [146].
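For reference, the global MMD alignment statistic used in the first stage can be sketched with an RBF kernel as follows; the bandwidth gamma and the synthetic source/target samples are illustrative:

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Squared maximum mean discrepancy between two samples under an
    RBF kernel; zero iff the kernel mean embeddings coincide."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.standard_normal((200, 4))              # source-material features
tgt_same = rng.standard_normal((200, 4))         # same distribution
tgt_shift = rng.standard_normal((200, 4)) + 2.0  # shifted (new chemistry)
print(mmd_rbf(src, tgt_same) < mmd_rbf(src, tgt_shift))  # -> True
```

Minimizing such a statistic between source and target feature distributions is what drives the global alignment before the adversarial and fine-tuning stages.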
Probabilistic, classical machine-learning, and feature-centric models offer a crucial counterpoint to deep learning-based SOH estimation, prioritizing interpretability, robustness, and methodological transparency. The reviewed studies demonstrate that, when combined with carefully engineered health indicators and rigorous validation protocols, classical models such as SVR, ensemble methods, and probabilistic mixtures can achieve competitive accuracy, particularly in data-limited and cross-battery scenarios. Moreover, uncertainty-aware modeling and standardized benchmarking frameworks highlight the importance of reproducibility and realistic generalization assessment. Nevertheless, the scalability of these approaches is often constrained by manual feature design and limited representational capacity, motivating their integration with deep learning and physics-informed strategies in more recent hybrid frameworks.

4.5. Decomposition, Non-Stationarity, and Robust Learning

Battery degradation data are inherently non-stationary and often affected by capacity recovery, noise, and external disturbances, motivating the development of decomposition-based, robustness-oriented, and system-level learning frameworks for reliable SOH estimation.
Li et al. proposed an enhanced SOH estimation framework for LIBs based on a multi-scale signal decomposition and Transformer neural network, in which the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) is employed to separate capacity regeneration components and long-term degradation trends, while an improved sparrow search algorithm (ISSA) is introduced to automatically optimize key Transformer hyperparameters. The SOH estimation model utilizes health indicators derived from full CC-CV charging profiles, including constant voltage charging time (CVCT), constant current charging time (CCCT), and internal resistance (Re), along with decomposed intrinsic mode functions (IMFs) of the capacity signal as input features. The estimation performance is evaluated using RMSE, MAE, MAPE, and R2 on CALCE and NASA public battery aging datasets. Experimental results demonstrate that the proposed CEEMDAN-ISSA-Transformer model achieves an RMSE as low as 0.0041, MAE of 0.0026, MAPE of 0.3507%, and R2 up to 0.9850, while maintaining estimation errors within 0.02 even when only 50% of the data is used for training, indicating strong robustness and generalization capability [147].
Lin et al. proposed a hybrid data-driven SOH estimation framework explicitly accounting for capacity recovery, a degradation characteristic often overlooked in deep-learning-based studies. The approach decomposes the SOH task into global degradation trend modeling and local fluctuation assessment, motivated by the non-stationary behavior induced by rest-period recovery. A CNN with a dual-loss design is first employed to automatically extract SOH-relevant yet diverse features from resampled voltage-current-temperature data. These features, together with the SOH trajectory, are then decomposed via Empirical Mode Decomposition (EMD) into residual (global trend) and IMF (local fluctuation) components. The global trend is estimated using SVR, while local recovery-driven fluctuations are modeled using an attention-based feature similarity (AFS) module inspired by Vision Transformers. Verification across four NASA cells, including B0005, B0006, B0007, and B0018, shows that the framework achieves an overall RMSE ranging from 0.66 to 1.29%, outperforming GRU, LSTM, and SVR baselines, particularly in metrics sensitive to local errors, such as RMSE and MAX [148].
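The trend/fluctuation decomposition idea can be illustrated with a centered moving average standing in for EMD; this is a deliberate simplification of the CNN + EMD pipeline in [148], where the residual plays the role of the long-term trend and the remainder that of the recovery-driven IMF components:

```python
import numpy as np

def trend_fluctuation_split(soh, window=9):
    """Split an SOH trajectory into a smooth global trend and a local
    fluctuation term using a centered moving average (a crude stand-in
    for EMD's residual/IMF separation)."""
    pad = window // 2
    padded = np.pad(soh, pad, mode="edge")
    trend = np.convolve(padded, np.ones(window) / window, mode="valid")
    return trend, soh - trend

cycles = np.arange(200)
soh = 1.0 - 0.001 * cycles + 0.005 * np.sin(cycles / 5.0)  # decay + recovery ripple
trend, fluct = trend_fluctuation_split(soh)
print(len(trend) == len(soh), abs(float(fluct.mean())) < 0.01)  # -> True True
```

The two components can then be modeled separately, e.g., SVR for the trend and an attention-based module for the fluctuation, as in the reviewed framework.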
Kalhori et al. proposed a correlation-aware multi-scale CNN (MSFF-CNN) for multivariate, multi-step SOH forecasting, addressing a key limitation of fixed-kernel CNNs in capturing battery degradation across different temporal resolutions. The central innovation is an adaptive kernel selection strategy in which short-, medium-, and long-term convolutional kernel sizes are determined automatically via cross-correlation analysis among input features, including voltage, current, temperature, and derived statistical/frequency indicators. This data-driven mechanism replaces heuristic kernel choices and aligns the CNN receptive fields with intrinsic degradation dynamics. A comprehensive preprocessing pipeline extracts statistical, time-domain, and frequency-domain cyclic features, followed by hybrid feature selection (Pearson correlation + RFE) to reduce redundancy. Extensive verification across NASA datasets and the commercial fast charging dataset (CH31) reveals that MSFF-CNN consistently outperforms CNN, LSTM, CNN-LSTM-Skip, and fully connected baseline models, achieving order-of-magnitude reductions in MAE and MSE, particularly for long input-output horizons. Learning curve analysis also confirms robust generalization capabilities without overfitting [149].
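A simplified, autocorrelation-based variant of this kernel-size selection idea can be sketched as follows; the 0.5 decay threshold is our own illustrative criterion, not the paper's cross-correlation rule:

```python
import numpy as np

def kernel_size_from_autocorr(x, threshold=0.5):
    """Choose a receptive-field length as the first lag at which the
    normalized autocorrelation decays below a threshold. Slowly varying
    signals get long kernels; noise-like signals get short ones."""
    x = (x - x.mean()) / x.std()
    n = len(x)
    for lag in range(1, n):
        r = float(np.dot(x[:-lag], x[lag:]) / (n - lag))
        if r < threshold:
            return lag
    return n

rng = np.random.default_rng(0)
slow = np.sin(np.linspace(0.0, 2.0 * np.pi, 400))  # long-term degradation proxy
fast = rng.standard_normal(400)                    # noise-like fluctuation
print(kernel_size_from_autocorr(slow) > kernel_size_from_autocorr(fast))  # -> True
```

Aligning kernel lengths with the correlation structure of each input channel is what replaces heuristic kernel choices in the MSFF-CNN design.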
Zheng et al. conducted the first comprehensive investigation of adversarial attacks and defenses for deep-learning-based SOH estimation, framing battery health monitoring as a cyber-physical security problem rather than a purely data-driven regression task. Using a Residual Convolutional Network (RCN) trained on partial charging features Q(Vc), the authors systematically evaluated the impact of untargeted (UT), semi-targeted (ST), and targeted (TA) PGD attacks, demonstrating that carefully crafted perturbations, imperceptible in the input space, can manipulate SOH trajectories or inflate RMSE by up to 11.4 times. Notably, TA attacks can force the estimator to follow an attacker-defined degradation path, posing a serious safety risk for cloud-based BMS decision-making. To mitigate these vulnerabilities, the paper introduces an adversarial training strategy (ATRCN) that jointly minimizes losses on normal examples (NEs) and adversarial examples (AEs). Extensive experiments on NCA and NCM datasets at various temperatures and C-rates show that adversarial training reduces the worst-case RMSE from 0.2445 to 0.037, roughly 15% of the undefended model's error, while preserving high accuracy on clean data, with an RMSE of 0.30% on NEs and 1.53% on AEs for an unseen chemistry. Comparisons with MLP, LSTM, GRU, and CNN-LSTM further confirm that RCN + AT offers the best accuracy-robustness trade-off [150].
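The mechanics of an untargeted PGD-style attack can be illustrated on a toy linear regressor; the model, ε-ball, and step sizes below are illustrative and far simpler than the RCN setting in [150]:

```python
import numpy as np

def pgd_untargeted(w, x, y, eps=0.05, alpha=0.01, steps=20, rng=None):
    """Projected gradient ascent on the squared error of a linear model
    f(x) = w @ x, constrained to an L-infinity ball of radius eps."""
    if rng is None:
        rng = np.random.default_rng(0)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)   # random start in the ball
    for _ in range(steps):
        grad = 2.0 * (w @ x_adv - y) * w               # d(error^2)/dx
        x_adv = np.clip(x_adv + alpha * np.sign(grad), x - eps, x + eps)
    return x_adv

rng = np.random.default_rng(1)
w = rng.standard_normal(16)    # toy "SOH regressor" weights
x = rng.standard_normal(16)    # clean input features
y = float(w @ x)               # clean prediction error is zero
x_adv = pgd_untargeted(w, x, y)
print(abs(w @ x_adv - y) > abs(w @ x - y))  # -> True: tiny perturbation, large error
```

Adversarial training in the ATRCN sense would then mix such perturbed inputs into the training loss alongside clean samples.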
Chen et al. presented a vehicle-cloud collaborative SOH estimation framework that explicitly balances high estimation accuracy with real-time onboard applicability, addressing a critical gap in practical EV BMS. The framework combines a cloud-based CNN-LSTM-Self-Attention model with a vehicle-side double exponential decay empirical model, and fuses their outputs using a Kalman filter (KF). On the cloud, deep learning captures complex nonlinear degradation patterns from multi-feature fusion, including electrothermal statistics and incremental capacity (IC) curve features. On the vehicle, the lightweight empirical model ensures rapid SOH updates under limited computational resource conditions. The KF-based fusion adaptively weights both estimates, achieving robustness to noise, latency, and partial data availability. Validation on the NASA datasets showed that the collaborative method achieved an MAE of 0.019 and an RMSE of 0.024, outperforming standalone data-driven, empirical, and benchmark models such as SVR, RF, XGBoost, and CNN-Transformer. Importantly, extensive real-world vehicle data, comprising approximately 1.05 million samples, further confirmed the effectiveness of the framework, demonstrating superior accuracy and stability under realistic driving conditions [151].
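The KF-based fusion step can be sketched as a scalar filter that treats the onboard empirical estimate as the prediction and the cloud estimate as the measurement; the noise covariances and SOH streams below are illustrative, not the paper's values:

```python
def kf_fuse(soh_vehicle, soh_cloud, q=1e-4, r=1e-3):
    """Scalar Kalman filter fusing two SOH streams: the onboard
    empirical estimate acts as the prediction, the cloud deep-learning
    estimate as the measurement. q and r are illustrative variances."""
    p = 1.0
    fused = []
    for pred, meas in zip(soh_vehicle, soh_cloud):
        x, p = pred, p + q            # predict with the vehicle model
        k = p / (p + r)               # Kalman gain
        x = x + k * (meas - x)        # correct with the cloud estimate
        p = (1.0 - k) * p
        fused.append(x)
    return fused

vehicle = [0.950, 0.949, 0.948, 0.947]   # fast but biased onboard stream
cloud = [0.940, 0.939, 0.938, 0.937]     # slower, more accurate cloud stream
fused = kf_fuse(vehicle, cloud)
print(all(c <= f <= v for f, v, c in zip(fused, vehicle, cloud)))  # -> True
```

The gain adaptively weights the two sources, so the fused estimate degrades gracefully when the cloud link is noisy or delayed.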
Approaches targeting non-stationarity and robustness play a critical role in advancing SOH estimation beyond idealized laboratory conditions. The reviewed studies demonstrate that signal decomposition techniques and adaptive multi-scale learning can effectively separate long-term degradation trends from short-term fluctuations induced by capacity recovery, noise, or varying operating regimes. In parallel, robustness-oriented strategies, including adversarial defense mechanisms and vehicle-cloud collaborative frameworks, address vulnerabilities arising from data corruption, cyber threats, and resource constraints. While these methods substantially enhance stability and reliability, they often introduce additional computational or system complexity, highlighting the need for careful trade-offs between robustness, efficiency, and deployability in practical BMS.

4.6. Emerging and Non-Conventional Architectures

Beyond conventional voltage-current-temperature-based learning, a growing body of research explores emerging sensing modalities and non-conventional neural architectures to overcome fundamental observability, scalability, and robustness limitations in LIB SOH estimation.
Yang et al. introduced a non-invasive, ultrasound-aided physics-informed deep learning framework for LIB SOH estimation, thereby addressing the observability limitations of voltage/current-only methods. The core contribution lies in fusing physically interpretable ultrasound features, including time-of-flight, echo amplitude, echo energy, and spectral descriptors, with temporal deep models such as LSTM, BiLSTM, and CNN-LSTM. By integrating an embedded Kalman filter and categorical context, including charge/discharge state and transducer frequency, the framework stabilizes long-horizon learning while preserving explainability. Experiments using multi-frequency ultrasound from 2.5 to 10 MHz on LCO cells demonstrated that BiLSTM variants consistently outperformed LSTM and CNN-LSTM, achieving very low SOH errors with an RMSE of 0.0015 and an R2 of 0.96, and generalizing well to 18650-format cylindrical cells [152].
Wu et al. proposed a hybrid deep learning model (CNN-ATT) for LIB SOH estimation using electrochemical impedance spectroscopy (EIS) data. The model employs full-spectrum EIS features, specifically the real part Re(Z), imaginary part Im(Z), impedance magnitude |Z|, and phase angle θ, measured over 60 frequency points from 0.02 Hz to 20 kHz at 100% SOC. A local information perceptron (LIP) based on 2D-CNN is used for automatic local feature extraction and noise suppression, while a global information perceptron (GIP) incorporating a multi-head attention (MHA) mechanism captures long-range dependencies between EIS features and capacity degradation. The SOH estimation performance is evaluated using MAE, RMSE, MAPE, and R2, achieving average MAE below 0.49%, RMSE below 0.59%, MAPE below 1.18%, and average R2 exceeding 0.9, with robust performance across different temperatures from 25 to 45 °C and superior accuracy compared to CNN-LSTM and CNN-BiLSTM benchmarks [57].
Cai et al. proposed an automated neural architecture design framework for LIB SOH estimation, aiming to replace expert-driven trial-and-error model construction. The method encodes charging voltage curves into Gramian Angular Summation Field (GASF) images, which serve as inputs to deep learning models. The neural architecture design is formulated as a multi-objective optimization problem that simultaneously considers estimation accuracy and computational complexity, and is solved using an evolutionary neural architecture search method (NSGA-Net). Model performance is evaluated using MAE, RMSE, and R2, where the automatically discovered accuracy-best architectures achieve MAE and RMSE below 0.6 and 0.7, respectively, and R2 above 0.98, outperforming hand-crafted CNN, VGG, ResNet, and GRU models. Meanwhile, trade-off and low-complexity solutions provide balanced accuracy and efficiency for practical BMS deployment [153].
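The GASF encoding itself is compact enough to sketch directly; the toy charging-voltage ramp below is illustrative:

```python
import numpy as np

def gasf(x):
    """Gramian Angular Summation Field: rescale the series to [-1, 1],
    map each value to an angle phi = arccos(x), and form the image
    G_ij = cos(phi_i + phi_j)."""
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

v = np.linspace(3.0, 4.2, 64)   # toy charging-voltage curve
img = gasf(v)
print(img.shape, bool(np.allclose(img, img.T)))  # -> (64, 64) True
```

The resulting symmetric image preserves temporal correlations of the voltage curve in a form that image-based architectures (and architecture search over them) can exploit.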
Liang et al. proposed a stochastic SOH estimation method for LIBs based on a quantum convolutional neural network (QCNN) with automated feature fusion, in which variational quantum circuits constitute the convolutional layers to achieve accurate, robust, and generalizable SOH estimation under highly stochastic and noisy operating conditions. The QCNN-based estimator uses discharge-based health indicator sequences, including the capacity-voltage sequence Q(V), the differential capacity sequence ΔQ(V) relative to a reference cycle, and the incremental capacity (IC) sequence extracted from small voltage windows centered at IC peaks. These sequences are normalized and encoded via RX, RY, and RZ quantum rotation gates to achieve automated feature fusion within a single-qubit representation. Estimation performance is evaluated using RMSE, R2, MAE, and MAPE across four public datasets (CALCE, TJU, XJTU, and MIT), achieving RMSE values as low as 0.67%, R2 consistently exceeding 96%, an MAE reduced by up to 18% compared with conventional CNN and LSTM models, and an overall RMSE reduction of at least 28% compared with multilayer perceptron models, demonstrating the effectiveness of QCNN-based feature fusion and quantum convolutional layers for SOH estimation [154].
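The single-qubit feature-fusion step can be illustrated with standard rotation-gate matrices; the feature-to-angle scaling below is our own assumption rather than the exact circuit in [154]:

```python
import numpy as np

def rx(t):
    return np.array([[np.cos(t / 2), -1j * np.sin(t / 2)],
                     [-1j * np.sin(t / 2), np.cos(t / 2)]])

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2), np.cos(t / 2)]], dtype=complex)

def rz(t):
    return np.array([[np.exp(-1j * t / 2), 0],
                     [0, np.exp(1j * t / 2)]])

def encode(features):
    """Fuse three normalized features into one qubit via successive
    rotations: |psi> = RZ(f3) RY(f2) RX(f1) |0>, with each feature in
    [0, 1] mapped to an angle in [0, pi] (illustrative scaling)."""
    a1, a2, a3 = (np.pi * f for f in features)
    state = np.array([1.0 + 0j, 0.0 + 0j])
    return rz(a3) @ ry(a2) @ rx(a1) @ state

psi = encode([0.2, 0.5, 0.8])   # e.g. Q(V), dQ(V), and IC-derived features
print(round(float(np.vdot(psi, psi).real), 6))  # -> 1.0 (unitaries preserve norm)
```

Because each gate is unitary, the fused state remains normalized regardless of the input features, which is the sense in which three indicators are compressed into a single qubit.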
Emerging and non-conventional architectures expand the methodological landscape of SOH estimation by introducing alternative sensing mechanisms, automated model design, and frontier computational paradigms. The reviewed studies demonstrate that incorporating ultrasound or impedance spectroscopy can substantially enhance observability and estimation accuracy, while neural architecture search reduces reliance on expert-driven design and enables systematic trade-offs between accuracy and efficiency. Quantum-inspired models further highlight the potential of advanced computation for robust feature fusion under stochastic conditions. Despite their promise, these approaches remain at an early stage of validation and often require specialized hardware, data acquisition, or experimental setups, underscoring the need for further benchmarking and practical assessment before large-scale deployment in BMS.

5. Synthesis, Critical Assessment and Design Guidelines

5.1. Synthesis of Reviewed Architectures and Learning Strategies

This review has systematically examined neural architectures and learning strategies for LIB SOH estimation, revealing a clear methodological evolution from accuracy-driven, purely data-driven models toward physically grounded, robust, and deployment-oriented frameworks. Early sequential deep learning models, such as RNN- and CNN-RNN-based architectures, demonstrate a strong capability in learning nonlinear temporal degradation patterns under controlled conditions. However, their performance often degrades in scenarios of data scarcity, operating variability, and cross-domain deployment. Attention-based and Transformer architectures address some of these limitations by enabling the modeling of long-range dependencies and adaptive feature weighting. However, their increased architectural complexity and computational demands necessitate careful efficiency-aware design for practical BMS.
Physics-informed and hybrid neural architectures emerge as a critical direction for improving interpretability, generalization, and data efficiency. By embedding electrochemical knowledge, degradation kinetics, or physically constrained loss functions into learning frameworks, these approaches mitigate non-physical predictions and reduce reliance on large, fully labeled datasets. Probabilistic and feature-centric models further emphasize the importance of uncertainty quantification, reproducibility, and rigorous evaluation, serving both as competitive solutions in data-limited scenarios and as essential benchmarks for assessing advances in deep learning. In parallel, robustness-oriented strategies, such as signal decomposition, adversarial defense, and vehicle-cloud collaborative learning, address non-stationarity, noise, cyber-physical threats, and deployment constraints, thereby narrowing the gap between laboratory performance and real-world reliability. Emerging architectures based on alternative sensing modalities, automated neural architecture search, and frontier computation expand the methodological landscape, although their maturity and scalability remain limited. Based on these findings, several practical recommendations can be drawn:
(a) Architecture selection should be driven by data and deployment requirements: Sequential deep learning models remain suitable for well-curated datasets and offline analysis, whereas attention-based, physics-informed, or hybrid frameworks are preferable when generalization across operating conditions, chemistries, or partial charging data is required.
(b) Physics guidance is essential for trustworthy SOH estimation: Incorporating physical constraints, either explicitly through mechanistic models or implicitly via physics-informed losses, substantially improves robustness, interpretability, and transferability, and should be considered a core design principle rather than an optional enhancement.
(c) Accuracy alone is insufficient for BMS readiness: Future evaluations should jointly consider uncertainty, robustness to non-stationarity, computational efficiency, and inference latency, especially for edge or cloud-edge collaborative deployments.
(d) Standardized benchmarking and reproducibility must be strengthened: Cross-battery and cross-domain validation protocols, along with transparent reporting of data splits and operating conditions, are crucial for fair comparison and meaningful progress.
(e) Emerging approaches should be assessed with realistic constraints: Novel sensing modalities and automated or quantum-inspired architectures hold promise but require systematic validation under practical cost, hardware, and integration considerations.
Overall, the reviewed literature indicates a decisive shift toward generalizable, interpretable, and deployment-aware SOH estimation, suggesting that future advances will increasingly rely on the co-design of neural architectures, physical knowledge, and learning strategies tailored to real-world battery management requirements.

5.2. Critical Interpretation of Reported Accuracy for Practical BMS Deployment

Many SOH studies report very low RMSE/MAE and high R2. However, such results are often obtained under controlled laboratory conditions, such as full cycles, fixed protocols and narrow temperature ranges. These conditions differ substantially from real-world BMS operation. In practical EV/ESS settings, SOH estimation relies on partial and irregular data windows, heterogeneous SOC ranges, varying thermal and load profiles, and noisy or drifting sensors. Under these conditions, frequently reported “very high accuracies” may be optimistic.
A major concern is data leakage and overfitting. Random split-by-sample or split-by-cycle strategies can lead to highly correlated trajectories from the same cell appearing in both the training and test sets, inflating R2 and deflating RMSE/MAE. This issue is exacerbated when late-life data are scarce and ground-truth SOH labels are limited. Moreover, models validated on a single chemistry or narrow operating envelope often degrade when deployed across chemistries, temperatures and duty cycles. These issues reflect persistent generalization challenges as presented in Section 2.3.2.
To improve realism and comparability, reported performance should be accompanied by deployment-oriented validation, including: leave-one-cell-out testing; leave-one-condition-out evaluation (temperature, C-rate, DoD); cross-dataset or cross-chemistry transfer tests; error analysis by life stage, particularly near EOL; and uncertainty quantification alongside point metrics. Ultimately, BMS readiness should be assessed not only by accuracy but also by robustness, generalization and computational feasibility.
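As an example of the first of these protocols, leave-one-cell-out splitting can be implemented so that all samples of a cell are held out together; the cell IDs below are placeholders:

```python
import numpy as np

def leave_one_cell_out(cell_ids):
    """Yield (cell, train_idx, test_idx) triples in which every sample
    from one cell is held out together, preventing correlated cycles of
    the same cell from leaking across the split."""
    cell_ids = np.asarray(cell_ids)
    for cell in np.unique(cell_ids):
        yield (cell,
               np.where(cell_ids != cell)[0],
               np.where(cell_ids == cell)[0])

ids = ["A"] * 3 + ["B"] * 3 + ["C"] * 3   # three cells, three cycles each
folds = list(leave_one_cell_out(ids))
for _, train, test in folds:
    assert not set(train) & set(test)     # train and test never share samples
print([str(c) for c, _, _ in folds])      # -> ['A', 'B', 'C']
```

Reporting per-fold errors from such splits, rather than a single randomly mixed split, directly exposes the cell-to-cell generalization that BMS deployment actually requires.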

5.3. Architectural Complexity, Generalization and Embedded Deployability

Reported gains from complex architectures such as CNN-LSTM hybrids, attention mechanisms and xLSTM should be interpreted jointly with their generalization behavior and embedded deployment cost. Increased architectural complexity does not guarantee improved performance. Specifically, comparative CNN-LSTM studies have shown that canonical hybrid designs can outperform more elaborate variants due to training instability and over-parameterization. In practice, complexity is justified primarily when it yields demonstrable robustness across conditions. For instance, xLSTM maintains high predictive accuracy despite shifts in battery chemistry or charging strategies, demonstrating superior generalization. Simultaneously, models such as RetNet exemplify the benefits of explicit efficiency co-design. Specifically, as linear-complexity sequence learners, they deliver latency, FLOPs and memory usage compatible with automotive constraints. For embedded BMS, simpler models often remain preferable when they offer competitive accuracy with substantially lower compute and memory footprints, or when complexity is offloaded via cloud-vehicle split learning to preserve real-time feasibility on board. Accordingly, architecture-strategy co-design should be evaluated using accuracy-complexity Pareto fronts rather than accuracy alone.

5.4. Failure Modes and Negative Cases in SOH Learning

While many studies report strong performance under controlled evaluation settings, a critical review must also highlight recurring failure modes and unstable behaviors observed across SOH learning approaches. Several negative cases emerge consistently in the literature. First, data leakage and improper validation can lead to unrealistically high R2 and low error metrics, masking poor generalization to unseen batteries or conditions. Second, distribution shift across chemistries, temperatures, C-rates or duty cycles often results in substantial accuracy degradation, particularly for models trained on narrow laboratory datasets.
Third, late-life and EOL degradation remain major challenges. Specifically, limited labeled data near EOL often leads to error amplification, particularly in safety-critical regimes. Fourth, non-monotonic aging and capacity regeneration can destabilize models that implicitly assume smooth or monotonic degradation trajectories, leading to oscillatory or biased predictions. Fifth, over-parameterized architectures may underperform simpler baselines due to training instability, hyperparameter sensitivity or overfitting with limited data.
Finally, physics-informed approaches introduce their own failure risks. Specifically, misspecified or oversimplified degradation constraints can bias learning toward incorrect aging paths, while strong physics residuals may induce numerical stiffness or instability in long-horizon or online deployment. These negative cases underscore that improved accuracy alone is insufficient and motivate the need for rigorous validation, failure-aware benchmarking, and explicit reporting of unstable or adverse behaviors alongside positive results.

5.5. Decision-Oriented Design Guidelines for Practical SOH Estimation

Despite substantial progress in neural architectures and learning strategies for SOH estimation, translating reported performance into practical BMS design choices remains challenging due to heterogeneous data regimes, deployment constraints and evaluation protocols. To bridge this gap, the current review synthesizes the reviewed literature into a decision-oriented framework that distills architectural and learning-strategy choices under representative practical constraints, as presented in Table 3. Rather than emphasizing accuracy alone, the framework explicitly accounts for trade-offs among robustness, data efficiency, computational cost, and deployability, thereby providing actionable guidance for engineers and practitioners.
Table 3 summarizes recommended architecture-strategy combinations for common SOH deployment scenarios, highlighting when complex models are justified and when simpler or transfer-based approaches are preferable. It also clarifies decision criteria for choosing between transfer learning and physics-informed strategies and makes explicit which trade-offs are unavoidable under edge or data-limited conditions. Overall, Table 3 complements the high-level recommendations in this section by translating them into concrete, constraint-driven design guidance for practical SOH estimation.

5.6. Author Perspectives and Outlook

From the authors’ perspective, future progress in SOH estimation will be driven less by incremental architectural novelty and more by deployment-oriented rigor and robustness. First, standardized benchmarking practices, including cell-disjoint validation, cross-condition testing, uncertainty reporting and computational cost disclosure, are essential to ensure that reported accuracy translates into real-world reliability. Second, generalization under distribution shift, such as new chemistries, temperatures, usage patterns and aging regimes, remains a central open challenge and should be prioritized over marginal gains on curated datasets.
Third, while physics-informed and hybrid learning offer valuable inductive bias, their effectiveness depends critically on the validity of the assumptions imposed. Future work should emphasize adaptive or weakly constrained physics rather than fixed degradation laws. Fourth, efficiency-aware design will become increasingly important as attention-based and long-sequence models mature, particularly for edge or cloud-vehicle collaborative BMS architectures. Finally, advancing SOH estimation toward practical deployment will require closer integration of learning models with uncertainty quantification, explainability and safety-oriented evaluation, aligning algorithmic development with the needs of trustworthy BMSs.

6. Conclusions

This review has provided a critical synthesis of neural architectures and learning strategies for SOH estimation of LIBs, highlighting both methodological advances and persistent challenges. The surveyed literature demonstrates a clear progression from purely data-driven sequential models toward attention-based, physics-informed, and hybrid frameworks that explicitly address generalization, interpretability, and real-world deployability. While deep learning techniques have achieved impressive estimation accuracy under controlled conditions, their practical applicability increasingly depends on robustness to non-stationary degradation, data irregularity, and cross-domain operating scenarios.
The analysis further indicates that integrating physical knowledge, awareness of uncertainty, and efficiency-aware design is essential for bridging the gap between laboratory-scale validation and operational BMS. Emerging directions, including alternative sensing modalities and automated or frontier computational paradigms, expand the scope of SOH estimation but remain at an early stage of practical maturity. Overall, future research should prioritize the co-design of neural architectures, learning strategies, and physical constraints, supported by standardized evaluation protocols and realistic deployment considerations, to enable reliable, trustworthy, and scalable solutions for battery health monitoring.

Author Contributions

Conceptualization, T.D.L. and M.-Y.L.; methodology, T.D.L. and M.-Y.L.; formal analysis, T.D.L., J.-H.P. and M.-Y.L.; investigation, T.D.L., J.-H.P. and M.-Y.L.; resources, T.D.L. and J.-H.P.; data curation, T.D.L. and J.-H.P.; writing—original draft preparation, T.D.L. and J.-H.P.; writing—review and editing, T.D.L. and M.-Y.L.; visualization, T.D.L. and J.-H.P.; supervision, M.-Y.L.; project administration, M.-Y.L.; funding acquisition, M.-Y.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Korea Industrial Complex Corporation (KICOX) (No. VCDM2502), the Korea Evaluation Institute of Industrial Technology (KEIT) (No. 20024894), both grants funded by the Korean government (MOTIE), and the Regional Innovation System & Education (RISE) program through the Institute for Regional Innovation System & Education in Busan Metropolitan City, funded by the Ministry of Education (MOE) and the Busan Metropolitan City, Republic of Korea (2025-RISE-02-003).

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Vikram, S.; Sadar, A.; Rakshit, D. Experimental study of dielectric coolant-based immersion cooling technique for lithium-ion battery packs. J. Energy Storage 2026, 144, 119741. [Google Scholar] [CrossRef]
  2. Park, J.; Tai, L.D.; Lee, M. Numerical Study on the Heat Transfer Characteristics of a Hybrid Direct-Indirect Oil Cooling System for Electric Motors. Symmetry 2025, 17, 760. [Google Scholar] [CrossRef]
  3. Hwang, S.; Lee, M.; Ko, B. Numerical Analysis on Cooling Performances for Connectors Using Immersion Cooling in Ultra-Fast Chargers for Electric Vehicles. Symmetry 2025, 17, 624. [Google Scholar] [CrossRef]
  4. Lu, Y.; Sun, B.; Li, L.; Huang, F.; Chen, C. Numerical investigation of vertical vibration effects on immersion cooling heat transfer for lithium-ion battery at high discharge rate. Case Stud. Therm. Eng. 2025, 77, 107607. [Google Scholar] [CrossRef]
  5. Ke, W.; Tong, B.; Zhang, W.; Hu, T.; Gao, W.; Xu, J.; Li, S.; Tan, P. Experimental comparison of static immersion cooling and natural air cooling for suppressing thermal runaway triggering in lithium-ion battery modules. J. Energy Storage 2026, 149, 120444. [Google Scholar] [CrossRef]
  6. Miranda, M.H.; Silva, F.L.; Eckert, J.J.; Silva, L.C. Economic and performance optimization in electric vehicles using artificial neural networks and the multi-objective particle swarm algorithm. J. Energy Storage 2026, 147, 120082. [Google Scholar] [CrossRef]
  7. Pellegrini, A.; Rose, J.M. Vehicle choice and use under alternative policy scenarios: What needs to be done to promote electric vehicle uptake and usage. Transp. Res. Part A Policy Pract. 2026, 204, 104819. [Google Scholar] [CrossRef]
  8. Garud, K.S.; Han, J.; Hwang, S.; Lee, M. Artificial Neural Network Modeling to Predict Thermal and Electrical Performances of Batteries with Direct Oil Cooling. Batteries 2023, 9, 559. [Google Scholar] [CrossRef]
  9. Liu, J.; Tao, L.; Yang, Q.; Wang, J. Recent advances in immersion cooling for thermal management of lithium-ion batteries. Renew. Sustain. Energy Rev. 2025, 226, 116492. [Google Scholar] [CrossRef]
  10. Tai, L.D.; Garud, K.S.; Hwang, S.; Lee, M. A Review on Advanced Battery Thermal Management Systems for Fast Charging in Electric Vehicles. Batteries 2024, 10, 372. [Google Scholar] [CrossRef]
  11. Raeesi, M.; Laleh, A.A.; Shojaeefard, M.H.; Chavoshnia, P. Quantifying the impact of battery degradation and urban driving dynamics on the life cycle performance of electric vehicles: An energy, thermal, environmental, and economic analysis. Energy Convers. Manag. 2026, 351, 121021. [Google Scholar] [CrossRef]
  12. Tai, L.D.; Garud, K.S.; Lee, M. Experimental Study on Thermal Management of 5S7P Battery Module with Immersion Cooling Under High Charging/Discharging C-Rates. Batteries 2025, 11, 59. [Google Scholar] [CrossRef]
  13. Li, J.; Huang, M.; Zhang, Y. Multi-depot collaborative electric vehicle routing problem with heterogeneous fleet. Expert Syst. Appl. 2026, 302, 130575. [Google Scholar] [CrossRef]
  14. Tai, L.D.; Lee, M. Advances in the Battery Thermal Management Systems of Electric Vehicles for Thermal Runaway Prevention and Suppression. Batteries 2025, 11, 216. [Google Scholar] [CrossRef]
  15. Yaqteen, M.A.; Moon, S.; Kim, J.S. A novel spray-based immersion cooling for Li-ion batteries: An experimental comparison with flow immersion. Appl. Therm. Eng. 2025, 282, 128873. [Google Scholar] [CrossRef]
  16. Le, T.D.; Bang, Y.; Nguyen, N.; Lee, M. Artificial Neural Network-Based Optimization of an Inlet Perforated Distributor Plate for Uniform Coolant Entry in 10 kWh 24S24P Cylindrical Battery Module. Symmetry 2025, 18, 14. [Google Scholar] [CrossRef]
  17. Wang, C.; Wang, Y.; Zheng, Y.; Shen, K.; Lai, X.; Xu, C.; Ben-Marzouk, M.; Meng, X. Multi-condition simulation and thermal management enhancement of a rectangular-case lithium-ion battery module single-phase immersion cooling system. Int. J. Therm. Sci. 2026, 221, 110467. [Google Scholar] [CrossRef]
  18. Garud, K.S.; Tai, L.D.; Hwang, S.; Nguyen, N.; Lee, M. A Review of Advanced Cooling Strategies for Battery Thermal Management Systems in Electric Vehicles. Symmetry 2023, 15, 1322. [Google Scholar] [CrossRef]
  19. Bao, R.; Wang, Z.; Gao, Q.; Yang, H.; Zhang, B.; Tuo, Z.; Chen, S. Enhancing the thermal management of 21700 batteries via synergistic porous phase change material and immersion liquid cooling. Int. Commun. Heat Mass Transf. 2025, 169, 109819. [Google Scholar] [CrossRef]
  20. Han, J.; Garud, K.S.; Hwang, S.; Lee, M. Experimental Study on Dielectric Fluid Immersion Cooling for Thermal Management of Lithium-Ion Battery. Symmetry 2022, 14, 2126. [Google Scholar] [CrossRef]
  21. Zhou, B.; Li, W.; Yoshioka, H.; Wang, K.; Sun, X.; Wang, W.; Chen, T.; Guo, Y.; Tao, Z.; Jiang, C. The experimental and numerical study of battery thermal runaway influence on liquid cooling agent in the engineered fluid immersion system. Case Stud. Therm. Eng. 2025, 77, 107493. [Google Scholar] [CrossRef]
  22. Han, J.; Garud, K.S.; Kang, E.; Lee, M. Numerical Study on Heat Transfer Characteristics of Dielectric Fluid Immersion Cooling with Fin Structures for Lithium-Ion Batteries. Symmetry 2022, 15, 92. [Google Scholar] [CrossRef]
  23. Wagh, V.A.; Saha, S.K. On estimating critical channel number of hybrid battery thermal management system combining phase change material and forced convective immersion cooling. Int. Commun. Heat Mass Transf. 2026, 171, 110114. [Google Scholar] [CrossRef]
  24. Li, J.; Ou, J.; Zeng, S.; Chen, L.; Qiao, Y.; Tan, Z.; Li, Y.; Wu, W. Immersion cooling enabled thermal runaway prevention in overcharged batteries: Mechanisms and metrics. Appl. Energy 2025, 401, 126798. [Google Scholar] [CrossRef]
  25. Chen, Y.; Yin, A.; Qiao, Z.; Wei, Y.; Chen, C. State of health prediction of lithium-ion battery degradation using adaptive gated dual attention unit neural Wiener process. Neurocomputing 2026, 670, 132569. [Google Scholar] [CrossRef]
  26. Yang, Y.; Xia, Q.; Yuan, L.; Zhao, P.; Xuan, L.; Liang, J. Lithium-ion battery state of health estimation using a hybrid model with multiple health indicators. J. Energy Storage 2026, 147, 120248. [Google Scholar] [CrossRef]
  27. Shi, Y.; Wu, H.; Wu, J.; Zheng, F. Reinforcement learning for cumulative error correction in prediction the state-of-health of battery. J. Energy Storage 2026, 147, 119964. [Google Scholar] [CrossRef]
  28. Kajiura, Y.; Scott, L.M.; Matsumoto, T.; Liu, Y.; Zhang, D. Rapid state of health diagnosis of used lithium-ion batteries using mechanical and electrochemical measurements. J. Energy Storage 2026, 148, 120233. [Google Scholar] [CrossRef]
  29. Sheng, H.; Xu, X.; Zhou, W.; Zhang, J.; Chen, K. Experience-fusion dual-cache neural network for battery state of health estimation. J. Energy Storage 2026, 149, 119992. [Google Scholar] [CrossRef]
  30. Fan, Y.; Yuan, Y.; Zhao, J.; Zhang, Y.; Wu, X.; Zhang, W.; Ren, Z.; Guan, Q. An innovative image-based YieldNet framework for enhanced lithium-ion battery state-of-health estimation. J. Energy Storage 2026, 147, 120122. [Google Scholar] [CrossRef]
  31. Huang, X.; Liang, C.; Tao, S.; Che, Y.; Bian, N.; Zhang, J.; Wang, R.; Zhang, Y.; Xia, B.; Zhang, X. IC2ML: Unified battery state-of-health, degradation trajectory and remaining useful life prediction via intra-cycle and inter-cycle enhanced machine learning. J. Power Sources 2026, 666, 239148. [Google Scholar] [CrossRef]
  32. Gao, W.; Gu, Z.; Cheng, H. State of health estimation for lithium-ion batteries via electro-thermal feature fusion and a GRU-Guided Attention Network. Int. J. Electr. Power Energy Syst. 2025, 174, 111456. [Google Scholar] [CrossRef]
  33. He, R.; Peng, T.; Zhang, X.; Chen, Z.; Yao, J.; Nazir, M.S.; Zhang, C. A novel hybrid model for state of health prediction in lithium batteries based on non-stationary transformers optimized by tree-structured Parzen estimator considering health factors. Appl. Energy 2025, 402, 127030. [Google Scholar] [CrossRef]
  34. Li, X.; Zhao, M.; Zhong, S.; Li, J.; Cui, Z.; Fu, S.; Yan, Z. Deep transfer learning enabled online state-of-health estimation of lithium-ion batteries under small samples across different cathode materials, ambient temperature and charge-discharge protocols. J. Power Sources 2025, 650, 237503. [Google Scholar] [CrossRef]
  35. Sun, J.; Wang, H. Enhanced state of health estimation of lithium-ion batteries through advanced feature selection and self-developed datasets. J. Energy Storage 2025, 114, 115713. [Google Scholar] [CrossRef]
  36. Huo, W.; Chang, Y.; Luo, T.; Lu, B.; Guo, C.; Li, Y. Integrating particle swarm optimization with convolutional and long short-term memory neural networks for real vehicle data-based lithium-ion battery health estimation. J. Energy Storage 2025, 111, 115427. [Google Scholar] [CrossRef]
  37. Le, H.; Deng, W.; Nguyen, K.T.; Medjaher, K.; Gogu, C.; Wu, D. Physics-informed transfer learning by embedding physics into activation functions: An application in battery health management. Appl. Energy 2026, 406, 127161. [Google Scholar] [CrossRef]
  38. Soon, K.L.; Soon, L.T. A hybrid quantum neural network and classical gated recurrent unit for battery state of health forecasting incorporating SHAP analysis. J. Energy Storage 2025, 136, 118596. [Google Scholar] [CrossRef]
  39. Wu, M.; Zhang, X.; Wang, Z.; Tan, C.; Wang, Y.; Wang, L. State of health estimation of lithium-ion batteries based on the Kepler optimization algorithm-multilayer-convolutional neural network. J. Energy Storage 2025, 122, 116644. [Google Scholar] [CrossRef]
  40. Ye, J.; Xie, Q.; Lin, M.; Wu, J. A method for estimating the state of health of lithium-ion batteries based on physics-informed neural network. Energy 2024, 294, 130828. [Google Scholar] [CrossRef]
  41. He, L.; Tian, A.; Ding, T.; Dong, K.; Wang, Y.; Gao, Y.; Jiang, J. Lithium-ion battery state estimation based on adaptive physics-informed neural network of electrochemical model. Measurement 2026, 257, 118985. [Google Scholar] [CrossRef]
  42. Chen, G.; Meng, H.; Yang, Y.; Zhang, X.; Deng, W.; Liu, J. A transfer learning method for state of health prediction of lithium-ion batteries under cross-formation protocols. J. Energy Storage 2026, 147, 120113. [Google Scholar] [CrossRef]
  43. Nazim, M.S.; Chakma, A.; Joha, M.I.; Alam, S.S.; Rahman, M.M.; Umam, M.K.S.; Jang, Y.M. Artificial intelligence for estimating State of Health and Remaining Useful Life of EV batteries: A systematic review. ICT Express 2025, 11, 769–789. [Google Scholar] [CrossRef]
  44. Wang, B.; Wang, X.; Hu, B.; Xu, L.; Xiao, Y. A review on influencing factors, estimation methods, and improvement strategies for state of health in lithium-ion batteries. J. Energy Storage 2025, 139, 118577. [Google Scholar] [CrossRef]
  45. Wang, Y.; Guo, S.; Cui, Y.; Deng, L.; Zhao, L.; Li, J.; Wang, Z. A comprehensive review of machine learning-based state of health estimation for lithium-ion batteries: Data, features, algorithms, and future challenges. Renew. Sustain. Energy Rev. 2025, 224, 116125. [Google Scholar] [CrossRef]
  46. Lyu, Z.; Jin, Z.; Li, X.; Wang, H.; Wu, L.; Chen, Y. From tradition to innovation: Evolution and trade-offs of lithium-ion battery state of health estimation methods. J. Energy Storage 2026, 144, 119730. [Google Scholar] [CrossRef]
  47. Wang, S.; Zhang, M.; Zhou, L.; Fernandez, C.; Blaabjerg, F. Critical review of battery health state estimation with deep learning methods. J. Power Sources 2026, 666, 239106. [Google Scholar] [CrossRef]
  48. Yuan, Z.; Deng, Z.; He, Y.; Ning, Z.; Liu, J. Multi-step prediction of battery state of health based on self-supervised pre-training and transfer learning using the xPatch model. Energy 2025, 341, 139410. [Google Scholar] [CrossRef]
  49. He, Y.; Zeng, Q.; Tang, L.; Liu, F.; Li, Q.; Yin, Y.; Xu, S.; Deng, B. State of health estimation of lithium-ion battery based on full life cycle acoustic emission signals. J. Energy Storage 2025, 139, 118725. [Google Scholar] [CrossRef]
  50. Bai, H.; Wang, X.; Gan, Y.; Zhao, X.; Jin, H.; Wang, G.; Kang, G.; Chen, W. Evaluation of life-cycle state of health for solid-state lithium batteries using a hybrid LSTM model. Int. J. Fatigue 2026, 203, 109323. [Google Scholar] [CrossRef]
  51. Zheng, Y.; Che, Y.; Hu, X.; Sui, X.; Stroe, D.; Teodorescu, R. Thermal state monitoring of lithium-ion batteries: Progress, challenges, and opportunities. Prog. Energy Combust. Sci. 2023, 100, 101120. [Google Scholar] [CrossRef]
  52. Rawat, S.; Saini, D.K.; Choudhury, S.; Yadav, M. Advanced Monitoring and Real-Time State of Temperature Prediction in Lithium-Ion Cells Under Abusive Discharge Conditions Using Data-Driven Modelling. World Electr. Veh. J. 2024, 15, 509. [Google Scholar] [CrossRef]
  53. Chen, J.; Li, P.; Wu, L. Joint prediction of SOH and RUL of lithium-ion batteries using single-cycle charging data. Energy 2025, 336, 138351. [Google Scholar] [CrossRef]
  54. Hoque, M.A.; Nurmi, P.; Kumar, A.; Varjonen, S.; Song, J.; Pecht, M.G.; Tarkoma, S. Data driven analysis of lithium-ion battery internal resistance towards reliable state of health prediction. J. Power Sources 2021, 513, 230519. [Google Scholar] [CrossRef]
  55. Wan, Q.; Qu, J.; Xu, J. A feature optimization framework for improving state of health estimation of lithium-ion batteries based on electrochemical impedance spectroscopy. J. Energy Storage 2026, 144, 119713. [Google Scholar] [CrossRef]
  56. Wen, H.; Zhang, M.; Wang, S.; Zhao, W.; Zhao, Z.; Wang, Y.; Yan, Y.; Zhang, D.; Sun, X. Extraction of health indicators from electrochemical impedance spectroscopy for state of health estimation of lithium-ion batteries. eTransportation 2025, 25, 100456. [Google Scholar] [CrossRef]
  57. Wu, C.; Wang, L.; Meng, J.; Huang, J.; Yang, T.; Wang, L.; Chang, Y.; He, X. A hybrid deep learning model for lithium-ion battery state-of-health estimation using electrochemical impedance spectroscopy. Energy 2025, 339, 138974. [Google Scholar] [CrossRef]
  58. Chen, S.; Zhang, Q.; Wang, D.; Hao, Z.; Liang, X.; Hu, B. Physics-informed neural networks for degradation diagnosis of lithium-ion batteries via electrochemical impedance spectroscopy. J. Energy Storage 2025, 140, 119127. [Google Scholar] [CrossRef]
  59. Du, L.; Shen, X.; Li, X.; Li, X.; Wei, Y. Battery state of charge estimation from electrochemical impedance spectroscopy data based on machine learning with fractional order model parameters and distribution relaxation time features. J. Energy Storage 2026, 144, 119754. [Google Scholar] [CrossRef]
  60. Liu, R.; Zhang, H.; Xu, Y.; Liu, J.; Wang, Y.; Li, P. SOH correlation in coupling with electrochemical impedances and expansion rate for prismatic LiMnO2 lithium-ion batteries. J. Energy Storage 2025, 107, 115023. [Google Scholar] [CrossRef]
  61. Tao, S.; Zhu, J.; Li, Y.; Chen, S.; Wang, X.; Wang, X.; Jiang, B.; Chang, W.; Wei, X.; Dai, H. State-of-health estimation for EV battery packs via incremental capacity curves and S-transform. Appl. Energy 2025, 397, 126334. [Google Scholar] [CrossRef]
  62. Zhang, S.; Guo, X.; Dou, X.; Zhang, X. A rapid online calculation method for state of health of lithium-ion battery based on coulomb counting method and differential voltage analysis. J. Power Sources 2020, 479, 228740. [Google Scholar] [CrossRef]
  63. Wang, L.; Zhao, X.; Liu, L.; Pan, C. State of health estimation of battery modules via differential voltage analysis with local data symmetry method. Electrochim. Acta 2017, 256, 81–89. [Google Scholar] [CrossRef]
  64. Xia, F.; Wang, K.; Chen, J. State of health and remaining useful life prediction of lithium-ion batteries based on a disturbance-free incremental capacity and differential voltage analysis method. J. Energy Storage 2023, 64, 107161. [Google Scholar] [CrossRef]
  65. Liu, Y.; Liu, Y.; Shen, H.; Ding, L. Battery state of health estimation using a novel BiLSTM-Mamba2 network with differential voltage features and transfer learning. J. Energy Storage 2025, 110, 115347. [Google Scholar] [CrossRef]
  66. Wang, L.; Pan, C.; Liu, L.; Cheng, Y.; Zhao, X. On-board state of health estimation of LiFePO4 battery pack through differential voltage analysis. Appl. Energy 2016, 168, 465–472. [Google Scholar] [CrossRef]
  67. Jiang, M.; Li, D.; Li, Z.; Chen, Z.; Yan, Q.; Lin, F.; Yu, C.; Jiang, B.; Wei, X.; Yan, W.; et al. Advances in battery state estimation of battery management system in electric vehicles. J. Power Sources 2024, 612, 234781. [Google Scholar] [CrossRef]
  68. Khan, M.A.; Thatipamula, S.; Onori, S. Onboard Health Estimation using Distribution of Relaxation Times for Lithium-ion Batteries. IFAC-PapersOnLine 2023, 58, 917–922. [Google Scholar] [CrossRef]
  69. Dai, H.; Lai, Y.; Huang, Y.; Yu, H.; Yang, Y.; Zhu, L. State-of-health estimation of lithium-ion batteries using multiple correlation analysis-based feature screening and optimizing echo state networks with the weighted mean of vectors. J. Power Sources 2024, 623, 235482. [Google Scholar] [CrossRef]
  70. Vanem, E.; Salucci, C.B.; Bakdi, A.; Alnes, Ø.Å.S. Data-driven state of health modelling—A review of state of the art and reflections on applications for maritime battery systems. J. Energy Storage 2021, 43, 103158. [Google Scholar] [CrossRef]
  71. Tebbe, J.; Hartwig, A.; Jamali, A.; Senobar, H.; Wahab, A.; Kabak, M.; Kemper, H.; Khayyam, H. Innovations and prognostics in battery degradation and longevity for energy storage systems. J. Energy Storage 2025, 114, 115724. [Google Scholar] [CrossRef]
  72. Maddipatla, S.; Rauf, H.; Osterman, M.; Arshad, N.; Pecht, M. Swelling Mechanisms, Diagnostic Applications, and Mitigation Strategies in Lithium-Ion Batteries. Batteries 2025, 11, 356. [Google Scholar] [CrossRef]
  73. Mbagaya, L.; Reddy, K.; Botes, A. Machine Learning Techniques for Battery State of Health Prediction: A Comparative Review. World Electr. Veh. J. 2025, 16, 594. [Google Scholar] [CrossRef]
  74. Gu, X.; Liu, M.; Tian, J. State of Health Estimation for Batteries Based on a Dynamic Graph Pruning Neural Network with a Self-Attention Mechanism. Energies 2024, 18, 5333. [Google Scholar] [CrossRef]
  75. Mehta, C.; Sant, A.V.; Sharma, P. SVM-assisted ANN model with principal component analysis based dimensionality reduction for enhancing state-of-charge estimation in LiFePO4 batteries. E-Prime Adv. Electr. Eng. Electron. Energy 2024, 8, 100596. [Google Scholar] [CrossRef]
  76. Driscoll, L.; De la Torre, S.; Gomez-Ruiz, J.A. Feature-based lithium-ion battery state of health estimation with artificial neural networks. J. Energy Storage 2022, 50, 104584. [Google Scholar] [CrossRef]
  77. Manoharan, A.; Begam, K.; Aparow, V.R.; Sooriamoorthy, D. Artificial Neural Networks, Gradient Boosting and Support Vector Machines for electric vehicle battery state estimation: A review. J. Energy Storage 2022, 55, 105384. [Google Scholar] [CrossRef]
  78. Alobaid, A.; Bonny, T.; Alrahhal, M. Disruptive attacks on artificial neural networks: A systematic review of attack techniques, detection methods, and protection strategies. Intell. Syst. Appl. 2025, 26, 200529. [Google Scholar] [CrossRef]
  79. Leite, D.; Brito, A.; Faccioli, G. Advancements and outlooks in utilizing Convolutional Neural Networks for plant disease severity assessment: A comprehensive review. Smart Agric. Technol. 2024, 9, 100573. [Google Scholar] [CrossRef]
  80. Lu, Z.; Fei, Z.; Wang, B.; Yang, F. A feature fusion-based convolutional neural network for battery state-of-health estimation with mining of partial voltage curve. Energy 2024, 288, 129690. [Google Scholar] [CrossRef]
  81. Zheng, M.; Luo, X. Joint estimation of State of Charge (SOC) and State of Health (SOH) for lithium ion batteries using Support Vector Machine (SVM), Convolutional Neural Network (CNN) and Long Short-Term Memory Network (LSTM) models. Int. J. Electrochem. Sci. 2024, 19, 100747. [Google Scholar] [CrossRef]
  82. Zhang, X.; Wang, L.; Gong, Q.; Wang, Y. A CNN-LSTM hybrid network with transfer learning for accurate lithium-ion battery state of health estimation. J. Energy Storage 2026, 141, 119450. [Google Scholar]
  83. Mazzi, Y.; Ben Sassi, H.; Errahimi, F. Lithium-ion battery state of health estimation using a hybrid model based on a convolutional neural network and bidirectional gated recurrent unit. Eng. Appl. Artif. Intell. 2023, 127, 107199. [Google Scholar] [CrossRef]
  84. Jing, H.; Hu, J.; Ou, S.; Lv, Z.; Lyu, R.; Zhao, J. Scalable and generalizable deep learning for battery state of health estimation in on-road electric vehicles. J. Energy Chem. 2025, 110, 823–841. [Google Scholar] [CrossRef]
  85. Teixeira, R.S.; Calili, R.F.; Almeida, M.F.; Louzada, D.R. Recurrent Neural Networks for Estimating the State of Health of Lithium-Ion Batteries. Batteries 2024, 10, 111. [Google Scholar] [CrossRef]
  86. Liu, Y.; Liu, C.; Liu, Y.; Sun, F.; Qiao, J.; Xu, T. Review on degradation mechanism and health state estimation methods of lithium-ion batteries. J. Traffic Transp. Eng. 2023, 10, 578–610. [Google Scholar]
  87. Golshanrad, P.; Faghih, F. DeepCover: Advancing RNN test coverage and online error prediction using state machine extraction. J. Syst. Softw. 2024, 211, 111987. [Google Scholar] [CrossRef]
  88. Anwar, T.; Ullah, K.; Fiza, M.; Ullah, H.; Mahariq, I.; Al Mekhlafi, S.M. Recurrent neural network approach to thermal radiation in hybrid nanofluids with activation energy between two rotating disks. Results Phys. 2025, 77, 108451. [Google Scholar] [CrossRef]
  89. Hemavathi, S. Lithium-ion battery state of health estimation using intelligent methods. Frankl. Open 2025, 10, 100237. [Google Scholar] [CrossRef]
  90. Al-Selwi, S.M.; Hassan, M.F.; Abdulkadir, S.J.; Muneer, A.; Sumiea, E.H.; Alqushaibi, A.; Ragab, M.G. RNN-LSTM: From applications to modeling techniques and beyond—Systematic review. J. King Saud Univ. Comput. Inf. Sci. 2024, 36, 102068. [Google Scholar] [CrossRef]
  91. Wu, Y.; Sicard, B.; Gadsden, S.A. Physics-informed machine learning: A comprehensive review on applications in anomaly detection and condition monitoring. Expert Syst. Appl. 2024, 255, 124678. [Google Scholar] [CrossRef]
  92. Farea, A. Understanding Physics-Informed Neural Networks: Techniques, Applications, Trends, and Challenges. AI 2024, 5, 1534–1557. [Google Scholar] [CrossRef]
  93. Gao, K.; Li, Q.; Hu, L.; Huang, J.; Li, H.; Wu, Y. Physical informed neural network for SOH estimation of lithium-ion battery with electrochemical mechanism. Energy 2025, 342, 139753. [Google Scholar] [CrossRef]
  94. Akbari, E.; Chakherlou, T.N.; Tabrizchi, H.; Mosavi, A. Physics-Informed Neural Networks for Multiaxial Fatigue Life Prediction of Aluminum Alloy. Comput. Model. Eng. Sci. 2025, 145, 305–325. [Google Scholar] [CrossRef]
  95. Zhang, W.; Zhang, H.; Bi, Z. A time series physics-informed neural network framework for the state of health estimation of lithium-ion batteries. J. Energy Storage 2026, 142, 119549. [Google Scholar] [CrossRef]
  96. Salehi, Z.; Tofigh, M.; Kharazmi, A.; Smith, D.J.; Hanifi, A.R.; Koch, C.R.; Shahbakhti, M. Transfer learning-based deep neural network model for performance prediction of hydrogen-fueled solid oxide fuel cells. Int. J. Hydrogen Energy 2025, 99, 102–111. [Google Scholar] [CrossRef]
  97. Chen, K.; Luo, Y.; Long, Z.; Wang, H.; Li, Y.; Gao, G.; Wu, G. Battery state of health estimation using deep transfer learning on short-term charging data. Measurement 2025, 256, 118233. [Google Scholar] [CrossRef]
  98. Zhao, Z.; Alzubaidi, L.; Zhang, J.; Duan, Y.; Gu, Y. A comparison review of transfer learning and self-supervised learning: Definitions, applications, advantages and limitations. Expert Syst. Appl. 2024, 242, 122807. [Google Scholar] [CrossRef]
  99. Lim, S.; Seo, H.; Lee, S. Simulation-Driven Deep Transfer Learning Framework for Data-Efficient Prediction of Physical Experiments. Mathematics 2024, 13, 3884. [Google Scholar]
  100. Mallea, M.; Nebot, À.; Mugica, F. Rationalizing machine learning models for real-time health prognosis of lithium-ion batteries. Array 2025, 28, 100577. [Google Scholar]
  101. Giuliano, A.; Wu, Y.; Yawney, J.; Gadsden, S.A. Transformer-Based Transfer Learning for Battery State-of-Health Estimation. Energies 2024, 18, 5439. [Google Scholar] [CrossRef]
  102. Ma, Z.; Jiang, G.; Hu, Y.; Chen, J. A review of physics-informed machine learning for building energy modeling. Appl. Energy 2025, 381, 125169. [Google Scholar]
  103. Jiang, Z.; Wang, X.; Li, H.; Hong, T.; You, F.; Drgoňa, J.; Vrabie, D.; Dong, B. Physics-informed machine learning for building performance simulation-A review of a nascent field. Adv. Appl. Energy 2025, 18, 100223. [Google Scholar] [CrossRef]
  104. Su, S.; Gao, L.; Garg, A.; Li, W. Physics-constrained machine learning for real-time lithium-ion battery health estimation from constant voltage charging data. Electrochim. Acta 2025, 543, 147608. [Google Scholar]
  105. Wang, B.; Zhang, P.; Xiang, Y.; Wang, D.; Wu, B.; Wang, X.; Tang, K.; Chen, A. Advancing Structural Failure Analysis with Physics-Informed Machine Learning in Engineering Applications. Engineering 2025; in press. [Google Scholar] [CrossRef]
  106. Ahmadi, M.; Biswas, D.; Lin, M.; Vrionis, F.D.; Hashemi, J.; Tang, Y. Physics-informed machine learning for advancing computational medical imaging: Integrating data-driven approaches with fundamental physical principles. Artif. Intell. Rev. 2025, 58, 297. [Google Scholar] [CrossRef]
  107. Gong, J.; Xu, B.; Chen, F.; Zhou, G. Predictive Modeling for Electric Vehicle Battery State of Health: A Comprehensive Literature Review. Energies 2024, 18, 337. [Google Scholar] [CrossRef]
  108. Jose, A.; Shrivastava, S. An analytical examination of the performance assessment of CNN-LSTM architectures for state-of-health evaluation of lithium-ion batteries. Results Eng. 2025, 27, 105825. [Google Scholar] [CrossRef]
  109. Meng, X.; Xu, S.; Yu, Y.; Zhu, Y.; Gao, F. Extended long short-term memory network for robust state-of-health estimation of lithium-ion batteries under diverse charging strategies. Int. J. Electr. Power Energy Syst. 2025, 172, 111146. [Google Scholar] [CrossRef]
  110. Guo, H.; Yu, L.; Ren, Y.; Sun, A.; Yang, J. A dual dimensionality reduction and feature weighting integration transfer learning method for lithium-ion battery state of health estimation. J. Energy Storage 2025, 130, 117475. [Google Scholar] [CrossRef]
  111. Li, Y.; Zhang, X.; Li, Z.; Li, X.; Liu, G.; Gao, W. Accurate and adaptive state of health estimation for lithium-ion battery based on patch learning framework. Measurement 2025, 250, 117083. [Google Scholar] [CrossRef]
  112. He, T.; Gong, Z. State of health estimation for lithium-ion batteries using a hybrid neural network model with Multi-scale Convolutional Attention Mechanism. J. Power Sources 2024, 609, 234680. [Google Scholar] [CrossRef]
  113. Wang, L.; Liu, X.; Guo, W.; Chen, G.; Gao, Q.; Pan, H.; Min, Y. Synergistic integration of Kolmogorov-Arnold networks with RNN for enhanced lithium-ion battery state-of-health estimation with optimized feature extraction. Energy 2025, 341, 139521. [Google Scholar] [CrossRef]
  114. Qu, J.; Wang, T.; Wang, Y.; Li, X.; Li, M.; Zheng, R. An improved co-training architecture for Lithium-ion batteries state of health estimation with semi-supervised learning. J. Power Sources 2025, 643, 236928. [Google Scholar] [CrossRef]
  115. Yuan, T.; Gao, F.; Bai, J.; Sun, H. A lithium-ion battery state of health estimation method utilizing convolutional neural networks and bidirectional long short-term memory with attention mechanisms for collaborative defense against false data injection cyber-attacks. J. Power Sources 2025, 631, 236193. [Google Scholar] [CrossRef]
  116. Yu, D.; Su, J.; Du, Y.; Qu, X. Cross-domain state-of-health estimation for lithium-ion batteries: A deep learning and similarity network fusion framework with transfer learning. J. Energy Storage 2026, 143, 119706. [Google Scholar] [CrossRef]
  117. Yu, L.; Hu, P. An enhanced whale optimization algorithm for lithium-ion battery state of health estimation. J. Power Sources 2025, 661, 238703. [Google Scholar] [CrossRef]
  118. Li, Y.; He, M.; Liu, J. Attention-assisted neural ordinary differential equation model for the state of health estimation of lithium-ion batteries with high accuracy. J. Power Sources 2026, 665, 239046. [Google Scholar] [CrossRef]
  119. Chen, Z.; Peng, Y.; Shen, J.; Zhang, Q.; Liu, Y.; Zhang, Y.; Xia, X.; Liu, Y. State of health estimation for lithium-ion batteries based on fragmented charging data and improved gated recurrent unit neural network. J. Energy Storage 2025, 115, 115952. [Google Scholar] [CrossRef]
  120. Liu, Z.; Wang, H.; Zhou, X.; Chen, H.; Duan, H.; Liang, K.; Chen, B.; Cao, Y.; Wang, W.; Yang, D.; et al. State of health prediction of lithium-ion batteries based on incremental capacity analysis and adaptive genetic algorithm optimized Elman neural network model. Energy 2025, 335, 137955. [Google Scholar] [CrossRef]
  121. Mao, B.; Yuan, J.; Li, H.; Li, K.; Wang, Q.; Xiao, X.; Zheng, Z.; Qin, W. Gramian angular field-based state-of-health estimation of lithium-ion batteries using two-dimensional convolutional neural network and bidirectional long short-term memory. J. Power Sources 2025, 626, 235713. [Google Scholar] [CrossRef]
  122. Qian, C.; Guan, H.; Xu, B.; Xia, Q.; Sun, B.; Ren, Y.; Wang, Z. A CNN-SAM-LSTM hybrid neural network for multi-state estimation of lithium-ion batteries under dynamical operating conditions. Energy 2024, 294, 130764. [Google Scholar] [CrossRef]
  123. Li, Y.; Gao, G.; Chen, K.; He, S.; Liu, K.; Xin, D.; Luo, Y.; Long, Z.; Wu, G. State-of-health prediction of lithium-ion batteries using feature fusion and a hybrid neural network model. Energy 2025, 319, 135163. [Google Scholar] [CrossRef]
  124. Liu, D.; Tan, X.; Cao, Q.; Zhang, C.; Ru, Q. A dual-layer architecture for lithium-ion batteries’ state of health estimation: Adaptive local convolution and global temporal-frequency retention network. J. Energy Storage 2025, 141, 119237. [Google Scholar] [CrossRef]
  125. Zhou, J.; Rong, J.; Zhang, J.; Liu, C.; Yi, F.; Jiao, Z.; Zhang, C. Deep learning estimation of state of health for lithium-ion batteries using multi-level fusion features of discharge curves. J. Power Sources 2025, 653, 237781. [Google Scholar] [CrossRef]
  126. Li, J.; Wang, X.; Tian, D.; Ye, M.; Niu, Y. A deep learning method based on transfer learning and noise-resistant hybrid neural network for lithium-ion battery state of health and state of charge estimation. Electrochim. Acta 2025, 543, 147577. [Google Scholar] [CrossRef]
  127. Yan, Z.; Han, X.; Zhang, D.; Hong, H.; Han, W. A novel hybrid neural network-based state of health estimation method with multi-feature extraction for lithium-ion batteries. J. Energy Storage 2025, 137, 118550. [Google Scholar] [CrossRef]
  128. Zhan, Y.; Yan, K.; Zheng, X. Integrating transformers into physics-informed neural networks: An approach to lithium-ion battery state-of-health prognostics. Int. J. Electr. Power Energy Syst. 2025, 172, 111173. [Google Scholar] [CrossRef]
  129. Yuan, Z.; Tian, T.; Hao, F.; Li, G.; Tang, R.; Liu, X. A hybrid neural network based on variational mode decomposition denoising for predicting state-of-health of lithium-ion batteries. J. Power Sources 2024, 609, 234697. [Google Scholar] [CrossRef]
  130. Liu, Z.; Liu, Y.; Zhang, Y.; Wu, C.; Zhang, S.; Sun, C. Data-driven lithium-ion battery SOH prediction: A novel SHMM-transformer-BiGRU hybrid neural network method. Measurement 2026, 257, 118579. [Google Scholar] [CrossRef]
  131. Zhang, Y.; Wang, Y.; Soo, Y.; Sun, Z. A multi-task transformer approach for lithium-ion battery pack health state estimation using self-supervised reconstruction and online fine-tuning. J. Energy Storage 2025, 140, 118922. [Google Scholar]
  132. Luo, Y.; Ju, S.; Li, P.; Zhang, H. A method for estimating lithium-ion battery state of health based on physics-informed hybrid neural network. Electrochim. Acta 2025, 525, 146110. [Google Scholar] [CrossRef]
  133. Wang, T.; Wu, Y.; Zhu, K.; Cen, J.; Wang, S.; Huang, Y. Deep learning and polarization equilibrium based state of health estimation for lithium-ion battery using partial charging data. Energy 2025, 317, 134564. [Google Scholar] [CrossRef]
  134. Li, Y.; Wang, H.; Wang, C.; Wang, L.; Liao, C.; Wang, L. Unified physics-informed subspace identification and transformer learning for lithium-ion battery state-of-health estimation. J. Energy Chem. 2025, 112, 350–369. [Google Scholar]
  135. Wang, S.; Zhou, R.; Ren, Y.; Liu, H.; Lin, Y.; Lian, C. A generalizable physics-informed neural network for lithium-ion battery SOH estimation utilizing partial charging segments. J. Energy Chem. 2026, 112, 977–986. [Google Scholar]
  136. Tian, A.; He, L.; Ding, T.; Dong, K.; Wang, Y.; Jiang, J. A generic physics-informed neural network framework for lithium-ion batteries state of health estimation. Energy 2025, 332, 137215. [Google Scholar] [CrossRef]
  137. Yang, L.; He, M.; Ren, Y.; Gao, B.; Qi, H. Physics-informed neural network for co-estimation of state of health, remaining useful life, and short-term degradation path in lithium-ion batteries. Appl. Energy 2025, 398, 126427. [Google Scholar] [CrossRef]
  138. Chen, L.; Chang, C.; Liu, X.; Jiang, J.; Jiang, Y.; Tian, A. Physics-informed neural networks for small sample state of health estimation of lithium-ion batteries. J. Energy Storage 2025, 122, 116559. [Google Scholar] [CrossRef]
  139. Lin, C.; Wu, L.; Tuo, X.; Liu, C.; Zhang, W.; Huang, Z.; Zhang, G. A lightweight two-stage physics-informed neural network for SOH estimation of lithium-ion batteries with different chemistries. J. Energy Chem. 2025, 105, 261–279. [Google Scholar]
  140. Wang, Y.; Zhao, Z.; Cui, Y.; Guo, S.; Deng, L.; Zhao, L.; Li, J.; Wang, Z. A transferable multi-state estimation framework for lithium-ion batteries based on sparse electrochemical parameters. Energy 2025, 335, 138381. [Google Scholar] [CrossRef]
  141. Cao, Z.; Gao, W.; Fu, Y.; Kurdkandi, N.V.; Mi, C. A general framework for lithium-ion battery state of health estimation: From laboratory tests to machine learning with transferability across domains. Appl. Energy 2025, 381, 125086. [Google Scholar] [CrossRef]
  142. Sedlařík, M.; Vyroubal, P.; Capková, D.; Omerdic, E.; Rae, M.; Mačák, M.; Šedina, M.; Kazda, T. Advanced machine learning techniques for State-of-Health estimation in lithium-ion batteries: A comparative study. Electrochim. Acta 2025, 524, 145988. [Google Scholar] [CrossRef]
  143. Mawassi, H.; Hermann, G.; Ould Abdeslam, D.; Idoumghar, L. Enhanced co-estimation of state of health and state of charge in lithium-ion batteries using discharge voltage and an optimized feed-forward neural network. J. Energy Storage 2025, 109, 115034. [Google Scholar] [CrossRef]
  144. Oyewole, I.; Kim, Y.; Chehade, A. A conditional mixture model with recurrent neural network for prognostic analysis of lithium-ion batteries. J. Energy Storage 2025, 140, 118936. [Google Scholar]
  145. Pandit, R.; Ahlawat, N. A standardized comparative framework for machine learning techniques in lithium-ion battery state of health estimation. Future Batter. 2025, 7, 100099. [Google Scholar] [CrossRef]
  146. Li, H.; Li, X.; Dong, Y.; Hang, H.; Tian, Y.; Tian, J. A cross-material lithium-ion battery state of health estimation method based on three-stage domain adaptation. Energy 2025, 341, 139376. [Google Scholar] [CrossRef]
  147. Li, Y.; Shi, H.; Huang, Q.; Li, K.; Liu, C.; Nie, S.; Jia, X.; Fernandez, C. Enhanced multi-scale signal decomposition transformer neural network for state of health estimation of lithium-ion batteries. J. Energy Storage 2025, 134, 118191. [Google Scholar] [CrossRef]
  148. Lin, Y.; Zhou, L.; Yan, J.; He, S. A hybrid data-driven model for state of health estimation of Lithium-ion battery with capacity recovery. Eng. Appl. Artif. Intell. 2025, 161, 112146. [Google Scholar] [CrossRef]
  149. Kalhori, M.R.N.; Madani, S.S.; Fowler, M. Correlation-aware kernel selection for multi-scale feature fusion of convolutional neural networks in multivariate and multi-step time series forecasting: Application to Li-ion battery state of health forecasting. J. Energy Storage 2025, 139, 118860. [Google Scholar] [CrossRef]
  150. Zheng, K.; Li, Y.; Yang, Z.; Zhou, F.; Yang, K.; Song, Z.; Meng, J. Adversarial training defense strategy for lithium-ion batteries state of health estimation with deep learning. Energy 2025, 317, 134411. [Google Scholar] [CrossRef]
  151. Chen, X.; Yang, H.; Pan, C.; Jia, Z.; Wang, Z. A vehicle-cloud collaborative framework for state of health estimation of lithium-ion batteries via multi-feature fusion and hybrid data-driven-empirical modeling. Energy 2025, 340, 139287. [Google Scholar] [CrossRef]
  152. Yang, F.; Mao, Q.; Zhang, J.; Hou, S.; Cheng, K.E.; Lam, K.; Dai, J. Ultrasound-aided hybrid learning model for non-invasive state-of-health estimation in lithium-ion batteries based on physics-fusion convolutional and long short-term memory network. J. Energy Storage 2026, 143, 119703. [Google Scholar] [CrossRef]
  153. Cai, L.; Ru, R.; Jin, H.; Meng, J.; Wang, B.; Su, H.; Peng, J.; Cheng, G. An automated method for neural architecture design in lithium-ion battery state of health estimation. J. Energy Storage 2025, 138, 118636. [Google Scholar] [CrossRef]
  154. Liang, C.; Tao, S.; Huang, X.; Wang, Y.; Xia, B.; Zhang, X. Stochastic state of health estimation for lithium-ion batteries with automated feature fusion using quantum convolutional neural network. J. Energy Chem. 2025, 106, 205–219. [Google Scholar] [CrossRef]
Figure 1. A typical ANN architecture [78].
Figure 2. Diagram of CNN architecture [81].
Figure 3. Illustration of the architecture of the RNN [88].
Figure 4. Comprehensive diagram of the PINN [94].
Figure 5. Schematic of a CNN-LSTM hybrid architecture for battery SOH estimation [108].
Figure 6. xLSTM network architecture for estimating SOH [109].
Figure 7. The overall architecture of the Physics-Informed Transformer Network (PI-TNet) [128].
Figure 8. End-to-End framework for data-driven LIB SOH estimation with cross-domain transferability [141].
Figure 9. Effect of training dataset length on SOH estimation accuracy across different machine-learning models [142].
Figure 10. Standardized cross-battery evaluation pipeline for SOH estimation [145].
Figure 11. Cross-battery SOH estimation accuracy of representative machine-learning models [145].
Table 1. Summary of recent review papers on SOH estimation for LIBs.
Review | Focus of Review | Highlights | Missing Topics / Future Perspectives
Nazim et al. (2025) [43]
  • Systematic review of AI-based SOH and RUL estimation for EV LIBs
  • Emphasis on CNN- and RNN-based deep learning models
  • Dataset-centric perspective (NASA, CALCE, Oxford)
  • Performance comparison, preprocessing, and optimization strategies
  • Broad coverage of both SOH and RUL under a unified AI framework
  • Clear PRISMA-based literature screening with transparent inclusion/exclusion criteria
  • Comprehensive overview of machine learning and deep learning approaches
  • Discussion of feature selection, hyperparameter tuning, and transfer learning
  • Consideration of model complexity and deployment challenges
  • Balanced treatment of SOH and RUL estimation tasks
  • Lack of fine-grained taxonomy of neural architectures: architectures are grouped broadly (CNN/RNN) without distinguishing Transformers, attention mechanisms, hybrid or emerging models
  • Limited emphasis on learning strategies such as robustness, test-time adaptation, uncertainty quantification, adversarial defense, or efficiency-aware design
  • Physics-informed and hybrid learning are mentioned only at a high level, without systematic analysis or categorization
  • Non-stationarity and capacity regeneration are not treated as first-class challenges
Wang et al. (2025) [44]
  • Comprehensive overview of SOH degradation mechanisms, including chemical, physical, electrical, environmental, and manufacturing factors
  • Systematic classification of SOH estimation methods: experimental testing, model-driven (ECM, EM, Kalman filters), data-driven, and hybrid-driven approaches
  • Discussion of SOH improvement strategies, covering materials, thermal management, charge–discharge control, and intelligent BMS design
  • Strong emphasis on physical interpretability and degradation mechanisms, with detailed analysis of SEI growth, lithium plating, and material aging
  • Quantitative comparison of estimation methods in terms of RMSE, computational cost, and real-time feasibility, offering engineering-oriented guidance
  • In-depth review of Kalman filter variants and hybrid model-based methods for online SOH estimation
  • Inclusion of sensor-level innovations (EIS, temperature, gas sensors) and intelligent BMS perspectives
  • Limited coverage of modern neural architectures: RNN variants, CNN-Transformer hybrids, RetNet, attention mechanisms, and long-sequence modeling are not systematically analyzed
  • Learning strategies are underexplored: transfer learning, domain adaptation, test-time adaptation, robustness to distribution shift, and data-scarce learning are mentioned only briefly
  • Lack of architecture-level taxonomy linking model design choices to generalization, efficiency, and deployability in BMS
  • No critical synthesis connecting neural architecture choices with degradation, non-stationarity, and capacity regeneration
Wang et al. (2025) [45]
  • Broad survey of machine learning-based SOH estimation methods, covering statistical models, classical ML, deep learning, and hybrid-driven approaches
  • Emphasis on feature extraction, degradation indicators, and dataset-driven performance comparisons
  • Includes influencing factors, estimation pipelines, and application-oriented discussions
  • Comprehensive categorization of model-based, data-driven, and hybrid-driven SOH methods
  • Detailed discussion on health indicators (ICA, DVA, EIS, voltage/time features)
  • Identifies hybrid methods as a dominant trend for improving robustness and accuracy
  • Provides an extensive overview of datasets, preprocessing, and evaluation metrics
  • Discusses standardization, data sharing, and deployment challenges at a high level
  • Lacks a systematic taxonomy of neural architectures (CNN, RNN, Transformer, GNN, PINN)
  • Limited discussion on learning strategies such as transfer learning, few-shot/meta-learning, self-supervised learning, continual learning, and robustness-aware training
  • Physics-informed learning is discussed conceptually, but without architectural or loss-level distinctions
  • Does not analyze partial/irregular charging data handling strategies in depth
  • No structured comparison of deployability factors (latency, memory, onboard with cloud)
Lyu et al. (2026) [46]
  • Systematic taxonomy of experiment-based, model-based, and data-driven SOH estimation methods
  • Emphasis on trade-offs between accuracy, computational complexity, real-time performance, and practicality
  • Broad coverage from laboratory testing to real-world driving data scenarios
  • Clear comparison of method categories using radar-chart-style trade-off analysis (accuracy, data dependency, complexity, real-time feasibility)
  • Comprehensive discussion of aging mechanism diversity, capacity regeneration, and cell/module inconsistency
  • Strong emphasis on engineering deployment challenges, including embedded constraints, edge-cloud collaboration, and SOH chip design
  • Identifies physics-data hybrid modeling and transfer learning as key enablers for generalization
  • No fine-grained analysis of neural architectures (CNN, RNN and Transformer inductive biases)
  • Limited discussion of learning strategies, such as curriculum learning, adversarial robustness, test-time adaptation, or uncertainty-aware training
  • Lacks a unified taxonomy linking architectures to degradation challenges (non-stationarity, partial data, regeneration)
  • Emerging architectures (attention, Transformers, NAS, quantum/ultrasound fusion) are mentioned only tangentially
  • No discussion of architecture-strategy-deployability co-design, which is central to modern BMS integration
Wang et al. (2026) [47]
  • Dedicated review of deep learning-based battery health state estimation, with emphasis on SOH (and partially RUL)
  • Surveys mainstream deep neural architectures, including FNNs, CNNs, RNNs (LSTM/GRU), and hybrid CNN-RNN models
  • Focuses on performance comparison, data preprocessing, and feature extraction from voltage, current, temperature, and capacity trajectories
  • Primarily laboratory-dataset oriented (NASA, CALCE, Oxford)
  • Clear overview of deep learning architectures used for battery health estimation
  • Summarizes advantages and drawbacks of CNN-, RNN-, and hybrid-based approaches
  • Discusses data requirements, computational complexity, and training challenges
  • Highlights the importance of feature engineering and degradation-sensitive indicators
  • Identifies deep learning as a promising alternative to traditional model-based approaches for SOH estimation
  • Neural architectures are treated generically, without a fine-grained taxonomy distinguishing attention mechanisms, Transformers, efficient long-sequence models, or emerging architectures
  • Learning strategies are largely absent: transfer learning, domain adaptation, robustness to non-stationarity, test-time adaptation, and efficiency-aware training are not systematically discussed
  • Generalization across chemistries, temperatures, and operating profiles is not critically analyzed
  • No explicit linkage between architecture choice and BMS deployability (latency, memory footprint, edge/cloud integration)
Table 2. Quantitative comparison of attention-based and efficient sequence learners for SOH estimation.
Model | Input Regime / Sequence Length | Accuracy (RMSE/MAE/R2) | Latency/FLOPs/Memory | Edge BMS Feasibility *
CNN-LSTM variants (baseline) | Short-mid | Varies | NR | Yes
LSTM-Transformer [125] | STWC (short) | RMSE 0.33–0.39%; MAE 0.27–0.29% | NR | Conditional
TL-LSTM-MHDA-iTransformer [126] | Mid | RMSE 0.62–2.32%; MAE 0.25–2.26% | Latency ~2.31 s (SOH inference, hardware-specific); model < 50 KB | Conditional
DRDC-Transformer + Attention [127] | Mid-long | RMSE 0.0097; MAE 0.0072; R2 0.9896 | Latency ~5.43 ms/sample (range reported); FLOPs/memory NR | Conditional
PI-TNet [128] | Long | NASA (best cell B0007): RMSE 0.00223; R2 0.97844 | NR | Conditional
VMD-CNN-Transformer [129] | n = 4 data segments of length k = 12; prediction starts at cycle 50 | RMSE 0.0151; MAE 0.0116 | NR | Conditional
SHMM-Transformer-BiGRU [130] | Long (segmented) | NASA: RMSE 1.25%, MAE 0.81%, R2 99.33%; CALCE: RMSE 3.28%, MAE 2.45%, R2 97.98% | Total execution time (training/evaluation, not per-sample latency): 103 s (NASA), 218 s (CALCE) | Conditional
CNN-Transformer + TTT [131] | Real-EV; 8 HIs + 100 voltage points (L = 50 after pooling) | RMSE 0.0045–0.0048; MAE 0.00294–0.00357; R2 0.856–0.931 | NR (qualitative argument only) | Yes (with TTT)
MSDC-RetNet [124] | Sequence length n = 300 | RMSE ≈ 0.0069; R2 ≈ 0.9986 | Latency 11.55 ms; FLOPs 30.0 M; memory 1.16 MB | Yes
* “Conditional”: feasible only if window length is bounded and/or computation is offloaded; “NR”: not reported.
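As a reference for how the accuracy columns in Table 2 are defined, the three metrics can be computed from a capacity trajectory as follows. This is a minimal sketch using hypothetical SOH values (fractions of nominal capacity), not data from any reviewed study:

```python
import math

def soh_metrics(y_true, y_pred):
    """Return (RMSE, MAE, R2) for a sequence of SOH estimates."""
    n = len(y_true)
    errs = [p - t for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errs) / n)
    mae = sum(abs(e) for e in errs) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errs)                      # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)        # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return rmse, mae, r2

# Hypothetical five-cycle SOH trajectory and its estimate
true_soh = [1.00, 0.98, 0.96, 0.93, 0.90]
est_soh = [0.99, 0.98, 0.95, 0.94, 0.90]
rmse, mae, r2 = soh_metrics(true_soh, est_soh)
print(rmse, mae, r2)  # RMSE ≈ 0.0077, MAE ≈ 0.0060, R2 ≈ 0.95
```

Note that some studies in Table 2 report RMSE/MAE as percentages of nominal capacity and others as absolute fractions, which is why the raw numbers are not directly comparable across rows.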
Table 3. Decision-oriented design guidelines for SOH estimation under practical constraints.
Design Constraint | Recommended Architecture | Learning Strategy | Why / Trade-Off Rationale
Well-curated lab data, full cycles, offline analysis | LSTM / CNN-LSTM | Supervised learning | High accuracy with low complexity; limited need for advanced generalization
Partial/irregular windows (STWC), moderate compute | CNN-LSTM or CNN-Transformer (lightweight) | Feature engineering + supervised | Local temporal features dominate; Transformers only help if windows are long
Long-horizon degradation, heterogeneous profiles | Efficient sequence models (RetNet, xLSTM) | Supervised or TL | Linear/near-linear complexity improves robustness without prohibitive cost
Data scarcity (early life, few labels) | Simple RNN/MLP + features | Transfer learning | TL reduces data demand without bias from assumed physics
Cross-chemistry/cross-condition deployment | Hybrid models | TL or weakly constrained PCML | TL handles distribution shift better; physics helps only if validated
High safety/interpretability requirements | Physics-augmented hybrid models | PCML/PIML (validated) | Improves plausibility but may bias results if physics is misspecified
Edge BMS (strict latency/memory limits) | Shallow RNN, tree-based models, RetNet | Supervised/TL | Favor deployability over marginal accuracy gains
Cloud-vehicle collaboration available | Complex Transformer hybrids | Split learning/TTT | Complexity offloaded; online adaptation improves robustness
Non-stationary aging/regeneration | Hybrid + TTT or segmentation | Self-/semi-supervised | Improves robustness at cost of system complexity
Benchmarking/research comparison | Any (with ablation) | Any (with controls) | Must report splits, uncertainty and compute to ensure fairness
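For prototyping a design-selection tool, the guidelines in Table 3 can be encoded as a simple lookup. The constraint key names and the `recommend` helper below are illustrative inventions; only the recommendation strings are paraphrased from the table:

```python
# Illustrative encoding of the Table 3 guidelines; keys are paraphrased labels,
# not a normative taxonomy. Values are (recommended architecture, learning strategy).
DESIGN_GUIDELINES = {
    "lab_full_cycles_offline": ("LSTM / CNN-LSTM", "Supervised learning"),
    "partial_windows_moderate_compute": ("CNN-LSTM or lightweight CNN-Transformer",
                                         "Feature engineering + supervised"),
    "long_horizon_heterogeneous": ("Efficient sequence models (RetNet, xLSTM)",
                                   "Supervised or TL"),
    "data_scarcity": ("Simple RNN/MLP + features", "Transfer learning"),
    "cross_chemistry_deployment": ("Hybrid models", "TL or weakly constrained PCML"),
    "high_interpretability": ("Physics-augmented hybrid models", "PCML/PIML (validated)"),
    "edge_bms_limits": ("Shallow RNN, tree-based models, RetNet", "Supervised/TL"),
    "cloud_vehicle_collaboration": ("Complex Transformer hybrids", "Split learning/TTT"),
    "non_stationary_aging": ("Hybrid + TTT or segmentation", "Self-/semi-supervised"),
}

def recommend(constraint: str) -> tuple:
    """Return the (architecture, learning strategy) pair for a named constraint."""
    return DESIGN_GUIDELINES[constraint]

arch, strategy = recommend("edge_bms_limits")
print(arch, "|", strategy)
```

In practice a deployment will hit several constraints at once (e.g., edge limits plus data scarcity), so the table should be read as a set of trade-off priorities rather than a single-key lookup.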

Share and Cite

MDPI and ACS Style

Le, T.D.; Park, J.-H.; Lee, M.-Y. Neural Architectures and Learning Strategies for State-of-Health Estimation of Lithium-Ion Batteries: A Critical Review. Batteries 2026, 12, 76. https://doi.org/10.3390/batteries12020076

