Review

Artificial-Intelligence-Based Energy Management Strategies for Hybrid Electric Vehicles: A Comprehensive Review

by Bin Huang 1,2, Wenbin Yu 1,2, Minrui Ma 1,2, Xiaoxu Wei 1,2,* and Guangya Wang 1,2
1 Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan 430070, China
2 Hubei Collaborative Innovation Center for Automotive Components Technology, Wuhan University of Technology, Wuhan 430070, China
* Author to whom correspondence should be addressed.
Energies 2025, 18(14), 3600; https://doi.org/10.3390/en18143600
Submission received: 10 June 2025 / Revised: 28 June 2025 / Accepted: 6 July 2025 / Published: 8 July 2025
(This article belongs to the Special Issue Optimized Energy Management Technology for Electric Vehicle)

Abstract

The worldwide drive towards low-carbon transportation has made Hybrid Electric Vehicles (HEVs) a crucial component of sustainable mobility, particularly in areas with limited charging infrastructure. The core of HEV efficiency lies in the Energy Management Strategy (EMS), which regulates the energy distribution between the internal combustion engine and the electric motor. While rule-based and optimization methods have formed the foundation of EMS, their performance constraints under dynamic conditions have prompted researchers to explore artificial intelligence (AI)-based solutions. This paper systematically reviews four main AI-based EMS approaches—knowledge-driven, data-driven, reinforcement learning, and hybrid methods—highlighting their theoretical foundations, core technologies, and key applications. The integration of AI has led to notable benefits, such as improved fuel efficiency, enhanced emission control, and greater system adaptability. However, several challenges remain, including generalization to diverse driving conditions, constraints on real-time implementation, and concerns about the interpretability of data-driven models. The review identifies emerging trends in hybrid methods, which combine AI with conventional optimization approaches to create more adaptive and effective HEV energy management systems. The paper concludes with a discussion of future research directions, focusing on safety, system resilience, and the role of AI in autonomous decision-making.

1. Introduction

Against the backdrop of the global “dual carbon” strategy and the accelerating transformation of the automotive industry, Hybrid Electric Vehicles (HEVs) are progressively cementing their pivotal role in the new energy vehicle market [1]. According to the Global EV Outlook 2025 published by the International Energy Agency (IEA), global electric vehicle sales—including hybrid models—exceeded 17 million units in 2024, representing a year-on-year increase of over 25% and accounting for more than 20% of total new vehicle sales [2]. Among these, Plug-in Hybrid Electric Vehicles (PHEVs) accounted for approximately 5.1 million units, making up about 30% of the total electric vehicle market [3]. Meanwhile, traditional automakers, most notably Toyota, have performed strongly in the non-plug-in hybrid sector, with annual sales surpassing 4 million units [4]. Due to their excellent fuel economy, low emissions, and strong adaptability to complex driving conditions, HEVs are widely regarded as a transitional technology for achieving low-carbon transportation—particularly suitable for regions where battery costs remain high and charging infrastructure is still underdeveloped [5]. Consequently, improving HEV operational efficiency and extending the lifespan of power systems have become a key focus for both industry and academia.
Within HEV systems, the Energy Management Strategy (EMS) serves as the core control logic, responsible for coordinating energy distribution between the internal combustion engine and the electric motor. It directly affects fuel efficiency, emission control, and driving experience [6]. Early commercial HEVs predominantly adopted rule-based control strategies, wherein engineers predefined logical rules and triggering conditions (such as battery state of charge or motor power demand) based on empirical knowledge to manage engine start–stop and energy flow switching. Although intuitive and easy to implement, these methods struggle to achieve global optimality under dynamically changing driving conditions [7]. To address this limitation, researchers proposed optimization-based methods such as Dynamic Programming (DP) and Pontryagin’s Minimum Principle (PMP), which theoretically yield minimum fuel consumption paths and are often used as benchmarks to evaluate other strategies. However, their heavy computational burden and reliance on complete driving cycle information hinder real-time implementation [8].
To balance optimality and real-time responsiveness, online optimization strategies such as Equivalent Consumption Minimization Strategy (ECMS) and Model Predictive Control (MPC) have been explored. These approaches perform rolling horizon optimization to achieve near-optimal control based on current system states and short-term predictions [9,10]. While these methods outperform traditional rule-based strategies in terms of efficiency, challenges such as complex parameter tuning and sensitivity to model inaccuracies persist. Overall, although conventional EMS techniques have achieved substantial success in both theoretical and practical domains, their limited adaptability, delayed response to complex scenarios, and lack of scalability across different vehicle types and driving conditions have become major obstacles to further performance improvements in HEVs. Furthermore, ensuring the safety and reliability of the entire HEV system, including critical aspects such as wireless charging security (e.g., mitigating risks from foreign metal objects [11] and managing thermal hazards in ground assemblies during misaligned charging [12]), adds another layer of complexity that a modern EMS must address.
In recent years, the rapid advancement of artificial intelligence (AI) technologies has provided new opportunities for breakthroughs in HEV energy management. With the increased computational power and widespread adoption of big data analytics, researchers have begun integrating AI algorithms into EMS frameworks, enabling environmental perception, autonomous learning, and intelligent decision-making—thus reducing the dependence on manual rules and predefined models [13]. In this context, this paper categorizes AI-driven EMS approaches into four types: knowledge-driven, data-driven, reinforcement learning-based, and hybrid strategies. Additionally, emerging AI technologies like deep learning and vehicle-to-vehicle communication are becoming central to modern EMS, providing enhanced adaptability and scalability in real-world conditions. These advancements mark a significant evolution in energy management, addressing the complex, dynamic scenarios encountered by modern HEVs.
While several previous studies have provided valuable insights into Energy Management Strategies (EMSs) for Hybrid Electric Vehicles (HEVs), notable gaps remain. Panday and Bansal [14] focused primarily on conventional rule-based and optimization methods, with limited discussion of recent advances in artificial intelligence (AI). Urooj et al. [1] proposed a classification into online and offline EMS approaches, but their review lacked a systematic comparison across AI paradigms and offered minimal discussion of performance metrics such as fuel economy or real-time feasibility. Lü et al. [15] conducted a detailed analysis of Model Predictive Control (MPC) in EMS but did not explore broader AI techniques or their integration with traditional strategies. Zhang et al. [16] introduced a structural classification of EMSs, yet their work did not fully account for the growing role of hybrid AI strategies or the need for Pareto-optimal trade-offs in multi-objective optimization. Additionally, reviews such as that of Khalatbarisoltani et al. [17], though informative for fuel cell vehicles, fall short of addressing the challenges specific to HEVs.
In contrast, the present review advances the field in several key aspects. First, it introduces a unified AI-based EMS classification framework encompassing four major paradigms—knowledge-driven, data-driven, reinforcement learning, and hybrid strategies—reflecting the most recent algorithmic trends. Second, it provides a detailed, quantitative comparison of each category’s strengths and limitations across critical dimensions, including fuel economy gains, computational cost, real-time performance, emissions reduction, and generalization ability. Third, it addresses emerging challenges such as interpretability, safety validation, and the lack of standardized benchmarking protocols, offering a structured analysis and future research perspectives. Finally, this work synthesizes recent interdisciplinary advances—such as digital twin technology, vehicle-to-everything (V2X) communication, and solid-state battery integration—into the EMS context, thereby offering a comprehensive and forward-looking roadmap for the development of intelligent, scalable, and robust energy management systems.

2. Overview of EMS for HEVs

The Energy Management Strategy (EMS) in Hybrid Electric Vehicles (HEVs) governs the power allocation between the internal combustion engine and the electric motor, playing a central role in determining fuel economy, emission levels, and overall vehicle performance. Over the past decades, researchers have developed a variety of EMS approaches, which can be broadly classified into two major categories based on their design principles: rule-based and optimization-based strategies. In addition to these, methods such as Model Predictive Control (MPC) and fuzzy control occupy important positions within the traditional EMS framework.
Figure 1 illustrates a commonly accepted classification framework for Energy Management Strategies in HEVs. Among them, rule-based control strategies and optimization-based control strategies constitute the two fundamental categories. The former relies on a set of predefined control rules, while the latter seeks optimal or sub-optimal control sequences through optimization algorithms.
With technological advancements, additional strategies such as predictive control and intelligent control have emerged, extending and supplementing the traditional classification schemes. In the following sections, we will introduce the underlying principles and key characteristics of each strategy type, and analyze how the incorporation of artificial intelligence has driven the evolution of energy management paradigms.

2.1. Rule-Based Strategies

Rule-based EMSs determine the operating modes and power distribution between the engine and motor according to a set of predefined rules [18]. These strategies are generally guided by engineering experience, heuristic logic, or simple mathematical models, and make decisions based on vehicle states such as battery state of charge (SOC), vehicle speed, and power demand.
Typical rule-based control strategies can be divided into two main categories: deterministic rules and fuzzy logic strategies. Deterministic rule-based strategies rely on clearly defined thresholds and conditions, employing a strict “if–then” logic to control the engine and motor operation modes [19]. For example, a strategy may define charging and discharging thresholds based on SOC: when the SOC falls below a certain level, the engine starts charging the battery, whereas, when the SOC exceeds the threshold, the electric motor is prioritized for vehicle propulsion. These strategies are structurally simple and easy to implement, and respond quickly, making them well-suited for engineering applications. However, they lack flexibility and struggle to adapt to complex or rapidly changing driving conditions.
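To make the if–then logic concrete, the following Python sketch implements a hypothetical SOC-thresholded power split for a parallel HEV; the threshold values, charging bias, and power limit are illustrative assumptions rather than parameters from any cited strategy.

```python
def rule_based_split(soc, p_demand, soc_low=0.4, soc_high=0.7, p_ev_max=15e3):
    """Toy deterministic rule-based power split (all thresholds illustrative).

    Returns (engine_power_W, motor_power_W) for a parallel HEV.
    """
    if soc < soc_low:
        # SOC too low: engine covers demand plus a charging surplus.
        p_engine = p_demand + 10e3            # 10 kW charging bias (assumed)
        return p_engine, p_demand - p_engine  # negative motor power = charging
    if soc > soc_high and p_demand < p_ev_max:
        # SOC high and demand modest: pure electric drive, engine off.
        return 0.0, p_demand
    # Default: engine carries the base load, motor stays idle.
    return p_demand, 0.0
```

Because every branch is an explicit threshold test, such a controller is trivially auditable and cheap to execute, which is precisely the engineering appeal noted above; its rigidity under changing conditions is equally visible.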
Fuzzy logic strategies, in contrast, utilize fuzzy sets and fuzzy inference rules to map input variables—such as SOC, vehicle speed, and power demand—into fuzzy values. A fuzzy inference system is then employed to derive output control decisions. This approach is more effective in handling system uncertainty and nonlinearities, thereby enhancing the smoothness and robustness of energy management. While fuzzy logic strategies offer greater adaptability than deterministic rules, they typically require more effort in design and parameter tuning [1].
An overview of commonly used rule-based strategies and their main characteristics is presented in Table 1.

2.2. Optimization-Based Strategies

Optimization-based Energy Management Strategies center on formulating and solving an optimization problem, typically aimed at minimizing a cost function—such as fuel consumption, emissions, or overall operational cost—while satisfying vehicle power demand and system constraints [21]. Unlike rule-based approaches that apply direct control based on predefined rules, optimization strategies define global performance objectives and derive real-time control decisions using vehicle models and driving conditions through dedicated optimization algorithms. These strategies are theoretically capable of significantly improving fuel economy and offer a more systematic and objective alternative to heuristic methods [22].
Optimization-based strategies can generally be classified into two subcategories: global optimization and instantaneous (real-time) optimization. Global optimization focuses on the entire driving cycle and seeks the optimal energy allocation trajectory using methods such as DP and optimal control theory, including Pontryagin’s Minimum Principle (PMP) [23]. The resulting control trajectories are typically regarded as benchmarks for evaluating the performance of other strategies. However, global optimization requires full knowledge of future driving conditions and involves intensive computation, which renders it impractical for real-time implementation. It is, therefore, mostly used in offline simulations or precomputed control maps [24].
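The backward recursion underlying DP-based benchmarking can be sketched compactly. The snippet below is schematic only: it discretizes battery SOC and engine power and assumes placeholder fuel-cost and SOC-transition models supplied by the caller; a production implementation would add interpolation and finer constraint handling.

```python
import numpy as np

def dp_backward(p_dem, soc_grid, u_grid, fuel_cost, soc_step):
    """Schematic backward DP for an HEV power split over a known cycle.

    p_dem:     power demand at each time step (known a priori, as DP requires)
    soc_grid:  ascending grid of discretized battery SOC states
    u_grid:    candidate engine-power controls
    fuel_cost: callable u -> instantaneous fuel cost (placeholder model)
    soc_step:  callable (soc, u, p_dem_t) -> next SOC (placeholder dynamics)
    """
    T, N = len(p_dem), len(soc_grid)
    V = np.zeros(N)                          # terminal cost-to-go
    policy = np.zeros((T, N), dtype=int)     # index into u_grid
    for t in range(T - 1, -1, -1):
        V_new = np.full(N, np.inf)
        for i, soc in enumerate(soc_grid):
            for k, u in enumerate(u_grid):
                soc_next = soc_step(soc, u, p_dem[t])
                if not (soc_grid[0] <= soc_next <= soc_grid[-1]):
                    continue                 # enforce SOC bounds
                j = int(np.abs(soc_grid - soc_next).argmin())  # nearest grid
                c = fuel_cost(u) + V[j]
                if c < V_new[i]:
                    V_new[i], policy[t, i] = c, k
        V = V_new
    return policy, V
```

The triple loop over time, states, and controls is what makes DP computationally heavy and dependent on the full demand profile p_dem, which is exactly why it serves as an offline benchmark rather than an online controller.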
In contrast, instantaneous or real-time optimization aims to make locally optimal decisions at each time step or within a short prediction horizon. Through rolling horizon prediction and continuous updates, these strategies approximate global optimality over the entire trip. Representative methods include the Equivalent Consumption Minimization Strategy (ECMS) and MPC [25]. At each sampling instant, control inputs are computed to minimize the instantaneous cost function based on the current vehicle state and short-term predictions, thereby achieving near-optimal performance throughout the driving process. Since these methods do not require complete a priori knowledge of the driving cycle, they are well suited for real-time applications and have become a research and application hotspot [17].
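The ECMS logic at a single sampling instant can be illustrated with a short sketch. The fuel-rate map, equivalence factor, and candidate power grid below are assumptions chosen for readability, not values from the cited literature.

```python
import numpy as np

def ecms_step(p_demand, s=2.5, lhv=42.5e6, u_grid=None, fuel_rate=None):
    """One ECMS sampling instant (model and parameters are illustrative).

    Minimizes m_eq(u) = m_fuel(u) + s * P_batt(u) / LHV over engine powers u,
    where P_batt = p_demand - u and s is the equivalence factor converting
    electrical power into an equivalent fuel mass flow.
    """
    if u_grid is None:
        u_grid = np.linspace(0.0, 60e3, 61)   # candidate engine powers [W]
    if fuel_rate is None:                     # toy engine fuel map [kg/s]
        fuel_rate = lambda u: 1e-8 * u**1.1 + (2e-4 if u > 0 else 0.0)
    p_batt = p_demand - u_grid                # battery covers the remainder
    m_eq = np.array([fuel_rate(u) for u in u_grid]) + s * p_batt / lhv
    return float(u_grid[m_eq.argmin()])       # engine power to apply
```

The equivalence factor s encodes the trade-off between present fuel and future electricity; Section 3.4 discusses hybrid strategies that tune it online instead of fixing it in advance.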
Common optimization-based EMS strategies and their characteristics are summarized in Table 2 below.

2.3. Strategy Evolution and the Introduction of AI

With the swift advancement of vehicle connectivity and artificial intelligence, EMSs for HEVs are also undergoing a notable transformation. Traditional systems—whether rule-based or optimization-driven—are increasingly being augmented by intelligent algorithms. In recent years, technologies like reinforcement learning and deep learning have started to reshape the EMS landscape, pushing energy management beyond a rigid, predefined logic toward systems capable of adaptive learning. This shift marks a growing convergence between knowledge-based and data-centric approaches. As illustrated in Figure 2, the new generation of EMS methods can generally be grouped into a few main types.
Knowledge-driven approaches are grounded in expert understanding and physical system modeling. These methods embed domain expertise into the decision-making process, typically relying on established rules or mathematical models. To enhance flexibility under varying driving conditions, researchers have focused on refining control parameters or incorporating modules capable of recognizing driving patterns [27].
On the other hand, data-driven approaches harness large-scale data and machine learning techniques to build models that can make decisions based on vehicle data—either from past records or in real time. These strategies often involve using neural networks or comparable tools to approximate optimal policies, or to function directly as controllers that learn how to split power between the engine and the electric motor. Crucially, these systems bypass the need for manually crafted rules, instead extracting control logic autonomously from extensive simulation or real-world data [28,29]. That said, their effectiveness is closely tied to the depth and diversity of the training datasets, along with the model’s ability to generalize to new conditions [29].
Reinforcement learning, as a subset of data-driven methods, focuses on learning by interacting with the environment. EMS is reframed here as a sequential decision-making problem, tackled by agents—often employing deep reinforcement learning—that iteratively refine their control strategies to maximize long-term benefits like improved fuel efficiency or reduced emissions [30]. These methods do not rely on precise system models, which allows them to adapt in real time to a wide array of driving situations. However, issues such as stability during training, the speed of convergence, and safety during exploration still pose significant challenges [31].
Finally, hybrid strategies aim to draw on the strengths of all the above to create more balanced systems. In many hybrid EMS frameworks, conventional rule-based or model-driven strategies handle higher-level decision-making to maintain safety and ensure performance, while AI-driven algorithms are tasked with fine-tuning the power distribution in real time. This setup allows for both global optimization and real-time responsiveness [32]. Some hybrid approaches take it a step further by using AI to improve existing rule-based systems—for example, optimizing fuzzy logic rules with genetic algorithms or deep learning techniques—or by embedding expert insights into learning algorithms to speed up convergence [33]. By combining the reliability of knowledge-driven methods with the adaptability of data-driven ones, hybrid strategies manage to achieve a more robust and versatile performance profile.

2.4. Benchmark Driving Cycles and Open-Source Tools for EMS Evaluation

To ensure reproducibility and objective performance comparison, recent EMS research for HEVs has emphasized the importance of standardized driving cycles and open-source simulation platforms. These elements enable the consistent evaluation of energy consumption, emissions, and real-time control performance across various operating conditions and strategy types [16,32].
Table 3 summarizes commonly used benchmark driving cycles. These standardized tests represent a wide range of driving conditions—including urban stop-and-go traffic, high-speed highway cruising, and aggressive acceleration scenarios—and are widely adopted in EMS validation studies. For instance, WLTC is now a globally accepted benchmark, while US06 and UDDS are often used in the U.S. to simulate real-world driving behavior.
To complement physical benchmarks, various open-source tools have emerged to support simulation-based EMS development and testing. As listed in Table 4, these tools provide capabilities such as powertrain modeling, signal acquisition, control optimization, and vehicle behavior simulation. Platforms like OpenVIBE and PyEM enable modular testing of AI-driven EMS designs under flexible and repeatable virtual conditions, significantly improving research transparency and comparability.
By integrating these benchmark cycles and tools into EMS evaluation frameworks, researchers can enhance the standardization of experimental setups, facilitate fair cross-study comparisons, and accelerate the development of reliable, scalable Energy Management Strategies for real-world HEV applications.

3. Classification and Principles of AI-Based EMSs for HEVs

In light of Section 2’s discussion on EMS evolution, this section categorizes AI-driven Energy Management Strategies into four main types: knowledge-driven, data-driven, reinforcement learning, and hybrid methods. For each category, the fundamental principles, typical algorithms, and representative use cases in HEV control are presented. This framework clarifies how various AI paradigms address HEV energy management tasks and prepares the ground for the detailed review in Sections 3.1–3.4.

3.1. Knowledge-Driven Strategies

In the energy management of HEVs, knowledge-driven strategies primarily rely on engineering expertise and vehicle dynamics knowledge to establish control rules. These strategies do not require extensive data training, making them particularly suitable and robust in scenarios where system models are difficult to construct or where considerable uncertainties exist [38]. Such strategies typically collect driving condition inputs—such as road type and vehicle speed variations—alongside the operational states of key components including the battery, electric motor, and internal combustion engine. These inputs are processed by the EMS, which utilizes a rule base composed of deterministic and fuzzy logic rules. A reasoning engine then infers an optimal energy distribution plan, which is implemented by a controller to coordinate the operation of all power sources, thereby achieving intelligent energy regulation (as illustrated in Figure 3).
Although knowledge-driven strategies lack self-learning capabilities, their decision logic—derived from expert systems and rule-based frameworks—offers strong interpretability and real-time responsiveness. This approach reflects the core characteristics of early symbolic artificial intelligence. With the rapid evolution of AI technologies, EMS frameworks have increasingly embraced hybridized methodologies. Knowledge-driven strategies, due to their structural stability and engineering feasibility, are frequently integrated into data-driven or reinforcement-learning-based intelligent control schemes. As such, they continue to hold significant research and practical value under the current paradigm of AI-driven vehicle control.
This category of strategy can be further divided into two main types: deterministic rule-based control and fuzzy logic control. Deterministic rule-based control, also known as logic threshold control, defines control logic by setting thresholds for key variables based on empirical knowledge. For instance, as shown in Figure 4, the engine’s start–stop behavior can be regulated by defining upper and lower bounds for the battery’s SOC. This approach was widely adopted in early HEV models—such as the SOC-hold strategy in the Toyota Prius—to balance energy consumption and power output. However, due to its tendency to cause abrupt control actions and result in poor system smoothness, its application is somewhat limited [39,40].
In contrast, fuzzy logic control is based on fuzzy set theory and aims to formalize expert knowledge to handle highly nonlinear systems. As shown in Figure 5, the typical fuzzy control process includes fuzzification, fuzzy inference, and defuzzification. Input variables are first transformed into linguistic variables (e.g., classifying SOC as “low,” “medium,” or “high”) and mapped through membership functions. Expert-defined IF–THEN rules are then applied for fuzzy inference—commonly using the Mamdani method, which determines the rule activation strength via the minimum operator and aggregates outputs via the maximum operator. Finally, defuzzification techniques such as the centroid method are used to derive concrete control commands [41,42].
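A minimal single-input Mamdani controller makes the three stages explicit. The membership-function shapes and rule base below are invented for illustration; a practical controller would use several inputs (SOC, power demand, speed) and expert-tuned sets.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with breakpoints a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def mamdani_engine_share(soc):
    """Toy Mamdani inference: SOC -> engine power share (rules illustrative).

    Rules: IF SOC low THEN share high; IF SOC medium THEN share medium;
           IF SOC high THEN share low.
    """
    y = np.linspace(0.0, 1.0, 201)            # output universe: engine share
    # Fuzzification: membership of the crisp SOC in each linguistic set.
    act = [tri(soc, 0.0, 0.2, 0.45),          # "low" SOC
           tri(soc, 0.3, 0.5, 0.7),           # "medium" SOC
           tri(soc, 0.55, 0.8, 1.0)]          # "high" SOC
    out = [tri(y, 0.6, 0.85, 1.0),            # "high" engine share
           tri(y, 0.25, 0.5, 0.75),           # "medium"
           tri(y, 0.0, 0.15, 0.4)]            # "low"
    # Mamdani inference: clip each consequent at its activation (min),
    # then aggregate across rules with the elementwise maximum.
    agg = np.max([np.minimum(a, o) for a, o in zip(act, out)], axis=0)
    # Centroid defuzzification yields the crisp command.
    return float((y * agg).sum() / (agg.sum() + 1e-12))
```

For example, mamdani_engine_share(0.35) partially activates both the "low" and "medium" rules, producing a blended engine share rather than the abrupt switch a logic-threshold controller would issue.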
Compared with deterministic rule-based control, fuzzy logic control provides greater adaptability and flexibility, significantly improving system smoothness and reducing abrupt transitions. Nonetheless, the design of fuzzy systems is heavily reliant on expert experience. The development of rule sets and membership functions is inherently subjective and labor-intensive, and achieving global system optimality remains challenging. To address these limitations, recent studies have incorporated intelligent optimization algorithms—such as genetic algorithms and particle swarm optimization—to automatically fine-tune fuzzy rule parameters, thereby enhancing system performance [43]. Common input variables for fuzzy controllers include SOC, vehicle power demand, and vehicle speed, while outputs typically involve the distribution of power between the engine and electric motor. To better accommodate dynamic driving environments, multi-mode fuzzy control schemes have been proposed. These approaches dynamically switch between rule subsets tailored for urban, suburban, or highway conditions, thereby improving the generalization and real-world adaptability of the EMS.

3.2. Data-Driven Strategies

With the rapid development of artificial intelligence, data-driven strategies have become a central tool in energy management for HEVs. Unlike knowledge-driven approaches that rely on human expertise, data-driven methods—illustrated in Figure 6—leverage real-world driving data such as vehicle speed and road conditions to establish mappings between input features and optimal energy allocation strategies using supervised or unsupervised learning. This enables intelligent energy coordination and significantly enhances vehicle performance and energy efficiency [13]. As shown in Figure 7, these strategies can be broadly categorized into supervised and unsupervised learning approaches. By deeply modeling the vehicle and environmental states, data-driven methods drive energy allocation toward optimality and improve the synergy between fuel and electric power usage [44].
Unsupervised learning does not require labeled datasets and excels at identifying latent structures in raw data. It is widely applied in driving pattern recognition and dimensionality reduction. As depicted in Figure 8, K-means clustering is commonly used to differentiate driving conditions, enabling the dynamic adjustment of energy allocation strategies to improve dual-energy-source efficiency [45,46]. However, the results are highly sensitive to the initial cluster centroids, which may lead to suboptimal outcomes. Incorporating a genetic algorithm (GA) to optimize the initial centroids can significantly enhance clustering quality and overall system performance [47]. Meanwhile, Principal Component Analysis (PCA) simplifies high-dimensional data by extracting key features through dimensionality reduction. When combined with GAs to optimize control parameters, PCA further strengthens the performance of HEV control systems [28,48].
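A typical pattern-recognition pipeline of this kind is sketched below using scikit-learn. The window length, feature set, and number of clusters are assumptions for illustration; in the cited studies these choices are tuned to the target vehicle and data.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def label_driving_patterns(speed, window=60, n_patterns=3, seed=0):
    """Cluster fixed-length speed windows into driving patterns (illustrative).

    speed: 1 Hz vehicle-speed trace [m/s]; returns one pattern label per
    window, e.g. roughly urban / suburban / highway clusters.
    """
    speed = np.asarray(speed, dtype=float)
    n = len(speed) // window
    segs = speed[:n * window].reshape(n, window)
    accel = np.diff(segs, axis=1)
    # Per-window features: mean/max speed, acceleration variance, idle share.
    feats = np.column_stack([segs.mean(axis=1), segs.max(axis=1),
                             accel.var(axis=1), (segs < 0.5).mean(axis=1)])
    feats = StandardScaler().fit_transform(feats)
    km = KMeans(n_clusters=n_patterns, n_init=10, random_state=seed).fit(feats)
    return km.labels_
```

The resulting labels can then index pattern-specific control parameters, which is the mechanism behind the condition-adaptive strategies discussed above; GA-seeded centroids would replace KMeans' default initialization.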
Supervised learning, by contrast, relies on labeled data to build predictive models and is well-suited for applications such as driving condition classification, fault detection, and energy forecasting. Support Vector Machines (SVMs), known for their excellent generalization ability in handling high-dimensional data, not only offer precise fault diagnosis but also assist in extracting critical conditions for the Equivalent Consumption Minimization Strategy (ECMS), thereby improving both fuel economy and driving comfort [45,49]. In addition, decision trees and ensemble methods like Random Forests have demonstrated strong performance in HEV control. By integrating multiple decision trees, Random Forests enhance input feature selection and construct more robust energy management models, effectively reducing energy waste [50].
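A corresponding supervised step is sketched below: an RBF-kernel SVM trained on labeled window features such as those produced above. The hyperparameters and train/test split are illustrative defaults rather than values from the cited studies.

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_condition_classifier(features, labels):
    """Fit an RBF-kernel SVM to labeled driving-condition data (illustrative).

    features: (n_windows, n_features) array; labels: condition class per window.
    """
    X_tr, X_te, y_tr, y_te = train_test_split(features, labels,
                                              test_size=0.2, random_state=0)
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=10.0, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
    return clf
```

The trained classifier can run at each control interval to select the condition-specific parameter set (for instance, an ECMS equivalence factor) associated with the predicted class.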
In summary, both supervised and unsupervised learning approaches within data-driven strategies contribute distinct advantages to HEV energy management and collectively enhance system intelligence. From the synergistic integration of K-means and GA, to the efficient dimensionality reduction by PCA, and the precise applications of SVM and Random Forests in fault identification and energy control (see Figure 9), data-driven methods are progressively forming the technological foundation for intelligent energy management in HEVs [28,51].

3.3. Reinforcement Learning Strategies

In reinforcement learning (RL), an agent interacts continuously with the environment and adjusts its strategy based on reward and punishment feedback, aiming to maximize long-term cumulative returns. As illustrated in Figure 10, RL plays a pivotal role in the energy management system of HEVs. It enables dynamic decision-making for power distribution between the internal combustion engine and electric motor while simultaneously considering multidimensional feedback such as fuel efficiency, emission impacts, and battery health, thereby continuously optimizing control strategies [52,53,54]. In contrast to conventional supervised learning methods, RL does not rely on predefined labeled datasets and exhibits a strong capacity for autonomous learning. This makes it particularly well-suited for complex driving scenarios characterized by a high uncertainty and dynamic environmental changes [55].
Among various RL algorithms, Q-Learning and SARSA are widely used representatives. As shown in Figure 11, Q-Learning is favored in HEV energy management due to its simplicity and deployment flexibility. It can gradually approximate optimal control policies even under conditions of incomplete information or inaccurate prediction, effectively improving fuel economy and extending battery lifespan [56,57]. SARSA, another value-based method, continuously updates the value of state-action pairs (Q(s, a)) in its Q-table to iteratively enhance the policy performance [58]. Given the high cost and safety risks associated with trial-and-error learning on physical vehicles, RL is typically trained offline within high-fidelity simulation environments. By utilizing extensive simulated driving data to achieve policy convergence, the trained model can then be deployed in real vehicles, with online learning and parameter fine-tuning incorporated to maintain a superior performance even under battery degradation or external condition fluctuations [59,60].
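The value-update mechanics can be shown in a few lines. The sketch below uses a hypothetical discretization of the EMS state into (SOC bin, demand bin) and a small set of discrete engine-power actions; all bin counts and learning parameters are assumptions.

```python
import numpy as np

class TabularQLearningEMS:
    """Toy tabular Q-learning for a discretized EMS (state layout assumed)."""

    def __init__(self, n_soc_bins=20, n_dem_bins=10, n_actions=5,
                 alpha=0.1, gamma=0.99, eps=0.1):
        self.Q = np.zeros((n_soc_bins, n_dem_bins, n_actions))
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, s):
        """Epsilon-greedy action selection over the Q-table row for state s."""
        if np.random.rand() < self.eps:
            return int(np.random.randint(self.Q.shape[-1]))
        return int(self.Q[s].argmax())

    def update(self, s, a, r, s_next):
        """Off-policy Q-learning update:
        Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)).
        SARSA differs only in bootstrapping on the action actually taken
        in s_next instead of the greedy maximum.
        """
        target = r + self.gamma * self.Q[s_next].max()
        self.Q[s + (a,)] += self.alpha * (target - self.Q[s + (a,)])
```

Here s is a tuple of bin indices, and r would be a negative instantaneous fuel (or multi-objective) cost, so maximizing return minimizes consumption; the docstring marks the single line that separates Q-Learning from SARSA.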
In recent years, the continued advancement of deep learning has expanded the application of deep reinforcement learning (DRL) in this field, as shown in Figure 12. DRL leverages deep neural networks to approximate Q-value functions or policy functions, significantly improving the system’s ability to handle high-dimensional state spaces and continuous action spaces, and enhancing adaptability in dynamically complex environments [61]. Techniques such as Deep Q-Networks (DQNs), Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimization (PPO), and Soft Actor–Critic (SAC) have demonstrated great potential in HEV energy management [62,63]. DRL not only enables multi-objective optimization—covering fuel consumption, emissions, and battery longevity—but also facilitates flexible energy allocation strategies based on different driving contexts (e.g., urban traffic, highway cruising, or congestion), thereby exhibiting a superior generalization capability and practical applicability [64]. A comparative overview of representative reinforcement learning algorithms applied in HEV energy management is summarized in Table 5, highlighting their key characteristics, typical applications, and main advantages.
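The common computational core of the DQN family is the temporal-difference loss against a frozen target network, sketched below in PyTorch; the network width, state layout, and action count are assumptions, and actor–critic methods such as DDPG, PPO, and SAC replace the max operator with a learned policy.

```python
import torch
import torch.nn as nn

def make_q_net(state_dim=4, n_actions=5):
    """Small MLP Q-network; the state could be (SOC, speed, accel, demand)."""
    return nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, n_actions))

def dqn_loss(q_net, target_net, batch, gamma=0.99):
    """One DQN TD step on a replay batch (tensor layout assumed).

    batch: dict with float tensors s (B,d), r (B,), s2 (B,d), done (B,)
    and a long tensor a (B,) of action indices.
    """
    s, a, r, s2, done = (batch[k] for k in ("s", "a", "r", "s2", "done"))
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)       # Q(s, a)
    with torch.no_grad():                                      # frozen target
        target = r + gamma * (1.0 - done) * target_net(s2).max(1).values
    return nn.functional.mse_loss(q_sa, target)
```

Training alternates gradient steps on this loss with periodic copies of q_net into target_net, the stabilization trick that makes deep value learning viable in the high-dimensional EMS state spaces described above.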

3.4. Hybrid Strategies

In light of the performance limitations of single control methods, the integration of intelligent algorithms with traditional optimization approaches—referred to as hybrid strategies—has emerged as an effective pathway to enhance the robustness and overall performance of HEV energy management systems.
To provide a structured overview of hybrid energy management approaches, Table 6 summarizes the five principal categories currently adopted in the literature. Each category reflects a specific integration paradigm between intelligent algorithms and conventional optimization/control methods, highlighting their key mechanisms, strengths, and application domains.
The first category involves a collaborative mechanism between offline optimization and online control. DP can generate globally optimal energy allocation strategies under predefined conditions; however, its computational complexity renders it unsuitable for real-time control applications [65]. In hybrid strategies, DP is typically used to generate a large volume of high-quality samples to train learning models that approximate DP-level performance in online environments. For example, Zhuang et al. obtained optimal driving condition data using DP and constructed a mode transition map with Support Vector Machines (SVMs) to optimize the Equivalent Consumption Minimization Strategy (ECMS), leading to notable improvements in fuel economy and driving smoothness [48,68].
The second category embeds real-time optimization within a learning-based control framework. Model Predictive Control (MPC) is capable of dynamic optimization over short time horizons but incurs a high computational cost. To address this, Chen et al. [69] proposed a strategy combining Q-Learning with MPC, as shown in Figure 13. A Q-table is trained offline and subsequently used in MPC’s rolling horizon optimization to assist value evaluation, effectively reducing the computational load and enhancing energy efficiency [66]. Shen et al. further incorporated Double Delayed Q-Learning (DDQL) and vehicle speed predictions via Convolutional Neural Networks (CNNs) to develop a more adaptive MPC control structure [70].
The third category emphasizes the dynamic adjustment of optimization parameters using intelligent algorithms. For instance, the control performance of ECMS depends on fixed equivalence factors, which struggle to adapt to dynamic real-world conditions. Recent studies have proposed feeding the state of charge (SOC) and predicted driving conditions into deep reinforcement learning models to achieve the real-time adaptive tuning of equivalence factors [71]. Shi et al., for example, applied a Deep Double Q-Network (DDQN) combined with periodic prediction data to dynamically refine equivalence factors, enabling the precise coordination between the engine torque and gear ratio under varying driving demands and, ultimately, reducing fuel consumption [66].
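The adaptation mechanism itself is simple to state: nudge the equivalence factor whenever the SOC drifts from its reference. The proportional law below is a deliberately simplified stand-in for the learned policies in the cited work; the gain and bounds are assumed values.

```python
def adapt_equivalence_factor(s, soc, soc_ref=0.6, k_p=1.5,
                             s_min=2.0, s_max=3.5):
    """Toy adaptive-ECMS update (gain and bounds illustrative).

    Raises s when SOC sags below its reference (making battery energy
    'more expensive'), lowers it when SOC is high. A DRL agent, as in the
    studies above, would replace this proportional law with a learned one.
    """
    return min(max(s + k_p * (soc_ref - soc), s_min), s_max)
```

Feeding the returned s into an instantaneous minimizer such as the ecms_step sketch in Section 2.2 closes the loop between the optimization layer and the learning layer.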
The fourth category focuses on the intelligent optimization of parameters within rule-based or fuzzy control systems. Algorithms such as GA and Particle Swarm Optimization (PSO) are frequently employed to fine-tune membership functions and control rules in fuzzy controllers. GA-based optimization has been shown to significantly improve fuel efficiency, achieving energy savings of up to 28% [72]. Fan et al. simultaneously optimized the fuzzy rule thresholds and ECMS equivalence factors using GA, achieving superior real-time performance while remaining close to DP-level fuel economy [73]. Additionally, adaptive Fuzzy Neural Network (FNN) structures have been proposed. Zhang et al. combined neural networks with driving cycle recognition to optimize fuzzy membership functions, improving the strategy’s adaptability across diverse conditions [74]. As shown in Figure 14, Wang et al. further developed the FlexNet network based on expert knowledge, integrating clustering techniques and hybrid learning to optimize FNN parameter configurations and cooperatively balance battery and supercapacitor loads [67].
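A bare-bones real-coded GA of the kind used for such tuning is sketched below; here fitness would run the fuzzy-rule EMS over a driving cycle with candidate membership breakpoints and return simulated fuel use. The operators and rates are illustrative choices.

```python
import numpy as np

def ga_tune(fitness, bounds, pop_size=30, gens=50, mut_sigma=0.1, seed=0):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, one-elite survival (all settings illustrative).

    fitness: callable x -> cost to MINIMIZE (e.g., simulated fuel per cycle)
    bounds:  list of (lo, hi) pairs, one per tuned parameter
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(gens):
        fit = np.array([fitness(x) for x in pop])
        elite = pop[fit.argmin()].copy()               # keep best-so-far
        children = []
        for _ in range(pop_size):
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]  # tournament parent 1
            i, j = rng.integers(pop_size, size=2)
            b = pop[i] if fit[i] < fit[j] else pop[j]  # tournament parent 2
            w = rng.random(len(lo))
            child = w * a + (1.0 - w) * b              # blend crossover
            child += rng.normal(0.0, mut_sigma * (hi - lo))  # mutation
            children.append(np.clip(child, lo, hi))
        pop = np.array(children)
        pop[0] = elite
    fit = np.array([fitness(x) for x in pop])
    return pop[fit.argmin()], float(fit.min())
```

The same loop applies equally to tuning fuzzy membership breakpoints, rule thresholds, or ECMS equivalence factors, which is why GA and PSO wrappers recur throughout this category.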
The fifth category of hybrid strategies implements hierarchical control architectures to coordinate multiple approaches. As illustrated in Figure 15, such architectures typically involve a high-level controller providing global or long-horizon control references—often based on reinforcement learning or global optimization—while low-level controllers (e.g., MPC or rule-based control) perform real-time tracking [75]. For example, Zhang et al. developed a dual-layer EMS, where the upper layer employs DDPG to generate control strategies, and the lower layer selects the appropriate control modes to achieve a balance between fuel economy and response time [74]. In intelligent and connected vehicle scenarios, this architecture can further scale to fleet-level and individual-vehicle-level control: the upper layer manages energy usage across the fleet, while the lower layer performs local optimization for each vehicle. This cooperative architecture enhances the overall system performance via inter-vehicle communication.
In summary, hybrid strategies combine the global optimality and theoretical rigor of optimization algorithms with the self-learning and adaptability of intelligent algorithms, balancing real-time responsiveness with generalization capabilities. In practice, deep reinforcement learning algorithms such as Proximal Policy Optimization (PPO) have already demonstrated near-DP performance while being suitable for real-time applications [76]; GA-optimized fuzzy control strategies integrate expert knowledge with search capabilities to overcome the energy-saving limitations of traditional rule-based control [72]. With the continuous advancement of computational power and vehicular networking infrastructure, multi-strategy hybrid approaches for HEV energy management are becoming mainstream, driving the deeper integration between intelligent control and energy optimization.

4. Research Achievements and Advances in AI-Based Strategies

To provide a clear overview of the performance of various artificial intelligence-based Energy Management Strategies for Hybrid Electric Vehicles (HEVs), Table 7 presents a comparative analysis across key performance metrics. The table highlights the fuel economy improvements, computational costs (CPU time), real-time feasibility (latency), emissions reductions, generalization ability, and the validation methods used in the corresponding studies.
As shown in Table 7, each strategy type—knowledge-driven, data-driven, reinforcement learning, and hybrid—exhibits distinct advantages and limitations. For example, knowledge-driven strategies such as fuzzy logic control (FLC) are highly efficient in terms of computational cost and real-time performance, making them suitable for applications requiring fast response times. However, they tend to have weaker generalization ability than data-driven and reinforcement learning approaches. On the other hand, reinforcement learning methods like PPO show higher fuel economy improvements and stronger generalization, but they come with higher computational costs and are typically validated in simulation environments. Hybrid strategies, which combine reinforcement learning with Model Predictive Control (MPC), provide a balanced solution, offering higher fuel economy improvements and better real-time feasibility, but at the cost of increased computational complexity.
This comparative table helps underline the trade-offs and application scenarios for each approach, guiding the selection of the most suitable Energy Management Strategy based on specific HEV design goals and constraints.

4.1. Applications and Effectiveness of Knowledge-Driven Strategies

Knowledge-driven strategies exhibit outstanding adaptability and robustness in energy management for HEVs, as they integrate expert knowledge, engineering experience, and intelligent algorithms to construct efficient and reliable control frameworks. These strategies are grounded in an a priori understanding of vehicle dynamics, energy flow principles, and driving scenarios, demonstrating particular effectiveness in dynamic environments and multi-objective optimization [77].
Fuzzy logic control (FLC) emphasizes a multi-input/multi-output rule design, balancing fuel efficiency, battery longevity, and driving responsiveness. For series HEVs, ref. [78] proposed a three-input and two-output fuzzy controller using the state of charge (SOC), driver demand, and vehicle speed as inputs, and engine power and regenerative braking strength as outputs. This design effectively maintained the SOC within the target range and achieved an 8% reduction in fuel consumption. Based on this framework, ref. [79] streamlined the rule set and developed a controller based on the battery energy state and target SOC, improving the SOC tracking accuracy and reducing emissions by 12%. Higher-order fuzzy systems demonstrate enhanced performance in highly uncertain environments. The interval type-2 fuzzy logic controller (IT2-FLC) introduced in [80] reduced the dynamic response error by 15% compared to traditional type-1 fuzzy logic controllers (T1-FLCs), revealing its potential in autonomous driving applications.
The adaptive neuro-fuzzy inference system (ANFIS) has further promoted the intelligence of knowledge-driven control strategies. In [81], an ANFIS-based energy management scheme was designed for vehicle-to-grid (V2G) microgrids, enabling the dynamic adjustment of electric vehicle (EV) charging and discharging power under conditions with large initial SOC disparities. This approach improved the power allocation efficiency by 20% compared to conventional FLC. As shown in Figure 16, ref. [82] integrated ANFIS with the equivalent consumption minimization strategy (ECMS) to dynamically calibrate the equivalent factor, resulting in a 23% reduction in the terminal SOC variance for a plug-in parallel HEV, while achieving fuel economy close to the DP benchmark. In a performance comparison between FLC and ANFIS, ref. [83] found that the latter, through data-driven rule base construction, not only enhanced the SOC maintenance capability by 15% but also significantly improved the smoothness of the control curve.
In addition, Fu et al. [84] proposed a rule-based Energy Management Strategy for fuel cell electric vehicles (FCEVs) that optimizes the fuel cell’s participation, open-circuit voltage (OCV) operation, and start/stop actions based on predefined rules, which are dynamically adjusted according to different driving cycles and load conditions. This strategy improved the fuel cell lifespan by 9.47%, while reducing hydrogen consumption and system degradation rates. Similarly, Qin et al. [85] introduced a strategy for fuel cell hybrid mining trucks that combines dynamic programming (DP) with real-time optimization, embedding optimal rules derived from DP into a real-time model refined through sequential quadratic programming (SQP) for energy allocation. This strategy resulted in a reduction in operational costs by up to 15.90% in continuous operation conditions.
Knowledge-driven strategies also demonstrate distinct advantages in the coordination of multi-energy systems. Reference [86] proposed an online energy management framework based on game theory, which utilizes the real-time identification of power source characteristics and power allocation via game-based decision-making. This approach reduced the total operating cost of a multi-stack fuel cell HEV by 9.7% and decreased hydrogen consumption by 6.2%. In [87], a high-precision artificial neural network (ANN) fuel consumption model was developed to capture the transient characteristics of the engine. By incorporating this model into an improved adaptive equivalent consumption minimization strategy (ECMS), the additional fuel consumption caused by engine state fluctuations was reduced by 99.16%, and overall fuel economy improved by 3.37%.
Furthermore, Reference [88] developed a trip-energy-driven hybrid rule-based Energy Management Strategy, which integrates personalized control rules to reduce engine start–stop events by up to 30%. Compared to conventional rule-based strategies, this approach achieved a 5.8% improvement in fuel efficiency.

4.2. Technical Progress in Data-Driven Strategies

Data-driven approaches have been increasingly applied to energy management in HEVs, aiming to extract patterns from complex data through machine learning and to optimize control strategies accordingly. Based on differences in learning paradigms, existing methods can be categorized into unsupervised learning and supervised learning, both of which offer complementary advantages in terms of feature extraction, model training, and practical deployment scenarios.
In the EMS of HEVs, unsupervised learning techniques are primarily employed for driving pattern recognition and dimensionality reduction. These methods are advantageous due to their ability to explore working conditions without requiring labeled data. Common techniques include K-means clustering and PCA [15,89]. For example, Zhang et al. applied K-means to classify driving behaviors to guide energy allocation strategies, resulting in reduced fuel and electricity consumption [90]. Sun et al. combined K-means with Markov chains for vehicle speed prediction and the dynamic adjustment of the equivalent factor in the ECMS, thus improving energy utilization efficiency [89]. Furthermore, some studies have integrated PCA with GA to cluster PHEV driving conditions and optimize control parameters, yielding a control performance close to the global optimum achieved by DP [91,92]. Li et al. optimized K-means cluster centers via GA to enable driving state identification and efficient engine operation. Dimensionality reduction techniques such as autoencoders have also been used to reduce the model complexity and prevent overfitting [93].
In contrast, supervised learning is widely utilized in EMS for tasks such as energy distribution, condition recognition, and fault diagnosis. SVMs, known for their strong classification capabilities on small, high-dimensional datasets, are frequently adopted for pattern recognition and anomaly detection. Zhuang et al. combined SVM with optimal reference points generated by DP to extract mode-switching information and enhance real-time control strategies, thereby improving system efficiency [68]. Ji et al. implemented a one-class SVM model for fault detection, demonstrating high accuracy in real vehicle validations [94]. As shown in Figure 17, Shi et al. integrated an SVM with PSO for driving condition recognition and ECMS parameter tuning, significantly extending battery life [66]. Other studies have employed SVMs to construct multi-class diagnostic systems or to achieve dynamic parameter adjustment, thereby realizing both a high identification accuracy and improved fuel savings [66].
Ensemble learning methods such as decision trees and Random Forests have also been used to build intelligent controllers. Wang et al. incorporated expert knowledge into a decision tree to initialize a neural network, thus enhancing the energy management performance [67]. Lu et al. applied Random Forests for input feature selection and trained controllers using data generated by DP, achieving better energy efficiency compared to conventional methods [95].
ANNs play a critical role in EMS as well. As illustrated in Figure 18, Millo et al. built a digital twin model and trained a bidirectional long short-term memory (Bi-LSTM) network with DP results to replicate the optimal control strategy, resulting in approximately a 4–5% improvement in fuel economy [96]. Xie et al. proposed a control method integrating PMP and ANN, taking battery aging into account to strike a balance between energy efficiency and battery life [97]. In recent years, hybrid approaches combining ensemble learning and ANN have gained attention. For example, Lu et al. used Random Forests for PHEV modeling and subsequently developed an ANN-based controller, significantly reducing the total system energy consumption [95].
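The offline-to-online pattern these studies share (generate optimal targets with DP, then fit a fast network to imitate them) reduces to straightforward supervised regression, sketched below; the architecture, loss, and training settings are illustrative, whereas the cited works use recurrent models such as Bi-LSTMs on richer state histories.

```python
import torch
import torch.nn as nn

def fit_dp_imitator(states, dp_actions, epochs=200, lr=1e-3):
    """Supervised 'policy cloning' of offline DP results (illustrative).

    states:     (N, d) float tensor of vehicle/driving features
    dp_actions: (N, 1) float tensor of DP-optimal engine-power commands
    """
    net = nn.Sequential(nn.Linear(states.shape[1], 64), nn.Tanh(),
                        nn.Linear(64, 64), nn.Tanh(),
                        nn.Linear(64, 1))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(states), dp_actions)
        loss.backward()
        opt.step()
    return net   # cheap to evaluate online; DP itself stays offline
```

The imitator inherits DP's near-optimality only on states resembling its training cycles, which is one concrete source of the generalization concerns noted throughout this review.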
In addition, Bo et al. [98] introduced an optimization-based power-following Energy Management Strategy for hydrogen fuel cell vehicles, aiming to optimize the power distribution and reduce the hydrogen consumption. This strategy improved fuel efficiency by ensuring the fuel cell operated in its optimal efficiency range, and the optimized strategy resulted in a reduction in hydrogen consumption by 8.8% over a 100 km driving distance.
In summary, supervised and unsupervised learning methods each demonstrate strengths in HEV energy management. Supervised learning, with its reliance on labeled data, excels in accurate control and fault detection, whereas unsupervised learning enhances adaptability through working condition exploration. The integration of both paradigms has been shown to outperform traditional control strategies under various practical conditions. Coupled with classical optimization techniques such as DP, PMP, and GA, these intelligent methods offer robust technical support for the intelligent development of EMSs by balancing real-time control and global optimality [96].

4.3. Performance Evaluation of Reinforcement Learning Strategies

The application of reinforcement learning (RL) in HEV energy management has seen continuous advancement, covering key areas such as algorithm optimization, multi-objective policy integration, hierarchical control design, and adaptive capabilities under complex driving conditions. Overall, RL-based strategies have demonstrated significant performance improvements, with algorithmic refinement serving as a critical breakthrough point. For example, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm has garnered considerable attention due to its high stability and precision. As illustrated in Figure 19, Huang and He incorporated a prioritized experience replay mechanism into the TD3 framework to develop a data-driven Energy Management Strategy, which improved the economic performance of fuel cell buses by 8.03%, achieving 97.15% of the performance benchmarked by DP [99]. Zhang et al. further extended TD3 to manage thermal systems in multi-mode PHEVs, integrating traffic and topographic data to achieve a 93.7% fuel efficiency rate and up to 9.6% energy savings [100]. Simultaneously, other RL variants such as Deep Q-Learning (DQL) and Actor–Critic algorithms have shown strong performance in high-dimensional state spaces: Wu et al. achieved only a 2.76% cost deviation from DP with the Actor–Critic approach, alongside an 88.7% reduction in computation time [101]. Some studies have introduced integrated mechanisms such as Dyna-H and end-to-end (E2E) learning to further enhance real-time response and computational efficiency [102].
To quantitatively compare the performance of representative RL algorithms across different driving scenarios, Table 8 summarizes the fuel consumption results of Q-Learning, DQN, DDPG, SAC, and PPO under standard driving cycles such as FTP-75 and WVUSUB. These values were obtained using compact HEV or plug-in hybrid electric bus (PHEB) models on platforms like Simulink and CarSim. It is evident that deep RL methods like SAC and PPO significantly outperform classical algorithms in terms of fuel economy, with PPO (Clip) achieving the lowest fuel consumption (18.960 L/100 km) in the 4 × WVUSUB cycle, and SAC achieving 3.258 L/100 km under FTP-75.
In the case of FCHEVs, Tang et al. [104] proposed a degradation adaptive Energy Management Strategy using DDPG. This strategy dynamically adjusts the power distribution between the fuel cell and battery based on the real-time health status (SOH) estimates of the fuel cell stack, significantly improving the system durability while reducing the energy consumption.
In the realm of multi-objective optimization and constraint handling, RL effectively balances fuel economy, battery longevity, and system safety by flexibly designing reward functions and incorporating dynamic models. Deng et al. combined TD3 with an online battery aging assessment model to achieve dual-objective optimization involving reduced hydrogen consumption and improved SOC control [105]. Inverse reinforcement learning (IRL), as shown in Figure 20, has also gained attention; for instance, Lv et al. extracted critical reward weights from expert trajectories to achieve fuel savings ranging from 5% to 10% [106]. Meanwhile, user comfort is increasingly integrated into the optimization objectives. Yavuz and Kivanc combined deep RL (DRL) with vehicle-to-everything (V2X) technology to manage the energy within vehicle fleets, reducing costs by 19.18% while meeting passenger requirements [107]. Similarly, Sun et al. [108] integrated deep RL with a behavior-aware EMS to optimize the power distribution, enhancing fuel efficiency by approximately 56% while significantly reducing battery degradation.
To address complex driving scenarios and sparse reward challenges, hierarchical reinforcement learning (HRL) and multi-agent frameworks have emerged as research focal points. Qi et al. proposed a hierarchical DQL architecture that not only improved training efficiency but also reduced fuel consumption by 7.3–16.5% [109]. Yang et al. integrated Q-Learning with double Q-Learning to enable driver behavior prediction and regulation, enhancing system modularity and scalability [110].
In line with this, Lei et al. [111] proposed a hierarchical Energy Management Strategy based on HIO and MDDP–FSC, optimizing the power distribution and improving the hydrogen consumption efficiency by 95.20% for FCHEVs. This approach extended the lifespan of both the battery and fuel cell, while enhancing the overall system efficiency.
In terms of system integration and environmental adaptability, RL applications continue to expand. Pardhasaradhi et al. combined a deep reinforcement learning architecture (DRLA) with the global thermal optimal approach (GTOA) for a hybrid fuel cell–battery–supercapacitor system, significantly improving its dynamic performance [112]. Liu et al. introduced a reward function based on the remaining driving range, optimizing the SOC maintenance and control strategy, and validated its real-time performance and reliability through hardware-in-the-loop (HIL) testing [113].
Overall, RL has reached a near-mature level of control performance in HEV energy management, with fuel economy typically reaching 90–97% of the DP benchmark and computational efficiency gains ranging from 50% to 90%. Future research may focus on reducing the training data requirements, improving the cost estimation accuracy, and achieving vehicle-to-grid (V2G) integration. Lin et al. recommended combining distributed learning and transfer learning to enhance the generalization capability, and integrating physical modeling with multi-constraint frameworks to improve system safety [114,115]. Furthermore, the deep integration of RL with V2X communication and energy trading is expected to become a critical direction in the next generation of intelligent energy management systems.

4.4. Integrated Benefits of Hybrid Strategies

Research on Energy Management Strategies for HEVs is gradually evolving from single-algorithm approaches to hybrid strategies that integrate multiple methodologies. By combining AI technologies with conventional optimization techniques, researchers have achieved significant advancements in fuel efficiency, real-time control performance, and environmental adaptability.
In the fusion of reinforcement learning and optimization algorithms, Reference [116] proposed a DDPG–ANFIS network that integrates DDPG with an ANFIS, achieving a near-DP-level real-time control performance in real-world environments. This method outperformed standalone strategies in both the Worldwide Harmonized Light Vehicles Test Cycle (WLTC) and on-road experiments. Similarly, References [69,70] combined DQL with MPC, constructing an integrated framework that merges offline training and rolling optimization. This approach achieved a fuel economy comparable to DP while reducing the single-step computation time to 33.4 ms, indicating a high real-time capability. Further, References [102,117] incorporated the Dyna-H algorithm and an Actor–Critic architecture (as shown in Figure 21), which maintained the optimization effectiveness while significantly lowering the computational burden. Notably, the Actor–Critic-based method achieved training costs only 2.76% higher than DP, with a computation efficiency increase of up to 88.7%.
In the collaborative optimization of MPC and machine learning, Reference [118] embedded a long short-term memory (LSTM) network into an improved MPC framework (Figure 22), dynamically optimizing energy allocation and achieving an 18.71% improvement in fuel efficiency, approaching global optimality. References [119,120] employed convex optimization techniques such as the projected interior point method and the alternating direction method of multipliers (ADMM), which not only ensured computational efficiency but also outperformed DP in energy savings, demonstrating the potential of hybrid modeling in high-dimensional optimization problems.
The integration of fuzzy logic with intelligent optimization has also shown promising results in HEVs. Reference [121] proposed a Fuzzy Adaptive Equivalent Consumption Minimization Strategy (Fuzzy A-ECMS) that dynamically adjusts the equivalence factor, thereby improving both fuel economy and SOC stability under various standard driving cycles. Meanwhile, the fusion of data-driven techniques with simulation platforms has further extended the applicability of Energy Management Strategies. For instance, Reference [28] combined a TD3 algorithm with behavioral cloning regularization and simulation testing, achieving a 6.10% fuel saving rate in hardware-in-the-loop (HIL) experiments. Reference [122] employed Gaussian Mixture Model (GMM)-based clustering of driving conditions to generate Stochastic Dynamic Programming (SDP) strategies, realizing the cooperative optimization of energy distribution efficiency and engine operating states.
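As an illustration of the A-ECMS mechanism underlying Reference [121], the sketch below minimizes an instantaneous equivalent fuel cost, fuel + s · P_batt/Q_lhv, and nudges the equivalence factor s with a coarse fuzzy-style rule on the SOC error; all maps, thresholds, and parameters are toy assumptions rather than the cited controller.

```python
# Minimal A-ECMS sketch: equivalent-cost minimization with an adaptive factor s.
def fuel_rate(p_eng):              # toy fuel map [g/s]
    return 0.0 if p_eng <= 0 else 0.25 + 0.06 * p_eng

def adapt_s(s, soc, soc_ref=0.5):
    """Coarse fuzzy-style adaptation: price electricity by the SOC error."""
    err = soc_ref - soc
    if err > 0.05:   return s + 0.10      # SOC low  -> make electricity pricier
    if err < -0.05:  return s - 0.10      # SOC high -> make electricity cheaper
    return s + 1.0 * err                  # small errors: proportional nudge

def ecms_step(p_demand, s, q_lhv=42.5):
    """Pick the engine power minimizing equivalent fuel [g/s]; battery covers the rest.
    Note p_batt [kW] / q_lhv [MJ/kg] is dimensionally g/s."""
    candidates = [0.0, 5.0, 10.0, 15.0, 20.0, 25.0, 30.0]
    return min(candidates, key=lambda p_eng:
               fuel_rate(p_eng) + s * (p_demand - p_eng) / q_lhv)

soc, s = 0.42, 2.5
for p_demand in [20.0, 35.0, 15.0, 25.0]:
    s = adapt_s(s, soc)
    p_eng = ecms_step(p_demand, s)
    soc += (p_eng - p_demand) * (1.0 / 3600.0) / 1.5   # toy SOC integration
    print(f"demand={p_demand:4.1f} kW  s={s:.2f}  engine={p_eng:4.1f} kW  soc={soc:.3f}")
```

Running the loop shows the intended feedback: as SOC drifts below the reference, s rises, electric power becomes "more expensive", and the minimizer shifts load back to the engine.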
To meet control demands under dynamic environments, the integration of reinforcement learning with Markov chains has emerged as a research hotspot. As shown in Figure 23, Reference [123] proposed a framework that combines online Markov chains with Speedy Q-Learning (SQL), allowing dynamic updates to state transition probabilities. This approach effectively accelerated policy convergence, enhanced fuel economy, and met real-time control requirements, showing a strong engineering adaptability in complex scenarios such as hybrid tracked vehicles.
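The core of the online Markov-chain component can be sketched in a few lines: quantize the driver's power demand into states, maintain running transition counts, and renormalize so the learner always plans against up-to-date transition probabilities. The discretization and demand trace below are illustrative, not the SQL implementation of Reference [123].

```python
# Minimal sketch: online estimation of demand-transition probabilities.
import numpy as np

N_STATES = 5                               # quantized power-demand levels
counts = np.ones((N_STATES, N_STATES))     # Laplace prior avoids zero rows

def quantize(p_demand, p_max=50.0):
    return min(N_STATES - 1, int(p_demand / p_max * N_STATES))

def observe(prev_state, new_state):
    counts[prev_state, new_state] += 1.0   # online update from driving data

def transition_matrix():
    return counts / counts.sum(axis=1, keepdims=True)

# Feed a short demand trace [kW] and watch the estimate adapt online.
trace = [5.0, 12.0, 18.0, 30.0, 28.0, 35.0, 22.0, 10.0, 8.0, 15.0]
states = [quantize(p) for p in trace]
for s0, s1 in zip(states, states[1:]):
    observe(s0, s1)

P = transition_matrix()
print("P[2] =", np.round(P[2], 2))         # next-state distribution from mid demand
# An RL update (e.g., speedy Q-learning) would then form expected targets as
#   target(s, a) = r(s, a) + gamma * sum_s' P[s, s'] * max_a' Q[s', a']
```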
In addition, Tao et al. [124] proposed a hybrid strategy that combines Support Vector Machines (SVMs) and Artificial Neural Networks (ANNs) to optimize driving styles, achieving up to a 16.47% reduction in hydrogen consumption. This approach dynamically adapts the Energy Management Strategy based on the driving style, enhancing fuel efficiency under various driving conditions. Satheesh Kumar et al. [125] introduced a hybrid Energy Management Strategy by combining Finite Basis Physics-Informed Neural Networks (FBPINNs) with the Mountain Goat Optimizer (MGO). This method was highly effective in optimizing fuel economy, achieving a 95% improvement in efficiency by accurately predicting the fuel consumption and adjusting the Energy Management Strategies accordingly.
In conclusion, hybrid control strategies leverage the learning capability of AI and the theoretical strengths of traditional algorithms to achieve multi-objective optimization in HEV energy management. Future research may focus on the deep integration of reinforcement learning with deep models, while incorporating intelligent traffic information to improve the responsiveness of control strategies. In addition, optimization frameworks capable of handling multiple constraints should be developed, and real-world factors such as battery degradation should be embedded into control models to promote the widespread engineering application of hybrid strategies [70,102].

5. Limitations and Challenges of AI-Based Strategies

Balancing multiple conflicting objectives (fuel economy, emissions, battery health, and response time) inherently requires Pareto-efficient trade-offs. In practice, knowledge-driven controllers encode fixed trade-offs (e.g., rule- or fuzzy-based) to meet these goals, but they yield a single hand-crafted policy rather than a spectrum of Pareto-optimal solutions. Data-driven (supervised) models, likewise, learn one composite objective from data, so any multi-objective balance (e.g., including battery aging) still depends on manually weighted training targets. RL agents can, in principle, learn Pareto-efficient policies by reward design and have achieved near-optimal energy control in HEVs. However, as noted in [126], tuning multi-objective reward weights in rapidly changing driving scenarios is difficult, which typically limits RL to a narrow set of trade-offs. Hybrid EMS schemes may combine offline Pareto data (e.g., via DP) with online learning to cover more of the trade-off space, but this adds architectural complexity and, to date, unified Pareto-optimization remains largely unaddressed. Table 9 summarizes the relative capabilities of these AI strategies in handling multi-objective EMS tasks [127].
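The weight-tuning difficulty can be seen directly in a scalarized reward: each weight vector bakes in a single point on the Pareto front, so an agent trained under one vector cannot express the other trade-offs. The reward terms and magnitudes below are illustrative assumptions.

```python
# Illustrative scalarized HEV reward: fuel, SOC deviation, and battery wear.
def reward(fuel_g, soc, batt_throughput_kwh,
           w_fuel=1.0, w_soc=10.0, w_aging=5.0, soc_ref=0.5):
    return -(w_fuel * fuel_g
             + w_soc * (soc - soc_ref) ** 2
             + w_aging * batt_throughput_kwh)

# The same transition scores very differently under different weightings.
transition = dict(fuel_g=1.2, soc=0.43, batt_throughput_kwh=0.02)
for w_soc in (1.0, 10.0, 100.0):
    print(f"w_soc={w_soc:5.1f} -> reward={reward(**transition, w_soc=w_soc):8.3f}")
```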

5.1. Adaptability Issues of Knowledge-Driven Strategies

As explained earlier, rule-based energy management approaches for HEVs rely mainly on expert experience and preset rules for decision-making and control. They are easy to implement, have a low computational cost, and generally satisfy real-time control requirements. Additionally, their rule-based reasoning is transparent, offering high interpretability and engineering robustness [130,131]. Nonetheless, they also have major constraints.
Fixed rules are usually tuned for a particular powertrain and a specific set of operating conditions and lack any dynamic adaptation capability. Their ability to generalize to realistic and varied driving conditions is therefore limited, rendering them suboptimal for unseen scenarios [132]. For example, a power allocation strategy calibrated on standard driving cycles may perform poorly when it encounters new driving patterns or adverse conditions [27].
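A minimal thermostat-style charge-depleting/charge-sustaining rule set makes this rigidity explicit: every threshold below is a hand-tuned constant that stays fixed no matter how the driving conditions change (all values are illustrative assumptions, not any production calibration).

```python
# Minimal rule-based (thermostat / CD-CS) split with hand-tuned thresholds.
SOC_LOW, SOC_HIGH = 0.35, 0.45     # charge-sustaining band (hand-tuned)
P_EV_MAX = 15.0                    # demand [kW] servable in electric-only mode

def rule_based_split(p_demand, soc, engine_on):
    # Hysteresis on SOC plus a demand threshold decide the engine state.
    if soc < SOC_LOW or p_demand > P_EV_MAX:
        engine_on = True
    elif soc > SOC_HIGH and p_demand <= P_EV_MAX:
        engine_on = False
    if not engine_on:
        return 0.0, engine_on                    # pure electric drive
    p_eng = min(30.0, p_demand + 5.0)            # load-point raising to recharge
    return p_eng, engine_on

soc, engine_on = 0.40, False
for p_demand in [10.0, 18.0, 12.0, 8.0]:
    p_eng, engine_on = rule_based_split(p_demand, soc, engine_on)
    soc += (p_eng - p_demand) * (1.0 / 3600.0) / 1.5   # toy SOC integration
    print(f"demand={p_demand:4.1f} kW  engine_on={engine_on}  "
          f"p_eng={p_eng:4.1f} kW  soc={soc:.3f}")
```

Whatever cycle the constants SOC_LOW, SOC_HIGH, and P_EV_MAX were tuned on, the controller applies them unchanged everywhere else, which is precisely the adaptability gap discussed above.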
Moreover, expertise-based approaches rely heavily on human knowledge, which constrains both their optimization potential and their global optimality. Compared with optimization-based approaches, they fall short of fully exploiting the energy-saving capability of hybrid powertrains and deliver suboptimal control performance [133]. Additionally, because different developers adopt different rule bases and tuning procedures, standardized evaluation criteria and benchmarking protocols are lacking, which hinders objective comparisons between expertise-based approaches and, to some degree, the progress of the field.
In short, the key tasks facing knowledge-based energy management approaches are to improve their adaptability and generalizability, to unlock their optimization potential, and to build a unified evaluation framework. These remain open research problems.

5.2. Data Dependency and Explainability Limitations of Data-Driven Strategies

Data-driven energy management policies are learned from past vehicle operation data or offline optimization solutions, employing machine learning methods to derive control policies automatically. Being more flexible than rule-based methods, data-driven policies can capture intricate nonlinear relationships without human intervention, thereby contributing to improved fuel efficiency. Nevertheless, deploying such policies in real-world applications still faces several challenges and technical hurdles.
First, the need for high-quality data creates inherent data collection and annotation challenges. Supervised learning demands large quantities of training data covering extensive operating conditions, together with trustworthy annotations, normally extracted from time-consuming offline optimum-seeking simulations. Without sufficient diversity and correct labeling, models tend to overfit to specific conditions and generalize poorly to unseen ones [134,135]. Although unsupervised learning reduces the need for labeled data, it still relies on a rich and diverse data distribution to capture common driving behaviors. In rare or boundary situations, data-driven approaches often fail to learn adequate policies, leading to degraded control performance in long-tail conditions [136].
Second, real-time execution remains a challenge. Even though models with deep or complex architectures can be executed fairly efficiently on present-day embedded hardware accelerators, they may still fall short of the millisecond-level execution times required by onboard controllers [137]. Moreover, data-driven models are usually "black boxes" with low interpretability, making it hard to understand the reasoning behind their decisions [138]. This lack of transparency raises concerns in safety-critical automotive applications and makes debugging and validation cumbersome. In terms of algorithmic stability, data-driven approaches can be unstable both during training and in operation: in training, random perturbations of the initial weights can lead to dramatically different learned policies; in deployment, inputs outside the training distribution can produce anomalous or untrustworthy outputs [139]. Lastly, the lack of standardized datasets and widely accepted evaluation metrics across studies makes it hard to benchmark and objectively compare the performance of different data-driven approaches, which impedes clear comparisons and progress within the field.
Overall, strengthening the robustness and generalization of data-driven energy management approaches, reducing their reliance on large annotated datasets, increasing interpretability, and developing standardized evaluation protocols are critical challenges. As connected and autonomous vehicles make multi-source data increasingly available, a primary emerging challenge is how to leverage these data effectively while maintaining confidence in the model.

5.3. Generalization and Safety Constraints in Reinforcement Learning Strategies

Reinforcement learning (RL)-based policies, which learn energy management policies automatically through iterated agent–environment interaction and trial and error, have been a hot research topic in recent years [53]. In contrast to supervised learning, RL does not require optimal control sequences supplied by humans as examples. Instead, it uses a reward function to drive strategy optimization, with the agent able to explore large decision spaces and discover near-optimal solutions without human direction [140]. This gives RL policies the capability to surpass human-designed rules and approach optimal control performance. For instance, deep reinforcement learning algorithms have been applied to power allocation in parallel hybrid vehicles, delivering better fuel-saving performance than traditional rule-based policies in simulations [141]. Nevertheless, implementing RL policies in real automotive applications still faces various challenges.
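The trial-and-error loop itself is simple to sketch. The example below runs tabular Q-learning on a toy SOC-keeping task; the environment dynamics, fuel map, discretization, and reward are all illustrative assumptions rather than any cited study's setup.

```python
# Compact tabular Q-learning loop on a toy SOC-keeping task (all models toy).
import random

random.seed(0)
SOC_BINS = 21                      # SOC discretized to 0.00, 0.05, ..., 1.00
ACTIONS = [0.0, 10.0, 20.0]        # engine power candidates [kW]
Q = [[0.0] * len(ACTIONS) for _ in range(SOC_BINS)]
alpha, gamma, eps = 0.1, 0.95, 0.2

def step(soc, a_idx, p_demand):
    p_eng = ACTIONS[a_idx]
    soc = min(1.0, max(0.0, soc + (p_eng - p_demand) * 0.002))  # toy dynamics
    fuel = 0.0 if p_eng == 0 else 0.3 + 0.05 * p_eng            # toy fuel [g/s]
    reward = -fuel - 20.0 * (soc - 0.5) ** 2                    # fuel + SOC keeping
    return soc, reward

def s_idx(soc):
    return int(round(soc * (SOC_BINS - 1)))

for episode in range(2000):
    soc = 0.5
    for t in range(100):
        p_demand = random.choice([5.0, 15.0, 25.0])             # random drive load
        s = s_idx(soc)
        a = (random.randrange(len(ACTIONS)) if random.random() < eps
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        soc, r = step(soc, a, p_demand)
        s2 = s_idx(soc)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])   # TD update

print("greedy action at SOC=0.30:", ACTIONS[max(range(3), key=lambda i: Q[s_idx(0.3)][i])])
print("greedy action at SOC=0.70:", ACTIONS[max(range(3), key=lambda i: Q[s_idx(0.7)][i])])
```

After training, the greedy policy typically turns the engine on at low SOC and off at high SOC; that is, the agent rediscovers a sensible charge-sustaining behavior purely from the reward signal.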
First, sample efficiency and safe exploration are major concerns: large amounts of data are usually needed to train RL agents. Trial and error on real vehicles is slow and risky, so offline training often relies on high-fidelity simulators (e.g., ADVISOR and AVL Cruise) [142]. However, a significant experimental validation gap exists, as most studies rely solely on these simulation platforms. Simulation setups fall short of closely modeling the complex dynamics and uncertainties of real conditions, and the discrepancy between virtual and real worlds can cause a dramatic performance degradation when policies that perform well in simulation are transferred to real vehicles [143]. The lack of reported validation on real-vehicle testing platforms remains a critical barrier to deployment. RL policies can also destabilize or fail in long-tail circumstances (e.g., unusual weather or infrequent driving styles) owing to the lack of training experience [144]. Generalization, robustness, and experimental validation therefore remain significant challenges for reinforcement learning in energy management, calling for varied training scenarios, methods such as domain randomization, and, crucially, real-world testing [145].
Second, instability and convergence difficulties occur: the training process of deep reinforcement learning agents is sensitive to hyperparameters and initial conditions, and is likely to result in non-convergence or suboptimal performance. There is a need to fine-tune algorithms and network structures to overcome such difficulties [146]. Compared to static strategies, the policies of RL agents are not rigid rules but are iteratively updated, which makes it harder to ensure their performance stability [147].
Third, although online decision-making with a learned RL policy typically requires only a single forward pass, decision latency and computational resource requirements must still be managed when the state space is large and sensory data are plentiful in complex environments [148]. Moreover, RL policies are often based on neural networks, so their decision process is hard to interpret, making them classic "black-box" models. This creates safety and interpretability concerns in automotive control: engineering practice must guarantee that safety constraints can be enforced on the control decisions of learned policies; otherwise, unacceptable control behaviors may occur [149]. Existing research seeks to embed safety considerations within the reward function and to impose rule-based constraints during training, yet there is still no mature solution ensuring that learned policies comply with safety and legal-ethical specifications in every circumstance [150]. Demonstrating safety on real-vehicle platforms under diverse operating conditions is essential. Lastly, the differences in simulation setups, vehicle models, experimental validation rigor (or lack thereof), and operating conditions among studies, together with the wide range of evaluation metrics (e.g., fuel consumption, emissions, and the weighting of battery degradation), result in the absence of common benchmarks with which to compare RL policies [151]. This benchmark gap is exacerbated by the scarcity of publicly available real-vehicle test results. In general, several challenges must still be overcome to leverage RL for HEV energy management, including training efficiency, strategy generalization, safety, experimental validation, and validation standards [152]. Training with mixed real-vehicle and simulation data, the design of reliable safety-constrained algorithms, the development of dedicated real-vehicle testing platforms, and the exploitation of connected-vehicle predictive data have recently been proposed as research directions to enhance the performance and real-world applicability of RL strategies. These directions mark the new frontier for applying RL-based energy management in the age of AI [153].

5.4. Architectural Complexity and Validation Challenges of Hybrid Strategies

Hybrid energy management approaches aim to integrate intelligent algorithms with classical rule-based control or optimization methods, reconciling the safety of rule-based methods with the superior performance of optimization methods to maximize the overall operating efficiency of HEVs [5,154]. Mainstream methods include the following: applying rule-based logic to discrete control operations such as engine start-stop while optimizing continuous power allocation with reinforcement learning or neural networks, striking a trade-off between safety and flexibility; or using algorithms such as DP and ECMS to generate large amounts of optimal data offline for training, greatly speeding up the online response while approximating optimal control performance [155]. Hierarchical control structures are also widely adopted, in which the upper level identifies driving conditions and selects the corresponding sub-strategy while the lower level executes it in real time [156].
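A minimal sketch of this hierarchical pattern is given below: an upper layer classifies the recent driving condition and selects a sub-strategy, and a lower layer executes it step by step. The classifier, sub-strategies, and thresholds are illustrative assumptions, not any cited architecture.

```python
# Minimal two-layer EMS sketch: condition recognition above, execution below.
def classify_condition(recent_speeds_kph):
    mean = sum(recent_speeds_kph) / len(recent_speeds_kph)
    return "urban" if mean < 40 else "highway"

def urban_strategy(p_demand, soc):
    # Favor electric drive; engine only as an SOC backstop.
    return 0.0 if soc > 0.35 else min(20.0, p_demand + 5.0)

def highway_strategy(p_demand, soc):
    # Engine-dominant; battery shaves peaks or absorbs surplus.
    return max(0.0, p_demand - 5.0) if soc > 0.6 else p_demand + 2.0

SUB_STRATEGIES = {"urban": urban_strategy, "highway": highway_strategy}

def hierarchical_ems(recent_speeds_kph, p_demand, soc):
    mode = classify_condition(recent_speeds_kph)          # upper layer
    p_eng = SUB_STRATEGIES[mode](p_demand, soc)           # lower layer
    return mode, p_eng

print(hierarchical_ems([25, 30, 35], p_demand=12.0, soc=0.55))   # ('urban', 0.0)
print(hierarchical_ems([90, 95, 100], p_demand=30.0, soc=0.55))  # ('highway', 32.0)
```

In practice the upper layer may be a learned classifier and the lower layer an optimizer or RL policy, but the coordination problem, that is, defining clean interfaces and avoiding conflicting switches between sub-strategies, is exactly the architectural complexity discussed next.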
While the hybrid approach is conceptually appealing, it faces several technical difficulties in practice. The first is architectural complexity: the functional interface of each submodule must be delineated correctly so that conflicting controls and frequent switching do not cause energy losses or system instability. Designing the mechanism for optimally fusing strategies is the key to success [157]. Secondly, strategy fusion raises the parameter dimensionality, requiring rule-threshold tuning as well as frequent updates of the learning models, which greatly increases development and testing costs [72]. Regarding real-time performance, when online optimization algorithms (such as MPC) or deep-learning-based algorithms are integrated, model compression or upgraded computational resources become necessary to ensure timely control [158]. Interpretability is also crucial: although some solutions introduce expert rules into the learning module to constrain outputs and increase strategy transparency [159], once data-driven or RL algorithms participate in the global architecture, their "black-box" character persists, and validation of the strategies with respect to safety and stability remains inadequate. Lastly, unified evaluation metrics are still missing, and most hybrid approaches have only been proven effective under particular test conditions, so their ability to jointly optimize fuel savings, emissions, and battery life as multiple objectives still deserves further research [114].
Future research should therefore concentrate on developing a hybrid control framework with a unified architecture that can dynamically select and intelligently integrate different methods to handle varied and complex operating conditions. Concurrently, control structures with good interpretability, safety assurance mechanisms, and strong engineering implementability need to be developed and evaluated in depth for generalization through combined simulation and real-vehicle experiments. Although hybrid approaches are still at an early stage of development, the direction of synthesizing the stability of conventional control with the optimization capability of AI is gradually taking shape. With advances in algorithms and computing power, such approaches are expected to form a significant evolution trajectory for next-generation HEV energy management systems [109].

5.5. Future Research Directions and Development Trends of Different Strategies

For knowledge-driven strategies, the academic community has proposed various methods for improving adaptability and overcoming rigid rule structures. These include adaptive rule tuning, multi-mode switching logic based on driving condition recognition, and the integration of meta-rule frameworks [132]. Furthermore, fuzzy logic systems have been enhanced using intelligent optimization techniques to dynamically adjust control parameters. Moving forward, future research should emphasize the development of self-evolving control rules and lightweight expert-system fusion schemes to accommodate system degradation and variable user behavior over time [116].
In data-driven strategies, efforts have been made to improve generalization and robustness through the use of large-scale, diverse datasets and advanced feature extraction methods. Transfer learning and domain adaptation are increasingly adopted to extend the applicability across different vehicle types and driving conditions. In addition, hybrid models that embed physical constraints or expert knowledge into machine learning architectures have been introduced to enhance reliability and interpretability. Future research should focus on reducing the dependency on labeled data via self-supervised learning, improving model transparency, and ensuring long-term stability through continuous online adaptation [53].
For reinforcement learning strategies, current solutions primarily address safety, convergence, and generalization challenges. Offline training with high-fidelity simulators, ensemble-based uncertainty estimation, and hierarchical control structures have been widely studied to ensure safer exploration and improve policy robustness. In addition, transfer learning and curriculum learning have been employed to accelerate convergence in complex scenarios. Looking ahead, emphasis should be placed on safe policy certification, interpretable decision-making structures, and the integration of domain knowledge to constrain and guide learning processes, especially in real-time applications [30,157].
In the case of hybrid strategies, researchers have designed modular and layered control frameworks that combine the strengths of rule-based, optimization-based, and learning-based methods. Hierarchical architectures with decision decomposition, hybrid fuzzy-learning systems, and adaptive equivalence factor tuning mechanisms have shown improved energy efficiency and operational flexibility. However, the complexity of these systems poses challenges in coordination, validation, and interpretability. Future work should aim to standardize the hybrid EMS architecture design, develop transparent decision logic across control layers, and explore fleet-level collaborative optimization under connected vehicle environments [1,14].
Across all strategy types, several common directions emerge. First, online adaptation is crucial: whether through adaptive fuzzy rules, learning-based parameter tuning, or lifelong-learning RL, EMSs should progressively refine themselves from new data. Second, explainability and trust must improve: embedding domain knowledge into AI models (e.g., physics-informed neural networks) and using interpretable structures (such as decision trees or sparse rules) can make AI–EMS decisions more transparent. Third, the holistic co-design of control and hardware is promising, for example, coordinating thermal management or battery degradation models with EMS decision-making. Finally, as vehicles become connected, federated and collaborative learning will allow EMSs to benefit from fleet-wide data while respecting privacy.

6. Conclusions and Future Outlook

In the context of accelerated transformation driven by artificial intelligence, the research on Energy Management Strategies for HEVs continues to deepen, gradually forming four main approaches represented by knowledge-driven, data-driven, reinforcement learning, and hybrid strategies. As summarized in Table 10, knowledge-driven strategies rely on rule systems and expert experience, providing significant advantages in ensuring real-time performance and interpretability. Data-driven methods leverage large amounts of real-world or simulation data for model training, enabling the system to exhibit stronger adaptability in changing environments. Reinforcement learning strategies optimize themselves through dynamic interaction with the environment, particularly demonstrating near-optimal performance in energy control tasks within continuous state spaces. Hybrid strategies, which integrate the strengths of the aforementioned approaches, balance control quality, system stability, and generalization capability, and have, thus, gradually become a focal point of research.
Despite numerous breakthroughs, HEV Energy Management Strategies still face several core challenges at present. First, deep learning and reinforcement learning models still lack a sufficient generalization ability when facing complex operating conditions, exhibiting limited stability for unseen scenarios. Second, current mainstream intelligent control algorithms lack sufficient interpretability, making it difficult to meet the stringent safety and compliance requirements of industrial applications. Third, some high-performance algorithms are highly dependent on computational resources, creating bottlenecks for deployment on onboard controllers, significantly restricting their large-scale practical application. Additionally, there is no unified evaluation standard or testing platform in the industry, making it difficult to directly compare the effectiveness of different strategies and hindering the efficiency and accuracy of transitioning from laboratory research to engineering applications.
In summary, these observations highlight that AI-based EMS methods, despite their advantages, face overarching challenges: they often rely on extensive, high-quality training data, involve complex model architectures that strain onboard resources, and lack standardized validation frameworks for deployment. Overcoming these issues will require data-efficient learning techniques, model simplification or compression, and unified testing protocols. Future research should, therefore, prioritize these areas to bridge the gap between AI innovation and real-world HEV energy management applications.
Looking to the future, the development of HEV energy management will be significantly influenced by cutting-edge technologies such as deep reinforcement learning, multi-agent cooperative control, vehicle-to-everything (V2X) communication, quantum communication, and digital twins. Deep learning and offline reinforcement learning methods are expected to significantly improve training efficiency in high-dimensional complex environments. Intelligent and Connected Vehicles (ICVs) will promote energy resource collaboration and optimization between vehicles. Meanwhile, the integration of V2X and quantum communication technologies combined with digital twin technology will facilitate more precise, secure environmental perception and predictive decision-making, ensuring robust and safe real-time communication. Quantum communication, particularly quantum key distribution (QKD), provides unconditional security for data transmission, effectively preventing information leakage and cyber-attacks, thereby enhancing the security of the data exchange in vehicle-to-vehicle and vehicle-to-infrastructure communication. In this context, the research value of hybrid strategies is becoming increasingly prominent. By integrating rule-based control logic, model predictive methods, and deep learning mechanisms, hybrid strategies not only have inherent advantages in robustness and flexibility but also demonstrate a significant potential for adaptability and scalability in practical applications.
At the engineering implementation level, future research should focus on improving the robustness of algorithms and their ability to model safety constraints, exploring more transparent interpretability mechanisms to meet regulatory requirements, and promoting the development of multi-objective collaborative optimization techniques, covering energy consumption control, emission reduction, dynamic response, and ride comfort. Additionally, there is a need to establish a unified testing condition database and evaluation platform to standardize strategy verification and systematize engineering transformation. It is crucial to address the lack of real-world validation, and, thus, more emphasis should be placed on experimental testing in real-vehicle environments. It is foreseeable that the next generation of HEV energy management systems will continue to improve in intelligence while placing a greater emphasis on the construction of safety assurance mechanisms, promoting the integration of artificial intelligence and systems engineering, and thus achieving more efficient and reliable practical applications.
In parallel, the emergence of solid-state batteries (SSBs) offers a promising direction for the future of HEVs. Compared to conventional lithium-ion batteries, SSBs provide a higher energy density, faster charging rates, and significantly improved safety due to the elimination of flammable liquid electrolytes. These characteristics enable more aggressive and flexible energy scheduling, posing new requirements for real-time energy management algorithms. SSBs offer energy densities greater than 500 Wh/L and can achieve 0–80% charging in under 12 min, along with enhanced thermal stability, which opens up new avenues for AI-driven strategies. These strategies can leverage the stable and predictable discharge behavior of SSBs to enhance prediction accuracy, improve battery health management, and optimize charging control. Furthermore, the predictable degradation behavior of SSBs makes them ideal for reinforcement-learning-based health management systems. Recent studies suggest that this predictability could potentially reduce battery aging by 15–20% under dynamic loads, thus extending the lifespan of the battery and improving overall vehicle performance [161,162]. As SSBs exhibit unique thermal and degradation dynamics, integrating advanced AI models, such as digital twins and deep reinforcement learning, will be essential to fully exploit their advantages. The convergence of SSB technology with AI-based control will further enhance the efficiency, adaptability, and safety of next-generation HEV energy management systems.

Author Contributions

Conceptualization, X.W. and B.H.; methodology, X.W.; validation, X.W., B.H. and W.Y.; formal analysis, X.W.; investigation, X.W.; writing—original draft preparation, X.W. and W.Y.; writing—review and editing, W.Y., M.M. and G.W.; visualization, X.W.; supervision, B.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Key Research and Development Program of China, grant number 2022YFC3006005.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
A-ECMS Adaptive Equivalent Consumption Minimization Strategy
ADMM Alternating Direction Method of Multipliers
ANFIS Adaptive Neuro-Fuzzy Inference System
ANN Artificial Neural Network
Bi-LSTM Bidirectional Long Short-Term Memory
CLTC China Light-Duty Vehicle Test Cycle
CNN Convolutional Neural Network
DDPG Deep Deterministic Policy Gradient
DDQL Double Delayed Q-Learning
DDQN Deep Double Q-Network
DL Deep Learning
DP Dynamic Programming
DQN Deep Q-Network
DRL Deep Reinforcement Learning
DRLA Deep Reinforcement Learning Architecture
ECMS Equivalent Consumption Minimization Strategy
E2E End-to-End
EMS Energy Management Strategy
FIS Fuzzy Inference System
FLC Fuzzy Logic Control
FNN Fuzzy Neural Network
GA Genetic Algorithm
GMM Gaussian Mixture Model
GTOA Global Thermal Optimal Approach
HEV Hybrid Electric Vehicle
HIL Hardware-in-the-Loop
HRL Hierarchical Reinforcement Learning
ICV Intelligent and Connected Vehicle
IEA International Energy Agency
IRL Inverse Reinforcement Learning
ITS Intelligent Transportation Systems
IT2-FLC Interval Type-2 Fuzzy Logic Controller
KGA-Means K-means Genetic Algorithm
LSTM Long Short-Term Memory
MPC Model Predictive Control
NEDC New European Driving Cycle
PCA Principal Component Analysis
PHEB Plug-In Hybrid Electric Bus
PHEV Plug-In Hybrid Electric Vehicle
PMP Pontryagin’s Minimum Principle
PPO Proximal Policy Optimization
PSO Particle Swarm Optimization
QKD Quantum Key Distribution
RL Reinforcement Learning
SAC Soft Actor-Critic
SDP Stochastic Dynamic Programming
SOC State of Charge
SQL Speedy Q-Learning
SSB Solid-State Battery
SVM Support Vector Machine
TD3 Twin Delayed Deep Deterministic Policy Gradient
T1-FLC Type-1 Fuzzy Logic Controller
UDDS Urban Dynamometer Driving Schedule
US06 US06 Supplemental Federal Test Procedure
V2G Vehicle-to-Grid
V2X Vehicle-to-Everything
WLTC Worldwide Harmonized Light Vehicles Test Cycle

References

  1. Urooj, A.; Nasir, A. Review of Intelligent Energy Management Techniques for Hybrid Electric Vehicles. J. Energy Storage 2024, 92, 112132. [Google Scholar] [CrossRef]
  2. Global EV Outlook 2025–Analysis-IEA. Available online: https://www.iea.org/reports/global-ev-outlook-2025 (accessed on 27 May 2025).
  3. Global EV Outlook 2024–Analysis-IEA. Available online: https://www.iea.org/reports/global-ev-outlook-2024 (accessed on 27 May 2024).
  4. News, K. Toyota Remains World’s Top Automaker in 2024 as China’s BYD Emerges. Available online: https://english.kyodonews.net/news/2025/01/8f1af7eacf64-toyota-group-retains-crown-as-worlds-biggest-automaker-in-2024.html (accessed on 5 June 2025).
  5. Mittal, V.; Shah, R. Energy Management Strategies for Hybrid Electric Vehicles: A Technology Roadmap. World Electr. Veh. J. 2024, 15, 424. [Google Scholar] [CrossRef]
  6. Olatomiwa, L.; Mekhilef, S.; Ismail, M.S.; Moghavvemi, M. Energy Management Strategies in Hybrid Renewable Energy Systems: A Review. Renew. Sustain. Energy Rev. 2016, 62, 821–835. [Google Scholar] [CrossRef]
  7. Torreglosa, J.P.; Garcia-Triviño, P.; Vera, D.; López-García, D.A. Analyzing the Improvements of Energy Management Systems for Hybrid Electric Vehicles Using a Systematic Literature Review: How Far Are These Controls from Rule-Based Controls Used in Commercial Vehicles? Appl. Sci. 2020, 10, 8744. [Google Scholar] [CrossRef]
  8. Liu, T.; Tan, W.; Tang, X.; Zhang, J.; Xing, Y.; Cao, D. Driving Conditions-Driven Energy Management Strategies for Hybrid Electric Vehicles: A Review. Renew. Sustain. Energy Rev. 2021, 151, 111521. [Google Scholar] [CrossRef]
  9. Choi, K.; Byun, J.; Lee, S.; Jang, I.G. Adaptive Equivalent Consumption Minimization Strategy (A-ECMS) for the HEVs With a Near-Optimal Equivalent Factor Considering Driving Conditions. IEEE Trans. Veh. Technol. 2022, 71, 2538–2549. [Google Scholar] [CrossRef]
  10. Basantes, J.A.; Paredes, D.E.; Llanos, J.R.; Ortiz, D.E.; Burgos, C.D. Energy Management System (EMS) Based on Model Predictive Control (MPC) for an Isolated DC Microgrid. Energies 2023, 16, 2912. [Google Scholar] [CrossRef]
  11. Niu, S.; Zhao, Q.; Chen, H.; Niu, S.; Jian, L. Noncooperative Metal Object Detection Using Pole-to-Pole EM Distribution Characteristics for Wireless EV Charger Employing DD Coils. IEEE Trans. Ind. Electron. 2024, 71, 6335–6344. [Google Scholar] [CrossRef]
  12. Niu, S.; Yu, H.; Niu, S.; Jian, L. Power Loss Analysis and Thermal Assessment on Wireless Electric Vehicle Charging Technology: The over-Temperature Risk of Ground Assembly Needs Attention. Appl. Energy 2020, 275, 115344. [Google Scholar] [CrossRef]
  13. Zhong, D.; Liu, B.; Liu, L.; Fan, W.; Tang, J. Artificial Intelligence Algorithms for Hybrid Electric Powertrain System Control: A Review. Energies 2025, 18, 2018. [Google Scholar] [CrossRef]
  14. Panday, A.; Bansal, H.O. A Review of Optimal Energy Management Strategies for Hybrid Electric Vehicle. Int. J. Veh. Technol. 2014, 2014, 160510. [Google Scholar] [CrossRef]
  15. Lü, X.; Li, S.; He, X.; Xie, C.; He, S.; Xu, Y.; Fang, J.; Zhang, M.; Yang, X. Hybrid Electric Vehicles: A Review of Energy Management Strategies Based on Model Predictive Control. J. Energy Storage 2022, 56, 106112. [Google Scholar] [CrossRef]
  16. Zhang, F.; Wang, L.; Coskun, S.; Pang, H.; Cui, Y.; Xi, J. Energy Management Strategies for Hybrid Electric Vehicles: Review, Classification, Comparison, and Outlook. Energies 2020, 13, 3352. [Google Scholar] [CrossRef]
  17. Khalatbarisoltani, A.; Zhou, H.; Tang, X.; Kandidayeni, M.; Boulon, L.; Hu, X. Energy Management Strategies for Fuel Cell Vehicles: A Comprehensive Review of the Latest Progress in Modeling, Strategies, and Future Prospects. IEEE Trans. Intell. Transp. Syst. 2024, 25, 14–32. [Google Scholar] [CrossRef]
  18. Ding, N. Energy Management System Design for Plug-In Hybrid Electric Vehicle Based on the Battery Management System Applications. Ph.D. Dissertation, Auckland University of Technology, Auckland, New Zealand, 2020. [Google Scholar]
  19. Zhu, Y.; Li, X.; Liu, Q.; Li, S.; Xu, Y. Review Article: A Comprehensive Review of Energy Management Strategies for Hybrid Electric Vehicles. Mech. Sci. 2022, 13, 147–188. [Google Scholar] [CrossRef]
  20. Azim Mohseni, N.; Bayati, N.; Ebel, T. Energy Management Strategies of Hybrid Electric Vehicles: A Comparative Review. IET Smart Grid 2024, 7, 191–220. [Google Scholar] [CrossRef]
  21. Omakor, J.; Alzayed, M.; Chaoui, H. Particle Swarm-Optimized Fuzzy Logic Energy Management of Hybrid Energy Storage in Electric Vehicles. Energies 2024, 17, 2163. [Google Scholar] [CrossRef]
  22. Xu, N.; Kong, Y.; Chu, L.; Ju, H.; Yang, Z.; Xu, Z.; Xu, Z. Towards a Smarter Energy Management System for Hybrid Vehicles: A Comprehensive Review of Control Strategies. Appl. Sci. 2019, 9, 2026. [Google Scholar] [CrossRef]
  23. Li, J.; Wu, X.; Xu, M.; Liu, Y. A Real-Time Optimization Energy Management of Range Extended Electric Vehicles for Battery Lifetime and Energy Consumption. J. Power Sources 2021, 498, 229939. [Google Scholar] [CrossRef]
  24. Gautam, A.K.; Tariq, M.; Pandey, J.P.; Verma, K.S.; Urooj, S. Hybrid Sources Powered Electric Vehicle Configuration and Integrated Optimal Power Management Strategy. IEEE Access 2022, 10, 121684–121711. [Google Scholar] [CrossRef]
  25. Yao, M.; Zhu, B.; Zhang, N. Adaptive Real-Time Optimal Control for Energy Management Strategy of Extended Range Electric Vehicle. Energy Convers. Manag. 2021, 234, 113874. [Google Scholar] [CrossRef]
  26. Fesakis, N.; Falekas, G.; Palaiologou, I.; Lazaridou, G.E.; Karlis, A. Integration and Optimization of Multisource Electric Vehicles: A Critical Review of Hybrid Energy Systems, Topologies, and Control Algorithms. Energies 2024, 17, 4364. [Google Scholar] [CrossRef]
  27. Li, X.; He, H.; Wu, J. Knowledge-Guided Deep Reinforcement Learning for Multiobjective Energy Management of Fuel Cell Electric Vehicles. IEEE Trans. Transport. Electrific. 2025, 11, 2344–2355. [Google Scholar] [CrossRef]
  28. Hu, B.; Zhang, S.; Liu, B. A Hybrid Algorithm Combining Data-Driven and Simulation-Based Reinforcement Learning Approaches to Energy Management of Hybrid Electric Vehicles. IEEE Trans. Transport. Electrific. 2024, 10, 1257–1273. [Google Scholar] [CrossRef]
  29. Husseini, F.E.; Noura, H.; Vernier, F. Machine-Learning-Based Smart Energy Management Systems: A Review. In Proceedings of the 2024 International Wireless Communications and Mobile Computing (IWCMC), Ayia Napa, Cyprus, 27–31 May 2024; pp. 1296–1302. [Google Scholar] [CrossRef]
  30. Wang, H.; Ye, Y.; Zhang, J.; Xu, B. A Comparative Study of 13 Deep Reinforcement Learning Based Energy Management Methods for a Hybrid Electric Vehicle. Energy 2023, 266, 126497. [Google Scholar] [CrossRef]
  31. Zhang, Z.; Zhang, T.; Hong, J.; Zhang, H.; Yang, J. Energy Management Strategy of a Novel Parallel Electric-Hydraulic Hybrid Electric Vehicle Based on Deep Reinforcement Learning and Entropy Evaluation. J. Clean. Prod. 2023, 403, 136800. [Google Scholar] [CrossRef]
  32. Chandra Tella, V.; Alzayed, M.; Chaoui, H. A Comprehensive Review of Energy Management Strategies in Hybrid Electric Vehicles: Comparative Analysis and Challenges. IEEE Access 2024, 12, 181858–181878. [Google Scholar] [CrossRef]
  33. Jui, J.J.; Ahmad, M.A.; Molla, M.M.I.; Rashid, M.I.M. Optimal Energy Management Strategies for Hybrid Electric Vehicles: A Recent Survey of Machine Learning Approaches. J. Eng. Res. 2024, 12, 454–467. [Google Scholar] [CrossRef]
  34. Munsi, M.d.S.; Chaoui, H. Energy Management Systems for Electric Vehicles: A Comprehensive Review of Technologies and Trends. IEEE Access 2024, 12, 60385–60403. [Google Scholar] [CrossRef]
  35. Zhao, J.; Xu, X.; Dong, P.; Liu, X.; Wang, S.; Qi, H.; Liu, Y. A Novel EMS Design Framework for SPHTs Based on Instantaneous Layer, Driving Event Layer, and Driving Cycle Layer. Energy 2024, 307, 132722. [Google Scholar] [CrossRef]
  36. Pan, C.; Li, Y. Nonlinear Model Predictive Control for Integrated Thermal Mangement of Electric Vehicle Battery and Cabin Environment. In Proceedings of the 20th International Refrigeration and Air Conditioning Conference at Purdue, West Lafayette, IN, USA, 10–14 July 2022. [Google Scholar]
  37. Ensbury, T.; Picarelli, A.; Dempsey, M. High-Fidelity Multiphysics FCEV Bus Study with Detailed HVAC, Cabin, and Hydrogen Fuel Cell Models. In Proceedings of the Asian Modelica Conference 2022, Tokyo, Japan, 24–25 November 2022; pp. 53–61. [Google Scholar] [CrossRef]
  38. Li, Y.; He, H.; Khajepour, A.; Chen, Y.; Huo, W.; Wang, H. Deep Reinforcement Learning for Intelligent Energy Management Systems of Hybrid-Electric Powertrains: Recent Advances, Open Issues, and Prospects. IEEE Trans. Transport. Electrific. 2024, 10, 9877–9903. [Google Scholar] [CrossRef]
  39. Dounis, A.I. Artificial Intelligence for Energy Conservation in Buildings. Adv. Build. Energy Res. 2010, 4, 267–299. [Google Scholar] [CrossRef]
  40. Liu, L.; Guo, Y.; Yin, W.; Lei, G.; Zhu, J. Design and Optimization Technologies of Permanent Magnet Machines and Drive Systems Based on Digital Twin Model. Energies 2022, 15, 6186. [Google Scholar] [CrossRef]
  41. Zhang, Y.; Zhang, J.; Dong, L. Fuzzy Logic Regional Landslide Susceptibility Multi-Field Information Map Representation Analysis Method Constrained by Spatial Characteristics of Mining Factors in Mining Areas. Processes 2023, 11, 985. [Google Scholar] [CrossRef]
  42. Rostami, S.M.R.; Al-Shibaany, Z.; Kay, P.; Karimi, H.R. Deep Reinforcement Learning and Fuzzy Logic Controller Codesign for Energy Management of Hydrogen Fuel Cell Powered Electric Vehicles. Sci. Rep. 2024, 14, 30917. [Google Scholar] [CrossRef]
  43. Wu, M.; Ma, D.; Xiong, K.; Yuan, L. Deep Reinforcement Learning for Load Frequency Control in Isolated Microgrids: A Knowledge Aggregation Approach with Emphasis on Power Symmetry and Balance. Symmetry 2024, 16, 322. [Google Scholar] [CrossRef]
  44. Almughram, O.; Abdullah ben Slama, S.; Zafar, B.A. A Reinforcement Learning Approach for Integrating an Intelligent Home Energy Management System with a Vehicle-to-Home Unit. Appl. Sci. 2023, 13, 5539. [Google Scholar] [CrossRef]
  45. Fayyazi, M.; Sardar, P.; Thomas, S.I.; Daghigh, R.; Jamali, A.; Esch, T.; Kemper, H.; Langari, R.; Khayyam, H. Artificial Intelligence/Machine Learning in Energy Management Systems, Control, and Optimization of Hydrogen Fuel Cell Vehicles. Sustainability 2023, 15, 5249. [Google Scholar] [CrossRef]
  46. Zhou, D.; Al-Durra, A.; Gao, F.; Ravey, A.; Matraji, I.; Godoy Simões, M. Online Energy Management Strategy of Fuel Cell Hybrid Electric Vehicles Based on Data Fusion Approach. J. Power Sources 2017, 366, 278–291. [Google Scholar] [CrossRef]
  47. Beşkardeş, A.; Hameş, Y. Data-Driven-Based Fuzzy Control System Design for a Hybrid Electric Vehicle. Electr. Eng. 2023, 105, 1971–1991. [Google Scholar] [CrossRef]
  48. Liu, Y.; Zhang, Y.; Yu, H.; Nie, Z.; Liu, Y.; Chen, Z. A Novel Data-Driven Controller for Plug-in Hybrid Electric Vehicles with Improved Adaptabilities to Driving Environment. J. Clean. Prod. 2022, 334, 130250. [Google Scholar] [CrossRef]
  49. Sun, H.; Fu, Z.; Tao, F.; Zhu, L.; Si, P. Data-Driven Reinforcement-Learning-Based Hierarchical Energy Management Strategy for Fuel Cell/Battery/Ultracapacitor Hybrid Electric Vehicles. J. Power Sources 2020, 455, 227964. [Google Scholar] [CrossRef]
  50. Ahmed, I.; Rehan, M.; Basit, A.; Tufail, M.; Hong, K.-S. Neuro-Fuzzy and Networks-Based Data Driven Model for Multi-Charging Scenarios of Plug-in-Electric Vehicles. IEEE Access 2023, 11, 87150–87165. [Google Scholar] [CrossRef]
  51. Revathy, G.; Pokkuluri, K.S.; Shyamalagowri, M.; Gokulraj, S. Electric Vehicle Energy Management Using Fuzzy Logics and Machine Learning. In Optimization, Machine Learning, and Fuzzy Logic: Theory, Algorithms, and Applications; IGI Global Scientific Publishing: New York, NY, USA, 2025; pp. 245–260. ISBN 979-8-3693-7352-1. [Google Scholar]
  52. Li, J.; Liu, J.; Yang, Q.; Wang, T.; He, H.; Wang, H.; Sun, F. Reinforcement Learning Based Energy Management for Fuel Cell Hybrid Electric Vehicles: A Comprehensive Review on Decision Process Reformulation and Strategy Implementation. Renew. Sustain. Energy Rev. 2025, 213, 115450. [Google Scholar] [CrossRef]
  53. Boukoberine, M.N.; Zia, M.F.; Berghout, T.; Benbouzid, M. Reinforcement Learning-Based Energy Management for Hybrid Electric Vehicles: A Comprehensive up-to-Date Review on Methods, Challenges, and Research Gaps. Energy AI 2025, 21, 100514. [Google Scholar] [CrossRef]
  54. Zhou, J.; Zhang, Y.; Wang, C.; Zhao, W. A Deep Reinforcement Learning Based EMS for PHEV Considering Temperature Equilibrium of Battery Modules. Proc. Inst. Mech. Eng. D J. Automob. Eng. 2024, 9544070241300207. [Google Scholar] [CrossRef]
  55. Li, K.; Zhou, J.; Jia, C.; Yi, F.; Zhang, C. Energy Sources Durability Energy Management for Fuel Cell Hybrid Electric Bus Based on Deep Reinforcement Learning Considering Future Terrain Information. Int. J. Hydrogen Energy 2024, 52, 821–833. [Google Scholar] [CrossRef]
  56. Priyadarshini, A.; Yadav, S.; Patel, S. Energy Management Schemes for Hybrid Electric Vehicles. Int. Energy J. 2025, 25, 293. [Google Scholar] [CrossRef]
  57. Tong, H.; Chu, L.; Wang, Z.; Zhao, D. Adaptive Pulse-and-Glide for Synergistic Optimization of Driving Behavior and Energy Management in Hybrid Powertrain. Energy 2025, 330, 136622. [Google Scholar] [CrossRef]
  58. Yao, G.; Zhang, N.; Duan, Z.; Tian, C. Improved SARSA and DQN Algorithms for Reinforcement Learning. Theor. Comput. Sci. 2025, 1027, 115025. [Google Scholar] [CrossRef]
  59. Chapa, L.S.C. Machine Learning-Based Equivalent Consumption Minimization Strategy for HEV Torque Split Optimization. Master’s Thesis, King Abdullah University of Science and Technology, Thuwal, Saudi Arabia, 2025. [Google Scholar]
  60. Wei, Z.; Ma, Y.; Ruan, S.; Zhang, C.; Yang, N.; Xiang, C. Deep Learning Compound Architecture-Based Coordinated Control Strategy for Hybrid Electric Power System with Turboshaft Engine. Aerosp. Sci. Technol. 2025, 163, 110311. [Google Scholar] [CrossRef]
  61. Peng, H.; Yuan, D.; Jiang, K.; Xu, W. PPO-Based Fluctuation-Consumption Co-Optimized Energy Management for Wind-Disturbed Fuel Cell Hybrid Electric Flying Car. J. Power Sources 2025, 647, 237349. [Google Scholar] [CrossRef]
  62. Li, J.; Yi, Q.; Zhu, P.; Hu, J.; Yi, S. Data-Driven Co-Optimization Method of Eco-Adaptive Cruise Control for Plug-in Hybrid Electric Vehicles Considering Risky Driving Behaviors. Appl. Energy 2025, 392, 126039. [Google Scholar] [CrossRef]
  63. Jia, C.; Liu, W.; He, H.; Chau, K.T. Superior Energy Management for Fuel Cell Vehicles Guided by Improved DDPG Algorithm: Integrating Driving Intention Speed Prediction and Health-Aware Control. Appl. Energy 2025, 394, 126195. [Google Scholar] [CrossRef]
  64. Venkatesh, R.J.; Karpe, S.R.; Kokare, B.; Pradeep, K.V. Sand Cat Swarm Optimization and Attention-based Graph Convolutional Neural Network for Energy Management Analysis of Grid-connected Hybrid Wind-microturbine-photovoltaic-electric Vehicle Systems. Energy Storage 2025, 7, e70187. [Google Scholar] [CrossRef]
  65. Montazeri-Gh, M.; Mahmoodi-k, M. Development a New Power Management Strategy for Power Split Hybrid Electric Vehicles. Transp. Res. Part D Transp. Environ. 2015, 37, 79–96. [Google Scholar] [CrossRef]
  66. Shi, Q.; Qiu, D.; He, L.; Wu, B.; Li, Y. Support Vector Machine–Based Driving Cycle Recognition for Dynamic Equivalent Fuel Consumption Minimization Strategy with Hybrid Electric Vehicle. Adv. Mech. Eng. 2018, 10, 1687814018811020. [Google Scholar] [CrossRef]
  67. Wang, H.; Arjmandzadeh, Z.; Ye, Y.; Zhang, J.; Xu, B. FlexNet: A Warm Start Method for Deep Reinforcement Learning in Hybrid Electric Vehicle Energy Management Applications. Energy 2024, 288, 129773. [Google Scholar] [CrossRef]
  68. Zhuang, W.; Zhang, X.; Li, D.; Wang, L.; Yin, G. Mode Shift Map Design and Integrated Energy Management Control of a Multi-Mode Hybrid Electric Vehicle. Appl. Energy 2017, 204, 476–488. [Google Scholar] [CrossRef]
  69. Chen, Z.; Gu, H.; Shen, S.; Shen, J. Energy Management Strategy for Power-Split Plug-in Hybrid Electric Vehicle Based on MPC and Double Q-Learning. Energy 2022, 245, 123182. [Google Scholar] [CrossRef]
  70. Shen, S.; Gao, S.; Liu, Y.; Zhang, Y.; Shen, J.; Chen, Z.; Lei, Z. Real-Time Energy Management for Plug-in Hybrid Electric Vehicles via Incorporating Double-Delay Q-Learning and Model Prediction Control. IEEE Access 2022, 10, 131076–131089. [Google Scholar] [CrossRef]
  71. Xie, S.; Hu, X.; Qi, S.; Lang, K. An Artificial Neural Network-Enhanced Energy Management Strategy for Plug-in Hybrid Electric Vehicles. Energy 2018, 163, 837–848. [Google Scholar] [CrossRef]
  72. Mazouzi, A.; Hadroug, N.; Alayed, W.; Hafaifa, A.; Iratni, A.; Kouzou, A. Comprehensive Optimization of Fuzzy Logic-Based Energy Management System for Fuel-Cell Hybrid Electric Vehicle Using Genetic Algorithm. Int. J. Hydrogen Energy 2024, 81, 889–905. [Google Scholar] [CrossRef]
  73. Fan, L.; Wang, Y.; Wei, H.; Zhang, Y.; Zheng, P.; Huang, T.; Li, W. A GA-Based Online Real-Time Optimized Energy Management Strategy for Plug-in Hybrid Electric Vehicles. Energy 2022, 241, 122811. [Google Scholar] [CrossRef]
  74. Zhang, Q.; Fu, X. A Neural Network Fuzzy Energy Management Strategy for Hybrid Electric Vehicles Based on Driving Cycle Recognition. Appl. Sci. 2020, 10, 696. [Google Scholar] [CrossRef]
  75. He, H.; Huang, R.; Meng, X.; Zhao, X.; Wang, Y.; Li, M. A Novel Hierarchical Predictive Energy Management Strategy for Plug-in Hybrid Electric Bus Combined with Deep Deterministic Policy Gradient. J. Energy Storage 2022, 52, 104787. [Google Scholar] [CrossRef]
  76. Zhang, C.; Li, T.; Cui, W.; Cui, N. Proximal Policy Optimization Based Intelligent Energy Management for Plug-In Hybrid Electric Bus Considering Battery Thermal Characteristic. World Electr. Veh. J. 2023, 14, 47. [Google Scholar] [CrossRef]
  77. Wang, F.; Hong, Y.; Zhao, X. Research and Comparative Analysis of Energy Management Strategies for Hybrid Electric Vehicles: A Review. Energies 2025, 18, 2873. [Google Scholar] [CrossRef]
  78. Uysal, L.K.; Altin, N. Modelling and Fuzzy Logic Based Control Scheme for a Series Hybrid Electric Vehicle. Researchgate 2023, 7, 106–120. [Google Scholar] [CrossRef]
  79. Johanyák, Z.C. A Simple Fuzzy Logic Based Power Control for a Series Hybrid Electric Vehicle. In Proceedings of the 2015 IEEE European Modelling Symposium (EMS), Madrid, Spain, 6–8 October 2015; pp. 207–212. [Google Scholar] [CrossRef]
  80. Phan, D.; Bab-Hadiashar, A.; Fayyazi, M.; Hoseinnezhad, R.; Jazar, R.N.; Khayyam, H. Interval Type 2 Fuzzy Logic Control for Energy Management of Hybrid Electric Autonomous Vehicles. IEEE Trans. Intell. Veh. 2021, 6, 210–220. [Google Scholar] [CrossRef]
  81. Shakeel, F.M.; Malik, O.P. ANFIS Based Energy Management System for V2G Integrated Micro-Grids. Electr. Power Compon. Syst. 2022, 50, 584–599. [Google Scholar] [CrossRef]
  82. Vignesh, R.; Ashok, B. Intelligent Energy Management through Neuro-Fuzzy Based Adaptive ECMS Approach for an Optimal Battery Utilization in Plugin Parallel Hybrid Electric Vehicle. Energy Convers. Manag. 2023, 280, 116792. [Google Scholar] [CrossRef]
  83. Suhail, M.; Akhtar, I.; Kirmani, S.; Jameel, M. Development of Progressive Fuzzy Logic and ANFIS Control for Energy Management of Plug-In Hybrid Electric Vehicle. IEEE Access 2021, 9, 62219–62231. [Google Scholar] [CrossRef]
  84. Fu, X.; Fan, Z.; Jiang, S.; Fly, A.; Chen, R.; Han, Y.; Xie, A. Durability Oriented Fuel Cell Electric Vehicle Energy Management Strategies Based on Vehicle Drive Cycles. Energies 2024, 17, 5721. [Google Scholar] [CrossRef]
  85. Qin, Y.; Li, Z.; Geng, G.; Wang, B. Approximate Globally Optimal Energy Management Strategy for Fuel Cell Hybrid Mining Trucks Based on Rule-Interposing Balance Cost Minimization. Sustainability 2025, 17, 1412. [Google Scholar] [CrossRef]
  86. Ghaderi, R.; Kandidayeni, M.; Soleymani, M.; Boulon, L.; Trovão, J.P.F. Online Health-Conscious Energy Management Strategy for a Hybrid Multi-Stack Fuel Cell Vehicle Based on Game Theory. IEEE Trans. Veh. Technol. 2022, 71, 5704–5714. [Google Scholar] [CrossRef]
  87. He, H.; Shou, Y.; Song, J. An Improved A-ECMS Energy Management for Plug-in Hybrid Electric Vehicles Considering Transient Characteristics of Engine. Energy Rep. 2023, 10, 2006–2016. [Google Scholar] [CrossRef]
  88. Padmarajan, B.V.; McGordon, A.; Jennings, P.A. Blended Rule-Based Energy Management for PHEV: System Structure and Strategy. IEEE Trans. Veh. Technol. 2016, 65, 8757–8762. [Google Scholar] [CrossRef]
  89. Sun, X.; Cao, Y.; Jin, Z.; Tian, X.; Xue, M. An Adaptive ECMS Based on Traffic Information for Plug-in Hybrid Electric Buses. IEEE Trans. Ind. Electron. 2023, 70, 9248–9259. [Google Scholar] [CrossRef]
  90. Zhang, J.; Chu, L.; Wang, X.; Guo, C.; Fu, Z.; Zhao, D. Optimal Energy Management Strategy for Plug-in Hybrid Electric Vehicles Based on a Combined Clustering Analysis. Appl. Math. Modell. 2021, 94, 49–67. [Google Scholar] [CrossRef]
  91. Liu, T.; Yu, H.; Guo, H.; Qin, Y.; Zou, Y. Online Energy Management for Multimode Plug-In Hybrid Electric Vehicles. IEEE Trans. Ind. Informat. 2019, 15, 4352–4361. [Google Scholar] [CrossRef]
  92. Li, S.; Hu, M.; Gong, C.; Zhan, S.; Qin, D. Energy Management Strategy for Hybrid Electric Vehicle Based on Driving Condition Identification Using KGA-Means. Energies 2018, 11, 1531. [Google Scholar] [CrossRef]
  93. Gómez-Barroso, Á.; Makazaga, I.V.; Zulueta, E. Optimizing Hybrid Electric Vehicle Performance: A Detailed Overview of Energy Management Strategies. Energies 2024, 18, 10. [Google Scholar] [CrossRef]
  94. Ji, Y.; Lee, H. Event-Based Anomaly Detection Using a One-Class SVM for a Hybrid Electric Vehicle. IEEE Trans. Veh. Technol. 2022, 71, 6032–6043. [Google Scholar] [CrossRef]
  95. Lu, Z.; Tian, H.; Sun, Y.; Li, R.; Tian, G. Neural Network Energy Management Strategy with Optimal Input Features for Plug-in Hybrid Electric Vehicles. Energy 2023, 285, 129399. [Google Scholar] [CrossRef]
  96. Millo, F.; Rolando, L.; Tresca, L.; Pulvirenti, L. Development of a Neural Network-Based Energy Management System for a Plug-in Hybrid Electric Vehicle. Transp. Eng. 2023, 11, 100156. [Google Scholar] [CrossRef]
  97. Xie, S.; Qi, S.; Lang, K. A Data-Driven Power Management Strategy for Plug-In Hybrid Electric Vehicles Including Optimal Battery Depth of Discharging. IEEE Trans. Ind. Informat. 2020, 16, 3387–3396. [Google Scholar] [CrossRef]
  98. Bo, Z.; Chen, H.; Zhu, S.; Li, C.; Wang, Y.; Du, Y.; Zhu, J.; Jay Tsai Chien, C. An Optimization-Based Power-Following Energy Management Strategy for Hydrogen Fuel Cell Vehicles. World Electr. Veh. J. 2024, 15, 564. [Google Scholar] [CrossRef]
  99. Huang, R.; He, H. A Novel Data-Driven Energy Management Strategy for Fuel Cell Hybrid Electric Bus Based on Improved Twin Delayed Deep Deterministic Policy Gradient Algorithm. Int. J. Hydrogen Energy 2024, 52, 782–798. [Google Scholar] [CrossRef]
  100. Zhang, H.; Chen, B.; Lei, N.; Li, B.; Li, R.; Wang, Z. Integrated Thermal and Energy Management of Connected Hybrid Electric Vehicles Using Deep Reinforcement Learning. IEEE Trans. Transport. Electrific. 2024, 10, 4594–4603. [Google Scholar] [CrossRef]
  101. Wu, J.; He, H.; Peng, J.; Li, Y.; Li, Z. Continuous Reinforcement Learning of Energy Management with Deep Q Network for a Power Split Hybrid Electric Bus. Appl. Energy 2018, 222, 799–811. [Google Scholar] [CrossRef]
  102. Liu, T.; Hu, X.; Hu, W.; Zou, Y. A Heuristic Planning Reinforcement Learning-Based Energy Management for Power-Split Plug-in Hybrid Electric Vehicles. IEEE Trans. Ind. Informat. 2019, 15, 6436–6445. [Google Scholar] [CrossRef]
  103. Yazar, O.; Coskun, S.; Zhang, F. Adaptive Learning-Based Energy Management for HEVs Using Soft Actor-Critic DRL Algorithm. Mechatron Technol. 2024, 1, 1–21. [Google Scholar] [CrossRef]
  104. Tang, X.; Shi, L.; Zhang, Y.; Li, B.; Xu, S.; Song, Z. Degradation Adaptive Energy Management Strategy for FCHEV Based on the Rule-DDPG Method: Tailored to the Current SOH of the Powertrain. IEEE Trans. Transport. Electrific. 2025, 11, 993–1003. [Google Scholar] [CrossRef]
  105. Deng, K.; Liu, Y.; Hai, D.; Peng, H.; Löwenstein, L.; Pischinger, S.; Hameyer, K. Deep Reinforcement Learning Based Energy Management Strategy of Fuel Cell Hybrid Railway Vehicles Considering Fuel Cell Aging. Energy Convers. Manag. 2022, 251, 115030. [Google Scholar] [CrossRef]
  106. Lv, H.; Qi, C.; Song, C.; Song, S.; Zhang, R.; Xiao, F. Energy Management of Hybrid Electric Vehicles Based on Inverse Reinforcement Learning. Energy Rep. 2022, 8, 5215–5224. [Google Scholar] [CrossRef]
  107. Yavuz, M.; Kivanç, Ö.C. Optimization of a Cluster-Based Energy Management System Using Deep Reinforcement Learning without Affecting Prosumer Comfort: V2X Technologies and Peer-to-Peer Energy Trading. IEEE Access 2024, 12, 31551–31575. [Google Scholar] [CrossRef]
  108. Sun, H.; Li, J.; Cheng, C.; Shi, S.; Wang, J.; Lin, J.; Liu, Y. Health- and Behavior-Aware Energy Management Strategy for Fuel Cell Hybrid Electric Vehicles Based on Parallel Deep Deterministic Policy Gradient Learning. Eng. Appl. Artif. Intell. 2025, 158, 111311. [Google Scholar] [CrossRef]
109. Qi, C.; Zhu, Y.; Song, C.; Yan, G.; Xiao, F.; Da, W.; Zhang, X.; Cao, J.; Song, S. Hierarchical Reinforcement Learning Based Energy Management Strategy for Hybrid Electric Vehicle. Energy 2022, 238, 121703. [Google Scholar] [CrossRef]
  110. Yang, X.; Jiang, C.; Zhou, M.; Hu, H. Bi-Level Energy Management Strategy for Power-Split Plug-in Hybrid Electric Vehicles: A Reinforcement Learning Approach for Prediction and Control. Front. Energy Res. 2023, 11, 1153390. [Google Scholar] [CrossRef]
  111. Lei, S.; Li, Y.; Liu, M.; Li, W.; Zhao, T.; Hou, S.; Xu, L. Hierarchical Energy Management and Energy Saving Potential Analysis for Fuel Cell Hybrid Electric Tractors. Energies 2025, 18, 247. [Google Scholar] [CrossRef]
  112. Pardhasaradhi, B.; Shilaja, C. A Deep Reinforced Markov Action Learning Based Hybridized Energy Management Strategy for Electric Vehicle Application. J. Energy Storage 2023, 74, 109373. [Google Scholar] [CrossRef]
  113. Liu, C.; Murphey, Y.L. Power Management for Plug-in Hybrid Electric Vehicles Using Reinforcement Learning with Trip Information. In Proceedings of the 2014 IEEE Transportation Electrification Conference and Expo (ITEC), Dearborn, MI, USA, 15–18 June 2014; pp. 1–6. [Google Scholar] [CrossRef]
  114. Lin, Y.; Chu, L.; Hu, J.; Hou, Z.; Li, J.; Jiang, J.; Zhang, Y. Progress and Summary of Reinforcement Learning on Energy Management of MPS-EV. Heliyon 2024, 10, e23014. [Google Scholar] [CrossRef] [PubMed]
  115. Liu, Z.E.; Zhou, Q.; Li, Y.; Shuai, S.; Xu, H. Safe Deep Reinforcement Learning-Based Constrained Optimal Control Scheme for HEV Energy Management. IEEE Trans. Transport. Electrific. 2023, 9, 4278–4293. [Google Scholar] [CrossRef]
  116. Zhou, Q.; Zhao, D.; Shuai, B.; Li, Y.; Williams, H.; Xu, H. Knowledge Implementation and Transfer with an Adaptive Learning Network for Real-Time Power Management of the Plug-in Hybrid Vehicle. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 5298–5308. [Google Scholar] [CrossRef]
  117. Li, Y.; He, H.; Peng, J.; Zhang, H. Power Management for a Plug-in Hybrid Electric Vehicle Based on Reinforcement Learning with Continuous State and Action Spaces. Energy Procedia 2017, 142, 2270–2275. [Google Scholar] [CrossRef]
  118. Sun, X.; Fu, J.; Yang, H.; Xie, M.; Liu, J. An Energy Management Strategy for Plug-in Hybrid Electric Vehicles Based on Deep Learning and Improved Model Predictive Control. Energy 2023, 269, 126772. [Google Scholar] [CrossRef]
  119. East, S.; Cannon, M. Energy Management in Plug-In Hybrid Electric Vehicles: Convex Optimization Algorithms for Model Predictive Control. IEEE Trans. Control Syst. Technol. 2020, 28, 2191–2203. [Google Scholar] [CrossRef]
  120. Amirfarhangi Bonab, S.; Emadi, A. Fuel-Optimal Energy Management Strategy for a Power-Split Powertrain via Convex Optimization. IEEE Access 2020, 8, 30854–30862. [Google Scholar] [CrossRef]
  121. Wang, S.; Huang, X.; López, J.M.; Xu, X.; Dong, P. Fuzzy Adaptive-Equivalent Consumption Minimization Strategy for a Parallel Hybrid Electric Vehicle. IEEE Access 2019, 7, 133290–133303. [Google Scholar] [CrossRef]
  122. Guo, R.; Xue, X.; Sun, Z.; Hong, Z. Clustered Energy Management Strategy of Plug-In Hybrid Electric Logistics Vehicle Based on Gaussian Mixture Model and Stochastic Dynamic Programming. IEEE Trans. Transport. Electrific. 2023, 9, 3177–3191. [Google Scholar] [CrossRef]
  123. Liu, T.; Wang, B.; Yang, C. Online Markov Chain-Based Energy Management for a Hybrid Tracked Vehicle with Speedy Q-Learning. Energy 2018, 160, 544–555. [Google Scholar] [CrossRef]
  124. Tao, F.; Liu, W.; Zhu, L.; Wang, J. Driver-Personality-Involved Car-Following Energy Management Strategies for FCHEVs. Int. J. Hydrogen Energy 2025, 138, 445–458. [Google Scholar] [CrossRef]
  125. Satheesh Kumar, P.; Pala Prasad Reddy, M.; Muqthiar Ali, S.; Devaraju, T. Optimizing Energy Management Strategy for Fuel Cell Hybrid Electric Vehicles: A Hybrid FBPINN-MGO Approach. Renew. Energy 2025, 253, 123159. [Google Scholar] [CrossRef]
  126. Yan, F.; Wang, J.; Du, C.; Hua, M. Multi-Objective Energy Management Strategy for Hybrid Electric Vehicles Based on TD3 with Non-Parametric Reward Function. Energies 2023, 16, 74. [Google Scholar] [CrossRef]
  127. Wang, H.; Chang, C.; Pan, Z.; Zhai, X.; Liu, H.; Zhang, S.; Liu, Y. Optimization of Energy Management Strategies for Multi-Mode Hybrid Electric Vehicles Driven by Travelling Road Condition Data. Sci. Rep. 2025, 15, 12684. [Google Scholar] [CrossRef]
  128. Liu, J.; Jin, T.; Liu, L.; Chen, Y.; Yuan, K. Multi-Objective Optimization of a Hybrid ESS Based on Optimal Energy Management Strategy for LHDs. Sustainability 2017, 9, 1874. [Google Scholar] [CrossRef]
  129. Zhu, L.; Tao, F.; Fu, Z.; Sun, H.; Ji, B.; Chen, Q. Multiobjective Optimization of Safety, Comfort, Fuel Economy, and Power Sources Durability for FCHEV in Car-Following Scenarios. IEEE Trans. Transport. Electrific. 2023, 9, 1797–1808. [Google Scholar] [CrossRef]
  130. Fallah, S.N.; Deo, R.C.; Shojafar, M.; Conti, M.; Shamshirband, S. Computational Intelligence Approaches for Energy Load Forecasting in Smart Energy Management Grids: State of the Art, Future Challenges, and Research Directions. Energies 2018, 11, 596. [Google Scholar] [CrossRef]
  131. Zhou, X.; Du, H.; Xue, S.; Ma, Z. Recent Advances in Data Mining and Machine Learning for Enhanced Building Energy Management. Energy 2024, 307, 132636. [Google Scholar] [CrossRef]
  132. Yan, R.; Zhou, Z.; Shang, Z.; Wang, Z.; Hu, C.; Li, Y.; Yang, Y.; Chen, X.; Gao, R.X. Knowledge Driven Machine Learning towards Interpretable Intelligent Prognostics and Health Management: Review and Case Study. Chin. J. Mech. Eng. 2025, 38, 5. [Google Scholar] [CrossRef]
  133. Mariano-Hernández, D.; Hernández-Callejo, L.; Zorita-Lamadrid, A.; Duque-Pérez, O.; Santos García, F. A Review of Strategies for Building Energy Management System: Model Predictive Control, Demand Side Management, Optimization, and Fault Detect & Diagnosis. J. Build. Eng. 2021, 33, 101692. [Google Scholar] [CrossRef]
  134. Lin, Y.; Tang, J.; Guo, J.; Wu, S.; Li, Z. Advancing AI-Enabled Techniques in Energy System Modeling: A Review of Data-Driven, Mechanism-Driven, and Hybrid Modeling Approaches. Energies 2025, 18, 845. [Google Scholar] [CrossRef]
135. Zhao, J.; Burke, A.F. Artificial Intelligence-Driven Electric Vehicle Battery Lifetime Diagnostics; IntechOpen: London, UK, 2025; ISBN 978-1-83634-865-8. [Google Scholar] [CrossRef]
  136. Aldossary, M. Enhancing Urban Electric Vehicle (EV) Fleet Management Efficiency in Smart Cities: A Predictive Hybrid Deep Learning Framework. Smart Cities 2024, 7, 142. [Google Scholar] [CrossRef]
  137. Zhao, J.; Qu, X.; Wu, Y.; Fowler, M.; Burke, A.F. Artificial Intelligence-Driven Real-World Battery Diagnostics. Energy AI 2024, 18, 100419. [Google Scholar] [CrossRef]
  138. Guo, S.; Zhao, C. iEVEM: Big Data-Empowered Framework for Intelligent Electric Vehicle Energy Management. Systems 2025, 13, 118. [Google Scholar] [CrossRef]
  139. Wei, Z.; Liu, K.; Liu, X.; Li, Y.; Du, L.; Gao, F. Multilevel Data-Driven Battery Management: From Internal Sensing to Big Data Utilization. IEEE Trans. Transport. Electrific. 2023, 9, 4805–4823. [Google Scholar] [CrossRef]
  140. Li, X.; Zhou, Z.; Wei, C.; Gao, X.; Zhang, Y. Multi-Objective Optimization of Hybrid Electric Vehicles Energy Management Using Multi-Agent Deep Reinforcement Learning Framework. Energy AI 2025, 20, 100491. [Google Scholar] [CrossRef]
  141. Guo, X.; Liu, T.; Tang, B.; Tang, X.; Zhang, J.; Tan, W.; Jin, S. Transfer Deep Reinforcement Learning-Enabled Energy Management Strategy for Hybrid Tracked Vehicle. IEEE Access 2020, 8, 165837–165848. [Google Scholar] [CrossRef]
  142. Xu, B.; Hu, X.; Tang, X.; Lin, X.; Li, H.; Rathod, D.; Filipi, Z. Ensemble Reinforcement Learning-Based Supervisory Control of Hybrid Electric Vehicle for Fuel Economy Improvement. IEEE Trans. Transport. Electrific. 2020, 6, 717–727. [Google Scholar] [CrossRef]
  143. Lian, R.; Peng, J.; Wu, Y.; Tan, H.; Zhang, H. Rule-Interposing Deep Reinforcement Learning Based Energy Management Strategy for Power-Split Hybrid Electric Vehicle. Energy 2020, 197, 117297. [Google Scholar] [CrossRef]
  144. Lian, R.; Tan, H.; Peng, J.; Li, Q.; Wu, Y. Cross-Type Transfer for Deep Reinforcement Learning Based Hybrid Electric Vehicle Energy Management. IEEE Trans. Veh. Technol. 2020, 69, 8367–8380. [Google Scholar] [CrossRef]
  145. Du, G.; Zou, Y.; Zhang, X.; Guo, L.; Guo, N. Heuristic Energy Management Strategy of Hybrid Electric Vehicle Based on Deep Reinforcement Learning with Accelerated Gradient Optimization. IEEE Trans. Transport. Electrific. 2021, 7, 2194–2208. [Google Scholar] [CrossRef]
  146. Zhang, D.; Li, J.; Guo, N.; Liu, Y.; Shen, S.; Wei, F.; Chen, Z.; Zheng, J. Adaptive Deep Reinforcement Learning Energy Management for Hybrid Electric Vehicles Considering Driving Condition Recognition. Energy 2024, 313, 134086. [Google Scholar] [CrossRef]
  147. Wang, Y.; Wu, J.; He, H.; Wei, Z.; Sun, F. Data-Driven Energy Management for Electric Vehicles Using Offline Reinforcement Learning. Nat. Commun. 2025, 16, 2835. [Google Scholar] [CrossRef]
  148. Wang, Y.; Wu, Y.; Tang, Y.; Li, Q.; He, H. Cooperative Energy Management and Eco-Driving of Plug-in Hybrid Electric Vehicle via Multi-Agent Reinforcement Learning. Appl. Energy 2023, 332, 120563. [Google Scholar] [CrossRef]
  149. Liu, T.; Tang, X.; Chen, J.; Wang, H.; Tan, W.; Yang, Y. Transferred Energy Management Strategies for Hybrid Electric Vehicles Based on Driving Conditions Recognition. In Proceedings of the 2020 IEEE Vehicle Power and Propulsion Conference (VPPC), Virtual, 18 November–16 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
  150. Tresca, L.; Pulvirenti, L.; Rolando, L. A Cutting-Edge Energy Management System for a Hybrid Electric Vehicle Relying on Soft Actor–Critic Deep Reinforcement Learning. Transp. Eng. 2025, 19, 100308. [Google Scholar] [CrossRef]
  151. Xu, B.; Rathod, D.; Zhang, D.; Yebi, A.; Zhang, X.; Li, X.; Filipi, Z. Parametric Study on Reinforcement Learning Optimized Energy Management Strategy for a Hybrid Electric Vehicle. Appl. Energy 2020, 259, 114200. [Google Scholar] [CrossRef]
  152. Lee, H.; Cha, S.W. Energy Management Strategy of Fuel Cell Electric Vehicles Using Model-Based Reinforcement Learning with Data-Driven Model Update. IEEE Access 2021, 9, 59244–59254. [Google Scholar] [CrossRef]
  153. Lee, H.; Kang, C.; Park, Y.-I.; Kim, N.; Cha, S.W. Online Data-Driven Energy Management of a Hybrid Electric Vehicle Using Model-Based Q-Learning. IEEE Access 2020, 8, 84444–84454. [Google Scholar] [CrossRef]
154. Dai, L.; Hu, P.; Wang, T.; Bian, G.; Liu, H. Optimal Rule-Interposing Reinforcement Learning-Based Energy Management of Series–Parallel-Connected Hybrid Electric Vehicles. Sustainability 2024, 16, 6848. [Google Scholar] [CrossRef]
  155. Guo, Z.; Guo, J.; Chu, L.; Guo, C.; Hu, J.; Hou, Z. A Hierarchical Energy Management Strategy for 4WD Plug-In Hybrid Electric Vehicles. Machines 2022, 10, 947. [Google Scholar] [CrossRef]
  156. Wu, J.; Zhang, Y.; Ruan, J.; Liang, Z.; Liu, K. Rule and Optimization Combined Real-Time Energy Management Strategy for Minimizing Cost of Fuel Cell Hybrid Electric Vehicles. Energy 2023, 285, 129442. [Google Scholar] [CrossRef]
  157. Laflamme, C.; Doppler, J.; Palvolgyi, B.; Dominka, S.; Viharos, Z.J.; Haeussler, S. Explainable Reinforcement Learning for Powertrain Control Engineering. Eng. Appl. Artif. Intell. 2025, 146, 110135. [Google Scholar] [CrossRef]
  158. Zeng, T.; Zhang, C.; Zhang, Y.; Deng, C.; Hao, D.; Zhu, Z.; Ran, H.; Cao, D. Optimization-Oriented Adaptive Equivalent Consumption Minimization Strategy Based on Short-Term Demand Power Prediction for Fuel Cell Hybrid Vehicle. Energy 2021, 227, 120305. [Google Scholar] [CrossRef]
  159. Liu, Y.; Sun, Z.; Wang, Y.; Li, M.; Chen, Z. Capacity Configuration of Fuel Cell Hybrid Vehicles Using Enhanced Multi-Objective Particle Swarm Optimization with Competitive Mechanism. Energy Convers. Manag. 2024, 321, 119039. [Google Scholar] [CrossRef]
  160. Jia, C.; He, H.; Zhou, J.; Li, J.; Wei, Z.; Li, K. Learning-Based Model Predictive Energy Management for Fuel Cell Hybrid Electric Bus with Health-Aware Control. Appl. Energy 2024, 355, 122228. [Google Scholar] [CrossRef]
  161. Janek, J.; Zeier, W.G. Challenges in Speeding up Solid-State Battery Development. Nat. Energy 2023, 8, 230–240. [Google Scholar] [CrossRef]
  162. Joshi, A.; Mishra, D.K.; Singh, R.; Zhang, J.; Ding, Y. A Comprehensive Review of Solid-State Batteries. Appl. Energy 2025, 386, 125546. [Google Scholar] [CrossRef]
Figure 1. Classification of common EMSs for HEVs.
Figure 2. Classification framework of EMSs for HEVs in the era of artificial intelligence.
Figure 3. Flow diagram of the Energy Management Strategy for Hybrid Electric Vehicles based on the knowledge-driven approach.
Figure 4. Schematic diagram of the Energy Management Strategy for Hybrid Electric Vehicles based on deterministic rules.
Figure 5. Schematic diagram of the energy management system based on fuzzy logic rules.
Figure 6. Flow diagram of the Energy Management Strategy for Hybrid Electric Vehicles based on the data-driven approach.
Figure 7. Application framework of supervised and unsupervised learning in Hybrid Electric Vehicle energy management.
Figure 8. Flowchart of three typical data-processing configurations using K-means clustering.
Figure 9. Flowchart of the Random Forest algorithm.
Figure 10. Flow diagram of the Energy Management Strategy for Hybrid Electric Vehicles based on reinforcement learning.
Figure 11. Block diagram of the Q-Learning algorithm.
Figure 12. Block diagram of the DQL algorithm.
Figure 13. Block diagram of energy management for plug-in Hybrid Electric Vehicles based on the DQL–MPC algorithm.
Figure 14. Mapping from expert decision tree to FlexNet (left: decision tree; right: FlexNet neural network).
Figure 15. Block diagram of the hierarchical architecture for the energy management system of Hybrid Electric Vehicles.
Figure 16. Schematic diagram of the online implementation of the designed ANFIS–AECMS.
Figure 17. Flowchart of the PSO–SVM algorithm.
Figure 18. Flowchart of the supervised learning algorithm trained by DP.
Figure 19. Block diagram of the EMS integrating the TD3 algorithm with a prioritized experience replay mechanism.
Figure 20. Flowchart of the inverse reinforcement learning process.
Figure 21. Training diagram of the Actor–Critic framework.
Figure 22. The overall control framework of the LSTM–IMPC-based EMS.
Figure 23. Flowchart of the control strategy based on SQL.
Table 1. Common rule-based EMSs and their characteristics [16,20].

| Strategy Type | Specific Method | Main Features | Application Scenarios |
|---|---|---|---|
| Deterministic Strategy | Thermostat Control | Defines engine start–stop based on SOC thresholds; simple logic, easy to implement | Simple parallel hybrid powertrains |
| Deterministic Strategy | Power Follower | Engine output follows vehicle power demand; electric motor compensates promptly | Various hybrid architectures |
| Deterministic Strategy | Finite-State Machine | Vehicle operation divided into discrete states; clear logic based on predefined rules | Well-defined state classification scenarios |
| Fuzzy Logic Strategy | Conventional Fuzzy Logic Control | Uses fuzzy rules to handle input uncertainty; enhances adaptability to nonlinear systems | Nonlinear systems under complex conditions |
| Fuzzy Logic Strategy | Adaptive Fuzzy Logic Control | Dynamically adjusts fuzzy rules and parameters to improve adaptability | Varying or multi-terminal operational environments |
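To make the deterministic strategies in Table 1 concrete, the following minimal Python sketch combines thermostat engine start–stop (with a hysteresis band) and a power-follower split. It is an illustration only, not a production controller: the SOC band (0.4–0.7) and the 60 kW engine limit are assumed values chosen for the example.

```python
# Minimal sketch of the two deterministic strategies in Table 1: thermostat
# engine start-stop plus a power-follower split. All thresholds and the
# 60 kW engine limit are illustrative assumptions, not values from the review.

def thermostat_engine_on(soc: float, engine_on: bool,
                         soc_low: float = 0.4, soc_high: float = 0.7) -> bool:
    """Hysteresis band: start the engine below soc_low, stop it above soc_high."""
    if soc < soc_low:
        return True
    if soc > soc_high:
        return False
    return engine_on  # inside the band, keep the previous state


def power_follower_split(p_demand_kw: float, engine_on: bool,
                         p_engine_max_kw: float = 60.0):
    """Engine follows the demanded power; the motor covers the remainder."""
    p_engine = min(max(p_demand_kw, 0.0), p_engine_max_kw) if engine_on else 0.0
    p_motor = p_demand_kw - p_engine  # negative values mean regeneration/charging
    return p_engine, p_motor


if __name__ == "__main__":
    engine_on = False
    for soc, p_dem in [(0.35, 25.0), (0.45, 40.0), (0.75, -10.0)]:
        engine_on = thermostat_engine_on(soc, engine_on)
        print(soc, engine_on, power_follower_split(p_dem, engine_on))
```

The hysteresis band is the main practical refinement over a single-threshold thermostat rule: it prevents rapid engine cycling when the SOC hovers near the switching point.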
Table 2. Common optimization-based EMSs and their characteristics [20,26].

| Strategy Category | Specific Method | Main Features | Application Scenarios |
|---|---|---|---|
| Global Optimization | DP | Solves global optimality via multi-stage recursion; computationally intensive; often used offline | Strategy benchmarking and offline design |
| Global Optimization | PMP | Based on optimal control theory; more efficient than DP; requires high model accuracy | High-precision control requiring real-time feasibility |
| Global Optimization | Genetic Algorithm (GA) | Bio-inspired evolutionary search in complex nonlinear spaces; robust and flexible; computationally expensive | Optimization of non-convex complex problems |
| Instantaneous Optimization | ECMS | Minimizes instantaneous fuel-equivalent cost; achieves near-optimal power split in real time | Widely used in commercial hybrid powertrains |
| Instantaneous Optimization | MPC | Utilizes system prediction models and receding horizon optimization; balances present and future states; strong robustness | Real-time control of complex dynamic systems |
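The ECMS entry in Table 2 reduces, at each time step, to minimizing an equivalent consumption H = P_fuel + s·P_batt over the feasible power split. The sketch below illustrates that one-step minimization by grid search; the quadratic fuel map and the equivalence factor s = 2.5 are placeholder assumptions for the example, not values from any cited study.

```python
# Hedged sketch of the ECMS idea from Table 2: at each instant, pick the
# battery power that minimizes fuel power plus an equivalence-factor-weighted
# electrical power. The fuel model (with a small idle loss) and s = 2.5 are
# toy assumptions for illustration only.

import numpy as np

def fuel_power_kw(p_engine_kw: np.ndarray) -> np.ndarray:
    """Toy convex fuel-power map (kW of fuel energy per kW mechanical)."""
    return 5.0 + 1.8 * p_engine_kw + 0.01 * p_engine_kw**2

def ecms_split(p_demand_kw: float, s: float = 2.5,
               p_batt_max_kw: float = 30.0):
    """Grid search over battery power; return the split with the lowest
    equivalent consumption H = P_fuel + s * P_batt."""
    p_batt = np.linspace(-p_batt_max_kw, p_batt_max_kw, 121)
    p_eng = np.clip(p_demand_kw - p_batt, 0.0, None)
    cost = fuel_power_kw(p_eng) + s * p_batt
    i = int(np.argmin(cost))
    return float(p_eng[i]), float(p_batt[i])

print(ecms_split(35.0))  # (engine kW, battery kW) minimizing equivalent cost
```

Adaptive ECMS variants, including the AI-tuned ones surveyed later in Table 6, differ mainly in how the factor s is updated online rather than in this core minimization.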
Table 3. Commonly used driving cycles for EMS evaluation [34,35].

| Driving Cycle | Key Features | Typical Application |
|---|---|---|
| Worldwide Harmonized Light Vehicles Test Cycle (WLTC) | Global city + highway mix; 30 min duration; up to 131 km/h | EU regulation; main HEV/EV testing benchmark |
| Urban Dynamometer Driving Schedule (UDDS) | Urban driving with frequent stops; avg. 31.5 km/h | U.S. EPA standard for HEV EMS under stop-and-go traffic |
| US06 Supplemental Federal Test Procedure (US06) | Aggressive high-speed, high-acceleration profile | Tests EMS robustness under dynamic conditions |
| New European Driving Cycle (NEDC) | Earlier EU cycle; low-speed urban + steady highway | Legacy benchmark, replaced by WLTP |
| China Light-Duty Vehicle Test Cycle (CLTC) | Reflects typical Chinese city/suburban traffic | National regulation standard in China (GB/T) |
| Japan 10-15 Mode | Low-speed, start–stop pattern for short trips | Japanese legacy regulation cycle |
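The cycles in Table 3 are specified as speed–time traces, so EMS evaluation first converts them into a demanded-power profile via standard longitudinal dynamics, P = (m·a + F_roll + F_aero)·v. The sketch below shows that conversion on a toy trapezoidal speed profile; the vehicle parameters (mass, rolling resistance, drag) are illustrative assumptions, not values from the review.

```python
# Converting a sampled speed trace into wheel power demand (kW) using
# P = (m*a + rolling resistance + aerodynamic drag) * v. Parameters below
# describe an assumed compact HEV for illustration only.

import numpy as np

M, G, CR, RHO, CD, A = 1500.0, 9.81, 0.010, 1.225, 0.30, 2.2

def demand_power_kw(v_mps: np.ndarray, dt: float = 1.0) -> np.ndarray:
    """Wheel power implied by a speed trace sampled every dt seconds."""
    a = np.gradient(v_mps, dt)                               # m/s^2
    f = M * a + CR * M * G + 0.5 * RHO * CD * A * v_mps**2   # total force, N
    return f * v_mps / 1000.0                                # kW

# Toy trapezoidal "cycle": accelerate, cruise at 15 m/s, brake to rest.
v = np.concatenate([np.linspace(0, 15, 16), np.full(30, 15.0),
                    np.linspace(15, 0, 16)])
p = demand_power_kw(v)
print(round(p.max(), 1), round(p.min(), 1))  # peak traction and regen power
```

The resulting profile (positive traction, negative regeneration) is what the rule-based, optimization-based, and learning-based strategies compared in this review actually take as their disturbance input.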
Table 4. Open-Source Tools for EMS Development and Testing [36,37].

| Tool Name | Description | Functionality | Application Area |
|---|---|---|---|
| OpenVIBE | Originally for brain–computer interfaces, adapted for synchronized sensor simulation | Real-time multi-signal acquisition and system response validation | Sensor integration, EMS-HIL testing |
| PyEM | Python-based electromagnetic transient simulator | Models power electronics and motor drive response | Powertrain and controller co-simulation |
| FASTSim | Lightweight simulator by NREL for estimating energy/fuel use | Supports rule-based or AI-based EMS testing | Rapid EMS evaluation and concept comparison |
| Autonomie (licensed) | Advanced HEV/EV modeling platform | Detailed vehicle-level EMS evaluation and calibration | Industrial-level validation |
| SUMO | Urban mobility simulator for traffic flow and vehicle behavior | Multi-vehicle, connected scenarios for EMS | Cooperative EMS and fleet simulation |
| VeSyMA | Modular vehicle systems modeling tool | Full HEV powertrain + EMS integration | Commercial-grade EMS development |
Table 5. Comparison of representative reinforcement learning algorithms in HEV energy management systems [57,61,62,64].

| Algorithm | Key Characteristics | Typical Applications in HEV EMS | Main Advantages |
|---|---|---|---|
| Q-Learning | Model-free; value iteration; suitable for discrete state–action spaces | Engine start/stop control, gear selection, basic energy distribution | Simple to implement; good adaptability; no need for prior knowledge |
| SARSA | On-policy learning; updates based on current actions; suitable for small discrete spaces | Battery SoC maintenance, conservative control tasks | More stable learning; safer in high-risk scenarios |
| DQN | Uses deep networks to approximate Q-values; handles high-dimensional and continuous spaces | Energy distribution under complex traffic conditions | Better generalization; suitable for complex, high-dimensional environments |
| DDPG | Actor–Critic architecture; continuous action space control | Power allocation between engine and motor in real time | High control precision; stable learning; adaptive to system changes |
| PPO | Policy gradient method; clipped surrogate objective ensures stability | Multi-objective energy optimization strategies | Efficient and robust; less sensitive to parameter tuning |
| SAC | Off-policy Actor–Critic; combines value learning and entropy regularization | Stable energy control in uncertain environments | Good robustness; safe exploration; balanced performance |
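As a concrete anchor for the first row of Table 5, the sketch below applies the tabular Q-learning update to a coarsely discretized power-split problem. The state discretization (SOC bin, demand bin), the three engine-share actions, and the toy reward weights are all assumptions made for illustration; published EMS studies use a vehicle model in place of the stand-in transition here.

```python
# Minimal tabular Q-learning for a discrete power-split setting (cf. Table 5).
# State = (SOC bin, power-demand bin); action = fraction of demand supplied
# by the engine. All numbers are illustrative assumptions.

import random
from collections import defaultdict

ACTIONS = [0.0, 0.5, 1.0]          # engine share of the demanded power
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
Q = defaultdict(float)             # Q[(state, action)] -> value

def choose_action(state):
    """Epsilon-greedy policy over the discrete engine-share actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state):
    """Standard Q-learning backup: Q <- Q + alpha * (TD target - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy interaction loop: reward penalizes fuel use and deviation from SOC bin 6.
state = (6, 2)
for _ in range(1000):
    a = choose_action(state)
    fuel = 2.0 * a                                    # assumed fuel penalty
    soc_bin = min(9, max(0, state[0] + (1 if a > 0.5 else -1)))
    next_state = (soc_bin, random.randint(0, 4))      # stand-in transition
    reward = -fuel - 0.5 * abs(soc_bin - 6)
    q_update(state, a, reward, next_state)
    state = next_state
```

DQN, DDPG, and SAC in the same table replace the lookup table with neural function approximators, which is what lifts the approach to the continuous, high-dimensional state spaces listed in their rows.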
Table 6. Summary of hybrid energy management strategy categories for HEVs [65,66,67].

| Strategy Category | Integration Approach | Representative Methods | Main Advantages | Typical Application Scenarios |
|---|---|---|---|---|
| Offline Optimization + Online Control | Use offline DP data to train models for online EMS | DP + SVM, DP + DRL | Near-global optimality; improved real-time performance | Offline-trained EMS models for plug-in HEVs |
| Real-Time Optimization in Learning Framework | Embed MPC into learning-based control | MPC + Q-Learning, MPC + DDQL + CNN | Enhanced adaptivity; reduced online computational cost | Dynamic driving environments; urban traffic |
| Dynamic Parameter Adjustment | Tune ECMS equivalence factors via AI models | DRL-based ECMS adaptation | Real-time responsiveness to load/SOC conditions | Variable terrain and driver behavior |
| Intelligent Rule/Fuzzy Parameter Optimization | Apply metaheuristics to fuzzy or rule-based systems | GA/PSO + FLC, FNN + driving recognition | Improved fuel economy and adaptability; interpretable | Multimodal traffic, mixed energy storage systems |
| Hierarchical Control Architectures | Upper-level AI planning + lower-level rule/MPC tracking | DDPG + Mode Selector, FlexNet + ECMS | Scalability, flexibility, multi-agent coordination | Intelligent fleets; connected vehicles; fleet control |
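A recurring hybrid pattern underlying several rows of Table 6, and the rule-interposing DRL work of Lian et al. [143], is to let a learned policy propose an action and then pass it through hard rules that enforce SOC and actuator limits before it reaches the powertrain. The sketch below shows such a rule filter; the limits and the stand-in policy output are assumptions for illustration, not the cited authors' implementation.

```python
# Sketch of the rule-interposing idea behind hybrid strategies (cf. Table 6
# and Lian et al. [143]): hard rules clip a learned power-split action so SOC
# and engine limits are never violated. All limits here are assumed values.

def rule_filter(p_engine_kw, soc, p_demand_kw,
                soc_min=0.3, soc_max=0.8, p_engine_max_kw=60.0):
    """Interpose safety rules on a learned action before it reaches the plant."""
    p_engine_kw = min(max(p_engine_kw, 0.0), p_engine_max_kw)
    if soc <= soc_min:        # battery depleted: engine must cover the demand
        p_engine_kw = max(p_engine_kw, min(p_demand_kw, p_engine_max_kw))
    if soc >= soc_max:        # battery full: avoid further charging
        p_engine_kw = min(p_engine_kw, max(p_demand_kw, 0.0))
    return p_engine_kw

raw_action = 5.0              # kW proposed by a stand-in learned policy
safe_action = rule_filter(raw_action, soc=0.28, p_demand_kw=20.0)
print(safe_action)            # rules force the engine to cover the 20 kW demand
```

Because infeasible actions are filtered out, the learning agent explores only within safe bounds, which is one reason such hybrids train faster and degrade more gracefully than unconstrained DRL.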
Table 7. Comparative evaluation of AI-based energy management strategies for HEVs [30,73].

| Strategy Type | Fuel Economy Improvement | Computational Cost (CPU Time) | Real-Time Feasibility (Delay) | Emissions Reduction (%) | Generalization Ability | Validation Method |
|---|---|---|---|---|---|---|
| Knowledge-Driven | 3–11% | Very low (<1 ms) | Excellent | 5–8% | Weak | Simulation/WLTC |
| Data-Driven | 10–13% | Moderate (training in hours) | Moderate (20–50 ms) | 8–12% | Moderate | HIL/Partial Real-World |
| Reinforcement Learning | 5–17% | High (training in days) | Moderate (30–50 ms) | 10–15% | Strong | Primarily Simulation |
| Hybrid Strategy | 15–19% | High (training + online optimization) | Moderate to High (50 ms) | 12–18% | Strong | HIL/WLTC Real-World |
Table 8. Quantitative comparison of RL algorithms for HEV energy management under standard driving cycles [37,76,103].

| Algorithm | Driving Cycle/Scenario | Fuel Consumption (L/100 km) | Vehicle Type/Platform |
|---|---|---|---|
| Q-Learning | FTP-75 (Urban Driving) | 3.462 | Compact HEV, Simulink platform |
| DQN | FTP-75 (Urban Driving) | 3.356 | Compact HEV, Simulink platform |
| DDPG | FTP-75 (Urban Driving) | 3.325 | Compact HEV, Simulink platform |
| SAC | FTP-75 (Urban Driving) | 3.258 | Compact HEV, Simulink platform |
| DQN | 4 × WVUSUB | 20.418 | PHEB (Plug-in Hybrid Electric Bus), CarSim |
| DDPG | 4 × WVUSUB | 20.093 | PHEB, CarSim |
| PPO (Clip) | 4 × WVUSUB | 18.960 | PHEB, CarSim |
Table 9. Comparative assessment of Pareto trade-off capabilities in AI-based EMSs [126,127,128,129].

| Strategy | Pareto Trade-Off Handling | Real-Time Execution | Comments |
|---|---|---|---|
| Knowledge-driven | Fixed rules/fuzzy (no Pareto frontier) | High (deterministic) | Interpretable, robust, low overhead |
| Data-driven | Single-objective training (no Pareto exploration) | High (fast inference) | Data-dependent, black-box |
| Reinforcement Learning | Weighted reward (partial Pareto) | High (once trained) | Adaptive, near-optimal |
| Hybrid | Offline DP + adaptive control (extended Pareto) | Medium (complex) | Broadest trade-offs, very complex |
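Table 9 notes that RL-based strategies handle Pareto trade-offs mainly through weighted (scalarized) rewards. The sketch below shows one such scalarization over fuel rate, SOC regulation, and a battery-cycling proxy; the weights and terms are assumptions for illustration, and sweeping the weights (with retraining per setting) traces an approximate Pareto front rather than computing one exactly.

```python
# Weighted-reward scalarization for multi-objective HEV EMS (cf. Table 9).
# The objective terms and weight values below are illustrative assumptions.

def scalarized_reward(fuel_g_per_s, soc, dsoc_per_s,
                      w_fuel=1.0, w_soc=0.5, w_aging=0.2, soc_ref=0.6):
    r_fuel = -fuel_g_per_s             # minimize instantaneous fuel rate
    r_soc = -abs(soc - soc_ref)        # charge-sustaining SOC regulation
    r_aging = -abs(dsoc_per_s)         # proxy penalty for harsh battery cycling
    return w_fuel * r_fuel + w_soc * r_soc + w_aging * r_aging

# Sweeping (w_fuel, w_soc, w_aging) over a grid, one training run per setting,
# yields points approximating the fuel-economy vs. SOC-regulation Pareto front.
print(scalarized_reward(fuel_g_per_s=1.2, soc=0.55, dsoc_per_s=0.01))
```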
Table 10. Comparison of AI-based energy management strategies [63,160].

| Strategy Type | Principle | Representative Methods | Advantages | Disadvantages |
|---|---|---|---|---|
| Knowledge-Driven | Formulates control rules from expert experience and physical models; no large-scale data training required. | Deterministic rules, Fuzzy Logic Control | Easy to interpret, real-time capable, and robust in engineering applications. | Lacks learning ability; difficult to adapt to complex conditions. |
| Data-Driven | Trains on data from different driving environments to learn optimal control strategies. | Neural Networks, Decision Trees, SVM, K-means, PCA | Adapts to heterogeneous data; flexible control. | Depends on data quality; sensitive to differences between datasets. |
| Reinforcement Learning | Learns through interaction with the environment to optimize the strategy autonomously. | Q-Learning, DQN, DDPG, PPO, etc. | Near-optimal control; self-learning capability. | Limited real-world validation; high computational cost; limited efficiency in noisy environments. |
| Hybrid | Combines rule-based optimization with reinforcement learning to enhance system adaptability. | DP + SVM, MPC + RL, GA optimization | Comprehensive performance; efficient optimization. | Increased system complexity; hard to design and debug. |