1.2. Literature Review
For the energy management of FCHEVs, strategies can be categorized into rule-based EMSs [4], optimization-based strategies [5], and learning-based strategies.
Rule-based EMSs (RB-EMSs) allocate power between the fuel cell and the power battery through pre-defined logic and rules. Typically, this strategy requires the design of a rule table based on the researcher’s expertise, experimental results, or mathematical models [6]. Power distribution between the fuel cell and the battery is then determined by referencing the rule table according to the vehicle’s current state and input parameters. Generally, such EMSs are straightforward to implement, require minimal computational effort, and do not require prior knowledge of the vehicle’s working conditions. These advantages make RB-EMSs widely applicable in real vehicles. Wang et al. [7] devised a finite-state-machine-based EMS that partitions the state of charge (SOC) and input–output of lithium-ion batteries into nine distinct states, effectively managing hybrid power systems; that study also explored energy management after integrating supercapacitors. Yuan et al. [8] addressed the issue that the energy generated by the fuel cell lags behind the power demand within certain ranges: they implemented a dynamically allocated, optimized power-following logic-threshold filtering strategy, which reduced hydrogen consumption and effectively stabilized the battery’s capacity range. RB-EMSs offer simplicity and strong real-time performance, ensuring stable and reliable system operation; however, they are limited by poor flexibility and suboptimal performance [9]. Some scholars have employed fuzzy control rules [10] to address these issues; for instance, the authors of [11] compared deterministic rule-based and fuzzy logic energy management strategies for PEMFC hybrid vehicles, demonstrating their respective strengths in computational efficiency and adaptability under standard driving cycles. Nevertheless, RB-EMS performance relies heavily on parameter calibration, which often makes it difficult to optimize fuel economy.
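The rule-table idea described above can be illustrated with a minimal sketch. All thresholds, power limits, and the charging offset below are illustrative assumptions, not values from the cited works:

```python
def rule_based_split(p_demand, soc, soc_low=0.4, soc_high=0.7,
                     fc_min=5.0, fc_max=60.0):
    """Toy rule-table EMS (illustrative values, kW): pick the fuel cell
    power from the SOC band and demand; the battery covers the rest."""
    if soc < soc_low:          # battery depleted: fuel cell covers demand plus charging
        p_fc = min(fc_max, p_demand + 10.0)
    elif soc > soc_high:       # battery full: fuel cell idles, battery discharges
        p_fc = fc_min
    else:                      # normal band: power-following with clamping
        p_fc = min(fc_max, max(fc_min, p_demand))
    p_batt = p_demand - p_fc   # positive = discharge, negative = charge
    return p_fc, p_batt
```

The appeal of such rules is clear here: the decision is a handful of comparisons with no model or solver, which is why RB-EMSs run comfortably on production controllers.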
Optimization-based energy management strategies dynamically allocate power output across different energy sources. They primarily comprise two classes of solutions: global optimization (e.g., Dynamic Programming (DP) [12]) and local optimization (e.g., Model Predictive Control (MPC) [13] and the Equivalent Consumption Minimization Strategy (ECMS) [14,15]). Global optimization strategies employ algorithms that can find the optimal solution on a global scale. Ravey et al. [16] designed an EMS using the battery SOC as the state variable in a DP framework, improving the vehicle’s overall economic performance. To explore the potential of online optimal energy management, scholars have investigated numerous suboptimal EMSs, including MPC and ECMS. MPC first formulates a mathematical model based on the vehicle’s operating state that predicts the dynamic behavior of the system, such as vehicle speed and power requirements, over a future time horizon [17]. Subject to constraints such as power limits and SOC ranges, MPC then employs optimization algorithms to minimize the objective in real time, thereby determining the energy distribution. Zhou et al. [18,19] proposed a cooperative speed prediction method based on Markov chains and Fuzzy C-Means clustering, which enhances prediction accuracy. Similarly, Yang et al. [20] derived an optimal power distribution decision by minimizing a multi-objective function. Jia et al. [21] developed a Linear Parameter-Varying (LPV) prediction model that accounts for changes in system parameters; the LPV model is updated online to adapt to changes in the battery’s state of charge. Since the precision of MPC is closely tied to the accuracy of its predictions, researchers have focused primarily on improving prediction accuracy. In addition to conventional or improved Markov chains [22], some have explored neural networks [23] and other nonlinear models to forecast vehicle speed. Although these speed prediction methods have demonstrated high accuracy in simulation, their performance relies heavily on the completeness of historical databases [24]. Lin et al. [25] developed a Pontryagin’s Minimum Principle (PMP)-based predictive energy management strategy for FCPHEVs that derives and corrects co-state boundaries in real time to enable fast, optimal, and constraint-compliant control, and Ma et al. [26] proposed a variable-horizon velocity-prediction-based adaptive PMP-EMS for FCHEVs. In real-world conditions, vehicle speed is influenced by numerous uncertain factors, such as the random distribution of traffic lights and adverse weather like rain or snow; some scholars [27] have therefore proposed an energy management strategy that incorporates future terrain information to optimize power allocation, improving battery durability and reducing operating costs.
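The receding-horizon loop behind MPC-based EMSs can be sketched as follows. For brevity, the candidate fuel cell powers are searched by brute force over a coarse grid, and the SOC dynamics, fuel cost, and penalty weights are crude illustrative placeholders rather than a validated vehicle model:

```python
def mpc_step(p_demand_forecast, soc, soc_ref=0.6,
             fc_grid=range(0, 61, 5), alpha=0.05, beta=200.0):
    """One receding-horizon step: evaluate each candidate fuel cell power
    held constant over the horizon, score a hydrogen-like cost plus SOC
    deviation, and return the first action of the best plan."""
    best_u, best_cost = None, float("inf")
    for p_fc in fc_grid:
        s, cost = soc, 0.0
        for p_dem in p_demand_forecast:
            p_batt = p_dem - p_fc              # battery covers the residual (kW)
            s -= alpha * p_batt / 100.0        # crude SOC dynamics placeholder
            cost += 0.02 * p_fc + beta * (s - soc_ref) ** 2
        if cost < best_cost:
            best_u, best_cost = p_fc, cost
    return best_u
```

At the next sampling instant the forecast is refreshed and the optimization repeats, which is what gives MPC its sensitivity to prediction accuracy noted above.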
ECMSs aim to minimize the overall energy consumption of the system [28]. The electrical energy drawn from the battery is converted into an equivalent fuel consumption through an equivalent factor (EF) [29]. This EF is crucial, as it strongly influences battery output and, in turn, the overall energy distribution, so much of the research in this area has focused on determining the optimal EF. A conventional approach is offline optimization of the EF for specific operating conditions [30]. However, discrepancies often arise between the practical driving cycle and the conditions used in offline training, resulting in suboptimal energy allocation policies. To address this limitation, Wei et al. [31] proposed a feedback regulation method that dynamically adjusts the EF according to the discrepancy between the current SOC of the lithium-ion battery and a reference SOC. Li et al. [32] proposed adaptive adjustment of the EF based on driving conditions: optimal EF values for different driving conditions are derived by offline training, and during online operation the current driving condition is first identified and the corresponding EF is applied in real time. Despite these advancements, most previous methods treated hydrogen consumption as the single optimization objective, neglecting the economic losses caused by the aging of fuel cells and lithium-ion batteries [33]. Li et al. [34,35] therefore proposed an adaptive ECMS that considers both hydrogen consumption and fuel cell degradation, bridging the gap between fuel cell lifetime degradation and adaptive ECMS. In summary, ECMS improves fuel economy and energy utilization efficiency through real-time optimization. Gao et al. [36] presented a scenario-oriented adaptive ECMS using NODC-based EF initialization and hybrid neural network speed prediction to enhance SOC control and reduce hydrogen consumption in fuel cell hybrid vehicles. Nevertheless, challenges remain in enhancing the adaptability of the EF and in holistically accounting for the effects of fuel cell and lithium-ion battery aging on economic performance. This article therefore proposes an EMS that searches for the optimal EF in real time, avoiding the mismatch that arises when an offline-calibrated EF is applied to driving conditions it was not trained for, while fully considering the impact of both fuel cell and battery aging.
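The ECMS core described above — converting battery power into an equivalent hydrogen cost via the EF, with SOC-feedback adaptation in the spirit of [31] — can be sketched as follows. The fuel cell consumption map, conversion constants, and feedback gain are illustrative assumptions:

```python
def adapt_ef(ef0, soc, soc_ref=0.6, k=2.0):
    """SOC-feedback EF adaptation: raise the EF (making battery energy
    'more expensive') when SOC drops below the reference, and vice versa."""
    return ef0 + k * (soc_ref - soc)

def ecms_split(p_demand, ef, fc_grid=range(0, 61, 5)):
    """Minimize the instantaneous equivalent consumption: a hydrogen-like
    fuel cell cost plus the EF-weighted battery electrical cost."""
    def equiv_cost(p_fc):
        p_batt = p_demand - p_fc                 # battery covers the residual
        h2 = 0.02 * p_fc + 0.0002 * p_fc ** 2    # toy fuel cell consumption map
        return h2 + ef * 0.01 * p_batt           # battery power as equivalent fuel
    return min(fc_grid, key=equiv_cost)
```

The sketch makes the EF’s leverage visible: a small EF makes battery discharge cheap and starves the fuel cell, while a large EF pushes the split toward the fuel cell, which is why EF calibration dominates the ECMS literature.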
The application of learning-based strategies in fuel cell hybrid systems has demonstrated significant potential. By leveraging machine learning and artificial intelligence techniques, these strategies can optimize energy management while exhibiting strong adaptability and intelligence, especially in handling complex nonlinear problems and uncertainties [37]. In fuel cell hybrid systems, learning-based strategies first collect a large amount of operational data from various sensors and historical records; these data are then processed and used to train machine learning models, commonly neural networks [38], support vector machines [39], and Reinforcement Learning (RL) [40]. Unlike neural-network-based EMSs, RL-based strategies directly learn the optimal energy management scheme from historical data [41]: through continuous experimentation and feedback, the agent gradually optimizes its control strategy. RL algorithms commonly used in energy management include Q-learning [41], deep Q-networks [42], the deep deterministic policy gradient [43], and the Twin Delayed Deep Deterministic Policy Gradient (TD3) [44]. Zhou et al. [45] applied deep reinforcement learning to design SOC regulators that assist in power distribution among the energy sources. Liu et al. [46] proposed an adversarial imitation reinforcement learning EMS, using DP-based expert guidance to accelerate training and reduce battery capacity loss. Deng et al. [47] applied TD3 to the energy management of FCHEVs, showing promising performance, and Nie et al. [48] proposed a hierarchical cooperative optimization strategy combining TD3 with other energy management techniques to reduce hydrogen consumption. Li et al. [49] applied a proximal-policy-optimization-based deep reinforcement learning EMS to FCHEVs. However, these strategies face several challenges: training and optimizing the models requires a large amount of high-quality data, the computational demands during real-time operation are significant, and the performance and stability of these models in complex real-world environments require further validation [50]. While learning-based and deep reinforcement learning methods offer promising results, they are often limited by high data requirements and poor real-time adaptability; there thus remains a gap in developing a lightweight, real-time EMS that balances prediction simplicity and optimization robustness. Essoufi et al. [11] proposed a data-driven co-state network energy management scheme for fuel cell and battery electric vehicles that ensures robust, constraint-compliant control without prior system knowledge or extensive training.
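The trial-and-feedback loop underlying RL-based EMSs such as Q-learning [41] can be sketched with a tiny tabular agent. The SOC/power discretization (three bands, three levels) and the reward, which simply favors matching the power level to the SOC band, are illustrative assumptions:

```python
import random

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.95):
    """Standard Q-learning update: move Q(s, a) toward the bootstrapped
    target r + gamma * max_a' Q(s', a')."""
    target = r + gamma * max(Q[s_next])
    Q[s][a] += alpha * (target - Q[s][a])

# Toy training loop: 3 SOC bands x 3 fuel cell power levels.
n_states, n_actions = 3, 3
Q = [[0.0] * n_actions for _ in range(n_states)]
random.seed(0)
for _ in range(500):
    s = random.randrange(n_states)
    a = random.randrange(n_actions)    # pure random exploration for brevity
    r = -abs(a - s)                    # toy reward: match power level to SOC band
    q_update(Q, s, a, r, s_next=random.randrange(n_states))
```

Even in this toy form, the data appetite mentioned above is visible: hundreds of interactions are needed for a 3×3 table, and deep variants (DQN, DDPG, TD3) scale this to millions of samples.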
Despite the significant progress made in EMSs for FCHEVs, most existing studies focus either on enhancing fuel economy or on improving power response, and few simultaneously address the combined impact of fuel cell and battery aging on long-term economic performance [51]. This study introduces an EMS framework that combines ARIMA-based real-time speed prediction with GA-based equivalent factor (EF) optimization. The framework is designed not only to maintain system responsiveness and predictive stability under uncertain traffic conditions but also to holistically optimize fuel consumption and aging-related economic losses.
1.3. Original Contributions
Compared with deep learning models such as RNN and LSTM, which require extensive offline training, the ARIMA model offers low computational complexity and eliminates the need for prolonged training, making it well suited for real-time vehicle speed prediction under limited onboard computational resources. Furthermore, ARIMA provides interpretable parameters and robust short-term temporal modeling capabilities, enabling stable performance in dynamic and uncertain traffic environments.
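The sliding-window update at the heart of this kind of predictor can be sketched as follows. For brevity, an ARIMA(1,1,0)-style model is fitted by least squares in plain Python rather than with a full ARIMA library, and the speed trace is synthetic:

```python
def fit_ar1(x):
    """Least-squares AR(1) coefficient for a short series."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den if den else 0.0

def predict_next_speed(window):
    """ARIMA(1,1,0)-style one-step forecast: difference the speed
    window, fit AR(1) on the differences, and integrate back."""
    d = [window[i] - window[i - 1] for i in range(1, len(window))]
    phi = fit_ar1(d)
    return window[-1] + phi * d[-1]

# Sliding-window use: keep only the last N samples and refit each step.
speeds = [10.0, 10.5, 11.2, 11.8, 12.5, 13.1]   # synthetic trace (m/s)
window = speeds[-5:]                             # window length N = 5
forecast = predict_next_speed(window)
```

Because each refit touches only a handful of samples, the per-step cost is a few dozen multiplications, which is what makes this approach attractive on resource-limited onboard controllers compared with retraining an RNN or LSTM.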
For optimization, the Genetic Algorithm (GA) is adopted over methods such as Particle Swarm Optimization (PSO) and Reinforcement Learning (RL) due to its global search capability, implementation simplicity, and flexibility in handling nonlinear multi-objective functions involving both fuel consumption and degradation-related economic costs. Unlike RL, which requires large-scale interaction data and offline simulation environments, the GA can operate in online or semi-online modes with relatively low computational overhead.
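A minimal GA over a scalar EF can be sketched as follows. The fitness function is a placeholder standing in for the combined hydrogen-plus-degradation cost (its minimum is placed at EF = 2.5 by construction); only the GA skeleton itself — selection, crossover, mutation — reflects the method described above:

```python
import random

def fitness(ef):
    """Placeholder total cost: toy hydrogen term plus toy aging term,
    minimized near ef = 2.5 by construction."""
    return (ef - 2.5) ** 2 + 0.1 * abs(ef - 2.5)

def ga_optimize_ef(pop_size=20, generations=40, bounds=(1.0, 4.0),
                   mut_rate=0.3, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                 # rank by cost (lower is better)
        parents = pop[: pop_size // 2]        # truncation selection keeps elites
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)             # arithmetic crossover
            if rng.random() < mut_rate:       # Gaussian mutation
                child += rng.gauss(0.0, 0.1)
            children.append(min(hi, max(lo, child)))
        pop = parents + children
    return min(pop, key=fitness)
```

With a small population and a scalar decision variable, each generation costs only a few dozen fitness evaluations, which is what permits the online and semi-online operation claimed above.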
The main contributions of this work are summarized as follows:
(1) A lightweight ARIMA-based real-time vehicle speed prediction framework is proposed, employing a sliding time window for dynamic updates. This design achieves a balance between predictive accuracy and computational efficiency, particularly in data-sparse scenarios.
(2) A novel GA-based adaptive equivalent factor optimization method is developed, integrating hydrogen fuel consumption with the economic costs of fuel cell and battery degradation. This approach bridges short-term energy optimization with long-term cost-effectiveness, enhancing both energy efficiency and system durability.
Overall, this study establishes a real-time, computationally feasible energy management framework that achieves a balance between predictive performance, optimization robustness, and lifecycle economic considerations.