4.1. Battery Aging Discussion
This study evaluated battery aging for two NMC pack sizes while varying target SoC and maximum charging power, using both the minimal and the oracle charging strategy (Section 3.1). Across all feasible parameter combinations (i.e., those without mid-service breakdown), the simulated capacity loss spans a wide range relative to the respective reference variant.
For the 100 kWh pack, the relative capacity loss varies between 43% and 101% of the reference (Table A1). The lowest value (43.2%) occurs for the oracle strategy at 55% target SoC and 100 kW maximum charging power. Even more moderate adjustments are effective: for example, reducing the target SoC from 100% to 80% and the max. charging power from 200 kW to 120 kW lowers capacity loss to 71.6% of the reference while keeping all trips feasible. The oracle strategy provides an additional but smaller reduction: at 100% target SoC and 200 kW max. power, capacity loss drops from 100% (minimal) to 97.7% (oracle), i.e., by about two percentage points.
For the 50 kWh pack, relative capacity loss ranges between 92% and 103% of the reference (Table A1). Thus, at identical operating conditions the larger pack exhibits a substantially greater potential for aging reduction: at 100 kW and 90% target SoC, capacity loss is 80.0% of the reference for 100 kWh, but 92.2% for 50 kWh. This quantifies the intuitive trade-off between higher investment cost for larger batteries and lower per-kWh stress in operation.
Energy demand reacts only weakly to these changes. For all feasible variants of the 100 kWh pack, total energy drawn from the catenary stays between 98.6% and 100.5% of the reference (Table A2). The best case identified in this study combines a capacity-loss reduction to 43.2% of the reference with an energy demand of 98.6%. These numbers indicate that, in the investigated operating regime, reducing the C-rate and avoiding high SoC mainly shift internal and recuperation losses, rather than fundamentally changing net vehicle energy demand.
Some parameter combinations become operationally infeasible when both target SoC and charging power are aggressively reduced. For instance, at 50% to 85% target SoC, mid-service battery depletion occurs and the variant is excluded from the tables. Even where no breakdown occurs, lowering max. power tends to shift the usable SoC window downward over the day, increasing effective DoD and the risk of falling below a safety SoC margin. Consequently, any constraint on charging power or target SoC must be derived from service requirements first, and then optimized for aging within that feasible region.
Charging to 100% SoC does not monotonically increase aging in this setup. At high max. charge power, the variant that charges to 100% exhibits slightly lower capacity loss than the one charging to 95% (for the 100 kWh pack at 200 kW, minimal strategy: 100% target SoC yields 100% relative loss, 95% yields 101.4%, and 90% yields 97.2%). The CC/CV controller limits current in the CV phase; thus, a higher target SoC forces more time at lower C-rates, partially offsetting the stress from high SoC. This effect is specific to the chosen cell parameters and stress map, but it illustrates that SoC and C-rate cannot be optimized independently.
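This CC/CV interaction can be made concrete with a minimal sketch (illustrative linear taper and parameters, not the cell model used in this study): once the SoC passes the CV onset, current tapers off, so charging deeper into the CV region lowers the mean C-rate of the session.

```python
def cc_cv_power(soc, p_max_kw, cv_onset=0.8):
    """Piecewise CC/CV approximation: full power below the CV-onset SoC,
    then a linear current taper that reaches zero at 100% SoC."""
    if soc < cv_onset:
        return p_max_kw
    return p_max_kw * max(0.0, (1.0 - soc) / (1.0 - cv_onset))

def mean_c_rate(target_soc, capacity_kwh=100.0, p_max_kw=200.0, dt_h=1.0 / 3600.0):
    """Charge from 50% SoC to target_soc and return the session's mean C-rate."""
    soc, c_rates = 0.5, []
    while soc < target_soc:
        p = cc_cv_power(soc, p_max_kw)
        soc += p * dt_h / capacity_kwh
        c_rates.append(p / capacity_kwh)
    return sum(c_rates) / len(c_rates)

# Charging further into the CV phase spends more time at low current,
# so the mean C-rate drops: mean_c_rate(0.99) < mean_c_rate(0.90)
```

This is the mechanism behind the non-monotonic trend: a higher target SoC adds high-SoC stress but also shifts charging time toward lower C-rates.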
Overall, the quantified trends are consistent with empirical syntheses such as Klaproth et al. [18], who report reduced degradation for low C-rates, mid-range SoC windows, and avoidance of prolonged high SoC for NMC cells. Our results extend these qualitative recommendations to a concrete HTB use case by showing that, under realistic duty cycles reconstructed from CAN logs, charging-power and target-SoC policies can cut simulated capacity loss by well over 50%, while energy demand changes remain within roughly 1%. This aligns with the general understanding from stress-map-based aging models [17,39] that operation within “battery-friendly” regions primarily affects lifetime rather than day-to-day energy consumption.
The use of recorded, high-resolution CAN data also addresses the gap identified by Baure et al. [16], who showed that synthetic drive cycles can underestimate aging compared to real-world operation. In our case, real velocity, power, and stop patterns directly drive the stress-map evaluation, so the reported relative aging reductions are tied to realistic HTB duty cycles rather than idealized profiles.
4.2. Fleet Power Demand Discussion
Section 3.2 has shown that the investigated per-vehicle charging strategies and parameter variations have limited influence on the fleet’s daily load shape.
Across all 95 feasible parameter combinations, the RLM-based fleet peak power varies only within 97.5% to 103% of the respective reference case (Table A3), i.e., peak changes are on the order of a few percent. In contrast, each configuration exhibits a substantial intrinsic peak-shaving potential when comparing peak power to the mean power during service hours: the difference ranges from 15% to 19% of the peak (Table A5). For the reference configuration with the 100 kWh pack, this corresponds to a theoretical reduction of about 300 kW (from a peak of roughly 1.9 MW), as illustrated in Figure 8. However, neither lowering the max. charging power nor employing the oracle lookahead strategy materially exploits this potential.
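The peak-shaving potential quoted here is simply the gap between peak and mean service-hour power, expressed as a fraction of the peak; a short sketch with hypothetical load samples (not the recorded fleet data):

```python
def peak_shaving_potential(load_kw):
    """Gap between peak and mean power as a fraction of the peak."""
    peak = max(load_kw)
    mean = sum(load_kw) / len(load_kw)
    return (peak - mean) / peak

# Hypothetical service-hour fleet load samples in kW:
load = [1550.0, 1620.0, 1900.0, 1580.0, 1500.0, 1540.0, 1610.0, 1560.0]
potential = peak_shaving_potential(load)    # ~0.15 for these values
theoretical_cut_kw = potential * max(load)  # ~290 kW of the 1.9 MW peak
```

Realizing this cut requires shifting charging energy away from the coincident peak without violating per-vehicle SoC constraints, which is exactly what purely local strategies fail to coordinate.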
The mean power drawn during service hours is even less sensitive. For the 100 kWh pack, all variants lie between 98.5% and 100.4% of the reference mean power (Table A4); similar ranges are observed for the 50 kWh pack. Thus, the timetable-induced concurrency of vehicles dominates the load profile; per-vehicle “local” smart charging that uses only the vehicle’s own SoC and catenary availability (even with perfect local lookahead) is insufficient to reshape the fleet aggregate.
These findings complement existing work on IMC power management. Diab et al. [29] propose adaptive charging constrained by spare substation capacity and feeder limits for Arnhem; their strategy effectively reduces local overload but does not explicitly optimize fleet-level peak power or battery aging. Our results are consistent with that perspective: local power limits or per-vehicle lookahead (oracle strategy) modestly change the timing and height of individual charging events, but the main peak remains driven by coincidence in service schedules. In other words, the quantitative 15% to 19% peak-shaving potential identified here can only be realized through cooperative, fleet-level coordination that explicitly considers concurrency across vehicles and feeder sections, rather than through isolated per-vehicle policies.
From an operator’s perspective, a 15% to 19% reduction in the system-wide peak power corresponds to 270 kW to 360 kW for the case study. At a representative capacity tariff of about EUR 120 per kW and year [44], this equates to an annual saving potential on the order of EUR 32,000 to EUR 43,000 if all substations are billed jointly. The numerical analysis therefore supports developing fleet-level peak-shaving strategies; it also shows that per-vehicle battery-friendly policies alone are not sufficient to tap this economic potential.
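The saving estimate follows from multiplying the avoided peak by the capacity tariff; a one-line sketch using the representative tariff of about 120 (currency units) per kW and year cited above:

```python
def annual_capacity_saving(peak_reduction_kw, tariff_per_kw_year=120.0):
    """Annual demand-charge saving for a given reduction of the billed peak."""
    return peak_reduction_kw * tariff_per_kw_year

low = annual_capacity_saving(270.0)   # 15% end of the range -> 32,400 per year
high = annual_capacity_saving(360.0)  # 19% end of the range -> 43,200 per year
```

As noted, this estimate only holds if all substations are billed jointly; per-substation metering would reduce the exploitable reduction.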
4.3. Grid Losses and Voltage Drop Discussion
The node-voltage analysis quantifies the impact of charging policies on catenary efficiency and voltage quality.
Across all investigated variants, the catenary efficiency—defined as vehicle-side energy intake divided by substation-side delivery—ranges from 96.6% to 96.9%, with a mean of 96.7% (Table 5). Thus, grid losses account for only about 3.3% of delivered energy, and their variation with charging power and strategy is marginal. The simplified accumulation method (Equations (7) and (8)), combined with a fixed efficiency factor, underestimates the pooled substation peak power by merely 0.07% to 1.19% (Table 7). This demonstrates that, for first-order peak-demand studies, the inexpensive accumulation approach provides sufficient accuracy; the more expensive node-voltage analysis is mainly needed for detailed voltage-stability assessments and feeder-level studies.
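The accumulation side of this comparison can be written in a few lines: per time step, vehicle-side powers are summed and scaled by a fixed catenary efficiency to estimate substation-side delivery (a sketch; the 96.7% figure is the mean efficiency reported above, and the function name is ours):

```python
def accumulated_substation_power(vehicle_powers_kw, efficiency=0.967):
    """Accumulation method: total vehicle-side power divided by a fixed
    catenary efficiency approximates substation-side delivery."""
    return sum(vehicle_powers_kw) / efficiency

# One time step with three vehicles drawing traction plus charging power:
p_sub_kw = accumulated_substation_power([180.0, 95.0, 210.0])
# Grid losses appear as a constant ~3.4% markup on the vehicle-side total,
# in contrast to the node-voltage analysis, which resolves losses per
# feeder section and time step.
```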
Voltage-limit violations are infrequent but sensitive to charging power and strategy. For the 100 kWh pack at 200 kW and 100% target SoC, the minimal strategy produces 132 undervoltage events (vehicle voltage below 400 V), while reducing the max. charging power to 80 kW cuts this to 46 events; the oracle strategy further reduces it to 44 events (Table 8). Hence, in this case the combination of lower power and lookahead yields a reduction of about two thirds. Overvoltage events are rare (0 to 2 across all variants, Table 9) and practically negligible in this case study.
These quantitative results are in line with detailed IMC grid studies such as Diab et al. [24] and Barbone et al. [25], who also report that high simultaneous charging currents in limited feeder sections are the main driver of voltage excursions in trolleybus networks. While their models are more detailed and include realistic feeder layouts and continuous motion, our coarser resistor-network approach reproduces key tendencies: reducing local charging currents improves voltage stability, but the overall energy efficiency remains high and only weakly dependent on charging parameters.
4.5. Applicability to Other Case Studies
This case study uses data from one HTB in a single city over one year, but the methodology is broadly transferable. The key requirements are (i) sufficiently resolved power demand time series (traction, auxiliaries, heating) and (ii) a description of the catenary layout and substation locations. Given these, the presented approach can be applied to other trolleybus networks and vehicle types.
The reliance on recorded CAN data directly addresses the challenge raised by Baure et al. [16], who found that real-world traffic and operating conditions significantly affect battery aging compared to synthetic cycles. Our framework offers a practical way to bring such real-world variability into fleet-level aging and power-demand studies, without requiring high-fidelity multi-physics models.
The same simulation approach could, in principle, be applied to opportunity-charging bus fleets or partial-catenary rail applications (e.g., battery trams), provided that the charging infrastructure can be represented as a DC network with known feeder topology. However, because opportunity charging often involves fewer, more concentrated charging locations, the relative importance of local scheduling versus timetable-induced concurrency may differ from the trolleybus case explored here.
4.7. Key Limitations
For transparency, we list the key limitations of this study and their quantitative impact where possible.
4.7.1. Limited Number of Vehicles
Fleet-demand estimates rely on overlaying 40 recorded workdays from two physical vehicles. This preserves realistic route patterns and departure times but cannot capture vehicle-to-vehicle heterogeneity (mass, driver behavior, HVAC setpoints). The peak-shaving potential of 15% to 19% should therefore be interpreted as an order-of-magnitude estimate rather than an exact system value. Future studies with concurrent data from larger fleets could refine this number and quantify inter-vehicle variance.
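The overlay underlying the fleet-demand estimate is an element-wise sum of the recorded per-duty load profiles; a minimal sketch with hypothetical, time-aligned series:

```python
def overlay_fleet_load(duty_profiles_kw):
    """Element-wise sum of equally long, time-aligned per-duty power series."""
    return [sum(step) for step in zip(*duty_profiles_kw)]

duty_a = [120.0, 250.0, 90.0]
duty_b = [200.0, 180.0, 160.0]
fleet = overlay_fleet_load([duty_a, duty_b])  # [320.0, 430.0, 250.0]
```

Because every overlaid duty stems from the same two physical vehicles, the sum reproduces realistic concurrency but not inter-vehicle heterogeneity.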
4.7.2. Simplified Battery Model
The electric battery model is a single-node equivalent circuit without thermal dynamics. As a consequence, temperature-dependent effects on OCV, internal resistance and stress-map aging are not represented, and the absolute values of capacity loss should be treated as indicative rather than predictive. However, the relative differences between parameter sets are primarily driven by SoC window, C-rate and cycling pattern, which are known from the literature [18,39] to dominate NMC aging behavior. The finding that capacity loss can be reduced to roughly 40% to 80% of the reference, while energy demand remains within 1%, is therefore expected to be qualitatively robust.
4.7.3. Open-Loop Grid Model
The node-voltage analysis is performed post hoc; vehicles do not adapt their current draw in response to simulated catenary voltages. Undervoltage events are therefore diagnostic rather than indicative of actual protection behavior. Nevertheless, the energy error introduced by non-convergent time steps is below 0.004% of fleet energy demand (Table 6), and the peak-power error of the accumulation-based method remains below 1.2% (Table 7). Hence, the quantitative peak-shaving potentials reported above are only weakly affected by this simplification.
4.7.4. Synthetic Partial Catenary Overlay
The original network uses nearly full catenary coverage; partial-catenary operation is emulated via a synthetic overlay. While this enables controlled sensitivity studies, absolute values such as the number of undervoltage events (e.g., 44 to 132 per day for the 100 kWh pack at 200 kW) depend on the assumed overlay. Applying the method to a network with existing HTB operation and measured substation data would sharpen these estimates and allow direct validation of the grid model, similar in spirit to the measurements reported by Paternost et al. [23].
4.7.5. Oracle Strategy
The oracle strategy is not causal: it uses the actual future exit time from the catenary section to construct a linear SoC trajectory. In practice, similar behavior could be approximated using timetable information and statistical dwell-time distributions, but deviations between scheduled and actual times would introduce additional uncertainty. In this sense, the oracle results provide an upper bound on what purely per-vehicle lookahead can achieve; the fact that even this upper bound does not materially reduce the fleet peak power strengthens the conclusion that cooperative, fleet-level coordination is necessary for peak shaving.
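The oracle control law can be stated in one line: charge at the constant power that moves the SoC linearly to the target exactly at the (non-causally known) exit time, clipped to the maximum charging power (a sketch; argument names are ours):

```python
def oracle_charge_power(soc, target_soc, capacity_kwh, remaining_h, p_max_kw):
    """Constant power that reaches target_soc exactly when the vehicle leaves
    the catenary section in remaining_h hours, limited by p_max_kw."""
    if remaining_h <= 0.0 or soc >= target_soc:
        return 0.0
    p_needed_kw = (target_soc - soc) * capacity_kwh / remaining_h
    return min(p_needed_kw, p_max_kw)

# 100 kWh pack, 60% -> 90% SoC with half an hour left under the wire:
# roughly 60 kW instead of immediately charging at full power.
p = oracle_charge_power(0.6, 0.9, 100.0, 0.5, 200.0)
```

A causal approximation would replace the exact exit time with a timetable-based estimate, at the cost of the additional uncertainty discussed above.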
4.7.6. Data Source Availability
Due to confidentiality constraints, the underlying CAN data cannot be published. This limits strict reproducibility but does not affect the methodological contribution. The code base and modelling approach can be applied to other datasets, and the relative trends (e.g., impact of charging power and SoC limits on aging and peak demand) are expected to generalize qualitatively.