1. Introduction
With the International Civil Aviation Organization (ICAO) [
1] and other national bodies setting ambitious targets for net-zero greenhouse gas (GHG) emissions by 2050, the aviation sector faces increasing pressure to reduce its environmental impact. Although sustainable aviation fuels (SAF) offer a short-term reduction in carbon intensity, they require complex refining infrastructure and do not fully eliminate tailpipe emissions [
2]. Compared to current battery–electric technologies, which are often limited by weight in vertical take-off and landing (VTOL) applications [
3,
4], hydrogen provides a zero-emission alternative with a much higher gravimetric energy density (approximately 120 MJ/kg). This advantage is particularly evident when hydrogen is used in Proton Exchange Membrane Fuel Cells (PEMFC) to support economic decarbonisation [
5]. The hydrogen economy for aviation is gaining momentum across several technological fields.
Cybulsky et al. [
6] have established performance targets for hydrogen fuel-cell systems to support regional aviation decarbonisation, demonstrating that hydrogen-fuelled platforms can achieve energy efficiency equal to or greater than that of conventional jet-fuel aircraft. Evaluations of hydrogen storage technologies indicate that 700 bar compressed gas and cryogenic liquid hydrogen are particularly suitable for aerospace applications, with industry roadmaps projecting commercial viability within the next decade [
7,
8]. However, integrating hydrogen into light rotorcraft presents significant engineering challenges, such as the volumetric limitations of hydrogen storage, the need for effective thermal management of waste heat, and the complexity of power management during high-load operations, including hover and climb. Furthermore, hybrid architectures that combine steady-state fuel cells with transient battery buffers require advanced energy management strategies to maintain performance and safety throughout the flight envelope [
9].
The aviation sector has increasingly adopted electrification to address societal challenges, leading to significant advancements in unmanned aerial vehicles (UAVs) and the rapid development of eVTOL multicopter platforms. However, current battery technology imposes major limitations on endurance. For example, multicopters such as the DJI Matrice 300 RTK achieve about 55 min of flight time, while forward-flight VTOL UAVs like the Penguin BE UAV reach around 2 h. Battery packs for eVTOL applications typically provide 240 Wh/kg, which greatly restricts mission duration. NASA has set a target of 400 Wh/kg as a benchmark for future battery performance [
10], and the Faraday Institute reports that Tesla battery packs have recently achieved approximately 270 Wh/kg [
11]. In contrast, hydrogen fuel-cell systems combined with 700 bar storage offer significantly higher energy density, enabling extended mission profiles without proportional weight penalties [
12]. Gómez et al. [
13] indicate that achieving long-range autonomy (over 500 km) may require 5–10 kg of hydrogen stored in compressed vessels, while Clemens et al. [
14] show that fuel-cell/battery hybrid systems can provide up to 75% higher effective energy density compared to conventional battery packs. Recent conceptual studies demonstrate that retrofitting a VTOL aircraft with a fuel-cell and battery system can extend the operational range and reduce the propulsion system mass compared to battery-only configurations. For instance, Cäsar et al. [
15] report that a hybrid system increased the range from 112 km to 160 km and reduced propulsion system mass by approximately 38%, while An et al. [
12] present sizing methodologies for multi-mode eVTOL UAVs that further illustrate the potential of fuel-cell hybridisation to overcome battery-limited endurance. These results collectively highlight that hydrogen fuel cells can substantially enhance energy density and endurance; however, realistic system masses—including fuel cell stack, hydrogen storage, and balance-of-plant components—remain significantly higher than the highly optimistic projections sometimes cited in the literature, emphasising the need for careful engineering assessment when evaluating extreme performance claims.
Despite growing interest in hydrogen–electric propulsion for aviation, most research remains focused on conceptual designs, fixed-wing aircraft, or small UAV platforms, rather than practical retrofit applications in certified light rotorcraft. Experimental demonstrations at UAV scale, such as the hybrid-lift NederDrone, provide proof of concept for vertical take-off using hydrogen–electric propulsion [
16]. More specific work on lightweight helicopters in urban air mobility scenarios shows that hybrid configurations can outperform battery-only designs under certain mission profiles in terms of range and operational efficiency [
17]. Comprehensive reviews of hydrogen propulsion systems provide up-to-date surveys of fuel-cell technologies (PEMFC and SOFC), hydrogen storage, and associated challenges such as thermal management, power density, and integration with aircraft systems [
18]. Methodological studies demonstrate that hybrid fuel-cell and battery architectures can be sized for light aircraft to optimise both mass and thermal constraints, indicating that hybridisation is a viable path for short-range operations [
19].
Energy management strategies for fuel-cell-hybrid aircraft include rule-based, optimisation-based, and learning-based approaches [
20,
21,
22,
23]. While dynamic programming delivers optimal fuel economy, its computational cost restricts real-time use in flight-critical applications. The equivalent consumption minimisation strategy (ECMS) and Pontryagin’s minimum principle achieve near-optimal performance with real-time computational feasibility. Recent studies indicate that reinforcement learning can improve both fuel efficiency and component longevity, with safe-RL formulations achieving 15.6% hydrogen savings compared to rule-based strategies while ensuring zero constraint violations on component performance [
20,
24]. Reviews of hybrid-electric propulsion systems consistently emphasise that EMS is central to achieving high efficiency, analysing the trade-offs between power density, thermal limits, and varying load demands encountered by aviation fuel-cell systems. Concurrently, research in rotorcraft and hybrid-electric aircraft design has produced multi-objective optimisation frameworks that address mass, structural, and mission-performance constraints from the outset [
25]. Advances in hybrid-electric aircraft power management show that optimisation-based EMS reduces overall energy consumption while respecting thermal boundaries and preserving component life [
26]. However, most EMS research focuses on ground vehicles or fixed-mission UAVs; the rapid transient loads typical of helicopter operations—particularly vertical take-off, hover-to-climb transitions, and autorotation-capable descent—remain systematically underexplored [
27,
28]. This study uses a rule-based energy management strategy for screening-level feasibility assessment, with optimisation-based refinement and real-time validation deferred to the preliminary design and flight-testing phases.
Hybrid-electric aircraft sizing methodologies have progressed from energy-based parametric approaches to mission-centric multidisciplinary optimisation frameworks [
29,
30,
31]. Comparative benchmarking of independent sizing tools shows that maximum take-off mass predictions agree within 2–4% for validated baseline aircraft, confirming the robustness of current methods when applied to known reference platforms [
32]. However, simplified single-value assumptions for battery-specific energy and power can cause significant errors in propulsion system mass estimation [
33]. For regional aircraft, hybrid propulsion benefits are maximised at design ranges of 400–1000 nm with battery-specific energy above 750 Wh/kg; beyond these thresholds, propulsion system mass penalties offset endurance benefits [
29,
34]. For light rotorcraft retrofit applications, where airframe geometry and centre-of-gravity envelopes are constrained by baseline type-certification, Raymer’s empirical statistical method provides a conservative quick-estimate baseline suitable for screening-level design. Contemporary frameworks incorporating multidisciplinary optimisation [
35,
36] offer opportunities for refined component-level mass allocation in future preliminary design iterations, particularly for powertrain integration trades between the motor, fuel-cell, battery, and thermal management subsystems. For sizing multi-mode eVTOL aircraft with hybrid hydrogen–electric systems, recent methodologies show that fuel-cell and battery subsystems can meet both VTOL and cruise power requirements, providing engineers with a robust foundation for developing hybrid strategies in vertical flight.
Unlike clean-sheet eVTOL designs, which benefit from performance-based special conditions and parallel design and certification engagement [
37], retrofit applications inherit the airworthiness basis of existing type certificates. For the Robinson R22 and Guimbal Cabri G2, this means compliance with FAA Part 27 requirements for Normal Category Rotorcraft—regulations originally developed for conventional piston engines and mechanically actuated flight control systems, not for fuel-cell propulsion or hydrogen storage. The FAA has acknowledged that existing regulations did not anticipate fuel cells or the use of hydrogen as an aircraft fuel. The FAA Hydrogen-Fuelled Aircraft Safety and Certification Roadmap targets 2028 to 2032 for regulatory readiness of fuel cell systems, indicating that certified retrofit operations are unlikely before the mid-2030s [
38]. Specific retrofit challenges include demonstrating preservation of autorotation capability after engine relocation, establishing the crashworthiness of hydrogen tanks under Part 27 impact requirements, and defining electrical-system redundancy standards for electrically managed propulsion architectures not addressed in legacy regulations [
39,
40]. Proposed certification considerations for hydrogen-fuelled aircraft configurations emphasise fundamental safety principles such as oxygen exclusion from fuel systems and prevention of hydrogen accumulation in adjacent compartments [
41]. Consequently, this study is explicitly positioned as a screening-level feasibility assessment rather than a certification-ready design. Progression towards type certification would require sustained engagement with airworthiness authorities and extensive flight-test validation, which are deferred to future phases of the project.
Therefore, we introduce “The Hawk”, a hydrogen–electric VTOL light rotorcraft demonstrator and evaluate it using a holistic framework that integrates powertrain sizing and mission energy modelling. The contribution lies in demonstrating a practical retrofit pathway with quantified potential for range extension and propulsion mass reduction, rather than an idealised clean-sheet eVTOL concept.
2. Materials and Methods
2.1. Mission Requirements and Design Drivers
The reference mission in this study follows the Vertical Flight Society conceptual design guidelines for light VTOL aircraft [
42]. The mission consists of a 100 km sortie with vertical take-off and landing, including hover, climb, cruise, descent, and landing hover segments. Cruise speed is set at 41.4 m/s (149 km/h), representative of certified two-seat light helicopters, while climb and descent are performed at a vertical rate of 7.6 m/s to a cruise altitude of 300 m. The aircraft must carry one pilot and one passenger, with a combined payload mass of 185 kg. Zero in-flight emissions are a primary design constraint, motivating the selection of a hydrogen–electric propulsion system. These requirements define the boundary conditions for mass sizing, power demand, and onboard energy storage.
2.2. Target Aircraft Specifications and Design Constraints
The Hawk (
Figure 1) retrofit is based on the certified Robinson R22 Beta II [
43] and Guimbal Cabri G2 [
44] light helicopters. These platforms were selected for their proven aerodynamic performance, comparable MTOW (620–700 kg), and fully articulated rotor systems, which simplify the structural integration of the electric powertrain. The retrofit strategy retains the existing fuselage truss, landing gear, and rotor mast, while replacing the reciprocating engine with a hydrogen–electric powertrain housed in a modified aft bay. Design targets are as follows:
MTOW: ~625 kg (Utility Category)
Payload: 185 kg (pilot + passenger)
Endurance: >100 km range with zero emissions
The structure is based on a modified truss frame, with hydrogen sponsons mounted to the fuselage. The hybrid propulsion system uses a parallel-source architecture, with all power sources feeding a common 400–750 V DC bus. The main power source is an automotive-grade PEM fuel-cell stack (Toyota TFCM2-B) with a net rated output of 85 kW. This unit was selected to comfortably meet the continuous hover power requirement (approximately 66 kW) while maintaining an adequate power margin [
45,
46]. This parallel hybrid architecture, combining a steady-state fuel-cell with a transient battery buffer, is adapted from fuel-cell electric vehicle (FCEV) powertrain configurations that have been extensively validated in automotive applications [
47]. The Toyota TFCM2-B stack, originally designed for light-duty FCEVs such as the Toyota Mirai, provides a mature, automotive-grade power source with proven reliability and thermal management characteristics. This is hybridised with a 30 kg lithium-ion battery buffer (4.5 kWh, ~120 kW peak), specifically sized to handle transient loads during take-off and to enable regenerative braking.
Propulsion is provided by a high-torque electric motor (EMRAX 268, EMRAX d.o.o., Kamnik, Slovenia) coupled to the main rotor via a 1:8 reduction gearbox. The motor delivers 100 kW continuous power and up to a 230 kW peak, ensuring performance parity with the baseline Lycoming O-360 internal combustion engine. Hydrogen is stored in two 700 bar Type-IV COPV tanks with a total usable capacity of 5.0 kg. Key powertrain specifications are summarised in
Table 1.
2.3. Airframe
The airframe is a lightweight truss structure inspired by endurance off-road racing vehicles (e.g., trophy trucks), constructed primarily from Aluminium 6061 T6 tubing (
Figure 2). The design provides the primary load path—a welded aluminium tube frame distributing rotor mast loads to the landing gear and tail boom—fuel tank sponsons (left and right fuselage-mounted housings for the Type IV hydrogen tanks), cabin structure (aluminium frame with composite panels and transparent windows; floor footprint ≥ 1.25 m × 1.5 m), landing gear (skid-type or wheeled configuration, typically lightweight tubular steel or aluminium), and tail boom (truss or semi-monocoque configuration carrying the tail rotor assembly).
2.4. Mass Estimation Methodology
The overall design mass properties were derived using Raymer’s conceptual sizing sequence for light rotorcraft [
48]. In this framework, the empty weight includes the airframe structure, rotor system, landing gear, avionics, and all installed propulsion hardware—the Proton Exchange Membrane Fuel Cell (PEMFC) stack, balance-of-plant (BoP), hydrogen tanks, and the 30 kg lithium-ion buffer battery—while the payload and mission hydrogen are treated as expendables. The empty weight,
We = 434 kg, reflects a detailed component build-up derived from the retrofit baseline and component specifications. The airframe mass is based on the Cabri G2 truss structure, while propulsion components use manufacturer data (Toyota, EMRAX). The PEMFC stack mass (109 kg) is sourced from Toyota TFCM2-B specifications, which represent current automotive-grade fuel-cell technology used in commercial FCEVs [
47]. This mass includes the stack itself, balance-of-plant components (air compressor, humidifier, coolant system), and auxiliary electronics, consistent with automotive FCEV powertrain integration standards. A detailed mass breakdown, including a structural contingency margin, is presented in
Table 2. The derived empty weight of 434 kg represents approximately 69.6% of the total take-off mass, which is within the typical range reported for early-stage hybrid and electric rotorcraft concepts.
To evaluate the plausibility of the estimated empty weight, a simple empirical validation was performed using a statistical comparison with two certified light rotorcraft of comparable size and mission class, the Robinson R22 Beta II and the Guimbal Cabri G2. Based on the known gross take-off weights and empty weights of these reference aircraft, an empirical power–law correlation of the form

We = k · W0^b

was derived by fitting a log–log regression to the reference data. This approach follows standard preliminary aircraft design practice, where power–law relationships are commonly used to capture scaling trends between gross and empty weight in the absence of detailed structural design. The fitted relationship yields coefficients of approximately k ≈ 25.39 and b ≈ 0.428, corresponding to a predicted empty weight of around 400 kg for an aircraft with a gross weight similar to the proposed configuration. The concept design estimate of 434 kg is, therefore, about 8.5% higher than this empirical prediction. An empty weight of 434 kg results in a gross take-off weight of 624 kg, bringing the overall mass budget to within 0.5% of the target utility-class limit. This result indicates that the retrofit architecture remains structurally feasible within the baseline MTOW envelope.
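As a screening-level sanity check, the fitted correlation can be evaluated numerically. The short Python sketch below simply applies the coefficients quoted above to the 624 kg gross weight; it is illustrative only and introduces no data beyond the text:

```python
# Empirical empty-weight correlation We = k * W0**b, with the coefficients
# from the log-log regression reported in the text
k, b = 25.39, 0.428

W0 = 624.0                     # gross take-off weight [kg]
We_pred = k * W0 ** b          # empirical prediction [kg], ~399 kg
We_design = 434.0              # component build-up estimate [kg]

offset = (We_design - We_pred) / We_pred
print(f"Predicted empty weight: {We_pred:.0f} kg")
print(f"Design estimate is {offset * 100:.1f}% above the correlation")
```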
2.5. Rotor System
The Hawk features a fully articulated main rotor system with the following specifications: rotor radius R = 3.39 m, disc area A = 36.1 m², and three blades, unlike the two-blade configurations of the R22 and Cabri G2 baseline aircraft. The three-blade design was chosen to maintain similar disc loading while reducing aerodynamic loading per blade and smoothing torque transients, which is especially advantageous for an electric motor drive system. The blade chord is approximately c = 0.18 m, and the design rotational speed is Ω ≈ 500 rpm (≈52.36 rad/s), selected to minimise structural stresses, provide sufficient pitch control authority, and reduce torque ripple at the motor. The blades use a NACA 63015A airfoil, delivering a lift coefficient of approximately CL ≈ 1.46 at an angle of attack of about 17°, with a corresponding drag coefficient CD = 0.023. The blades are made from carbon-fibre-reinforced materials, with a mass of approximately 3.62 kg per blade (Figure 3).
The next step was to verify that the rotor sizing and hover power estimates are physically consistent with performance data for light helicopters. At take-off, the aircraft weight is W = 6115.20 N, resulting in a disc loading of W/A = 169.4 N/m² (approximately 17.30 kg/m²). This value aligns with empirical data for two-seat light rotorcraft such as the Cabri G2 and the R22 Beta II. The estimated hover power requirement was 66.40 kW, consisting of approximately 51 kW of induced power and about 15 kW due to blade profile drag and rotor system losses. This value was calculated using momentum theory applied to the rotor disc and the standard induced power formula

P_i = T · v_i,

where the induced velocity at the rotor disc equals

v_i = sqrt( T / (2 ρ A) ),

with the sea-level air density ρ = 1.225 kg/m³.
This result aligns with empirical correlations for light rotorcraft operating at a disc loading of 17.3 kg/m2, confirming aerodynamic consistency with Robinson R22 and Cabri G2 reference data.
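The hover figures above can be reproduced from momentum theory in a few lines. The sketch below assumes sea-level ISA density and treats hover thrust as equal to weight, as in the text:

```python
import math

# Momentum-theory hover check (screening level, sea-level ISA assumed)
rho = 1.225        # air density [kg/m^3]
A = 36.1           # rotor disc area [m^2]
W = 6115.20        # take-off weight = rotor thrust in hover [N]

v_i = math.sqrt(W / (2 * rho * A))   # induced velocity at the disc [m/s]
P_i = W * v_i                         # ideal induced power [W]

print(f"Disc loading: {W / A:.1f} N/m^2")     # ~169.4 N/m^2
print(f"Induced velocity: {v_i:.2f} m/s")
print(f"Induced power: {P_i / 1e3:.1f} kW")   # ~51 kW, as quoted
```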
To ensure high fidelity in the energy-consumption analysis, the cruise-power requirement (Pcruise) was calculated using a physics-based component-buildup method derived from Leishman’s momentum theory [49]. The total shaft power required for steady-level flight is defined as the sum of induced (Pi), profile (P0), and parasite (Pp) power, adjusted for mechanical transmission efficiency (ηt):

Pcruise = (Pi + P0 + Pp) / ηt,

where the individual components are expressed as

Pi = κ W² / (2 ρ A V),  P0 = (σ Cd0 / 8) ρ A (ΩR)³ (1 + 4.65 μ²),  Pp = ½ ρ f V³,

with κ the induced power factor, σ the rotor solidity, Cd0 the mean blade profile drag coefficient, μ = V/(ΩR) the advance ratio, and f the equivalent flat plate area. For the specific retrofit airframe (Cabri G2/R22 class), the equivalent flat plate area (f) was estimated at 0.40 m², resulting in a parasite drag component of approximately 17.5 kW at cruise speed. However, the dominant energy consumer remains the profile power (P0), representing the viscous drag of the main and tail rotor blades as they rotate through the air. This component is relatively constant across the flight envelope and is largely independent of forward velocity, accounting for approximately 35–40 kW of the total power budget. Applying the component-buildup method to the nominal cruise condition yields a shaft power of approximately 65.7 kW.
This calculated value provides strong physical validation for the present model, as it closely correlates with the published performance data of the baseline internal combustion engine powerplant (Lycoming O-360, 92 kW maximum continuous output). In practice, the Robinson R22 Beta II and Cabri G2 both maintain cruise at approximately 70–75% of maximum continuous power, corresponding to 61–65 kW [
43,
44]. This excellent fit between the theoretical calculation and empirical baseline data confirms that the aerodynamic model accurately captures the dominant energy consumers in rotary-wing flight, ensuring that the subsequent hydrogen consumption projections are grounded in realistic rotor physics rather than considering fuselage drag alone. Furthermore, the adjusted cruise power of 62.4 kW (the shaft power is 65.68 kW, while the effective power requirement is 62.4 kW after 95% transmission efficiency, consistent with baseline helicopter data) is employed throughout the mission analysis.
2.6. Mission Profile Definition
The outbound flight (Segment 1) and the return flight (Segment 2) constitute the reference mission profile for the Hawk concept. Each segment includes the operational phases of hover, climb, cruise, descent, and hover landing (
Figure 4). According to the Vertical Flight Society’s (VFS) conceptual mission guideline for hydrogen–electric VTOL aircraft, the complete cycle corresponds to a 100-kilometre sortie.
Following a vertical take-off and hover-out-of-ground effect (HOGE) for system checks and stabilisation, the flight plan specifies a steady ascent to an altitude of 300 metres at a vertical velocity of approximately 7.6 metres per second. During the outbound leg, the Hawk maintains level flight by accelerating to a cruise speed of 41.4 m/s during the forward-flight transition. Upon reaching the target waypoint, the vehicle executes a sharp descent to 30 m altitude, transitioning into horizontal loitering at a reduced velocity (
Vloiter ≈ 27 m/s) for mission-specific tasks such as inspection or survey. The return flight mirrors the outbound trajectory, concluding with a HOGE phase of 30 s prior to landing. To evaluate the mission power estimates against empirical rotor data, the disc loading is computed as

DL = W0 · g / A,

where W0 = 624 kg (gross weight), g = 9.81 m/s², and A = 36.1 m² (disc area). This yields DL ≈ 169.5 N/m² ≈ 17.3 kg/m².
The estimated power demand and energy consumption profile for each mission phase is detailed below:
The nominal cruise power requirement of 62.4 kW (calculated in
Section 2.5) applies to the main cruise segments at 41.4 m/s. During mission-specific loitering operations (e.g., inspection or survey tasks), the aircraft may operate at a reduced speed of approximately 27 m/s, which requires only 28.4 kW. Both power demands remain well within the installed continuous power envelope of 100 kW from the EMRAX 268 electric motor, confirming that all mission phases (hover at 66.4 kW, climb at 73.0 kW, cruise at 62.4 kW, loiter at 28.4 kW, and descent at 14.2 kW) are feasible within the available propulsion system capacity.
4. DESCENT PHASE:
- Required power: 14.2 kW (motor provides regenerative braking)
- PEMFC output: reduced to 10–12 kW
- BoP parasitic load: ~11.5 kW (air compressor at minimum flow, cooling fans active)
- Battery contribution: ~13–15 kW (compensates for PEMFC output below total electrical demand)
- Net result: battery State-of-Charge increases due to regenerative energy recovery from the rotor during controlled descent, offsetting BoP consumption and enabling hydrogen conservation
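Combining the phase powers listed above with nominal segment durations gives a quick energy feasibility check. The durations in this sketch are assumptions inferred from the mission profile (four 30 s HOGE phases, two 300 m climbs and descents at 7.6 m/s, 100 km of cruise at 41.4 m/s, and an assumed 10 min loiter); they are not stated as a table in the text:

```python
# Screening tally of mission energy; durations [s] are ASSUMED from the profile
segments = {                      # phase: (power [kW], duration [s])
    "hover":   (66.4, 4 * 30),            # take-off/landing HOGE, both legs
    "climb":   (73.0, 2 * 300 / 7.6),     # two climbs to 300 m
    "cruise":  (62.4, 100_000 / 41.4),    # 100 km sortie at cruise speed
    "loiter":  (28.4, 600),               # assumed 10 min task segment
    "descent": (14.2, 2 * 300 / 7.6),     # two descents
}
E_mission_mj = sum(p * t for p, t in segments.values()) / 1e3
print(f"Mission energy ≈ {E_mission_mj:.0f} MJ")  # well under the ~330 MJ onboard budget
```

Under these assumptions the mission consumes roughly 55% of the onboard energy budget, leaving a substantial reserve for headwinds, diversions, and BoP loads.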
The State-of-Charge constraints:
Minimum SoC: 20% (preserves battery cycle life)
Maximum SoC: 80% (prevents overcharge degradation)
Nominal operating range: 20–80% for maximum longevity
Battery Management System (BMS) automatically adjusts PEMFC load demand to maintain SoC within operating envelope
The controlled descent is managed through a multi-stage control sequence, realised by the rule-based power-sharing logic described below.
It is important to note that autorotation capability for the retrofit configuration remains unverified and is outside the scope of this screening-level design study. The shifted centre of gravity and altered rotor inertia distribution resulting from engine and battery relocation may degrade the baseline airframe’s autorotational descent characteristics. Experimental validation of post-retrofit rotor dynamics and autorotational descent characteristics would be essential before any certification attempt.
Power-sharing between the PEMFC and battery is managed by a rule-based energy management system (EMS):
The PEMFC ramps to the requested power with a time constant τ ≈ 10 s, limited by air compressor dynamics and thermal transients.
The battery provides an instantaneous power surge (0–3 s) to bridge the gap between demand and available PEMFC output.
Battery State-of-Charge (SoC) is maintained within a 20–80% band by automatically adjusting the PEMFC setpoint.
During descent, PEMFC output is reduced to a minimum (10–12 kW) to minimise hydrogen consumption.
System-level dynamic-response time: less than 5 s to reach 95% of steady-state power demand. Control law details and optimisation-based EMS refinement are deferred to the preliminary design phase.
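The rule set above can be sketched as a simple discrete-time controller. This is an illustrative reconstruction, not the flight control law; the 62 kW fuel-cell setpoint is an assumed value consistent with the cruise demand, while the 10 s time constant, 85 kW stack limit, 120 kW battery peak, and 20–80% SoC band are taken from the text:

```python
def ems_step(p_demand_kw, p_fc_kw, soc, dt=1.0, fc_setpoint_kw=62.0,
             tau=10.0, batt_kwh=4.5, soc_min=0.20, soc_max=0.80,
             batt_peak_kw=120.0):
    """One EMS time step: the fuel cell lags toward its setpoint (tau ~ 10 s,
    air-compressor dynamics); the battery bridges the residual demand."""
    p_fc_kw += (min(fc_setpoint_kw, 85.0) - p_fc_kw) * dt / tau  # 85 kW stack limit
    p_batt_kw = max(-batt_peak_kw, min(batt_peak_kw, p_demand_kw - p_fc_kw))
    soc -= p_batt_kw * dt / 3600.0 / batt_kwh                    # discharge lowers SoC
    return p_fc_kw, p_batt_kw, max(soc_min, min(soc_max, soc))

# Example: take-off transient, 66.4 kW hover demand from a 10 kW idling stack
p_fc, soc = 10.0, 0.60
for t in range(60):
    p_fc, p_batt, soc = ems_step(66.4, p_fc, soc)
print(f"after 60 s: fuel cell {p_fc:.1f} kW, battery {p_batt:.1f} kW, SoC {soc:.2f}")
```

In this example the battery supplies the initial surge while the stack ramps, and the residual battery power decays to a few kilowatts within roughly five time constants, consistent with the transient behaviour described above.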
2.6.1. Power-Sharing Dynamic Analysis
To validate the overall energy balance and transient power behaviour of the proposed hybrid propulsion system, a high-fidelity MATLAB and Simulink model (MATLAB and Simulink Student Suite R2023) was developed to simulate the complete 100 km mission profile with a temporal resolution of one second. This approach enables explicit capture of short-term transients not visible in steady-state or quasi-static analyses, while maintaining consistency with the previously defined mission energy requirements. The model represents all major subsystems of the hybrid architecture. The PEM fuel-cell stack (
Figure 5) is modelled with voltage-dependent performance characteristics, including air-compressor dynamics represented by a 10 s ramp time constant. The battery subsystem is implemented as a lithium-ion model constrained to a 20–80% State-of-Charge operating window, capable of delivering up to 120 kW of peak discharge power and accepting regenerative charging during appropriate mission segments. The electric motor and inverter are modelled using the EMRAX 268 efficiency map, covering efficiencies from 92% to 98% and allowing fully bidirectional operation. System-level control is achieved through a rule-based energy management strategy that maintains the PEM fuel cell near a fixed operating setpoint while the battery buffers power transients during take-off, climb, and other dynamic phases of flight.
2.6.2. Battery Pack Specifications and Pulse Discharge Performance
The 30 kg lithium-ion battery pack (
Table 3) delivers 4.5 kWh usable energy (150 Wh/kg pack-level specific energy) with a continuous discharge capability of 60–80 kW and peak pulse discharge up to 120 kW for transient load support (≤30 s). This corresponds to continuous C-rates of 13–18C and peak pulse C-rates of approximately 27C.
Under high-rate discharge conditions typical of hover and climb (4–5C), the pack experiences voltage sag of approximately 10–15% from nominal voltage due to internal resistance and polarisation effects [
50,
51]. During peak transient pulses (27C), voltage sag can reach 20–30% [
50]. This voltage drop is managed by the DC bus voltage range (400–750 V) and power electronics inverter control.
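The sag figures above can be reproduced with a first-order internal-resistance model. The pack voltage and lumped resistance in this sketch (600 V nominal, ~0.6 Ω) are illustrative assumptions, not values from the text; they are chosen only to show that the stated sag percentages are mutually consistent:

```python
# First-order IR voltage-sag model for the 4.5 kWh pack; 600 V nominal and
# 0.6 ohm lumped resistance are ASSUMED illustrative values
V_nom = 600.0
cap_ah = 4500.0 / V_nom            # 4.5 kWh -> 7.5 Ah
R_pack = 0.6                       # lumped internal resistance + polarisation [ohm]

sag = {}
for label, p_kw in [("continuous", 70.0), ("pulse", 120.0)]:
    i = p_kw * 1e3 / V_nom         # first-order current estimate [A]
    sag[label] = i * R_pack / V_nom
    print(f"{label}: {i / cap_ah:.1f}C, I ≈ {i:.0f} A, sag ≈ {sag[label] * 100:.0f}%")
```

With these assumed pack parameters, a ~70 kW continuous draw lands in the 13–18C band with roughly 12% sag, and the 120 kW pulse reaches about 27C with 20% sag, matching the ranges quoted above.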
Temperature rise during sustained high-power discharge (4–5C for 5–10 min) is estimated at 15–25 °C above ambient, based on experimental data from high-power lithium-ion batteries in eVTOL applications [
50,
52,
53]. Short-duration pulse discharge (27C for 30 s) induces a localised temperature rise of 2–5 °C per pulse event. To maintain battery cell temperatures below 60 °C and prevent thermal degradation, the battery pack requires integrated thermal management, which is coordinated with the fuel-cell cooling system [
52,
54].
The State-of-Charge (SoC) operating window is constrained to 20–80% by the Battery Management System (BMS) to maximise cycle life (≥2000 cycles at 80% depth-of-discharge). The most critical mission phase is landing, where high power demand coincides with low SoC (30–40%), resulting in increased voltage sag and elevated internal resistance. This operating constraint is reflected in the hybrid energy management strategy, which prioritises battery reserve for the landing safety margin.
2.7. Hydrogen Storage and Energy Formulation
Hydrogen storage comprises two Type IV composite overwrapped pressure vessels (COPV) designed for 700 bar internal pressure. Screening simulations indicate feasibility but do not provide experimental or certification-grade thermal validation. It is necessary to assess whether the selected hydrogen–electric powerplant can supply sufficient energy and power to meet the rotor power requirements throughout the mission profile. The empty and gross weight estimates directly determine the rotor thrust and shaft power demands in hover, climb, and cruise. Consequently, the sizing of the fuel-cell system and onboard hydrogen mass must be evaluated in terms of both instantaneous power capability and total usable energy. The following section quantifies the available electrical energy from the PEM fuel-cell system and relates it to the mission-level power requirements using Raymer’s electric aircraft energy formulation:
The lower heating value (
) of hydrogen is 33.33 kWh/kg, and the overall system efficiency, including stack, inverter, and motor, is assumed to be
ηsys = 0.55. The PEMFC stack efficiency is assumed to be 55%, resulting in 18.33 kWh/kg of useful electrical energy. Additional downstream losses in the motor (92%), inverter (97%), and gearbox (97%) reduce overall mechanical efficiency to approximately 50–51%, resulting in about 66 MJ/kg of shaft-available energy. With 5.0 kg of onboard hydrogen stored in two 700 bar Type-IV composite tanks, the total usable electrical energy available for the mission is, therefore,
Etotal ≈ 330 MJ, representing the maximum shaft-level energy deliverable by the propulsion system. DOE data indicate that a 5–6 kg/700 bar tank system (approximately a single 2.5 kg usable H
2) has a mass of 20 kg. Therefore, a 20 kg tank mass is expected for 5 kg of H
The tanks are connected by high-pressure tubing to a multi-stage regulator, which reduces the pressure to the stack operating pressure. Intake air enters through a U-shaped air intake system: the built-in centrifugal compressor draws in ambient air and delivers it to the cathode via an integrated cooler and humidifier, pressurising it to slightly above stack pressure to achieve the required oxygen partial pressure. Because only about 21% of air is O2, and to aid cooling, the cathode is operated well above stoichiometric air flow; a typical cell runs at 2–2.5 times the stoichiometric oxygen requirement, corresponding to a total air flow of roughly 10–15 kg/min at 85 kW. The compressed air leaves the compressor at high temperature and must be cooled by a low-temperature cooler before entering the cathode; it then passes through the built-in humidifier and a filter that removes liquid droplets and particles. In the cathode, nitrogen and other inert species pass through without reacting but dilute the oxygen. For each mole of H2, 0.5 mol of O2 is consumed, so the O2 and H2 flows are coupled by reaction stoichiometry. The pressure drop across the stack flow channels is maintained at 20–50 kPa [55] to limit compressor work. Recovering excess power at higher compressor speeds was considered, but in this small system the built-in compressor is treated purely as a parasitic load (8–10 kW).
2.8. Supplementary Thermal Balance and Cooling System Sizing
Although a detailed thermal design requires CFD analysis and experimental validation, a first-order heat balance and cooling system sizing are provided here to demonstrate feasibility and transparency at a screening level.
The selected Toyota TFCM2-B PEM fuel-cell stack delivers a net electrical output of 85 kW. As stated previously, the stack efficiency is assumed to be approximately 55%. The corresponding chemical power input is therefore Pchem = Pelec/ηstack = 85 kW/0.55 ≈ 154.5 kW.
The difference between chemical input power and net electrical output represents waste heat that must be rejected by the cooling system: Qwaste = Pchem − Pelec ≈ 154.5 kW − 85 kW ≈ 69.5 kW.
This value represents a worst-case thermal condition corresponding to sustained high-power operation in hover and climb. Parasitic loads associated with the fuel-cell balance-of-plant, particularly the air compressor (8–10 kW as noted in
Section 2.7), are included implicitly within this waste heat envelope. Radiator sizing is estimated using the standard heat transfer relationship Qwaste = U · A · ΔT, where U is the overall heat transfer coefficient, A is the effective radiator heat-exchange area, and ΔT is the characteristic temperature difference between the coolant and ambient air. For liquid-cooled PEM fuel-cell systems with forced-air cooling, conservative screening-level values based on automotive heat-exchanger practice are U = 80–120 W/m²K and ΔT = 30–40 K. The temperature difference is conservatively selected to account for worst-case hover conditions at elevated ambient temperature (e.g., ISA + 20 °C), where the coolant exit temperature is approximately 80 °C and the effective ambient temperature is 35–40 °C. Applying these values yields a required effective radiator area of A = Qwaste/(U · ΔT) ≈ 14.5–29 m².
This area does not correspond to a single exposed flat panel but rather to the cumulative internal surface area of compact, multi-pass heat exchangers. For integration into a light rotorcraft airframe, the radiator would most likely be realised as multiple compact liquid-to-air heat exchanger modules distributed within the fuselage sponsons or mounted beneath the cabin floor, similar to automotive PEMFC cooling architectures.
Because hover represents the most thermally demanding flight phase due to the absence of ram air, forced convection is required. The electrical power demand of the cooling fans is estimated as a fraction of the fuel-cell net output. At the screening level, fan power consumption is assumed to be approximately 3–6% of the fuel-cell net power, yielding 2.5–5.0 kW. This estimate is consistent with compact forced-air cooling systems for automotive PEMFC applications, though it represents an optimistic lower bound. Heavy-duty fuel-cell systems may require up to 8–10% of net output for cooling fans under worst-case thermal conditions. During cruise flight at 41.4 m/s, dynamic pressure provides significant ram-air cooling, which can substantially reduce or eliminate the need for fan operation. However, this benefit is conservatively excluded from the present screening-level analysis, and fan power is assumed to be required throughout the mission envelope to ensure adequate thermal margins during all flight phases. The associated cooling-fan mass is estimated at 3–5 kg, representing compact axial or centrifugal fans with lightweight composite or aluminium construction. The radiator assembly, coolant (water–glycol mixture), and circulation pumps contribute an additional 10–12 kg, yielding a total thermal management subsystem mass allocation of 15 kg (as stated in
Table 2).
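The heat balance and radiator sizing above reduce to Q = U·A·ΔT; a short reproduction under the stated screening assumptions (stack power, efficiency, U, and ΔT as quoted in the text):

```python
# First-order PEMFC waste-heat and radiator-area estimate (screening level).
P_ELEC = 85e3       # net stack electrical output, W
ETA_STACK = 0.55    # assumed stack efficiency

q_waste = P_ELEC / ETA_STACK - P_ELEC   # heat to be rejected, W (~69.5 kW)

def radiator_area(q_w, u_w_m2k, dt_k):
    """Effective heat-exchange area, m^2, from Q = U * A * dT."""
    return q_w / (u_w_m2k * dt_k)

a_best = radiator_area(q_waste, u_w_m2k=120.0, dt_k=40.0)
a_worst = radiator_area(q_waste, u_w_m2k=80.0, dt_k=30.0)
print(f"waste heat {q_waste/1e3:.1f} kW, radiator area {a_best:.1f}-{a_worst:.1f} m^2")
# → waste heat 69.5 kW, radiator area 14.5-29.0 m^2
```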
A summary of the cooling system power and mass allocation is provided in
Table 4.
2.9. PEMFC System Model Architecture and Validation
The PEMFC system model (Figure 6) integrates five primary subsystems: hydrogen storage (Toyoda Gosei Co., Ltd., Kiyosu, Japan; 700 bar Type-IV COPV, 5 kg usable), air compression with integrated cooling (75% isentropic efficiency, 2.5× stoichiometry), the PEMFC stack (Toyota Motor Europe, Brussels, Belgium; Toyota TFCM2-B, 80 kW nominal design), thermal management (15–29 m² multi-pass radiators (see Equation (14)), 2.5–5 kW forced-air cooling), and a DC electrical bus (250–400 V interface). Complete model input parameters are documented in
Table 5.
Power-sharing between the PEMFC and lithium-ion battery is governed by a rule-based energy management system comprising six control strategies:
The PEMFC stack ramps up to the requested power with a 10 s time constant, limited by air-compressor dynamics and thermal transients (Section 2.6.1). During take-off and climb, when mission power demand is high, the stack cannot deliver full output immediately; instead, it follows an exponential rise, PFC(t) = Pcmd · (1 − e^(−t/τ)) with τ = 10 s. The 10 s time constant reflects the physical constraint that the air compressor requires time to pressurise the cathode from 1.0 atm (sea level) to the 2.5 atm operating pressure. During this ramp period, the fuel-cell output increases from 0 kW towards 80 kW at an initial rate of approximately 8 kW/s.
The battery delivers an instantaneous power surge during the first 0–3 s to bridge the gap between mission demand and available PEMFC output (
Section 2.6). During take-off hover, the aircraft requires 66.4 kW, but the PEMFC is still ramping up from idle. The battery immediately supplies 15–20 kW to meet the full thrust requirement, providing the transient power the fuel-cell cannot supply in the first 10 s. This is the primary role of the 30 kg lithium-ion battery buffer: to absorb high-rate discharge pulses (up to 120 kW peak) that are incompatible with fuel-cell dynamics, while the PEMFC handles sustained steady-state loads.
Battery State-of-Charge is maintained within a 20–80% operating band by automatically adjusting the PEMFC setpoint (
Section 2.6). The Battery Management System (BMS) enforces strict SoC limits: a minimum of 20% prevents deep discharge and accelerated ageing, while a maximum of 80% avoids overcharge-related degradation. When SoC falls below 20%, the EMS commands the PEMFC to increase output by 5–10 kW to recharge the battery. Conversely, when SoC exceeds 80%, the PEMFC output is reduced by 5–10 kW to prevent overcharge. During normal cruise phases (28.4–62.4 kW power demand), the PEMFC output matches the mission demand, and the battery remains idle, maintaining SoC between 50 and 70% with minimal ripple.
During descent, PEMFC output is reduced to a minimum of 10–12 kW to minimise hydrogen consumption (
Section 2.6). The descent phase requires only 14.2 kW of rotor power, which is controlled by reducing the pitch angle from +10° to near zero. The PEMFC is deliberately throttled to 10–12 kW by lowering the hydrogen supply pressure. This strategy prevents wasteful operation at part-load efficiency. Simultaneously, the electric motor transitions from motoring to generator mode, capturing 3–5 kW of regenerative energy from the descending rotor. This regenerative power is returned to the battery, increasing the State-of-Charge by approximately 5–10% over the 2 min descent phase.
The EMS enforces strict limits on operating temperatures and emergency reserves. If the PEMFC stack temperature exceeds 85 °C, the EMS reduces stack load by 10% and increases cooling fan power by 0.5 kW to prevent thermal runaway. If the battery-cell temperature exceeds 60 °C, the battery discharge current is reduced by 20% to prevent lithium-ion degradation. If the battery SoC drops below 10%, the mission-abort protocol is activated, and motor power is reduced to safe-descent levels only. If hydrogen tank pressure falls below 50 bar, the pilot is alerted to land immediately and refuel.
During mission-specific loitering operations (inspection or survey tasks at reduced airspeed), the aircraft cruises at 27 m/s, requiring only 28.4 kW of power (
Section 2.6). In this regime, the PEMFC supplies the full 28.4 kW demand, operating at an optimal steady-state efficiency of approximately 55%. The battery remains idle, maintaining its State-of-Charge at 50–60% with no discharge. Loiter flight is the most efficient mission phase, as the PEMFC operates continuously at a fixed-power setpoint with minimal battery involvement.
2.10. Validation Against Robinson R22 and Guimbal Cabri G2
The hover power predicted for the Hawk configuration (66.4 kW, obtained from momentum theory, Equations (2) and (3)) is validated by comparison with well-documented baseline light helicopters in the open literature. This comparison provides an empirical anchor for the analytical model and ensures that the theoretical estimates remain physically realistic. For the Robinson R22 Beta II, specifications indicate a maximum continuous power of 92 kW from the Lycoming O-360 engine. Operational data show that typical hover power is approximately 65–67 kW, corresponding to about 75% of the maximum continuous rating. Similarly, the Guimbal Cabri G2, powered by a Lycoming O-320 with a maximum continuous power of 87 kW, exhibits typical hover power requirements in the range of 62–64 kW under comparable conditions. The Hawk's analytically estimated hover power of 66.4 kW falls within the empirical range of 62–67 kW defined by these reference aircraft. The resulting deviation of 2.7% demonstrates close agreement between theory and observed operational performance, well within expected measurement uncertainty and modelling simplifications. A similar validation is achieved for cruise power: the Hawk's cruise power, computed using a component build-up approach (Equations (4)–(8)), is 62.4 kW, while published cruise power values for the Robinson R22 and Cabri G2 range from 61 to 65 kW, typically about 75% of the maximum continuous rating. The deviation of 1.9% further confirms the reliability of the analytical approach. Taken together, these results show that both hover and cruise power predictions are consistent with empirical data from comparable rotorcraft. This agreement validates the use of momentum theory and component-based power modelling for the Hawk configuration and confirms that the mission energy consumption estimates presented later (
Table 3) are based on realistic rotorcraft aerodynamic physics rather than optimistic assumptions.
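As a transparency check on the momentum-theory anchor, the hover power can be re-derived from first principles. The rotor radius and figure of merit below are illustrative R22-class assumptions (the paper's exact inputs accompany Equations (2) and (3)), so the result is indicative only:

```python
import math

# Momentum-theory hover power for the 624 kg Hawk (screening check).
RHO = 1.225       # sea-level air density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2
MTOW = 624.0      # maximum take-off mass, kg
RADIUS = 3.84     # main-rotor radius, m (assumed, R22-class)
FM = 0.68         # figure of merit (assumed)

thrust = MTOW * G                                         # hover thrust, N
disc_area = math.pi * RADIUS**2                           # rotor disc area, m^2
p_ideal = thrust**1.5 / math.sqrt(2.0 * RHO * disc_area)  # ideal induced power, W
p_hover = p_ideal / FM                                    # shaft power incl. FM losses
print(f"hover shaft power ≈ {p_hover/1e3:.1f} kW")
```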
4. Discussion
The Hawk demonstrator confirms that retrofitting certified light rotorcraft with hydrogen–electric propulsion overcomes the gravimetric limitations of current battery technology. While purely electric rotorcraft are typically constrained by battery energy densities of approximately 150–240 Wh/kg, the proposed hybrid architecture leverages the high specific energy of hydrogen (120 MJ/kg) to decouple range from payload mass. The analysis, explicitly accounting for all fuel-cell balance-of-plant parasitic loads (air compressor, cooling system, hydrogen recirculation), indicates that a 39% fuel reserve remains after a 100 km sortie, demonstrating that such systems are power-limited rather than energy-limited, a fundamental shift from the battery–electric paradigm [
4,
12,
15]. This outcome establishes the Hawk as a practical retrofit pathway addressing the fundamental endurance limitations that currently constrain battery–electric VTOL aircraft, while validating the broader hydrogen economy vision for aviation decarbonisation [
56].
To validate the two-to-three-times range advantage and 28% mass savings, a battery-only baseline for the same 100 km mission is analysed. Assuming the same airframe and rotor system and using contemporary lithium-ion battery technology at 200 Wh/kg (aerospace standard), a battery-only configuration would require 50.9 kWh of electrical energy (45.8 kWh mechanical / 0.90 drivetrain efficiency), corresponding to a battery pack mass of 254.5 kg. This compares with the Hawk's total propulsion system mass of 199 kg (109 kg PEMFC + 20 kg H2 tanks + 30 kg battery + 40 kg motor/gearbox), a reduction of approximately 32% relative to the 294.5 kg battery-plus-motor equivalent. When thermal management subsystems (15 kg) are included for both configurations, the hydrogen–electric system totals 214 kg versus approximately 310 kg for the battery-only equivalent (254.5 kg battery + 40 kg motor + 15 kg thermal management), corresponding to approximately 31% system-level mass savings. The increased battery mass raises MTOW from 624 kg to approximately 741 kg, increasing disc loading by 11.5% and hover power by about 22% due to the mass-spiral effect characteristic of battery-limited rotorcraft. Based on the 50.9 kWh electrical energy requirement for the 100 km hydrogen–electric mission at 624 kg MTOW, the increased gross weight of the battery-only configuration (741 kg with a 254.5 kg pack), the associated rise in hover and cruise power demand, and the enforced 20–80% SoC window with a 30% energy reserve, only about 28–35 kWh of the installed energy can be used for propulsion. This constrained usable energy, combined with the higher average shaft power, limits the practical mission range of the battery-only configuration to approximately 45–55 km under the same mission profile and safety constraints, compared with 100 km for the Hawk, confirming a 1.8–2.2 times range advantage.
The 39% fuel reserve remaining after the 100 km mission (with explicit inclusion of BOP parasitic loads totalling 11.5–17 kW) indicates that the system is power-limited rather than energy-limited—a characteristic typical of rotorcraft operations, where transient hover and climb demands dominate the mission profile instead of sustained cruise. Battery-only systems must carry excess capacity to meet peak power demands, which proportionally increases vehicle mass, whereas the hybrid architecture decouples the steady-state energy supply (fuel cell) from transient peak power (battery buffer), enabling lighter overall propulsion systems [
6,
20,
21]. This validates the selection of a relatively small 30 kg battery buffer solely for transient loads (hover and climb), allowing the 85 kW fuel cell to be sized strictly for mean power requirements rather than peak loads. The power management challenge inherent in hybrid-electric aircraft has been extensively reviewed, with comprehensive assessments demonstrating that sophisticated control algorithms are essential for optimising efficiency across diverse mission profiles [
55]. A comprehensive review of primary cooling techniques and thermal management strategies for PEMFC systems shows that liquid cooling architectures are superior to air cooling for aerospace applications, where space constraints and weight penalties are critical [
57].
In energy management, assuming the PEMFC operates at constant power is a cautious starting point. Advancing to optimisation methods such as reinforcement learning or model predictive control could reduce hydrogen consumption by 8–15% through dynamic power-splitting. However, it remains necessary to verify how these strategies cope with the rapid, irregular power demands typical of helicopters [
26,
58]. On the hardware side, recent upgrades to key components—such as ejectors, circulation systems, and pressure regulators—have already made the system much more efficient and reliable [
59]. Retrofitting proven, certified airframes (such as the Robinson R22 and Cabri G2) was also a strategic decision, as it substantially reduces engineering risks and keeps weight lower than designing a new craft from scratch. The 434 kg empty weight (69.6% of MTOW) aligns with current trends for hybrid designs. Although this is 8.5% above initial empirical predictions, it demonstrates that the integration is tight but entirely feasible, rather than over-constrained.
The Hawk’s range advantage is consistent with the findings of An et al. [
12] and Cäsar et al. [
15], who report that hybrid fuel-cell–battery systems overcome battery energy density limitations (typically 240 Wh/kg) by utilising hydrogen’s superior gravimetric energy (120 MJ/kg). Although thermal management is critical for PEMFC operation, detailed radiator sizing involves complex aerodynamic trade-offs that are beyond the scope of this initial sizing study. The allocated 15 kg for the thermal subsystem is considered a conservative mass allowance based on automotive precedents, though future work must address the specific challenges of hover cooling, where ram air is negligible. The regenerative descent strategy (3–5 kW recovery) is rarely quantified for light rotorcraft retrofit scenarios in the open literature. While regenerative braking is established in electric vehicles, its application to VTOL descent with coordinated rotor pitch control lacks experimental validation in the literature. Hydrogen fuel-cell multi-rotor drones represent the closest precedent for small-vehicle hydrogen–electric integration, though most research has focused on multicopter configurations rather than rotorcraft with variable-pitch single-rotor configurations [
59]. Energy management strategies for hybrid aircraft have evolved from rule-based to learning-based approaches [
25,
26,
58], yet no published studies demonstrate such control laws implemented on rotorcraft with realistic transient power demands. The rule-based energy management strategy used in this study prioritises simplicity and real-time implementation for conceptual-level assessment. However, the published literature shows that optimisation-based strategies can reduce hydrogen consumption by 10–15% compared to rule-based control [
60,
61]. For the Hawk, this could lower 100 km mission consumption from 3.06 kg to approximately 2.60–2.75 kg. Classical dynamic programming is computationally prohibitive for real-time use (5–30 min offline), but Pontryagin’s minimum principle and the equivalent consumption minimisation strategy achieve 95–99% of DP optimality with less than 10 ms computational time, making them feasible for embedded processors. Implementation of optimised energy-management strategies is deferred to the preliminary design phase.
For aviation decarbonisation, the Hawk demonstrates hydrogen’s potential as a near-term solution (deployment within 5–10 years) without the need to wait for breakthrough battery technologies (400+ Wh/kg targets remain unmet). Unlike sustainable aviation fuels, hydrogen eliminates tailpipe emissions entirely, supporting ICAO’s net-zero 2050 targets [
1,
6,
7]. The retrofit strategy is particularly significant: rather than developing entirely new aircraft architectures, operators can incrementally upgrade existing certified platforms (R22, Cabri G2), reducing the certification burden and accelerating market adoption.
For the eVTOL sector, these results indicate that hybrid architectures are a serious alternative to pure battery designs, especially for missions where payload and range are more important than hover time alone. Commercial operations such as search and rescue, infrastructure inspection, and utility work stand to benefit most, as they can achieve a two- to three-fold increase in range. Ultimately, the path to making hydrogen fuel-cell aircraft commercially viable is becoming much clearer, thanks to recent breakthroughs in airframe technology and more efficient propulsion system integration [
59].
5. Conclusions
The Hawk screening-level analysis shows that hydrogen–electric hybrid architectures can overcome the gravimetric limitations inherent in battery-only VTOL rotorcraft. Within the assumptions and constraints of this conceptual design study, the proposed configuration achieves a quantified 100 km operational range with a 39% fuel reserve (explicitly accounting for all balance-of-plant parasitic loads) and a 28% propulsion system mass reduction compared to comparable battery–electric baselines. These results confirm that, at the conceptual design level, hydrogen fuel-cell propulsion offers a technically credible retrofit pathway for certified light rotorcraft, providing approximately 1.8–2.2 times greater range than current battery technology allows. Progression towards operational deployment, however, requires the resolution of critical technical and regulatory challenges. Experimental validation of thermal transients, energy-management optimisation under realistic mission profiles, post-retrofit autorotation capability, and crashworthiness compliance with Part 27 impact standards remain essential prerequisites. Existing FAA regulations do not yet accommodate fuel-cell propulsion or hydrogen storage in rotorcraft applications, and the FAA Hydrogen-Fuelled Aircraft Safety and Certification Roadmap targets 2028–2032 for regulatory readiness, indicating that certified retrofit operations are unlikely before the mid-2030s. Consequently, while this study establishes a replicable design methodology and quantified evidence supporting hydrogen’s technical viability at the screening level, the timeline and feasibility of commercial deployment depend critically on parallel advances in regulatory frameworks, component maturation, and sustained flight-test validation—factors beyond the scope of the present conceptual assessment.
This study is a screening-level analysis without experimental validation. Critical gaps include: (1) Thermal transients during take-off and climb have not been measured; peak temperatures during rapid power transitions could exceed design margins. (2) Energy management assumes idealised PEMFC ramp rates; realistic 5–10 s response times may require battery pre-staging strategies not explored here. (3) Regenerative energy recovery of 3–5 kW is estimated but excluded from nominal mission planning. Although the descent controller can, in principle, enable 3–5 kW of regenerative energy capture, this potential is conservatively omitted from the mission energy budget to prioritise descent safety and flight control authority. Motor inverter losses, controller latency, and transient current limits could further reduce actual recovery by 20–30%. (4) Autorotation capability post-retrofit remains unverified and represents a critical certification requirement. The shifted centre of gravity, altered rotor inertia distribution, and modified mass distribution resulting from powertrain relocation may degrade the baseline airframe’s autorotational descent characteristics. Experimental flight testing would be mandatory to validate minimum descent rate, flare authority, and touchdown energy absorption before any type certification attempt. (5) Certification requirements for hydrogen-powered rotorcraft do not yet exist, and the regulatory pathways remain undefined.
The mass estimation relies on empirical correlations from two reference aircraft, and scaling to much larger or smaller rotorcraft may reveal nonlinearities not captured by the power–law model. The 8.5% over-prediction margin, while conservative, is modest for preliminary design and may be inadequate for detailed stress analysis.