Actuators
  • Article
  • Open Access

29 November 2025

An Adaptive Command Scaling Method for Incremental Flight Control Allocation

Institute of Flight System Dynamics, Technical University of Munich, 85748 Garching, Germany
Author to whom correspondence should be addressed.
Actuators 2025, 14(12), 579; https://doi.org/10.3390/act14120579
This article belongs to the Section Aerospace Actuators

Abstract

Modern aircraft usually employ control allocation to distribute virtual control commands among redundant effectors. Infeasible virtual commands can occur frequently due to aggressive maneuvers and limited control authority. This paper proposes a lightweight command scaling law for incremental flight control allocation. The method scales the raw incremental virtual command by a direction-preserving gain K ∈ [0, 1]. It is updated via gradient descent on a Lyapunov function that balances allocation error against deviation from unity gain. The proposed adaptive update law ensures the convergence of K to a value that corresponds to the attainable portion of infeasible commands, independent of the specific allocator used. At the same time, feasible virtual commands are preserved. Its performance was evaluated through open-loop ray sweeps of the attainable moment set and closed-loop INDI simulations for a yaw-limited eVTOL. The results demonstrate that the adaptive scaling gain closely approximates the linear programming ground truth while offering significantly higher computational efficiency. Furthermore, it effectively mitigates cross-axis coupling, reduces peak excursions, and alleviates rotor saturation. These findings highlight the method’s effectiveness, modularity, and suitability for real-time implementation in aerospace applications.

1. Introduction

Modern aircraft frequently employ over-actuated designs, in which the number of available control effectors exceeds the number of axes to be controlled. In particular, novel configurations like electric vertical takeoff and landing (eVTOL) vehicles have many distributed electric propulsion units. This control redundancy offers benefits in flight performance and fault tolerance. It also necessitates a dedicated module, known as control allocation, to optimally distribute the desired motion commands to multiple effectors.
Control allocation serves as a modular component within the flight control architecture, receiving virtual commands (forces, moments, or accelerations) and translating them into physical commands for the redundant control effectors. This concept was developed in concert with feedback-linearization techniques like Nonlinear Dynamic Inversion (NDI) []. Within the past two decades, Incremental Nonlinear Dynamic Inversion (INDI) has gained tremendous popularity for its reduced sensitivity to model uncertainties and improved disturbance rejection []. It has been flight tested on different aircraft configurations [,,]. One advantage of INDI is that it simplifies the allocation problem to a linear one through local linearization, enabling the use of efficient linear control allocators. A detailed review of control allocators can be found in many references such as Refs. [,]. Among these linear allocators, Moore–Penrose pseudo-inverse (PI)-based methods are widely used because they provide a closed-form solution and have a low and predictable computational footprint. These advantages make them highly suitable for real-time, safety-critical aerospace applications [].
The physical capabilities of the effectors fundamentally limit the utility of any allocator. Actuators, rotors, and other control surfaces are subject to hard physical limitations on their positions and rates of movement, which define the Admissible Control Set (ACS). The mapping of this set through the control effectiveness matrix defines the Attainable Moment Set (AMS). Within an incremental control framework like INDI, these concepts correspond to an Admissible Set of Incremental Control and an Attainable Set of Incremental Moment. For clarity, this paper will use the terms ACS and AMS, while noting the distinction between absolute and incremental contexts. When a virtual command lies inside the AMS, it is feasible and can be realized to a large extent by pseudo-inverse methods such as redistributed pseudo-inverse. However, when a command lies beyond the AMS, it is infeasible, and no allocator can precisely achieve it. Such infeasible commands can arise from aggressive pilot inputs, strong external disturbances, system failures, or operation at the edges of the flight envelope []. It has been shown that when the (redistributed) pseudo-inverse allocators are used, these infeasible commands might cause significant allocation errors in magnitude and direction, leading to control saturation, degraded tracking performance, and compromised handling qualities [,].
Various methods have been proposed to address the issue of infeasible commands. Direct allocation techniques determine the intersection of the desired command vector with the boundary of the AMS, effectively scaling the command back to the boundary. While this approach preserves the intended command direction, it can be computationally intensive, as it requires either online geometric calculations or solving a linear programming problem [,,]. Another strategy involves precomputing the AMS offline and storing it in lookup tables for onboard command limiting []. However, this method demands significant memory resources and lacks robustness against unexpected changes in the AMS. Approximation of AMS has also been suggested. For example, Ref. [] approximated the AMS of a launch vehicle with elliptically constrained vectored nozzles using an internal ellipsoid. However, this approach is unsuitable for general flight control allocation scenarios with box control sets, and the internal ellipsoidal approximation might deviate significantly from the true AMS boundary. Finally, indirect methods have also been proposed, which focus on modifying the upstream reference model or controller state to address infeasible commands. A notable example is the pseudo control hedging (PCH) approach [,], which feeds back the allocation error into the reference model to effectively “slow down” the virtual command. While this method can mitigate allocation errors, it has certain limitations. Studies [,] have shown that PCH may not always protect against actuator dynamics and saturation. Moreover, it introduces complex coupling between the reference model and the error controller, which can distort the original command signal.
Therefore, there remains a need for gracefully handling infeasible commands without heavy online computation, a large database, or modifying upstream controller signals. In this work, we propose an adaptive command scaling approach that augments computationally efficient pseudo-inverse-based allocators. This method applies a scalar gain between 0 and 1 to the incremental virtual command, preserving the command’s original direction. The gain is governed by an adaptive law that ensures stability of both the allocation error and the gain itself. Intuitively, when a persistent allocation error is detected, the gain decreases, scaling down the infeasible command until it enters the feasible region. A recovery term is simultaneously included in the adaptive law to drive the gain back toward unity, preventing overly aggressive command suppression when the command is actually feasible.
The primary contribution of this paper is a novel command scaling algorithm applied to the incremental virtual command. This approach provides a stable allocation error mitigation strategy that operates within the control allocation module, preserving the integrity of upstream controller signals. The combination of the adaptive law and the recovery term provides a tradeoff between the instantaneous clipping of infeasible commands and the preservation of feasible ones. Most importantly, the algorithm is a pure algebraic process that involves no online optimization or iterative computation, making it ideal for real-time implementation.
The remainder of this paper is outlined as follows: In Section 2 we present the preliminaries of incremental control allocation and introduce the eVTOL aircraft model used as a case study. This is followed by the derivation of the adaptive scaling law and the algorithm in Section 3. A stability analysis of the INDI closed-loop under model uncertainties and allocation error appears in Section 4. Validation of the proposed method at both the control allocation level (Section 5) and within a closed-loop INDI system (Section 6) is carried out using the eVTOL example. Finally, Section 7 and Section 8 discuss the validation results, research limitations, and directions for future work.

2. Preliminaries

This section presents the foundational concepts of incremental nonlinear dynamic inversion control laws and incremental control allocation. Subsequently, an eVTOL platform is introduced as a testbed for analysis. In particular, its incremental ACS and AMS are investigated in detail.

2.1. Incremental Nonlinear Dynamic Inversion Control

Consider the control-affine system with full relative degree and input–output linearization:
y^{(\rho)} = \alpha(x) + B(x)\,u, \qquad y \in \mathbb{R}^{p},\ u \in \mathbb{R}^{m},
where x ∈ ℝ^n is the state vector, u ∈ ℝ^m is the input vector, y ∈ ℝ^p is the output vector, y^(ρ) is the “virtual output” denoted by ν, α(x) is the nonlinear term, and B is the control effectiveness. In the INDI law, we linearize Equation (1) around (x_0, u_0) and obtain
\nu = \nu_0 + B(x_0)\,\Delta u + \delta(x, \Delta t), \qquad \nu_0 := \alpha(x_0) + B(x_0)\,u_0,
where Δu = u − u_0 is the incremental input, ν_0 is the current “virtual control” (usually its estimation ν̂_0 is used), and δ(x, Δt) captures truncation and unmodeled dynamics. With a desired virtual control command ν_c from the outer loop, the desired incremental virtual control is
\Delta\nu_c := \nu_c - \hat{\nu}_0.
The control allocator receives the virtual command in Equation (3) and returns an admissible input increment. The relationship is governed by the linear equation
Δ ν c = B ^ Δ u c ,
where B ^ is the estimated control effectiveness. As illustrated in Figure 1, this input increment Δ u c is then added to the measurement or estimation of current input value u 0 to form the final input command:
u c = Δ u c + u 0 .
Figure 1. Illustration of the incremental control allocation.
The mathematical problem of determining this increment Δ u c is formulated as
\min\ J(\Delta u_c) \quad \text{subject to} \quad \Delta\nu_c = \hat{B}\,\Delta u_c \ \ \text{and} \ \ \Delta u_c \in S(\Delta u),
where J is the secondary goal, S ( Δ u ) is the set of admissible control increments. The most widely used solution is the Moore–Penrose pseudo-inverse (PI). It provides a closed-form solution that minimizes the 2-norm of Δ u c :
\Delta u_c = \hat{B}^{\dagger}\,\Delta\nu_c = \hat{B}^{T}\left(\hat{B}\hat{B}^{T}\right)^{-1}\Delta\nu_c.
In this paper, the pseudo-inverse is computed using the MATLAB 2023b pinv function [], which is based on a singular value decomposition. This implementation includes a protection mechanism for ill-conditioned matrices: singular values less than or equal to a specified tolerance are treated as zero when forming the pseudo-inverse.
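For illustration, the closed-form solution of Equation (7) can be evaluated in a few lines of numerical code. The sketch below uses NumPy's SVD-based pseudo-inverse, which behaves analogously to MATLAB's pinv; the matrix and command values are placeholders, not the aircraft data of Section 2.2.

```python
import numpy as np

def pseudo_inverse_allocation(B_hat, dnu_c, tol=1e-10):
    """Minimum 2-norm input increment solving B_hat @ du_c = dnu_c (Equation (7)).

    np.linalg.pinv is SVD-based; singular values below the (relative)
    tolerance are treated as zero, protecting against ill-conditioning."""
    return np.linalg.pinv(B_hat, rcond=tol) @ dnu_c

# Hypothetical example: 3 virtual axes, 8 effectors.
rng = np.random.default_rng(0)
B_hat = rng.normal(size=(3, 8))              # placeholder effectiveness matrix
dnu_c = np.array([0.05, -0.02, 0.01])        # commanded increment [rad/s^2]
du_c = pseudo_inverse_allocation(B_hat, dnu_c)
print(np.allclose(B_hat @ du_c, dnu_c))      # exact for a full-row-rank B_hat
```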
Since control limits are not explicitly considered in the basic PI solution, iterative schemes such as the Redistributed Pseudo-Inverse (RPI) or Cascaded Generalized Inverse (CGI) have been developed. These methods handle saturation by iteratively calculating the allocation error resulting from saturated effectors and redistributing this error among the remaining unsaturated effectors. Details of these algorithms can be found in Refs. [,].
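The redistribution idea can also be sketched compactly. The following is a simplified illustration of the RPI logic described above (clip saturated effectors, then reallocate the residual among the unsaturated ones), not the exact algorithms of the cited references; it assumes elementwise box limits on the increment.

```python
import numpy as np

def redistributed_pseudo_inverse(B_hat, dnu_c, du_min, du_max, max_iter=8):
    """Simplified redistributed pseudo-inverse (illustrative sketch)."""
    m = B_hat.shape[1]
    du = np.zeros(m)
    free = np.ones(m, dtype=bool)                    # effectors not yet saturated
    residual = np.asarray(dnu_c, dtype=float).copy()
    for _ in range(max_iter):
        if not free.any():
            break
        du_trial = du.copy()
        du_trial[free] += np.linalg.pinv(B_hat[:, free]) @ residual
        du_clipped = np.clip(du_trial, du_min, du_max)
        newly_saturated = free & (du_clipped != du_trial)
        du = du_clipped
        residual = dnu_c - B_hat @ du                # remaining allocation error
        if not newly_saturated.any() or np.linalg.norm(residual) < 1e-12:
            break
        free &= ~newly_saturated                     # redistribute among the rest
    return du
```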
A distinct problem in incremental allocation is path dependency. Since the absolute effector position at any time is the sum of all previous increments, the control configuration can drift over time to suboptimal positions (e.g., positions that induce high drag). This issue can be managed by utilizing the null space of the control effectiveness matrix B̂. An increment component from the null space, Δu_null, can be added to the solution since B̂(Δu_c + Δu_null) = B̂Δu_c. This allows the allocator to restore the absolute effector positions toward preferred values without altering the achieved virtual command increment. This paper adopts the null space update strategy given in Ref. [] to reduce the following secondary objective
J = \left\| \Delta u_c - \Delta u_{\mathrm{des}} \right\|^{2}.
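A possible NumPy sketch of this null-space correction is given below: the deviation from a preferred increment Δu_des is projected onto the null space of B̂, so the achieved virtual increment is unchanged. This is a minimal illustration of the idea, not the exact strategy of the cited reference, and it ignores the box limits, which would have to be re-checked after the update.

```python
import numpy as np

def nullspace_update(B_hat, du_c, du_des):
    """Steer du_c toward du_des inside the null space of B_hat, so that
    B_hat @ (du_c + du_null) == B_hat @ du_c (secondary objective of Eq. (8))."""
    n = B_hat.shape[1]
    P_null = np.eye(n) - np.linalg.pinv(B_hat) @ B_hat   # projector onto null(B_hat)
    du_null = P_null @ (du_des - du_c)
    return du_c + du_null
```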

2.2. EVTOL Example

This paper focuses on an exemplary lift-plus-cruise eVTOL aircraft developed by NASA [], as shown in Figure 2. Specifically, we consider the hover condition where the pusher propeller is idle and aerodynamic surfaces are ineffective. Attitude control is provided solely by eight lift rotors. Based on the standard thrust/torque-speed quadratic model, we define the control vector as the squared rotor speeds normalized by their maximum value:
u = \left[\, \frac{\omega_1^{2}}{\omega_{\max}^{2}},\ \frac{\omega_2^{2}}{\omega_{\max}^{2}},\ \ldots,\ \frac{\omega_8^{2}}{\omega_{\max}^{2}} \,\right],
where ω i is the i-th rotor speed, and ω max is the maximum rotor speed. The properties of rotor dynamics are taken from the NASA eVTOL model [] and listed in Table 1.
Figure 2. NASA lift plus cruise electric vertical takeoff and landing aircraft.
Table 1. Aircraft and Rotor properties.
The control effectiveness from u to body-frame angular acceleration is
B = \frac{\partial \left[\dot{p},\ \dot{q},\ \dot{r}\right]}{\partial u},
and this effectiveness is assumed constant in hover flight:
B = \begin{bmatrix} 2.4941 & 1.0651 & 0.8678 & 2.4782 & 2.5473 & 1.0729 & 1.2410 & 2.5795 \\ 0.3967 & 0.4043 & 0.4088 & 0.3992 & 0.3924 & 0.3505 & 0.3335 & 0.3979 \\ 0.0715 & 0.1271 & 0.1291 & 0.0735 & 0.0444 & 0.1456 & 0.1395 & 0.0561 \end{bmatrix}.
The incremental virtual command is Δ ν = [ Δ p ˙ , Δ q ˙ , Δ r ˙ ] .

2.3. Incremental Control Set and Moment Set

In the incremental control framework, control allocation operates incrementally. This subsection discusses the admissible control set (ACS) and attainable moment set (AMS) of the eVTOL incremental control allocation.
Consider a first-order rotor dynamics model, which is a widely-used approximation in eVTOL rotor modeling [,]:
\dot{\omega} = \frac{1}{T_\omega}\left(\omega_c - \omega\right),
where T ω is the time constant. The absolute and rate constraints on ω are
\omega_{\min} \le \omega \le \omega_{\max}, \qquad -\dot{\omega}_{\max} \le \dot{\omega} \le \dot{\omega}_{\max}.
For a step change in the rotor speed command ω_c = ω + Δω_c, the peak physical rate occurs at t = 0⁺ and equals ω̇ = Δω_c / T_ω. Hence, the rate-safe command increment is
-\dot{\omega}_{\max}\, T_\omega \le \Delta\omega_c \le \dot{\omega}_{\max}\, T_\omega.
Because the allocator works with the squared rotor speed, u = ω²/ω_max², we need bounds on the incremental input command Δu_c := u_c − u. Considering ω_c = ω + Δω_c and assuming Δω_c is small, we can derive the following algebraic increment:
\Delta u_c = \frac{(\omega + \Delta\omega_c)^{2} - \omega^{2}}{\omega_{\max}^{2}} = \frac{2\,\omega\,\Delta\omega_c + (\Delta\omega_c)^{2}}{\omega_{\max}^{2}} \approx \frac{2\,\omega\,\Delta\omega_c}{\omega_{\max}^{2}}.
This suggests that the allowable incremental input range depends on the current rotor speed:
-\frac{2\,\omega\,\dot{\omega}_{\max}\, T_\omega}{\omega_{\max}^{2}} \le \Delta u_c \le \frac{2\,\omega\,\dot{\omega}_{\max}\, T_\omega}{\omega_{\max}^{2}}.
Intersected with the position limit, we obtain the admissible incremental move:
\Delta u_c \in S(\Delta u) = \left[\ \max\!\left(\frac{\omega_{\min}^{2} - \omega^{2}}{\omega_{\max}^{2}},\ -\frac{2\,\omega\,\dot{\omega}_{\max}\, T_\omega}{\omega_{\max}^{2}}\right),\ \ \min\!\left(1 - \frac{\omega^{2}}{\omega_{\max}^{2}},\ \frac{2\,\omega\,\dot{\omega}_{\max}\, T_\omega}{\omega_{\max}^{2}}\right) \right].
Let the nominal rotor speed be ω = 90 rad/s and consider the rotor properties listed in Table 1. If the rotor acceleration constraint is neglected, the admissible incremental move for a single rotor at this operating point is
S(\Delta u) = \left[\frac{\omega_{\min}^{2} - \omega^{2}}{\omega_{\max}^{2}},\ 1 - \frac{\omega^{2}}{\omega_{\max}^{2}}\right] = \left[-0.2869,\ 0.7096\right].
When the rotor acceleration bound is enforced via Equation (17), the admissible incremental move shrinks to
S(\Delta u) = \left[-0.0514,\ 0.0514\right],
which is about 5.1% of full-scale. For a given rotor state ( ω 1 , , ω 8 ) , the corresponding incremental moment set is
\mathcal{M} = \left\{\, \Delta\nu = B\,\Delta u \ \middle|\ \Delta u_i \in S(\Delta u_i)\ \ \forall i \,\right\}.
Figure 3 visualizes the images of (18) and (19) under the control effectiveness matrix B . The green polytope depicts the AMS without the acceleration constraint; the red polytope shows the AMS when the acceleration bound is active. The latter is drastically smaller, with a volume of only about 0.1 % of the former, which explains why incremental allocation saturates much more readily once rotor acceleration is constrained.
Figure 3. AMS with and without rotor acceleration limit.
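For reference, the per-rotor bounds of Equation (17) and the polytope construction behind Figure 3 can be reproduced in a few lines. The sketch below assumes the rotor parameters of Table 1 are available as plain numbers and that the effectiveness matrix B is given with its correct signs; it is an illustrative reconstruction, not the authors' plotting code.

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def incremental_bounds(omega, omega_min, omega_max, omega_dot_max, T_omega):
    """Admissible increment of u = omega^2 / omega_max^2 per Equation (17)."""
    rate_bound = 2.0 * omega * omega_dot_max * T_omega / omega_max**2
    lower = max((omega_min**2 - omega**2) / omega_max**2, -rate_bound)
    upper = min(1.0 - omega**2 / omega_max**2, rate_bound)
    return lower, upper

def incremental_ams(B, lowers, uppers):
    """Vertices and volume of M = {B @ du : du_i in [lowers_i, uppers_i]}."""
    corners = np.array(list(itertools.product(*zip(lowers, uppers))))  # 2^m box vertices
    images = corners @ B.T                 # map the box through the effectiveness matrix
    hull = ConvexHull(images)              # 3-D attainable incremental moment set
    return images[hull.vertices], hull.volume
```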

3. Adaptive Command Scaling Law

This section derives the adaptive command scaling law using the Lyapunov theory, and then presents a numerical algorithm design and implementation in simulation software.

3.1. Adaption Mechanism

Within the incremental allocation process, the achieved virtual control is Δν_a = B̂Δu_c, which might not match the command Δν_c due to limitations of the allocator or infeasible commands beyond the AMS. Our goal is to apply a scalar gain K to the incremental virtual command before it enters the allocator, as shown in Figure 4 and Figure 5. This gain K is expected to achieve two goals:
  • Scale down infeasible virtual commands to attainable level, as shown in the left subfigure of Figure 5;
  • Preserve feasible virtual commands, as shown in the right subfigure of Figure 5.
Figure 4. Illustration of the incremental control allocation.
Figure 5. Illustration of the scaling for infeasible (left) and feasible (right) commands. Red indicates the original command, and green indicates the scaled-down command.
An adaption mechanism for this scaling gain is derived in the following.
The raw allocation error between the raw virtual command and achieved amount is defined as
e_{\nu 0} = \Delta\nu_c - \Delta\nu_a.
With a scaled command, the allocation error becomes
e_\nu(K) = \Delta\nu_s - \Delta\nu_a = K\,\Delta\nu_c - \Delta\nu_a.
Choose the scalar Lyapunov candidate:
V(K) = \frac{1}{2}\,\frac{\left\|e_\nu(K)\right\|^{2}}{\left\|\Delta\nu_c\right\|^{2}} + \frac{\lambda}{2}\,(K - 1)^{2}, \qquad \lambda > 0.
In this Lyapunov function, the first term is a dimensionless allocation error and the second term is a restoring term for K = 1 (no scaling). Because e ν ( K ) is affine in K, V ( K ) is a strictly convex quadratic. Its gradient with respect to K is
\frac{dV}{dK} = \frac{\Delta\nu_c^{T}\, e_\nu(K)}{\left\|\Delta\nu_c\right\|^{2}} + \lambda\,(K - 1) = s(K) + \lambda\,(K - 1),
with the “ray-aligned” scalar
s(K) = \frac{\Delta\nu_c^{T}\, e_\nu(K)}{\left\|\Delta\nu_c\right\|^{2}} = K - \rho, \qquad \rho := \frac{\Delta\nu_c^{T}\, \Delta\nu_a}{\left\|\Delta\nu_c\right\|^{2}}.
The scalar ρ intuitively denotes the fraction of Δ ν c achieved along its own direction. We design the update law to move K in the direction that most rapidly decreases V. The updating law takes the form:
\dot{K} = -\gamma\,\frac{dV}{dK} = -\gamma\left[\, s(K) + \lambda\,(K - 1) \,\right], \qquad \gamma > 0.
This law yields a non-positive time derivative of the Lyapunov function:
\dot{V} = \frac{dV}{dK}\,\dot{K} = -\gamma\left(\frac{dV}{dK}\right)^{2} \le 0.
Thus V is nonincreasing for any allocator, and the trajectories converge to the unique minimizer K⋆ of V:
\frac{dV}{dK} = 0 \ \Longrightarrow\ (1 + \lambda)\,K^{\star} = \rho + \lambda \ \Longrightarrow\ K^{\star} = \Pi_{[0,1]}\!\left(\frac{\rho + \lambda}{1 + \lambda}\right),
where Π_[0,1] denotes clipping to [0, 1]. The dynamics of K(t) in Equation (26) can be rewritten as
\dot{K} = -\gamma\,(1 + \lambda)\,K + \gamma\,(\rho + \lambda).
Therefore, K(t) converges to K⋆ exponentially with rate γ(1 + λ). If the raw command is achievable along its ray (ρ = 1), then K⋆ = 1 and K(t) will rapidly snap back to 1. If the raw command is infeasible along that ray (ρ < 1), then K⋆ is a weighted average of 1 and ρ according to Equation (28). A larger λ biases K⋆ toward 1, whereas a smaller λ adheres more closely to the allocator’s actual fraction ρ; λ therefore weights the two competing goals of scaling down and restoring the command.
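As a quick numerical check of Equation (28) and the rewritten K-dynamics above (with hypothetical values ρ = 0.6, λ = 0.1, γ = 30), the fixed point is K⋆ = 0.7/1.1 ≈ 0.636, and integrating the scalar dynamics confirms exponential convergence at rate γ(1 + λ):

```python
import numpy as np

gamma, lam, rho = 30.0, 0.1, 0.6                 # hypothetical values for illustration
K_star = np.clip((rho + lam) / (1.0 + lam), 0.0, 1.0)

K, dt = 1.0, 0.001
for _ in range(int(0.5 / dt)):                   # integrate K_dot = -gamma*(1+lam)*K + gamma*(rho+lam)
    K += dt * (-gamma * (1.0 + lam) * K + gamma * (rho + lam))

print(K_star, K)                                 # both ~0.636; time constant 1/(gamma*(1+lam)) ~ 0.03 s
```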

3.2. Algorithm Design

A pseudocode of the adaptive command scaling law is summarized in Algorithm 1.
In line five of Algorithm 1, the scalar gain K is updated using the forward-Euler integration scheme. This method can be substituted with other suitable numerical integration techniques if desired. Line six ensures that the gain is constrained within the range [0,1] through a simple clipping operation. The adaptation law includes a persistent “restoring force”, ensuring the gain value does not remain stuck at zero. To prevent nullifying commands entirely, the lower limit of K can be set to a small positive value. Lines 9–11 introduce an optional reset mechanism for the integrator. This mechanism activates when the actual allocation error becomes negligible. Figure 6 illustrates the integrator structure, which can guide implementation in simulation tools like Simulink [].
Algorithm 1. Adaptive Scalar Command Scaling (single control tick)
Require: Δν_c ∈ ℝ^n (raw command), Δν_a ∈ ℝ^n (achieved by the allocator), K ∈ [0, 1]
   (current gain), γ > 0, λ > 0, controller sample time Δt > 0, (optional) reset threshold
   ε_re ∈ (0, 1), tiny positive tolerance ε_tol (e.g., 1 × 10⁻¹⁰)
Ensure: Updated K ∈ [0, 1]
1: den ← max(ε_tol, Δν_c⊤ Δν_c)
2: e_ν ← K·Δν_c − Δν_a   {scaled residual}
3: s ← (Δν_c⊤ e_ν) / den
4: K̇ ← −γ·s + γ·λ·(1 − K)
5: K ← K + Δt·K̇
6: K ← min(1, max(0, K))   {project to [0, 1]}
7: e_ν0 ← Δν_c − Δν_a   {unscaled residual for reset}
8: ratio ← (e_ν0⊤ e_ν0) / den
9: if ratio ≤ ε_re then
10:   K ← 1   {reset when raw command is feasible}
11: end if
12: return K
Figure 6. Structure of the integrator for adaptive scaling gain.
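For readers implementing the method, a direct Python transcription of Algorithm 1 (one control tick) might look as follows; variable names mirror the pseudocode, and the reset ratio is taken as the squared unscaled residual normalized by den, matching line 8.

```python
import numpy as np

def adaptive_scaling_step(dnu_c, dnu_a, K, gamma, lam, dt,
                          eps_reset=0.1, eps_tol=1e-10):
    """One tick of the adaptive command scaling law (Algorithm 1)."""
    den = max(eps_tol, float(dnu_c @ dnu_c))      # ||dnu_c||^2, protected against zero
    e_nu = K * dnu_c - dnu_a                      # scaled residual
    s = float(dnu_c @ e_nu) / den                 # ray-aligned scalar, s = K - rho
    K_dot = -gamma * s + gamma * lam * (1.0 - K)  # gradient-descent update
    K = K + dt * K_dot                            # forward-Euler integration
    K = min(1.0, max(0.0, K))                     # project to [0, 1]
    e_nu0 = dnu_c - dnu_a                         # unscaled residual
    if float(e_nu0 @ e_nu0) / den <= eps_reset:   # optional reset when raw command is feasible
        K = 1.0
    return K
```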

4. Closed-Loop Stability

This section analyzes the closed-loop stability of the INDI control system in the presence of model uncertainties and the command scaling mechanism.

4.1. Virtual Output Under Uncertainty and Allocation Error

We explicitly consider two sources of uncertainty:
(1) The estimated control effectiveness matrix B̂ may differ from the true effectiveness B.
(2) The current virtual control is measured or estimated with error
\delta(\nu_0) := \nu_0 - \hat{\nu}_0,
where ν 0 is the true virtual control at the linearization point and ν ^ 0 is its estimate.
According to the linearization Equation (2), the true virtual output under the allocated increment Δ u c is
ν = ν 0 + B Δ u c + δ ( x , Δ t ) .
Add and subtract B ^ Δ u c and the estimated virtual control ν ^ 0 :
\nu = \hat{\nu}_0 + \left(\nu_0 - \hat{\nu}_0\right) + \hat{B}\,\Delta u_c + \left(B - \hat{B}\right)\Delta u_c + \delta(x, \Delta t) = \hat{\nu}_0 + \delta(\nu_0) + \Delta\nu_a + \left(B - \hat{B}\right)\Delta u_c + \delta(x, \Delta t).
The commanded incremental virtual control defined in Equation (3) is Δν_c := ν_c − ν̂_0. Substituting ν̂_0 = ν_c − Δν_c into the expression for ν gives
\nu = \left(\nu_c - \Delta\nu_c\right) + \delta(\nu_0) + \Delta\nu_a + \left(B - \hat{B}\right)\Delta u_c + \delta(x, \Delta t) = \nu_c + \delta(\nu_0) + \delta(x, \Delta t) + \left(B - \hat{B}\right)\Delta u_c - \left(\Delta\nu_c - \Delta\nu_a\right) = \nu_c + \delta(\nu_0) + \delta(x, \Delta t) + \left(B - \hat{B}\right)\Delta u_c - e_{\nu 0}.
Thus, the actual virtual output can be written as
ν = ν c + ϵ tot ,
where the total residual is
\epsilon_{\mathrm{tot}} := \underbrace{\delta(\nu_0)}_{\text{current virtual control error}} + \underbrace{\delta(x, \Delta t)}_{\text{linearization error}} + \underbrace{\left(B - \hat{B}\right)\Delta u_c}_{\text{effectiveness error}} - \underbrace{e_{\nu 0}}_{\text{allocation error}}.
The adaptive command-scaling law modifies the command passed to the allocator from Δ ν c to a scaled version K Δ ν c . This only affects the achieved increment Δ ν a and thus the raw allocation error e ν 0 = Δ ν c Δ ν a . Importantly, the structure (35) remains valid irrespective of the presence of command scaling; the adaptive gain K appears only implicitly through e ν 0 .

4.2. Stability of Tracking Error Dynamics

We now consider the input–output linearized system in canonical “chains of integrators” form []:
ξ ˙ = A c ξ + B c ν , y = C ξ ,
where ξ is built from y_i, ẏ_i, …, y_i^(ρ_i − 1), i = 1, 2, …, p, and A_c, B_c, C are block-diagonal matrices representing integrator chains. For each output channel i,
\dot{\xi}_{i,1} = \xi_{i,2}, \quad \dot{\xi}_{i,2} = \xi_{i,3}, \quad \ldots, \quad \dot{\xi}_{i,\rho_i} = y_i^{(\rho_i)} = \alpha_i(x) + \sum_{j=1}^{m} B_{ij}(x)\, u_j.
We want the state ξ to track a reference trajectory built from the desired output y_r. The reference vector R collects y_{r,i}, ẏ_{r,i}, …, y_{r,i}^(ρ_i − 1), i = 1, …, p. Define the tracking error
e := \xi - R.
Differentiating,
\dot{e} = \dot{\xi} - \dot{R} = A_c\,\xi + B_c\,\nu - \dot{R}.
Writing ξ = e + R and choosing the reference dynamics as
\dot{R} = A_c\, R + B_c\, y_r^{(\rho)}, \qquad y_r^{(\rho)} := \left[\, y_{r,1}^{(\rho_1)}, \ldots, y_{r,p}^{(\rho_p)} \,\right],
we obtain
\dot{e} = A_c\,(e + R) + B_c\,\nu - \left(A_c\, R + B_c\, y_r^{(\rho)}\right) = A_c\, e + B_c\left(\nu - y_r^{(\rho)}\right).
We now close the loop by specifying the virtual control law. To avoid notational conflict with the command-scaling gain K, we denote the outer-loop feedback matrix by F ∈ ℝ^(p×p). The INDI virtual control is chosen as
\nu_c = y_r^{(\rho)} - F\, e,
where F is designed such that the nominal matrix A c B c F is Hurwitz. Substituting ν = ν c + ϵ tot and (42) into (41) yields
\dot{e} = A_c\, e + B_c\left(\nu_c + \epsilon_{\mathrm{tot}} - y_r^{(\rho)}\right) = A_c\, e + B_c\left(y_r^{(\rho)} - F\, e + \epsilon_{\mathrm{tot}} - y_r^{(\rho)}\right) = \left(A_c - B_c F\right) e + B_c\, \epsilon_{\mathrm{tot}}.
Thus, the adaptive command scaling and all modeling or estimation errors enter the tracking error dynamics only through the additive disturbance ϵ tot defined in (35).
Since A_c − B_c F is Hurwitz, there exist symmetric positive definite matrices P and Q satisfying the Lyapunov equation
\left(A_c - B_c F\right)^{T} P + P \left(A_c - B_c F\right) = -Q.
Consider the Lyapunov function
V(e) = e^{T} P\, e.
Along the trajectories of (43),
\dot{V} = e^{T}\left[\left(A_c - B_c F\right)^{T} P + P \left(A_c - B_c F\right)\right] e + 2\, e^{T} P B_c\, \epsilon_{\mathrm{tot}} = -e^{T} Q\, e + 2\, e^{T} P B_c\, \epsilon_{\mathrm{tot}}.
Using Young’s inequality [] (p. 49), for any η > 0 we have
2\, e^{T} P B_c\, \epsilon_{\mathrm{tot}} \le 2\, \|e\|\, \|P B_c\|\, \|\epsilon_{\mathrm{tot}}\| \le \eta\, \|e\|^{2} + \frac{\|P B_c\|^{2}}{\eta}\, \|\epsilon_{\mathrm{tot}}\|^{2}.
Therefore, we have
\dot{V} \le -\left(\lambda_{\min}(Q) - \eta\right) \|e\|^{2} + \frac{\|P B_c\|^{2}}{\eta}\, \|\epsilon_{\mathrm{tot}}\|^{2},
where λ min ( Q ) is the minimum eigenvalue of Q . Hence, according to Remark 2.4 in Ref. [], the tracking error dynamics are input-to-state stable (ISS) with respect to the total residual ϵ tot . In the absence of uncertainties and allocation errors ( ϵ tot 0 ), the origin e = 0 is exponentially stable. For bounded ϵ tot , the tracking error remains bounded and ultimately enters a ball whose radius is proportional to sup t ϵ tot ( t ) .

4.3. Effects of Uncertainty and Allocation Error

Equations (43) and (35) allow a clear interpretation of the individual contributions:
  • Current virtual control error δ ( ν 0 ) : This term captures uncertainty or noise in the measurement or estimation of the current virtual control. In many INDI applications, ν 0 is derived from high-pass or complementary filters, so this term is typically bounded.
  • Linearization/model error δ ( x , Δ t ) : This term arises from the first-order Taylor expansion of the input–output map over one sampling interval. For sufficiently small sampling time and moderate state variations, δ ( x , Δ t ) is small and bounded.
  • Control-effectiveness error (B − B̂)Δu_c: This term represents a mismatch between the true control effectiveness and its estimate. Ref. [] has proved that this term is bounded when the sampling frequency is sufficiently high and B B̂⁻¹ has a diagonally dominant structure.
  • The focus of this paper is the allocation error term e ν 0 , which arises from input limits and the choice of control allocator. Unlike the other three residuals, e ν 0 can become large when the raw incremental command Δ ν c lies near or outside the incremental admissible control set. The proposed adaptive command scaling law acts precisely on this term. When the raw command Δ ν c is infeasible, K reduces the effective command magnitude so that the allocator operates in a valid region, thereby reducing the magnitude and direction errors between Δ ν c and Δ ν a . This scaling tightens the ISS bound.

5. Command Scaling Performance

This section evaluates the performance of the proposed adaptive command scaling algorithm using a representative control allocation task. The RPI allocator is used as the benchmark incremental allocator. The goal is to assess the effectiveness of Algorithm 1 in maintaining command feasibility. Its performance is also compared against the theoretically optimal linear programming (LP) solution.

5.1. Allocation Task Setup

Consider the incremental moment set presented in Figure 3. In this test, we fix Δ p ˙ = 0 and sweep the command vector in the ( Δ q ˙ , Δ r ˙ ) -plane to simulate a continuous variation in the virtual control command along a circular trajectory. At time t = 0 , the ray starts from ( Δ q ˙ , Δ r ˙ ) = ( 0.1 rad / s 2 , 0 ) , and then rotates counter-clockwise around the origin ( 0 , 0 ) along a circle with radius 0.1 rad / s 2 . The command completes one full revolution and returns to the initial position at t = 5 s . During this sweeping, the virtual command alternates between feasible and infeasible regions of the AMS. Therefore, this task provides a comprehensive test of the adaptive scaling algorithm under both conditions.
To obtain the theoretical scaling ratio along each command ray, we compute the intersection of the ray with the AMS boundary by solving the following linear programming problem []:
\min_{\Delta u,\ \alpha}\ -\alpha \quad \text{s.t.} \quad \alpha\,\Delta\nu_c = B\,\Delta u, \quad \Delta u_i \in S(\Delta u_i), \quad \alpha \ge 0.
The optimal α represents the maximum achievable scaling factor along the original command direction. If α ≥ 1, the command is feasible and can be executed directly, and we can clip α at 1 to preserve the feasible command; otherwise, α < 1 indicates that the command lies outside the AMS and should be scaled down proportionally.
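The ray-intersection LP above can be solved, for example, with scipy.optimize.linprog; in the sketch below the decision vector stacks Δu and α, and α is maximized by minimizing −α subject to the equality constraint B Δu − α Δν_c = 0 and the box bounds. Function and variable names are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def lp_scaling_factor(B, dnu_c, du_lower, du_upper):
    """Maximum alpha such that alpha * dnu_c is attainable (ray-intersection LP)."""
    m = B.shape[1]
    c = np.zeros(m + 1)
    c[-1] = -1.0                                    # minimize -alpha  <=>  maximize alpha
    A_eq = np.hstack([B, -dnu_c.reshape(-1, 1)])    # B @ du - alpha * dnu_c = 0
    b_eq = np.zeros(B.shape[0])
    bounds = [(lo, hi) for lo, hi in zip(du_lower, du_upper)] + [(0.0, None)]
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
    return min(res.x[-1], 1.0)                      # clip at 1 to preserve feasible commands
```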

5.2. Performance Comparison

For the adaptive scaling algorithm, we select γ = 30, λ = 0.1, and ε_re = 0.1. These parameters correspond to a relatively strong “cutting-down” gain for infeasible commands, a weaker “restoring” gain that allows recovery toward K = 1, and an allocation error threshold ε_re below which the gain is instantaneously reset to 1. Figure 7 and Figure 8 illustrate the command rays (trajectories) and the corresponding scaling behavior. The raw commands along the circular path are shown in blue, the LP-based scaled commands in red, and the adaptively scaled commands in green. Figure 9 compares the scaling factor returned by the LP solver and the adaptive scaling law.
Figure 7. Rays of raw commands and scaled commands in the allocation task. Blue rays represent raw commands, red rays represent commands scaled by linear programming, and green rays represent commands scaled by adaptive gain.
Figure 8. Trajectories of raw commands and scaled commands in the allocation task.
Figure 9. Scaling factors computed by the LP solver and adaptive law.
From t = 0 to t ≈ 0.28 s, all commands remain within the AMS; thus, the LP solver yields α = 1, and the adaptive gain K remains constant at unity. Between t = 0.28 s and t = 2.22 s, the commands gradually move outside the AMS boundary, resulting in α < 1. The adaptive gain begins to decrease only after t ≈ 0.52 s, since initially the allocation error is small and the restoring term dominates the adaptation. When the command reenters the feasible region at t ≈ 2.22 s, the LP factor α returns to one. The adaptive gain also recovers to unity but with a short delay of about 0.1 s. This pattern repeats periodically as the command sweeps the circle. Overall, the adaptive scaling law successfully reduces infeasible commands to the vicinity of the AMS boundary and preserves feasible commands. Compared with LP solutions, the adaptive gain exhibits a small time lag due to the competing effects of the “cutting-down” and “restoring” components in the adaptation mechanism.
The average execution times of the LP solver and the adaptive scaling algorithm are compared in Figure 10. All tests were conducted on a Windows 11 laptop running MATLAB R2023b with an AMD Ryzen 7 CPU and 32 GB of RAM. The LP intersection calculation requires an average of 4.3 ms per command, while the proposed adaptive law requires only 0.97 μ s , which is over four thousand times faster. The negligible computational cost makes the proposed adaptive scaling method appealing for real-time implementation.
Figure 10. Computational time of the LP solver and adaptive law.
To further investigate the influence of the adaptation gains, several values of γ and λ were tested (see Figure 11). The parameter γ > 0 determines the overall adaptation speed of the scaling factor K. A larger γ accelerates both the “cutting-down” phase and the “restoring” phase. As a result, increasing γ reduces the time delay between the proposed scaling solution and the reference LP-based allocation. However, when γ is chosen too large, the adaptation becomes overly aggressive, leading to small oscillations in K and chattering in the allocated incremental control. The parameter λ > 0 weights the bias toward K = 1 in the underlying cost function and therefore controls the restoring tendency of the scaling law. Larger λ values put more emphasis on recovering K to unity, resulting in faster restoration of the full command authority. On the other hand, a large λ reduces the relative weight of the allocation-error term in the adaptation, which can slow down the suppression of infeasible commands.
Figure 11. Effects of algorithm parameters on the scaling behavior: (a) Effect of γ . (b) Effect of λ .
Based on these observations, a simple tuning procedure is as follows. First, select γ such that the dynamics of K are faster than the dominant closed-loop dynamics, but slow enough to avoid noticeable oscillations in K ( t ) in the considered operating conditions. Second, choose λ to achieve a desired restoration time of K toward unity after constraints cease to be active. All simulation results in this paper were obtained using gains tuned according to this procedure. In additional numerical tests, we observed that moderate variations around the chosen ( γ , λ ) do not qualitatively alter the closed-loop behavior.
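Since K(t) converges exponentially with rate γ(1 + λ), the first tuning step can be expressed as choosing γ from a desired settling time constant of the gain; a small illustrative helper (values hypothetical):

```python
def gamma_from_time_constant(tau_K, lam):
    """Pick gamma so the scaling gain K settles with time constant tau_K [s],
    using the exponential rate gamma * (1 + lambda) of the K-dynamics."""
    return 1.0 / (tau_K * (1.0 + lam))

# e.g. tau_K = 0.03 s and lam = 0.1 give gamma ~= 30, the value used in
# the ray-sweep study above (illustrative check only).
print(gamma_from_time_constant(0.03, 0.1))   # ~30.3
```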
In summary, the adaptive command scaling law offers a computationally efficient and direction-preserving alternative to LP-based scaling. It dynamically reduces infeasible commands and restores full commands when feasible. It achieves real-time performance with negligible computational overhead. The tradeoff between response speed and smoothness can be tuned via the parameters γ and λ , allowing flexible adjustment depending on the desired aggressiveness of adaptation.

6. Closed-Loop INDI Flight Simulation

This section evaluates the proposed adaptive scalar command scaling within an INDI controller for the eVTOL’s rotational dynamics. Notably, we assume perfect virtual-control and control effectiveness estimates in order to isolate the effect of allocation and command scaling.

6.1. INDI Controller Setup

The controller structure is shown in Figure 12. A first-order reference model shapes the output-tracking dynamics for rotational rates ω = [ p , q , r ] :
\dot{\omega}_r = R\left(\omega_c - \omega_r\right).
Figure 12. Structure of the INDI controller for rotational rates.
The desired virtual control is the sum of the reference virtual control and the error-dynamics term:
\nu_c = \dot{\omega}_r + K_\omega\left(\omega_r - \hat{\omega}\right),
and the incremental virtual command is
\Delta\nu_c = \nu_c - \hat{\nu}_0,
where ω ^ and ν ^ 0 are obtained from filters or onboard model-based estimators. To isolate the impact of the scaling mechanism, we assume perfect state and virtual-control estimates ( ω ^ = ω , ν ^ 0 = ν 0 ). In the following simulations, the reference model and error gain matrices are set to
R = \mathrm{diag}(3,\ 3,\ 2), \qquad K_\omega = \mathrm{diag}(8,\ 8,\ 4).
Control allocation uses the RPI allocator with null-space update (cf. Equation (8)). Steady-hover inputs correspond to rotor speeds of 90 rad / s , i.e., the dimensionless trim u i = 0.29 for each rotor. Input bounds are enforced as in Equation (17) to consider both rotor rotational rate limits and acceleration limits.
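One sample of the rate loop in Figure 12, combined with the adaptive scaling, might be organized as in the sketch below; allocate, bounds, and scaling_step are injected placeholders standing in for the RPI allocator with null-space update, the Equation (17) limits, and Algorithm 1 (with γ and λ assumed bound inside the callable), and the forward-Euler reference-model propagation is an assumption for illustration.

```python
import numpy as np

def indi_rate_tick(omega_cmd, omega_meas, omega_ref, nu0_hat, u_prev, K,
                   R, K_omega, B_hat, dt, allocate, bounds, scaling_step):
    """One control sample of the rate-loop INDI with adaptive command scaling."""
    omega_ref_dot = R @ (omega_cmd - omega_ref)                 # first-order reference model
    omega_ref = omega_ref + dt * omega_ref_dot                  # propagate reference state
    nu_c = omega_ref_dot + K_omega @ (omega_ref - omega_meas)   # desired virtual control
    dnu_c = nu_c - nu0_hat                                      # incremental virtual command
    du_lo, du_hi = bounds(u_prev)                               # incremental admissible set
    du_c = allocate(B_hat, K * dnu_c, du_lo, du_hi)             # allocate the scaled command
    dnu_a = B_hat @ du_c                                        # achieved virtual increment
    K = scaling_step(dnu_c, dnu_a, K, dt)                       # Algorithm 1 gain update
    return u_prev + du_c, omega_ref, K                          # absolute command, new states
```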

6.2. Tracking Scenario and Results

Given the low effectiveness in yaw, a yaw-rate doublet command in [−15, 15] deg/s is applied, while p_c(t) = q_c(t) = 0. We compare two configurations: (i) nominal INDI (no scaling), and (ii) INDI + adaptive scaling with γ = 20, λ = 0.1, and reset threshold ε_re = 0.1.
Figure 13 shows the reference signals and rotational rates. In yaw, both configurations exhibit comparable tracking errors during the two saturation windows (around t ≈ 6–9 s and t ≈ 11–13 s), consistent with the weak yaw effectiveness. In contrast, the roll and pitch responses are different. Without scaling, the peak roll-rate deviation reaches 8.8 deg/s at t = 11.7 s, and the peak pitch-rate deviation reaches 3.5 deg/s at t = 8.3 s. These excursions arise from aggressive (infeasible) virtual commands and allocator cross-axis coupling when saturation occurs. With adaptive scaling, oscillations in roll/pitch are substantially suppressed: the peak roll deviation drops to 3.13 deg/s (around t ≈ 11 s), and the peak pitch-rate deviation to 1.22 deg/s (around t ≈ 8.3 s). Figure 13b plots the incremental virtual commands on all three axes. With scaling active, infeasible requests are drastically reduced toward the AMS boundary. The corresponding scalar gain K is shown in Figure 14. Infeasible commands are quickly cut down (K < 1), while feasible commands are preserved as K returns to 1 promptly whenever the allocation error becomes small. Beyond its control role, K serves as a proxy indicator of remaining control margin: K ≈ 1 implies sufficient margin, while K ≪ 1 signals persistent saturation pressure and reduced margin.
Figure 15 and Figure 16 show the incremental and absolute inputs of the eight rotors, respectively. Dark traces denote the nominal case; blue traces denote adaptive scaling. The scaling mechanism produces visibly smoother input variation during the aggressive maneuvers. Table 2 reports the total time each rotor spends at the bound during the simulation. All eight rotors benefit from reduced saturation durations.
Figure 13. Tracking response and virtual command in three rotational axes: (a) Reference signals and response. (b) Incremental virtual commands.
Figure 14. Time histories of the scaling factor of the incremental virtual command.
Figure 15. Time histories of the incremental input command. Dashed and dotted lines mark the upper and lower bounds of incremental inputs in two configurations.
Figure 16. Time histories of the absolute input command.
Table 2. Per-rotor saturation duration (seconds).
The closed-loop results align with the findings in Section 5: the adaptive scalar effectively trims infeasible virtual commands while preserving attainable commands. In the present maneuver, the primary benefit appears in the non-commanded axes (roll/pitch), where cross-axis couplings under saturation are moderated, yielding lower peaks and smoother responses.

7. Discussion

7.1. Simulation Results Discussion

In terms of the two goals stated in Section 3.1: “scale down infeasible commands” and “preserve feasible commands”, the open-loop ray sweeps in Section 5 and closed-loop INDI simulations in Section 6 demonstrate that our proposed adaptive scaling law can achieve these goals. Under saturation, the allocator might cause cross-axis coupling. By shrinking infeasible components along the command ray, the adaptive law reduces allocator stress, which explains the reduced roll/pitch peaks while preserving yaw performance. With protected virtual commands, input saturation is also alleviated. Moreover, the scalar trace K ( t ) can also serve as a real-time indicator of remaining control margin.
Although the linear programming method provides theoretically exact scaling of infeasible commands, it incurs a high computational cost. On the contrary, the present method is a lightweight, allocator-agnostic gradient law that steers requests toward what the allocator can realize. It achieves behavior close to the LP benchmark but at orders-of-magnitude lower run-time.

7.2. Modular Design

Another key advantage of the proposed method is modularity: the adaptive scaling resides entirely within the control allocation block and reads only inputs/outputs local to it (e.g., the commanded Δν_c and the achieved Δν_a). No outer-loop signals, additional sensors, or changes to the controller structure are required. This method is highly adaptable and can be seamlessly integrated into other flight control architectures that utilize a distinct allocation module (e.g., PID/LQR + allocator, backstepping/NDI + allocator, or Extended-INDI + allocator []). Without going into further detail, we provide a diagram illustrating the implementation of adaptive command scaling within an NDI control system, as shown in Figure 17. Readers are encouraged to explore the application of adaptive command scaling in even more diverse control architectures.
Figure 17. Implementation of adaptive command scaling in an NDI control system.

7.3. Time-Varying Control Effectiveness

The command scaling law presented in this paper assumes a constant control-effectiveness matrix B , while in many applications (e.g., aircraft with large flight envelopes) the true effectiveness is state- and time-dependent. We briefly discuss here how the proposed adaptive command scaling behaves when B is time-varying and what additional challenges may arise.
From the closed-loop stability analysis in Section 4, the tracking error dynamics can be written as Equation (43). If we allow B = B ( x , t ) and B ^ = B ^ ( x , t ) to be time-varying (for example, through gain scheduling), the derivation remains valid with B and B ^ interpreted pointwise in time. The term B B ^ Δ u c becomes a time-varying disturbance that still enters only through the additive residual ϵ tot . As long as ϵ tot ( t ) in Equation (43) remains bounded, the input-to-state stability proof for the error dynamics is unchanged.
The adaptive command scaling module operates entirely in the virtual-control space and depends only on the instantaneous mapping between Δ u c and the achieved virtual control increment Δ ν a = B ^ ( x , t ) Δ u c . For each fixed time t, the geometry of the incremental admissible set and the relationship between the scaled command K Δ ν c and the allocation error e ν 0 = Δ ν c Δ ν a are determined by the current effectiveness estimate B ^ ( x , t ) and the current input limits. The proposed adaptive law then minimizes, at each instant, the allocation error along the commanded ray in this time-varying admissible set. Conceptually, this means that the command scaling algorithm generalizes to the case of time-varying B ^ ( x , t ) . There are, however, some practical challenges when B ^ ( x , t ) varies rapidly. For example, the shape and orientation of the incremental admissible set become time-varying, so the “optimal” scaling factor K ( t ) that minimizes the allocation error is also time-varying. The adaptive law for K then has to track a moving optimum. This is well-behaved if B ^ ( x , t ) varies slowly, but fast variations or large modeling errors may degrade the effectiveness of scaling. A detailed treatment of fast time-varying control effectiveness is beyond the scope of this paper and will be investigated in future work.

8. Conclusions

This paper introduced a lightweight command scaling law for incremental control allocation. This law is derived from minimizing a Lyapunov function that combines normalized allocation error and gain deviation from unity. From a theoretical standpoint, the adaptive law ensures the scaling gain will converge to a value close to the attainable portion of the raw command; thus, infeasible commands are reduced while feasible commands are preserved. The closed-loop stability analysis, incorporating various uncertainties, demonstrates that the INDI system with adaptive command scaling maintains input-to-state stability. Extensive simulations demonstrate the effectiveness of the command scaling. In open-loop ray sweeps, the scaled commands closely track the linear programming ground truth with negligible computation (about 0.97 μ s versus 4.3 ms for LP). Within the INDI closed-loop system, the adaptive scaling effectively suppresses cross-axis coupling (e.g., with similar yaw tracking performance, peak roll rate and pitch rate are reduced by over 60%). The suppression of infeasible virtual commands also alleviates saturation durations for all rotors.
The resulting adaptive scaling algorithm is simple to tune and computationally efficient, and it provides favorable modularity. Beyond providing feasibility protection, the scalar trace K ( t ) offers a practical indicator of remaining control margin. Future work will tighten guarantees under time-varying control effectiveness and validate the method in hardware experiments.

Author Contributions

Conceptualization, Z.L. and J.Z.; methodology, Z.L. and J.Z.; software, Z.L.; validation, Z.L., J.Z. and H.L.; resources, F.H.; writing—original draft preparation, Z.L.; writing—review and editing, Z.L., J.Z., H.L. and F.H.; visualization, Z.L.; supervision, F.H.; project administration, F.H.; funding acquisition, F.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data and code presented in this study are available upon request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Durham, W.; Bordignon, K.A.; Beck, R. Aircraft Control Allocation; John Wiley & Sons: Chichester, UK, 2017. [Google Scholar]
  2. Wang, X.; Van Kampen, E.J.; Chu, Q.; Lu, P. Stability analysis for incremental nonlinear dynamic inversion control. J. Guid. Control. Dyn. 2019, 42, 1116–1129. [Google Scholar] [CrossRef]
  3. Bordignon, K.; Bessolo, J. Control allocation for the X-35B. In Proceedings of the 2002 Biennial International Powered Lift Conference and Exhibit, Williamsburg, VA, USA, 5–7 November 2002; p. 6020. [Google Scholar] [CrossRef]
  4. Harris, J.J. F-35 flight control law design, development and verification. In Proceedings of the 2018 Aviation Technology, Integration, and Operations Conference, Atlanta, GA, USA, 25–29 June 2018; p. 3516. [Google Scholar] [CrossRef]
  5. Rupprecht, T.A.; Steinert, A.; Kotitschke, C.; Holzapfel, F. Indi control law structure for a medevac evtol and its reference models: Feedforward, physical limitations, and innerloop dynamics for optimal tracking. In Proceedings of the AIAA Aviation Forum and Ascend 2024, Las Vegas, NV, USA, 29 July–2 August 2024; p. 4425. [Google Scholar] [CrossRef]
  6. Johansen, T.A.; Fossen, T.I. Control allocation—A survey. Automatica 2013, 49, 1087–1103. [Google Scholar] [CrossRef]
  7. Blaha, T.M.; Smeur, E.J.J.; Remes, B.D.W. A survey of optimal control allocation for aerial vehicle control. Actuators 2023, 12, 282. [Google Scholar] [CrossRef]
  8. Raab, S.; Steinert, A.; Hafner, S.; Holzapfel, F. Toward efficient calculation of inverses in control allocation for safety-critical applications. J. Guid. Control. Dyn. 2024, 47, 2316–2332. [Google Scholar] [CrossRef]
  9. Acheson, M.J.; Gregory, I.M. Modified cascading generalized inverse control allocation. In Proceedings of the AIAA SCITECH 2023 Forum, National Harbor, MD, USA, 23–27 January 2023; p. 2542. [Google Scholar] [CrossRef]
  10. Durham, W.C. Constrained control allocation. J. Guid. Control. Dyn. 1993, 16, 717–725. [Google Scholar] [CrossRef]
  11. Bolender, M.A.; Doman, D.B. Method for determination of nonlinear attainable moment sets. J. Guid. Control. Dyn. 2004, 27, 907–914. [Google Scholar] [CrossRef]
  12. Orr, J.; Wall, J. Linear approximation to optimal control allocation for rocket nozzles with elliptical constraints. In Proceedings of the AIAA Guidance, Navigation, and Control Conference, Portland, OR, USA, 8–11 August 2011; p. 6500. [Google Scholar] [CrossRef]
  13. Johnson, E.N.; Calise, A.J. Pseudo-control hedging: A new method for adaptive control. In Proceedings of the Advances in Navigation Guidance and Control Technology Workshop, Redstone Arsenal, AL, USA, 1–2 November 2000; pp. 1–2. [Google Scholar]
  14. Zhang, J.; Holzapfel, F. Saturation Protection with Pseudo Control Hedging: A Control Allocation Perspective. In Proceedings of the AIAA Scitech 2021 Forum, Virtual, 19–21 January 2021; p. 0370. [Google Scholar] [CrossRef]
  15. Lu, Z.; Holzapfel, F. Stability and performance analysis for SISO incremental flight control. arXiv 2020, arXiv:2012.00129. [Google Scholar] [CrossRef]
  16. MathWorks. Pinv—Moore-Penrose Pseudoinverse. 2025. Available online: https://ww2.mathworks.cn/help/matlab/ref/pinv.html (accessed on 20 November 2025).
  17. Zhang, J.; Lu, Z.; Holzapfel, F. Null-Space Control Allocation for Generalized Load Reduction in eVTOL Aircraft. Aerosp. Syst. 2025; under review. [Google Scholar]
  18. Simmons, B.M.; Buning, P.G.; Murphy, P.C. Full-envelope aero-propulsive model identification for lift+cruise aircraft using computational experiments. In Proceedings of the AIAA Aviation 2021 Forum, Virtual, 2–6 August 2021; p. 3170. [Google Scholar] [CrossRef]
  19. Acheson, M.J. Generic Urban Air Mobility. 2024. Available online: https://github.com/nasa/Generic-Urban-Air-Mobility-GUAM (accessed on 1 October 2025).
  20. Lombaerts, T.; Kaneshige, J.; Schuet, S.; Hardy, G.; Aponso, B.L.; Shish, K.H. Nonlinear dynamic inversion based attitude control for a hovering quad tiltrotor eVTOL vehicle. In Proceedings of the AIAA Scitech 2019 Forum, San Diego, CA, USA, 7–11 January 2019; p. 0134. [Google Scholar] [CrossRef]
  21. Niemiec, R.; Gandhi, F.; Lopez, M.; Tischler, M. System identification and handling qualities predictions of an eVTOL urban air mobility aircraft using modern flight control methods. In Proceedings of the Vertical Flight Society 76th Annual Forum, Virtual, 5–8 October 2020. [Google Scholar]
  22. MathWorks. Simulink. 2025. Available online: https://www.mathworks.com/products/simulink.html (accessed on 21 November 2025).
  23. Wang, X.; Kampen, E.J.v.; Chu, Q.; Lu, P. Incremental sliding-mode fault-tolerant flight control. J. Guid. Control. Dyn. 2019, 42, 244–259. [Google Scholar] [CrossRef]
  24. Mitrinović, D.S. Analytic Inequalities; Young’s inequality; Springer: New York, NY, USA, 1970; pp. 48–50. [Google Scholar]
  25. Sontag, E.D.; Wang, Y. On characterizations of the input-to-state stability property. Syst. Control Lett. 1995, 24, 351–359. [Google Scholar] [CrossRef]
  26. Raab, S.A.; Zhang, J.; Bhardwaj, P.; Holzapfel, F. Consideration of control effector dynamics and saturations in an extended INDI approach. In Proceedings of the AIAA Aviation 2019 Forum, Dallas, TX, USA, 17–21 June 2019; p. 3267. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
