Modified Integral Control Globally Counters Symmetry-Breaking Biases

We propose a modified integral controller to counter actuation biases in nonholonomic systems. In such systems, although one cannot directly counter biases along all directions, one can still aim at restoring the symmetry of coordinated motion that the biases would break. Focusing on the example of steering-controlled planar vehicles, we show how to perfectly stabilize a circular trajectory coordinated with the leader, despite any unknown actuation bias. The proposed solution is a simple modified integral control term, and we prove via a detailed averaging analysis that this control gadget globally stabilizes the perfect coordinated motion. The corresponding general observation is that stabilizing such a symmetry-preserving situation can be viewed as rejecting a bias affecting the output measurement, instead of the input command. The symmetry-restoring control then becomes an output bias rejection, dual to the more standard input bias rejection with standard integral control.


Introduction
Coordinated motion of controlled vehicles in robotics has drawn great attention during the last two decades [1][2][3][4][5]. A basic control primitive is to design local interactions among vehicles, such that their emerging behavior corresponds to moving like a single rigid body. This stabilizes a manifold of states corresponding to the symmetry of invariant relative configuration, and leaves open how one might further drive the swarm along this manifold, towards specific relative configurations or a specific motion of the center of mass [3,5]. The most notable examples are steering-controlled vehicles (see the same references). Control laws and performance proofs have been established under the assumption that all the vehicles have exactly equal dynamics. However, real vehicles never satisfy this symmetry under permutation perfectly. Part of this modeling error is countered purposefully by the coordination controller, preventing the vehicles from drifting completely apart over long times. However, small modeling errors that break the permutation symmetry will typically, also in the presence of control, break the target symmetry of coordinated motion: due to small residual velocity errors, vehicles will slowly but persistently drift with respect to each other (see the simulations in Section 4 for an illustration). Depending on the application context, such unexpected symmetry breaking can be truly detrimental, and it appears meaningful to search for a simple fix of the situation. The improved controller might also serve in other applications, like information protection codes, where the very purpose is to isolate, thanks to coordination mechanisms, a symmetric manifold on which realistic perturbations imply a minimal drift.

Countering such biases with integral control raises two main challenges.
1. The state space, and therefore the error space, is not a linear vector space but a nonlinear manifold, more precisely a compact Lie group. This requires revisiting the concept of "integrating the error", since summing up or integrating variables relies on a vector space structure. The solution in [7,8] is to instead integrate the error-proportional feedback action, brought back to the Lie algebra (which is a vector space) thanks to standard Lie group operations. This error-proportional feedback is the main controller in standard studies without bias, like [1,4,5], and we thus only suggest to add a small multiple of its integral. The idea is that, similarly, if an error-proportional feedback action constantly fights against the same bias, then its integral keeps increasing; so if the controlled system stabilizes, it must be at a situation with zero error-proportional feedback action, hence in principle (under some conditions, and at least locally) with zero error. Assuming full actuation, Refs. [7,8] prove that any constant bias on a Lie group can be countered in this way.

2. A new challenge in the present paper is the nonholonomic actuation typical of such systems, sometimes also called underactuation. This means that, if a bias pushes the system with some vector v, it is not necessarily physically possible with our actuator to push in the direction −v, even if we knew it. Think for instance of a ship subject to lateral drift, or of a steering-controlled vehicle with translation velocity fixed at a wrong value. In standard linear control, this situation can appear as well. Consider ẋ = Ax + Bu + Fv with a perturbation force Fv that lies outside the range of the actuation Bu; then there is no way to stabilize x = 0 exactly, since this simply cannot be made a steady state. In the coordination application, the goal is to stabilize a manifold of relative equilibria corresponding to any configuration compatible with synchronized right-invariant Lie group velocities [9,10]. Configurations where the vehicles can keep moving in a coordinated way appear feasible even for nonholonomic systems with uncountered bias.
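To make the linear analogy concrete, here is a tiny numerical sketch (our own illustration, not from the paper): with B spanning only the second coordinate and the disturbance Fv entering the first, no input u can make x = 0 a steady state of ẋ = Ax + Bu + Fv.

```python
# Toy illustration of a disturbance outside the actuation range: with
# B = [0, 1]^T and F = [1, 0]^T, the steady-state equation B*u + F*v = 0
# at x = 0 has no solution u; the best achievable residual is |v|.
v = 0.3
# ||B*u + F*v||^2 = u^2 + v^2 is minimized at u = 0, leaving residual |v|
best_u = 0.0
residual = (best_u**2 + v**2) ** 0.5
print(residual)  # 0.3: x = 0 cannot be made a steady state
```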
We here propose a simple modified integral control gadget, which we call moderating integral control (MIC) as it acts by reducing the error-proportional output feedback in a suitable way, and which indeed stabilizes coordinated motion perfectly, despite actuation biases and for underactuated systems. In the present paper, we develop this idea in detail for steering-controlled planar vehicles. We prove that our MIC not only restores symmetry exactly, but also ensures global convergence to this situation from any initial condition, just as the original error-proportional feedback controller does in the absence of bias. It thus appears as a plug-and-play adaptation, which could open the discussion on a more general use of integral control as a mechanism in symmetry-restoring applications.
As an independent observation, we note that the MIC is a suitable algorithm integrating accurate inputs to reject bias in outputs; this is dual to the usual integral control integrating accurate outputs to reject bias in inputs.
The paper is organized as follows:
• In Section 2, we describe the abstract Lie group model for stabilizing an underactuated rigid body in a fixed relative configuration with respect to a swarm leader, which is thus the geometric invariant related to the symmetry of rigid body motions. While Refs. [4,5,10], among others, have derived corresponding controllers for perfectly modeled vehicles, we insist on achieving the same performance when the controlled vehicle is subject to biases or calibration errors on its translation velocity, both on actuated and on unactuated directions of motion. We progressively explain why coordinated motion/restoring symmetry is a proper objective in the presence of bias. We end this section by proposing a moderating integral controller (MIC) to solve this task.
• In Section 3, we carry out a detailed convergence analysis for stabilizing the steering controlled vehicle into a given circular motion ([4], see [3] for a concrete application), despite the actuation biases. We start the section with a more concrete formulation for the steering controlled vehicle in the plane. The main result proves how our MIC still ensures global convergence towards exact coordinated motion. The result combines a Lyapunov function adapted from the bias-less case [5,10] with a detailed averaging analysis along the lines of [11], showing that the averaging approximation becomes exact as we approach the target.
• Section 4 provides simulation results illustrating the working of our controller for the steering controlled vehicle and giving a hint at expected performance.
• In Section 5, we note how the MIC can be reformulated in an abstract context of output biases, to possibly solve problems beyond symmetry-preservation. In particular, we discuss its application to linear systems, where it enables a trade-off between rejecting biases on the input or on the output.

Underactuated Rigid Body Motion
The motion of a rigid body can be described on a Lie group by

d/dt g = g ξ, (1)

where g is the position on the Lie group G and ξ is the Lie velocity in body frame; the latter belongs to the Lie algebra g, i.e., the tangent vector space to the Lie group at identity. Examples of this model include rigid body attitude control, where g is a rotation matrix and ξ is an angular velocity matrix [2]; or the motion of a steering controlled vehicle [4], where

g = [ Q_θ p_x p_y ; 0 0 1 ] ∈ SE(2), ξ = [ 0 −ω v ; ω 0 0 ; 0 0 0 ],

with p_x, p_y the vehicle position, Q_θ the rotation matrix of its heading θ, v a fixed translation velocity in body frame and ω the controlled angular rotation rate. We will treat this last example concretely and in detail. For controlling satellite orientation, one can usually apply any angular velocity in g = so(3) (full actuation [2]). In contrast, a vehicle can usually not have a sideways velocity; in model (1), we can command one single degree of freedom in ξ, which is thus restricted to a subset of g = se(2). This situation is called underactuation or nonholonomic motion. While a nonholonomic system may still reach all g ∈ G, i.e., for the vehicle all three-dimensional values of heading θ and position p_x, p_y, it can only do so indirectly, and this complicates bias rejection.
Nonholonomic systems are standard in geometric control and the stabilization of symmetries (see the next paragraph). However, surprisingly little has been said about what happens to symmetries when the actual vehicle model differs from the nominal one. We therefore introduce an actuation bias explicitly into the model:

d/dt g = g (A + u + u_B),

where A ∈ g is a fixed drift velocity, the control u belongs to a subspace U ⊂ g orthogonal to A, and u_B ∈ g is an actuation bias.
When u_B lies in the same subspace U as u, countering the bias comes down to knowing its value and applying an opposite correction, u = −u_B + u_0, where u_0 ∈ U would be the control input applied in the absence of bias. One approach is to write a so-called dynamical observer [12][13][14][15], which estimates all the variables that have known dynamical models, including the constant biases. A computationally less intensive approach would use integral control [6]. If the nominal input is u = u_0 and the measured deviation from target at time t is e(t), then standard integral control applies

u = u_0 − k_I u_I with d/dt u_I = e(t). (2)

Thus, "the longer the error has been positive, the stronger we push in the negative direction". Mathematically, if u_I stabilizes to a fixed value, then Equation (2) implies e(t) = 0. The computation (2) only makes sense if u_I and e belong to the same vector space. Thus, if for instance u_I is a Lie group velocity and e a Lie group configuration (error) as in (1), it must be adapted. In [7,8] concurrently, it has been proposed how to adapt (2) for Lie groups where U = g, i.e., under full actuation. Indeed, since u_0 usually reflects how much we push against the configuration error, it is proposed to replace (2) by

u = u_0 + u_I with d/dt u_I = k_I u_0(t). (3)

Thus, "the longer we have been pushing in the negative direction, the stronger we keep pushing in that direction". The equation always makes sense since both u_0 and u_I belong to g; e.g., for rigid body motion they represent the rotational and translation velocities expressed in body frame. It is proven that, for appropriate tuning, a first-order or second-order integrator on a Lie group can be stabilized towards u_0 = 0 and u_I + u_B = 0 with this approach.
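As a reference point for what follows, the standard mechanism (2) can be checked on a one-dimensional toy system (an illustration of ours, with arbitrary gains): the integral state grows as long as the error persists, and settles exactly where k_I u_I cancels the unknown input bias.

```python
# Standard integral control (2) on the scalar toy plant de/dt = u + u_B,
# with nominal proportional input u_0 = -k_P * e and unknown bias u_B.
k_P, k_I, u_B = 2.0, 1.0, 0.5
e, u_I = 1.0, 0.0
dt = 0.001
for _ in range(int(30 / dt)):
    u = -k_P * e - k_I * u_I   # u = u_0 - k_I * u_I
    u_I += dt * e              # d/dt u_I = e(t), cf. (2)
    e += dt * (u + u_B)        # biased plant
print(e, k_I * u_I)  # e -> 0 while k_I*u_I -> u_B = 0.5
```

The only equilibrium of this loop has e = 0 and k_I u_I = u_B, which is exactly the bias-rejection property exploited below.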
The present paper addresses the case where u_B ∉ U, such that it is not possible to just cancel it with the control input. For instance in steering control, if the translation velocity v differs from its nominal value, then this cannot be directly canceled by changing the value of our control input ω (rotation rate). In this case, we must find an objective that is compatible with the presence of u_B; this brings us to coordinated motion.

Coordinated Motion
For the case with u B = 0, we follow the framework of [10], which was developed as a general theory behind proposals like [4,5] for steering control in the plane.
A set of rigid bodies is said to undergo a coordinated motion if their configuration at any time t is equal to their configuration at time 0, up to a symmetry transformation of the reference frame. In other words, the whole set moves together like a single rigid body. Ensuring coordinated motion can be seen as imposing symmetry on the behavior of the set of vehicles.
In the Lie group notation, coordinated motion of a vehicle g(t) with respect to a reference vehicle g_ref(t) corresponds to keeping a fixed relative configuration g^{-1}(t) g_ref(t) = h. In terms of velocities expressed in body frame, this is equivalent to

Ad_g ξ = Ad_{g_ref} ξ_ref, (4)

where Ad_h is the adjoint action of h on the Lie algebra; in terms of matrices, we have Ad_h(ξ) = h ξ h^{-1}. Note that this adjoint action is not just a rotation since h in general is not unitary: it really expresses which combination of translation and rotation velocities ξ, at a configuration g, would correspond to a coordinated motion with a vehicle applying ξ_ref at configuration g_ref.
Once (4) is achieved at a given time, keeping ξ_ref and ξ fixed ensures that it stays satisfied for all times, since by definition Ad_{g^{-1} g_ref} then also stays fixed. Such situations are known as relative equilibria and they form basic primitives for further control [9]. For instance, for planar vehicles, we can apply the above with the matrix expressions (1). Consider at time t a reference configuration g_ref with (p_x, p_y, θ)_ref = (1, 0, π/2), following a circular trajectory of radius 1 around the origin by applying ξ_ref with (ω, v) = (1, 1). Consider a tracking vehicle at configuration g with (p_x, p_y, θ) = (2, 0, π/2). Intuitively, to maintain coordinated motion, i.e., reference and tracking vehicles moving together like a single rigid body, the tracking vehicle would have to follow a circle of twice the radius of the reference. Mathematically, using the matrix expressions (1), the relative configuration h = g^{-1} g_ref corresponds to zero relative heading and relative position (0, 1) in body frame, and the velocity ξ = Ad_h ξ_ref corresponds to (ω, v) = (1, 2). This is in agreement with motion on a circle of radius 2 around the origin. By computing the derivative of relative position and orientation in body frame, one can check that applying this velocity indeed keeps the relative configuration fixed. The condition (4) for coordinated motion constrains g indirectly, in combination with ξ. When ξ can be freely chosen in g, any relative position g^{-1} g_ref is compatible with a relative equilibrium. However, for nonholonomic systems, where ξ is restricted to U, relative equilibria can restrict the set of relative configurations g^{-1} g_ref, namely to the situations where Ad_{g^{-1} g_ref} ξ_ref remains in the actuated set. In the case of a steering controlled vehicle (1) with v fixed and equal for all vehicles, there are two types of relative equilibria: parallel straight motion (ω_ref = 0), where the underactuation restricts all the headings to be equal; and rotation on a common circle (ω_ref ≠ 0), where the underactuation thus restricts positions and headings to be properly aligned on that circle.
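The planar relative-equilibrium computation can be checked numerically with 3×3 homogeneous matrices for SE(2) (our own sketch; for consistency with a unit-radius reference circle we take (ω, v) = (1, 1), so that the radius v/ω equals 1). The tracker at (2, 0, π/2) should obtain ω = 1 and translation speed 2, i.e., a circle of radius 2 around the origin.

```python
import math

def se2(px, py, th):
    c, s = math.cos(th), math.sin(th)
    return [[c, -s, px], [s, c, py], [0.0, 0.0, 1.0]]

def xi_mat(omega, v):  # Lie algebra element of se(2)
    return [[0.0, -omega, v], [omega, 0.0, 0.0], [0.0, 0.0, 0.0]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def inv_se2(g):  # inverse of a rigid-body transform
    r = [[g[0][0], g[1][0]], [g[0][1], g[1][1]]]  # transposed rotation block
    tx = -(r[0][0] * g[0][2] + r[0][1] * g[1][2])
    ty = -(r[1][0] * g[0][2] + r[1][1] * g[1][2])
    return [[r[0][0], r[0][1], tx], [r[1][0], r[1][1], ty], [0.0, 0.0, 1.0]]

g_ref = se2(1.0, 0.0, math.pi / 2)
g = se2(2.0, 0.0, math.pi / 2)
h = mul(inv_se2(g), g_ref)                       # relative configuration
xi = mul(mul(h, xi_mat(1.0, 1.0)), inv_se2(h))   # Ad_h(xi_ref) = h xi_ref h^-1
print(xi[1][0], xi[0][2], xi[1][2])  # omega = 1, v_x = 2, v_y = 0
```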
Papers like [4,5] for planar vehicles, and [10] more generally, have proposed control laws u = u 0 (g −1 g ref , ξ ref ) to stabilize such coordinated motion for perfectly calibrated systems.
Stabilizing the symmetric motion (4) thus leaves some freedom on the actual g and ξ. The freedom on g is most visible: for a given solution (g, ξ), any alternative (g' = hg, ξ) where h satisfies Ad_h ξ = ξ is also a solution. This symmetry allows for complementing the controller stabilizing the coordinated motion by a controller that would specify particular relative configurations h, according to some other application needs and criteria. In addition, it is possible to satisfy (4) with a different ξ, i.e., a different velocity applied in body frame. This allows for satisfying coordinated motion even in the presence of biases, where ξ = A + u + u_B. The possibilities depend on the range of Ad_g ξ viewed as a function of g. For instance, when g represents an orientation (see satellite attitude control, [1,2]), Ad_g corresponds to an orthogonal matrix and it is thus necessary that the available u and bias include the possibility ξ = ξ_ref. For steering control, we must again distinguish two cases. For straight motion (ω_ref = 0), coordination would require the biased translation velocity to exactly match the reference one, which is impossible with an uncountered bias. For circular motion (ω_ref ≠ 0), as long as we apply the same angular velocity, and independently of the translation velocity, we can always find a configuration where coordinated motion is possible. In other words, {Ad_g ξ : g ∈ G, ξ ∈ U} covers all of g except the measure-zero set where ω = 0 and the translation velocity differs from the one implied by A + u_B; more concretely, the symmetry of coordinated motion can be restored, in principle, for any circular reference motion and any bias. The question is how to design a simple control algorithm that stabilizes this coordinated motion.

A "Moderating" Integral Controller (MIC)
In the absence of bias, designs u = u_0(g^{-1} g_ref, ξ_ref) have thus been proposed, e.g., in [4,5], to stabilize coordinated motion. To explain our bias-rejection idea, we split the input into two terms u = u_s + u_m. The part u_s corresponds to the steady-state motion once coordination is reached, i.e., there exists some h̄ such that

A + u_s + u_B = Ad_h̄ ξ_ref, (5)

and thus once this situation is reached it suffices to apply u = u_s to have a coordinated motion. For the steering controlled vehicle, u_s corresponds to ω_s = ω_ref. The part u_m in contrast steers the system towards a relative configuration h̄ satisfying (5). We denote by u_P(g^{-1} g_ref, ξ_ref) = u_0 − u_s the corresponding part of the controllers proposed in [4,5]. In the absence of bias, u_m = u_P ensures that the vehicles asymptotically reach a coordinated motion, and so we also have u_m = 0 asymptotically.
In the presence of bias, our idea for restoring perfect symmetry is to add into the controller a term that explicitly targets u_m = 0 asymptotically. More precisely, in the presence of a bias, u_P(g^{-1} g_ref, ξ_ref) = 0 will hold at configurations which do not correspond to a true coordinated motion; the configuration will thus change over time and, likely, u_P will not remain zero. However, as u_P does take a particular value u_P(h̄) at each point h̄ which truly corresponds to a coordinated motion in the presence of bias, we can consider this value as a cancelable input bias and write an integral controller to cancel it:

u_m = u_P(g^{-1} g_ref, ξ_ref) − k_I u_I with d/dt u_I = u_m. (6)

Here, u_m is known and can be considered as an observed output y which we want to stabilize to 0; in contrast, u_P(h̄) is not known, because we do not know at which value of h̄ the true coordinated motion holds: this is thus like an input bias. The remaining terms form the effective controller, which involves u_P(g^{-1} g_ref) − u_P(h̄), a proportional action taking the value zero at the true coordinated motion g^{-1} g_ref = h̄ as standard, and u_I, a standard integral controller like (2) with e(t) = y(t) = u_m(t).
The modified integral controller u_I in (6) thus somehow moderates the effect of u_P, pushing it towards canceling its value at h̄.

Remark 1.
We have here assumed that the value of u_s for which (5) is feasible is known; only the corresponding h̄ is unknown. In the steering vehicle example, this comes down to assuming a possible bias on the translation velocity, but an exactly known rotation velocity. We did this to focus on integral control for countering bias in unactuated directions. An unknown bias ω_B in the actuated direction, yielding rotation velocity u = u_0 + ω_B, could be countered with standard integral control, without the specificities of symmetries and coordination. We will come back to the slight adaptation needed for this case in the simulations section.

Explicit Nominal Dynamics with Coordination Control
We begin by writing (1) more concretely for the planar vehicle. Following [4,5], we thus consider a rigid body moving in the plane, with position p ∈ R² and heading θ ∈ S¹, the unit circle. We denote by Q = Q_θ the 2 × 2 rotation matrix by angle θ that forms the upper left block of g in (1). By denoting v = [v; 0] ∈ R² the fixed translation velocity in body frame, the Lie group dynamics (1) rewrites:

d/dt p = Q v, d/dt Q = ω Q Q_{π/2}, (7)

where Q_{π/2} is the π/2 rotation. The control input thus reduces to u = ω, and we will assume that v can be subject to an arbitrary bias.
The reference motion is a rotation at constant rate ω_ref ≠ 0 around a fixed circle center c_ref, which we assume c_ref = 0 without loss of generality. By rewriting the objective (4) of coordinated motion, Ad_g ξ = Ad_{g_ref} ξ_ref, concretely, we get the physical requirements:

ω = ω_ref and c := p + (1/ω_ref) Q_{π/2} Q v = c_ref. (8)

The rotation center c will play an explicit role in the analysis; its dynamics,

d/dt c = (1 − ω/ω_ref) Q v,

just expresses that rotating at angular velocity ω around a fixed point is equivalent to rotating at angular velocity ω_ref around a point that is itself turning. The measurements performed towards feedback stabilization are the relative heading and relative position in body frame [4,5]. We here take for instance the simple "proportional controller" design of [5,10]. For our purposes, we add saturation towards a better global behavior and introduce a notation with scaling parameter ε > 0 towards the convergence proof. With these elements, the proportional controller, assuming v = e_1, writes

ω = ω_ref + u_P, u_P = ε k_P ω_ref (Q e_1)ᵀ σ(p + (1/ω_ref) Q_{π/2} Q e_1 − c_ref). (9)

Here, k_P > 0 is the proportional control gain, I_2 denotes the 2 × 2 identity matrix, and σ : R² → R² is a saturation function:

σ(x) = min(1, δ/‖x‖) I_2 x,

for some appropriate δ ∈ R_{>0}. The saturation bounds the amplitude of the feedback control actions w.r.t. the reference rotation rate ω_ref.
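The following minimal sketch illustrates the structure of (8) and (9); the exact constants are our reading of the display equations, so treat the formulas for the estimated center and for u_P as assumptions. A vehicle sitting exactly on the nominal unit circle has estimated center 0, so the saturated proportional term vanishes and the controller simply applies ω = ω_ref.

```python
import math

def sat(x, delta):  # saturation sigma: identity inside the ball of radius delta
    n = math.hypot(x[0], x[1])
    f = 1.0 if n <= delta else delta / n
    return (f * x[0], f * x[1])

def c_hat(p, th, omega_ref):  # estimated center, assuming nominal speed e_1:
    # p + (1/omega_ref) * Q_{theta + pi/2} e_1
    return (p[0] - math.sin(th) / omega_ref, p[1] + math.cos(th) / omega_ref)

def u_P(p, th, omega_ref, eps, k_P, delta):
    s = sat(c_hat(p, th, omega_ref), delta)
    # project the saturated center error onto the heading direction Q_theta e_1
    return eps * k_P * omega_ref * (math.cos(th) * s[0] + math.sin(th) * s[1])

p, th = (1.0, 0.0), math.pi / 2  # on the unit circle, heading tangent to it
print(c_hat(p, th, 1.0))              # ~ (0, 0)
print(u_P(p, th, 1.0, 0.1, 1.0, 0.2))  # ~ 0, so omega = omega_ref
```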

Bias-Rejecting Controller and Proof Strategy
We now focus on countering biases on the unactuated directions, i.e., on the translation velocity. We write v = |v| Q_φ e_1 with some φ ∈ (−π/2, π/2), and assume that some bounds v_m, v̄, χ > 0 are known on the velocity error, such that |v| ≥ v_m, ‖e_1 − v‖ ≤ v̄ and cos φ ≥ χ. These bounds will determine the acceptable saturation in u_P which still allows for stabilizing the correct motion. Indeed, we will select δ ≥ 10 v̄/(χ ω_ref) in (9), although this is not a tight condition.
To reject the bias, we thus apply (6); let us rewrite it for convenience:

ω = ω_ref + u_P − k_I u_I, d/dt u_I = u_P − k_I u_I, (10)

with u_P given by the second line of (9). Note that now this second line does not correspond to the true circle center c anymore, because the latter would have to be computed with the (unknown) v instead of e_1, i.e.,

c = p + (1/ω_ref) Q_{π/2} Q v.

This true circle center is unknown to the vehicle. For further reference, we denote by ĉ the (possibly wrong) circle center used in u_P, i.e., computed by the vehicle assuming nominal dynamics with translation velocity equal to e_1.
On that basis, we can make a concrete interpretation of the controller along the general lines of Section 2.3. Coordinated motion requires by definition ω = ω_ref and c = c_ref, independently of how the rest of the vehicle model is written. Thus, in the presence of bias, perfect coordination would still be obtained with a proportional controller that replaces ĉ by the true circle center c, that is if we applied ũ_P = ε k_P ω_ref (Q e_1)ᵀ σ(c − c_ref). Since v is not known, the proportional controller features a bias u_B = u_P − ũ_P. When the saturation is inactive, the expression simplifies and we obtain, with c_ref = 0:

u_B = ε k_P ω_ref (Q e_1)ᵀ (ĉ − c) = ε k_P e_1ᵀ Q_{π/2} (e_1 − v). (11)

For later convenience, we rewrite this constant as ω_B1 := ε k_P |v| sin φ. This effective input bias can be viewed as the information missing for restoring symmetry. The integral controller can be seen as progressively deducing, based on observed deviations from symmetry over time, what is the value u_B that must be countered.
With this controller (10), we get the closed-loop dynamics for the system:

d/dt c = −((u_P − k_I u_I)/ω_ref) Q v, d/dt u_I = u_P − k_I u_I, d/dt θ = ω_ref + u_P − k_I u_I. (12)

In the remainder of this section, we prove that this system indeed globally converges towards perfect coordinated motion.
Local stability of (12) under appropriate tuning can easily be proven by linearization around c = 0, k_I u_I = ω_B1. To rigorously prove global stability under appropriate tuning, we use averaging theory with an estimation of the basin of attraction. The averaging method requires that the variable over which we average, here θ, is on a fast timescale 1/ω_ref = O(1) with respect to the slow variables to be studied, here c, u_I, which evolve on a timescale 1/ε; hence we will take ε ≪ 1. The proof proceeds in two steps:
• First (Section 3.3), we study more accurately the local stability, specifying a region of exponential attraction (c, u_I) ∈ B_{r_1}, a ball of radius r_1 > 0 around (c = 0, k_I u_I = ω_B1) where the saturation is inactive, and we fix a value ε_1 such that our conclusions are valid for all ε ≤ ε_1.
• Then (Section 3.4), we analyze the system behavior for (c, u_I) ∈ W_2 = R³ \ B_{r_2}, with r_2 < r_1, and we prove that for all ε ≤ ε_2 the system starting in W_2 will end up in B_{r_1}.
This allows for concluding that the system converges globally for ε ≤ min(ε_1, ε_2). Our conclusions are obtained by exhibiting a so-called Lyapunov function V, whose scalar value is shown to decrease along the solutions of the closed-loop system. We take:

V(c, ũ_I) = (1/2)(‖c‖² + µ ũ_I²),

where µ > 0 has dimensions of area, and where ũ_I = u_I − ω_B1/k_I follows the same dynamics as u_I. The balls are also defined with the µ-dependent metric, i.e., B_r = {(c, ũ_I) : ‖c‖² + µ|ũ_I|² < r²}. Before proceeding, let us confirm that averaging over θ makes sense.

Lemma 1.
For fixed k_P, k_I, δ, ω_ref, any initial conditions, and any C > 0, we can ensure that dθ/dt > C max(‖dc/dt(t)‖, |du_I/dt(t)|) after some initial time, by selecting a small enough ε > 0.

Proof. The definition of σ implies that |u_P(t)| is upper bounded by

|u_P(t)| ≤ ε k_P ω_ref δ, (13)

and |u_I| can only decrease when |u_I| > ε k_P ω_ref δ / k_I. This implies essentially the same bound as for u_P for k_I |u_I|, for all t ≥ T, with T the time needed to leave the set |u_I| > ρ ε k_P ω_ref δ / k_I, for any ρ > 1. The time-derivatives of (c, u_I) are then obviously bounded by ε times some constants, while dθ/dt equals ω_ref up to the same constants times ε.
In the following propositions, we will reduce our scope to what happens after we have reached |u_I| ≤ ρ ε k_P ω_ref δ / k_I, with basically ρ ≈ 1. Lemma 1 allows for defining a (trajectory-dependent) bijection φ : t → θ, where θ ∈ R is interpreted keeping track of the number of turns. We abstractly write the first two lines of (12) as

d/dt x = ε h(x, θ), (14)

with x = (c, u_I), and we will denote components by, e.g., h = [h_c; h_{u_I}]. Our strategy is thus to study the dynamics h of x = (c, u_I), by considering the fast dynamics of θ as a perturbation.
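The fast-slow mechanism behind (14) can be sanity-checked on a scalar toy example (ours, unrelated to the vehicle): the solution of dx/dt = ε h(x, θ) with dθ/dt = ω stays O(ε)-close, over the slow timescale 1/ε, to the solution of the θ-averaged system.

```python
import math

# Toy fast-slow system: h(x, theta) = -(1 + cos(2*theta)) * x averages over
# theta to h_av(x) = -x, so the slow variable should track the averaged flow
# x_av(t) = x0 * exp(-eps * t) up to an O(eps) error on the timescale 1/eps.
eps, omega = 0.05, 1.0
x, th = 1.0, 0.0
dt = 0.001
T = 1.0 / eps  # slow timescale
for _ in range(int(T / dt)):
    x += dt * (-eps * (1.0 + math.cos(2.0 * th)) * x)
    th += dt * omega
x_av = math.exp(-eps * T)  # averaged prediction exp(-1)
print(x, x_av)  # close, with a small difference of order eps
```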

Local Exponential Stability
Theorems 2 and 3 in [11] provide a way to deduce exponential stability of the original dynamics from that of the averaged dynamics. We first examine the averaged dynamics, analogously to part of the proof of Theorem 3 in [11].

Proposition 1. Consider dynamics (14) corresponding to (12) with fixed k_P, k_I, δ ≥ 10 v̄/(χ ω_ref), and let r_0 ≤ δ − v̄/ω_ref. Then, for any µ > 0, there exist α > 0 and ε_1 > 0 such that for all ε ≤ ε_1, there exists an increasing sequence of t_k (k ∈ Z) with t_k → ∞ as k → ∞ and t_{k+1} − t_k ≤ 4π/ω_ref for all k ∈ Z, satisfying

∫_{t_k}^{t_{k+1}} [ (∂V/∂c)ᵀ h_c(c, ũ_I, s) + (∂V/∂ũ_I) h_{ũ_I}(c, ũ_I, s) ] ds ≤ −α V (15)

for all k and for all (c, ũ_I) ∈ W_0 := {(c, ũ_I) : ‖c‖ ≤ r_0 and |k_I u_I| ≤ ε k_P ω_ref δ}.

Proof. The bound r_0 is chosen such that the saturation is inactive. We define the sequence such that each interval [t_k, t_{k+1}) corresponds to a 2π rotation of θ. Using the bound in the proof of Lemma 1 and denoting κ = ρ k_P δ, we get

t_{k+1} − t_k ≤ 2π/((1 − 2κε) ω_ref).

The sequence of t_k then satisfies the requirements as soon as κε < 1/4, i.e., for ε_1 < 1/(4 k_P δ) (taking ρ ≈ 1). Now, the integral in (15) corresponds to averaging over one period of θ, while keeping c and u_I fixed. We use the bijection φ, with thus dφ/dt = ω > 0, to change the integration variable from s to θ, and decompose the resulting factor 1/ω as (1/ω_ref)(1 + (ω_ref − ω)/ω), where the first term will give the desired result and the second term will be bounded. For the first term, an explicit averaging computation yields a negative definite bound of the desired form −αV; we refer to this dominant contribution as (16). There remains to account for the factor (1 + (ω_ref − ω)/ω) differing from 1. For c, we develop the corresponding integrand using (12).
Here, we defined β = k_I ũ_I/ω_ref and wrote c = ‖c‖ [cos ψ; sin ψ], and we used the bound (13) with ρ ≈ 1. The first two terms of the result are bounded by ε times constants times ‖c‖², and hence they will not perturb the dominant term (16) once we take ε ≪ 1. For the last term, we can use the Taylor expansion of 1/(a + bx) for small x, with x = ‖c‖, b = ε k_P cos(θ − ψ) and a = 1 − β. We can easily bound the higher order Taylor terms, so only the first order must be examined; its integral over a period is zero, and hence there only remains a term of order ε²‖c‖², which again does not significantly perturb the dominant term (16). For ũ_I, an analogous argument leads to terms of order ε ũ_I² and ε |ũ_I| ‖c‖.
We have just established that V decreases exponentially under the averaged dynamics. The following result establishes local exponential stability of the original system.

Proposition 2. Consider the same setting as in Proposition 1. Then, there exists ε_1 > 0 such that for all ε ≤ ε_1, the equilibrium (c, ũ_I) = 0 is locally exponentially stable. The region of attraction includes the ball B_{r_1}, for an r_1 > 0 which can be estimated explicitly from the constants of Proposition 1.

Proof. We check that our system satisfies the conditions of Theorem 2 in [11], such that exponential stability of the averaged system implies the same for the original system. Note that our balls correspond to the Euclidean norm of (c, √µ ũ_I). The condition about the Lyapunov function shape is then trivially satisfied by our quadratic V. The second condition, i.e., the exponential decrease of V under the averaged dynamics, is checked in Proposition 1. Remark 5 following Theorem 2 in [11] gives an associated estimate of the region of attraction, which includes ensuring that the trajectories stay in W_0 from Proposition 1. This estimate involves an exponential with T = 2π/((1 − 2κε) ω_ref) from our Proposition 1 and a maximum Lipschitz constant on h(x). We estimate the latter, with our metric, through a constant K to get the result.

Global Stability
For large ‖c‖, the saturation complicates the analysis and precludes global exponential convergence of the Lyapunov function. We next consider this situation explicitly and show how V decreases for initial conditions (c, ũ_I) outside a ball B_{r_2}.

Proposition 3. Consider the same setting as in Proposition 1. Let (c, ũ_I) ∈ W_2 = {(c, ũ_I) : ‖c‖² + µ ũ_I² > r_2² and |k_I u_I| ≤ ε k_P ω_ref δ}, with r_2 = 0.8 δ. There exist µ > 0, α > 0 and ε_2 > 0 such that, for all ε < ε_2, if the system stays in W_2 for t ∈ [0, T = 1/ε], then V(T) < V(0) − α.
Proof. We divide V(T) − V(0) into the two contributions ∆V_1 = V(Φ̄_T(c, ũ_I)) − V(c, ũ_I) and ∆V_2 = V(Φ_T(c, ũ_I)) − V(Φ̄_T(c, ũ_I)). Here, Φ_T(x_0) denotes the result at time T of integrating the dynamics (12) with initial conditions (x_0, θ_0 = 0), and Φ̄_T(x_0) is the result of integrating the averaged dynamics. Since we are now working away from x = (c, ũ_I) = 0, we can ensure that terms of order ε² and higher are dominated by, e.g., ε‖x‖; this facilitates the analysis compared to Proposition 1. In particular, we will take ω = ω_ref + O(ε). ∆V_2 thus gives the difference between the results of the actual and averaged dynamics. According to the fundamental theorem of averaging (Theorem 2.8.1 in [16]), keeping the error estimates, there exists ε_2 > 0 such that for all ε ≤ ε_2 we have

‖Φ_T(x) − Φ̄_T(x)‖ ≤ ε C_1 e^{KL},

where K is a maximum Lipschitz constant on h(x) and C_1 is a bound on the integrated difference in vector flow between the averaged and original systems. From Lemma 2.8.2 in [16], we get C_1 ≤ (2 + KL) max_x ‖h(x)‖/ω_ref, where, thanks to the saturation and (13), we have a uniform bound on max_x ‖h(x)‖, and the same holds for the averaged dynamics. Since V(Φ̄_T(x)) = (1/2)‖Φ̄_T(x)‖² in the µ-dependent metric, we thus get, for fixed L = εT and ε ≤ ε_2, a bound |∆V_2| ≤ ε C_2 (1 + ‖x‖) for some constant C_2. Note that ‖x‖ ≤ √µ |ũ_I| + ‖c‖, where ũ_I is bounded.
∆V_1 is the gain in Lyapunov function value under the averaged dynamics. For ũ_I, we observe that by symmetry the average of the c-dependent part of u_P over one period vanishes, so the averaged ũ_I-dynamics essentially retains the stable term −k_I ũ_I. This implies that the contribution of µ ũ_I² to ∆V_1 is nonpositive. For c, by considering the term Q_{π/2} inside the saturation as a disturbance on c, we can establish that the averaged dynamics contracts ‖c‖ up to a disturbance of size γ, with γ ≤ 4v̄, and we can, e.g., select µ accordingly. Over a period T, with εT = L, the corresponding contribution of (1/2)‖c‖² to ∆V_1(x) is negative, with a magnitude bounded away from zero on W_2. Taking all contributions together and fixing L = 1, we obtain a strictly negative bound on ∆V_1 for ‖c‖ > δ/√2. Selecting µ < δ²/(4β_c) ensures that the ε-independent part of the total bound is (sufficiently) negative, and then a sufficiently small ε obviously yields the result.
We now have all the elements to state our main result.

Theorem 1. There exists ε_0 > 0 such that, for all 0 < ε ≤ ε_0, the controlled system (12) with fixed k_P, k_I, δ ≥ 10 v̄/(χ ω_ref) converges globally to (c, ũ_I) = 0.

Proof.

Proof.
• By Lemma 1, after an initial transient, the system reaches the subset where the bound (13) holds and stays there for all future times.
• Assume by contradiction that the system would stay in W₂ for all times. Then, by Proposition 3, the Lyapunov function would decrease like V(t) < V(0) − εαt for some constant α > 0 for all times, which is not possible since V must stay positive (in fact even larger than r₂²/2 if we stayed in W₂). Thus, the system cannot stay in W₂ forever.
• This means that, at some time, the system must enter B_{r₂}, and thus it also enters B_{r₁} since r₁ > r₂.
Once the system is in B_{r₁}, we know by Proposition 2 that it converges to (c, ũ_I) = 0 exactly.

Simulations
The benefit of our modified integral controller is illustrated with the following simulations. Figure 1 (left) shows a simulation of (12) with k_I = 0. After a short initial transient, the motion of the vehicle does converge to rotation on a circle centered at the origin, but not at the correct speed. Indeed, the tracking vehicle keeps accumulating a phase advance with respect to the reference: due to the bias, the controller thinks that it has not converged to the correct configuration yet, and keeps pushing. As a result, the symmetry of coordinated motion is broken: the relative configuration between the tracking vehicle and the reference is continuously changing, with macroscopic long-term effects. This is rather problematic for a controller whose very purpose is to stabilize coordinated motion. Our controller corrects this effect, as illustrated in Figure 1 (right) with tuning parameters k_P = 1, k_I = 0.1 and δ = 0.2. Indeed, after the initial transient during which the vehicle has to converge towards the correct circle, it moves with a fixed phase lag with respect to the reference, thus in coordinated motion. Figure 2 (left) shows a typical vehicle trajectory when starting farther away, and it confirms more visually that the vehicle converges to the correct circle (the fact that the vehicle is then rotating at the correct rate must be taken from other plots). Figure 2 (right) confirms the convergence of the true circle center c towards c_ref = 0, which by definition (check (8) and the corresponding remark) means that the vehicle is rotating at the correct rate on the correct circle. The convergence is first linear due to the feedback saturation, then exponential once |c| < δ.
The figure also shows that, while u_P still "thinks" that it has to apply a correction (due to the bias between the true c and the ĉ computed by the controller under the assumption v = e₁), the modified integral controller progressively learns to cancel it, such that the total control input reduces to u = ω_ref.
We next include an unknown bias on the angular velocity too: the rotation velocity of the vehicle thus now corresponds to ω = u + ω_B with ω_B unknown. The total effective bias thus now corresponds to u_P(h) + ω_B instead of just u_P(h). In other words, if we can drive g⁻¹g_ref towards h̄ and −k_I u_I + u_P(h) + ω_B towards 0, then we reach a perfect coordinated motion. It is tempting to view ω_B and u_P(h) as just two similar contributions to the total bias, and thus by analogy we would just use the integral controller (d/dt) u_I = u_P(g⁻¹g_ref) − k_I u_I + ω_B. The issue is that the controller a priori does not have access to the value of ω_B. However, the controller does observe the relative heading θ(t) − ω_ref t of the vehicle with respect to the reference. Remarkably, since (d/dt) θ(t) = ω, the integral term which we want to use in fact just corresponds to this observed relative heading, up to a constant. Thus, for the particular application of steering control, the general strategy in fact simplifies to feeding back a directly observed quantity. Figure 3 shows a simulation of this situation, with thus now biases on all velocities. We observe that the proposed modified integral controller indeed restores the symmetry of coordinated motion perfectly. The presence of ω_B, with all other parameters and initial values kept as before, just translates into a different steady-state relative phase between the reference and the vehicle. Figure 3. Evolution of position p over time with our modified integral controller, when adding as well an actuation bias ω_B on the actuated part of the velocity. This corresponds to biases everywhere, and the correcting integral controller appears to boil down to u_I(t) = θ(t) − θ(0). The perfect coordinated motion of the reference (black) and the actual motion (blue) is restored in that case too.
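This mechanism can be checked on a heading-only simplification of the model: the tracking vehicle applies u = ω_ref + u_P − k_I u_I, where u_I is the observed relative heading, while an unknown bias ω_B corrupts the actuation. The sketch below is a minimal illustration with made-up gains and bias values, not the tuning used in the figures; despite ω_B, the effective turning rate settles back at ω_ref, i.e., coordinated rotation at a fixed phase offset.

```python
# Heading-only sketch of the steering vehicle under an unknown turning-rate
# bias wB. All numerical values (wref, wB, kP, kI, theta0) are illustrative.
wref = 1.0            # reference rotation rate
wB = 0.3              # unknown actuation bias on the turning rate
kP, kI = 1.0, 0.1     # proportional and modified-integral gains
dt, T = 1e-3, 400.0
theta0 = 0.7
theta, theta_ref = theta0, 0.0
for step in range(int(T / dt)):
    t = step * dt
    # observed relative heading = integral of all past rate corrections,
    # bias included, so no separate knowledge of wB is needed
    uI = theta - wref * t - theta0
    u = wref + kP * (theta_ref - theta) - kI * uI
    theta += (u + wB) * dt        # the bias enters the actuation here
    theta_ref += wref * dt
rate = u + wB                     # effective turning rate after convergence
print(rate)                       # settles at wref: coordination restored
```

The heading error converges to a constant (a steady-state relative phase depending on initial conditions and ω_B), exactly as observed in Figure 3.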

Towards Other Applications of the Modified Integral Controller
The modified integral controller (6) may have other applications. We briefly discuss this in a more abstract system-theoretic context.
Consider the following abstract setting, see Figure 4:
• a system with state x(t) and variable of interest y(t) must be stabilized to some target (y, w) = 0 by the control command w(t);
• unlike in the standard setting, the value of y(t) is not measured, but deduced from actual measurements z(t) through a static function y = f(z, q), where q are some constant parameters;
• the exact value of q is not known; without loss of generality, q = 0 is nominal and f(0, 0) = 0.
The target variable y is thus not available to the controller, while w is known. This setting could appear in various applications, as a consequence of a residual calibration error or of a sensor fault. The stabilization of coordinated motion can be formulated in this framework, with the target variable to be put to zero given by (4). For the vehicle, this target variable has three scalar components. The last component is equivalent to w = 0; referring to (11), the others take the form of expressions involving known constants, the unknown q, and observed variables z.
The goal is thus to propose a simple control mechanism which would automatically cancel the error on y associated with q = 0, without requiring advanced logic like an observer estimating sensor parameters.
We repeat a reasoning similar to the one of Section 2.3. Assuming the nominal situation q = 0, a well-tuned proportional feedback w = u_P(z) would typically drive z to zero and thus y towards f(0, 0) = 0, at least locally. However, if q ≠ 0, then w = u_P(z) = 0 does not correspond to y = 0 anymore. To perfectly reach the target (y, w) = 0, the controller should instead apply w = u_P(z) − u_P(z̄^(q)), where z̄^(q) is a measurement value for which f(z̄^(q), q) = 0. Because q is not known, the value of z̄^(q) is not known either. However, in this formulation, the effect of q can be viewed just as an unknown input bias u_B = u_P(z̄^(q)) to be canceled.
A standard integral controller like (2) would require integrating the true error y, which here is not measured. The hope is then to correct the situation by integrating instead the input corrections w. Indeed, the target includes the requirement w = 0; and, if the effect of q ≠ 0 is observable on z (and this is an important requirement), then by monitoring the corrections that we apply we should learn something about q. This motivates the same modified integral controller (20) as proposed for coordinated motion, namely w = u_P(z) − k_I u_I with (d/dt) u_I = w. If this controller reaches a steady state, then we have achieved half our goal, namely w = 0. A further observability/invariance property of the plant is necessary to ensure that we stabilize the full target. Namely, if states with (w = 0, z ≠ z̄^(q)) are not invariant, while (w, z − z̄^(q)) = 0 is invariant, then we can hope to show convergence towards f(z, q) = f(z̄^(q), q) = 0 and k_I u_I = u_B.
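The difference with a standard integrator can be made concrete on a hypothetical scalar plant ẏ = −ay + w with biased measurement z = y + q (all numerical values below are illustrative). The standard PI controller integrates the measurement and thus drives z to zero, leaving the steady-state error y = −q; the modified integral controller integrates the applied correction w instead, so that at steady state w = 0 and the plant equilibrium ẏ = −ay = 0 forces y = 0.

```python
def simulate(mode, q=0.5, a=1.0, kP=2.0, kI=0.5, dt=1e-3, T=100.0):
    """Integrate dy/dt = -a*y + w with biased measurement z = y + q."""
    y, uI = 0.0, 0.0
    for _ in range(int(T / dt)):
        z = y + q
        w = -kP * z - kI * uI
        if mode == "pi":
            uI += z * dt   # standard integrator: accumulates the (biased) measurement
        else:
            uI += w * dt   # modified integrator: accumulates the applied correction
        y += (-a * y + w) * dt
    return y

y_pi = simulate("pi")    # settles at y = -q: the bias displaces the equilibrium
y_mic = simulate("mic")  # settles at y = 0: the output bias is rejected
print(y_pi, y_mic)
```

Note that the MIC only works here because (y, w) = 0 is a natural equilibrium of the plant, in line with the invariance requirement stated above.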
This last requirement shows that (20) is likely to work, in general, if (w, y) = 0 is a natural equilibrium and the purpose of the controller is to stabilize it, while avoiding that the feedback based on measurement z displaces the natural equilibrium from y = 0.
We can make this more explicit on a linear system. Consider, in the Laplace domain, z(s) = y(s) + q(s) = H(s)(w(s) + b(s)) + q(s), where the goal is to stabilize y(s) = 0, while b(s) is some actuation noise and q(s) is noise on the output measurement [6]. With a controller w(s) = C(s) z(s), we obtain the closed-loop equation y(s) = H(s)/(1 − H(s)C(s)) · b(s) + H(s)C(s)/(1 − H(s)C(s)) · q(s). For C(s) = 0, the measurement noise is of course not transmitted to y. However, a controller might be necessary for stabilizing the system or rejecting disturbances b according to frequency-dependent specifications.
At each frequency, strongly attenuating the effect of b requires a large value of |HC|, which implies that q is transmitted into y with almost unit gain. Thus, to perfectly counter a bias on the input, one must be able to trust the value of the measurement, and vice versa. A standard proportional-integral controller makes |C(s)| = |k_I/s + k_P| infinite at zero frequency, implying perfect rejection of b(0). Conversely, our modified integral controller is well adapted to applications where q poses major problems at low frequencies while b(0) is low. Indeed, the MIC with u_P = K(s) z writes C(s) = K(s) · s/(s + k_I).
For frequencies |s| ≫ k_I, we have C(s) ≈ K(s). For small s, however, the controller is attenuated; in particular, when H(0)K(0) is finite, the MIC ensures H(0)C(0)/(1 − H(0)C(0)) = 0, i.e., perfect rejection of q(0). The stability analysis must (only) investigate the changes implied at those low frequencies. Practical situations where it is relevant to reject q(0) rather than b(0) could appear when drifts of the sensor zero cannot be calibrated, e.g., a shift induced by a failure event or by fluctuations of operating conditions. Remark 2. The integrator with leakage as used in [17] would instead yield C(s) = (K(s)(s + a) + k_I)/(s + a), with a ≥ 0 the leakage parameter. For a = 0, one recovers an integral controller. For large a, we get C(s) ≈ K(s) at all frequencies. This a priori does not guarantee any specific transmission property for q(s); the purpose of a leaking integrator is indeed rather to ensure more robust stability.
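These two frequency limits of the MIC are easy to verify numerically. The snippet below uses an illustrative constant gain K(s) = K (negative, so that w = Cz acts as stabilizing negative feedback in the sign convention above) and a hypothetical stable first-order plant H(s) = 1/(s + 1); neither choice comes from the paper.

```python
kI, K = 0.5, -2.0   # illustrative values; K < 0 gives negative feedback

def C(s):
    # Modified integral controller: C(s) = K(s) * s / (s + kI), here K(s) = K
    return K * s / (s + kI)

def H(s):
    # Hypothetical stable first-order plant
    return 1.0 / (s + 1.0)

def Tq(s):
    # Transmission from measurement noise q to y: H*C / (1 - H*C)
    return H(s) * C(s) / (1.0 - H(s) * C(s))

# Perfect rejection of the measurement bias at zero frequency:
print(abs(Tq(0.0)))        # 0.0, since C(0) = 0
# Well above kI, the MIC acts like the plain proportional gain:
print(abs(C(1j * 100.0)))  # close to |K| = 2
```

At s = 0 the MIC simply switches the correction off, so q(0) cannot be transmitted; the trade-off is that b(0) is then not rejected, matching the duality discussed in the text.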

Conclusions
In this paper, we have investigated how a simple modified integral control (MIC) allows for restoring perfect coordination symmetry in the motion of steering controlled vehicles subject to actuation bias. The non-holonomic dynamics of such systems make it impossible to directly counter the bias. However, the conditions for perfect symmetry to be achievable are clear in a Lie group framework, and an adaptation of integral control to this situation is proved to yield perfect global convergence.
This control perspective on symmetry, more particularly on restoring it, seems to rely on stabilizing a manifold of configurations where symmetry holds, rather than formulating the problem directly as a tracking task. We have also discussed how this MIC might address other tasks, related to biases in output measurements. The common framework is that the target situation would be a natural equilibrium of the system, a feedback control action is used in order to stabilize it, and we must avoid that errors in the model used by the controller displace the steady state from the target situation.
Of course, more powerful state-based methods could also be devised, explicitly estimating all the model errors and biases and specifying corresponding actions; see, e.g., [15]. However, much like for standard integral control, we believe that simple control "gadgets" like the MIC have their own interest in terms of plug-and-play robustness (no need for an extensive system model in the controller) and simplicity of implementation (e.g., in analog hardware), and they may thereby provide insight into how some natural systems could be wired in order to preserve perfect symmetries. Also note that, like standard integral control, the strategy should remain applicable if the bias is state-dependent, which would be harder to formulate with explicit estimation.