Adaptive Interaction Control of Compliant Robots Using Impedance Learning

This paper presents an impedance learning-based adaptive control strategy for series elastic actuator (SEA)-driven compliant robots without the measurement of the robot–environment interaction force. The adaptive controller is designed based on the command filter-based adaptive backstepping approach, where a command filter is used to decrease computational complexity and avoid the requirement of high derivatives of the robot position. In the controller, environmental impedance profiles and robotic parameter uncertainties are estimated using adaptive learning laws. Through a Lyapunov-based theoretical analysis, the tracking error and estimation errors are proven to be semiglobally uniformly ultimately bounded. The control effectiveness is illustrated through simulations on a compliant robot arm.


Introduction
Safety in robot-environment interaction is of significant value and can be improved by passive compliant devices. As a popular compliant device, a series elastic actuator (SEA) introduces an elastic element between the motor and the load and brings benefits including low output impedance, tolerance to shocks, and energy efficiency [1][2][3]. The introduction of SEAs in robots improves interaction compliance to some extent, but (1) it cannot eliminate the conflict between high robot stiffness and the requirement of high compliance, and (2) the resulting compliant actuators have limited adaptability and applications, since an SEA makes the robot behave with only a fixed impedance. The compliance of SEA-driven robots should therefore be further improved by regulating the robot impedance through active compliance control.
As one of the most popular compliance control approaches, impedance control, proposed by Hogan in the 1980s [4], provides interaction compliance through a dynamical relationship between position and interaction force. To date, extensive impedance control strategies for rigid-link robots have been developed based on adaptive learning [5][6][7], sliding mode [8], neural networks [9][10][11], and so on. For impedance control implementation, one significant problem to be solved is the determination of the desired robot impedance, which depends strongly on the environmental impedance. Although a variety of methods, including least-squares techniques and programming by demonstration [12][13][14], have been developed for impedance learning, the impedance controllers based on these methods were usually designed without stability guarantees. Recently, model-based impedance learning control strategies [15][16][17] were developed for robot-environment interaction and validated in repetitive tasks with stability guarantees. This control approach can provide variable impedance regulation for robots without requiring interaction force sensing. However, the existing model-based impedance learning controllers mainly focus on rigid-link robots. The extension of model-based impedance learning control to SEA-driven compliant robots is not straightforward, since the introduction of an SEA significantly increases control design complexity and turns the control system from a second-order fully actuated system into a fourth-order underactuated system. Based on the above analysis, designing model-based impedance learning control for SEA-driven robots can exploit the advantages of both passive compliant devices and active compliance control to improve robot-environment interaction performance, but to date, no results on this topic have been reported.
In this paper, stability-guaranteed adaptive control using model-based impedance learning is proposed for SEA-driven robots, whose dynamics form a fourth-order underactuated system. Impedance parameters of the interaction force and model uncertainty parameters are estimated using differential adaptation laws driven by tracking errors. In the control design, the command filter-based adaptive backstepping approach is used to decrease computational complexity and avoid the requirement of high derivatives of the robot position in the backstepping control of SEA-driven robots. We prove the semiglobal stability of the closed-loop control system theoretically and illustrate the control effectiveness through simulations on an SEA-driven robot arm. The proposed control strategy can be applied to a variety of robot-environment interaction tasks, including robot-assisted rehabilitation, exoskeletons, and polishing. Compared to related results, the contribution of this paper lies in the design of an adaptive impedance learning controller for SEA-driven compliant robots that achieves variable impedance regulation without interaction force sensing.

Robot Dynamics
The considered compliant robot has the following dynamics:

M(q)q̈ + C(q, q̇)q̇ + G(q) = K(θ − q) + τ_en
Bθ̈ + K(θ − q) = τ    (1)

where q ∈ R^n and θ ∈ R^n denote the positions of the rigid-link robot and the SEA, respectively; M(q) and B denote the inertia matrices; C(q, q̇) denotes the Coriolis and centrifugal matrix; G(q) is the gravity torque; K is the stiffness matrix of the SEA; τ_en denotes the interaction force between the robot and its environment; and τ is the system control input.
Property 1. M(q) and B are symmetric and positive definite matrices that satisfy σ_1 I_n ≤ M(q) ≤ σ_2 I_n, where σ_1 and σ_2 are positive constants.
Property 2. The matrix Ṁ(q) − 2C(q, q̇) is skew-symmetric.
Property 3. The robot dynamics have the following parameterized form

M(q)x + C(q, q̇)y + G(q) = Y(x, y, q, q̇)W

where Y(·) is a known regressor matrix and W is a constant vector containing the unknown robot parameters.
Remark 1. The model in (1), derived by Spong [18], strikes a balance between complexity and physical validity by neglecting the inertial coupling between the link-side dynamics and the motor dynamics.
The viability of the model in (1) has been demonstrated for compliant robots with SEAs [19].
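For illustration, the coupled dynamics in (1) can be integrated numerically as a fourth-order system with state (q, q̇, θ, θ̇). The sketch below assumes a single-link arm and uses illustrative parameter values and a simple viscous term in place of the Coriolis matrix; none of these numbers come from the paper.

```python
import numpy as np

def sea_step(x, tau, tau_en, dt, M=1.0, B=0.1, K=100.0, d=0.5, g_torque=0.0):
    """One Euler step of single-link SEA dynamics of the form (1):
    M*q'' + d*q' + G = K*(theta - q) + tau_en   (link side)
    B*theta'' + K*(theta - q) = tau             (motor side)
    State x = [q, q_dot, theta, theta_dot]."""
    q, qd, th, thd = x
    qdd = (K * (th - q) + tau_en - d * qd - g_torque) / M
    thdd = (tau - K * (th - q)) / B
    return np.array([q + dt * qd, qd + dt * qdd, th + dt * thd, thd + dt * thdd])

# With zero input, zero interaction force, and no gravity,
# the rest configuration is an equilibrium of the fourth-order system.
x = np.zeros(4)
for _ in range(1000):
    x = sea_step(x, tau=0.0, tau_en=0.0, dt=1e-3)
```

Note that only the motor side is directly actuated by τ; the link is driven through the spring torque K(θ − q), which is what makes the system underactuated.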
Denote q_d as the desired trajectory of the robot in the interaction. Define the tracking error e_1 as e_1 = q_d − q.
As proven and presented in [17], the robot-environment interaction force can be expanded as

τ_en = K_s e_1 + K_d ė_1    (6)

where K_s = diag{K_si} and K_d = diag{K_di} denote the stiffness and damping terms of the interaction, respectively. Denote

Q_e = [diag{e_1}, diag{ė_1}],  V = [K_s1, …, K_sn, K_d1, …, K_dn]^T.

Then, the force τ_en can be expressed as τ_en = Q_e V. The objective of this paper is to design model-based adaptive impedance learning control using differential adaptation to estimate the impedance profiles in V so that the tracking error e_1 and the impedance estimation errors are uniformly ultimately bounded (UUB) without the measurement of the interaction force τ_en.
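The linear-in-parameters form τ_en = Q_e V separates the measurable error signals (in Q_e) from the unknown impedance profiles (in V), which is what makes differential adaptation of V possible. A minimal numerical check of this factorization, with made-up 2-DOF values:

```python
import numpy as np

# Hypothetical 2-DOF environment impedance and tracking errors (illustrative only).
Ks = np.array([8.0, 12.0])       # stiffness terms K_si
Kd = np.array([0.5, 0.9])        # damping terms K_di
e1 = np.array([0.10, -0.05])     # position error e_1
e1_dot = np.array([-0.02, 0.04]) # velocity error

# Regressor Q_e = [diag(e_1), diag(e_1_dot)] and parameter vector V.
Q_e = np.hstack([np.diag(e1), np.diag(e1_dot)])  # shape (2, 4)
V = np.concatenate([Ks, Kd])                     # shape (4,)

tau_en = Q_e @ V                       # linear-in-parameters form Q_e V
tau_en_direct = Ks * e1 + Kd * e1_dot  # K_s e_1 + K_d e_1_dot, as in (6)
```

Both expressions produce the same interaction force; the controller only ever needs Q_e, which is computable from measured errors.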

Impedance Learning-Based Interaction Control
This section presents an impedance learning-based adaptive interaction control strategy for the considered compliant robot using the command filter-based adaptive backstepping (CFAB) approach. The control design procedure is stated as follows:
Step 1: Define the error e_2 as

e_2 = ė_1 + k_1 e_1

where k_1 is a positive parameter. Based on (1), the dynamics of e_2 can be stated as

M(q)ė_2 = −C(q, q̇)e_2 + Y_e W − Q_e V − K(θ − q)

where Y_e ≜ Y(q̈_d + k_1 ė_1, q̇ + e_2, q, q̇). Design the virtual control α_1 as

α_1 = q + K^{−1}(k_2 e_2 + Y_e Ŵ − Q_e V̂)

where k_2 is a positive control gain and Ŵ and V̂ are the estimates of W and V, respectively. The estimates are updated by

Ŵ̇ = γ_1 Y_e^T e_2,  V̂̇ = −γ_2 Q_e^T e_2    (11)

where γ_1 and γ_2 are positive learning rates. Pass α_1 through the following command filter

ẋ_1 = x_2,  ẋ_2 = −2ξω x_2 − ω²(x_1 − α_1)    (12)

where ω and ξ ∈ R are the frequency and the damping ratio of the filter, respectively; x_1 and x_2 provide the filtered virtual control and its derivative, and the filtering error is denoted α̃_1 = x_1 − α_1.
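The command filter in (12) is a standard second-order low-pass filter whose states approximate the virtual control and its time derivative, so that α̇_1 never has to be computed analytically. The sketch below filters a test signal with illustrative values of ω and ξ (not the paper's) and checks that both filter states track their targets:

```python
import numpy as np

omega, xi, dt = 100.0, 1.0, 1e-4
t = np.arange(0.0, 3.0, dt)
alpha1 = np.sin(2.0 * t)   # virtual control signal to be filtered (illustrative)

x1, x2 = 0.0, 0.0          # filter states: x1 ~ alpha1, x2 ~ d(alpha1)/dt
x1_hist, x2_hist = [], []
for a in alpha1:
    x1_dot = x2
    x2_dot = -2.0 * xi * omega * x2 - omega**2 * (x1 - a)  # filter (12)
    x1, x2 = x1 + dt * x1_dot, x2 + dt * x2_dot
    x1_hist.append(x1)
    x2_hist.append(x2)

x1_hist, x2_hist = np.array(x1_hist), np.array(x2_hist)
mask = t > 0.5             # skip the initial transient
track_err = np.max(np.abs(x1_hist[mask] - np.sin(2.0 * t[mask])))
deriv_err = np.max(np.abs(x2_hist[mask] - 2.0 * np.cos(2.0 * t[mask])))
```

Increasing ω shrinks both errors, which is the content of Lemma 1: the filtering error can be made arbitrarily small by choosing ω sufficiently large.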
Step 2: For the SEA, define the errors e_3 and e_4 between the SEA states and the command filter outputs. From (1), the dynamics of e_4 can be presented as in (14). Design the control input τ as in (17), where k_4 > 0; the resulting closed-loop dynamics of e_4 are given in (16).
Remark 2. The use of the command filter in (12) decreases computational complexity and avoids the requirement of the high derivatives of positions in conventional backstepping control of SEA-driven robots.
Lemma 1. Consider the command filter in (12) on t ∈ [0, T), with T being a finite value. Given a small ε ∈ R⁺, there exists a sufficiently large ω such that the filtering error satisfies ||α̃_1|| ≤ ε on t ∈ [0, T).

Theorem 1. Design the impedance learning-based adaptive interaction controller in (17) with the learning law in (11) for the considered compliant robot dynamics in (1). Then the tracking error e_1 and the estimation errors W̃ and Ṽ are semiglobally uniformly ultimately bounded (SUUB).
Proof. Consider the Lyapunov function candidate L. Taking the time derivative of L and substituting (14) and (16), one can obtain L̇. From Property 2 and the update laws in (11), the sign-indefinite estimation terms cancel. According to Lemma 1, if the parameter ω is chosen sufficiently large, ||α̃_1|| ≤ ε on [0, T). Using Young's inequality, the filtering-error terms can be bounded as in (21) and (22). Based on (21) and (22), one can obtain (23), which implies (24). Based on Lemma 1 and (24), we can conclude that ||α̃_1|| ≤ ε is satisfied for t ∈ [0, ∞) if the parameter ω is chosen sufficiently large. Given the initial values of the closed-loop control system, the inequality in (23) is satisfied for t ∈ [0, ∞) if the control parameters are properly chosen. Therefore, the proposed impedance learning-based adaptive controller renders the closed-loop control system SUUB.
Simulation Results
In the simulation, a regulation problem and a tracking problem are considered as two cases, where q_d = 0.7 rad in Case 1 and q_d = 0.2 + 0.3 cos(πt/6) rad in Case 2. The simulation results for Case 1 and Case 2 are presented in Figures 1-3 and Figures 4-6, respectively.
In Case 1, under the proposed controller with the control input shown in Figure 3, the regulation error e_1 in Figure 1 is very close to zero after 10 s. In Figure 2, it can be seen that although W̃ and Ṽ do not converge to zero, owing to insufficient excitation and the coupling between the robotic parameter uncertainties and the impedance uncertainties, the robotic parameter estimation error W̃ and the impedance profile estimation error Ṽ are significantly reduced after 5 s, and the force estimation errors Y_e W̃ and Q_e Ṽ are very close to zero after 10 s. In Case 2, the proposed impedance learning controller with the control input shown in Figure 6 renders the tracking error e_1 in Figure 4 ultimately close to zero. In Figure 5, it can be seen that the robotic parameter estimation error W̃ and the impedance profile estimation error Ṽ are significantly reduced, Q_e Ṽ is close to zero, and Y_e W̃ is bounded but not close to zero. The reason is that Q_e Ṽ plays a more important role in e_1 than Y_e W̃, and Ṽ receives more excitation.
The above simulation results illustrate the effectiveness of the proposed impedance learning-based controller in (17) and the adaptive impedance learning in (11). The proposed controller can bring the robot impedance closer to the human impedance than impedance control with constant impedance profiles.
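As a rough reproduction of the Case 1 regulation experiment, the sketch below closes the loop on a single-link SEA model, using adaptive laws of the form in (11), the command filter (12), and a simplified inner loop that servos the SEA position to the filtered virtual control. All numerical values (inertias, gains, environment impedance) are illustrative assumptions, not the paper's.

```python
import numpy as np

# Illustrative plant/environment parameters (assumed, not from the paper).
M, d, B, K = 1.0, 0.5, 0.1, 100.0   # link inertia, link damping, motor inertia, SEA stiffness
Ks, Kd = 10.0, 1.0                  # unknown environment impedance
q_d = 0.7                           # Case 1: regulation target (rad)

# Controller parameters (assumed).
k1, k2 = 2.0, 20.0                  # error and virtual-control gains
g1, g2 = 1.0, 1.0                   # learning rates gamma_1, gamma_2
omega, xi = 50.0, 1.0               # command filter frequency and damping
kp, kv = 400.0, 40.0                # inner SEA position servo gains

dt, T = 5e-4, 15.0
q = q_dot = th = th_dot = 0.0       # plant state
W_hat = np.zeros(2)                 # estimate of W = [M, d]
V_hat = np.zeros(2)                 # estimate of V = [Ks, Kd]
x1 = x2 = 0.0                       # command filter states

for _ in range(int(T / dt)):
    e1, e1_dot = q_d - q, -q_dot            # q_d is constant in Case 1
    e2 = e1_dot + k1 * e1
    Y_e = np.array([k1 * e1_dot, q_dot])    # regressor for [M, d] (q_d constant)
    Q_e = np.array([e1, e1_dot])
    # Virtual control: desired SEA position so that K*(theta - q) compensates dynamics.
    alpha1 = q + (k2 * e2 + Y_e @ W_hat - Q_e @ V_hat) / K
    # Adaptive laws of the form in (11).
    W_hat += dt * g1 * Y_e * e2
    V_hat += dt * (-g2 * Q_e * e2)
    # Command filter (12) and inner servo of theta to the filtered virtual control.
    x2_dot = -2 * xi * omega * x2 - omega**2 * (x1 - alpha1)
    tau = K * (th - q) + B * (x2_dot + kv * (x2 - th_dot) + kp * (x1 - th))
    x1, x2 = x1 + dt * x2, x2 + dt * x2_dot
    # Plant (1) with unmeasured environment force tau_en = Ks*e1 + Kd*e1_dot.
    tau_en = Ks * e1 + Kd * e1_dot
    q_ddot = (K * (th - q) + tau_en - d * q_dot) / M
    th_ddot = (tau - K * (th - q)) / B
    q, q_dot = q + dt * q_dot, q_dot + dt * q_ddot
    th, th_dot = th + dt * th_dot, th_dot + dt * th_ddot

final_error = q_d - q
```

Consistent with Figure 1, the regulation error decays to near zero within a few seconds even though the environment impedance (Ks, Kd) is never measured, while the parameter estimates remain bounded without converging to their true values.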

Conclusions
The variable impedance control of robots can improve human-robot interaction performance by regulating the robot impedance to adjust the motions of human limbs. However, impedance variation affects the control stability of robots. Based on impedance learning, this paper has designed an adaptive controller for SEA-driven robots in human-robot interaction using the command filter-based adaptive backstepping approach. Adaptive estimators have been designed to approximate the robot modeling uncertainty and the impedance parameters of the interaction force. We have proven the practical stability of the control system through theoretical analysis and shown the control effectiveness through simulations. The designed impedance learning control provides variable robot impedance regulation without interaction force sensing. By exploiting the advantages of impedance learning control and compliant actuators, this paper improves the safety and compliance of robot-environment interactions. In this paper, we only guarantee that the control system is SUUB. Guaranteeing asymptotic stability of the impedance learning-based control and improving impedance estimation performance are our future research directions.