A Framework for Human-Robot-Human Physical Interaction Based on N-Player Game Theory

To analyze the complex interactive behaviors between a robot and two humans, this paper presents an adaptive optimal control framework for human-robot-human physical interaction. N-player linear quadratic differential game theory is used to describe the system under study. However, N-player differential game theory cannot be applied directly in real scenarios, since the robot cannot know the humans' control objectives in advance. To provide the robot with this information, the paper presents an online estimation method that identifies the unknown humans' control objectives based on the recursive least squares algorithm. The Nash equilibrium solution of the human-robot-human interaction is obtained by solving the coupled Riccati equations, so that adaptive optimal control is achieved during the physical interaction. The effectiveness of the proposed method is demonstrated by rigorous theoretical analysis and simulations. The simulation results show that the proposed controller achieves adaptive optimal control during the interaction between the robot and two humans and outperforms an LQR controller.


Introduction
In the past decade, physical human-robot interaction has attracted the attention of the research community due to the urgent requirement for robot technology in unstructured environments [1][2][3][4]. Physical human-robot interaction combines the advantages of humans and robots: humans are good at reasoning and problem solving with high flexibility, while robots perform well in execution and in guaranteeing the accuracy of task execution [5,6]. The combination of these advantages has led to the wide application of physical human-robot interaction, such as teleoperation [7,8], collaborative assembly [9,10], and collaborative transportation [11][12][13].
Two types of specific human-robot interaction strategies have been widely studied: the co-activity interaction strategy and the master-slave control strategy [14,15]. The co-activity strategy is used in typical rehabilitation robots that assist limb movement training, or in intelligent industrial systems that support heavy objects against gravity, where the robot completely ignores the human user's behavior [16,17]. In contrast, the master-slave control strategy is used in teleoperated robots or force-extender exoskeletons, where the robot completely follows the control of the human user [18]. However, these strategies can only be used for specific interactive behaviors; a general framework for analyzing the various interactive behaviors between robots and humans is still missing [19,20].
It has been pointed out that game theory can serve as a general framework to analyze complex interactive behaviors between multiple agents, because different combinations of individual cost functions and different optimization objectives can describe various interactive behaviors [21]. In [22], the human and the robot were regarded as two agents and game theory was used to analyze the performance of the two agents. In [23], the optimal control was obtained for a given game with a linear system cost function by solving the coupled Riccati equation. In [24], an optimal control algorithm was developed for human-robot collaboration by solving the Riccati equation in each loop. In [25][26][27][28], policy iteration was used to solve for the Nash equilibrium in order to improve the calculation speed. In [29], cyber-physical human systems were modeled via an interplay between reinforcement learning and game theory. In [30], haptic shared control for human-robot collaboration was modeled by a game-theoretical approach. In [31], human-like motion planning was studied based on game-theoretic decision making. In [32], a cooperative game was used for human-robot collaborative manufacturing. In [33], a Bayesian framework was proposed for Nash equilibrium inference in human-robot parallel play. In [19], non-cooperative differential game theory was used to model the human-robot interaction system, yielding a variety of interaction strategies. However, the above studies only consider two agents, that is, the interaction between one human and one robot. The aforementioned methods are therefore not suitable for human-robot-human physical interaction, where more than one human interacts with one robot physically. It is worth noting that the physical interaction between one robot and two humans brings greater advantages, such as operating larger loads and improving the flexibility and robustness of the system [28,[34][35][36][37].
These advantages arise from the team collaboration between the robot and the two humans. To the authors' knowledge, no prior work has studied the problem of physical interaction between one robot and two humans based on game theory.
In this paper, a general adaptive optimal control framework for human-robot-human physical interaction is proposed based on N-player game theory. Accordingly, the robot and two humans can interact with each other optimally by learning each other's control. N-player differential game theory is used to model the human-robot-human interaction system in order to analyze the complex interactive behaviors between the robot and two humans. In N-player differential game theory, the humans' control objectives are assumed to be known [38,39]. However, N-player differential game theory cannot be applied directly in real scenarios, since the robot cannot know the humans' control objectives in advance. To provide the robot with this information, the paper presents an online estimation method that identifies the unknown humans' control objectives based on the recursive least squares algorithm. Subsequently, the Nash equilibrium solution of the multi-human robot physical interaction is obtained by solving the coupled Riccati equations to achieve coupled optimization. Finally, the effectiveness of the proposed method is demonstrated by rigorous theoretical analysis and simulation experiments. This paper makes the following four contributions.
(1) N-player differential game theory is used for the first time to model the human-robot-human interaction system. (2) An online estimation method to identify the unknown humans' control objectives based on the recursive least squares algorithm is presented. (3) A general adaptive optimal control framework for human-robot-human physical interaction is proposed based on (1) and (2). (4) The effectiveness of the proposed method is demonstrated by rigorous theoretical analysis and simulation experiments.
The remainder of this paper is organized as follows: Section 2 models the human-robot-human physical interaction system based on N-player differential game theory. Section 3 establishes an adaptive optimal control law, and the control performance of the system is analyzed theoretically. Section 4 verifies the effectiveness of the proposed method through simulation experiments. Finally, Section 5 concludes this work.

System Description
The system considered contains two humans and one robot. An example scenario is shown in Figure 1, where the robot and the humans collaborate to perform an object transporting task. In this shared control task, when the humans' control objectives change, the robot should recognize them and respond adaptively and optimally. The forces exerted by the humans on the object are measured by force sensors at the interaction points. It is worth noting that the humans' control objectives are unknown to the robot. The forward kinematics of the robot are described as

x(t) = φ(q(t)) (1)

where x(t) ∈ R^m and q(t) ∈ R^n are the positions in Cartesian space and joint space, respectively, and m and n are the corresponding degrees of freedom. Differentiating Equation (1) with respect to time gives

ẋ(t) = J(q(t)) q̇(t) (2)

where J(q(t)) ∈ R^{m×n} is the Jacobian matrix. The following impedance model is given in Cartesian space:

M_d ẍ(t) + C_d ẋ(t) = u(t) + f_1(t) + f_2(t) (3)

where M_d ∈ R^{m×m} is the desired inertia matrix, C_d ∈ R^{m×m} is the damping matrix, u(t) ∈ R^m is the control input in Cartesian space [40][41][42], f_1(t) ∈ R^m is the contact force between the object and human 1, and f_2(t) ∈ R^m is the contact force between the object and human 2.
To track a common and fixed target x_d ∈ R^m (ẋ_d ∈ R^m) in the cooperative object transporting task, define the tracking error e(t) = x(t) − x_d, so that Equation (3) can be transformed as follows:

M_d ë(t) + C_d ė(t) = u(t) + f_1(t) + f_2(t) (4)

To ease the control design, Equation (4) can be rewritten in the following state-space form with z = [e^T, ė^T]^T:

ż = A z + B (u + f_1 + f_2), A = [[0_m, 1_m], [0_m, −M_d^{-1} C_d]], B = [[0_m], [M_d^{-1}]] (5)

where 0_m and 1_m denote the m × m zero and identity matrices, respectively.
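As a numerical illustration (ours, not from the paper), the state-space matrices of Equation (5) can be assembled as follows, under the assumption of a diagonal desired inertia; the 1-DoF values M_d = 6 and C_d = −0.2 follow the simulation settings of Section 4:

```python
import numpy as np

def state_space(M_d, C_d, m=1):
    """Build (A, B) of Eq. (5): z = [e; e_dot], z_dot = A z + B (u + f1 + f2)."""
    Zero, One = np.zeros((m, m)), np.eye(m)
    Md = M_d * np.eye(m) if np.isscalar(M_d) else np.asarray(M_d)
    Cd = C_d * np.eye(m) if np.isscalar(C_d) else np.asarray(C_d)
    Minv = np.linalg.inv(Md)
    A = np.block([[Zero, One], [Zero, -Minv @ Cd]])  # [[0_m, 1_m], [0_m, -Md^-1 Cd]]
    B = np.vstack([Zero, Minv])                      # [[0_m], [Md^-1]]
    return A, B

A, B = state_space(6.0, -0.2)  # 1-DoF example
```

For m = 1 this yields a 2 × 2 drift matrix and a 2 × 1 input matrix, one block row for the position error and one for the velocity.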

Problem Formulation
According to non-cooperative differential game theory, the interaction between the robot and the humans is described in this paper as a game between N players (here, N = 3) [43]. In the game, each player minimizes its respective cost function:

Γ = ∫_0^∞ (z^T Q z + u^T u) dt, Γ_1 = ∫_0^∞ (z^T Q_1 z + f_1^T f_1) dt, Γ_2 = ∫_0^∞ (z^T Q_2 z + f_2^T f_2) dt (6)

where Γ, Γ_1, Γ_2 are the cost functions of the robot, human 1, and human 2, respectively, and Q, Q_1, Q_2 are the state weight matrices of the robot, human 1, and human 2, respectively. Each player achieves the cooperative object transporting task by minimizing the error to the target while minimizing its own cost. Q, Q_1, Q_2 each contain two components, corresponding to position regulation and velocity: Q_01, Q_11, Q_21 correspond to position regulation and Q_02, Q_12, Q_22 correspond to velocity. In [27], the N-player game has been studied under the assumption that the cost functions are known. However, Γ_1 and Γ_2 are unknown to the robot because they are determined by the humans. Therefore, a method is proposed in this paper to estimate Γ_1, Γ_2 in order to achieve adaptive optimal control and, thus, the human-robot-human cooperative object transporting task.

N-Player Differential Game Theory
Based on the differential game theory of linear systems, for the N-player game the following linear differential equation [43] is considered:

ż = A z + B Σ_{i=1}^{N} u_i (7)

Each player has a quadratic cost function that it wants to minimize:

Γ_i = ∫_0^∞ (z^T Q_i z + u_i^T u_i) dt, i = 1, …, N (8)

Different types of multi-agent behaviors are defined in game theory, which can be achieved through different concepts of game equilibrium [44,45]. In this paper, the Nash equilibrium is considered. In the sense of Nash equilibrium, each player minimizes its own cost function given the others' equilibrium strategies:

Γ_i(u_i*, u_{−i}*) ≤ Γ_i(u_i, u_{−i}*) for all admissible u_i (9)

where N is equal to 3 in this paper. In the sense of Nash equilibrium, the robot and the humans apply the state-feedback controls

u = −α z, (10a)
f_1 = −β z, (10b)
f_2 = −γ z, (10c)

with α = B^T P_r, β = B^T P_1, γ = B^T P_2, where P_r, P_1, P_2 are the solutions of the well-known coupled Riccati equations

0 = Q + P_r A_r + A_r^T P_r − P_r B B^T P_r, A_r = A − B(β + γ), (10d)
0 = Q_1 + P_1 A_1 + A_1^T P_1 − P_1 B B^T P_1, A_1 = A − B(α + γ), (10e)
0 = Q_2 + P_2 A_2 + A_2^T P_2 − P_2 B B^T P_2, A_2 = A − B(α + β), (10f)

where α ≡ [α_e, α_v] is the feedback gain of the robot, β ≡ [β_e, β_v] is the feedback gain of human 1, and γ ≡ [γ_e, γ_v] is the feedback gain of human 2; α_e, β_e, γ_e are the position error gains, and α_v, β_v, γ_v are the velocity gains. The robot and the humans influence each other through A_r, A_1, and A_2, which realizes the interactive control and the coupled optimization.
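The coupled Riccati equations (10d-f) have no closed-form solution in general. One common numerical route, sketched below as our own illustration (the best-response scheme and all numeric values are our assumptions, not code from the paper), is to iterate single-agent algebraic Riccati solves, refreshing each agent's effective drift matrix A_i with the other agents' current gains until the gains stop changing:

```python
import numpy as np

def solve_are(A, B, Q):
    """Single-agent continuous ARE 0 = Q + PA + A^T P - P B B^T P,
    solved via the stable invariant subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    H = np.block([[A, -B @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]            # the n stable eigenvectors
    X1, X2 = stable[:n, :], stable[n:, :]
    return np.real(X2 @ np.linalg.inv(X1))

def coupled_nash_gains(A, B, Qs, iters=500):
    """Best-response iteration on Eq. (10d-f): agent i solves its ARE with
    A_i = A - B * (sum of the other agents' current gains)."""
    gains = [np.zeros((B.shape[1], A.shape[0])) for _ in Qs]
    for _ in range(iters):
        for i, Qi in enumerate(Qs):
            Ai = A - B @ sum(g for j, g in enumerate(gains) if j != i)
            gains[i] = B.T @ solve_are(Ai, B, Qi)
    return gains  # [alpha, beta, gamma] for the robot, human 1, human 2

# illustrative 1-DoF model (M_d = 6, C_d = -0.2) with Q = Q1 = Q2 = diag(100, 0)
A = np.array([[0.0, 1.0], [0.0, 0.2 / 6]])
B = np.array([[0.0], [1.0 / 6]])
Q = np.diag([100.0, 0.0])
alpha, beta, gamma = coupled_nash_gains(A, B, [Q, Q, Q])
```

Since all three weight matrices are identical here, the three gains coincide by symmetry at the fixed point, and the closed loop A − B(α + β + γ) is stable.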
β, γ are unknown to the robot. Therefore, we aim to propose a method to estimate them in the following section.

Adaptive Optimal Control
A recursive least squares algorithm with a forgetting factor is used in this paper to obtain the estimates β̂, γ̂ of β, γ, so as to estimate the humans' feedback gains in real time and avoid the data-saturation phenomenon caused by the standard least squares algorithm [46]. Subsequently, the estimates Q̂_1, Q̂_2 of Q_1, Q_2 can be obtained using Equation (10e,f).
Equation (10b) is used as the model for identification. For convenience, we let θ_1 = −β^T, y_1 = f_1^T, W = z^T. Subsequently, Equation (10b) can be rewritten as

y_1 = W θ_1 (11)

The feedback gain of human 1 is estimated by minimizing the total prediction error

J_1(t) = ∫_0^t λ_1^{t−s} ‖y_1(s) − W(s) θ̂_1(t)‖² ds (12)

where λ_1 is the constant forgetting factor. The update rule of the parameter estimate θ̂_1 can be obtained as

θ̂̇_1 = F_1 W^T (y_1 − W θ̂_1), Ḟ_1 = λ_1 F_1 − F_1 W^T W F_1 (13)

where F_1 is the adaptation gain matrix. The estimation error of θ̂_1 is

θ̃_1 = θ̂_1 − θ_1 (14)

Thus, the estimate β̂ can be obtained as

β̂ = −θ̂_1^T (15)

Similarly, we let θ_2 = −γ^T, y_2 = f_2^T, W = z^T. Then Equation (10c) can be rewritten as

y_2 = W θ_2 (16)

The feedback gain of human 2 is estimated by minimizing the total prediction error

J_2(t) = ∫_0^t λ_2^{t−s} ‖y_2(s) − W(s) θ̂_2(t)‖² ds (17)

where λ_2 is the constant forgetting factor. The update rule of θ̂_2 can be obtained as

θ̂̇_2 = F_2 W^T (y_2 − W θ̂_2), Ḟ_2 = λ_2 F_2 − F_2 W^T W F_2 (18)

The estimation error of θ̂_2 is

θ̃_2 = θ̂_2 − θ_2 (19)

Thus, the estimate γ̂ can be obtained as

γ̂ = −θ̂_2^T (20)

Equations (13), (15), (18), and (20) are critical because they enable each agent to recognize its partners' control objectives and use Equation (10a-f) to adjust its own control.
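The identification step can be sketched in discrete time as follows (our own illustration: the "true" gain β, the excitation signal, and the initial covariance are made-up values, not from the paper). The regressor form y_1 = W θ_1 of Equation (10b) is fed to a recursive least squares update with forgetting factor, and β̂ is recovered from θ̂_1:

```python
import numpy as np

def rls_step(theta_hat, F, W, y, lam=0.95):
    """One recursive least squares update with forgetting factor lam.
    W: regressor row (z^T), y: measurement (f1^T), F: covariance matrix."""
    W = W.reshape(1, -1)
    K = F @ W.T / (lam + W @ F @ W.T)       # gain vector
    theta_hat = theta_hat + K @ (y - W @ theta_hat)
    F = (F - K @ W @ F) / lam               # forgetting keeps F from vanishing
    return theta_hat, F

rng = np.random.default_rng(0)
beta_true = np.array([12.0, 3.0])           # hypothetical human feedback gain
theta_hat, F = np.zeros((2, 1)), 1e3 * np.eye(2)
for _ in range(200):
    z = rng.normal(size=2)                  # persistently exciting state samples
    f1 = -beta_true @ z                     # Eq. (10b): f1 = -beta z (noise-free)
    theta_hat, F = rls_step(theta_hat, F, z, np.array([[f1]]))
beta_hat = -theta_hat.ravel()               # Eq. (15): beta_hat = -theta_hat^T
```

With persistently exciting data and no measurement noise, the estimate converges to the true gain; the forgetting factor keeps the covariance F bounded away from zero so the estimator can track a gain that later changes.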
In order to ensure the performance of the cooperative object transporting task, we let

Q = C − Q̂_1 − Q̂_2 (21)

where C is the total weight. The cooperative object transporting task fixes the task performance through the total weight C and uses Equation (21) to share the effort between the two humans and the robot. Equation (21) enables the proposed controller to adjust the contributions of the humans and the robot, making the humans and the robot take complementary roles.
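The sharing rule can be made concrete with a short sketch (ours, not from the paper; clipping at zero is our assumption to keep the weight positive semidefinite, and the numeric values are illustrative, with C = diag(300, 0) taken from Section 4):

```python
import numpy as np

def robot_weight(C, Q1_hat, Q2_hat):
    """Eq. (21): fix the total task weight C and let the robot pick up the
    remainder, Q = C - Q1_hat - Q2_hat (clipped at zero, our assumption)."""
    return np.maximum(C - Q1_hat - Q2_hat, 0.0)

C = np.diag([300.0, 0.0])                       # total weight from Section 4
Q = robot_weight(C, np.diag([80.0, 0.0]), np.diag([120.0, 0.0]))
# if the humans' estimated weights rise, the robot's weight drops accordingly
```

This is exactly the complementary-role behavior reported in Section 4: when Q̂_1 + Q̂_2 increases, the robot's Q decreases, and vice versa.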
The control architecture is shown in Figure 2. A pseudo-code summarizes the implementation procedures of the proposed method as Algorithm 1.

Algorithm 1 Adaptive optimal control algorithm based on N-player game
Input: current state z and target x_d. Output: the robot's control input u and the estimated state weights Q̂_1, Q̂_2 of the humans' cost functions in Equation (10e,f).

Theorem 1.
Consider the robot dynamics shown in Equation (5). If the robot and the humans estimate the parameters of their partners' controllers and adjust their own control according to Equations (10a-f), (13), (15), (18), (20), and (21), then the following conclusions hold: • The closed-loop system is stable, and z, α, β̂, γ̂, u are bounded.
• The Nash equilibrium is achieved for the human-robot-human interaction system.
According to Equation (10e,f), we can calculate the estimation errors e_{Q1} = Q̂_1 − Q_1 and e_{Q2} = Q̂_2 − Q_2. These errors are due to the errors e_P, e_{P1}, e_{P2}. Because e_P, e_{P1}, e_{P2} converge to zero, we have lim_{t→∞} e_{Q1} = 0 and lim_{t→∞} e_{Q2} = 0, that is, lim_{t→∞} Q̂_1 = Q_1 and lim_{t→∞} Q̂_2 = Q_2.
Multiplying Equation (10d) by ẑ^T on the left and by ẑ on the right, and considering Equation (13), we have

0 = ẑ^T Q ẑ + ẑ^T P_r B B^T P_r ẑ + ẑ^T P_r ż + ż^T P_r ẑ + ẑ^T P_r H e_z + e_z^T H^T P_r ẑ ≡ σ.

Experimental Design and Simulation Settings
With the development of robot technology, robots will in the future enter our homes and become members of the family in our daily lives. In daily life, we often need to carry various objects. Some objects (e.g., those with smaller size and lower weight) can be carried by one human; some (e.g., those with medium size and weight) require two humans; and some (e.g., those with larger size and higher weight) require three or more humans. Consider one scenario: at home, there is an object (such as a table of relatively large size and high weight) that needs to be carried by three humans, but only two humans are present. In this case, the robot can help the two humans carry the object together, playing the same role as a third human. A simulation is conducted with CoppeliaSim in order to verify the control performance of the controller proposed in this paper. The version used is CoppeliaSim 4.0.0 (CoppeliaSim Edu, Windows). Figure 3 shows the CoppeliaSim simulation scenario of the cooperative object transporting task. The humans cooperate with the robot to transport the object back and forth between −10 cm and +10 cm along the horizontal direction. The controller proposed in this paper implements interactive control because every agent considers the control of its partners. To present the advantages of the proposed controller, we compare it with a linear quadratic regulator (LQR) optimal controller. The LQR controller is obtained by setting A_r = A, A_1 = A, A_2 = A in Equation (10d-f); it allows each agent to form its own control input optimally, but it ignores the controls of the other partners. Let Q = Q_1 = Q_2 = diag(100, 0).
The cost functions of the humans usually change during the physical human-robot-human interaction. The robot needs to identify the change and adaptively adjust its own cost function in order to complete the cooperative object transporting task. To verify the ability of the robot to adaptively interact with two humans when the humans' cost functions change, we simulated a scenario where the robot cooperates with the humans to perform an object transporting task. The task performance is fixed by setting the value of C in Equation (21); let C = diag(300, 0). The cost functions of human 1 and human 2 change randomly according to Q_1 = diag(50, 0) + ρ · diag(50, 0) and Q_2 = diag(50, 0) + ρ · diag(50, 0), where ρ is a uniformly distributed random number in [0, 1].
The human-robot-human cooperative object transporting task can be fulfilled with less effort using the proposed controller. To support this claim, we compare it with a human-robot cooperative object transporting task. In the simulation of the human-robot-human task, we let Q = Q_1 = Q_2 = diag(100, 0); in the simulation of the human-robot task, we let Q = diag(100, 0), Q_1 = diag(100, 0), Q_2 = diag(0, 0).
We assume that the humans and the robot have no prior knowledge of each other (thus, initially α̂ ≡ 0, β̂ ≡ 0, γ̂ ≡ 0). The control input of the robot is generated by Equations (5), (10a-f), (13), (15), (18), and (20). The simulated interaction forces f_1, f_2 of human 1 and human 2 are generated by a similar set of equations. The simulation time is 40 s. Let the inertia of the robot be M_d = 6 kg and the damping be C_d = −0.2 N · s · m⁻¹ [19], with the recursive least squares forgetting factors λ_1 = λ_2 = 0.95. The simulation time step is 0.005 s. Figure 4 shows a smooth curve resembling a sinusoidal signal. This smooth curve is determined by Equation (3), in which u(t), f_1(t), f_2(t) are iteratively calculated by our proposed game-theory-based controller. Because the humans and the robot do not transport the object at constant speed with our method, the end effector follows a curved trajectory rather than a straight line. As can be seen from Figure 4, the end effector reaches the target position with the proposed controller, which means that the cooperative object transporting task is successfully fulfilled. In contrast, the end effector cannot reach the target position with the LQR controller, so the task is not fulfilled. The reason is that the proposed controller considers the interaction with the other partners: when one partner decreases its effort, the others gradually increase theirs to ensure the successful fulfillment of the task. The LQR controller does not consider the interaction with the other partners, so successful fulfillment of the task cannot be guaranteed.
In Figure 5, we can see that the estimated humans' feedback gains converge to the real values in a few seconds, which means that the humans' feedback gains can be successfully estimated by the proposed method. Figure 6 shows that fulfilling the cooperative object transporting task requires larger control gains β, γ with the LQR controller than with the controller proposed in this paper; that is, accomplishing the same task requires less effort using the proposed controller. This is because the proposed controller considers the interaction with the other partners and calculates the minimal effort for the humans and the robot to complete the task, whereas under the LQR controller the humans and the robot each minimize only their own cost function and may therefore require larger effort. The feedback gains are affected by the state weights of the cost functions. To verify the advantages of the proposed controller when the state weights vary, we let Q_1 vary from 0 to 10Q with Q_2 = diag(100, 0), and let Q_2 vary from 0 to 10Q with Q_1 = diag(100, 0), respectively. It can be seen from Figure 7 that accomplishing the same task always requires less effort using the proposed controller. We can also see that the difference between the control gains of our proposed controller and those of the LQR controller becomes smaller as Q_1/Q or Q_2/Q increases; this is because the robot's relative influence decreases.

Results
From Figures 4-7, we can conclude that the human-robot-human cooperative object transporting task can be fulfilled with less effort and the system can be kept stable using the proposed controller.
It can be seen from Figure 8 that, when the cost functions of human 1 and human 2 change, the cost function of the robot also changes adaptively. When the sum of the humans' state weights Q_1 + Q_2 increases, the robot's state weight Q decreases accordingly; conversely, when Q_1 + Q_2 decreases, Q increases accordingly. The robot can adapt in this way because of the constant value of C in Equation (21), which enables the proposed controller to adjust the contributions of the humans and the robot and makes them take complementary roles. Figure 9 shows that, using the proposed controller, the adaptive cooperative object transporting task can be fulfilled and the system remains stable. From Figures 8 and 9, we conclude that the adaptive cooperative object transporting task can be fulfilled with the proposed controller: during the physical interaction, the robot successfully identifies the change in each human's cost function and then adaptively adjusts its own cost function to achieve interactive optimal control. Figure 10 shows that fulfilling the human-robot-human cooperative object transporting task requires smaller control gains β_e, β_v than the human-robot cooperative object transporting task; that is, accomplishing the same task requires less effort through human-robot-human physical interaction. This is because the human-robot-human task considers the interaction with more partners (two partners) and calculates the minimal effort for the humans and the robot to complete the task, whereas the human-robot task considers the interaction with fewer partners (only one), so the human and the robot may require larger effort.

Conclusions
In this paper, the human-robot-human physical interaction problem has been studied. An adaptive optimal control framework for human-robot-human physical interaction has been proposed based on N-player game theory. A recursive least squares algorithm with a forgetting factor has been used to identify the unknown control parameters of the humans online. The performance of the proposed controller has been verified by simulations of a cooperative object transporting task. The simulation results show that the proposed controller achieves adaptive optimal control during the interaction between the robot and two humans while keeping the system stable, and that it outperforms the LQR controller. Compared with human-robot physical interaction, accomplishing the same cooperative object transporting task requires less effort through human-robot-human physical interaction based on the proposed approach. Although this paper only conducts simulations of the physical interaction between one robot and two humans, it is worth mentioning that the proposed framework has the potential to be generalized to situations where multiple robots physically interact with multiple humans. As future work, we will extend the framework to the interaction between multiple robots and multiple humans.