Article

Development of a Real-Time Human-Robot Collaborative System Based on 1 kHz Visual Feedback Control and Its Application to a Peg-in-Hole Task  †

1 Interfaculty Initiative in Information Studies, The University of Tokyo, Tokyo 153-8505, Japan
2 Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan
3 Information Technology Center, The University of Tokyo, Tokyo 113-8656, Japan
* Author to whom correspondence should be addressed.
This paper is an extended version of our paper published in Matsui, Y., Yamakawa, Y., Ishikawa, M. Cooperative operation between a human and a robot based on real-time measurement of location and posture of target object by high-speed vision. In Proceedings of the 2017 IEEE Conference on Control Technology and Applications (CCTA), Mauna Lani, HI, USA, 27–30 August 2017; Yamakawa, Y., Matsui, Y., Ishikawa, M. Human–Robot Collaborative Manipulation Using a High-speed Robot Hand and a High-speed Camera. In Proceedings of the 2018 IEEE International Conference on Cyborg and Bionic Systems (CBS), Shenzhen, China, 25–27 October 2018; Yamakawa, Y., Matsui, Y., Ishikawa, M. Development and Analysis of a High-speed Human-Robot Collaborative System and its Application. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics (ROBIO), Kuala Lumpur, Malaysia, 12–15 December 2018.
Current Affiliation: Azbil Corporation, Tokyo 100-6419, Japan.
Sensors 2021, 21(2), 663; https://doi.org/10.3390/s21020663
Received: 22 December 2020 / Revised: 15 January 2021 / Accepted: 16 January 2021 / Published: 19 January 2021
(This article belongs to the Section Sensors and Robotics)

Abstract: In this research, we focused on Human-Robot collaboration. There were two goals: (1) to develop and evaluate a real-time Human-Robot collaborative system, and (2) to achieve concrete tasks, such as a collaborative peg-in-hole task, using the developed system. We proposed an algorithm for visual sensing and robot hand control to perform collaborative motion, and we analyzed the stability of the collaborative system and a so-called collaborative error caused by image processing and latency. We achieved collaborative motion using the developed system and evaluated the collaborative error on the basis of the analysis results. Moreover, we aimed to realize a collaborative peg-in-hole task, which requires a system with high speed and high accuracy. To achieve this goal, we analyzed the conditions required for performing the collaborative peg-in-hole task from the viewpoints of geometry, force and posture. Finally, we show the experimental results and data of the collaborative peg-in-hole task, and we examine the effectiveness of our collaborative system.

1. Introduction

Recently, research into Human-Robot interaction (HRI) has been actively undertaken. HRI contributes not only to industrial applications (for example, cell production systems) but also to human living environments (so-called Quality of Life (QoL) improvements). HRI techniques can be divided into three kinds:
  • Collaboration and cooperation,
  • Communication, and
  • Support and assistance.
In the area of collaboration and cooperation, robots perform tasks together with workers; in communication, robots enter into dialogue with humans; and in support and assistance, robots assist workers in performing tasks or actions. In this research, we focus on collaboration and cooperation. Human-Robot collaboration is a fundamental element for humans and robots to work together. Moreover, this field involves physical interactions between humans and robots.
To date, a great deal of research has been conducted on Human-Robot collaborative systems and cooperation systems [1]. Zoss et al. classified collaborative systems between humans and robots [2]. Hayashibara et al. developed an assistive system for carrying a long object [3]. Yokoyama et al. performed a task in which an object was held and carried by a Human-Robot cooperative system [4]. Kosuge et al. proposed a control algorithm for a mobile robot with dual arms for Human-Robot cooperation [5]. Suda and Kosuge constructed a system for handling objects using visual and force information [6]. Stückler and Behnke developed a system that followed human guidance to carry a large object during cooperation between a human and a robot [7]. Antao et al. proposed a method in which a manipulator assisted a human operator to execute a target task, while monitoring the operator in real-time [8]. Teke et al. proposed a method for real-time and robust collaborative robot motion control by using Kinect® v2 [9]. Çoban and Gelen achieved an assembly task with Human-Robot collaboration using a wearable device and shortened the operation time of the assembly task [10]. Wang et al. proposed a framework of a TLC (teaching–learning–collaboration) model for performing Human-Robot collaborative tasks and verified the effectiveness of the proposed method [11]. Shayganfar et al. explored the relevance and controllability of Human-Robot collaboration and suggested an evaluation algorithm for relevance and controllability [12]. Scimmi et al. achieved a hand-over task with a robot manipulator by using real-time visual information [13]. Galin et al. constructed a mathematical model and simulation environment of Human-Robot collaboration [14]. Darvish et al. proposed a hierarchical architecture of a Human-Robot cooperative system, showed the algorithms used to evaluate the architecture and performed experiments [15].
Considering HRI performance in the same way as robot performance, the speed and accuracy of the system can be regarded as the most important factors. The previous research described above focused on the accuracy and construction of HRI systems, including algorithms for improving accuracy. More recently, even more accurate HRI systems using advanced techniques and AI techniques have been proposed [1,16,17,18]; these studies focus on robot perception with gestures [16], the proposal of integrated frameworks [17] and support techniques in HRI with machine learning [18]. In contrast, speed (real-time performance) has not been considered in detail in these systems. As a result, such HRI systems cannot react to human motions and actions instantly, and the human has to adapt to the slow robot motion and action. We consider that such a style is not ideal for HRI systems, and that real-time performance is critically important.
Thus, this research pursues the goal of a high-speed, high-accuracy HRI system; the target area is the top-right region shown in Figure 1. To achieve this, we developed a high-speed, high-accuracy HRI system using high-speed vision (1000 fps image acquisition), high-speed image processing (1000 fps image processing) and high-speed robot control (real-time visual feedback). At present, the main approach used to develop HRI systems is to improve accuracy first, by using machine learning and prediction, and then to improve speed by accelerating these processing steps. With this approach, the authors consider it extremely difficult to speed up the processing, and as a result the speed improvement cannot be achieved successfully. Our approach is the opposite: the speed is improved first by fusing high-speed vision and a high-speed robot, and the accuracy is then also improved by using high-speed multi-target tracking and robot hand control. Based on this approach, we can achieve a high-speed (real-time) and high-accuracy HRI system, which can be considered a novel feature of this research.
In work related to the HRI system using a high-speed robot, we have also developed a Janken (rock–paper–scissors) robot with a 100% winning rate by using a high-speed robot [19,20]. From the results of that study, we considered that high-speed robot technology can be applied to Human-Robot collaboration. As a basic task, we decided on collaborative motion between a human and a robot hand, as shown in Figure 2. First, we constructed a simple Human-Robot collaborative system consisting of a high-speed robot hand and a high-speed vision system; this system holds an object (a board) horizontally via Human-Robot collaboration [21]. Second, we extended that system to a collaborative system that keeps the board horizontal, as in the previous research [21], but with the added function that the robot follows movements around the roll axis and yaw axis, performed by a human [22,23]. However, the analysis of the developed Human-Robot collaborative system was not performed. Thus, in this work, we analyze the system from theoretical and experimental aspects. Since the performance of a Human-Robot collaboration system is considered to depend on the error arising from image processing and the latency caused by a low frame rate (which we call the collaborative error), we analyze the stability of this Human-Robot collaborative system and evaluate this error through analysis and experiments [24]. In addition, we analyze the conditions of the achievement of the collaborative peg-in-hole task, and we demonstrate the collaborative peg-in-hole task using the developed collaborative system.
In this paper, we examine the following seven aspects:
  • The construction of a high-speed and high-accuracy Human-Robot collaborative system,
  • The proposed strategy for high-speed visual sensing and robot control in a high-speed and high-accuracy Human-Robot collaborative system,
  • The stability analysis of the high-speed and high-accuracy Human-Robot collaborative system,
  • The theoretical analysis of the collaborative error due to image processing and latency,
  • The experimental evaluation of the collaborative error and control performance (torque inputs) of the robot hand,
  • The analysis of the peg-in-hole task performed by the collaborative system, and
  • The demonstration of a concrete application (a peg-in-hole task) via Human-Robot collaboration.
The first and second aspects are related to the development of a new Human-Robot collaborative system, including an algorithm. The third and fourth aspects are related to the basic analysis of the Human-Robot collaborative system. The fifth aspect is related to the verification of the analysis results for the collaborative error through experiments. The sixth aspect is related to a basic analysis of the peg-in-hole task in the collaborative system. The last aspect is related to realization of the task using our high-speed and high-accuracy collaborative system. Through all seven of these aspects, we demonstrate the effectiveness of our high-speed Human-Robot collaborative system using a high-speed robot hand and high-speed image processing.
The rest of this paper is organized as follows: Section 2 explains the developed Human-Robot collaborative system; Section 3 describes the proposed strategy for Human-Robot collaboration; Section 4 discusses the stability and the collaborative error; Section 5 shows the experimental results of collaborative motion and discusses the frame rate of the collaborative system; Section 6 explains the analysis and experimental results of the collaborative peg-in-hole task; and Section 7 summarizes the conclusions obtained in this work.

2. Human-Robot Collaborative System

As shown in Figure 3, our Human-Robot collaborative system consists of the following:
  • A high-speed robot hand (Section 2.1),
  • A high-speed vision system (Section 2.2),
  • A real-time controller that receives the state values of the board (position and orientation) from the image-processing PC at 1 kHz and also controls the high-speed robot hand at 1 kHz,
  • A board that is handled by the robot hand and a human subject (Section 2.3), and
  • A peg (Section 2.4).

2.1. High-Speed Robot Hand

As the actuation system of the collaborative system, we used a high-speed robot hand, shown in Figure 4 [25]. Each joint of the robot hand can close through 180° in 0.1 s, a level of performance beyond that of a human hand.
The high-speed robot hand has three fingers: a left thumb, an index finger and a right thumb. Each finger has a top link and a root link, and the left and right thumbs rotate around the palm. Therefore, the index finger has two degrees of freedom (2-DOF), and both thumbs have 3-DOF. In addition, the robot hand has a 2-DOF wrist joint (in Figure 4, 1-DOF movement is illustrated). Thus, the hand has a total of 10-DOF in its movement.

2.2. High-Speed Vision System

As the sensing system of the collaborative system, we used a high-speed vision system. The high-speed vision system consisted of a high-speed camera and an image-processing PC.
As the high-speed camera, we used a commercial model, the EoSens MC4086 produced by Mikrotron [26]. The image-processing PC was equipped with an Intel® Xeon® W5-1603 v3 2.8 GHz processor and 16 GB of RAM. The operating system of the image-processing PC was Windows 7 Professional (64-bit), and the image-processing software was developed in Visual Studio 2017. The image-processing PC, equipped with a frame grabber board, acquired raw image data from the high-speed camera over a CoaXPress connection, which was able to transfer the data at high speed.
The raw image data were 1024 × 768-pixel, 8-bit gray-scale images. After acquiring the image data every 1 ms, the image-processing PC measured the position and orientation of the board within 1 ms and sent the measurement results to the real-time controller over Ethernet using the UDP protocol.
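The pose transfer step can be sketched with a UDP datagram. The six-double packet layout below is an assumption made for illustration; the paper does not specify the wire format used between the image-processing PC and the real-time controller.

```python
import socket
import struct

# Hypothetical wire format: six little-endian doubles
# (x, y, z, roll, pitch, yaw) in one 48-byte datagram.
POSE_FORMAT = "<6d"

def send_pose(sock, addr, pose):
    """Pack a 6-DOF board pose and send it as a single UDP datagram."""
    sock.sendto(struct.pack(POSE_FORMAT, *pose), addr)

def recv_pose(sock):
    """Receive one datagram and unpack it into a pose tuple."""
    data, _ = sock.recvfrom(struct.calcsize(POSE_FORMAT))
    return struct.unpack(POSE_FORMAT, data)
```

UDP is a natural fit here: a lost pose packet is simply superseded by the next one 1 ms later, so no retransmission logic is needed in the control loop.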

2.3. Board with Hole as a Target Object

The board had a length of 220 mm, a width of 100 mm, a thickness of 5 mm and a mass of about 113 g. Retro-reflective markers were attached at four corners of the board to simplify corner detection by the high-speed camera. The configuration of coordinate axes on the board was as shown in Figure 5. In addition, the hole used in the collaborative peg-in-hole task was formed at the center of the board, as shown in Figure 6a. The radius (R) of the hole was 6.350 mm.

2.4. Peg

The peg was made of stainless steel, with a radius ($r$) of 6.325 mm, a length ($L_{peg}$) of 405 mm, a chamfer angle ($\beta$) of 45° and a chamfer length ($w$) of 1 mm, as shown in Figure 6b. In general, the chamfer is formed around the edge of the hole; in this research, however, the chamfer was formed on the peg. In this case, the conditions for the peg-in-hole task were the same as those for the previous analysis result described in Section 4.1. The peg was fixed to the frame by a magnet.

3. Strategy for Collaborative Motion

Here, we explain the overall strategy for achieving Human-Robot collaboration. The flow of the Human-Robot collaborative motion was the following, as shown in Figure 7:
  • The human subject moved the board,
  • The board position and orientation were changed as a result of the human operation,
  • The high-speed camera captured the image,
  • The tracking of the markers attached to the four corners of the board was executed by image-processing,
  • The position and posture of the board were calculated based on the information of the marker positions,
  • The reference joint angle of the robot hand was obtained by solving the inverse kinematics of the robot hand based on the position and posture of the board,
  • The torque to be input to the servo motor of the robot hand was generated by proportional derivative (PD) control for the reference joint angle, and
  • The robot hand moved according to the torque input.

3.1. Brief Overview

In the Human-Robot collaborative motion, we divided our strategy into the following two components:
  • Visual Sensing part in Figure 7: The position and orientation of the board were measured using high-speed image processing (Section 3.2).
  • Robot Control part in Figure 7: The robot hand was controlled according to the position and orientation of the board (Section 3.3).
By repeating the above two steps at high speed in real time (1000 fps), the board could be kept in the reference state, even if the human subject randomly moved the board at high speed. The visual sensing and robot control parts are explained below.

3.2. Image Processing and Measurement of Position and Orientation of Board

Figure 8 shows the flow of image processing and the measurement of the board state. In order to obtain the position and orientation of the board in global coordinates to control the robot hand, we needed to derive a transformation matrix $T_b^w$ from the board coordinates to the global coordinates:
$$T_b^w = \begin{bmatrix} R_b^w & P_b^w \\ \mathbf{0} & 1 \end{bmatrix}.$$
In Figure 8, the red elements are the ones important for measuring the position and orientation of the board. Thus, we describe methods to calculate the transformation matrix $T_b^w$ (Section 3.2.1), to visually track the markers attached to the corners of the board (Section 3.2.2) and to obtain the roll, pitch and yaw angles from the transformation matrix $T_b^w$ (Section 3.2.3). Figure 9 shows the relationship between the transformation matrices.

3.2.1. Derivation of Transformation Matrix

Assuming that the transformation matrix $T_c^w$ and the camera internal parameter matrix could be found in advance using camera calibration based on Zhang’s method [27], we were able to calculate the transformation matrix $T_b^c$. Therefore, we briefly describe how to obtain the transformation matrix $T_b^c$, which is composed of a rotation matrix $R_b^c$ and a translation vector $P_b^c$. If the transformation matrix $T_b^c$ is derived, the transformation matrix $T_b^w$ can be calculated as follows:
$$T_b^w = T_c^w T_b^c.$$
Next, we describe a method of deriving the transformation matrix $T_b^c$.
We obtained the transformation matrix $T_b^c$ through the following calculation by using the direct linear transformation (DLT) algorithm. The transformation matrix $T_b^c$ could be expressed using the camera’s internal and external parameters as follows:
$$s \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = K \begin{bmatrix} R_b^c & P_b^c \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_b^c & P_b^c \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix},$$
where $s$ is a scaling factor, $f_x$ and $f_y$ are the focal lengths, $c_x$ and $c_y$ represent the center of the image, $[x, y]$ are image coordinates, $[X, Y, Z]$ are world coordinates (here, board coordinates), $R_b^c$ is the rotation matrix from board coordinates to camera coordinates, and $P_b^c$ is the corresponding translation vector. The camera’s internal parameter matrix $K$ could be derived in advance by camera calibration using Zhang’s method [27]. Thus, we needed to derive the camera’s external parameters $R_b^c$ and $P_b^c$.
The transformation matrix $T_b^c$ can also be represented by
$$\begin{bmatrix} x_i \\ y_i \\ 1 \end{bmatrix} = H \begin{bmatrix} X_i \\ Y_i \\ Z_i \\ 1 \end{bmatrix}, \qquad H = \begin{bmatrix} h_1^T \\ h_2^T \\ h_3^T \end{bmatrix} = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} R_b^c & P_b^c \end{bmatrix} / s.$$
The above equation can be rewritten using a vector representation as follows:
$$\mathbf{x}_i = \begin{bmatrix} h_1^T \\ h_2^T \\ h_3^T \end{bmatrix} \mathbf{X}_i.$$
Rewriting this equation in terms of the vectors $h_i$, we get
$$\begin{bmatrix} \mathbf{0}^T & -\mathbf{X}_i^T & y_i \mathbf{X}_i^T \\ \mathbf{X}_i^T & \mathbf{0}^T & -x_i \mathbf{X}_i^T \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \\ h_3 \end{bmatrix} = \mathbf{0}.$$
A matrix $A$ is defined from the four correspondences $\mathbf{x}_i, \mathbf{X}_i$ ($i = 1, 2, 3, 4$) as follows:
$$A = \begin{bmatrix} \mathbf{0}^T & -\mathbf{X}_1^T & y_1 \mathbf{X}_1^T \\ \mathbf{X}_1^T & \mathbf{0}^T & -x_1 \mathbf{X}_1^T \\ \vdots & \vdots & \vdots \\ \mathbf{0}^T & -\mathbf{X}_4^T & y_4 \mathbf{X}_4^T \\ \mathbf{X}_4^T & \mathbf{0}^T & -x_4 \mathbf{X}_4^T \end{bmatrix}.$$
By performing singular value decomposition on the matrix $A$, we can obtain $H$, and in turn $T_b^c$. Each component of the matrix $A$ can be derived from the values $\mathbf{x}_i$ and $\mathbf{X}_i$ ($i = 1, 2, 3, 4$). The values $\mathbf{x}_i$ are obtained by the marker tracking described in the next section, and the values $\mathbf{X}_i$ are determined from the board size.
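As a sketch, the SVD step above can be reproduced with NumPy. Since the markers lie on the board plane ($Z = 0$), the $\mathbf{X}_i$ reduce to homogeneous 3-vectors and $H$ to a 3 × 3 homography, which matches the 8 × 9 shape of $A$. The marker layout and homography values below are invented for illustration only.

```python
import numpy as np

def estimate_homography(img_pts, board_pts):
    """DLT estimate of H from four marker correspondences.

    img_pts:   list of (x, y) image coordinates of the markers.
    board_pts: list of (X, Y) board-plane coordinates (Z = 0).
    """
    rows = []
    for (x, y), (X, Y) in zip(img_pts, board_pts):
        Xh = np.array([X, Y, 1.0])
        rows.append(np.concatenate([np.zeros(3), -Xh, y * Xh]))
        rows.append(np.concatenate([Xh, np.zeros(3), -x * Xh]))
    A = np.vstack(rows)             # the 8x9 matrix A
    _, _, Vt = np.linalg.svd(A)     # h is the right singular vector
    H = Vt[-1].reshape(3, 3)        # with the smallest singular value
    return H / H[2, 2]              # normalize the free scale
```

With exact correspondences, the estimate recovers the ground-truth homography up to the normalized scale.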

3.2.2. Marker Tracking

The positions of the four corners were obtained by using a target tracking algorithm [28]. By attaching retro-reflective markers to the four corners of the board and binarizing the captured image, the board appeared white only at the corners. By calculating the image moment for each marker, the positions of the four corners were obtained.
The marker tracking operation was performed as follows. First, the obtained image was binarized with a threshold. Second, the $(i, j)$-th order image moments $m_{i,j}$ were calculated by
$$m_{i,j} = \sum_x \sum_y x^i y^j I(x, y).$$
Using the image moments $m_{i,j}$, we were able to obtain the image centroid $(x_g, y_g)$ of the marker as follows:
$$x_g = \frac{m_{1,0}}{m_{0,0}}, \qquad y_g = \frac{m_{0,1}}{m_{0,0}}.$$
Once the marker image was captured and the centroid $(x_g, y_g)$ was successfully calculated, a sub-frame region of interest (ROI) was set around the centroid. The ROI size was set to be smaller than the size of the original image to reduce the computational load.
This tracking operation was executed for each marker, and the positions of the four corners could be obtained.
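The moment computation and ROI update above can be sketched in a few lines of NumPy; the synthetic marker blob and the ROI half-width are invented for illustration.

```python
import numpy as np

def marker_centroid(binary):
    """Image centroid (x_g, y_g) of a binarized marker via raw moments."""
    ys, xs = np.mgrid[0:binary.shape[0], 0:binary.shape[1]]
    m00 = binary.sum()          # zeroth-order moment (marker area)
    m10 = (xs * binary).sum()   # first-order moments
    m01 = (ys * binary).sum()
    return m10 / m00, m01 / m00

def roi_around(binary, xg, yg, half=8):
    """Sub-frame ROI centered on the last centroid, for the next frame."""
    r0, r1 = int(yg) - half, int(yg) + half + 1
    c0, c1 = int(xg) - half, int(xg) + half + 1
    return binary[max(r0, 0):r1, max(c0, 0):c1]
```

Restricting the next frame's search to the ROI is what keeps the per-marker cost low enough for 1 ms processing.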

3.2.3. Measurement of Board Position and Orientation in World Coordinates

From the transformation matrices $T_c^w$ and $T_b^c$, the transformation matrix $T_b^w$ from the board coordinates to the global coordinates was obtained. Consequently, the rotation matrix $R_b^w$ and the translation vector $P_b^w$ could be obtained; that is, the board position and orientation in global coordinates were measured.
The pitch, roll and yaw angles are denoted by $\theta_x$, $\theta_y$ and $\theta_z$, respectively. The rotation matrix $R_b^w$ could be obtained from the transformation matrix $T_b^w$:
$$R_b^w = \begin{bmatrix} r_{11} & r_{12} & r_{13} \\ r_{21} & r_{22} & r_{23} \\ r_{31} & r_{32} & r_{33} \end{bmatrix} = R_z R_x R_y = \begin{bmatrix} c_z c_y - s_z s_x s_y & -s_z c_x & c_z s_y + s_z s_x c_y \\ s_z c_y + c_z s_x s_y & c_z c_x & s_z s_y - c_z s_x c_y \\ -c_x s_y & s_x & c_x c_y \end{bmatrix},$$
where $c_\ast = \cos\theta_\ast$ and $s_\ast = \sin\theta_\ast$. As a result, the pitch, roll and yaw angles could be calculated as follows:
$$\theta_x = \sin^{-1}(r_{32}), \qquad \theta_y = \tan^{-1}\left(-\frac{r_{31}}{r_{33}}\right), \qquad \theta_z = \tan^{-1}\left(-\frac{r_{12}}{r_{22}}\right).$$
The series of image-processing steps described above could be executed every 1 ms (1000 fps). The inverse kinematics of the robot hand were solved using the transformation matrix $T_b^w$ from the board coordinates to the world coordinates, as explained in the next subsection.
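The angle extraction can be checked numerically. The sketch below rebuilds $R = R_z R_x R_y$ from known angles and recovers them; the minus signs in the recovery follow from the sign conventions of that particular composition.

```python
import numpy as np

def rot_zxy(tx, ty, tz):
    """R = R_z(tz) R_x(tx) R_y(ty), the composition used for the board."""
    cx, sx = np.cos(tx), np.sin(tx)
    cy, sy = np.cos(ty), np.sin(ty)
    cz, sz = np.cos(tz), np.sin(tz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Rx @ Ry

def board_angles(R):
    """Recover (pitch, roll, yaw) = (theta_x, theta_y, theta_z) from R."""
    tx = np.arcsin(R[2, 1])                 # theta_x from r_32
    ty = np.arctan2(-R[2, 0], R[2, 2])      # theta_y from r_31, r_33
    tz = np.arctan2(-R[0, 1], R[1, 1])      # theta_z from r_12, r_22
    return tx, ty, tz
```

Using `arctan2` rather than a plain arctangent keeps the recovery valid over the full angle range reachable by the board.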

3.3. Robot Hand Control

In order to achieve the collaborative motion between the human and the robot, the robot hand was controlled based on the board position and orientation obtained as described above using the high-speed image processing. The robot hand control was also divided into two steps: solving the inverse kinematics of the robot hand and controlling the servo motors in the robot hand according to the reference joint angles.

3.3.1. Inverse Kinematics of the Robot Hand

Using the inverse kinematics of the robot hand, the reference joint angles could be obtained from the measured board position and orientation. Moreover, since there was a limit to the range in which the robot hand could move, it was necessary to limit the input angles for moving the robot. Thus, we set an appropriate movable range for the board height.
Figure 10 shows an illustration of the inverse kinematics of the robot hand. First, the height $h$ of the middle finger of the robot hand was derived from the height of the center of the board:
$$\begin{bmatrix} x \\ y \\ h \\ 1 \end{bmatrix} = \begin{bmatrix} r_{11} & r_{12} & r_{13} & p_x \\ r_{21} & r_{22} & r_{23} & p_y \\ r_{31} & r_{32} & r_{33} & p_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 \\ l_b \\ 0 \\ 1 \end{bmatrix},$$
where $l_b$ is the distance from the center of the board to the edge grasped by the robot hand.
Then, the heights of the tip positions of the three fingers were derived as follows:
$$z_l = h + \frac{r_f + L_t}{\cos\theta_y} + l_f \tan\theta_y, \qquad z_m = h - \frac{r_f + c}{\cos\theta_y}, \qquad z_r = h + \frac{r_f + L_t}{\cos\theta_y} - l_f \tan\theta_y,$$
where $l_f$ is the distance between the fingers of the robot hand, $r_f$ is the radius of each finger, $L_t$ is the thickness of the board, and $c$ is a small gap between the finger and the board. Therefore, the reference joint angles of the root links of the three fingers were calculated by
$$q_l = \tan^{-1}\frac{z_l}{l}, \qquad q_m = \tan^{-1}\frac{z_m}{l}, \qquad q_r = \tan^{-1}\frac{z_r}{l},$$
where $l$ is the length of the root link of each finger. In order to set the top links parallel to the ground, the reference joint angles of the top links were obtained by multiplying the reference joint angles of the root links by minus one. Moreover, the reference joint angle of the wrist was given by
$$q_w = \theta_z.$$
Since the board was manipulated so as to be kept horizontal (pitch angle $\theta_x = 0$), the top links also had to be kept horizontal; this is why their joint angles were set to minus those of the root links, as described above.
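A minimal transcription of the finger-height and joint-angle formulas is given below. The geometry values ($l$, $l_f$, $r_f$, $L_t$, $c$) are placeholders invented for illustration, not the paper's hardware specifications.

```python
import numpy as np

# Illustrative geometry (assumed, not the paper's exact values).
L_FINGER = 0.05   # root-link length l [m]
L_GAP = 0.03      # finger spacing l_f [m]
R_F = 0.009       # finger radius r_f [m]
L_T = 0.005       # board thickness L_t [m]
C_GAP = 0.001     # clearance c [m]

def finger_angles(h, theta_y, theta_z):
    """Reference joint angles [q_l, q_m, q_r, q_w] of the root links and wrist."""
    lift = (R_F + L_T) / np.cos(theta_y)
    z_l = h + lift + L_GAP * np.tan(theta_y)   # left thumb tip height
    z_m = h - (R_F + C_GAP) / np.cos(theta_y)  # middle (index) finger height
    z_r = h + lift - L_GAP * np.tan(theta_y)   # right thumb tip height
    q = [np.arctan2(z, L_FINGER) for z in (z_l, z_m, z_r)]
    return q + [theta_z]                       # wrist tracks the yaw angle
```

A useful sanity check is the mirror symmetry of the left and right fingers under a sign flip of the roll angle $\theta_y$.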

3.3.2. Joint Angle Control of the Robot Hand

In order to track the reference joint angles ($q_l$, $q_m$, $q_r$ and $q_w$) obtained from the inverse kinematics of the robot hand, the joint angles of the robot hand were controlled by proportional derivative (PD) control. Namely, the following torque input $\tau$ was applied to each actuator installed in the robot hand:
$$\tau = k_p \left( \theta_r - \theta \right) + k_d \left( \dot{\theta}_r - \dot{\theta} \right),$$
where $\theta_r$ is the reference joint angle of each joint of the robot hand, calculated using Equation (14); $\theta$ is the actual joint angle of each joint, measured by an optical encoder installed in the servo motor; and $k_p$ and $k_d$ are the proportional and derivative gains, respectively.

3.4. Advantages and Limitations of Proposed Method

The advantages of the proposed method are the high speed, low latency and high accuracy of the collaborative system. As a result, the system can collaborate with human motion in the true sense; that is, the robot can react instantly and flexibly to human motion. In addition, in conventional methods using a force sensor [29,30,31], it takes time for the sensor to measure the reaction force, to determine that the object was actually operated by the subject and to recognize the direction in which the object was moved; these steps are difficult to achieve and to speed up. In contrast, the proposed method is intuitive and can perform recognition at high speed and with high accuracy.
On the other hand, the limitations of the proposed strategy are that it requires markers attached to the four corners of the board, as well as a lighting environment in which the markers can be detected. At present, the target object is limited to plate-shaped objects, and additional ingenuity is required in the visual sensing in order to adapt the method to objects of other shapes. In addition, strict camera calibration and system coordinate calibration may be required for the realization of Human-Robot collaboration.

4. Theoretical Analysis

This section describes a theoretical analysis of the stability of the Human-Robot collaborative system and of the theoretical collaborative error (in particular, the pitch angle error $\theta_x$) that occurs during the collaborative motion.

4.1. Stability Analysis

The equations of motion of the board (translational motion in the $z$ direction and rotational motion around the pitch axis) during collaboration between the human and the robot system are given by
$$m \ddot{z} = m g - f_r - f_h,$$
$$I \ddot{\phi} = m g L_l + m \ddot{z} L_l - f_r L_l,$$
where the moment of inertia of the board is $I = \frac{1}{3} m \left( 2 L_l \right)^2$. In this analysis, we do not consider the torsional motion.
Next, we derive the force $f_r$ that acts on the board from the reference and actual joint angles of the robot hand. Here, the reference joint angle $\theta_{ref}$ and the actual joint angle $\theta$ are as follows:
$$\theta = \tan^{-1}\frac{h + L_l \sin\phi}{L_f}, \qquad \theta_{ref} = \tan^{-1}\frac{h - L_l \sin\phi}{L_f}.$$
When the angle $\phi$ is small ($\phi \ll 1$), the angles can be approximated as $\sin\phi \approx \phi$ and $\tan\phi \approx \phi$. The force $f_r$ is equal to $\tau / L_f$; using the approximation of the angles and the PD control law, we obtain
$$f_r = \frac{2 L_l}{L_f^2} \left( k_p \phi + k_d \dot{\phi} \right).$$
Substituting $f_r$ into Equations (17) and (18), we get
$$m \ddot{z} = m g - \frac{2 L_l}{L_f^2} \left( k_p \phi + k_d \dot{\phi} \right) - f_h,$$
$$I \ddot{\phi} = 2 m g L_l - \frac{4 L_l^2}{L_f^2} \left( k_p \phi + k_d \dot{\phi} \right) - f_h L_l.$$
From these results, the transfer function $G(s)$ from the force $f_h$ to the angle $\phi$ becomes
$$G(s) = \frac{3 L_f^2 / 4 L_l}{m L_f^2 s^2 + 3 k_d s + 3 k_p}.$$
In the case where there is no latency in the system, the transfer function $G(s)$ is stable. However, the actual robot control system has some latency. Thus, assuming that the latency time is $T_L$, the transfer function $G_L(s)$ from the force $f_h$ to the angle $\phi$ can be rewritten as
$$G_L(s) = \frac{3 L_f^2 / 4 L_l}{m L_f^2 s^2 + 3 k_d s \dfrac{1}{1 + T_L s} + 3 k_p \dfrac{1}{1 + T_L s}},$$
where the transfer function of the latency element is assumed to be a first-order lag system in order to allow modeling with a finite-dimensional function.
By applying the Routh-Hurwitz stability criterion to the transfer function $G_L(s)$, the stability condition for the Human-Robot collaborative system is found to be
$$k_d - k_p T_L > 0.$$
As a result, the stability condition relating the latency time $T_L$ to the proportional and derivative gains $k_p$ and $k_d$ of the PD controller is
$$T_L < \frac{k_d}{k_p}.$$
From this analysis result, we found that the latency time $T_L$ must be smaller than $k_d / k_p$ in order to stabilize the system. Conversely, the controller gains can be adjusted from this condition once the latency time $T_L$ is determined.
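The bound $T_L < k_d / k_p$ can be verified numerically: clearing the first-order lag from the denominator of $G_L(s)$ gives the characteristic polynomial $m L_f^2 T_L s^3 + m L_f^2 s^2 + 3 k_d s + 3 k_p$, whose roots cross into the right half-plane at the predicted boundary. The parameter values below are illustrative, not the paper's.

```python
import numpy as np

# Illustrative parameters (the paper's actual values are in its tables).
m, L_f = 0.113, 0.1    # board mass [kg], lever length [m]
kp, kd = 10.0, 0.1     # PD gains
T_crit = kd / kp       # predicted stability boundary: T_L < kd/kp

def char_roots(T_L):
    """Roots of m*L_f^2*T_L*s^3 + m*L_f^2*s^2 + 3*kd*s + 3*kp = 0,
    the characteristic polynomial of G_L(s) after multiplying its
    denominator through by (1 + T_L*s)."""
    coeffs = [m * L_f**2 * T_L, m * L_f**2, 3 * kd, 3 * kp]
    return np.roots(coeffs)

stable = max(char_roots(0.5 * T_crit).real) < 0    # below the bound
unstable = max(char_roots(2.0 * T_crit).real) > 0  # above the bound
```

This agrees with the Routh-Hurwitz condition: for a cubic $a_3 s^3 + a_2 s^2 + a_1 s + a_0$, stability requires $a_2 a_1 > a_3 a_0$, which here reduces exactly to $k_d > k_p T_L$.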

4.2. Analysis of Collaborative Error

This section explores the collaborative error due to the image processing and the latency resulting from the frame rate. Here, we define the collaborative error as the pitch angle $\theta_x$. Namely, when the pitch angle $\theta_x$ converges to around 0 (meaning that the board is kept horizontal), the collaborative error is considered to be small.

4.2.1. Collaborative Error Due to Image-Processing

We first evaluate the error due to the image processing. Assuming that the reprojection error from camera calibration is $e_r$ pixels, the error of the image moment is $e_p$ pixels, the pixel size is $a$ μm/pixel, the focal length is $f$ mm and the distance between the camera and the board is $L_c$ mm, the measurement error of a corner position in world coordinates can be calculated as
$$e_i = \frac{a L_c}{f} \left( e_r + e_p \right) \times 10^{-3} \ \mathrm{mm}.$$
From the error $e_i$ obtained by Equation (27) and the board length $L_l$, the error $\theta_{pixel}$ about the pitch axis due to the image processing is given by
$$\theta_{pixel} = \sin^{-1}\frac{e_i}{L_l} \ \mathrm{rad}.$$
From the experimental conditions shown in Table 1, we obtain $\theta_{pixel} = 3.98 \times 10^{-3}$ rad.

4.2.2. Collaborative Error Due to Frame Rate

Second, we evaluate the error resulting from the frame rate. Assuming that the board is moved by the human subject with an amplitude of $A$ mm and a frequency of $f_{freq}$ Hz, the board velocity is $2 \pi A f_{freq}$ mm/s. In the case where the frame rate is set at 1000 fps (1 ms), the latency becomes 3 ms, comprising 1 ms for image acquisition, 1 ms for image transmission and 1 ms for control. As a result, the error $\theta_{latency}$ about the pitch axis due to the frame rate is given by
$$\theta_{latency} = \tan^{-1}\frac{2 \pi A f_{freq} \left( T_m + T_t + T_c \right)}{L_l} \ \mathrm{rad},$$
where $T_m$ is the measurement time for the image processing, $T_t$ is the transmission time of the data from the image-processing PC to the real-time controller, and $T_c$ is the sampling time of the control. We assume that the transmission time and the sampling time are 1 ms each. $T_m$ depends on the frame rate of the high-speed camera; if the frame rate is 1000 fps, the measurement time becomes 1 ms. From the experimental conditions shown in Table 1, we obtain $\theta_{latency} = 2.14 \times 10^{-2}$ rad when the frame rate is 1000 fps.
The effect of the frame rate is thus about 10 times larger than the effect of the image processing. Consequently, the frame rate is very important for Human-Robot collaborative manipulation. We verified the validity of these collaborative errors in the experiments described in the next section.
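Both error models can be evaluated together. All parameter values below are assumptions chosen for illustration (Table 1 gives the actual experimental conditions); they are intended only to show the relative magnitudes of the two error sources.

```python
import numpy as np

# Illustrative values (assumed); the real conditions are in Table 1.
a = 5.5                # pixel size [um/pixel]
f = 12.0               # focal length [mm]
L_c = 800.0            # camera-board distance [mm]
e_r, e_p = 0.5, 0.5    # reprojection / moment errors [pixel]
L_l = 220.0            # board length [mm]
A, f_freq = 50.0, 5.0  # board motion amplitude [mm] and frequency [Hz]

def pixel_error():
    """Equations (27)-(28): angular error caused by image processing."""
    e_i = a * L_c / f * (e_r + e_p) * 1e-3   # corner error [mm]
    return np.arcsin(e_i / L_l)

def latency_error(fps):
    """Equation (29): angular error caused by measurement, transmission
    and control latency. T_m is one frame period; T_t and T_c are 1 ms."""
    T = 1.0 / fps + 1e-3 + 1e-3              # T_m + T_t + T_c [s]
    return np.arctan(2 * np.pi * A * f_freq * T / L_l)
```

With these assumed values, `latency_error(1000)` is on the order of $10^{-2}$ rad while `pixel_error()` is on the order of $10^{-3}$ rad, reproducing the roughly tenfold gap discussed above.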

5. Experiment for Collaborative Motion Task

Finally, this section shows the experimental results of collaborative motion and the evaluation of the collaborative error and control performance due to the latency caused by the frame rate.

5.1. Result

Figure 11 and Figure 12 show sequential photographs of the experimental results at 1000 fps. The time intervals between the photographs in Figure 11 and Figure 12 are 1 s and 0.5 s, respectively. Additionally, a video of the experimental results at 1000 fps is available on our website [32,33]. From the experimental results of the collaborative motion, the system was able to keep the board horizontal (the pitch angle $\theta_x$ was around zero) while following the motions with respect to the y and z axes. Figure 13 shows the collaborative error and the DA output for the root link of the middle finger. The DA output corresponds to the torque input, and its limit is set at ±1 (i.e., the maximum torque input of each servo motor). On the left side of Figure 13, the blue and red lines depict the board angle $\theta_x$ and the board height $P_z$, respectively; on the right side, the black line shows the DA output.
In the experiment, the collaborative motion was performed in the time span from 2 to 15 s, which is indicated by the gray dotted lines in Figure 13. It can be seen from Figure 13 that the collaborative error was successfully suppressed to within 0.03 rad (≈1.8°), even when the human subject moved the board quickly and randomly. Furthermore, the torque input was suppressed to within 0.5.
As a result, collaborative motion between the human and the robot hand using the developed system and proposed method was achieved.

5.2. Evaluation

From the results described in Section 4.2, it can be seen that the collaborative error mainly depended on the frame rate. Therefore, we performed experiments with various frame rates: 50, 100, 300, 500 and 1000 fps. Figure 14 shows the theoretical and actual collaborative errors (pitch angle $\theta_x$), and Figure 15 shows the collaborative errors (left figures) and DA outputs (right figures) at the various frame rates.
As shown in Figure 14, the theoretical collaborative error can be calculated by Equation (29), and the actual collaborative error can be derived from the experimental results shown in Figure 15. In the plot of the actual collaborative error, the vertical bars depict the standard deviation of the collaborative error. Theoretically, the lower the frame rate, the greater the collaborative error. In the experiments, on the other hand, the collaborative error did not increase significantly when the frame rate decreased. The reason for this was that the responsiveness of the collaborative system was not good enough to realize collaborative motion, and the human subject unknowingly restricted and slowed down the board's motion. However, the standard deviation increased slightly.
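The theoretical curve in Figure 14 follows from Equation (29) with the measurement time set to one frame period, $T_m = 1/\mathrm{fps}$. A small sketch of this calculation (variable names are ours, values from Table 1):

```python
import math

# Table 1 values (our own variable names)
A, f_freq, L_l = 50.0, 5.0, 220.0   # amplitude [mm], frequency [Hz], board length [mm]
T_t, T_c = 1e-3, 1e-3               # transmission and control sampling times [s]

def theta_latency(fps):
    """Theoretical collaborative error of Eq. (29), with T_m = 1/fps."""
    latency = 1.0 / fps + T_t + T_c            # total latency [s]
    d = 2 * math.pi * A * f_freq * latency     # board displacement during the latency [mm]
    return math.atan(d / L_l)                  # pitch-angle error [rad]

for fps in (50, 100, 300, 500, 1000):
    print(fps, theta_latency(fps))
```

The computed error decreases monotonically with the frame rate, reaching the stated $2.14 \times 10^{-2}$ rad at 1000 fps.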
From Figure 15, we confirmed that collaborative motion could be achieved even at low frame rates. However, the amplitudes of the collaborative error and DA output increased as the frame rate decreased. In particular, the difference in the DA output across frame rates was significant. At low frame rates such as 50 and 100 fps, the DA output reached ±1, meaning that the maximum signal to the servo motor was generated. At high frame rates, on the other hand, the DA output stayed below ±0.5. As a result, we found that the Human-Robot collaborative system became more stable and the load on the robot hand was reduced when the frame rate was high.
From the experimental results shown in Figure 13, Figure 14 and Figure 15, we confirmed that the pitch angle $\theta_x$ decreased when the frame rate increased. By increasing the frame rate (to over 300 fps), the stability of the Human-Robot collaborative system was improved, and the oscillation of the joint angles of the robot hand was also suppressed.
Next, we describe the analysis and experiment for the collaborative peg-in-hole task, a concrete task that requires high accuracy as well as high speed.

6. Collaborative Peg-In-Hole Task

In this section, we analyze the collaborative peg-in-hole task and clarify the conditions for achieving the task.

6.1. Conditions for Achieving Collaborative Peg-In-Hole Task

The peg-in-hole task has been widely investigated, and its modeling has also been studied. In the modeling, the condition described by Whitney has been analyzed and formulated using a physical peg-in-hole model [34,35]. In this section, we describe conditions for achieving the collaborative peg-in-hole task using our developed Human-Robot collaborative system, based on the conditions proposed by Whitney [34]. As a precondition for the analysis, we assume that the human subject grasps one edge of the board and the robot hand grasps the opposite edge.

6.1.1. Geometric Conditions

First of all, we consider geometric conditions for the peg-in-hole task. As a first condition, the peg and hole positions must be adjusted so that the peg can be inserted into the hole, as shown in Figure 16. For the permissible position error $e_i'$, the condition $e_i < e_i'$ has to be satisfied, where $e_i$ is the actual position error, and $e_i'$ can be calculated as follows:
$$ e_i' \leq w \cos\theta_0 + cR \leq w + cR, \qquad c = \frac{R - r}{R}. \tag{30} $$
In addition to the position adjustment, we also consider a condition on the board posture. For the permissible orientation error $\theta_m$, the condition $\theta < \theta_m$ has to be satisfied, where $\theta$ is the actual orientation error, and $\theta_m$ can be calculated as follows:
$$ \theta_m = \cos^{-1} \frac{r}{R}, \tag{31} $$
where $\theta = \theta_{pixel} + \theta_{latency}$. Here, $\theta_{pixel}$ and $\theta_{latency}$ are the collaborative errors due to the image processing and the latency, which can be calculated by Equations (28) and (29), respectively. Although the operation speed of the human subject is limited through $\theta_{latency}$, the collaborative peg-in-hole task can be achieved when the condition $\theta_{pixel} < \theta_m$ is satisfied.
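As a quick check of this orientation condition, using the collaborative errors reported in Section 4.2 for the 1000 fps case and the peg and hole radii given later in Table 2 (a sketch with our own variable names):

```python
import math

# Hole and peg radii [mm]; these values appear later in Table 2
R, r = 6.350, 6.325

# Eq. (31): permissible orientation error [rad]
theta_m = math.acos(r / R)

# Collaborative errors stated in Section 4.2 for the 1000 fps case [rad]
theta_pixel = 3.98e-3
theta_latency = 2.14e-2
theta = theta_pixel + theta_latency

# The orientation condition theta < theta_m holds with ample margin
print(theta, theta_m, theta < theta_m)
```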

6.1.2. Force Condition

Next, we consider a condition for the case in which a lock phenomenon occurs between the peg and the hole in the board. The lock phenomenon means that a large contact force arises between the peg and the hole so that the board cannot be moved; i.e., the board remains stationary while the lock phenomenon occurs. Thus, let us consider force conditions in the upward, downward and rotational directions shown in Figure 17a. The force conditions can be described as follows:
$$ mg + F_h + F_r \geq \mu f_1 + \mu f_2, \tag{32} $$
$$ f_1 = f_2, \tag{33} $$
$$ F_r L_l \cos\theta_p + \frac{l_{2p}}{2} (f_1 + f_2) + r \mu f_1 = F_h L_l \cos\theta_p + r \mu f_2, \tag{34} $$
where
$$ l_{2p} = \sqrt{4R^2 + r^2 + w^2}, \tag{35} $$
$$ L = L_l - 2R, \tag{36} $$
$$ \cos\theta_p = \frac{r}{R}. \tag{37} $$
Eliminating the forces $f_1$ and $f_2$ from Equations (32)–(34), the following condition can be obtained:
$$ mg \geq \left( \frac{2 \mu L_l r}{l_{2p} R} - 1 \right) F_h - \left( 1 + \frac{2 \mu L_l r}{l_{2p} R} \right) F_r. \tag{38} $$
When the above force condition is satisfied, the insertion can be executed without the lock phenomenon between the peg and the hole.
In addition, we consider the case of removing the board from the peg. Since the friction force acts opposite to the insertion case, the signs of the friction terms $\mu f_1$ and $\mu f_2$ become opposite (Figure 17b). Therefore, we obtain
$$ mg \leq \left( \frac{2 \mu L_l r}{l_{2p} R} - 1 \right) F_h + \left( 1 + \frac{2 \mu L_l r}{l_{2p} R} \right) F_r. \tag{39} $$
In Equation (38), when the robot does not collaborate with the human motion, achievement of the task strongly depends on the sign of the factor $\frac{2 \mu L_l r}{l_{2p} R} - 1$. In fact, since the force $F_h$ becomes negative during insertion, the insertion can be achieved even if the condition $\frac{2 \mu L_l r}{l_{2p} R} - 1 < 0$ is satisfied. On the other hand, in Equation (39), which describes removing the board from the peg, the force $F_h$ has to be positive. This means that the task cannot be achieved without robot collaboration.
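The sign of the factor $\frac{2 \mu L_l r}{l_{2p} R} - 1$ can be checked numerically. Note two assumptions in the sketch below: the friction coefficient $\mu$ is not given in the paper, so 0.3 is an illustrative value only, and $l_{2p}$ follows our reading of Equation (35):

```python
import math

# Geometry from Table 2 [mm]; mu is NOT given in the paper and is an
# assumed value for illustration only
R, r, w, L_l = 6.350, 6.325, 1.0, 100.0
mu = 0.3

# Eq. (35) as read here (square root for dimensional consistency)
l_2p = math.sqrt(4 * R**2 + r**2 + w**2)

# Factor whose sign governs removal without robot collaboration
factor = 2 * mu * L_l * r / (l_2p * R) - 1
print(factor > 0)  # positive for any mu greater than about 0.07
```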

6.1.3. Posture Condition

In addition to the force condition, a condition on the board posture (pitch angle) is also considered for the collaborative peg-in-hole task, as a more stringent condition. From the radii $r$ and $R$ and the board thickness $L_t$, the posture condition can be obtained as follows:
$$ 2r < 2R - 2 L_t \sin\theta_p \tag{40} $$
$$ \frac{R - r}{L_t} > \sin\theta_p \tag{41} $$
$$ \theta_p < \sin^{-1} \frac{R - r}{L_t} \tag{42} $$
If this condition is satisfied during the collaborative motion, the collaborative peg-in-hole task can be achieved without satisfying the force condition, because contact between the peg and the hole does not occur. This condition can be satisfied by our real-time Human-Robot collaborative system in limited circumstances.

6.2. Experimental Result

We show one application of the Human-Robot collaborative system: a peg-in-hole task carried out by a human and a robot. In the experiment, the radii ($R$, $r$) of the hole and the peg were 6.350 mm and 6.325 mm, respectively. Thus, since the clearance was only 0.025 mm (25 μm), precise motion and positioning were essential. Moreover, since the peg-in-hole task is very difficult to achieve, it was considered a valid test of the effectiveness of our high-speed Human-Robot collaborative system.
Table 2 shows the experimental parameters of the collaborative peg-in-hole task. Substituting the experimental parameters into the above conditions for achieving the collaborative peg-in-hole task (Equations (30), (31) and (42)), the following conditions can be obtained:
$$ e_i' \leq w + cR = 1 + 0.025 \approx 1.03 \ \mathrm{mm} $$
$$ \theta_m = \cos^{-1} \frac{r}{R} = \cos^{-1} \frac{6.325}{6.35} \approx 8.88 \times 10^{-2} \ \mathrm{rad} \approx 5.09° $$
$$ \theta_p < \sin^{-1} \frac{R - r}{L_t} = \sin^{-1} \frac{6.35 - 6.325}{5} \approx 5.00 \times 10^{-3} \ \mathrm{rad} \approx 2.87 \times 10^{-1}\,° $$
If the conditions $e_i < e_i'$ and $\theta < \theta_m$ are satisfied, we can achieve the collaborative peg-in-hole task. The posture condition shown in Equation (42) is sufficient, but not necessary, for achieving the task, so it does not have to be satisfied.
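The numeric conditions above can be reproduced directly from the Table 2 parameters (a short sketch with our own variable names; note that $w + cR$ evaluates to 1.025 mm):

```python
import math

# Table 2 parameters (our own variable names)
R, r = 6.350, 6.325   # hole and peg radii [mm]
w = 1.0               # parameter w from Table 2 [mm]
L_t = 5.0             # board thickness [mm]

c = (R - r) / R                         # relative clearance
e_perm = w + c * R                      # permissible position error [mm]
theta_m = math.acos(r / R)              # permissible orientation error [rad]
theta_p_max = math.asin((R - r) / L_t)  # posture condition limit [rad]

print(e_perm, theta_m, theta_p_max)
```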
In addition, in order to confirm the validity of the force condition, we conducted a preliminary experiment in which the board, with the peg inserted, was pulled off the peg by the human subject. As a result, the lock phenomenon occurred, and the human subject could not move the board. This means that the force condition was not satisfied.
Figure 18 and Figure 19 show the experimental results and data of the collaborative peg-in-hole task. Furthermore, a video of the experimental result is available on our website [32,33]. Figure 18a shows the initial state, Figure 18a–d shows the collaborative motion, and Figure 18e–i shows the collaborative peg-in-hole task. From the experimental results, the collaborative peg-in-hole task was carried out successfully. In particular, the human could move the board upward and downward smoothly, even when the peg was inserted in the hole; in general, it is not possible to move the board smoothly in this state.
In Figure 19, the collaborative motion was performed in the period of 2–15 s, and the collaborative peg-in-hole task was performed in the period of 8–13 s. It can be seen from Figure 19 that the board could be moved upward and downward without the lock phenomenon using our proposed method and developed system, even while the peg-in-hole task was being performed. In fact, even when the z-location $P_z$ of the board was moved upward and downward, the pitch angle $\theta_x$ settled at around 0.1 rad. In addition, since the pitch angle $\theta_x$ did not change abruptly compared with the z-location, we found that the insertion and removal actions could be achieved smoothly.

7. Conclusions

In this paper, we developed a high-speed, high-accuracy Human-Robot collaborative system using a high-speed robot hand and a high-speed camera. Furthermore, we proposed visual sensing and robot hand control methods that run at 1000 Hz. We then analyzed the stability of the collaborative system and the collaborative error. We demonstrated collaborative motion and, based on the analysis results, evaluated the collaborative error and the control performance at various frame rates. As a result, we found that high-speed performance was critically important in the HRI system from the viewpoints of collaborative error, system stability and control performance of the robot. Moreover, we applied the developed system to a collaborative peg-in-hole task. To achieve the task, we analyzed the geometric, force and posture conditions for performing the collaborative peg-in-hole task. Finally, we demonstrated the collaborative peg-in-hole task successfully. As a result, the validity of the developed high-speed, high-accuracy collaborative system was confirmed.
In the future, using our collaborative system, we plan to demonstrate other tasks that cannot be achieved with human-human collaboration or conventional Human-Robot collaboration. Moreover, since user feedback contributes to improving the performance of the robot action during Human-Robot collaboration [36], we plan to develop more flexible and intelligent HRI systems during the interaction between humans and robots.

Author Contributions

Conceptualization, Y.Y.; methodology, Y.M. and Y.Y.; software, Y.M.; validation, Y.M. and Y.Y.; formal analysis, Y.M.; investigation, Y.M. and Y.Y.; writing—original draft preparation, Y.Y. and Y.M.; writing—review and editing, Y.Y., Y.M. and M.I.; supervision, M.I.; project administration, Y.Y. and M.I.; funding acquisition, Y.Y. and M.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the JST, PRESTO Grant Number JPMJPR17J9, Japan.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chandrasekaran, B.; Conrad, M.J. Human-robot collaboration: A survey. In Proceedings of the IEEE SoutheastCon 2015, Fort Lauderdale, FL, USA, 9–12 April 2015; pp. 1–8.
2. Zoss, A.B.; Kazerooni, H.; Chu, A. Biomechanical design of the berkeley lower extremity exoskeleton (bleex). IEEE/ASME Trans. Mechatron. 2006, 11, 128–138.
3. Hayashibara, Y.; Takubo, T.; Sonoda, Y.; Arai, H.; Tanie, K. Assist system for carrying a long object with a human-analysis of a human cooperative behavior in the vertical direction. In Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju, Korea, 17–21 October 1999; pp. 695–700.
4. Yokoyama, K.; Handa, H.; Isozumi, T.; Fukase, Y.; Kaneko, K.; Kanehiro, F.; Kawai, Y.; Tomita, F.; Hirukawa, H. Cooperative works by a human and a humanoid robot. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, Taipei, Taiwan, 14–19 September 2003; pp. 2985–2991.
5. Kosuge, K.; Kakuya, H.; Hirata, Y. Control algorithm of dual arms mobile robot for cooperative works with human. In Proceedings of the 2001 IEEE International Conference on Systems, Man, and Cybernetics, Tucson, AZ, USA, 7–10 October 2001; pp. 3223–3228.
6. Suda, R.; Kosuge, K. Handling of object by mobile robot helper in cooperation with a human using visual information and force information. In Proceedings of the 2002 IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; pp. 1102–1107.
7. Stückler, J.; Behnke, S. Following human guidance to cooperatively carry a large object. In Proceedings of the 11th IEEE-RAS International Conference on Humanoid Robots, Bled, Slovenia, 26–28 October 2011; pp. 218–223.
8. Antão, L.; Pinto, R.; Reis, J.; Gonçalves, G.; Pereira, F.L. Cooperative human-machine interaction in industrial environments. In Proceedings of the 2018 13th APCA International Conference on Automatic Control and Soft Computing, Ponta Delgada, Portugal, 4–6 June 2018; pp. 430–435.
9. Teke, B.; Lanz, M.; Kämäräinen, J.; Hietanen, A. Real-time and robust collaborative robot motion control with Microsoft Kinect® v2. In Proceedings of the 2018 14th IEEE/ASME International Conference on Mechatronic and Embedded Systems and Applications, Oulu, Finland, 2–4 July 2018; pp. 1–6.
10. Çoban, M.; Gelen, G. Realization of human-robot collaboration in hybrid assembly systems by using wearable technology. In Proceedings of the 2018 6th International Conference on Control Engineering & Information Technology, Istanbul, Turkey, 25–27 October 2018; pp. 1–6.
11. Wang, W.; Li, R.; Chen, Y.; Diekel, Z.M.; Jia, Y. Facilitating human-robot collaborative tasks by teaching-learning-collaboration from human demonstrations. IEEE Trans. Autom. Sci. Eng. 2019, 16, 640–653.
12. Shayganfar, M.; Rich, C.; Sidner, C. Appraisal algorithms for relevance and controllability in human-robot collaboration. In Proceedings of the 2019 IEEE International Conference on Humanized Computing and Communication, Laguna Hills, CA, USA, 25–27 September 2019; pp. 31–37.
13. Scimmi, L.S.; Melchiorre, M.; Mauro, S.; Pastorelli, S. Experimental real-time setup for vision driven hand-over with a collaborative robot. In Proceedings of the 2019 International Conference on Control, Automation and Diagnosis, Grenoble, France, 2–4 July 2019; pp. 1–5.
14. Galin, R.; Meshcheryakov, R.; Samoshina, A. Mathematical modelling and simulation of human-robot collaboration. In Proceedings of the 2020 International Russian Automation Conference, Sochi, Russia, 6–12 September 2020; pp. 1058–1062.
15. Darvish, K.; Simetti, E.; Mastrogiovanni, F.; Casalino, G. A Hierarchical architecture for human-robot cooperation processes. IEEE Trans. Robot. 2020.
16. Muthugala, M.A.V.J.; Srimal, P.H.D.A.S.; Jayasekara, A.G.B.P. Improving robot’s perception of uncertain spatial descriptors in navigational instructions by evaluating influential gesture notions. J. Multimodal User Interfaces 2020.
17. Sheng, W.; Thobbi, A.; Gu, Y. An integrated framework for human-robot collaborative manipulation. IEEE Trans. Cybern. 2015, 45, 2030–2041.
18. Lemmerz, K.; Glogowski, P.; Kleineberg, P.; Hypki, A.; Kuhlenkötter, B. A hybrid collaborative operation for human-robot interaction supported by machine learning. In Proceedings of the 2019 12th International Conference on Human System Interaction, Richmond, VA, USA, 25–27 June 2019; pp. 69–75.
19. Ishikawa Group Laboratory. Available online: http://www.k2.t.u-tokyo.ac.jp/fusion/Janken/index-e.html (accessed on 15 December 2020).
20. Janken (rock-paper-scissors) robot with 100% winning rate (human-machine cooperation system) in Yamakawa Laboratory. Available online: http://www.hfr.iis.u-tokyo.ac.jp/research/Janken/index-e.html (accessed on 15 December 2020).
21. Yamakawa, Y.; Kuno, K.; Ishikawa, M. Human-robot cooperative task realization using high-speed robot hand system. In Proceedings of the 2015 IEEE International Conference on Advanced Intelligent Mechatronics, Busan, Korea, 7–11 July 2015; pp. 281–286.
22. Matsui, Y.; Yamakawa, Y.; Ishikawa, M. Cooperative operation between a human and a robot based on real-time measurement of location and posture of target object by high-speed vision. In Proceedings of the 2017 IEEE International Conference on Control Technology and Applications, Mauna Lani, HI, USA, 27–30 August 2017; pp. 457–462.
23. Yamakawa, Y.; Matsui, Y.; Ishikawa, M. Human-robot collaborative manipulation using a high-speed robot hand and a high-speed camera. In Proceedings of the 2018 IEEE International Conference on Cyborg and Bionic Systems, Shenzhen, China, 25–27 October 2018; pp. 426–429.
24. Yamakawa, Y.; Matsui, Y.; Ishikawa, M. Development and analysis of a high-speed human-robot collaborative system and its application. In Proceedings of the 2018 IEEE International Conference on Robotics and Biomimetics, Kuala Lumpur, Malaysia, 12–15 December 2018; pp. 2415–2420.
25. Namiki, A.; Imai, Y.; Ishikawa, M.; Kaneko, M. Development of a high-speed multifingered hand system and its application to catching. In Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV, USA, 27–31 October 2003; pp. 2666–2671.
26. MIKROTRON. Available online: http://www.mikrotron.de/ (accessed on 15 December 2020).
27. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334.
28. Ishii, I.; Nakabo, Y.; Ishikawa, M. Target tracking algorithm for 1 ms visual feedback system using massively parallel processing. In Proceedings of the 1996 IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22–28 April 1996; pp. 2309–2314.
29. Peternel, L.; Tsagarakis, N.; Caldwell, D.; Ajoudani, A. Adaptation of robot physical behaviour to human fatigue in human-robot co-manipulation. In Proceedings of the 2016 IEEE-RAS 16th International Conference on Humanoid Robots, Cancun, Mexico, 15–17 November 2016; pp. 489–494.
30. Peternel, L.; Tsagarakis, N.; Ajoudani, A. A human-robot co-manipulation approach based on human sensorimotor information. IEEE Trans. Neural Syst. Rehabil. Eng. 2017, 25, 811–822.
31. Rahman, S.M.M.; Ikeura, R. Calibrating intuitive and natural human-robot interaction and performance for power-assisted heavy object manipulation using cognition-based intelligent admittance control schemes. Int. J. Adv. Robot. Syst. 2018, 15, 1–18.
32. Dynamic Human-Robot Interactive System in Yamakawa Laboratory. Available online: http://www.hfr.iis.u-tokyo.ac.jp/research/collaboration/index-e.html (accessed on 15 December 2020).
33. YouTube. Available online: http://www.youtube.com/watch?v=xB9-vEiZwKY (accessed on 15 December 2020).
34. Whitney, D.E. Quasi-static assembly of compliantly supported rigid parts. J. Dyn. Syst. Meas. Control 1982, 104, 65–77.
35. Hara, K.; Yokogawa, R.; Kai, Y. Kinematic evaluation of task-performance of a manipulator for a peg-in-hole task: A method of evaluation including a condition for avoidance of jamming on a chamfer. Trans. Jpn. Soc. Mech. Eng. Ser. C 1998, 64, 604–609.
36. Muthugala, M.A.V.J.; Jayasekara, A.G.B.P. Enhancing user satisfaction by adapting robot’s perception of uncertain information based on environment and user feedback. IEEE Access 2017, 5, 26435–26447.
Figure 1. Goal of this research. HRI: Human-Robot interaction.
Figure 2. Purpose of this research [24].
Figure 3. Human-Robot collaborative system [22].
Figure 4. Mechanism of high-speed robot hand [25].
Figure 5. Configuration of axes on the board [22].
Figure 6. Board with hole and peg used in the collaborative peg-in-hole task.
Figure 7. Control flow of Human-Robot collaborative system. PD: proportional derivative.
Figure 8. Flow of image processing and measurement of board state.
Figure 9. Relationship between transformation matrices T [22].
Figure 10. Inverse kinematics calculation of robot hand.
Figure 11. Sequential photographs of experimental results [22]. (a–h): the time interval of the sequential photographs is 1 s.
Figure 12. Sequential photographs of experimental results (side view). (a–h): the time interval of the sequential photographs is 0.5 s.
Figure 13. Data of experimental results.
Figure 14. Theoretical collaborative error and actual collaborative error with various frame rates.
Figure 15. Comparison between various frame rates.
Figure 16. Peg shape and hole.
Figure 17. Human-Robot cooperative peg-in-hole task. In addition to the illustrated forces, gravitational acceleration g always acts on the board. (a) in the case of insertion (downward motion); (b) in the case of removal (upward motion).
Figure 18. Collaborative peg-in-hole task. (a–d): collaborative motion; (e–i): collaborative peg-in-hole.
Figure 19. Data of collaborative peg-in-hole task.
Table 1. Experimental conditions.

Parameter | Value
$e_r$ | 0.21 pixel
$e_p$ | 0.1 pixel
$a$ | 7 μm/pixel
$f$ | 5 mm
$L_c$ | 800 mm
$L_l$ | 220 mm
$A$ | 50 mm
$f_{freq}$ | 5 Hz
Table 2. Experimental parameters for the collaborative peg-in-hole task.

Parameter | Value
$R$ | 6.350 mm
$r$ | 6.325 mm
$\beta$ | 45°
$w$ | 1 mm
$L_t$ | 5 mm
$L_l$ | 100 mm