Article

A Human-Following Motion Planning and Control Scheme for Collaborative Robots Based on Human Motion Prediction

Fahad Iqbal Khawaja, Akira Kanazawa, Jun Kinugawa and Kazuhiro Kosuge

1 Center for Transformative AI and Robotics, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan
2 Robotics and Intelligent Systems Engineering (RISE) Laboratory, Department of Robotics and Artificial Intelligence, School of Mechanical and Manufacturing Engineering (SMME), National University of Sciences and Technology (NUST), Sector H-12, Islamabad 44000, Pakistan
3 Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong
* Author to whom correspondence should be addressed.
Sensors 2021, 21(24), 8229; https://doi.org/10.3390/s21248229
Submission received: 8 November 2021 / Revised: 3 December 2021 / Accepted: 4 December 2021 / Published: 9 December 2021
(This article belongs to the Special Issue Human-Robot Collaborations in Industrial Automation)

Abstract

Human–Robot Interaction (HRI) for collaborative robots has become an active research topic recently. Collaborative robots assist human workers in their tasks and improve their efficiency. However, the worker should also feel safe and comfortable while interacting with the robot. In this paper, we propose a human-following motion planning and control scheme for a collaborative robot which supplies the necessary parts and tools to a worker in an assembly process in a factory. In our proposed scheme, a 3-D sensing system is employed to measure the skeletal data of the worker. At each sampling time of the sensing system, an optimal delivery position is estimated using the real-time worker data. At the same time, the future positions of the worker are predicted as probabilistic distributions. A Model Predictive Control (MPC)-based trajectory planner is used to calculate a robot trajectory that supplies the required parts and tools to the worker and follows the predicted future positions of the worker. We have installed our proposed scheme in a collaborative robot system with a 2-DOF planar manipulator. Experimental results show that the proposed scheme enables the robot to provide anytime assistance to a worker who is moving around in the workspace while ensuring the safety and comfort of the worker.

1. Introduction

The concept of collaborative robots was introduced in the early 1990s. The first collaborative system was proposed by Troccaz et al. in 1993 [1]. This system uses a passive robot arm to ensure safe operation during medical procedures. In 1996, Colgate et al. developed a passive collaborative robot system and applied it to the vehicle’s door assembly process carried out by a human worker [2]. In 1999, Yamada et al. proposed a skill-assist system to help a human worker carry a heavy load [3].
Collaborative robot systems are being actively introduced in the manufacturing industry. The International Organization for Standardization (ISO) amended its robot safety standards ISO 10218-1 [4] and ISO 10218-2 [5] in 2011 to include safety guidelines for human–robot collaboration. This led to an exponential rise in collaborative robot research and development. Today, many companies are manufacturing their own versions of collaborative robots, and these robots are being used in industries all over the world. Collaborative robots are expected to play a major role in the Industry 5.0 environments where people will work together with robots and smart machines [6].
In 2010, a 2-DOF co-worker robot “PaDY” (in-time Parts and tools Delivery to You robot) was developed in our lab to assist a factory worker in an automobile assembly process [7]. This process comprises a set of assembly tasks that are carried out by a worker while moving around the car body. PaDY assists the worker by delivering the necessary parts and tools to him/her for each task. The original control system of PaDY was developed based on a statistical analysis of the worker’s movements [7].
Many studies have been carried out on human–robot collaborative systems. Hawkins et al. proposed an inference mechanism of human action based on a probabilistic model to achieve wait-sensitive robot motion planning in 2013 [8]. D’Arpino et al. proposed fast target prediction of human reaching motion for human–robot collaborative tasks in 2015 [9]. Unhelkar et al. designed a human-aware robotic system, in which human motion prediction is used to achieve a safe and efficient part-delivery task between the robot and a stationary human worker, in 2018 [10]. A recent survey on the sensors and techniques used for human detection and action recognition in industrial environments can be found in [11]. The studies cited above [8,9,10] have improved the efficiency of collaborative tasks by incorporating human motion prediction into robot motion planning.
Human motion prediction was also introduced to PaDY. In 2012, the delivery operation delay of the robot–human handover tasks was reduced by utilizing prediction of the worker’s arrival time at the predetermined working position [12]. In 2019, a motion planning system was developed which optimized a robot trajectory by taking the prediction uncertainty of the worker’s movement into account [13]. In those studies [12,13], the robot repeats the delivery motion from its home position to each predetermined assembly position. If the robot can follow the worker during the assembly process, the worker can pick up necessary parts and tools from the robot at any time. Thus, more efficient collaborative work could be expected by introducing human-following motion.
In this paper, human-following motion of the collaborative robot is proposed for delivery of parts and tools to the worker. The human-following collaborative robot system needs to stay close to the human worker while avoiding collision with the worker under velocity and acceleration constraints. The contributions of this paper are summarized as follows:
1. The proposed human-following motion planning and control scheme enables the worker to pick up the necessary parts and tools whenever needed.
2. The proposed scheme achieves the human-following motion with a sufficiently small tracking error without adversely affecting the safety and comfort of the worker.
3. Experiments conducted in an environment similar to a real automobile assembly process illustrate the effectiveness of the proposed scheme.
This is a quantitative study in which we conducted the experiments ourselves and analyzed the collected data to deduce the results. The proposed scheme predicts the motion of the worker and calculates an optimal delivery position for the handover of parts and tools from the robot to the worker for each task of the assembly process. This scheme has been designed for a single worker operating within his/her workspace. It is not designed for the cases when multiple workers are operating in the same workspace, or when the worker moves beyond the workspace.
The rest of the paper is organized as follows. Section 2 describes the related works. Section 3 gives an overview of the proposed scheme, including the delivery position determination, the worker’s motion prediction, and trajectory planning and control scheme. The experimental results are discussed in Section 4. Section 5 concludes this paper.

2. Related Works

In this section, we present a review of the existing research on human–robot handover tasks, human-following robots, and motion/task planning based on human motion prediction.

2.1. Human–Robot Handover

Some studies have considered the problem of psychological comfort of the human receiver during the handover task. Baraglia et al. addressed the issue of whether and when a robot should take the initiative [14]. Cakmak et al. advocated the inclusion of user preferences while calculating handover position [15]. They also identified that a major cause of delay in the handover action is the failure to convey the intention and timing of handing over the object from the robot to the human [16]. Although these studies deal with important issues for improving the human–robot collaboration, it is still difficult to apply them in actual applications because psychological factors cannot be directly observed.
Some other studies used observable physical characteristics of the human worker for planning a robot motion that is safe and comfortable for the worker. Mainprice et al. proposed a motion planning scheme for human–robot collaboration considering HRI constraints such as constraints of distance, visibility and arm comfort of the worker [17]. Aleotti et al. devised a scheme in which the object is delivered in such a way that its most easily graspable part is directed towards the worker [18]. Sisbot et al. proposed a human-aware motion planner that is safe, comfortable and socially acceptable for the human worker [19].
The techniques and algorithms mentioned above operate with the assumption that the worker remains stationary in the environment. To solve the problem of providing assistance to a worker who moves around in the environment, we propose a human-following approach with HRI constraints in this paper.

2.2. Human-Following Robots

Several techniques have been proposed to carry out human-following motion in various robot applications. One of the first human-following approaches was proposed by Nagumo et al., in which an LED device carried by the human was detected and tracked by the robot using a camera [20].
Hirai et al. performed visual tracking of the human back and shoulder in order to follow a person [21]. Yoshimi et al. used several parameters including the distance, speed, color and texture of human clothes to achieve stable tracking in complex situations [22]. Morioka et al. used the reference velocities for human-following control calculated from estimated human position under the uncertainty of the recognition [23]. Suda et al. proposed a human–robot cooperative handling control using force and moment information [24].
The techniques cited in this section focus on performing human-following motion of the robot to achieve safe and continuous tracking. However, these schemes use the feedback of the observed/estimated current position of the worker. This makes it difficult for the robot to keep up with the worker who is continuously moving around in the workspace. In this paper, we solve this problem by applying human motion prediction and MPC.

2.3. Motion/Task Planning Based on Human Motion Prediction

In recent years, many studies have proposed motion planning using human motion prediction. The predicted human motion is used to generate a safe robot trajectory. Mainprice et al. proposed to plan a motion that avoids the predicted occupancy of the 3D human body [25]. Fridovich-Keil et al. proposed to plan a motion that avoids the risky region calculated by the confidence-aware human motion prediction [26]. Park et al. proposed a collision-free motion planner using the probabilistic collision checker [27].
Several studies have proposed robot task planning to achieve collaborative work based on human motion prediction. Maeda et al. achieved a fluid human–robot handover by estimating the phase of human motion [28]. Liu et al. presented a probabilistic model for human motion prediction for task-level human–robot collaborative assembly [29]. Cheng et al. proposed an integrated framework for human–robot collaboration in which the robot perceives and adapts to human actions [30].
Human motion prediction has been effectively used in various problems of human–robot interaction. In this paper, we apply the human motion prediction to human-following motion of the collaborative robot for delivery of parts/tools to a worker.

3. Proposed Motion Planning and Control Scheme

3.1. System Architecture

Figure 1 shows the system architecture of our proposed scheme. This scheme consists of three major parts:
1. Delivery position determination;
2. Worker’s motion prediction;
3. Trajectory planning and control.
In delivery position determination, an optimal delivery position is estimated using an HRI-based cost function calculated from the skeletal data of the worker measured by the 3-D vision sensor. In the worker’s motion prediction, the position data obtained from the vision sensor are used to predict the motion of the worker; moreover, after the completion of each work cycle, the worker’s model is updated using the stored position data. In the trajectory planning step, an optimal trajectory from the robot’s current position to the goal position is calculated using the receding horizon principle of Model Predictive Control (MPC), and the robot motion controller ensures that the robot follows the calculated trajectory. A detailed description of these three parts is given in the subsequent subsections; a schematic of one control cycle is sketched below.
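To make the data flow concrete, the following is a minimal Python sketch of one control cycle of the architecture in Figure 1. All function names here are hypothetical stand-ins for the components described in Sections 3.2, 3.3 and 3.4, not an actual API from this system.

```python
def control_cycle(read_skeleton, determine_delivery, predict_motion,
                  plan_trajectory, track, robot_state):
    """One ~30 ms cycle of the proposed scheme (hypothetical callables)."""
    s_w = read_skeleton()                    # skeletal sample from the 3-D sensor
    p_del = determine_delivery(s_w)          # Section 3.2: T-RRT over the HRI cost
    prediction = predict_motion(s_w)         # Section 3.3: GMR position distributions
    trajectory = plan_trajectory(robot_state, p_del, prediction)  # Section 3.4: MPC
    track(trajectory)                        # motion controller follows the trajectory
```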

3.2. Delivery Position Determination

In our proposed scheme, the delivery position is determined by optimizing an HRI-based cost function. This cost function includes terms related to the safety, visibility and arm comfort of the worker. These terms are calculated from the worker’s skeletal data observed by the 3-D vision sensor in real-time. This concept was first introduced by Sisbot et al. for motion planning of mobile robots [31]. The analytical form of the HRI-based cost function was proposed in our previous study [32]. Here, we provide a brief description of the cost function and solver for determining the optimal delivery position.
Let $p_{\mathrm{del}} \in \mathbb{R}^n$ be the $n$-dimensional delivery position; then the total cost $Cost(p_{\mathrm{del}}, s_w)$ is expressed as:

$$Cost(p_{\mathrm{del}}, s_w) = C_V(p_{\mathrm{del}}, s_w) + C_S(p_{\mathrm{del}}, s_w) + C_A(p_{\mathrm{del}}, s_w)$$

where $s_w$ is the latest sample of the worker’s skeletal data obtained from the sensor. $C_V(p_{\mathrm{del}}, s_w)$ is the visibility cost that keeps the delivery position within the visual range of the worker; it is a function of the difference between the worker’s body orientation and the direction of the delivery position with respect to the worker’s body center. $C_S(p_{\mathrm{del}}, s_w)$ is the safety cost that prevents the robot from colliding with the worker; it is a function of the distance between the worker’s body center and the delivery position. $C_A(p_{\mathrm{del}}, s_w)$ is the arm comfort cost that keeps the delivery position within a suitable distance and orientation for the worker; it is a function of the joint angles of the worker’s arm and additionally penalizes delivery positions where the worker would need to use his/her non-dominant hand.
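As an illustration of how such a cost could be assembled, here is a minimal Python sketch of the three HRI cost terms for a 2-D workspace. The specific functional forms below (a quadratic visibility penalty, a Gaussian safety bump, and a quadratic arm-comfort term over reaching distance) and all parameter values are illustrative assumptions; the analytical forms actually used are given in [32].

```python
import numpy as np

def hri_cost(p_del, body_center, body_orientation,
             w_v=1.0, w_s=5.0, w_a=1.0, safe_dist=0.4, arm_reach=0.6):
    """Sum of visibility, safety and arm-comfort costs (illustrative forms)."""
    d = p_del - body_center
    dist = np.linalg.norm(d)

    # Visibility: penalize angular deviation of the delivery direction
    # from the worker's body orientation.
    angle = np.arctan2(d[1], d[0]) - body_orientation
    angle = np.arctan2(np.sin(angle), np.cos(angle))  # wrap to [-pi, pi]
    C_V = w_v * angle**2

    # Safety: cost grows sharply as the delivery position approaches the body.
    C_S = w_s * np.exp(-(dist / safe_dist)**2)

    # Arm comfort: prefer positions near a comfortable reaching distance.
    C_A = w_a * (dist - arm_reach)**2

    return C_V + C_S + C_A
```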
The optimal delivery position is calculated by minimizing the cost function $Cost(p_{\mathrm{del}}, s_w)$. Since $Cost(p_{\mathrm{del}}, s_w)$ is a non-convex function, we use the Transition-based Rapidly-exploring Random Tree (T-RRT) method [33] to find the globally optimal solution. We apply T-RRT only in the vicinity of the worker so that the optimal solution can be calculated in real-time. The process of determining the optimal delivery position is summarized in Algorithm 1.
Algorithm 1: Determination of Optimal Delivery Position using T-RRT
Input: Worker’s position $p_w$; current sample of the worker’s skeleton $s_w$; sampling range $r_s$; HRI cost function $Cost(p_{\mathrm{del}}, s_w)$
Output: Optimal delivery position $p_{\mathrm{del}}$
1: Set the sampling area $S_{\mathrm{near}}$ using $p_w$ and $r_s$
2: $p_{\mathrm{cur}} \leftarrow Sample(S_{\mathrm{near}})$
3: $Cost_{\mathrm{cur}} \leftarrow Cost(p_{\mathrm{cur}}, s_w)$
4: $Counter \leftarrow 0$
5: while $Counter \le Counter_{\max}$ do
6:  $p_{\mathrm{rand}} \leftarrow Sample(S_{\mathrm{near}})$
7:  $p_{\mathrm{new}} \leftarrow p_{\mathrm{cur}} + \delta(p_{\mathrm{rand}} - p_{\mathrm{cur}})$
8:  $Cost_{\mathrm{new}} \leftarrow Cost(p_{\mathrm{new}}, s_w)$
9:  if $TransitionTest(Cost_{\mathrm{new}}, Cost_{\mathrm{cur}}, d_{\mathrm{new \text{-} cur}})$ then
10:   $p_{\mathrm{cur}} \leftarrow p_{\mathrm{new}}$
11:   $Cost_{\mathrm{cur}} \leftarrow Cost_{\mathrm{new}}$
12:   $Counter \leftarrow 0$
13:  else
14:   $Counter \leftarrow Counter + 1$
15:  end if
16: end while
17: $p_{\mathrm{del}} \leftarrow p_{\mathrm{cur}}$
18: return $p_{\mathrm{del}}$
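A compact Python sketch of Algorithm 1 follows. It assumes a 2-D workspace, a square sampling region of half-width r_s around the worker, and a Boltzmann-style transition test; the parameters (delta, max_counter, temperature) are illustrative, and cost_fn is any callable such as the hri_cost sketch above.

```python
import numpy as np

def t_rrt_delivery_position(p_w, cost_fn, r_s=1.0, delta=0.2,
                            max_counter=200, temperature=0.1, seed=None):
    """Find a low-cost delivery position near the worker (Algorithm 1 sketch)."""
    rng = np.random.default_rng(seed)

    def sample_near():
        # Uniform sample in a square of half-width r_s around the worker.
        return p_w + rng.uniform(-r_s, r_s, size=p_w.shape)

    p_cur = sample_near()
    cost_cur = cost_fn(p_cur)
    counter = 0
    while counter <= max_counter:
        p_rand = sample_near()
        p_new = p_cur + delta * (p_rand - p_cur)   # small step toward the sample
        cost_new = cost_fn(p_new)
        d = np.linalg.norm(p_new - p_cur) + 1e-9
        # Transition test: always accept downhill moves; accept uphill moves
        # with a probability that decays with the cost increase per distance.
        if (cost_new <= cost_cur or
                rng.random() < np.exp(-(cost_new - cost_cur) / (temperature * d))):
            p_cur, cost_cur = p_new, cost_new
            counter = 0
        else:
            counter += 1
    return p_cur
```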
Figure 2 shows an example of the cost map in the workspace around the worker calculated from the HRI constraints. The worker’s shoulder positions (red squares) and the calculated delivery position (green circle) are shown in the figure. We can see that the proposed solver can calculate the delivery position that has the minimum cost in the cost map.

3.3. Worker’s Motion Prediction

The worker’s motion is predicted by using Gaussian Mixture Regression (GMR) proposed in our previous work [34]. GMR models the worker’s past movements and predicts his/her future movements in the workspace. Here, we provide a brief description of the motion prediction using GMR.
Suppose that $p_c = p_w(t) \in \mathbb{R}^n$ is the worker’s current position at time step $t$, $p_h = [\,p_w(t-1)\ \ p_w(t-2)\ \ \cdots\ \ p_w(t-d)\,]^T \in \mathbb{R}^{n \times (d-1)}$ is the position history, and $d$ is the length of the position history. GMR models the conditional probability density function $pr(p_c \mid p_h)$, whose expectation $E[p_c \mid p_h]$ is the worker’s predicted future position and whose variance $V[p_c \mid p_h]$ is the uncertainty of the prediction. The details of the GMR calculation are shown in Appendix A.
The procedure for the long-term motion prediction using GMR is summarized in Algorithm 2. The calculation to predict the worker’s position at the next time step is repeated until the length of the predicted trajectory becomes equal to the maximum prediction length $T_p$. The worker’s predicted motion is expressed as the sequence of Gaussian distributions $\mathcal{N}_w(t_c), \mathcal{N}_w(t_c+1), \ldots, \mathcal{N}_w(t_c+T_p)$ starting from the current time $t_c$. $\mathcal{N}_w(t)$ is the worker’s predicted position distribution at step $t$, expressed as:

$$\mathcal{N}_w(t) = \mathcal{N}(\mu_w(t), \Sigma_w(t))$$

where $\mu_w(t)$ is the mean vector and $\Sigma_w(t)$ is the covariance matrix of the worker’s predicted position at step $t$.
If the worker repeats his/her normal movement, as indicated in the process chart of the assembly process, our prediction system can predict the worker’s movement with sufficient accuracy for the system. In our previous research, the RMSE (Root Mean Square Error) of the predicted worker’s movement was about 0.3 m [34]. The RMSE was calculated by comparing the initial prediction of the worker’s movement with the observed movement when the worker started to move to the next working position.
When the worker moves differently from his/her normal movement, it is not easy to ensure the accuracy of the prediction. However, the proposed system operates safely even in this case, since the variance of the predicted position of the worker is included in the cost function used in the motion planning as shown in our previous study [13].
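For reference, the RMSE quoted above can be computed as in the small sketch below. The helper name and the array layout (one row per time step, one column per coordinate) are assumptions for illustration.

```python
import numpy as np

def trajectory_rmse(predicted, observed):
    """RMSE between predicted and observed worker positions over a trajectory."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    # Mean of the squared Euclidean position errors, then the square root.
    return np.sqrt(np.mean(np.sum((predicted - observed) ** 2, axis=-1)))
```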
Algorithm 2: Worker’s Motion Prediction using GMR
Input: Current time $t_c$; current position $p_c(t_c)$; position history $p_h(t_c)$; max prediction length $T_p$
Output: Predicted trajectory $\mathcal{N}(t_c), \mathcal{N}(t_c+1), \ldots, \mathcal{N}(t_c+T_p)$
1: for $k = 1$ to $T_p$ do
2:  $p_h(t_c+k) = [\,p_c(t_c+k-1)\ \ p_c(t_c+k-2)\ \ \cdots\ \ p_c(t_c+k-d)\,]^T$
3:  $\mu_w(t_c+k) = E[\,p_c(t_c+k) \mid p_h(t_c+k)\,]$
4:  $\Sigma_w(t_c+k) = V[\,p_c(t_c+k) \mid p_h(t_c+k)\,]$
5:  $p_c(t_c+k) = E[\,p_c(t_c+k) \mid p_h(t_c+k)\,]$
6: end for
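The long-term rollout of Algorithm 2 can be sketched in a few lines of Python. Here gmr_condition is assumed to be a function returning the conditional mean and covariance of Appendix A (a sketch of it is given after the appendix), and the predicted mean is fed back as the next pseudo-observation.

```python
import numpy as np

def predict_worker_motion(p_c, history, gmr_condition, T_p, d):
    """Roll the GMR prediction forward T_p steps (Algorithm 2 sketch).

    history: list of past positions, most recent first.
    Returns the sequences of predicted means and covariances.
    """
    buf = [np.asarray(p_c)] + [np.asarray(p) for p in history]
    means, covs = [], []
    for _ in range(T_p):
        p_h = np.concatenate(buf[:d])        # stack the d most recent positions
        mu, Sigma = gmr_condition(p_h)       # E[p_c | p_h], V[p_c | p_h]
        means.append(mu)
        covs.append(Sigma)
        buf.insert(0, mu)                    # feed the mean back as the next sample
    return means, covs
```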

3.4. Trajectory Planning and Control

Figure 3 shows the concept of human-following motion planning using the worker’s predicted motion. The sequence of the worker’s predicted position distributions $\mathcal{N}_w(t_c), \mathcal{N}_w(t_c+1), \ldots, \mathcal{N}_w(t_c+T_p)$ is given to the trajectory planner. The sequence of robot states, that is, the robot trajectory $q(t_c), q(t_c+1), \ldots, q(t_c+T_o)$, is calculated so that the robot’s state at each time step follows the corresponding predicted position of the worker.
To achieve the prediction-based human-following robot motion, we use an MPC-based planner that evaluates a cost function over the robot’s states in a finite future time horizon. This is a well-known strategy that is often used in real-time robot applications such as task-parametrized motion planning [35] and multi-agent motion planning [36].
The cost function used in MPC consists of a terminal cost and a stage cost. The terminal cost evaluates the terminal state of the robot, which is the delivery position in our case. The stage cost evaluates the state of the robot along the whole trajectory from the current configuration to the goal configuration. A distinct feature of our scheme is that the optimal delivery position, found by optimizing the HRI-based cost function, is used to calculate the terminal cost, while the predicted trajectory of the worker is used to calculate the stage cost. This scheme plans a collision-free robot trajectory that follows the moving worker efficiently under the safety cost constraint and the robot’s velocity and acceleration constraints.
The cost function $J$ used for the optimization of the proposed trajectory planner is expressed as:

$$J = \varphi(q(t_c+T_o)) + \int_{t_c}^{t_c+T_o} \left[ L_1(\dot{q}(k)) + L_2(q(k)) + L_3(q(k)) \right] dk$$

where $q = (\theta, \dot{\theta})^T \in \mathbb{R}^{2N_j}$ is the state vector of the manipulator, $\theta = (\theta_1, \theta_2, \ldots, \theta_{N_j})^T \in \mathbb{R}^{N_j}$ is the vector composed of the joint angles of the manipulator, $N_j$ is the degrees of freedom of the manipulator, $T_o$ ($T_o \le T_p$) is the length of the trajectory (in our experiments, we used $T_o = T_p$ as a rule of thumb), and $\varphi(q(t_c+T_o))$ is the terminal cost, which prevents the calculated trajectory of the robot from diverging. It is expressed as:

$$\varphi(q(t_c+T_o)) = \frac{1}{2} \left[ \mathrm{FK}_{N_j}(q(t_c+T_o)) - x_{\mathrm{del}} \right]^T R \left[ \mathrm{FK}_{N_j}(q(t_c+T_o)) - x_{\mathrm{del}} \right]$$

where $\mathrm{FK}_j$ is the forward kinematics of the robot that transforms the robot state $q$ from joint coordinates to position $p_j$ and velocity $v_j$ in the workspace coordinates, $R$ is a diagonal positive definite weighting matrix, and $x_{\mathrm{del}}$ is the terminal state of the robot, which is calculated based on the optimal delivery position and the predicted mean position of the worker. In this study, $x_{\mathrm{del}} = [\,\mu_w(t_c+T_o) + p_{\mathrm{del}},\ 0\,]^T$, where $\mu_w(t_c+T_o)$ is the worker’s predicted position at the end of the trajectory $(t_c+T_o)$, and $p_{\mathrm{del}}$ is the delivery position calculated for the worker’s observed position. We recalculate $p_{\mathrm{del}}$ at each sampling interval and assume that its variation is negligibly small during the sampling interval of the sensing system, which is 30 ms.
$L_1(\dot{q}(k))$, $L_2(q(k))$ and $L_3(q(k))$ are the stage costs, which are expressed as:

$$L_1(\dot{q}(k)) = \frac{1}{2} \sum_{j=1}^{N_j} r_j \left( B_{\mathrm{vel},j}(\dot{\theta}_j(k)) + B_{\mathrm{acc},j}(\ddot{\theta}_j(k)) \right)$$

$$L_2(q(k)) = w \sum_{j=1}^{N_j} \frac{1}{D_M\!\left( \mathrm{FK}_{p,j}(q(k)),\ \mu_w(k),\ \Sigma_w(k) \right)}$$

$$L_3(q(k)) = \frac{1}{2} \sum_{j=1}^{N_j} \left( \mathrm{FK}_{p,j}(q(k)) - (\mu_w(k) + p_{\mathrm{del}}) \right)^T Q \left( \mathrm{FK}_{p,j}(q(k)) - (\mu_w(k) + p_{\mathrm{del}}) \right)$$
$L_1(\dot{q}(k))$ is the stage cost that maintains the robot velocity and acceleration within their maximum limits. $B_{\mathrm{vel},j}(\dot{\theta}_j(k))$ and $B_{\mathrm{acc},j}(\ddot{\theta}_j(k))$ are defined as:

$$B_{\mathrm{vel},j}(\dot{\theta}_j(k)) = \begin{cases} 0 & (\|\dot{\theta}_j\| \le \dot{\theta}_{\max,j}) \\ \left( \|\dot{\theta}_j\| - \dot{\theta}_{\max,j} \right)^2 & (\|\dot{\theta}_j\| > \dot{\theta}_{\max,j}) \end{cases}$$

$$B_{\mathrm{acc},j}(\ddot{\theta}_j(k)) = \begin{cases} 0 & (\|\ddot{\theta}_j\| \le \ddot{\theta}_{\max,j}) \\ \left( \|\ddot{\theta}_j\| - \ddot{\theta}_{\max,j} \right)^2 & (\|\ddot{\theta}_j\| > \ddot{\theta}_{\max,j}) \end{cases}$$

where $\dot{\theta}_{\max,j}$ and $\ddot{\theta}_{\max,j}$ are the maximum velocity and maximum acceleration of the $j$th joint, respectively.
$L_2(q(k))$ is the stage cost that prevents the robot from hitting the worker, and $w$ is its weighting coefficient. $D_M(x, \mu, \Sigma) = \sqrt{(x - \mu)^T \Sigma^{-1} (x - \mu)}$ is the Mahalanobis distance, which accounts for the variance of the probabilistic density distribution. Using the Mahalanobis distance between the predicted worker’s position distribution $\mathcal{N}(\mu_w(k), \Sigma_w(k))$ and the end-effector position $\mathrm{FK}_{p,j}(q(k))$ at step $k$, an artificial potential field is constructed according to the predicted variance; the potential becomes wider in the direction of larger variance in the predicted position.
$L_3(q(k))$ is the stage cost that ensures the robot follows the worker’s motion, and $Q$ is a diagonal positive definite weighting matrix. This cost is responsible for the human-following motion of the robot based on the worker’s predicted trajectory, as shown in Figure 3. For each time step of the predicted position distribution of the worker $\mathcal{N}(\mu_w(k), \Sigma_w(k))$, the desirable state of the robot is calculated so that the robot’s endpoint follows the predicted mean position of the worker $\mu_w(k)$ offset by the calculated delivery position $p_{\mathrm{del}}$. A numerical sketch of these three stage costs is given below.
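The three stage costs can be evaluated numerically as in the Python sketch below. For brevity, the per-joint weights are collapsed into scalars r and w, and the snippet evaluates a single end-effector position rather than summing over all joints; all names are illustrative.

```python
import numpy as np

def stage_costs(theta_dot, theta_ddot, ee_pos, mu_w, Sigma_w, p_del,
                theta_dot_max, theta_ddot_max, r=1.0, w=1.0, Q=None):
    """Stage costs L1 (limits), L2 (collision avoidance), L3 (tracking)."""
    Q = np.eye(ee_pos.size) if Q is None else Q

    # L1: quadratic penalty only where joint velocity/acceleration limits
    # are exceeded (the barrier terms B_vel and B_acc).
    over_vel = np.maximum(np.abs(theta_dot) - theta_dot_max, 0.0)
    over_acc = np.maximum(np.abs(theta_ddot) - theta_ddot_max, 0.0)
    L1 = 0.5 * r * np.sum(over_vel**2 + over_acc**2)

    # L2: repulsive potential from the inverse Mahalanobis distance between
    # the end-effector and the predicted worker distribution; the potential
    # widens in directions of larger prediction variance.
    diff = ee_pos - mu_w
    d_M = np.sqrt(diff @ np.linalg.solve(Sigma_w, diff)) + 1e-9
    L2 = w / d_M

    # L3: quadratic tracking cost pulling the end-effector toward the
    # predicted worker position offset by the delivery position.
    err = ee_pos - (mu_w + p_del)
    L3 = 0.5 * err @ Q @ err

    return L1, L2, L3
```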
Now we can define the optimization problem to be solved by our proposed system:

$$\begin{aligned} \text{minimize} \quad & J \\ \text{subject to} \quad & \dot{q} = f(q, u), \\ & q(t) = q_{\mathrm{cur}}, \end{aligned}$$
where $f$ denotes the nonlinear dynamics of the robot, $u$ is the input vector, and $q(t)$ is the initial state of the trajectory, which corresponds to the current state $q_{\mathrm{cur}}$ of the robot. To solve this optimization problem with the equality constraints described above, we use the calculus of variations. The discretized Euler–Lagrange equations that the optimal solution should satisfy are expressed as:
$$q(k+1) = q(k) + f(q(k), u(k))\,\Delta t_s,$$
$$q(t) = q_{\mathrm{cur}},$$
$$\lambda(k) = \lambda(k+1) + H_q^T(q(k+1), u(k), \lambda(k+1)),$$
$$\lambda(t+T_o) = \varphi_q^T(q(t+T_o)),$$
$$H_u(q(k), u(k), \lambda(k)) = 0,$$
where $H$ is the Hamiltonian, defined as:

$$H(q, u, \lambda) = L_1(\dot{q}(k)) + L_2(q(k)) + L_3(q(k)) + \lambda^T f(q, u).$$
The procedure for calculating the trajectory online is shown in Algorithm 3. After the sequential optimization based on gradient descent, we obtain the optimal trajectory $q(t), q(t+1), \ldots, q(t+T_o)$. For detailed calculations, please refer to our previous study [13].
Algorithm 3: Robot Trajectory Generator
Input: Target delivery position $p_{\mathrm{del}}$; predicted worker’s trajectory $\mathcal{N}(t), \mathcal{N}(t+1), \ldots, \mathcal{N}(t+T_p)$; current state of the robot $q_{\mathrm{cur}}$; max length of the robot trajectory $T_o$
Output: Optimal trajectory $q(t), q(t+1), \ldots, q(t+T_o)$
1: Initialize the set of input vectors $u$
2: $q(t) \leftarrow q_{\mathrm{cur}}$
3: while $\sum_{k=t}^{t+T_o} |H_u(q(k+1), u(k), \lambda(k+1))| \ge \epsilon$ do
4:  for $k = 1$ to $T_o$ do
5:   $q(t+k) \leftarrow q(t+k-1) + f(q(t+k-1), u(t+k-1))\,\Delta t_s$
6:  end for
7:  for $k = T_o$ to $1$ do
8:   $\lambda(t+k-1) \leftarrow \lambda(t+k) + H_q^T(q(t+k), u(t+k), \lambda(t+k+1))$
9:  end for
10:  for $i = 1$ to $T_o$ do
11:   $s_i \leftarrow -H_u^T(q(t+i-1), u(t+i-1), \lambda(t+i))$
12:  end for
13:  $u \leftarrow u + c\,s$
14: end while
15: for $k = 1$ to $T_o$ do
16:  $q(t+k) \leftarrow q(t+k-1) + f(q(t+k-1), u(t+k-1))\,\Delta t_s$
17: end for
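A condensed Python sketch of Algorithm 3 is shown below. The gradients of the Hamiltonian and of the terminal cost are passed in as callables (hypothetical names f, H_q, H_u, phi_q), so the snippet captures the forward state pass, backward costate pass and gradient-descent input update without committing to a specific robot model; the step size and tolerance values are illustrative.

```python
import numpy as np

def plan_trajectory(q_cur, u_init, f, H_q, H_u, phi_q,
                    T_o, dt=0.03, step=0.1, tol=1e-3, max_iters=100):
    """Gradient-based trajectory optimization in the spirit of Algorithm 3.

    f(q, u)        -> state derivative dq/dt
    H_q(q, u, lam) -> gradient of the Hamiltonian w.r.t. the state
    H_u(q, u, lam) -> gradient of the Hamiltonian w.r.t. the input
    phi_q(q)       -> gradient of the terminal cost
    u_init         -> initial input sequence, shape (T_o, n_u)
    """
    u = np.array(u_init, dtype=float)
    q = np.empty((T_o + 1, q_cur.size))
    for _ in range(max_iters):
        # Forward pass: integrate the state under the current inputs (lines 4-6).
        q[0] = q_cur
        for k in range(T_o):
            q[k + 1] = q[k] + f(q[k], u[k]) * dt

        # Backward pass: costate from the terminal condition (lines 7-9).
        lam = np.empty_like(q)
        lam[T_o] = phi_q(q[T_o])
        grad = np.empty_like(u)
        for k in range(T_o - 1, -1, -1):
            grad[k] = H_u(q[k], u[k], lam[k + 1])
            lam[k] = lam[k + 1] + H_q(q[k + 1], u[k], lam[k + 1]) * dt

        # Stationarity check H_u ~ 0 (line 3), then descent step (lines 10-13).
        if np.abs(grad).sum() < tol:
            break
        u -= step * grad
    return q, u
```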

4. Experiment

4.1. Experimental Setup

To evaluate the performance of the proposed scheme in a real-world environment, we used the planar manipulator PaDY proposed in our previous study [7]. PaDY was designed to assist the workers of an automobile factory. A parts tray and a tool holder were attached to the end-effector of PaDY to store the parts and tools required for car assembly tasks. The robot delivers the parts and tools to the worker during the assembly process. For the details of the hardware design of PaDY, please refer to [7].
The proposed scheme was installed on a computer with an Intel Core i7-3740QM processor (quad-core, 2.7 GHz) and 16 GB of memory. All calculations were completed within 30 ms, the sampling interval of the sensing system that tracks the position of the human worker.
We designed an experiment to demonstrate the effectiveness of the worker’s motion prediction in the human-following behavior of our proposed scheme. Figure 4 shows the experimental workspace and Figure 5 shows the top view of the setup for this experiment. In this experiment, the worker needs to perform the following six tasks:
1. Tightening a bolt (Task 1);
2. Attaching three grommets (Task 2);
3. Attaching one grommet (Task 3, Task 4, Task 5 and Task 6).
Each task is performed at a separate working position in the workspace. The experiment is carried out as shown in Figure 6, following the steps below.
1. The experiment begins when the robot starts to approach the worker standing at the working position for Task 1. The worker takes a bolt and the bolt tightening tool from the robot (Figure 6a).
2. The worker performs Task 1 (Figure 6b).
3. The worker moves to the working position for Task 2 and the robot follows him. The worker returns the bolt tightening tool to the tool holder (Figure 6c) and picks up three grommets from the parts tray.
4. The worker performs Task 2 (Figure 6d).
5. The worker moves to the working position for Task 3 and picks up a grommet from the tray (Figure 6e).
6. The worker performs Task 3 (Figure 6f).
7. The worker moves to the working position for Task 4 and picks up a grommet from the tray (Figure 6g).
8. The worker performs Task 4 (Figure 6h).
9. The worker moves to the working position for Task 5 and picks up another grommet from the parts tray (Figure 6i).
10. The worker performs Task 5 (Figure 6j).
11. The worker moves to the working position for Task 6 and picks up the last grommet from the parts tray (Figure 6k).
12. The worker performs Task 6 (Figure 6l), which concludes the experiment.
We performed this experiment with four different participants (A, B, C and D) to evaluate the robustness of the system across different workers. Each participant was asked to perform the complete work cycle ten times. The first trial was performed without using the predicted motion of the worker; in all other trials, the predicted motion was used, and the worker model was sequentially updated after each completed trial.
For more details about the experiment, please see the Supplementary Materials.

4.2. Tracking Performance

Figure 7a shows the estimated delivery position and the robot’s end-effector position for trial 1 of a participant when the robot’s motion is calculated based on the observed position of the worker without using the motion prediction. The black vertical lines show the time when the worker performs each assembly task. Figure 7b shows the estimated delivery position and the robot’s end-effector position for trial 10 of the same participant when the robot’s motion is calculated based on the predicted position of the worker using the proposed scheme.
At the beginning of the experiment, the robot is at its home position and the participant is at the working position for Task 1. The robot starts its human-following motion after arriving at the delivery position for Task 1 (at around 6 s in Figure 7a,b). We can see that the robot keeps following the participant during the whole experiment in both schemes (with and without the use of motion prediction).
The green line in Figure 7a,b shows the tracking error, which is the difference between the delivery position and the end-effector position. We can see that the maximum tracking error is reduced from about 0.5 m to 0.3 m by using the motion prediction.
It is not possible to completely eliminate the tracking error, since the manipulator used in the experiments has a mechanical torque limiter at each joint, and the maximum angular acceleration without activating the torque limiter is 90 deg/s². In both Figure 7a,b, a large tracking error can be observed around 30 s, when the participant makes a large movement that the robot cannot follow because of its acceleration limit.

4.3. Cycle Time

Figure 8 shows the comparison of the cycle time of the four participants in each trial. We define cycle time as the time required for a participant to complete all six tasks of the assembly process. In Figure 8, the cycle time of each trial is normalized by the time of trial 1. Remember that motion prediction was not used in trial 1.
We can see that the cycle time for each participant decreases as the number of trials increases. The cycle time of trial 10 is reduced to 65.6–74.8% of the cycle time of trial 1. This shows that motion prediction can improve the performance of participants and help them complete the assembly process faster.
Note that the proposed system neglects the dynamics of the interaction between the robot and the worker, assuming that the worker is well trained and that his/her behavioral response to the robot’s movements is negligible. If the effects of the robot’s motion on the worker could be modeled, the system could better account for this interaction, and a further improvement in the worker’s time efficiency could be expected.

4.4. HRI-Based Cost

Table 1 shows the average and maximum HRI-based costs for each participant during the human-following motion of the robot. Since the HRI-based cost increases as the safety and comfort of the worker decreases, it is desirable to have a low HRI-based cost in human–robot collaboration.
In Table 1, we see that there are no significant differences in the average and maximum HRI-based costs between trial 1 (when motion prediction is not used) and trial 10 (when motion prediction is used) for all four participants. Therefore, we conclude that the proposed prediction-based human-following control reduces the work cycle time without adversely affecting the safety and comfort of the workers.

5. Conclusions

We proposed a human-following motion planning and control scheme for a collaborative robot which supplies the necessary parts and tools to a worker in an automobile assembly process. The human-following motion of the collaborative robot makes it possible to provide anytime assistance to the worker who is moving around in the workspace.
The proposed scheme calculates an optimal delivery position for the current position of the worker by performing non-convex optimization of an HRI-based cost function. Whenever the worker’s position changes, the new optimal delivery position is calculated. Based on the observed movement of the worker, the motion of the worker is predicted and the robot’s trajectory is updated in real-time using model predictive control to ensure a smooth transition between the previous and new trajectories.
The proposed scheme was applied to a planar collaborative robot called PaDY. Experiments were conducted in a real environment where a worker performed a car assembly process with the assistance of the robot. The results of the experiments confirmed that our proposed scheme provides better assistance to the worker, improves the work efficiency, and ensures the safety and comfort of the worker.
This scheme has been designed for a single worker operating within his/her workspace. It is not designed for the cases when multiple workers are operating in the same workspace, or when the worker moves beyond the workspace. Moreover, we did not consider the dynamics of the interaction between the robot and the human, assuming that the human worker in the factory is well trained and that his/her behavioral response to the robot’s motion is negligible. If the effects of the robot’s motion on the human could be modeled, the system could better account for this interaction, and a further improvement in time efficiency could be expected.
We believe that the human-following approach has tremendous potential in the field of collaborative robotics. The ability to provide anytime assistance is a key feature of our proposed method, and we believe it will be very useful in many other collaborative robot applications.

Supplementary Materials

The supplementary material is available online at https://youtu.be/-jkPoK5URdw (accessed on 4 December 2021), Video: Human-Following Motion Planning and Control Scheme for Collaborative Robots.

Author Contributions

Conceptualization, J.K. and K.K.; methodology, A.K. and K.K.; software, F.I.K. and A.K.; validation, F.I.K. and A.K.; formal analysis, F.I.K. and A.K.; investigation, A.K. and K.K.; resources, J.K. and K.K.; data curation, F.I.K. and A.K.; writing—original draft preparation, F.I.K. and A.K.; writing—review and editing, F.I.K., A.K. and K.K.; visualization, F.I.K.; supervision, K.K.; project administration, K.K.; funding acquisition, J.K. and K.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The dataset generated from the experiments in this study can be found at https://github.com/kf-iqbal-29/Dataset-HumanFollowingCollaborativeRobot.git.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
HRI: Human–Robot Interaction
MPC: Model Predictive Control
ISO: International Organization for Standardization
DOF: Degrees of Freedom
PaDY: In-time Parts and tools Delivery to You robot
T-RRT: Transition-based Rapidly-exploring Random Tree
GMR: Gaussian Mixture Regression
RMSE: Root Mean Square Error

Appendix A. Detailed Calculation of Gaussian Mixture Regression

Suppose that $p_c = p_w(t) \in \mathbb{R}^n$ is the worker’s current position at time step $t$, $p_h = [\,p_w(t-1)\ \ p_w(t-2)\ \ \cdots\ \ p_w(t-d)\,]^T \in \mathbb{R}^{n \times (d-1)}$ is the position history, and $d$ is the order of the autoregressive model. Then the joint distribution $pr$ of $p_c$ and $p_h$ can be expressed as

$$pr(p_h, p_c) = \sum_{m=1}^{M} \pi_m \, \mathcal{N}(p_h, p_c \mid \mu_m, \Sigma_m), \tag{A1}$$

where

$$\mu_m = \begin{bmatrix} \mu_m^{p_h} \\ \mu_m^{p_c} \end{bmatrix}, \tag{A2}$$

$$\Sigma_m = \begin{bmatrix} \Sigma_m^{p_h p_h} & \Sigma_m^{p_h p_c} \\ \Sigma_m^{p_c p_h} & \Sigma_m^{p_c p_c} \end{bmatrix}. \tag{A3}$$

The expectation $E[p_c \mid p_h]$ and the variance $V[p_c \mid p_h]$ of the conditional probability density function $pr(p_c \mid p_h)$ are expressed as

$$E[p_c \mid p_h] = \sum_{m=1}^{M} h_m(p_h)\, \hat{\mu}_m, \tag{A4}$$

$$V[p_c \mid p_h] = \sum_{m=1}^{M} h_m(p_h) \left( \hat{\Sigma}_m + \hat{\mu}_m \hat{\mu}_m^T \right) - E[p_c \mid p_h]\, E[p_c \mid p_h]^T, \tag{A5}$$

where

$$h_m(p_h) = \frac{\pi_m \, \mathcal{N}(p_h \mid \mu_m^{p_h}, \Sigma_m^{p_h p_h})}{\sum_{k=1}^{M} \pi_k \, \mathcal{N}(p_h \mid \mu_k^{p_h}, \Sigma_k^{p_h p_h})}, \tag{A6}$$

$$\hat{\mu}_m = \mu_m^{p_c} + \Sigma_m^{p_c p_h} (\Sigma_m^{p_h p_h})^{-1} (p_h - \mu_m^{p_h}), \tag{A7}$$

$$\hat{\Sigma}_m = \Sigma_m^{p_c p_c} - \Sigma_m^{p_c p_h} (\Sigma_m^{p_h p_h})^{-1} \Sigma_m^{p_h p_c}. \tag{A8}$$

While making the prediction, the position of the worker $p_c(t+1)$ at step $t+1$ is calculated as

$$p_c(t+1) = E[p_c(t+1) \mid p_h(t+1)], \qquad p_h(t+1) = [\,p_c(t)\ \ p_c(t-1)\ \ \cdots\ \ p_c(t+1-d)\,]^T. \tag{A9}$$
This calculation to predict the worker’s position, shown in Equation (A9), is repeated until the length of the predicted trajectory becomes equal to the maximum prediction length $T_p$. The process of predicting the worker’s motion is summarized in Algorithm 2. For the details of the derivation, please see [37]. A numerical sketch of the conditional computation is given below.
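The following is a direct Python transcription of the conditional computation in Equations (A4)–(A8), assuming the mixture parameters (weights, means and covariances over the joint variable, ordered as [p_h; p_c]) have already been fitted, for example by EM; the function name is the hypothetical gmr_condition referenced in the sketch after Algorithm 2.

```python
import numpy as np

def gmr_condition(weights, means, covs, p_h):
    """Conditional mean/covariance of p_c given p_h for a fitted GMM."""
    dh = p_h.size
    h, mu_cond, cov_cond = [], [], []
    for pi_m, mu, S in zip(weights, means, covs):
        mu_h, mu_c = mu[:dh], mu[dh:]
        S_hh, S_hc = S[:dh, :dh], S[:dh, dh:]
        S_ch, S_cc = S[dh:, :dh], S[dh:, dh:]
        K = S_ch @ np.linalg.inv(S_hh)
        mu_cond.append(mu_c + K @ (p_h - mu_h))   # Equation (A7)
        cov_cond.append(S_cc - K @ S_hc)          # Equation (A8)
        # Component responsibility h_m(p_h), Equation (A6), up to normalization.
        diff = p_h - mu_h
        h.append(pi_m * np.exp(-0.5 * diff @ np.linalg.solve(S_hh, diff))
                 / np.sqrt(np.linalg.det(2.0 * np.pi * S_hh)))
    h = np.asarray(h) / np.sum(h)
    mean = sum(hm * m for hm, m in zip(h, mu_cond))           # Equation (A4)
    # Mixture variance, Equation (A5): sum of h_m (Sigma + mu mu^T) minus E E^T.
    cov = sum(hm * (C + np.outer(m, m))
              for hm, m, C in zip(h, mu_cond, cov_cond))
    cov -= np.outer(mean, mean)
    return mean, cov
```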

References

1. Troccaz, J.; Lavallee, S.; Hellion, E. A passive arm with dynamic constraints: A solution to safety problems in medical robotics. In Proceedings of the IEEE Systems Man and Cybernetics Conference—SMC, Le Touquet, France, 17–20 October 1993; Volume 3, pp. 166–171.
2. Colgate, J.; Wannasuphoprasit, W.; Peshkin, M. Cobots: Robots for collaboration with human operators. In Proceedings of the 1996 ASME International Mechanical Engineering Congress and Exposition, Atlanta, GA, USA, 17–22 November 1996; pp. 433–439.
3. Yamada, Y.; Konosu, H.; Morizono, T.; Umetani, Y. Proposal of Skill-Assist: A system of assisting human workers by reflecting their skills in positioning tasks. In Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC’99), Tokyo, Japan, 12–15 October 1999; Volume 4, pp. 11–16.
4. ISO. ISO 10218-1:2011—Robots and Robotic Devices—Safety Requirements for Industrial Robots—Part 1: Robots; ISO: Geneva, Switzerland, 2011.
5. ISO. ISO 10218-2:2011—Robots and Robotic Devices—Safety Requirements for Industrial Robots—Part 2: Robot Systems and Integration; ISO: Geneva, Switzerland, 2011.
6. Perakovic, D.; Periša, M.; Cvitić, I.; Zorić, P. Information and communication technologies for the society 5.0 environment. In Proceedings of the XXXVIII Simpozijum o Novim Tehnologijama u Poštanskom i Telekomunikacionom Saobraćaju—PosTel 2020, Belgrade, Serbia, 1–2 December 2020.
7. Kinugawa, J.; Kawaai, Y.; Sugahara, Y.; Kosuge, K. PaDY: Human-friendly/cooperative working support robot for production site. In Proceedings of the 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, Taiwan, 18–22 October 2010; pp. 5472–5479.
8. Hawkins, K.P.; Vo, N.; Bansal, S.; Bobick, A.F. Probabilistic human action prediction and wait-sensitive planning for responsive human-robot collaboration. In Proceedings of the 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids), Atlanta, GA, USA, 15–17 October 2013; pp. 499–506.
9. Pérez-D’Arpino, C.; Shah, J.A. Fast target prediction of human reaching motion for cooperative human-robot manipulation tasks using time series classification. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 6175–6182.
10. Unhelkar, V.V.; Lasota, P.A.; Tyroller, Q.; Buhai, R.D.; Marceau, L.; Deml, B.; Shah, J.A. Human-Aware Robotic Assistant for Collaborative Assembly: Integrating Human Motion Prediction With Planning in Time. IEEE Robot. Autom. Lett. 2018, 3, 2394–2401.
11. Bonci, A.; Cen Cheng, P.D.; Indri, M.; Nabissi, G.; Sibona, F. Human-Robot Perception in Industrial Environments: A Survey. Sensors 2021, 21, 1571.
12. Tanaka, Y.; Kinugawa, J.; Sugahara, Y.; Kosuge, K. Motion planning with worker’s trajectory prediction for assembly task partner robot. In Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura, Algarve, Portugal, 7–12 October 2012; pp. 1525–1532.
13. Kanazawa, A.; Kinugawa, J.; Kosuge, K. Adaptive Motion Planning for a Collaborative Robot Based on Prediction Uncertainty to Enhance Human Safety and Work Efficiency. IEEE Trans. Robot. 2019, 35, 817–832.
14. Baraglia, J.; Cakmak, M.; Nagai, Y.; Rao, R.; Asada, M. Initiative in robot assistance during collaborative task execution. In Proceedings of the 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, New Zealand, 7–10 March 2016; pp. 67–74.
15. Cakmak, M.; Srinivasa, S.S.; Lee, M.K.; Forlizzi, J.; Kiesler, S. Human preferences for robot-human hand-over configurations. In Proceedings of the 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA, 25–30 September 2011; pp. 1986–1993.
16. Cakmak, M.; Srinivasa, S.S.; Lee, M.K.; Kiesler, S.; Forlizzi, J. Using spatial and temporal contrast for fluent robot-human hand-overs. In Proceedings of the 2011 6th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Lausanne, Switzerland, 6–9 March 2011; pp. 489–496.
17. Mainprice, J.; Akin Sisbot, E.; Jaillet, L.; Cortés, J.; Alami, R.; Siméon, T. Planning human-aware motions using a sampling-based costmap planner. In Proceedings of the 2011 IEEE International Conference on Robotics and Automation, Shanghai, China, 9–13 May 2011; pp. 5012–5017.
18. Aleotti, J.; Micelli, V.; Caselli, S. An Affordance Sensitive System for Robot to Human Object Handover. Int. J. Soc. Robot. 2014, 6, 653–666.
19. Sisbot, E.A.; Alami, R. A Human-Aware Manipulation Planner. IEEE Trans. Robot. 2012, 28, 1045–1057.
20. Nagumo, Y.; Ohya, A. Human following behavior of an autonomous mobile robot using light-emitting device. In Proceedings of the 10th IEEE International Workshop on Robot and Human Interactive Communication (ROMAN 2001), Paris, France, 18–21 September 2001; pp. 225–230.
21. Hirai, N.; Mizoguchi, H. Visual tracking of human back and shoulder for person following robot. In Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), Kobe, Japan, 20–24 July 2003; Volume 1, pp. 527–532.
22. Yoshimi, T.; Nishiyama, M.; Sonoura, T.; Nakamoto, H.; Tokura, S.; Sato, H.; Ozaki, F.; Matsuhira, N.; Mizoguchi, H. Development of a Person Following Robot with Vision Based Target Detection. In Proceedings of the 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, 9–13 October 2006; pp. 5286–5291.
23. Morioka, K.; Oinaga, Y.; Nakamura, Y. Control of human-following robot based on cooperative positioning with an intelligent space. Electron. Commun. Jpn. 2012, 95, 20–30.
24. Suda, R.; Kosuge, K. Handling of object by mobile robot helper in cooperation with a human using visual information and force information. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, Lausanne, Switzerland, 30 September–4 October 2002; Volume 2, pp. 1102–1107.
25. Mainprice, J.; Berenson, D. Human-robot collaborative manipulation planning using early prediction of human motion. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, Tokyo, Japan, 3–7 November 2013; pp. 299–306.
26. Fridovich-Keil, D.; Bajcsy, A.; Fisac, J.F.; Herbert, S.L.; Wang, S.; Dragan, A.D.; Tomlin, C.J. Confidence-aware motion prediction for real-time collision avoidance. Int. J. Robot. Res. 2020, 39, 250–265.
27. Park, J.S.; Park, C.; Manocha, D. I-Planner: Intention-aware motion planning using learning-based human motion prediction. Int. J. Robot. Res. 2019, 38, 23–39.
28. Maeda, G.; Ewerton, M.; Neumann, G.; Lioutikov, R.; Peters, J. Phase estimation for fast action recognition and trajectory generation in human-robot collaboration. Int. J. Robot. Res. 2017, 36, 1579–1594.
29. Liu, H.; Wang, L. Human motion prediction for human-robot collaboration. J. Manuf. Syst. 2017, 44, 287–294.
30. Cheng, Y.; Sun, L.; Liu, C.; Tomizuka, M. Towards Efficient Human-Robot Collaboration With Robust Plan Recognition and Trajectory Prediction. IEEE Robot. Autom. Lett. 2020, 5, 2602–2609.
31. Sisbot, E.A.; Marin-Urias, L.F.; Alami, R.; Simeon, T. A Human Aware Mobile Robot Motion Planner. IEEE Trans. Robot. 2007, 23, 874–883.
32. Iqbal, K.F.; Kanazawa, A.; Ottaviani, S.R.; Kinugawa, J.; Kosuge, K. A real-time motion planning scheme for collaborative robots using HRI-based cost function. Int. J. Mechatr. Autom. 2021, 8, 42–52.
33. Jaillet, L.; Cortes, J.; Simeon, T. Transition-based RRT for path planning in continuous cost spaces. In Proceedings of the 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, Nice, France, 22–26 September 2008; pp. 2145–2150.
34. Kinugawa, J.; Kanazawa, A.; Arai, S.; Kosuge, K. Adaptive Task Scheduling for an Assembly Task Coworker Robot Based on Incremental Learning of Human’s Motion Patterns. IEEE Robot. Autom. Lett. 2017, 2, 856–863.
35. Calinon, S.; Bruno, D.; Caldwell, D.G. A task-parameterized probabilistic model with minimal intervention control. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation (ICRA), Hong Kong, China, 31 May–7 June 2014; pp. 3339–3344.
36. Du Toit, N.E.; Burdick, J.W. Robot Motion Planning in Dynamic, Uncertain Environments. IEEE Trans. Robot. 2012, 28, 101–115.
37. Sung, H.G. Gaussian Mixture Regression and Classification. Ph.D. Thesis, Rice University, Houston, TX, USA, 2004.
Figure 1. System architecture.
Figure 2. Example of the cost map calculated from the HRI constraints and its optimal delivery position.
Figure 3. Concept of human-following motion planning based on the predicted trajectory of the worker.
Figure 4. Experimental workspace.
Figure 5. Top view of the experimental setup.
Figure 6. Experiment showing a complete work cycle where six tasks are performed. (a) A bolt and the tool are picked up; (b) Task 1 is performed; (c) The tool is returned and 3 grommets are picked up; (d) Task 2 is performed; (e) A grommet is picked up; (f) Task 3 is performed; (g) A grommet is picked up; (h) Task 4 is performed; (i) A grommet is picked up; (j) Task 5 is performed; (k) A grommet is picked up; (l) Task 6 is performed.
Figure 7. Tracking performance. (a) When motion prediction is not used; (b) When motion prediction is used.
Figure 8. Comparison of cycle time.
Table 1. Summary of HRI-based costs during the human-following motion for each worker.

Worker | Average Cost (without Prediction) | Average Cost (with Prediction) | Max Cost (without Prediction) | Max Cost (with Prediction)
Worker A | 8.99 | 11.79 | 36.34 | 35.82
Worker B | 12.73 | 9.90 | 38.17 | 34.44
Worker C | 18.35 | 13.56 | 39.65 | 31.33
Worker D | 16.30 | 17.50 | 31.47 | 31.26