Communication

Digital Twin-Driven Human Robot Collaboration Using a Digital Human

1 National Institute of Advanced Industrial Science and Technology, Koto-ku, Tokyo 135-0064, Japan
2 Toyota Motor Corporation, Toyota 471-8573, Japan
* Author to whom correspondence should be addressed.
Current address: 2-3-26 Aomi, Koto-ku, Tokyo 135-0064, Japan.
Sensors 2021, 21(24), 8266; https://doi.org/10.3390/s21248266
Submission received: 29 September 2021 / Revised: 24 November 2021 / Accepted: 3 December 2021 / Published: 10 December 2021
(This article belongs to the Special Issue Digital Twins, Sensing Technologies and Automation in Industry 4.0)

Abstract

Advances are being made in applying digital twin (DT) and human–robot collaboration (HRC) technologies to industrial fields for safe, effective, and flexible manufacturing. Using a DT for human modeling and simulation enables ergonomic assessment during work. In this study, a DT-driven HRC system was developed that measures the motions of a worker and simulates the working progress and physical load based on digital human (DH) technology. The proposed system contains virtual robot, DH, and production management modules that are integrated seamlessly via wireless communication. The virtual robot module contains the robot operating system and enables real-time control of the robot based on simulations in a virtual environment. The DH module measures and simulates the worker’s motion, behavior, and physical load. The production management module performs dynamic scheduling based on the predicted working progress under ergonomic constraints. The proposed system was applied to a parts-picking scenario, and its effectiveness was evaluated in terms of work monitoring, progress prediction, dynamic scheduling, and ergonomic assessment. This study demonstrates a proof-of-concept for introducing DH technology into DT-driven HRC for human-centered production systems.

1. Introduction

The advent of Industry 4.0 [1] has led to the introduction of the digital twin (DT) [2]: a physical product in real space and a corresponding virtual product in virtual space that are connected by data and information. DTs have been utilized in various phases of the production life cycle: design, manufacturing, service, and retirement [2]. A DT corresponds one-to-one with its physical twin, it can simulate the physical twin’s behavior in virtual space exactly, it responds to the physical twin with relatively low latency, and changes to either twin affect the other. Thus, a DT offers (1) individualized, (2) high-fidelity, (3) real-time, and (4) controllable performance [3]. It is important that the physical object and the virtual twin are bidirectionally connected, so that not only is the physical object changed based on the virtual twin, but changes in the physical object are also reflected in the virtual twin [4].
DTs are commonly applied in production management [5,6,7]. For example, Oyekan et al. [5] proposed a DT-based fan-blade maintenance system in which the detailed shape of the fan-blade surface is reflected in the DT based on real-time surface measurement by an RGB-depth (RGB-D) camera, and a grinding robot is controlled through process simulation with the DT.
Human–robot collaboration (HRC) has been introduced to realize safe, rapid, and flexible manufacturing [8,9], and DT-driven HRC has recently emerged as a more reliable and effective approach [10,11]. In a DT-driven HRC system, physical objects such as production facilities and individual robots are reflected in the virtual space as exactly as possible. Real-time analysis and simulation can then be conducted with the DT for safety monitoring and production management. For example, Dröder et al. [12] proposed a DT-driven HRC system for dynamically planning a safe path for a robot, where the robot and worker are reflected in virtual space by using an RGB-D camera and machine learning.
However, previous research has not sufficiently considered human factors in DTs and DT-driven HRC [3]. Considering human movements and ergonomic factors can help the DT monitor the working progress and manage the health and physical loads of workers. Ergonomic indices that are easy to calculate and that represent the trunk load, such as the rapid upper limb assessment (RULA) [14] and rapid entire body assessment (REBA) [15], are typically used in industrial fields [13]. Such indices can be calculated from the joint angles and positions, body weight, and external forces (e.g., the weight of the held object). However, their applicability decreases as the complexity of the work operation increases. In contrast, the digital human (DH) concept is applicable to detailed analysis of any working posture because the kinematics and dynamics in virtual space can be used to estimate physical loads such as joint torques [16]. Thus, introducing the DH into DT-driven HRC can potentially realize human-centered production management in which an individual worker’s movements, capacity, and physical load are reflected and simulated by the DH in virtual space. Although this approach has great potential to be used in conjunction with various motion analysis and ergonomic evaluation technologies, to the best of our knowledge, a DH covering whole-body motion and worker dynamics has not yet been integrated into DT-driven HRC.
In this study, we present a proof-of-concept of a DH-integrated, DT-driven HRC system for improving production efficiency and ergonomic evaluation. In our integrated approach, the DH is used to measure worker motions and to simulate the working progress and physical load. As shown in Figure 1, the robot and worker movements are reflected by the DT. The proposed system consists of DH, virtual robot, and production management modules. The DH module enables real-time measurement of the full-body motion of the worker, motion analysis, and ergonomic assessment. The virtual robot module contains the robot operating system (ROS), controls the robot, and acquires its sensor data; the DT therefore includes both the worker and the robot. The production management module monitors and predicts the working progress for dynamic scheduling under ergonomic constraints and provides real-time support to the worker. As a demonstration, the proposed system was applied to a parts-picking scenario in a factory environment, and the performance of the prediction, dynamic scheduling, and ergonomic constraints was validated with test data imitating various types of workers.
The key concepts of the DT include (1) individualized, (2) high-fidelity, (3) real-time, and (4) controllable performance [3]. According to these concepts, the proposed system is organized as follows:
  • The working capability and actual movements in real space are considered by measuring individual robot and worker behaviors.
  • The robot behavior and full-body posture are reflected in the virtual space. The DH module enables further analysis for the worker, such as picking detection and ergonomic evaluation.
  • The virtual and real spaces are connected via real-time measurements, wireless communication, and feedback systems.
  • The changes in the robot and worker in real space are immediately reflected in the virtual twin. The dynamic scheduling results are immediately presented to the robot and worker to improve production efficiency and reduce the physical load of the worker.
This study makes the following contributions:
  • DH integration: A DH is applied to DT-driven HRC to realize real-time monitoring, prediction, and ergonomic assessment based on the full-body dynamics of workers.
  • Demonstration for a picking scenario: The proposed system was applied to a picking scenario to demonstrate the advantages of DT-driven HRC. The worker and robot picked parts as instructed by the production management module while moving over a wide area.

2. Related Works

2.1. Human Representation in Research on Human–Robot Collaboration

Research on HRC has used various formats to represent workers. The worker representation is especially important for safety monitoring [8]. Michalos et al. [17] proposed an augmented-reality system for displaying a dynamic safe zone to the worker, where the safe zone changes according to a DT simulation of the robot. However, their DT does not include the worker’s movement, which makes planning safe movements for the robot difficult. Several other studies have focused on capturing worker movements with an RGB-D camera. Bonci et al. [18] realized collision avoidance by generating an occupancy map from an RGB-D camera mounted on the manufacturing robot; they utilized the acquired point cloud without semantic labeling of the worker. Nikolakis et al. [19] calculated the nearest-neighbor distance between the simulated robot arm and the point cloud of the human user extracted from an environment-fixed RGB-D camera. Dröder et al. [12] constructed a DT by clustering the point cloud acquired from an RGB-D camera and simulated a collision-free trajectory for the robot. In these studies, the direct use of a point cloud led to safe robot movement even when faulty skeletal recognition occurred. Cheng et al. [20] realized safe motion planning by using a full-body skeleton to predict the trajectory of workers; they further estimated the current motion class and target object for detailed recognition of worker behavior. Recently, Kanazawa et al. [21] proposed a motion prediction-based algorithm for safe robot motion planning, where the worker’s body center and hand positions are predicted by using a Gaussian mixture model. Finally, Malik et al. [10] utilized a DH model for several ergonomic assessments during work. Thus, workers have been represented as point clouds, joint positions, full-body skeletons acquired by cameras or inertial measurement units (IMUs), current motion classes, and DHs. According to Fera et al. [22], because the DH contains both a skeleton and volumetric body segments, it is potentially applicable to safety monitoring, motion analysis, and ergonomic assessment. Therefore, the proposed system employs a DH for human representation, and the full-body motion of the worker is assigned to the DH in real time. As shown in Figure 2, using the DH as a common representation of the worker enables the system to connect with various human measurement and analysis modules for a given production scenario.

2.2. Improvement in Production Efficiency by Human–Robot Collaboration

HRC and DTs have been applied to improving the production efficiency of flexible manufacturing [10]. Monitoring workers enables dynamic scheduling based on the actual progress [22]. For example, Cheng et al. [20] proposed an HRC system that performs dynamic scheduling by using an RGB-D sensor to recognize the current actions of workers. They confirmed that dynamic scheduling led to improved time efficiency. Lv et al. [23] proposed a comprehensive DT-based HRC assembly system where physical entities such as the robot, facilities, and worker movements are mapped to virtual space. The safety module and dynamic task allocation were performed according to dynamic changes in the DT of the assembly environment. Zhang et al. [24] proposed a dynamic scheduling algorithm based on worker capability, where they employed the hand speed and its variance, assembly accuracy, and variance of the hand trajectory as the human performance index. This enables dynamic scheduling adapted to the performance of an individual worker. Bilberg and Malik [25] used the measured cycle time to evaluate the skill index of a worker, and the assembly tasks were allocated to balance the workloads of the robot and worker. Their consideration of the variability in worker capability effectively improved the progress prediction.
In this study, dynamic scheduling was performed to reduce the delay in progress between the robot and worker while allocating or deallocating tasks with low or high physical loads, respectively. This is realized by estimating the current working progress, future working progress, and physical load during working. The current working progress is recognized by the motion analysis of the worker. The future progress is predicted on the basis of the actual cycle time of the worker. The physical load is estimated by calculating the kinematics and dynamics of the DH.

2.3. Ergonomic Assessment in Industrial Fields

Ergonomic assessment is an important issue for industrial workers [1,5]. Although both physical and cognitive loads must be assessed [26], this study focused on the physical aspects (i.e., ergonomics). Oyekan et al. [27] revealed human reactions to robot motions by collecting human characteristics in a factory DT integrated with DH simulation. Most studies on ergonomic assessment in industry have utilized quick ergonomic indices such as RULA [14], REBA [15], the Ovako working posture analysis system (OWAS) [28], the National Institute for Occupational Safety and Health (NIOSH) lifting equation [29], and the occupational repetitive actions (OCRA) checklist [30]. Maderna et al. [31] proposed a kitting scheduling algorithm that reduces both the cycle time and the physical load of the worker, where the physical load is scored with RULA and REBA applied to the full-body skeleton acquired by an RGB-D camera. In [32], the time-integrated REBA value is employed as the physical load index, and the task planner proposes task sequences that remove postures scored as burdensome by REBA. Greco et al. [33] constructed a DT of the factory layout and worker movement and applied the OWAS, NIOSH, and OCRA checklists to evaluate the layout in terms of ergonomics. Although limited in their applicability to a given task and posture, these indices allow for quick ergonomic assessment. Meanwhile, Baskaran et al. [34] indicated the need to integrate DT with DH technology. Reflecting the worker as a DH enables ergonomic assessment that accounts for various physical properties (e.g., body height, weight, and range of motion), motion simulation in the design phase, and motion analysis in the manufacturing phase. For example, Malik et al. [10] utilized a DH simulation for ergonomic assessment of the workspace layout based on visual and grasp analyses. Furthermore, Menychtas et al. [35] demonstrated the effectiveness of joint torque analysis for evaluating the physical load on industrial workers.
In this study, a DH was employed for a consistent and comprehensive representation of the worker in the DT-driven HRC framework. This enabled the ergonomic assessment of not only the REBA and OWAS but also joint torques. Thus, in contrast to previous studies, dynamic scheduling using joint torques was realized in this study, where the joint torque was estimated according to the kinematic and dynamic analyses of the DH.
As described above, individual studies have improved human representation, production efficiency using DTs and HRC, and the accuracy of ergonomic assessment. However, to the best of the authors’ knowledge, no DT-driven HRC system has been reported that enhances production efficiency based on a detailed physical load assessment, such as DH-based joint torques. Such integration research and a proof-of-concept with a realistic production scenario are essential for the development of DT research involving humans.

3. Method

3.1. Overview

Figure 3 shows the proposed DT-based HRC framework. The DT consists of the DH and virtual robots generated using real-time measurements of the worker and robot. The sensor submodule and robot-mounted sensors are the interfaces used to map the physical worker and robot to virtual space. In contrast, the feedback submodule and robot controller are the interfaces used to change the behaviors of the worker and robot based on commands from the production management module. The communication module realizes bidirectional wireless communication among the DH, virtual robot, and production management modules. Finally, the production management module monitors and predicts the progress for dynamic scheduling and ergonomic assessment based on the DT. The production management, DH, and virtual robot modules are independent and communicate via a wireless network; this lets the integrated system adapt to various production scenarios, in which the workplace layout, manufacturing robot and control modules, worker movements, and production management strategy change frequently, by replacing or customizing each module. The details are described in the following subsections.

3.2. Digital Twin of the Worker

As shown in Figure 3, the DH module includes submodules for sensing, motion estimation, motion analysis, and ergonomic assessment. Worker movements are captured via the sensing submodule, and the motion estimation submodule reconstructs the full-body posture by using the DH model. The DH model is further used for motion analysis such as picking detection and for ergonomic assessment such as joint torque analysis.

3.2.1. Digital Human Model

As shown in Figure 4, the DH model consists of a link structure with 48 degrees of freedom and a skin surface that deforms with changes in the joint angles [36,37]. The 93 body dimensions and segment mass properties of the DH model are estimated to reflect the body height and weight of the worker based on principal component analysis of information taken from the body dimension database for Japanese people [38]. The details are described in [36,37].
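The dimension estimation can be sketched as follows. This is a minimal, hypothetical Python illustration of the PCA-based idea, not the actual implementation of [36,37]: a PCA model is fitted to a body-dimension database, a linear map from (height, weight) to the PCA scores is learned, and the 93 dimensions of a new worker are reconstructed. The database here is random placeholder data.
```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Placeholder database: rows are 93 body dimensions per subject.
# The real system uses the AIST Japanese body dimension database [38].
rng = np.random.default_rng(1)
dims = rng.normal(size=(200, 93))   # body dimensions (placeholder)
hw = rng.normal(size=(200, 2))      # (height, weight) per subject

pca = PCA(n_components=5).fit(dims)                     # principal components
reg = LinearRegression().fit(hw, pca.transform(dims))   # (height, weight) -> scores

worker_hw = np.array([[1.72, 65.0]])                    # target worker: 1.72 m, 65 kg
est_dims = pca.inverse_transform(reg.predict(worker_hw))  # estimated 93 dimensions
```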

3.2.2. Sensing Submodule

The types and combination of sensing submodules are determined by the production scenario: the target measurement space; environmental conditions such as lighting, occlusions, and magnetic disturbance; and the acceptability of wearable sensors. Previous studies utilized an RGB-D camera to acquire point cloud and body skeletal data. However, such a sensor is limited to situations where the worker moves within a limited area, i.e., within the field of view. In this study, marker-based optical motion capture (OptiTrack [39]) was employed as the sensing submodule, where 3D marker positions are captured by multiple cameras arranged in the environment.

3.2.3. Motion Estimation

The full-body motion of the worker is estimated on the basis of the sensor data from the sensing submodule. An optimization-based motion estimation algorithm [40] is used, where the posture of the DH is estimated to fit to the sensor data under range-of-motion constraints. As shown in Figure 5, the markers defined on the skin surface are fitted to the 3D marker positions tracked via the optical motion capture by minimizing an objective function:
$$F(p_{pelvis}, R) = w_M F_M(p_{pelvis}, R) + w_R F_R(R), \qquad (1)$$
where $p_{pelvis}$ and $R = [\theta_0, \ldots, \theta_i, \ldots, \theta_N]$ are the design variables: $p_{pelvis}$ is the 3D pelvis position, and $R$ contains the roll, pitch, and yaw angles of the $i$th joint, $\theta_i = [\theta_{i,r}, \theta_{i,p}, \theta_{i,y}]$. $N$ is the number of joints, and $w_M$ and $w_R$ are weight parameters. As shown in Figure 5, $F_M(p_{pelvis}, R)$ is the sum of the squared Euclidean distances between the tracked 3D marker positions $q_O^j$ and the corresponding predefined points on the skin surface $q_{DH}^j(p_{pelvis}, R)$:
$$F_M(p_{pelvis}, R) = \sum_{j \in [0, M]} \left\| q_O^j - q_{DH}^j(p_{pelvis}, R) \right\|^2, \qquad (2)$$
where $M$ is the number of tracked markers, and $q_{DH}^j(p_{pelvis}, R)$ is obtained by assigning $p_{pelvis}$ and $R$ to the DH, i.e., by forward kinematics. $F_R(R)$ is a penalty function that enforces the joint range of motion:
$$F_R(R) = \sum_{i \in [0, N]} \sum_{k \in \{r, p, y\}} \begin{cases} \rho\left((\theta_{i,k} - \varphi_{i,k}^{min})^2\right) & (\theta_{i,k} < \varphi_{i,k}^{min}) \\ \rho\left((\theta_{i,k} - \varphi_{i,k}^{max})^2\right) & (\theta_{i,k} > \varphi_{i,k}^{max}) \\ 0 & (\text{otherwise}) \end{cases} \qquad (3)$$
where $\varphi_{i,k}^{min}$ and $\varphi_{i,k}^{max}$ are the minimum and maximum angles (i.e., the range of motion) of the $i$th joint, and $\rho(x) = \log(1 + x)$ is the Cauchy loss function.
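To make Equations (1)–(3) concrete, the following is a minimal SciPy sketch of the optimization, assuming a placeholder forward_kinematics function that stands in for posing the 48-DoF DH model and returning its skin-surface marker points.
```python
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(p_pelvis, theta, n_markers):
    """Placeholder for q_DH(p_pelvis, R): the real system poses the
    48-DoF DH model and returns the predefined skin-surface points."""
    offsets = np.linspace(0.1, 1.0, n_markers)[:, None] * np.ones(3)
    return p_pelvis + offsets + 0.01 * np.sum(theta)

def cauchy(x):
    return np.log1p(x)  # rho(x) = log(1 + x)

def objective(x, q_obs, rom_min, rom_max, w_m=1.0, w_r=0.1):
    p_pelvis, theta = x[:3], x[3:]
    q_pred = forward_kinematics(p_pelvis, theta, len(q_obs))
    f_m = np.sum(np.linalg.norm(q_obs - q_pred, axis=1) ** 2)    # Eq. (2)
    below, above = theta < rom_min, theta > rom_max
    f_r = (np.sum(cauchy((theta[below] - rom_min[below]) ** 2))  # Eq. (3)
           + np.sum(cauchy((theta[above] - rom_max[above]) ** 2)))
    return w_m * f_m + w_r * f_r                                 # Eq. (1)

# Toy usage: 5 tracked markers, 4 joint angles, symmetric range of motion.
rng = np.random.default_rng(0)
q_obs = rng.uniform(size=(5, 3))
rom_min, rom_max = -np.ones(4), np.ones(4)
res = minimize(objective, np.zeros(7), args=(q_obs, rom_min, rom_max))
```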

3.2.4. Motion Analysis

The reconstructed full-body motion of the DH can be utilized for several motion analyses; in this study, it was applied to picking detection. As shown in Figure 6, the DT includes details of the picking environment, such as the racks and parts boxes. Picking can thus be detected as a collision between the hand position of the DH and a box space extending $H_p = 350$ mm above the parts box (Figure 6a). As shown in Figure 6b, when the worker picks a part, they store it on a trolley. The part release motion is recognized by detecting the collision between the DH and a box space representing the storage, whose height, width, and depth were set to $H_r = 300$ mm, $W_r = 700$ mm, and $D_r = 600$ mm, respectively. The detection of part picking and release is used for progress monitoring and prediction.
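As a sketch of this collision test, picking detection reduces to an axis-aligned box check between the DH hand position and the detection volume above the parts box; the 350 mm height increment follows the text, while the box footprint below is an assumed placeholder.
```python
import numpy as np

def in_box(point, box_min, box_max):
    """Axis-aligned box test used for both pick and release detection."""
    return bool(np.all(point >= box_min) and np.all(point <= box_max))

# Hypothetical collider for one parts box (units: mm). The detection
# volume extends H_p = 350 mm above the box top, as in Figure 6a;
# the 300 x 200 footprint is an assumed placeholder.
box_top = np.array([800.0, 200.0, 900.0])
collider_min = box_top - np.array([150.0, 100.0, 0.0])
collider_max = box_top + np.array([150.0, 100.0, 350.0])

hand = np.array([820.0, 180.0, 1000.0])  # current DH hand position
if in_box(hand, collider_min, collider_max):
    print("picking detected")
```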

3.2.5. Ergonomic Assessment

Once the full-body motion of the worker is reconstructed, several ergonomic assessment methods can be applied. This study focused on joint torque analysis, which quantifies the physical load while accounting for the full-body posture, body shape and weight, and the external forces acting on the worker. As shown in Figure 7, the joint torque analysis is based on inverse dynamics implemented in the in-house DH software platform DhaibaWorks [41]. The joint torques are obtained by an optimization method [41] under the assumption that the gravity, inertial, and contact forces acting on each body segment $G$ are in equilibrium:
$$m_G g + f_C^G + f_K^G + f_H^G = 0, \qquad (4)$$
where $m_G$ and $g$ are the mass of body segment $G$ and the gravitational acceleration vector, respectively, and $f_C^G$, $f_K^G$, and $f_H^G$ represent the contact forces on segment $G$, the joint reaction forces between segment $G$ and its parent segment, and the joint reaction forces between segment $G$ and its child segment, respectively. The moments are also assumed to be in equilibrium:
$$r_{cog}^G \times m_G g + t_C^G + t_K^G + t_H^G = 0, \qquad (5)$$
where $t_C^G$, $t_K^G$, and $t_H^G$ represent the torques on segment $G$ caused by the forces $f_C^G$, $f_K^G$, and $f_H^G$, respectively, and $r_{cog}^G$ is the position vector of the center of gravity of segment $G$. The forces $f_K^G$ and torques $t_K^G$ are expressed as
$$f_K^G = \sum_{j=0,1,2} x_{K,j}^G e_j, \qquad (6)$$
$$t_K^G = \sum_{j=0,1,2} x_{K,j+3}^G e_j, \qquad (7)$$
where $x \in X$ are the optimization variables describing the forces and torques, and $e_0 = [1, 0, 0]$, $e_1 = [0, 1, 0]$, and $e_2 = [0, 0, 1]$ are unit vectors. Finally, all forces and torques are estimated by minimizing the sum of the squares of the contact forces and joint torques:
$$F_t(X) = \sum_{x \in X} (\omega_x x)^2, \qquad (8)$$
where $\omega_x$ is the weight coefficient for $x$ and is set to a constant value ($\omega_x = 1$).
Figure 7 shows the external forces used for the joint torque estimation. In this study, the foot reaction forces $f_C^{rFoot}$ and $f_C^{lFoot}$ and the hand force $f_C^{lHand} = w(p_k) g$ were applied to the right foot ($G = rFoot$), left foot ($G = lFoot$), and left hand ($G = lHand$) segments, where $w(p_k)$ is the weight of part $p_k$. As shown in Figure 7, because $f_C^G$ for the foot segments $G \in \{rFoot, lFoot\}$ is difficult to measure in a factory setting, these forces are defined according to the friction cone:
$$f_C^G = \sum_{i \in [0, N_e]} x_{C,i}^G c_i^G, \qquad (9)$$
$$t_C^G = \sum_{j=0,1,2} x_{C,N_e+j}^G e_j, \qquad (10)$$
where $c_i^G$ is the unit vector dividing the conical surface of the friction cone into $N_e$ faces. The friction cone is constructed so that its apex and central axis correspond to the contact position and contact normal, respectively. All joint torques, joint reaction forces, and contact forces are then estimated by minimizing Equation (8) under the constraints given by Equations (4), (5), (9), and (10). The joint torques are calculated for the moment when the worker picks a part with the hand force $f_C^{lHand}$. In this study, the torso and left shoulder joint torques $t_{torso}(p_k)$ and $t_{ls}(p_k)$ when picking part $p_k$ were calculated and used as the ergonomic constraint for the dynamic scheduling.
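Because the weights are all set to $\omega_x = 1$ and the constraints (4), (5), (9), and (10) are linear in the unknowns, Equation (8) amounts to finding the minimum-norm solution of a linear equilibrium system. The sketch below illustrates this with a random placeholder system; the actual constraint matrices are assembled from the DH segments in DhaibaWorks.
```python
import numpy as np

# Stack the equilibrium constraints as A x = b, where x collects the
# unknown force/torque coefficients and b holds the gravity terms m_G g.
# A and b are random placeholders, not the real DH matrices.
rng = np.random.default_rng(0)
n_constraints, n_vars = 12, 30
A = rng.normal(size=(n_constraints, n_vars))
b = rng.normal(size=n_constraints)

# With w_x = 1, minimizing sum(x^2) subject to A x = b is the
# minimum-norm solution, given by the Moore-Penrose pseudoinverse.
x = np.linalg.pinv(A) @ b
print(np.allclose(A @ x, b))  # True: equilibrium is satisfied
```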

3.3. Digital Twin of the Robot

Picking by the robot is controlled via ROS. As shown in Figure 8, the current position and orientation of the robot are estimated by integrating the rotation velocities of the wheels (i.e., odometry). Magnetic tape is laid on the floor, and its positions are represented in the ROS virtual environment. The accumulated localization error is corrected by relocating the virtual robot to the tape position in virtual space whenever the actual magnetic tape is detected. The posture of the robot arm is reconstructed by solving the forward kinematics from the joint angles of the robot arm. Given the ID of a target part, the picking robot moves, stops in front of the parts box, and tries to pick the corresponding part from the box. The result of the part picking (i.e., success or failure) is then recognized: if the magnetic sensor detects a magnetic disturbance after a part is picked, the picking is considered successful; otherwise, the picking has failed, and the robot tries again until it succeeds.
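The localization scheme can be sketched as follows, assuming differential-drive kinematics (an assumption; the robot model is not specified in the text): wheel velocities are integrated into a planar pose, and the pose is snapped to the known tape position whenever the tape sensor fires.
```python
import math

def integrate_odometry(x, y, yaw, v_left, v_right, wheel_base, dt):
    """Integrate wheel velocities into the planar pose (odometry)."""
    v = 0.5 * (v_left + v_right)             # forward speed
    omega = (v_right - v_left) / wheel_base  # yaw rate
    return (x + v * math.cos(yaw) * dt,
            y + v * math.sin(yaw) * dt,
            yaw + omega * dt)

# Hypothetical map of magnetic-tape positions in the virtual space.
TAPE_POSITIONS = {"tape_01": (2.0, 0.5)}

def correct_on_tape(x, y, yaw, tape_id):
    """Relocate the virtual robot to the detected tape to cancel drift."""
    x, y = TAPE_POSITIONS[tape_id]
    return x, y, yaw
```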

3.4. Communication Module

The DTs of the worker and robot are integrated by the bidirectional wireless communication module, which enables the virtual robot and DH modules to be developed on different platforms. The Real-Time Publish-Subscribe (RTPS) protocol [42] is used as the communication protocol, and the data distribution service (DDS) [43], implemented in eProsima Fast DDS [44], is employed as the middleware. The virtual robot module initially publishes the mesh data of the robot and the picking environment. It then publishes the current position and orientation of the robot, the target parts, and the picking state (before picking, success, or failure). The production management module determines the target parts for the worker and robot to pick and publishes the IDs of the target parts to the virtual robot module when a picking succeeds. Thus, the proposed system integrates the DTs of the worker (i.e., the DH module) and the robot (i.e., the virtual robot module), enabling the production management module to update the robot behavior based on the dynamic scheduling results.
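The exchanged topics can be summarized with illustrative message schemas. These dataclasses are assumptions about the payload contents based on the description above; the real system serializes such messages over DDS/RTPS rather than as Python objects.
```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class RobotState:
    """Published by the virtual robot module."""
    position: Tuple[float, float]  # (x, y) in the virtual map
    orientation: float             # yaw, rad
    target_part: int               # ID of the part being picked
    picking_state: str             # "before_picking", "success", or "failure"

@dataclass
class ScheduleCommand:
    """Published by the production management module after a successful pick."""
    target_parts: List[int]        # IDs of the next parts to pick
```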

3.5. Production Management Module

3.5.1. Cycle and Parts Definition

The part definitions (part ID $p_k$, weight $w(p_k)$, picking time $u(p_k)$, and picking attribute $r(p_k)$), the part layout (position $L(p_k)$ and amount $A(p_k)$), the cycle patterns $C_i = \{p_j\}$, and the corresponding picking allocations $R_i = \{r_j\}$ are imported into the production management module. The picking attribute $r(p_k) \in \{worker, robot, any\}$ indicates whether part $p_k$ can be picked only by the robot ($r(p_k) = robot$), only by the worker ($r(p_k) = worker$), or by both ($r(p_k) = any$). $C_i$ is the picking sequence of parts for the $i$th cycle, and $r_j \in \{worker, robot\}$ indicates whether the worker ($r_j = worker$) or the robot ($r_j = robot$) picks part $p_j$. Dynamic scheduling is performed when a delay is detected, and the production management module changes the picking allocation $r_j$ to recover the schedule.
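For illustration, the imported definitions could be represented as follows; the field names are assumptions, not the authors’ actual schema.
```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Part:
    part_id: int                   # p_k
    weight: float                  # w(p_k), kg
    picking_time: float            # u(p_k), s (initially planned)
    attribute: str                 # r(p_k): "worker", "robot", or "any"
    position: Tuple[float, float]  # L(p_k)
    amount: int                    # A(p_k)

@dataclass
class Cycle:
    parts: List[int]               # C_i: picking sequence of part IDs
    allocation: List[str]          # R_i: "worker" or "robot" per part
```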

3.5.2. Progress Monitoring and Prediction

As shown in Figure 9, the DT is used to estimate the current progress and to predict the future progress; the same method is applied to both the robot and the worker. The current progress of the worker picking the $j$th part is calculated as
$$T(i, j) = \sum_{p_k \in P(i,j)} U(p_k), \quad P(i, j) = \{ p_k \mid p_k \in C_c,\ k \in [i, j],\ r_k = r \}, \qquad (11)$$
where $T(i, j)$ is the working time from picking the $i$th to the $j$th part in the current cycle $C_c$, and $U(p_k)$ is the actual time required to pick part $p_k$. $r \in \{worker, robot\}$ is set to $r = worker$ or $r = robot$ when calculating the progress of the worker or the robot, respectively. The working speed, i.e., the actual working time relative to the initial plan, is calculated as
$$v(i, j) = T(i, j) \Bigg/ \left\{ \sum_{p_k \in F(i,j)} u(p_k) + \sum_{p_k \in F(i,j)} m(p_{k-1}, p_k) \right\}, \quad F(i, j) = \{ p_k \mid p_k \in C_t,\ k \in [i, j],\ r_k = r \}, \qquad (12)$$
where $v(i, j)$ is the ratio of the working time to the initially planned time from the $i$th to the $j$th part in the prediction target cycle $C_t$. $m(p_{k-1}, p_k)$ is the time for moving from $p_{k-1}$ to $p_k$ and is calculated as $m(p_{k-1}, p_k) = d(p_{k-1}, p_k) / v_0$, where $d(p_{k-1}, p_k)$ is the distance between the fronts of the $(k-1)$th and $k$th parts and $v_0$ is the predefined walking speed. Finally, the future progress is predicted as
$$t(j+1, l) = v(0, j) \left\{ \sum_{p_k \in F(j+1,l)} u(p_k) + \sum_{p_k \in F(j+1,l)} m(p_{k-1}, p_k) \right\} + \sum_{p_k \in F(j+1,l)} e(p_k), \quad e(p_k) = \begin{cases} E & (A(p_k) = 0) \\ 0 & (\text{otherwise}) \end{cases} \qquad (13)$$
where $E$ is the time required to restock a parts box. Because the proposed system recognizes the picking of $p_k$, changes in the amount of parts $A(p_k)$ can be tracked and predicted whenever part $p_k$ is picked. In this study, the prediction target cycle $C_t$ (i.e., the range of prediction) was set to $C_t = C_o \cup C_{o+1}$ to predict the current cycle $C_o$ and the next cycle $C_{o+1}$.
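A minimal sketch of Equations (11)–(13) is given below; the part subsets are assumed to be pre-filtered to those with $r_k = r$, and the helper names are illustrative.
```python
def current_progress(actual_pick_times):
    """T(i, j): sum of the actual picking times U(p_k), Equation (11)."""
    return sum(actual_pick_times)

def working_speed(t_actual, planned_pick_times, move_times):
    """v(i, j): actual working time relative to the plan, Equation (12)."""
    return t_actual / (sum(planned_pick_times) + sum(move_times))

def predicted_progress(v, planned_pick_times, move_times, restock_times):
    """t(j+1, l): predicted future working time, Equation (13);
    restock_times holds e(p_k) = E where A(p_k) = 0, else 0."""
    return v * (sum(planned_pick_times) + sum(move_times)) + sum(restock_times)

# Toy usage (times in seconds): a worker about 10% slower than planned.
T = current_progress([6.0, 7.2, 6.6])
v = working_speed(T, [5.0, 6.0, 5.5], [0.5, 0.5, 0.5])            # -> 1.1
t_future = predicted_progress(v, [5.0, 5.0], [0.5, 0.5], [0.0, 30.0])
```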

3.5.3. Dynamic Scheduling

To ensure safe and efficient production, dynamic scheduling is performed at the picking of the $j$th part when the following condition is satisfied:
$$T_{robot}(0, j) + t_{robot}(j+1, l) - \left\{ T_{worker}(0, j) + t_{worker}(j+1, l) \right\} \le -D_{th}, \qquad (14)$$
where $T_{robot}(0, j) + t_{robot}(j+1, l)$ and $T_{worker}(0, j) + t_{worker}(j+1, l)$ are the working times from the first to the $l$th part picking of the robot ($r = robot$) and the worker ($r = worker$), respectively. Each working time comprises the actual working time until the $j$th part is picked, $T_*(0, j)$, and the predicted working time until the $l$th part is picked, $t_*(j+1, l)$. $D_{th}$ is a threshold representing the acceptable delay time. If Equation (14) is satisfied, the worker’s task will be delayed by more than $D_{th}$ (s) compared with that of the robot. In that case, picking allocations $r_i$ assigned to the worker (i.e., $r_i = worker$) are changed to $r_i = robot$ until Equation (14) is no longer satisfied. First, the set of exchangeable parts $P_R$, i.e., parts currently assigned to the worker that can also be picked by the robot, is extracted:
$$P_R = \{ p_i \mid i \in [j+1, l],\ r_i = worker,\ r(p_i) = any \}. \qquad (15)$$
Then, the part $p_e$ whose allocation $r_e$ is changed to $r_e = robot$ is selected as the part with the highest physical load among $p_i \in P_R$:
$$p_e = \underset{p_i \in P_R}{\arg\max} \left\{ \max \left\{ \hat{t}_{torso}(p_i),\ \hat{t}_{ls}(p_i) \right\} \right\}, \qquad (16)$$
where $\hat{t}_{torso}(p_i) \in [0, 1]$ and $\hat{t}_{ls}(p_i) \in [0, 1]$ are the normalized torso and left shoulder joint torques $t_{torso}(p_i)$ and $t_{ls}(p_i)$ when part $p_i$ is picked. The joint torque $t(p_i)$ is normalized as
$$\hat{t}(p_i) = \left\| t(p_i) \right\| \Big/ \max_{p_k \in C_c} \left\{ \left\| t(p_k) \right\| \right\}. \qquad (17)$$
Equations (15)–(17) are applied repeatedly, assigning the part $p_e$ with the highest physical load to the robot, until Equation (14) is no longer satisfied; this removes the most burdensome picks from the worker first.
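The reallocation loop of Equations (14)–(17) can be sketched as follows; delay_fn is an assumed callback that re-evaluates the predicted worker delay (the left side of Equation (14)) after each reassignment, and load maps each candidate part to $\max\{\hat{t}_{torso}, \hat{t}_{ls}\}$.
```python
def normalized_torques(torque_norms, cycle_parts):
    """Equation (17): normalize each ||t(p)|| by the cycle maximum."""
    t_max = max(torque_norms[p] for p in cycle_parts)
    return {p: torque_norms[p] / t_max for p in cycle_parts}

def reschedule(allocation, candidates, load, delay_fn, d_th):
    """While the predicted worker delay is at least d_th, reassign the
    exchangeable part with the highest physical load to the robot."""
    candidates = set(candidates)                      # P_R, Eq. (15)
    while delay_fn(allocation) >= d_th and candidates:
        p_e = max(candidates, key=lambda p: load[p])  # Eq. (16)
        allocation[p_e] = "robot"
        candidates.discard(p_e)
    return allocation
```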

3.5.4. Feedback Submodule

As shown in Figure 10, instructions are sent from the production management module to the worker via earphones and the wireless light-emitting diode (LED) submodule. The ID of the next part to be picked by the worker, $p_n$, is announced via the earphones, where $p_n$ is determined by the progress monitoring and dynamic scheduling. In addition, requests to restock a parts box are displayed by the LED submodule: when $A(p_k)$ becomes zero, the LED submodule attached to the box of parts $p_k$ flashes, and it stops flashing when the box is restocked by the worker. Note that the worker restocks the box of parts $p_k$ even if the part is assigned to the robot (i.e., $r_j = robot$).

4. Results

4.1. Experimental Settings

Figure 11 shows the picking environment of the demonstration experiment. Sixteen cameras were arranged under the ceiling for optical motion capture. Four parts boxes were located on each rack, for 40 parts boxes in total. We tested the proposed system through an online demonstration with an operator and an offline validation with test data.
In the online demonstration, as shown in Figure 12, 37 reflective markers were attached to the clothes, gloves, and shoes of the worker for motion capture. The production management module instructed the robot to start picking half a cycle after the worker started picking. The performance of the proposed system was then evaluated in terms of DT construction and dynamic scheduling. Videos are available as Supplementary Material (Video S1).
In the offline validation, the test data shown in Table 1 were used, imitating various delay factors. Condition 1 imitates a worker who is consistently delayed by 1.0 s for each part. Condition 2 imitates a worker who slows down as the cycles progress. Condition 3 imitates a worker whose delay follows a Gaussian distribution.

4.2. Construction of the Digital Twin

Figure 13 shows the results of the DT construction. The virtual robot and DH accurately reflected the actual postures of the robot and worker, respectively, and the color of the DH represents the joint torque estimation results. Table 2 shows the elapsed time of each process. The most time-consuming process is the ergonomic evaluation, which is performed only when the worker picks the target part. Thus, the processing speed of the proposed system ranged from approximately 6 fps (when picking parts) to 50 fps.
Figure 14 shows an example of picking detection for the worker. A rise in the red dotted line indicates that picking was detected, and a fall indicates that the held part was released. As shown in the figure, the picking target part $p_k$ of the worker was successfully updated whenever its picking was detected. At that moment, the next part is announced to the worker via the earphones, as described in Section 3.5.4; in the case of the robot, $p_k$ is updated via the wireless communication module, as described in Section 3.4. Thus, the picking detection results confirmed that the proposed system could control the part picking.

4.3. Online Demonstration with an Operator

4.3.1. Results of Progress Prediction

Figure 15 shows the results of the progress prediction. The worker performed eight cycles and picked 40 parts per cycle. There was a 50 s difference between the actual cycle time and the initial plan in the first cycle because the worker’s performance had not yet been measured by the system. As described in Section 3.5.2, each cycle time was predicted from the previous cycle. Therefore, changes in the actual cycle time (the gap between the initial plan and the actual time in the 1st cycle, the drop in the actual cycle time in the 6th cycle, and the increase in the 7th cycle) were reflected in the predictions for the 2nd, 7th, and 8th cycles, respectively. Table 3 compares the differences from the actual cycle time for the initial plan and for the prediction. The average and standard deviation (μ ± σ) of the difference between the initial plan and the actual times over the eight cycles were 33 ± 7 s, whereas the difference between the prediction and the actual times was approximately 9 ± 11 s. The standard deviation of the prediction difference was larger than that of the initial plan because the prediction was based on the actual time of the previous cycle; thus, the standard deviation of the prediction error may grow as the variability of the actual cycle time increases.

4.3.2. Results for the Dynamic Scheduling

Figure 16 shows the dynamic scheduling results. Figure 16a shows the delay calculated using Equation (14), where the threshold value was set to $D_{th} = 20$ s. The delay became greater than $D_{th}$ at approximately 200 s. Then, the part $p_e$ originally assigned to the worker was reassigned to the robot, so the remaining working time of the robot increased, as shown in Figure 16b. In this experiment, part 22 (i.e., $p_e = 22$) was reassigned from the worker to the robot because it had a greater physical load than the other exchangeable parts. Figure 17 shows the normalized joint torque estimation results. The parts set $P_R = \{6, 26, 58, 4, 22\}$ could be switched from the worker to the robot, and as shown in Figure 17b, part 22 had the highest physical load within $P_R$. Therefore, the results confirmed that the proposed system could reduce the delay according to the predicted progress while reassigning the parts with the highest physical load from the worker to the robot.

4.4. Offline Validation with Test Data

The proposed system was further validated with the test data shown in Table 1. Figure 18 shows the validation results. We conducted a t-test and found significant differences for all conditions ($p < 0.01$). Figure 18a compares the worker’s delay, i.e., the difference between the actual working time and the initial plan, with the difference between the actual working time and the prediction of the proposed system. The proposed system predicted the working time more accurately than the initial plan under all conditions. Moreover, Figure 18b shows the average working time of each cycle. The dynamic scheduling significantly decreased the worker’s delay, and all delays were kept below the sum of the initially planned time (110 s/cycle) and the user-specified threshold $D_{th}$ ($D_{th} = 20$ s/cycle).
Finally, Figure 18c shows the performance of the ergonomic constraints. The average physical loads of the parts reassigned to the robot by dynamic scheduling with ergonomic constraints were significantly larger than those reassigned without ergonomic constraints. These results confirm that the proposed system decreases the worker’s delay through dynamic scheduling while reassigning the parts with high physical loads from the worker to the robot.

5. Discussion

The proposed system was tested through an online demonstration and an offline validation with test data. In contrast to previous DT-driven HRC studies [3,8,9], the proposed system realized a DT that includes both the robot and a worker moving over a wide area. Through the online demonstration, we confirmed that the proposed system could perform dynamic scheduling based on picking detection and the ergonomic evaluation of the worker. The average and standard deviation of the prediction error were approximately 9 ± 11 s. Because the prediction is based on the actual time of the previous cycle, the standard deviation of the prediction error may grow as the variability of the actual cycle time increases. Therefore, the prediction accuracy may decrease in cases where the working speed increases or decreases drastically every cycle, although this may not happen with actual workers. The processing speed of the proposed system ranged from approximately 6 fps (when picking parts) to 50 fps. Considering that picking a part takes a few seconds, this speed is sufficient for dynamic scheduling. However, further verification of the processing speed of the system, the moving speed of the robot, and the data communication speed is necessary to extend the proposed system to safety monitoring, where real-time responsiveness must be ensured.
The dynamic scheduling performance was tested, and the proposed system was confirmed to decrease the delay based on the predicted progress by reassigning the parts with the highest physical load from the worker to the robot. As shown in Figure 17, in contrast to discrete evaluations such as REBA [15], the proposed system can quantify the physical load of each individual movement. This yields priorities for task allocation that reduce the physical load during dynamic scheduling. The proposed system was also validated with test data imitating various types of workers: dynamic scheduling under the ergonomic constraints kept the worker’s delay below the user-specified threshold while decreasing the physical load of the worker. Therefore, this paper presents a proof-of-concept for introducing a DH into a DT-driven HRC system with regard to production efficiency and ergonomic assessment.
To introduce the proposed concept into actual factories, further research on the human measurement system is necessary. An optical motion capture system [39] was employed as the sensing submodule, with all cameras arranged under the ceiling to avoid occlusions. This system is highly accurate and widely available, but it is costly compared with other motion capture devices such as IMUs [45] and RGB-D cameras [46]. Moreover, attaching 37 markers to the worker and arranging cameras to avoid occlusion are cumbersome, so the environments in which it can be deployed are limited. A multimodal or dual measurement system might be preferable in actual factories, considering the characteristics of each device: IMUs suffer from drift errors, and RGB-D cameras are affected by occlusions. Therefore, future work will involve validating the motion estimation, picking detection, and ergonomic assessment performance with different and multiple sensing submodules, such as IMUs and RGB-D cameras. The proposed system was also tested with subjects who do not work in an actual factory. This is sufficient for a proof-of-concept study, but the system must be validated with factory workers working for realistic durations before application to real factory settings. In an actual factory, human errors such as dropping parts or picking duplicate parts may occur, and workers do not always work in the prescribed manner, for example, storing multiple parts at once. Dealing with such unintended worker behavior is a future issue, and the proposed system must be tested with actual workers to understand and handle their various behaviors. In addition, only healthy workers were considered in this study, but actual workers may have conditions such as back pain. For ergonomic evaluation that considers such situations, Equations (16) and (17) must be improved by introducing weight parameters reflecting individual physical and kinematic characteristics. This will also be addressed in future work.

6. Conclusions

In this study, a proof-of-concept for introducing a DH into a DT-driven HRC system was presented to enhance production efficiency and ergonomic evaluation. The DT-driven HRC system incorporated the DH to reconstruct the full-body posture for motion analysis and ergonomic assessment. Integration with the DH enabled detailed ergonomic evaluation, such as joint torque analysis, and a dynamic scheduling method based on the worker’s joint torques was proposed. The proposed system was applied to a parts-picking scenario, and the contribution of the DH integration was confirmed. In particular, the proposed system can predict the working progress by considering the actual performance of the worker, and the dynamic scheduling with joint torques improved the progress of the worker while decreasing their physical load. The results confirm that introducing the DH into DT-driven HRC can improve productivity and reduce physical load based on the individual capability of the worker.
This study provided a proof-of-concept for DT-driven HRC and confirmed its effectiveness. However, some issues must still be addressed before it can be implemented in actual factories. Although the optical motion capture system used in this study is highly accurate, its price and installation requirements are obstacles to its use in an actual factory; the selection of an appropriate measurement system thus remains an open issue. Even when the measurement system is changed according to the production scenario, the proposed progress prediction and dynamic scheduling can still be applied by reflecting the workers as DHs in the virtual space.
Moreover, further verification of the processing speed is necessary for use in safety monitoring practices. In this paper, the feedback from the system to the worker comprised the scheduling results and the next parts to pick. Because the proposed DT-driven HRC framework recognizes the full-body posture of the worker, the picked parts, and the robot behavior, it can be extended to various applications, such as fall detection, abnormal work detection, and safety monitoring. Such extensions will be addressed in our future studies.

Supplementary Materials

The following are available online at https://www.mdpi.com/article/10.3390/s21248266/s1, Video S1: Digital Twin-Driven Human Robot Collaboration Using a Digital Human.mp4.

Author Contributions

Conceptualization, T.M. (Tsubasa Maruyama), M.T., Y.D., Y.N., T.M. (Tatsuo Mori) and K.S.; methodology, T.M. (Tsubasa Maruyama), T.U. and T.M. (Tatsuo Mori); software, T.M. (Tsubasa Maruyama), T.U. and Y.E.; validation, T.M. (Tsubasa Maruyama), T.U. and H.T.; formal analysis, T.M. (Tsubasa Maruyama); investigation, T.M. (Tsubasa Maruyama); resources, M.T. and Y.E.; data curation, T.M. (Tsubasa Maruyama); writing—original draft preparation, T.M. (Tsubasa Maruyama); writing—review and editing, T.U., M.T., H.T. and T.M. (Tatsuo Mori); visualization, T.M. (Tsubasa Maruyama); supervision, M.T.; project administration, Y.D.; funding acquisition, Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The experimental protocol of this study was approved by the local institutional review board of the National Institute of Advanced Industrial Science and Technology (HF2019-0981).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data sharing is not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Neumann, W.P.; Winkelhaus, S.; Grosse, E.H.; Glock, C.H. Industry 4.0 and the human factor–A systems framework and analysis methodology for successful development. Int. J. Prod. Econ. 2021, 233, 107992. [Google Scholar] [CrossRef]
  2. Grieves, M. Digital Twin: Manufacturing Excellence through Virtual Factory Replication; A White Paper; Michael Grieves, LLC: Melbourne, FL, USA, 2014; pp. 1–7. [Google Scholar]
  3. Liu, M.; Fang, S.; Dong, H.; Xu, C. Review of digital twin about concepts, technologies, and industrial applications. J. Manuf. Syst. 2021, 58, 346–361. [Google Scholar] [CrossRef]
  4. Sepasgozar, S.M.E. Differentiating Digital Twin from Digital Shadow: Elucidating a Paradigm Shift to Expedite a Smart, Sustainable Built Environment. Buildings 2021, 11, 151. [Google Scholar] [CrossRef]
  5. Oyekan, J.; Farnsworth, M.; Hutabarat, W.; Miller, D.; Tiwari, A. Applying a 6 DoF Robotic Arm and Digital Twin to Automate Fan-Blade Reconditioning for Aerospace Maintenance, Repair, and Overhaul. Sensors 2020, 20, 4637. [Google Scholar] [CrossRef]
  6. Guo, D.; Zhong, R.Y.; Lin, P.; Lyu, Z.; Rong, Y.; Huang, G.Q. Digital twin-enabled Graduation Intelligent Manufacturing System for fixed-position assembly islands. Robot. Comput. Integr. Manuf. 2020, 63, 101917. [Google Scholar] [CrossRef]
  7. Sun, X.; Bao, J.; Li, J.; Zhang, Y.; Liu, S.; Zhou, B. A digital twin-driven approach for the assembly-commissioning of high precision products. Robot. Comput. Integr. Manuf. 2020, 61, 101839. [Google Scholar] [CrossRef]
  8. Robla-Gómez, S.; Becerra, V.M.; Llata, J.R.; González-Sarabia, E.; Torre-Ferrero, C.; Pérez-Oria, J. Working Together: A Review on Safe Human-Robot Collaboration in Industrial Environments. IEEE Access 2017, 5, 26754–26773. [Google Scholar] [CrossRef]
  9. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human–Robot Collaboration in Manufacturing Applications: A Review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef] [Green Version]
  10. Malik, A.A.; Bilberg, A. Digital twins of human robot collaboration in a production setting. Procedia Manuf. 2018, 17, 278–285. [Google Scholar] [CrossRef]
  11. Kousi, N.; Gkournelos, C.; Aivaliotis, S.; Giannoulis, C.; Michalos, G.; Makris, S. Digital twin for adaptation of robots’ behavior in flexible robotic assembly lines. Procedia Manuf. 2019, 28, 121–126. [Google Scholar] [CrossRef]
  12. Dröder, K.; Bobka, P.; Germann, T.; Gabriel, F.; Dietrich, F. A Machine Learning-Enhanced Digital Twin Approach for Human- Robot-Collaboration. Procedia CIRP 2018, 76, 187–192. [Google Scholar] [CrossRef]
  13. Bortolini, M.; Faccio, M.; Gamberi, M.; Pilati, F. Motion Analysis System (MAS) for production and ergonomics assessment in the manufacturing processes. Comput. Ind. Eng. 2020, 139, 105485. [Google Scholar] [CrossRef]
  14. McAtamney, L.; Nigel Corlett, E. RULA: A survey method for the investigation of work-related upper limb disorders. Appl. Ergon. 1993, 24, 91–99. [Google Scholar] [CrossRef]
  15. Hignett, S.; McAtamney, L. Rapid Entire Body Assessment (REBA). Appl. Ergon. 2000, 31, 201–205. [Google Scholar] [CrossRef]
  16. Ma, L.; Chablat, D.; Bennis, F.; Zhang, W.; Hu, B.; Guillaume, F. Fatigue evaluation in maintenance and assembly operations by digital human simulation in virtual environment. Virtual Real. 2011, 15, 55–68. [Google Scholar] [CrossRef] [Green Version]
  17. Michalos, G.; Kousi, N.; Karagiannis, P.; Gkournelos, C.; Dimoulas, K.; Koukas, S.; Mparis, K.; Papavasileiou, A.; Makris, S. Seamless human robot collaborative assembly–An automotive case study. Mechatronics 2018, 55, 194–211. [Google Scholar] [CrossRef]
  18. Bonci, A.; Cen Cheng, P.D.; Indri, M.; Nabissi, G.; Sibona, F. Human-Robot Perception in Industrial Environments: A Survey. Sensors 2021, 21, 1571. [Google Scholar] [CrossRef]
  19. Nikolakis, N.; Maratos, V.; Makris, S. A cyber physical system (CPS) approach for safe human-robot collaboration in a shared workplace. Robot. Comput. Integr. Manuf. 2019, 56, 233–243. [Google Scholar] [CrossRef]
  20. Cheng, Y.; Sun, L.; Liu, C.; Tomizuka, M. Towards Efficient Human-Robot Collaboration With Robust Plan Recognition and Trajectory Prediction. IEEE Robot. Autom. Lett. 2020, 5, 2602–2609. [Google Scholar] [CrossRef]
  21. Kanazawa, A.; Kinugawa, J.; Kosuge, K. Adaptive Motion Planning for a Collaborative Robot Based on Prediction Uncertainty to Enhance Human Safety and Work Efficiency. IEEE Trans. Robot. 2019, 35, 817–832. [Google Scholar] [CrossRef]
  22. Fera, M.; Greco, A.; Caterino, M.; Gerbino, S.; Caputo, F.; Macchiaroli, R.; D’Amato, E. Towards Digital Twin Implementation for Assessing Production Line Performance and Balancing. Sensors 2020, 20, 10097. [Google Scholar] [CrossRef] [Green Version]
  23. Lv, Q.; Zhang, R.; Sun, X.; Lu, Y.; Bao, J. A digital twin-driven human-robot collaborative assembly approach in the wake of COVID-19. J. Manuf. Syst. 2021, 60, 837–851. [Google Scholar] [CrossRef]
  24. Zhang, S.; Chen, Y.; Zhang, J.; Jia, Y. Real-Time Adaptive Assembly Scheduling in Human-Multi-Robot Collaboration Accord-ing to Human Capability*. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 3860–3866. [Google Scholar] [CrossRef]
  25. Bilberg, A.; Malik, A.A. Digital twin driven human–robot collaborative assembly. CIRP Ann. 2019, 68, 499–502. [Google Scholar] [CrossRef]
  26. Gualtieri, L.; Rauch, E.; Vidoni, R. Emerging research fields in safety and ergonomics in industrial collaborative robotics: A systematic literature review. Robot. Comput. Integr. Manuf. 2021, 67, 101998. [Google Scholar] [CrossRef]
  27. Oyekan, J.O.; Hutabarat, W.; Tiwari, A.; Grech, R.; Aung, M.H.; Mariani, M.P.; López-Dávalos, L.; Ricaud, T.; Singh, S.; Dupuis, C. The effectiveness of virtual environments in developing collaborative strategies between industrial robots and humans. Robot. Comput. Integr. Manuf. 2019, 55, 41–54. [Google Scholar] [CrossRef]
  28. Veikko, L. OWAS: A Method for the Evaluation of Postural Load during Work; Institute of Occupational Health: Edinburgh, UK, 1992. [Google Scholar]
  29. Waters, T.R.; Putz-Anderson, V.; Garg, A. Applications Manual for the Revised NIOSH Lifting Equation; National Institute for Occupational Safety and Health: Washington, DC, USA, 1994. [Google Scholar]
  30. Colombini, D.; Occhipinti, E.; Alverez-Casado, E. The Revised OCRA Checklist Method; Editorial Factor Humans: Barcelona, Spain, 2013. [Google Scholar]
  31. Maderna, R.; Poggiali, M.; Zanchettin, A.M.; Rocco, P. An online scheduling algorithm for human-robot collaborative kitting. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 11430–11435. [Google Scholar] [CrossRef]
  32. der Spaa, L.V.; Gienger, M.; Bates, T.; Kober, J. Predicting and Optimizing Ergonomics in Physical Human-Robot Cooperation Tasks. In Proceedings of the 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, France, 31 May–31 August 2020; pp. 1799–1805. [Google Scholar] [CrossRef]
  33. Greco, A.; Caterino, M.; Fera, M.; Gerbino, S. Digital Twin for Monitoring Ergonomics during Manufacturing Production. Appl. Sci. 2020, 10, 7758. [Google Scholar] [CrossRef]
  34. Baskaran, S.; Niaki, F.A.; Tomaszewski, M.; Gill, J.S.; Chen, Y.; Jia, Y.; Mears, L.; Krovi, V. Digital Human and Robot Simulation in Automotive Assembly using Siemens Process Simulate: A Feasibility Study. Procedia Manuf. 2019, 34, 986–994. [Google Scholar] [CrossRef]
  35. Menychtas, D.; Glushkova, A.; Manitsaris, S. Analyzing the kinematic and kinetic contributions of the human upper body’s joints for ergonomics assessment. J. Ambient. Intell. Humaniz. Comput. 2020, 11, 6093–6105. [Google Scholar] [CrossRef] [Green Version]
  36. Endo, Y.; Tada, M.; Mochimaru, M. Hand modeling and motion reconstruction for individuals. Int. J. Autom. Technol. 2014, 8, 376–387. [Google Scholar] [CrossRef]
  37. Endo, Y.; Tada, M.; Mochimaru, M. Estimation of arbitrary human models from anthropometric dimensions. In Proceedings of the International Conference on Digital Human Modeling and Applications in Health Safety, Ergonomics and Risk Management, Los Angeles, CA, USA, 2–7 August 2015; pp. 3–14. [Google Scholar]
  38. AIST Japanese Body Size Database. Available online: https://unit.aist.go.jp/hiri/dhrg/ja/dhdb/91-92/index.html (accessed on 11 September 2019). (In Japanese).
  39. OptiTrack Motive. Available online: https://optitrack.com/software/motive/ (accessed on 8 December 2021).
  40. Maruyama, T.; Toda, H.; Ishii, W.; Tada, M. Inertial Measurement Unit to Segment Calibration Based on Physically Constrained Pose Generation. SICE J. Control. Meas. Syst. Integr. 2020, 13, 122–130. [Google Scholar] [CrossRef]
  41. Endo, Y.; Tada, M.; Mochimaru, M. Dhaiba: Development of virtual ergonomic assessment system with human models. In Proceedings of the 3rd International Digital Human Symposium, Tokyo, Japan, 20–22 May 2014; pp. 1–8. [Google Scholar]
  42. Specification, O.M.G. The Real-Time Publish-Subscribe Protocol (RTPS) DDS Interoperability Wire Protocol Specification. Object Management Group Pct07-08-04. 2007. Available online: https://www.omg.org/spec/DDSI-RTPS/2.2 (accessed on 8 December 2021).
  43. Yang, J.; Sandström, K.; Nolte, T.; Behnam, M. Data Distribution Service for industrial automation. In Proceedings of the 2012 IEEE 17th International Conference on Emerging Technologies Factory Automation (ETFA 2012), Krakow, Poland, 17–21 September 2012; pp. 1–8. [Google Scholar] [CrossRef] [Green Version]
  44. eProsima FastDDS. Available online: https://www.eprosima.com/index.php/products-all/eprosima-fast-dds (accessed on 8 December 2021).
  45. Xsens MTw Awinda. Available online: https://www.xsens.com/products/mtw-awinda (accessed on 8 December 2021).
  46. Azure Kinect DK. Available online: https://azure.microsoft.com/en-us/services/kinect-dk/ (accessed on 8 December 2021).
Figure 1. Digital twin in robot, human, and production settings.
Figure 2. Application of digital humans to human–robot collaboration.
Figure 3. Digital twin-driven human–robot collaboration framework.
Figure 4. Construction of the digital human model.
Figure 5. Optimization-based motion estimation for marker-based motion capture [40].
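For readers reconstructing the pipeline, the optimization in Figure 5 can be read as a per-frame least-squares fit of the digital human's posture to the observed markers. The formulation below is a generic sketch of that idea, with illustrative symbols ($q$, $N_m$, $m_i$) rather than the exact cost and constraints used in [40]:

$$ \hat{q} = \arg\min_{q} \sum_{i=1}^{N_m} \left\| m_i^{\mathrm{obs}} - m_i(q) \right\|^2 , $$

where $q$ is the joint-angle (posture) vector of the digital human, $m_i^{\mathrm{obs}}$ is the measured position of the $i$-th reflective marker, and $m_i(q)$ is the corresponding marker position predicted by forward kinematics of the model.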
Figure 6. Detection of (a) part picking and (b) release. Box colliders are created for the parts box and trolley.
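To make the collider-based detection of Figure 6 concrete, the following is a minimal sketch assuming axis-aligned box colliders and a tracked hand position; all names (BoxCollider, detect_event) and geometry values are hypothetical illustrations, not the authors' implementation:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class BoxCollider:
    lo: Tuple[float, float, float]  # minimum corner (x, y, z) in the world frame
    hi: Tuple[float, float, float]  # maximum corner (x, y, z) in the world frame

    def contains(self, p) -> bool:
        # Point-in-box test along each axis.
        return all(l <= v <= h for l, v, h in zip(self.lo, p, self.hi))

# Assumed, illustrative collider geometry [m] for the two regions in Figure 6.
parts_box = BoxCollider((0.0, 0.0, 0.8), (0.4, 0.3, 1.1))
trolley = BoxCollider((1.0, 0.0, 0.4), (1.5, 0.8, 0.9))

def detect_event(hand_pos, prev_in_box, prev_in_trolley):
    """Classify the frame: 'pick' when the hand enters the parts-box
    collider, 'release' when it enters the trolley collider."""
    in_box = parts_box.contains(hand_pos)
    in_trolley = trolley.contains(hand_pos)
    event = None
    if in_box and not prev_in_box:
        event = "pick"
    elif in_trolley and not prev_in_trolley:
        event = "release"
    return event, in_box, in_trolley
```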
Figure 7. Joint torque estimation for parts picking from upper (left) and lower (right) shelves. The foot contact forces $f_C^{\mathrm{lFoot}}$ and $f_C^{\mathrm{rFoot}}$ are estimated using the friction cone, and the joint torque estimation result is visualized as a heat map. The friction cone is approximated by $c_i^G$ $(i \in [0, 3])$ when $N_e = 4$.
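The friction-cone approximation referenced in the caption admits a standard linearization; the following is a generic sketch under Coulomb-friction assumptions (the paper's exact formulation may differ):

$$ f_C = \sum_{i=0}^{N_e - 1} \lambda_i \, c_i^G , \quad \lambda_i \ge 0 , \qquad c_i^G = n + \mu \left( t_1 \cos\theta_i + t_2 \sin\theta_i \right) , \quad \theta_i = \frac{2\pi i}{N_e} , $$

where $n$ is the contact normal, $t_1$ and $t_2$ span the tangent plane, and $\mu$ is the friction coefficient. With $N_e = 4$, the cone is represented by the four edge vectors $c_i^G$ shown in Figure 7, and the nonnegative weights $\lambda_i$ are resolved together with the whole-body dynamics when estimating the joint torques.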
Figure 8. Digital twin of the robot. Magnetic tape in real space is represented in the virtual space.
Figure 9. Working time prediction. $p_j$ is the part currently being picked. The upper row represents the working time calculation for the initial plan; the lower row represents the calculation of the actual working time $T(0, j)$ and the prediction of the future working time $t(j+1, l)$.
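One plausible realization of the prediction in Figure 9 (an assumed pace-scaling scheme, not necessarily the paper's exact formula) is sketched below:

```python
def predict_remaining(timestamps, planned_times, j):
    """Return the measured T(0, j) and a pace-scaled prediction of t(j+1, l).

    timestamps[k]    -- wall-clock time at which part p_k was picked
    planned_times[k] -- initially planned picking time for part p_k
    """
    actual = timestamps[j] - timestamps[0]       # T(0, j): elapsed time so far
    planned = sum(planned_times[: j + 1])        # initial plan for p_0 .. p_j
    pace = actual / planned                      # > 1 means the worker is late
    predicted = pace * sum(planned_times[j + 1:])  # predicted t(j+1, l)
    return actual, predicted
```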
Figure 10. Feedback to the worker.
Figure 11. 3D laser-scanned point clouds of the picking environment.
Figure 12. Work cloth with 37 reflective markers.
Figure 13. Results for the digital twin construction.
Figure 14. Example detection of a worker picking a part.
Figure 15. Predicted cycle time of the worker. The prediction is performed at the beginning of each cycle.
Figure 16. Results for the dynamic scheduling: (a) worker delays and (b) changes in the predicted working time of the robot. When a delay was detected, dynamic scheduling was performed so that the delay became smaller than the threshold $D_{th}$. Consequently, the remaining working time of the robot increased relative to the initial plan.
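A hedged sketch of the threshold-based rescheduling behind Figure 16 follows, assuming a greedy policy (the function name, the predict_delay callback, and the part attributes are illustrative, not taken from the paper):

```python
def reschedule(parts, predict_delay, d_th):
    """Greedily move reassignable picks from the worker to the robot
    until the predicted delay drops below the threshold d_th."""
    for part in parts:
        if predict_delay(parts) <= d_th:
            break                        # schedule is feasible again
        if part.assignee == "worker" and part.constraint == "any":
            part.assignee = "robot"      # robot takes over this pick
    return parts
```

Under this policy, only parts with the constraint $r(p_k) = \mathrm{any}$ can migrate to the robot, which is consistent with the worker-only parts shown in blue in Figure 17.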
Figure 17. Ergonomic assessment for dynamic scheduling: (a) joint torque of the torso, (b) joint torque of the left shoulder, and (c) estimation of the parts with the highest physical loads. Orange: parts that can be assigned to the robot ($r_k = \mathrm{worker}$, $r(p_k) = \mathrm{any}$); blue: parts that must be picked by the worker ($r_k = \mathrm{worker}$, $r(p_k) = \mathrm{worker}$).
Figure 18. Offline validation with test data: (a) performance of the working time prediction, (b) performance of the dynamic scheduling, and (c) performance of the ergonomic constraints. The error bars represent the standard deviation.
Table 1. Test data for validation. $c$ represents the current cycle index. $N(\mu, \sigma^2)$ represents the Gaussian distribution.

| Condition | Picking Speed [s/part] | Cycles |
|---|---|---|
| Initial plan ¹ | 4.4 | 10 |
| Condition 1 | 4.4 + 1.0 | 10 |
| Condition 2 | 4.4 + 0.2c | 10 |
| Condition 3 | 4.4 + d (d ∼ N(1.0, 0.5²)) | 10 |

¹ If the worker follows the initial plan, no delay occurs.
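For reproducibility, the per-cycle picking speeds of Table 1 can be generated as sketched below (an assumed interpretation of the table; the seed and function name are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(seed=0)   # fixed seed assumed for reproducibility
BASE = 4.4                            # planned picking speed [s/part]

def picking_speed(condition: int, c: int) -> float:
    """Picking speed [s/part] at cycle index c for the Table 1 conditions."""
    if condition == 0:                 # initial plan: no delay
        return BASE
    if condition == 1:                 # constant 1.0 s delay
        return BASE + 1.0
    if condition == 2:                 # delay growing linearly with cycle index
        return BASE + 0.2 * c
    if condition == 3:                 # random delay d ~ N(1.0, 0.5^2)
        return BASE + rng.normal(1.0, 0.5)
    raise ValueError(f"unknown condition: {condition}")
```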
Table 2. Elapsed time of the proposed system performed on a laptop computer (CPU: Intel(R) Core(TM) i7-1065G7; RAM: 32 GB; GPU: GeForce GTX 1650). The background model (3D point clouds in Figure 13) was not rendered.

| Process | Elapsed Time [ms] ¹ |
|---|---|
| Motion estimation | 15 |
| Visualizing virtual robot | 1 |
| Motion analysis | 3 |
| Ergonomic evaluation ² | 138 |
| Progress monitoring and prediction | 1 |
| Dynamic scheduling ² | 1 |

¹ Average value in a cycle including dynamic scheduling. ² Performed only when the worker picks parts.
Table 3. Comparison of the initial plan and the prediction.

| Cycle | Actual Cycle Time [s] | Difference between Actual and the Initial Plan [s] | Difference between Actual and the Prediction [s] |
|---|---|---|---|
| Cycle 1 | 85 | 34 | 34 |
| Cycle 2 | 83 | 36 | 2 |
| Cycle 3 | 80 | 39 | 3 |
| Cycle 4 | 81 | 38 | 1 |
| Cycle 5 | 84 | 35 | 3 |
| Cycle 6 | 79 | 40 | 5 |
| Cycle 7 | 95 | 24 | 16 |
| Cycle 8 | 100 | 19 | 5 |
| μ ± σ | — | 33 ± 7 | 9 ± 11 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
