Article

Behavior Analysis for Increasing the Efficiency of Human–Robot Collaboration

1 Institute of Electrical and Control Engineering, National Yang Ming Chiao Tung University, Hsinchu City 30010, Taiwan
2 College of Mechanical and Electrical Engineering, National Taipei University of Technology, Taipei 10608, Taiwan
* Author to whom correspondence should be addressed.
Machines 2022, 10(11), 1045; https://doi.org/10.3390/machines10111045
Submission received: 29 August 2022 / Revised: 1 November 2022 / Accepted: 6 November 2022 / Published: 8 November 2022
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

Abstract

In this study, we propose a behavior analysis method for increasing the efficiency of human–robot collaboration in an assembly task. The work was inspired by previous research in which a set of operator intentions in assembly was translated into an intention graph to formulate a probabilistic decision model for planning robot actions under operator intention ambiguity and perception uncertainty. We improve on that approach by analyzing human behavior in the form of fatigue and adaptation ability, and by switching the collaboration scheme from cooperative to collaborative, so that the robot and operator work in parallel rather than sequentially. We then tested the proposed method on a chair assembly task. The results indicate that shortening the assembly duration increases the effectiveness of the assembly process, and that assembling 50 chairs with the proposed method was 4.68 s faster than with the previous method.

1. Introduction

Several technologies have been developed to identify human intention or activity. Various nonverbal cues, such as deliberate body movements, eye gaze, head nodding, upper-body posture and gestures, can be used to reveal a person’s intentions or future behavior [1]. In [1], the reliability of a gaze-enabled intention-recognition model was established; such a model must be filtered to account for the changing nature of intentions. These nonverbal cues are also shaped by the five personality traits [2]. In a previous study [3], the roles of both body posture and movement in the production and recognition of nonverbal emotions were studied. The results indicated that the arms and limited torso adjustments can be used to express emotions. In another study, intentions were recognized by analyzing the speed and direction of human movement [4], an analysis referred to as an activity recognition algorithm. Our goal is to use such algorithms in human–computer and human–robot (HR) interfaces by preserving additional details and using gesture recognition.
Over the past decade, human-safe robots have enabled collaboration between people and robots in industrial settings, which is considered a substantial improvement in this field. HR collaboration is a method of performing tasks that combines the skills of human labor and those of robots; it is also known as HR cooperation. Among the qualities needed in HR cooperation are intelligence and the ability to learn and adapt to new situations. These capabilities allow automation to be introduced in contemporary industrial sectors characterized by relatively low production volumes and a high degree of customization, two characteristics that otherwise hinder the independent deployment of robots.
As shown in Figure 1, HR collaboration scenarios range from robot workers being indirectly involved in a task by acting as assistants to human workers to robot workers being fully involved in a task, in which case robots are treated as peers to human workers and directly affect the outcomes of tasks [5].
To date, no specific strategy has been proposed to build intelligent multiagent robotic systems for HR applications, because ensuring coordinated interaction between the technological and biological components of an unpredictable intelligent robotic system is a highly challenging task. Hence, further studies are required to increase the efficiency of HR collaboration [7].
This work proposes a behavior analysis for increasing the efficiency of human–robot collaboration in a wood-chair assembly task. The proposed system uses a Partially Observable Markov Decision Process (POMDP) equipped with a behavior analysis calculation to estimate human fatigue and adaptability while the task is performed. This work aims to increase efficiency during the HRC process, especially in assembly. The collaboration includes one operator acting as the assembler and one robot that assists the assembly process. A detailed discussion of the contributions is given in Section 2.4, and the improvement over previous work is highlighted in Section 3.1. The rest of the manuscript consists of Existing Research, Proposed Method, Experiments and Results, and Conclusions.

2. Existing Research

Recognizing human nonverbal behaviors requires identifying features of the humans themselves, such as the human skeleton [1]. This is achievable with devices such as cameras, red-green-blue-depth (RGBD) cameras, or other sensors that can capture these features. In the following, we discuss recognition models of human behavior based on human features.

2.1. Human–Robot Interaction

Collaborative robots, also known as “Cobots”, can physically interact with people in a shared workspace. Cobots are readily reprogrammable, even by non-specialists. This allows them to be repurposed within constantly updating workflows [8]. Introduced to the manufacturing sector several years ago, Cobots are currently used in a variety of tasks. However, further research is still required to maximize the productive potential of interactions between humans and robots, as shown in Figure 2. No particular strategy has yet been proposed to increase the level of productivity during the construction of intelligent multiagent robotic systems for HR interaction. This is because guaranteeing coordinated interaction between the technological and biological components of an unpredictable intelligent robotic system is a highly challenging process [7].
In [9], a gesture-based HR interaction architecture was proposed, in which a robot helped a human coworker by holding and handing tools and parts during assembly operations. Inertial measurement unit (IMU) sensors were used to capture human upper-body motions. The captured data were then partitioned into static and dynamic blocks by using an unsupervised sliding window technique. Finally, these static and dynamic data blocks were fed into an artificial neural network (ANN) to classify static, dynamic and composed gestures.
In [10], a technique was proposed for the gesture-based teleoperation of a team of mobile robots with diverse capabilities. Both unmanned aerial vehicle–unmanned ground vehicle and unmanned aerial vehicle–unmanned aerial vehicle configurations were tested. In addition, high-level gesture patterns were developed at a distant station and an easily trainable ANN classifier was provided to recognize the skeletal data gathered by an RGBD camera. The classifier used these data to recognize gesture patterns, which allowed the use of natural and intuitive motions for the teleoperation of mobile robots.
In [11], an intelligent operator advice model based on deep learning aimed at generating smart operator advice was investigated. Two mechanisms were used at the core of the model: an object detection mechanism, which involved a convolutional neural network (CNN), and a motion identification mechanism, which involved a decision tree classifier. Motion detection was accomplished using three separate and parallel decision tree classifiers for motion identification based on input from three cameras. The inputs used for these classifiers were the coordinates and speeds of the objects observed. A majority vote was used to approve the appropriate motion receiving the most votes. Overall, the operator guidance model contributed to the final output by providing task checking and advice.
Control systems based on vision play a substantial role in the development of modern robots. Developing an effective algorithm for identifying human actions and the working environment and creating easily understandable gesture commands is an essential step in operationalizing such a system. In [12], an algorithm was proposed to recognize actions in robotics and factory automation. In the next section, we discuss the role of human feature recognition in HR interaction.

2.2. Recognition of Human Features

In [13], a subway driver behavior recognition system combining a CNN model and a time-series diagram was presented as an example of nonverbal behavior applied as input for a behavior recognition system; hand movement was used as the input for recognizing behavior. In [14], a unique infrared hand gesture estimation approach based on CNNs was investigated. This approach helped distinguish a teacher’s behavior by studying the teacher’s hand movements in a real classroom and in live video. A two-step algorithm for detecting arms in image sequences was proposed, and the relative locations of the whiteboard and the teacher’s arm were used to determine hand gestures in the classroom. According to the experimental results, the developed network outperformed state-of-the-art approaches on two infrared image datasets.
Assessment of human position is considered a crucial phase because it directly affects the quality of a behavior identification system. Skeleton data are usually obtained in either a two-dimensional (2D) or 3D format. For instance, in [15], a state-of-the-art error correction system combining algorithms for person identification and posture estimation was developed. This was achieved using a Microsoft Kinect sensor, which was used to take deep RGB images to gather 3D skeletal data. An OpenPose framework was then used to identify 2D human postures. In addition, a deep neural network was used during the training process of the classification model. To identify behaviors, both previously proposed approaches and approaches based on skeletons were evaluated.
Generally, the analysis of human behavior based on activity remains substantially complicated. Body movements, skeletal structure and/or spatial qualities can be used to describe an individual in a video sequence. In [16], a highly effective model for 2D or 3D skeleton-based action recognition, called the Pose Refinement Graph Convolutional Network, was proposed; OpenPose was used to extract 2D human poses, and gradually merging the motion and location data helped refine the pose estimates. In [17], a hybrid model combining a transfer-learning CNN model, human tracking, skeletal characteristics and a deep-learning recurrent neural network with a gated recurrent unit was proposed to improve the cognitive capabilities of the system, with excellent results. In [18], an At-seq2seq model was proposed for human motion prediction, which considerably decreased the difference between the predicted and true values and extended the time span over which human motion could be accurately predicted. However, most current prediction methods address a single individual in a simple scenario and their practicality remains limited.
Techniques based on vision and wearable technology are considered the most developed for establishing a gesture-based interaction interface. In [19], an experimental comparison was conducted between two movement estimation systems: position data collected from Microsoft Kinect RGBD cameras and acceleration data collected from IMUs. The results indicated that the IMU-based approach, called OperaBLE, achieved a recognition accuracy up to 8.5 times that of Microsoft Kinect, whose accuracy turned out to depend on the movement execution plane, the subject’s posture and the focal distance.
Automated video analysis of interacting objects or people has several uses, including activity recognition and the detection of archetypal or unusual actions [20]. Current approaches use temporal sequences of quantifiable real-valued features, such as object location or orientation, although several qualitative representations have also been proposed. In [20], a novel and reliable qualitative method for identifying and clustering pair activities was proposed, in which qualitative trajectory calculus (QTC) was used to express the relative motion of two objects and their interactions as a trajectory of QTC states.

2.3. Human Robot Collaboration and Its Safety Measurement Approaches

In industry, the human–machine collaboration paradigm influences how safety regulations are implemented. However, enforcing these rules can result in very inefficient robot behavior. Current HRC research [21] focuses on increasing robot and human collaboration, lowering the possibility of injury owing to mechanical characteristics and offering user-friendly programming strategies. According to studies [22], more than half of HRC research does not employ safety requirements to address challenges. This is a fundamental problem because industrial robots and robot collaboration might endanger human employees if suitable safety systems are not implemented.
There are five levels of human–robot collaboration: (1) cell, (2) cohabitation, (3) synchronized, (4) cooperation and (5) collaboration. Human and robot begin to share a workstation at the synchronized, cooperation and collaboration levels; these three levels are distinguished by the moment at which the human enters the robot workstation. At the synchronized level, robots and humans work on identical products but join the same work at different times. At the cooperation level, robots and humans work in the same workplace at the same time, but their jobs are distinct. At the collaboration level, robots and humans work concurrently in the same workspace on the same activity. As a result, people and robots can work together to complete jobs quickly. The HRC workplace design may be adapted to the industry, but it must adhere to the regulations outlined in ISO 10218-1:2011 and ISO 10218-2:2011. Because the standards for collaborative robots have recently been amended, it is also vital to consider safety following ISO/TS 15066:2016. To reduce work accidents, four collaboration modes are defined in this specification: Safety-rated Monitored Stop (SMS), Hand Guiding (HG), Speed and Separation Monitoring (SSM) and Power and Force Limiting (PFL).
As an extension of HR collaboration research, safety research based on the current standards has been deployed. Based on ISO technical specification 15066, Marvel and Norcross [23] advised on safety operations and elaborated on the mechanism during HR collaboration. Building on this, Andres et al. [24] defined the minimum separation distance by combining static and dynamic safety regulations; the authors argue that their proposed dynamic safety regulation is more practical than the traditional safety regulation (SSM).
Thereafter, applications based on dynamic safety zones for standard collaborative robots started to grow rapidly. Di Cosmo et al. [25] verified the possibility of collision during collaboration using their proposed dynamic safety regulation; their results show that the proposed system improves completion time with respect to traditional static safety zones. Similarly, Byner et al. [26] proposed two methods to calculate robot velocity by considering the separation distance. This approach is applicable in industry, but it requires sensors that provide precise human position information. In addition, an integrated fuzzy-logic safety approach strengthened the protective barrier between people and machines [27]; it improved the safety of HR collaboration by maximizing production time while guaranteeing the safety of human operators inside the shared workspace.
On the other hand, studies based on external sensors are also being developed in this field. Ref. [28] detects humans using a heat sensor on the robot arm, which lets the robot become aware of a human appearing in its working space; a safety standard is then applied in HRC by including a potential runaway-motion area. This method allows the industry to define a protective separation distance that anticipates the robot’s pace. Kino-dynamic research [29] used a 2D laser scanner to maintain a safety distance based on a calculated minimum collaboration distance. A two-layer safety standard was also used in [30] to let the robot collaborate by implementing trajectory scaling to limit the robot’s speed and avoid interference. Moreover, Ref. [31] presents a safety zone that adapts to the robot’s direction and location.
Apart from the robotic and instrumentation point of view, HRC can also be supported by using the human skeleton or human movement as input to a recognition model (i.e., vision-based approaches). One such study used the SMI BeGaze program on expressive faces to create two areas of interest (AOIs) for understanding human emotion perception [32]. By analyzing the AOIs, one around the eyes and one around the mouth, relationships were found between callous-unemotional traits, externalizing problem behavior, emotion recognition and attention. Real-time classification for autonomous drowsiness detection using the eye aspect ratio has also been conducted [33].
Furthermore, per-object situation awareness of drivers has been identified using gaze behavior [34]; combining eye tracking and computational road-scene perception makes this possible. The design of a low-cost monocular system that is as accurate as commercial systems for visual fixation registration and analysis has also been studied [35]. Two mini cameras were used to track human eye gaze, and the ocular data acquisition algorithm used random sample consensus (RANSAC) and computer vision (CV) functions to improve pupil edge detection. The main goal of this mask-style eye tracker was to provide researchers with a cost-effective technology for high-quality recording of eye fixation behaviors, such as in studies of human adaptation and relationships in several professional performance domains.
To conclude, various safety approaches are still being developed for HR collaboration. There are two mainstream approaches in this area, namely SSM and PFL. Standard-based SSM works well to limit the robot’s motion while the operator approaches certain areas of the robot’s workspace. However, the SSM-based model is less effective when the operator and robot are within close distance; most SSM-based approaches command the robot to stop or idle until the operator leaves the robot’s working area. In contrast to SSM, the PFL-based framework detects initial contact between the operator and the robot with a built-in or external force sensor and reacts when the contact force exceeds a threshold.

2.4. Contributions

In this study, we used behavior recognition to analyze human nonverbal behaviors in order to increase the efficiency of HR collaboration in the assembly process, especially wood-chair assembly. We propose a system that can increase the performance of individuals working with robots; the system is based on evaluating human behaviors during the assembly process, including weariness, adaptability and the preferred assembly sequence. The main contributions of this study are as follows:
  • We proposed a novel behavior analysis method to increase the efficiency of HR collaboration in assembly. Regarded as a breakthrough in robotic collaboration, this method integrates human nonverbal behaviors in assembly activities and can be used in different robotic collaboration scenarios.
  • We proposed a framework of human preference sequence assembly using a Partially Observable Markov Decision Process (POMDP) and human adaptation ability based on learning curves (LC). Understanding a human’s adaptability allows the robot to adjust its actions to the worker’s adaptation level so that the worker can work safely. Human fatigue during assembly is used as an observation factor in the POMDP architecture, as shown in Figure 3. We used a cost function to formulate fatigue, which is described in Section 3.3 and its sub-sections. These functions allow the assembly duration of HR collaboration to be maintained even if the human becomes fatigued.
Here, the standard POMDP model from previous work [36] is shown in the upper red-dashed rectangle. Then, we show behavior analysis as a part of human–robot collaboration. For more details regarding our system, please refer to Section 3.1.

3. Proposed Method

In this section, we describe our proposed behavior analysis method for increasing the efficiency of HR collaboration. We also briefly discuss our design, the governing equations, the POMDP, and the fatigue and learning-ability methods.

3.1. System Overview

This study builds upon the work introduced in [36]. To increase the effectiveness of POMDP and enable robots to perform at their best during collaboration, we conducted a behavior analysis of the operator’s adaptability to robots and the level of operator fatigue during operation. Unlike [36], we implement the proposed method on a real robot that collaborates with a human in an assembly process.
In the previous work [36], a human substituted for the robot role during the collaborative assembly of components, as shown in Figure 4. The person taking the robot’s role collaborated by following textual commands generated by the system. This makes it difficult to evaluate the effectiveness of that approach, as the experiment was not conducted under real-time conditions. To briefly review the previous approach, please refer to the link: shorturl.at/lmLS5.
Figure 5 shows a graphical representation of the framework system. The originality of this work lies in the design highlighted by the green box.
The overall procedure is divided into five distinct stages. In the first stage, a web camera captures frames of the human worker and MediaPipe extracts their features. The extracted features are then fed to a deep learning model that classifies the worker’s activity; the resulting activity classification provides the set of observations used as input to POMDP. In addition to the worker’s intention, two further inputs, human fatigue and worker adaptability, are incorporated into our POMDP model to determine the robot action. The originality of our study lies in the investigation of operator tiredness and adaptation ability. A similar work proposed early prediction in HR collaboration [37], based on learning from a demonstration model using a state-enhanced convolutional long short-term memory (ConvLSTM) framework. For HR collaboration simulation, hidden semi-Markov models (HSMMs) were used in [38]; this enabled the robot to predict the human assembly rate and speed up the operation during the process.
In this paper, we propose a method that enhances the previous methods; Table 1 shows the comparison. We also consider human factors (human fatigue and adaptation ability) during the assembly process, as these affect the robustness and effectiveness of HR collaboration. Specifically, we do not focus on the LSTM network, robot control, human feature extraction, or improvement of safety mechanisms (SSM- or PFL-based frameworks). Table 2 summarizes and highlights our approach compared with previous work.

3.2. POMDP

A Partially Observable Markov Decision Process (POMDP) provides descriptions of human intentions, robot behaviors and the reward system. It models operator intent as a collection of hidden variables called intentions, which are then used when a chair is assembled.
  • Intention-I
An operator intent i ∈ I represents the mental path through the assembly state graph from the current assembly state toward the goal state. Two possible intention scenarios are involved in the assembly of a chair. Figure 6 shows all the components required for the assembly. To assemble a chair, four legs are required. The first step in the assembly process is to apply glue to a hole, which must be performed before the corresponding leg is inserted.
The first intention scenario is to randomly apply glue to all holes at the base of the chair and then insert the legs into the holes. This scenario involves four combinations. The second intention scenario is to apply glue to a single hole and then place a leg into that hole. This scenario also involves four combinations. Figure 7 shows the two intention scenarios, where A, B, C and D indicate the locations of the holes at the base of the chair.
  • Robot Action-A
Robot actions refer to the set of possible assembly operations that a robot can execute. Robots can perform the same action at three speeds: low, normal and high. The speed is selected depending on the human worker’s adaptation ability. Human adaptation ability is divided into three categories: low, normal and high. Any variations in robot actions depend on the human worker’s intention and their fatigue level. Figure 8 shows different robot actions, where circles with A, B, C and D refer to the chair assembly process, circles with 0 refer to the robot being idle, and circles with 1, 2 and 3 refer to robot actions when the human worker is fatigued. The purple and green colors represent the closest and farthest areas from the human worker, respectively. Table 3 shows the robot’s decision speed depending on the operator’s adaptation ability and Table 4 shows the robot’s decision speed depending on the operator’s adaptation ability and fatigue.
  • Reward-R
The reward is a function that specifies the immediate reward received by the robot when it performs an assembly operation responsive to the operator’s intention. Table 5 shows the rewards applied in our system. If the robot performs as expected, the system gives a positive reward to encourage it, so the robot performs at its best speed to collaborate with the human. If a problem occurs during the assembly that makes the human uncomfortable, the system gives a negative reward to discourage the robot. In this way, our approach makes the robot learn to collaborate with humans.
The reward system also provides safety measurements during the operation. An uncomfortable situation during collaboration with the robot will lead the robot to work in slow mode during collaboration. Some uncomfortable situations during assembly are:
(1) The human stops working with the robot;
(2) The human does not trust the ability of the robot;
(3) The human does not need the robot’s help;
(4) In case of collision, the robot will return to the idle position and reassess the condition. The possible reward for this situation is (−50), which leads to less effective HR collaboration.

3.3. Behavior Analysis System

This section describes the behavior analysis of the proposed system and highlights the mathematical equations used and the concepts behind them. Both human fatigue and adaptation ability are input into POMDP to affect the actions of the robot. Figure 9 shows the integration of the behavior analysis scheme with POMDP.
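To make the data flow concrete, the following is a minimal sketch of how these pieces can fit together: a standard discrete POMDP belief update over the intention hypotheses, a speed choice driven by adaptation ability and fatigue (Tables 3 and 4) and a reward lookup in the spirit of Table 5. The transition and observation matrices, thresholds and helper names are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

# Illustrative decision-layer sketch: 8 intention hypotheses (two scenarios x
# four hole orders) and 7 observable activity classes (glue A-D, place leg A-C).
# The transition/observation matrices are uniform placeholders.
N_INTENTIONS, N_OBSERVATIONS = 8, 7
T = np.full((N_INTENTIONS, N_INTENTIONS), 1.0 / N_INTENTIONS)      # P(i'|i)
O = np.full((N_INTENTIONS, N_OBSERVATIONS), 1.0 / N_OBSERVATIONS)  # P(o|i')

def update_belief(belief, obs_idx):
    """Discrete POMDP belief update: b'(i') is proportional to P(o|i') * sum_i P(i'|i) b(i)."""
    predicted = T.T @ belief
    updated = O[:, obs_idx] * predicted
    return updated / updated.sum()

# Speed mapping from adaptation ability (Table 3); fatigue lets the robot
# take the lead instead of only handing over parts (Section 4.7).
SPEED_BY_ADAPTATION = {"low": "low", "normal": "normal", "high": "high"}
REWARD = {"collaborate": 10, "normal_speed": -2, "low_speed": -50, "idle": 0}  # Table 5

def choose_robot_action(belief, adaptation, fatigued):
    intention = int(np.argmax(belief))              # most likely intention
    action = "take_lead" if fatigued else "hand_over_leg"
    return intention, action, SPEED_BY_ADAPTATION[adaptation]

if __name__ == "__main__":
    b = np.full(N_INTENTIONS, 1.0 / N_INTENTIONS)   # uniform initial belief
    b = update_belief(b, obs_idx=0)                 # e.g., observed "gluing hole A"
    print(choose_robot_action(b, adaptation="low", fatigued=False))
```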

3.3.1. Worker Adaptation Ability

A learning curve (LC) is a mathematical depiction of the performance of workers engaged in repetitive tasks. Workers tend to take less time to complete tasks as the number of repetitions increases, because they become more familiar with the task and tools and find shortcuts to task completion. LC parameters can be estimated using a nonlinear optimization procedure that minimizes the sum-of-squares error; if convergence is not reached, which frequently occurs in nonlinear regression, the initial parameter values may be altered. The adherence of the model to a validation sample, the sum-of-squares error and the coefficient of determination R2 can be used to evaluate how well the model fits the data. Wright’s model, also called the log-linear model, is generally considered the first formal LC model and is mathematically represented as follows:
t_x = a · x^b (1)
where t_x is the operation time (in seconds) of the x-th repetition, a is the time of the first operation, b is a negative constant whose absolute value represents the learning index, and x is the repetition number. To obtain an LC range for the chair assembly process, chair assembly data were collected 10 times from each of 10 respondents. According to these data, the LC was categorized as low, normal, or high.
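As an illustration of how the LC parameters can be estimated in practice, the following sketch fits Wright's model to one worker's first assembly cycles by linearizing it in log space; the timing values are made-up placeholders, and the category thresholds follow those reported later in Section 4.6.

```python
import numpy as np

# Illustrative fit of Wright's log-linear model t_x = a * x**b to one worker's
# first assembly cycles; the timing values below are placeholders, not measured data.
times = np.array([30.1, 27.4, 25.8, 24.9, 24.1, 23.6, 23.0, 22.7, 22.3, 22.0])
x = np.arange(1, len(times) + 1)

# Linearize: ln(t_x) = ln(a) + b * ln(x), then solve by least squares.
log_x, log_t = np.log(x), np.log(times)
b, log_a = np.polyfit(log_x, log_t, deg=1)
a = np.exp(log_a)

# Coefficient of determination R^2 of the fitted curve against the data.
pred = a * x**b
ss_res = np.sum((times - pred) ** 2)
ss_tot = np.sum((times - times.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

# Categorize adaptation ability from |b| (thresholds follow Section 4.6).
learning_index = abs(b)
if learning_index <= 0.245:
    category = "low"
elif learning_index <= 0.5:
    category = "normal"
else:
    category = "high"

print(f"a={a:.2f}s  b={b:.3f}  R^2={r2:.3f}  adaptation={category}")
```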

3.3.2. Fatigue Measurement

In scenarios involving humans working for extended durations, fatigue is considered one of the crucial factors affecting performance. To account for the capabilities of a human worker, the cost function should include a representation of fatigue [5]. In this study, we calculated the degree of tiredness as the difference between a human worker’s completion times and their nominal expected performance. The fatigue variable therefore does not represent the absolute degree of exhaustion that a worker experiences; rather, it offers a relative measurement of fatigue for each subtask. To obtain data from a typical competent human worker, we simulated the completion times of different workers, denoted by M_{i,j}, over 90 task iterations by using the following model:
M_{i,j} = t_{P,j} + τ_j · ln(i) (2)
To perform this calculation, we obtained historical data for each worker. First, the initial completion time, denoted by t_{P,j}, was derived from the first iteration of the task that was successfully completed within the current task assignment period. Next, historical data from previous job assignment periods for a human worker were used to calculate the variable τ_j, which provides a synthetic measure of the tiredness of the human worker. This step is critical before proceeding. The value of this variable is bounded as follows:
τ_j ≤ (T_j − N_j · t_{P,j}) / (N_j · ln(N_j) − N_j) (3)
where N_j is the number of task iterations completed in an assignment period of length T_j; in this study, we used this upper limit for the variable τ_j. According to [39], the estimated duration of assembly was calculated using Equation (4), where n is the number of repetitions. The expected task duration at the n-th repetition, y(n), was calculated as follows:
y(n) = y(1) + (M − y(1)) · (1 − n^(−τ)) − (y(1) − m) · (1 − n^(−λ)) (4)
where y(1) is the expected task duration at the first repetition, M is the expected worst limit performance due to fatigue after a large number of repetitions and m is an estimate of incompressible worker performance. This incompressible worker performance reflects the most optimal performance, based on physiological limits related to manipulative movements, and is evaluated as the 75% quantile of performance time computed by the methods–time measurement technique. Here, λ is the learning factor and τ is the tiredness factor (0 < λ < 1 and 0 < τ < 1).
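The sketch below implements Equations (2)–(4) as reconstructed here, together with the fatigue check used later in Section 4.5 (an assembly duration above the expected duration implies fatigue). The parameter values are those reported in Section 4.5, while the measured duration is a placeholder.

```python
import numpy as np

def tiredness_upper_bound(T_j, N_j, t_P_j):
    """Upper bound on the tiredness factor tau_j, Equation (3)."""
    return (T_j - N_j * t_P_j) / (N_j * np.log(N_j) - N_j)

def completion_time_model(i, t_P_j, tau_j):
    """Synthetic completion time of iteration i, Equation (2)."""
    return t_P_j + tau_j * np.log(i)

def expected_task_duration(n, y1, M, m, tau, lam):
    """Expected task duration at the n-th repetition, Equation (4) as reconstructed above."""
    return y1 + (M - y1) * (1 - n ** (-tau)) - (y1 - m) * (1 - n ** (-lam))

# Example with the parameter values reported in Section 4.5 (y(1)=13.7 s,
# M=23 s, m=17.25 s, tau=lam=0.5); the operator is flagged as fatigued when
# the measured assembly duration exceeds the expected duration.
expected = expected_task_duration(n=50, y1=13.7, M=23.0, m=17.25, tau=0.5, lam=0.5)
measured = 26.0   # placeholder measured duration in seconds
print(f"expected={expected:.2f}s  fatigued={measured > expected}")
```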

3.4. Extracting Human Features

We used a web camera to capture human facial characteristics. This camera captured 30 frames per second (i.e., 30 data sequences per second). The human feature data collected included 132, 63, 63 and 1404 flattened data points from the skeleton, right palm, left palm and face, respectively. Data were collected from each frame. Figure 10 shows the data points collected by MediaPipe.
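A minimal sketch of this feature extraction step is shown below, assuming the MediaPipe Holistic solution (33 pose landmarks with visibility, 468 face landmarks and 21 landmarks per hand, which yields the 1662 values mentioned above); the camera index and confidence thresholds are illustrative assumptions.

```python
import cv2
import numpy as np
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten MediaPipe Holistic landmarks into a 1662-value feature vector
    (132 pose + 1404 face + 63 per hand), zero-filled when a part is not detected."""
    pose = (np.array([[lm.x, lm.y, lm.z, lm.visibility]
                      for lm in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[lm.x, lm.y, lm.z]
                      for lm in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    lh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[lm.x, lm.y, lm.z]
                    for lm in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([pose, face, lh, rh])   # 132 + 1404 + 63 + 63 = 1662

cap = cv2.VideoCapture(0)   # 30-FPS web camera (device index is an assumption)
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    ok, frame = cap.read()
    if ok:
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        features = extract_keypoints(holistic.process(rgb))
        print(features.shape)   # (1662,)
cap.release()
```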

3.5. Human Activity Classification

Human activity classification follows the sign language detection approach of [40], which uses long short-term memory (LSTM) and MediaPipe. Figure 11 shows the LSTM architecture used in this study. This architecture, which comprises three hidden (LSTM) layers and three dense layers, was tested for classifying human activity using the AAMAZ dataset.
The LSTM input comprises sequences of 20 frames, each with 1662 human features. The network contains three LSTM layers, all using the rectified linear unit (ReLU) as their activation function. ReLU is also used in the first two layers of the dense stack, whereas the softmax function is used in the final layer to convert a vector of values into a probability distribution. The final dense layer outputs 11 classes based on the AAMAZ dataset.
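A sketch of such an architecture in Keras is shown below; only the input shape (20 frames × 1662 features), the activations and the 11-class softmax output follow the description above, while the layer widths and optimizer are assumptions.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Three LSTM layers followed by three dense layers, as described in the text.
model = Sequential([
    LSTM(64, return_sequences=True, activation='relu', input_shape=(20, 1662)),
    LSTM(128, return_sequences=True, activation='relu'),
    LSTM(64, return_sequences=False, activation='relu'),
    Dense(64, activation='relu'),
    Dense(32, activation='relu'),
    Dense(11, activation='softmax'),    # 11 activity classes (AAMAZ)
])
model.compile(optimizer='Adam',
              loss='categorical_crossentropy',
              metrics=['categorical_accuracy'])
model.summary()

# Training sketch: X has shape (num_sequences, 20, 1662), y is one-hot (num_sequences, 11).
# model.fit(X, y, epochs=250)
```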
We evaluated the classification system by computing a confusion matrix for the model created using LSTM. Confusion matrices are a common method for evaluating the effectiveness of a classification system.
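For instance, a confusion matrix and accuracy can be computed from the predicted and true class indices as follows; the toy labels stand in for the actual test split.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

# Sketch of the confusion-matrix evaluation; in practice the predictions would
# come from the trained LSTM, e.g.:
# y_pred = np.argmax(model.predict(X_test), axis=1)
# y_true = np.argmax(y_test, axis=1)
y_true = np.array([0, 1, 2, 2, 3])          # toy labels for illustration
y_pred = np.array([0, 1, 2, 1, 3])

print(confusion_matrix(y_true, y_pred))     # rows: true class, cols: predicted class
print("accuracy:", accuracy_score(y_true, y_pred))
```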

4. Experimental Setup and Results

4.1. Experimental Setup

The environmental configuration used for the simulation is presented in Figure 12. The figure shows that the human worker and the robot occupy the same workstation. The goal of this experiment is to help the human accomplish the chair-assembly task, with the robot assisting in the assembly. As discussed in Section 3.2 (POMDP—Intention), the workpieces consist of four legs and one base. The web camera acquires the data used to capture the human motion features. All the collected data are processed in the PC controller to compute the POMDP with behavior analysis explained in Section 3.2 (POMDP—Robot Action).
Table 6 shows the detailed experimental setup of this research. As explained in Section 3.4, human detection was carried out using the MediaPipe library. The vision-based function extracts the operator’s activity features during the assembly operation, as elaborated further in Section 4.4 (Human Intentions).

4.2. Robot Trajectory Planning

The robot trajectory planning in the experimental stage consists of three primary sequential operations. First, all the robot motions were recorded (teaching process) to initialize the idle position, the object-picking position and the chair-leg handling position. These taught locations are the robot’s only prior information at the beginning of the experiment. The coordinate recording was carried out using the Kinova SDK application.
Next, the system analyzes the operator’s behavior from the durations of the first ten collaboration cycles. It computes the adaptation ability, which determines the robot’s movement speed during collaboration in the subsequent assembly cycles. Once the system has retrieved all the information, the robot starts to assist the operator by handing over the chair legs according to the operator’s assembly step preference. Here, the operator has two possibilities: (1) gluing all the holes at the beginning and then placing the chair legs, or (2) gluing and placing the chair legs alternately. Figure 7 illustrates these assembly possibilities. For a demonstration of the experiment, please refer to the following link: shorturl.at/delpy.
Figure 13 and Figure 14 describe the robot motion according to the following sequence:
First cycles (observation phase): idle position—observe the operator’s behavior during the first ten cycles.
Later cycles:
0. Retrieve the human behavior data.
1. Start at the idle position.
2. Pick a chair leg (1, 2, …, 4) from the station.
3. Hand the leg to the human.
4. Analyze the operator’s activity.
5. Repeat phases 2–4 for the remaining legs.
6. Once all the chair legs have been placed, return to step 1 to prepare for the next assembly cycle, as sketched below.
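The sketch below is illustrative only: the robot-motion helpers are hypothetical no-op placeholders standing in for the taught Kinova SDK motions, and the adaptation estimate is a toy rule, not the authors' implementation.

```python
import time

# Hypothetical robot-motion helpers (placeholders for the taught Kinova moves).
def move_to_idle():       print("robot: idle position")
def pick_leg(leg):        print(f"robot: pick chair leg {leg}")
def hand_over(leg):       print(f"robot: hand leg {leg} to operator")
def set_speed(level):     print(f"robot: speed set to {level}")
def operator_assembles(): time.sleep(0.01)   # stand-in for one operator step

def estimate_adaptation(durations):
    """Toy mapping of the observed cycle durations to a speed level."""
    return "low" if sum(durations) / len(durations) > 0.05 else "normal"

def run_collaboration(total_cycles=12, observe_cycles=10):
    history = []
    for cycle in range(1, total_cycles + 1):
        if cycle <= observe_cycles:
            move_to_idle()                        # first cycles: observe only
            start = time.time()
            operator_assembles()
            history.append(time.time() - start)
            continue
        set_speed(estimate_adaptation(history))   # 0. retrieve behavior data
        move_to_idle()                            # 1. start at idle position
        for leg in range(1, 5):                   # 2-5. pick, hand over, analyze
            pick_leg(leg)
            hand_over(leg)
            operator_assembles()
        # 6. all legs placed: loop back to idle for the next cycle

if __name__ == "__main__":
    run_collaboration()
```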

4.3. Human Activity Classification Results

Figure 15 displays the accuracy of human activity classification and the confusion matrices. Each confusion matrix shows the classification performance for each data test. In the confusion matrices, each row represents the correct output class for a given image and each column represents the predicted class. The confusion matrix is constructed on the basis of the data test. The AAMAZ confusion matrix is shown on the left. True and predicted classes are denoted by 1–10. These numbers represent left boxing, right boxing, hand-waving, left hand-waving, right hand-waving, handclapping, crawling, jumping and walking. For human worker activities, true and predicted classes are denoted by 1–7. These numbers represent gluing at hole A, gluing at hole B, gluing at hole C, gluing at hole D, placing a leg in hole A, placing a leg in hole B and placing a leg in hole C.
The LSTM architecture was used to classify human worker activities during the assembly process of a chair over 250 epochs. A set of inputs was used to choose the human worker assembly preference in the intention part of POMDP. Table 7 shows the set of inputs for the first and second scenarios in the chair assembly process.
The aforementioned set of inputs considerably affected the generation of operator intentions inside the POMDP system. Because both the first and the second scenarios exhibited an equal number of steps, both scenarios shared the same ID. The POMDP system produced an intention operator corresponding to the first two activities received by LSTM by considering these activities.

4.4. Human Intentions

All human intentions were generated by POMDP and the belief state in POMDP was updated after the development of operator intentions. The operator intention IDs are presented in Figure 16 for the first and second assembly scenarios. As shown in Table 8, the belief update resulting from the previous and proposed POMDP were the same. Figure 17 shows the generated intention on an ID map, where the red color represents the real intention and the cyan color represents the generated intention based on POMDP. During the implementation of POMDP, the second scenario of chair assembly was selected.
According to the table and the ID map, the intentions generated by POMDP almost always matched the real operator intention in the first and second assembly scenarios. However, when the third operator intention was generated, a slight misprediction was observed after the sixth step; this mismatch was resolved in the next (fourth) prediction. This case is described in Table 6.

4.5. Human Fatigue—Measurement

Human fatigue data were acquired from ten respondents while each assembled the chair 90 times. Equation (4) was adopted to obtain the expected chair assembly duration and to determine the fatigue condition. Using Equation (4), the value y(1) = 13.7 s was obtained from the average of the first assembly duration observed for each respondent; the duration of the first chair assembly can be seen in Table 9. The value of m (17.25 s) was determined using the MTM technique [41]. Based on the data, the value M = 23 s was estimated. Finally, the model parameters λ and τ used in this experiment were both set to 0.5. The calculated expected duration for assembling a chair at the 50th repetition is 25.19 s; the operator is assumed to be fatigued when the duration of assembling a chair exceeds 25.19 s.

4.6. Human Worker Adaptation Ability

In addition to the fatigue data, adaptation ability data were obtained from the same ten respondents, who repeated the chair assembly process 90 times. Data from the first 10 chair assemblies of each respondent were used to investigate their adaptation ability. The negative learning-index constant b was then calculated using Equation (1). According to the calculated b value, adaptation ability was divided into three categories: low (from 0 to 0.245), normal (0.25 to 0.5) and high (higher than 0.5). Subsequently, the R2 value was calculated to validate the value of b, as shown in Table 10, where “Res1” to “Res10” denote respondents 1 to 10.
The average correlation coefficient of the two models was 0.82 (greater than 0.8), indicating a strong linear correlation between the models and verifying the reliability of the complexity measure model based on the operation time.

4.7. Performance Comparison

We tested both the previous method [36] and the proposed method. We used the duration required to meet the product demand and the correctness of product assembly to validate the implemented method. The chair assembly process was repeated 50 times.
When the previous model was employed, the operator independently assembled the chair. During the 12th iteration, the operator delegated their work to the robot. The total time taken to fulfill the demand of 50 chairs was 29.7 min. During the chair assembly process, the operator’s intentions, obtained from the POMDP results, were correct according to the production process. Table 11 shows the time taken to assemble a chair with the previous model compared with the time taken with the proposed model. Table 12 and Table 13 show the robot actions during the assembly process at cycles 1–39 and cycles 40–50, respectively.
When using the proposed method, the operator independently assembled the chair in the first iteration. The POMDP system started to generate the operator’s intention and regularly updated it according to feedback from the operator. To assess adaptation ability, the assembly durations of the first 10 collaboration iterations were analyzed. The operator’s adaptation ability exhibited a negative learning-index constant of b = 0.031 with R2 = 0.881. The robot then started to help the operator based on their adaptation ability, which was categorized as low (below 0.245). Because the operator had low adaptation ability, the robot’s speed was decreased. During the assembly process, whenever the operator remained idle for more than 2 s, the system interpreted this as the operator becoming tired and allowed the robot to take the lead.
As shown in Table 11, Table 12 and Table 13, the proposed method helped generate robot actions depending on the adaptation ability and fatigue of the operator. The proposed method also allowed the robot to actively help the operator during the assembly process, unlike the previous method, which only allowed the robot to passively help the operator. When using the previous method, the robot was able to help the operator during the assembly process only when the operator handed over the unfinished product to the robot. By contrast, the proposed method allowed the robot to take charge of the assembly process when the operator remained idle for more than 2 s.
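A minimal sketch of this idle-time rule is given below; the 2 s threshold follows the text, while the class names and monitoring structure are illustrative assumptions.

```python
import time

# If no assembly activity is observed for more than 2 s, the operator is
# treated as fatigued and the robot is allowed to take the lead.
IDLE_THRESHOLD_S = 2.0

class FatigueMonitor:
    def __init__(self):
        self.last_activity = time.time()

    def report_activity(self, activity_class):
        """Call whenever the classifier outputs a (non-idle) worker activity."""
        if activity_class is not None:
            self.last_activity = time.time()

    def operator_fatigued(self):
        return (time.time() - self.last_activity) > IDLE_THRESHOLD_S

monitor = FatigueMonitor()
monitor.report_activity("gluing_hole_A")      # hypothetical activity label
print("robot takes lead" if monitor.operator_fatigued() else "robot keeps assisting")
```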

5. Conclusions

This study was performed to further develop a robot collaboration system introduced in a previous study [36]. Operator behavior analysis, including analysis of operator fatigue and adaptation ability, was used to improve the efficiency of HR collaboration. We analyzed the operator’s behavior by classifying their activities with a web camera rather than by relying on the material selected by the operator during the assembly process. The classified activities were then used as input to POMDP to generate operator intentions during the chair assembly process. An experiment involving 10 respondents repeating the chair assembly process 90 times was conducted to assess operator fatigue and adaptation ability, which were then implemented in POMDP. The results revealed three fatigue categories (low, normal and high) and three adaptation ability categories (low, normal and high). The experimental results (see Table 8) indicated that the completion time when applying the proposed method was 4.68 s shorter than when using the previous method [36]. This further indicated that the proposed method is more stable than the previous method [36], which had a completion time of 4.99 s. These results indicate that the proposed method improves upon the previous method thanks to its combination of POMDP and behavior analysis.

Author Contributions

Conceptualization, H.-I.L.; methodology, H.-I.L. and N.L.; software, N.L.; validation, N.L.; formal analysis, H.-I.L. and F.S.W.; investigation, H.-I.L.; resources, H.-I.L. and W.-H.C.; data curation, N.L.; writing–original draft preparation, F.S.W. and N.L.; writing–review and editing, H.-I.L. and F.S.W.; visualization, H.-I.L.; supervision, H.-I.L.; project administration, H.-I.L.; funding acquisition, H.-I.L. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Science and Technology Council under Grant: MOST 111-2218-E-027-001.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Singh, R.; Miller, T.; Newn, J.; Velloso, E.; Vetere, F.; Sonenberg, L. Combining gaze and AI planning for online human intention recognition. Artif. Intell. 2020, 284, 103275.
  2. Ishii, R.; Ahuja, C.; Nakano, Y.I.; Morency, L.-P. Impact of Personality on Nonverbal Behavior Generation; ACM: New York, NY, USA, 2020; pp. 1–8. ISBN 9781450375863.
  3. Reed, C.L.; Moody, E.J.; Mgrublian, K.; Assaad, S.; Schey, A.; McIntosh, D.N. Body matters in emotion: Restricted body movement and posture affect expression and recognition of status-related emotions. Front. Psychol. 2020, 11, 1961.
  4. Darafsh, S.; Ghidary, S.S.; Zamani, M.S. Real-time activity recognition and intention recognition using a vision-based embedded system. arXiv 2021, arXiv:2107.12744.
  5. Smith, T.; Benardos, P.; Branson, D. Assessing worker performance using dynamic cost functions in human robot collaborative tasks. Proc. Inst. Mech. Eng. Part C J. Mech. Eng. Sci. 2020, 234, 289–301.
  6. Bauer, W.; Bender, M.; Kazmaier, B.; Kemmler, B.; Rally, P. Leichtbauroboter in der Manuellen Montage*/Lightweight robots in manual assembly—Lightweight robots open up new possibilities for work design in today’s manual assembly. wt Werkstattstech. Online 2015, 105, 610–613.
  7. Galin, R.; Meshcheryakov, R.V. Human-robot interaction efficiency and human-robot collaboration. In Robotics: Industry 4.0 Issues & New Intelligent Control Paradigms; Springer: Cham, Switzerland, 2020.
  8. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human–robot collaboration in manufacturing applications: A review. Robotics 2019, 8, 100.
  9. Neto, P.; Simão, M.; Mendes, N.; Safeea, M. Gesture-based human-robot interaction for human assistance in manufacturing. Int. J. Adv. Manuf. Technol. 2019, 101, 119–135.
  10. de Carvalho, K.B.; Villa, D.K.D.; Sarcinelli-Filho, M.; Brandão, A.S. Gestures teleoperation of a heterogeneous multi-robot system. Int. J. Adv. Manuf. Technol. 2022, 118, 1999–2015.
  11. Wang, K.-J.; Santoso, D. A smart operator advice model by deep learning for motion recognition in human–robot coexisting assembly line. Int. J. Adv. Manuf. Technol. 2022, 119, 865–884.
  12. Voronin, V.; Zhdanova, M.; Semenishchev, E.; Zelenskii, A.; Cen, Y.; Agaian, S. Action recognition for the robotics and manufacturing automation using 3-D binary micro-block difference. Int. J. Adv. Manuf. Technol. 2021, 117, 2319–2330.
  13. Huang, S.; Yang, L.; Chen, W.; Tao, T.; Zhang, B. A specific perspective: Subway driver behaviour recognition using CNN and time-series diagram. IET Intell. Transp. Syst. 2021, 15, 387–395.
  14. Wang, J.; Liu, T.; Wang, X. Human hand gesture recognition with convolutional neural networks for K-12 double teachers instruction mode classroom. Infrared Phys. Technol. 2020, 111, 103464.
  15. Lin, F.-C.; Ngo, H.-H.; Dow, C.-R.; Lam, K.-H.; Le, H.L. Student behavior recognition system for the classroom environment based on skeleton pose estimation and person detection. Sensors 2021, 21, 5314.
  16. Li, S.; Yi, J.; Farha, Y.A.; Gall, J. Pose refinement graph convolutional network for skeleton-based action recognition. IEEE Robot. Autom. Lett. 2021, 6, 1028–1035.
  17. Jaouedi, N.; Perales, F.J.; Buades, J.M.; Boujnah, N.; Bouhlel, M.S. Prediction of human activities based on a new structure of skeleton features and deep learning model. Sensors 2020, 20, 4944.
  18. Sang, H.-F.; Chen, Z.-Z.; He, D.-K. Human motion prediction based on attention mechanism. Multimed. Tools Appl. 2020, 79, 5529–5544.
  19. Roda-Sanchez, L.; Garrido-Hidalgo, C.; García, A.S.; Olivares, T.; Fernández-Caballero, A. Comparison of RGB-D and IMU-based gesture recognition for human-robot interaction in remanufacturing. Int. J. Adv. Manuf. Technol. 2021, 116, 1–13.
  20. AlZoubi, A.; Al-Diri, B.; Pike, T.; Kleinhappel, T.; Dickinson, P. Pair-activity analysis from video using qualitative trajectory calculus. IEEE Trans. Circuits Syst. Video Technol. 2018, 28, 1850–1863.
  21. Villani, V.; Pini, F.; Leali, F.; Secchi, C. Survey on human–robot collaboration in industrial settings: Safety, intuitive interfaces and applications. Mechatronics 2018, 55, 248–266.
  22. Arents, J.; Abolins, V.; Judvaitis, J.; Vismanis, O.; Oraby, A.; Ozols, K. Human–robot collaboration trends and safety aspects: A systematic review. J. Sens. Actuator Netw. 2021, 10, 48.
  23. Marvel, J.A.; Norcross, R. Implementing speed and separation monitoring in collaborative robot workcells. Robot. Comput.-Integr. Manuf. 2017, 44, 144–155.
  24. Andres, C.P.C.; Hernandez, J.P.L.; Baldelomar, L.T.; Martin, C.D.F.; Cantor, J.P.S.; Poblete, J.P.; Raca, J.D.; Vicerra, R.R.P. Tri-modal speed and separation monitoring technique using static-dynamic danger field implementation. In Proceedings of the 2018 IEEE 10th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), Baguio City, Philippines, 29 November–2 December 2018.
  25. Di Cosmo, V.; Giusti, A.; Vidoni, R.; Riedl, M.; Matt, D.T. Collaborative robotics safety control application using dynamic safety zones based on the ISO/TS 15066:2016. In Advances in Service and Industrial Robotics; Springer: Cham, Switzerland, 2019; pp. 430–437.
  26. Byner, C.; Matthias, B.; Ding, H. Dynamic speed and separation monitoring for collaborative robot applications—Concepts and performance. Robot. Comput.-Integr. Manuf. 2019, 58, 239–252.
  27. Campomaggiore, A.; Costanzo, M.; Lettera, G.; Natale, C. A fuzzy inference approach to control robot speed in human-robot shared workspaces. In Proceedings of the 16th International Conference on Informatics in Control, Automation and Robotics, Prague, Czech Republic, 29–31 July 2019.
  28. Himmelsbach, U.B.; Wendt, T.M.; Hangst, N.; Gawron, P.; Stiglmeier, L. Human–machine differentiation in speed and separation monitoring for improved efficiency in human–robot collaboration. Sensors 2021, 21, 7144.
  29. Kim, E.; Yamada, Y.; Okamoto, S.; Sennin, M.; Kito, H. Considerations of potential runaway motion and physical interaction for speed and separation monitoring. Robot. Comput.-Integr. Manuf. 2021, 67, 102034.
  30. Pupa, A.; Arrfou, M.; Andreoni, G.; Secchi, C. A safety-aware kinodynamic architecture for human-robot collaboration. arXiv 2021, arXiv:2103.01818.
  31. Karagiannis, P.; Kousi, N.; Michalos, G.; Dimoulas, K.; Mparis, K.; Dimosthenopoulos, D.; Tokçalar, Ö.; Guasch, T.; Gerio, G.P.; Makris, S. Adaptive speed and separation monitoring based on switching of safety zones for effective human robot collaboration. Robot. Comput.-Integr. Manuf. 2022, 77, 102361.
  32. Hartmann, D.; Schwenck, C. Emotion processing in children with conduct problems and callous-unemotional traits: An investigation of speed, accuracy, and attention. Child Psychiatry Hum. Dev. 2020, 51, 721–733.
  33. Maior, C.B.S.; Moura, M.J.d.C.; Santana, J.M.M.; Lins, I.D. Real-time classification for autonomous drowsiness detection using eye aspect ratio. Expert Syst. Appl. 2020, 158, 113505.
  34. Stapel, J.; Hassnaoui, M.E.; Happee, R. Measuring driver perception: Combining eye-tracking and automated road scene perception. Hum. Factors J. Hum. Factors Ergon. Soc. 2022, 64, 714–731.
  35. Martín, J.M.; del Campo, V.L.; Fernandez-Arguelles, L.J.M. Design and development of a low-cost mask-type eye tracker to collect quality fixation measurements in the sport domain. Proc. Inst. Mech. Eng. Part P J. Sport. Eng. Technol. 2019, 233, 116–125.
  36. Cramer, M.; Kellens, K.; Demeester, E. Probabilistic decision model for adaptive task planning in human-robot collaborative assembly based on designer and operator intents. IEEE Robot. Autom. Lett. 2021, 6, 7325–7332.
  37. Zhang, Z.; Peng, G.; Wang, W.; Chen, Y.; Jia, Y.; Liu, S. Prediction-based human-robot collaboration in assembly tasks using a learning from demonstration model. Sensors 2022, 22, 4279.
  38. Lin, C.-H.; Wang, K.-J.; Tadesse, A.A.; Woldegiorgis, B.H. Human-robot collaboration empowered by hidden semi-Markov model for operator behaviour prediction in a smart assembly system. J. Manuf. Syst. 2022, 62, 317–333.
  39. Digiesi, S.; Kock, A.A.; Mummolo, G.; Rooda, J.E. The effect of dynamic worker behavior on flow line performance. Int. J. Prod. Econ. 2009, 120, 368–377.
  40. Renotte, N. Sign Language Detection Using Action Recognition with Python—LSTM Deep Learning Model. 2021. Available online: https://www.youtube.com/watch?v=doDUihpj6ro (accessed on 27 June 2022).
  41. Denis, A.; Joao, F. Analysis of the Methods Time Measurement (MTM) Methodology through Its Application in Manufacturing Companies; ResearchGate: Middlesbrough, UK, 2009. Available online: https://www.researchgate.net/profile/Joao-Ferreira-2/publication/273508544_Analysis_of_the_Methods_Time_Measurement_MTM_Methodology_through_its_Application_in_Manufacturing_Companies/links/5504b3590cf231de07744412/Analysis-of-the-Methods-Time-Measurement-MTM-Methodology-through-its-Application-in-Manufacturing-Companies.pdf?origin=publication_detail (accessed on 9 October 2022).
Figure 1. Types of Human–Robot Interaction [6].
Figure 2. Human–Robot Collaboration Case Study: Wood-Chair Assembly.
Figure 3. A Breakdown of the POMDP Framework with Behavior Analysis.
Figure 4. Experimental Design of the Previous Work [36].
Figure 5. A Framework System of POMDP and Behavior Analysis.
Figure 6. Experiment Workpieces.
Figure 7. Assembly Intention Scenario.
Figure 8. Robot Actions Depending on the Human Worker's Intention.
Figure 9. A Framework of POMDP with Behavior Analysis.
Figure 10. Facial and Skeletal Data Points in MediaPipe.
Figure 11. LSTM Architecture.
Figure 12. 3D Visualization.
Figure 13. Collaboration Activity.
Figure 14. Trajectory Plan.
Figure 15. Model Performance.
Figure 16. First (a) and Second (b) Assembly Scenarios.
Figure 17. Generation of an ID Map.
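Figures 10 and 11 summarize the perception pipeline: facial and skeletal keypoints extracted with MediaPipe are fed to an LSTM classifier, and reference [40] demonstrates a similar MediaPipe-plus-LSTM pipeline. The sketch below shows one common way to wire these two pieces together; the 30-frame window, layer sizes, and three output classes are placeholders, not the configuration reported in the paper.

```python
# Illustrative sketch only: MediaPipe Holistic keypoints feeding a Keras LSTM.
# Window length, layer sizes, and number of output classes are assumptions.
import numpy as np
import mediapipe as mp
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

mp_holistic = mp.solutions.holistic  # provides pose (33 pts) and face (468 pts) landmarks

def extract_keypoints(results):
    """Flatten MediaPipe Holistic results into one feature vector per frame."""
    pose = (np.array([[p.x, p.y, p.z, p.visibility] for p in results.pose_landmarks.landmark]).flatten()
            if results.pose_landmarks else np.zeros(33 * 4))
    face = (np.array([[p.x, p.y, p.z] for p in results.face_landmarks.landmark]).flatten()
            if results.face_landmarks else np.zeros(468 * 3))
    return np.concatenate([pose, face])  # 132 + 1404 = 1536 features per frame

SEQUENCE_LENGTH = 30                     # assumed sliding-window length (frames)
NUM_FEATURES = 33 * 4 + 468 * 3
NUM_CLASSES = 3                          # placeholder number of intention classes

model = Sequential([
    LSTM(64, return_sequences=True, activation="relu",
         input_shape=(SEQUENCE_LENGTH, NUM_FEATURES)),
    LSTM(128, activation="relu"),
    Dense(64, activation="relu"),
    Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
```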
Table 1. Comparison with other models.
Method | Advantages | Weakness
Original HR-POMDP [36] | Acts optimally under a given belief about the operator's intention | Robot action is not responsive; cooperative mode; simulated work
HSMM [38] | High training and verification flexibility; fewer training data | Short-term prediction; complex tuning process; simulated work
Enhanced HR ConvLSTM model [37] | Long-term prediction; good adaptability settings; personalized application | Requires large training data; black-box model
POMDP Behavior Analysis (proposed) | Considers human factors (fatigue and adaptability); flexible rules | Requires a large amount of training data
Table 2. Main Comparison of Approaches.
Parameter | Proposed Method | Previous Method [36]
Collaboration method | Using a robot | Simulated using a human
Object | Wood-chair assembly | Simulated Bourjault's ballpoint pen
POMDP approach | Operator fatigue and adaptation ability | Operator intention
Table 3. Robot's Decision Speed.
Operator's Adaptation Ability | Robot's Speed
Good adaptation ability | High robot speed
Normal adaptation ability | Normal robot speed
Bad adaptation ability | Low robot speed
Table 4. Robot's Decision Speed Depending on the Operator's Adaptation Ability.
Operator's Adaptation Ability | Robot's Speed | Robot's Action
Good adaptation ability | High robot speed | Hand the stick to the operator
Normal adaptation ability | Normal robot speed | Hand the stick to the operator
Bad adaptation ability | Low robot speed | Hand the stick to the operator
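Read together, Tables 3 and 4 define a simple mapping from the estimated adaptation-ability class to the speed at which the robot hands the stick to the operator. A minimal sketch of that rule follows; the numerical speed values are illustrative placeholders, since the commanded velocities are not listed in the tables.

```python
# Sketch of the speed-selection rule in Tables 3 and 4.
# Speed values (m/s) are illustrative placeholders, not the velocities used in the experiments.
ADAPTATION_TO_SPEED = {
    "good":   0.30,   # high robot speed
    "normal": 0.20,   # normal robot speed
    "bad":    0.10,   # low robot speed
}

def select_robot_speed(adaptation_class: str) -> float:
    """Return the commanded speed for the 'hand the stick to the operator' action."""
    return ADAPTATION_TO_SPEED.get(adaptation_class, ADAPTATION_TO_SPEED["normal"])
```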
Table 5. Reward in the proposed system.
Reward | Observation Result
+10 | Robot performs an action and the human is able to work together with the robot
−2 | Normal robot speed
−50 | Low robot speed
0 | Robot idle
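Table 5 can be implemented as a plain lookup from the observed outcome to the scalar reward used by the POMDP solver. The sketch below paraphrases the observation labels; any observation not listed is assumed to map to the idle reward of 0.

```python
# Sketch of the reward lookup in Table 5 (observation labels paraphrased).
REWARDS = {
    "collaboration_success": 10,   # robot acts and the human can work in parallel
    "normal_robot_speed":    -2,
    "low_robot_speed":       -50,
    "robot_idle":            0,
}

def reward(observation: str) -> int:
    # Unlisted observations are treated as idle (reward 0) in this sketch.
    return REWARDS.get(observation, 0)
```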
Table 6. Details of the experimental setup.
Details | Role | Specification
Robot | Collaborator | Kinova JACO 3 robot
Camera | Feature extractor | C390e, 1920 × 1080 pixels, 30 FPS
Person | Operator | Understands the assembly flow
Robot API | Initial motion plan | Kinova SDK
Table 7. Assembly Scenarios.
Activity 1 | Activity 2 | Assembly Scenario
Gluing hole 1 | Gluing hole 4 | 1
Gluing hole 2 | Gluing hole 4 | 1
Gluing hole 3 | Gluing hole 4 | 1
Gluing hole 4 | Gluing hole 1 | 1
Gluing hole 4 | Gluing hole 2 | 1
Gluing hole 4 | Gluing hole 3 | 1
Gluing hole 1 | Gluing hole 3 | 1
Gluing hole 2 | Gluing hole 3 | 1
Gluing hole 4 | Gluing hole 3 | 1
Gluing hole 3 | Gluing hole 1 | 1
Gluing hole 3 | Gluing hole 2 | 1
Gluing hole 3 | Gluing hole 4 | 1
Gluing hole 1 | Gluing hole 2 | 1
Gluing hole 3 | Gluing hole 2 | 1
Gluing hole 4 | Gluing hole 2 | 1
Gluing hole 2 | Gluing hole 1 | 1
Gluing hole 2 | Gluing hole 3 | 1
Gluing hole 2 | Gluing hole 4 | 1
Gluing hole 2 | Gluing hole 1 | 1
Gluing hole 3 | Gluing hole 1 | 1
Gluing hole 4 | Gluing hole 1 | 1
Gluing hole 1 | Gluing hole 2 | 1
Gluing hole 1 | Gluing hole 3 | 1
Gluing hole 1 | Gluing hole 4 | 1
Gluing hole 1 | Place stick hole 1 | 2
Gluing hole 2 | Place stick hole 2 | 2
Gluing hole 3 | Place stick hole 3 | 2
Gluing hole 4 | Place stick hole 4 | 2
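The pattern in Table 7 is that any pair of gluing activities belongs to assembly scenario 1, while a gluing activity followed by placing a stick in a hole belongs to scenario 2. A small helper encoding this rule is sketched below; the function name and string labels are illustrative, not identifiers from the paper's implementation.

```python
# Sketch of the scenario assignment implied by Table 7.
def assembly_scenario(activity_1: str, activity_2: str) -> int:
    """Return 1 when both activities are gluing, 2 when the second places a stick."""
    if activity_1.startswith("Gluing") and activity_2.startswith("Gluing"):
        return 1
    if activity_1.startswith("Gluing") and activity_2.startswith("Place stick"):
        return 2
    raise ValueError(f"Activity pair not listed in Table 7: {activity_1!r}, {activity_2!r}")

print(assembly_scenario("Gluing hole 1", "Gluing hole 4"))       # -> 1
print(assembly_scenario("Gluing hole 2", "Place stick hole 2"))  # -> 2
```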
Table 8. Belief Update Performance.
Intention | Step 0 | Step 1 | Step 2 | Step 3 | Step 4 | Step 5 | Step 6 | Step 7 | Step 8
Assembly 1 | 169 | 145 | 121 | 97 | 73 | 49 | 25 | 1 | 0
Generated 1 | 169 | 145 | 121 | 97 | 73 | 49 | 25 | 1 | 0
Assembly 2 | 169 | 145 | 121 | 97 | 73 | 49 | 25 | 1 | 0
Generated 2 | 169 | 145 | 121 | 97 | 73 | 49 | 25 | 1 | 0
Assembly 3 | 190 | 166 | 142 | 118 | 94 | 70 | 46 | 22 | 0
Generated 3 | 189 | 165 | 141 | 117 | 93 | 69 | 45 | 21 | 0
Assembly 4 | 190 | 166 | 142 | 118 | 94 | 70 | 46 | 22 | 0
Generated 4 | 190 | 166 | 142 | 118 | 94 | 70 | 46 | 22 | 0
Table 9. MTM-UAS Calculation.
Activity | Unit Time | Quantity | Total Time
Gluing a hole | 2 s | 4 holes | 8 s
Place the chair leg | 3 s | 4 legs | 12 s
Idle (prepare next iteration) | 3 s | 1 | 3 s
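Summing the Total Time column of Table 9 gives the standard time for one assembly cycle on the human side:

T_cycle = (2 s × 4) + (3 s × 4) + (3 s × 1) = 8 s + 12 s + 3 s = 23 s.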
Table 10. Adaptation Ability Data.
Assembly Iteration | Res 1 | Res 2 | Res 3 | Res 4 | Res 5 | Res 6 | Res 7 | Res 8 | Res 9 | Res 10
1 | 17 | 18 | 13 | 9 | 22 | 11 | 9 | 11 | 10 | 17
2 | 19 | 17 | 11 | 8 | 19 | 9 | 9 | 11 | 10 | 19
3 | 13 | 8 | 12 | 8 | 18 | 9 | 9 | 10 | 10 | 12
4 | 10 | 6 | 11 | 8 | 17 | 7 | 8 | 10 | 9 | 11
5 | 10 | 7 | 10 | 7 | 16 | 6 | 7 | 9 | 9 | 11
6 | 10 | 7 | 9 | 7 | 16 | 7 | 7 | 9 | 8 | 11
7 | 10 | 7 | 9 | 7 | 16 | 6 | 7 | 9 | 8 | 9
8 | 9 | 8 | 9 | 7 | 13 | 5 | 6 | 9 | 8 | 9
9 | 9 | 7 | 9 | 7 | 16 | 6 | 6 | 8 | 7 | 7
10 | 9 | 7 | 8 | 7 | 16 | 6 | 6 | 8 | 7 | 7
b | −0.33 | −0.51 | −0.18 | −0.12 | −0.17 | −0.30 | −0.20 | −0.14 | −0.15 | −0.35
R² | 0.78 | 0.77 | 0.86 | 0.88 | 0.83 | 0.90 | 0.79 | 0.86 | 0.77 | 0.79
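The b and R² rows of Table 10 are consistent with fitting each respondent's column to a power-law learning curve T_n = T_1 · n^b, with b estimated by least squares on log-transformed data and R² evaluated on the original time scale. Assuming that model, the sketch below reproduces values close to the b = −0.33 and R² = 0.78 reported for Res 1; the exact fitting procedure used in the paper is not restated here.

```python
# Sketch: power-law learning-curve fit T_n = T_1 * n**b (assumed model for Table 10).
import numpy as np

times = np.array([17, 19, 13, 10, 10, 10, 10, 9, 9, 9], dtype=float)  # Res 1 column
n = np.arange(1, len(times) + 1)

# Least-squares fit of log(T) = log(T1) + b * log(n)
b, log_T1 = np.polyfit(np.log(n), np.log(times), 1)

# Coefficient of determination evaluated on the original time scale
pred = np.exp(log_T1) * n ** b
r2 = 1.0 - np.sum((times - pred) ** 2) / np.sum((times - times.mean()) ** 2)

print(f"b = {b:.2f}, R^2 = {r2:.2f}")  # close to the -0.33 and 0.78 reported for Res 1
```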
Table 11. Duration of Each Cycle in the Chair Assembly Process.
Statistic Name | Proposed Method Time | Previous Method Time [36]
Assembly cycles (count) | 50 | 50
Mean (s) | 31 | 35.68
Standard deviation (s) | 4.99 | 11.30
Variance (s²) | 24.44 | 125.33
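Taking the means in Table 11 at face value, the proposed method saves 35.68 s − 31 s = 4.68 s per assembly cycle on average, and its lower standard deviation (4.99 s versus 11.30 s) indicates a noticeably more consistent cycle time.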
Table 12. Robot's Actions in Each Assembly Cycle (Cycles 1–39).
Cycle | Proposed Method Robot's Action | Robot's Action in [36]
1 | Idle | Idle
2–12 | Place stick at holes D, C, A, and B at low speed | Idle
13–25 | Place stick at holes D, C, A, and B at low speed | Place stick at holes A and D
26–28 | Lead the assembly | Place stick at holes A and B
29–35 | Place stick at holes D, C, A, and B at low speed | Place stick at holes A and B
36–38 | Lead the assembly | Place stick at holes A and B
39 | Lead the assembly | Idle
Table 13. Robot's Actions in Each Assembly Cycle (Cycles 40–50).
Cycle | Proposed Method Robot's Action | Robot's Action in [36]
40–45 | Place stick at holes D, C, A, and B at low speed | Place stick at holes A and B
46 | Lead the assembly | Place stick at holes A and B
47 | Lead the assembly | Idle
48 | Lead the assembly | Place stick at holes A and B
49 | Lead the assembly | Place stick at holes A and B
50 | Lead the assembly | Idle
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
