Article

Dynamic Affect-Based Motion Planning of a Humanoid Robot for Human-Robot Collaborative Assembly in Manufacturing

by
S. M. Mizanoor Rahman
Mechanical Engineering, The Pennsylvania State University, 120 Ridge View Drive, Dunmore, PA 18512, USA
Electronics 2024, 13(6), 1044; https://doi.org/10.3390/electronics13061044
Submission received: 31 December 2023 / Revised: 14 February 2024 / Accepted: 20 February 2024 / Published: 11 March 2024
(This article belongs to the Special Issue Autonomous and Intelligent Robotics)

Abstract

The objective was to investigate the impact of a robot’s dynamic affective expressions in task-related scenarios on human–robot collaboration (HRC) and performance in human–robot collaborative assembly tasks in flexible manufacturing. A human–robot hybrid cell was developed in which a human co-worker and a robot collaborated to assemble a few parts into a final product. The collaborative robot was a humanoid manufacturing robot able to display on its face the affective states arising from changes in task scenarios. The assembly task was divided into several subtasks, and based on an optimization strategy, the subtasks were optimally allocated to the human and the robot. A computational model of the robot’s affective states was derived from human affective models following a biomimetic approach, and an affect-based motion planning strategy was proposed that enabled the robot to adjust its motions and behaviors to task situations and to communicate (inform) the situations to the human co-worker through affective expressions. The HRC and the assembly performance for the affect-based motion planning were experimentally evaluated based on a comprehensive evaluation scheme and were compared with two alternative conditions: (i) motion planning that did not display affective states, and (ii) motion planning that displayed text messages instead of affective states to communicate the situations to the human co-worker. The results clearly showed that the dynamic affect-based motion planning produced significantly better HRC and assembly performance than motion planning with no affective display or with text message display. The results encouraged employing manufacturing robots with dynamic affective expressions to collaborate with humans in flexible assembly in manufacturing to improve HRC and assembly performance.

1. Introduction

Manufacturing industries face severe challenges and global competition due to the requirements of high productivity and quality, cost-effectiveness, and skilled workforces [1,2]. Based on our case study experiences in a few renowned manufacturing companies, we realized that assembly in manufacturing (e.g., automotive, aerospace, electronics, furniture, etc.) is one of the stages where extensive manpower is involved for long hours, which increases production time and raises labor and utility costs. Manual assembly is tedious, burdensome to workers, inefficient, and can impact workers’ health and safety [3]. Hence, automation in assembly should be prioritized. However, industrial automation is usually expensive and inflexible [4]. We believe that recent advancements in lightweight and low-cost flexible industrial robots, e.g., Baxter [5], Kinova [6], and KUKA [7], can provide flexibility and ensure cost-effectiveness in assembly through proper collaboration between humans and robots, which can also greatly improve assembly performance in terms of productivity, quality, and safety [8]. Motivated by these prospects, Human–Robot Collaboration (HRC) in assembly has become an active area of research, and a significant number of contributions have been made toward HRC in assembly [9,10,11,12,13,14,15,16,17,18,19,20,21]. However, in our view, the following two areas seem to be very important for HRC in assembly but have not received much attention yet:
First, we believe that anthropomorphic manufacturing robots with human-like affects can be used in HRC in assembly to make collaboration more natural and intuitive [22]. This has already been observed in Human-Human Collaboration (HHC). Affective expressions of humans in a human-human collaborative task help one human perceive the mental states of another human, convey messages non-verbally between the humans relevant to existing situations, and thus help decide appropriate responses, actions, interactions, and behaviors of the humans [23]. Affective expressions of one human can also stimulate the attention and cognition of another collaborating human [24]. As in HHC, a robot’s affective expressions can contribute to HRC in assembly in the following ways:
  • They can be incorporated into robot motion planning to help the robot communicate its responses to the collaborating human in different situations, which can enhance the transparency of the robot to the human.
  • They can serve as a nonverbal visual guide to the human, or a visual interface between the robot and the human, that helps the human perform the assembly while keeping pace with the robot, especially in a noisy industrial environment.
  • They can have cognitive effects on the human that impact human perceptions, decisions, and actions [24].
  • Together, these effects can improve human–robot interaction (HRI) and assembly performance.
There are other alternative modes by which a robot can communicate situations/contexts to human co-workers, e.g., displays of specific colors and text messages, speech/voice, gestures, etc. Displays of colors (e.g., red, yellow, green signs) and text messages (messages displayed on robot faces) seem to be less natural and intuitive; their cognitive effects may be low, and thus they may not convey situations/contexts effectively [22,24]. In addition, color signs may be ambiguous if the robot needs to communicate several situations through several colors in a short period of time, and the workers also need to remember the meaning of each color sign correctly. Text message display is also restricted by linguistic barriers. Speech/voice-based communication of the robot with humans to express situations is likewise restricted by linguistic barriers and by the noise experienced in an industrial environment. The gestures of robots as a means of communicating situations to humans can be restricted by the robot’s mechanical limitations in producing appropriate human-like, non-ambiguous gestures. The robot’s affective expressions can be combined with these alternative modes of communicating situations/contexts to humans, but the feasibility and effectiveness of such a combination have not been investigated yet [22,24,25].
The state-of-the-art literature shows a significant number of robots that are enriched with affective abilities, such as Kismet [25], iCAT [26], Flobi [27], ERWIN [28], Kobian [29], NAO [30], Kamin [31], Ifbot [32], WE-3R III [33], etc., and more recently, the realistic robots Robokind [34] and Geminoid [35]. However, these robots were not proposed for HRC in assembly in manufacturing, and their suitability for HRC in assembly has not been tested. Instead, state-of-the-art robots with affective abilities are used mainly for driver guidance in transportation [31], educational support [32], social services [36,37], health care training [38], etc. Thus, applications of robots with affective expressions to HRC in assembly have not received much attention yet, except for some preliminary initiatives [39,40]. Again, as with humans, the robot’s affective expressions should be dynamic (i.e., the affect should change autonomously with changes in situations) [39]. However, only a few of the existing robots with affective abilities can adjust their affective expressions in dynamic contexts (e.g., [31,32,33,38]), and the adjustability is not robust, autonomous, or versatile. Recently, the manufacturing robot Baxter, with a face screen, has opened the possibility of using the robot’s affective states to foster HRC and performance in assembly. However, the existing capability of Baxter’s affective expressions is very limited (e.g., only eye blink, gaze, and attention changes); the affective expressions do not change with situations/contexts, and thus the potential contribution of the robot’s affective ability to HRC and assembly performance is still unclear [41].
Second, we realized that a comprehensive evaluation scheme for HRC in a human–robot hybrid cell (a dedicated production unit or workspace where a human and a robot can work simultaneously side by side, without being separated by physical cages, with the objectives of increasing efficiency and safety and reducing costs) is necessary to evaluate, monitor, and benchmark HRI and assembly performance. However, such an evaluation scheme has not received much attention yet. At present, the formation of the human–robot hybrid cell for HRC assembly is being prioritized due to its advantages, such as convenient task allocation, easy resource mobilization, and ease of communication and supervision [11,16]. A few detached initiatives have been taken for the evaluation of HRC in assembly in human–robot hybrid cells; for example, safety and efficiency are evaluated in [16], and safety and cognitive workload are analyzed in [11]. Nonetheless, the scope of those evaluations is limited, and the evaluation methods and metrics are not fully appropriate. It seems that there is still scope for, and a need to propose, a comprehensive evaluation scheme for HRC in assembly.
Based on the above background and the limitations of the state-of-the-art HRC in assembly in manufacturing, we have determined three innovative objectives for this article:
  • Investigate the effects of the dynamic affective expressions of a robot collaborating with a human in a hybrid cell for assembly tasks on HRI and task performance.
  • Propose a comprehensive evaluation scheme for HRI and assembly performance for the collaborative task.
  • Compare the HRI and performance for the robot enriched with affective expressions to that for the cases when the robot communicates situations to the human through text messages or the robot communicates without showing any expression. Such comparisons can help understand the contributions of the robot’s affective expressions to HRI and assembly performance clearly in comparison with other alternatives.
We presented similar initiatives previously (e.g., [39,40]). However, those initiatives were preliminary because the dynamic affective expressions were not compared with other possible non-affective expressions, such as text message displays. In [39], the evaluation and analysis of HRI and assembly performance for different expression conditions were limited. In this article, we extended the contributions of [39,40], and to do so, we compared dynamic affective expressions with text message displays and no display of affective states and enriched the methods of evaluation and the analysis of HRI and performance. In addition, we present different automatic detection systems in detail and study human interactions with the detection systems for the collaborative task.

2. Case Studies and the Selected Assembly Task

We visited four different manufacturing companies to conduct case studies. Based on the case studies, we identified that center console and front fender assemblies, automation cell design, especially the layout and control element implementation in automation integration for automotive manufacturing, and assembly of wings, especially the wing-to-fuselage joint in aerospace manufacturing could be the cases where HRC could contribute significantly. This article is based on the center console assembly case of automotive manufacturing. Three main parts, i.e., the faceplate, I-drive, and switch rows, were assembled to produce a final center console product (see Figure 1).
As Figure 2 shows, in current practices in industries, these parts are stored on shelves. A human worker picks the parts from the shelves sequentially, places them on a table, and assembles the parts manually using screws and a screwdriver. We argued that this manual assembly could be performed through HRC, which we presented in this article in detail. However, the presented case is representative of all flexible light assemblies in manufacturing that can be performed through HRC, and the research findings for this case should apply to all relevant assemblies without loss of generality.

3. Formation of the Hybrid Cell and the Allocation of Subtasks

3.1. The Hybrid Cell

We developed a one human-one robot hybrid cell as shown in Figure 3 [16], where a human and a robot (Baxter, research version [42]) collaborated with each other to accomplish the selected assembly task. The robot had two arms, an infrared (IR) rangefinder sensor at the wrist, vision cameras at the head and wrists, and a face screen at the head that could be used to display its affective states, assembly instructions, and text messages if any [16,42]. As Figure 3 shows, in the hybrid cell, the parts were initially put at “A” and “C”, then the parts were sequentially manipulated to “E” (a position near the human), assembled at “B”, and then dispatched to “D”. However, in real industrial applications, the faceplate could arrive at “A” and the switch rows and I-drive could arrive at “C” just-in-time (JIT) via belt conveyors, and the finished (assembled) products could be dispatched from “D” to another location via another belt conveyor or stored on a shelf being manipulated by the right arm of the robot [39].

3.2. The Allocation of Subtasks

The entire assembly task was divided into several subtasks, which were assigned to the human and the robot based on a subtask allocation optimization strategy [43,44]. Subtask allocation made it possible for the human and the robot to work in parallel so that no agent (human or robot) remained idle. It also impacted the overall assembly performance [43]. There is still debate on which task types might be better suited for human operators and which ones might be suited for robots. The tasks might have varying characteristics, e.g., pure handling (picking a faceplate), handling and cognition (picking the correct switch rows), cognition only (checking), etc. The state-of-the-art task allocation strategies are of the feedback type [43], i.e., the optimum subtask allocation is determined after the assembly is performed. This approach is reliable as it takes the information of the actual assembly into account. However, it is not practical in industrial applications because the optimum subtask allocation is needed at the beginning of the assembly, when actual assembly information is not yet available [44]. This problem motivates the feedforward optimization of subtask allocation, where the optimum subtask allocation is determined before the assembly starts based on the potential feasibility of subtask allocations instead of actual assembly information [44]. Nonetheless, a possible drawback of the feedforward strategy is that the optimization results may not be very reliable, as the optimization is conducted based on feasibility information instead of actual assembly performance. However, feedforward optimization of subtask allocation can prove effective if the feasibility is assessed realistically [44].
The assembly task in Figure 3 was divided into several subtasks, and the subtask execution sequence was determined based on our knowledge of the task, as shown in Table 1. As in Table 1, the total number of subtasks for the selected assembly was $n = 7$. Hence, the total number of possible allocations of the subtasks was $N = 2^n = 128$. Based on our experience, we identified four assessment criteria relevant to the selected assembly task. Two criteria reflected the potential capabilities of the robot to perform the assigned subtasks for a possible allocation, and two criteria reflected the requirements for the human to collaborate with the robot in the hybrid cell. Those four criteria constituted a cost function. The four criteria were as follows:
  • Level of potential incapability of the robot to generate the required motion, taking its available DOFs and the existing mechanical constraints into account, to perform a subtask assigned to it in an allocation.
  • Level of potential insufficiency of skills and precision of the robot to perform a subtask assigned to it in an allocation.
  • Level of potential fatigue of the human (if the human continues the assigned subtask for a long time during the actual assembly) to perform a subtask assigned to the human in an allocation. We assumed low human fatigue as a requirement of the hybrid cell.
  • Level of the requirement for potential movement of the human from his/her sitting position to perform a subtask assigned to the human in an allocation. We considered it a requirement of the hybrid cell that the human should not physically move from his/her sitting/standing position while collaborating with the robot.
We applied some feasibility constraints to phase out the most infeasible allocations from the total N allocations. Then, we assessed the infeasibility levels of the remaining allocations using a Likert scale based on the four criteria, considering a set of constraints. The least infeasible allocation obtained through the cost function was considered the optimum subtask allocation, as shown in Table 1. The table shows that the subtasks that required dexterous skills were allocated to the human and the repetitive subtasks were allocated to the robot. We adjusted the robot’s working speed to the standard speed of the human to avoid idle time for the robot and the human in the ideal case. The human and the robot collaborated to perform the subtasks. Here, collaboration means that the human and the robot performed the subtasks assigned to them sequentially and in parallel, keeping pace with each other, to complete the assembly.
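To make the feedforward allocation procedure concrete, the following is a minimal sketch of how the $2^n$ candidate allocations could be enumerated, screened for feasibility, and scored with a Likert-style cost function over the four criteria. The subtask names, criterion scores, and feasibility constraint are illustrative assumptions, not the values used in the study.

```python
from itertools import product

# Feedforward subtask allocation sketch. Subtask names follow Table 1 loosely;
# the Likert-style scores (1 = very feasible, 7 = very infeasible) and the
# feasibility constraint are illustrative assumptions, not the study's values.
SUBTASKS = ["pick_faceplate", "place_faceplate", "attach_switch_rows",
            "pick_i_drive", "attach_i_drive", "fasten_screws", "dispatch_product"]

# Per-subtask criterion scores: (robot motion incapability, robot skill
# insufficiency, human fatigue, human movement required)
SCORES = {
    "pick_faceplate":     (2, 2, 4, 5),
    "place_faceplate":    (2, 2, 3, 5),
    "attach_switch_rows": (6, 6, 2, 1),
    "pick_i_drive":       (2, 2, 4, 5),
    "attach_i_drive":     (6, 6, 2, 1),
    "fasten_screws":      (7, 7, 3, 1),
    "dispatch_product":   (2, 2, 3, 6),
}

def cost(allocation):
    """Total infeasibility of one allocation (1 = subtask to robot, 0 = to human)."""
    total = 0
    for task, to_robot in zip(SUBTASKS, allocation):
        incapable, unskilled, fatigue, movement = SCORES[task]
        # robot-assigned subtasks are penalized by the robot criteria,
        # human-assigned subtasks by the human criteria
        total += (incapable + unskilled) if to_robot else (fatigue + movement)
    return total

def feasible(allocation):
    """Phase out clearly infeasible allocations (e.g., one agent doing everything)."""
    return 0 < sum(allocation) < len(allocation)

if __name__ == "__main__":
    candidates = [a for a in product((0, 1), repeat=len(SUBTASKS)) if feasible(a)]
    best = min(candidates, key=cost)  # least infeasible allocation = optimum
    for task, to_robot in zip(SUBTASKS, best):
        print(f"{task:20s} -> {'robot' if to_robot else 'human'}")
```

With scores like these, the dexterous fastening and attaching subtasks naturally fall to the human and the repetitive pick-and-place subtasks to the robot, mirroring the allocation reported in Table 1.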

3.3. The Detection Systems

We developed algorithms for three detection systems of the robot as follows, using different sensors of the robot:
Part correctness detection: to ensure the quality of the assembly, the infrared (IR) range finder located at the robot’s wrist was used to detect whether the robot was picking the correct part (IR range finder determined the correctness of the input part by comparing the part’s height to the reference (correct) height [39]).
Potential collision detection: to ensure the safety of the hybrid cell, the robot’s head camera was used to detect any potential collision between the robot arm and the human body if the human body parts (e.g., hand, head) were very close to the robot arm during its movement, especially when placing the parts at “E” (the vision system detected the potential collision by comparing the distance between the robot arm and the human body to a minimum reference distance).
Part orientation detection: to ensure the quality of the assembly, the robot’s head camera was used to detect if the human attempted to assemble the parts incorrectly, e.g., with the wrong orientation (the vision system detected the wrong orientation by comparing the image of the attempted orientation to that of the reference/correct orientation).
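The following is a minimal sketch of the three detection checks described above, assuming the measured inputs (IR-measured part height, arm-to-body distance from the head camera, and an orientation template-match score) are already provided by the robot’s drivers and vision pipeline; the reference heights, tolerance, and thresholds are illustrative assumptions, not the calibrated values used in the study.

```python
# Sketches of the three detection checks. The measured inputs (IR height,
# arm-to-body distance, template-match score) are assumed to come from the
# robot's drivers and vision pipeline; the reference values, tolerance, and
# thresholds below are illustrative, not the calibrated values of the study.

REFERENCE_HEIGHTS_MM = {"faceplate": 40.0, "i_drive": 55.0, "switch_rows": 30.0}
HEIGHT_TOLERANCE_MM = 3.0           # assumed tolerance on part height
MIN_SAFE_DISTANCE_M = 0.25          # assumed minimum arm-to-body distance
ORIENTATION_MATCH_THRESHOLD = 0.8   # assumed template-match score cut-off

def part_is_correct(expected_part, measured_height_mm):
    """Part correctness: compare the IR-measured height to the reference height."""
    reference = REFERENCE_HEIGHTS_MM[expected_part]
    return abs(measured_height_mm - reference) <= HEIGHT_TOLERANCE_MM

def collision_risk(arm_to_body_distance_m):
    """Potential collision: a human body part is closer to the arm than allowed."""
    return arm_to_body_distance_m < MIN_SAFE_DISTANCE_M

def orientation_is_wrong(match_score):
    """Part orientation: the attempted orientation is compared to the reference
    image (e.g., via template matching); a low match score flags a wrong one."""
    return match_score < ORIENTATION_MATCH_THRESHOLD

print(part_is_correct("i_drive", measured_height_mm=54.2))  # True
print(collision_risk(arm_to_body_distance_m=0.18))          # True -> hold motion
```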
The collaboration between the human and the robot in the hybrid cell (Figure 3) for the subtasks in Table 1 was necessary because (i) the correct parts were manipulated to “E” (near the human) by the robot just in time from “C” that was out of the reach of the human, (ii) the human did not need to move to fetch the parts and to think of their correctness and timely arrival as it was done by the robot, and thus the human could devote his/her time and concentration to attaching the parts only, which could enhance the assembly productivity and quality, and (iii) the robot and the human worked in parallel, which could reduce the assembly cycle time and improve the productivity.

4. Determination of Robot’s Affective Expressions

In this section, we determined what situations the robot usually experienced during collaboration with the human for the selected assembly task, how the robot communicated the situations to its human collaborator through its affective expressions, and how those expressions impacted HRI and assembly performance. We determined the robot’s affective states using the following two methods: (i) taking inspiration from a human-human collaborative task, and (ii) developing a computational model of affective states based on the robot’s affect intensities for different situations. In fact, one method could be complementary to another, and two methods were used to cross-check the results to ensure reliability.

4.1. Inspiration from a Human-Human Collaborative Task: The Biomimetic Approach

We investigated a similar representative human-human collaborative task, as shown in Figure 4. During the collaboration, one human (the first human) manipulated a few assembly parts sequentially for another human (the second human) in a shared space, and the second human assembled the parts together and fastened them using screws and a screwdriver to produce a complete product. The humans wore IMU devices at their wrists so that the manipulation and assembly trajectories for each part, performed by the first and the second human, respectively, could be recorded in real time. We videotaped the collaboration for 10 pairs of subjects (20 subjects) separately. Then, the videos were displayed to 20 human subjects separately. We observed the collaboration through the videotapes and analyzed the manipulation and assembly trajectories. Based on our analyses and the responses of the subjects for the selected human-human collaborative task, we identified the following:
  • The situations/events the humans usually experienced, i.e., the activities they actually carried out during the collaboration, e.g., checking the correctness of the objects/parts manipulated by the first human or assembled by the second human, and maintaining safety, e.g., taking care that the body parts of one human did not hit the other human and that the object did not fall or scatter, etc. The recorded trajectories for manipulation and assembly helped relate those observed phenomena to the temporal and spatial propagation of the task.
  • The types of different affects developed in human faces for different situations/events during their collaborative task (i.e., names of the affects), and the ways the humans expressed their affective states to each other to communicate the situations they experienced during the collaboration.
  • The bilateral affective expressions of humans were beneficial to collaboration and task accomplishment.
Though the collaborative task was simple, it was helpful to understand the existence, underlying expression approaches, and contributions of affects in a human-human collaboration task. We considered the first human as the experiment subject and the second human as the control subject. Based on the human-human study, we found the major task situations/contexts/events and the corresponding affects given in Table 2 [45,46,47,48]. Table 2 shows the major events that occurred during the human-human collaboration and the ways the humans, especially the first humans, expressed the situations/events through affective expressions. We called this test the affect recognition test. To implement the HRC of Figure 3, we replaced the first human of Figure 4 with a robot, selected a slightly more representative task instead of the simple task of Figure 4, and enabled the robot to express various affective expressions similar to those the first human displayed at the different events [45,46,47,48].
We then drew the affective states observed in Table 2, inspired by those of the first human, and displayed them on the robot’s face, as shown in Figure 5a–g. To validate those affective expressions, each affective expression displayed on the robot’s face was shown to each of the 20 subjects separately, and they were asked whether they could recognize the names of the affective states displayed on the robot’s face. We called this the affect validation test. The results showed that in 95% of cases, the subjects were able to recognize the drawn affective states (Figure 5a–g) correctly. This is why, for instance, the face labeled “concentrating” represented concentration: it was drawn for that purpose and was perceived by humans as the concentrating affect, as validated through the test results.

4.2. Computational Model to Estimate Robot’s Affective States

We took inspiration from the state-of-the-art affective models of humans [45,46,47,48] and adapted those models into a computational model of the robot’s affective states, given in (1), based on affect intensities for different prospective situations during assembly tasks. If $I_a(t)$ is the intensity ($I$) of affect $a$ at the current time step $t$, then $I_a(t)$ is expressed as (1).

$$I_a(t) = w_m I_{m_a}(t) + w_s \sum_{l=1,\, l \neq a}^{N} I_{s_{la}}(t) - w_u \sum_{l=1,\, l \neq a}^{N} I_{u_{la}}(t) - w_d\, e_{I_a}(t) \qquad (1)$$

In (1), $I_{m_a}(t)$ is the main contributor to the change of intensity of affect $a$ at the current time step; $I_{s_{la}}(t)$ and $I_{u_{la}}(t)$ are the intensities at the current time step of the stimulations and suppressions, respectively, exerted on $a$ by other affective states $l$ (if any), where $l \neq a$ but $l$ belongs to the set of all $N$ affects at that time step; and $e_{I_a}(t)$ is the exponential decay of the intensity of $a$ at the current time step. The terms $w_m$, $w_s$, $w_u$, and $w_d$ are the corresponding weights, which can be estimated based on the assembly task, the collaborating humans, the working environment, and the robot features.
The affective intensity in (1) can be measured objectively if the terms on the right-hand side are measured objectively by sensors at the current time step. Note that for the proposed assembly task with Baxter, our aim was not to generate affective expressions on the robot’s face. Instead, the aim was simply to display various affective expressions on the face screen of the robot with relevance to assembly situations/contexts. In such cases, the use of sensors to measure the affect intensities of the robot seemed irrelevant. Again, the mapping between measured sensor values and affect categories (names) was determined through subjective evaluation by the experimenter/participant [45,46,47,48]. Hence, we used subjective measures of each of the terms on the right-hand side of (1) at the current time step to obtain the measure of $I_a(t)$ [45,46,47]. The subjectively estimated affective intensities, i.e., the $I_a(t)$ values, could then be used to determine the affect categories (affect names) to be displayed on the robot’s face screen for various task situations.
Equation (2), as follows, shows a general approach to measuring $I_{m_a}(t)$ subjectively based on task situations [46].

$$I_{m_a}(t) = G_{importance}(t) \times ILE(t) \times v(t) \qquad (2)$$

In (2), $G_{importance}(t)$ is the importance of the goal of the robot for a particular situation. $G_{importance}(t)$ can take any value between 0 and 1 based on the situation, and it can be estimated subjectively based on the importance of the goal for that situation. For the selected assembly task, we always maintained $G_{importance}(t) = 1$ because the robot had one main goal in all situations, namely to safely manipulate (pick and place) the right parts in the right way, and that goal was always very important. $ILE(t)$ denotes the Intensity Level Evaluation, which measures the severity of the situation, i.e., how difficult or how easy it is to reach the goal in the given situation. $ILE(t)$ can take a value between 0 and 1 depending on the level of difficulty or ease of reaching the goal. For example, suppose the parts input to the robot were the wrong parts. In such a situation, it was very difficult or almost impossible for the robot to proceed with the manipulation, as the robot’s goal was to manipulate the right parts only. Hence, $ILE(t) = 0$ should be used because it was very difficult for the robot to proceed. Conversely, suppose in another situation that the right parts were input to the robot and there were no other hindrances for the robot to proceed with the manipulation, i.e., the situation was very easy. In such a case, we used $ILE(t) = 1$. We could use $ILE(t) = 0.5$ if the situation was moderately difficult or moderately easy, and so forth. For example, we could use more intermediate values such as $ILE(t) = 0.25$, $ILE(t) = 0.75$, etc. However, it would be difficult to connect those values with practical scenarios, and we ignored those intermediate values to keep the model simple.
Finally, $v(t)$ is the valence, indicating whether the situation is positive or negative. $v(t)$ takes only two values, either 1 (positive) or −1 (negative). For the above examples, $v(t) = 1$ if the input parts were correct, or $v(t) = -1$ if the parts were wrong [46].
Note that the intensity values $I_{s_{la}}(t)$ and $I_{u_{la}}(t)$ could also be expressed mathematically and measured objectively using sensors. However, we estimated those two terms subjectively due to potential difficulties in the interpretation of the objectively measured sensor values. Finally, we could get the intensity measure of $a$, i.e., $I_a(t)$, in the form of discrete values once all the terms on the right-hand side of (1) were estimated. The magnitude and sign of $I_a(t)$ determined the category (name) of the robot’s affective states [46]. Changes in intensity values could change the affective expressions and vice versa [46,47].
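As a concrete illustration of (1) and (2), the sketch below computes a subjective affect intensity and maps it to an affect category using the nominal intensity values quoted for the events in Section 5 (Table 3). The weights, the default arguments, and the nearest-value mapping are illustrative assumptions; in the study the intensities and categories were estimated subjectively rather than computed automatically.

```python
def main_intensity(goal_importance, ile, valence):
    """Equation (2): I_m_a(t) = G_importance(t) * ILE(t) * v(t)."""
    return goal_importance * ile * valence

def affect_intensity(i_main, stimulations=(), suppressions=(), decay=0.0,
                     w_m=1.0, w_s=0.0, w_u=0.0, w_d=0.0):
    """Equation (1); the default weights are assumptions in which only the main
    contributor is active, as in the simplified per-event analysis."""
    return (w_m * i_main
            + w_s * sum(stimulations)
            - w_u * sum(suppressions)
            - w_d * decay)

# Nominal intensity-to-affect mapping taken from the event values in Table 3
INTENSITY_TO_AFFECT = {
    0.0: "neutral", +0.5: "concentration", +2.0: "happiness",
    -0.5: "confusion", -1.0: "cautious", -1.5: "anger", -2.0: "scared",
}

def affect_category(intensity):
    """Pick the affect whose nominal intensity is closest to the estimate."""
    nearest = min(INTENSITY_TO_AFFECT, key=lambda k: abs(k - intensity))
    return INTENSITY_TO_AFFECT[nearest]

# Example: the correct part is detected -> important goal, easy, positive
# situation; a main-contributor weight of 2.0 is assumed here purely to
# reproduce the +2 intensity quoted for that event.
i_m = main_intensity(goal_importance=1.0, ile=1.0, valence=+1)   # = 1.0
print(affect_category(affect_intensity(i_m, w_m=2.0)))           # 'happiness'
```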

5. Robot Motion Planning Strategy Associated with Dynamic Affective Expressions

Based on the prospective affective states of the robot recognized and developed in Section 4.1, we proposed a motion planning strategy for the robot in Figure 3 associated with its affective expressions in dynamic contexts. We also cross-checked the associated potential affective states using the computational model of affect in Section 4.2. The motion planning strategy for the robot is given in Figure 6; it enabled the robot to generate its motions in different task situations as well as express the situations through affective expressions in dynamic contexts. Based on our observations summarized in Table 2, the motion planning scheme was divided into eleven events. Each event was a prospective situation that the robot might experience during the assembly. The robot might not experience all the events in each trial, but the events it did experience formed a subset of those eleven events. The affect model in (1) could be updated at every time step t, but for the selected assembly task, we subjectively estimated the intensity of the robot’s affective states based on the situation at every event following (1) and determined the affect category based on the affect intensity value and sign for each event, as given in Table 3 (i.e., t was the time when an event was identified and the affect intensity for the event was assessed). The relevant affective states, pre-determined based on the affect recognition test (Table 2) and verified by the computational model of affect comprising the affect intensity and magnitude, were displayed on the robot’s face as soon as the robot recognized the event. The affect-based motion planning strategy shown in Figure 6 was implemented in the robot via coding in Python, integrating the different sensors. The robot “Baxter” that we used followed a task-optimal motion planning approach. The motion planning algorithm in Figure 6 was a decision-tree-type algorithm triggered by sensed events, and the robot generated task-optimal motions based on the decisions of that algorithm. The motion planning strategies, along with the prospective events based on situations that the robot was likely to experience during the collaborative assembly task, are explained below:
Event-1: Let us assume that the robot arm is at the initial position, it is ready to receive a command for the task, but no command has been made yet. The robot should show a neutral affect as it has no reason to show any positive or negative affect (Table 2). In this situation, $I_a(t)$ is subjectively estimated as $I_a(t) = 0$ (see Table 3 for details), which indicates the ‘neutrality affect’ as $I_a(t) = 0$ is neither positive nor negative, which makes it compatible with the situation. Thus, the neutrality affect was displayed on the robot’s face (Figure 5a), and it lasted until the robot received a command.
Event-2: The robot receives a command to move or to pick a part, and so the robot needs to concentrate on its actions (e.g., movement, detection). Thus, it should show a concentration affect on its face. We see that the computed intensity for this situation is +0.5, i.e., $I_a(t) = +0.5$ (see Table 3 for details), which is positive but not highly positive. $I_a(t) = +0.5$ can indicate ‘concentration’ (see Figure 5b) because concentration is a positive affect, but not highly positive, and thus the estimated affect is compatible with the magnitude and the value of $I_a(t) = +0.5$ [45,46,47]. Hence, the robot displayed a ‘concentration’ affect on its face in such situations.
Event-3: The part correctness detection system determined whether the part was correct. If the part was the correct one, the robot might be happy as it had found the right part. The computed affect intensity for such situations was +2, i.e., $I_a(t) = +2$ (see Table 3 for details), which could indicate happiness (see Figure 5c) because $I_a(t) = +2$ was a highly positive affect [45,46,47].
Event-4: The robot started picking the part. It also needed to manipulate the part and detect potential collisions (e.g., whether the human hand/head was close to the robot arm). All of those needed concentration. The computed intensity in such situations was $I_a(t) = +0.5$ (see Table 3 for details), which could indicate concentration as concentration was a moderately positive affect [45,46,47].
Event-5: The robot held its movement if it could detect a potential collision. In such a situation, the robot should be ‘scared’ because there is a potential risk. The robot’s affect intensity computed for that situation was −2 (see Table 3 for details), which could indicate ‘scared’ as shown in Figure 5d because −2 was highly negative and the ‘scared’ affect was also highly negative [45,46,47]. The robot continued holding its movement with a scared affect and checked whether the situation improved, i.e., whether the risk of collision was removed.
Event-6: If the human co-worker understood the reason for the potential collision based on the scared affective expression of the robot and took quick action to remove the risk of the collision (e.g., the human hand/head was moved away), the robot could move toward the designated location near the human to place the part. At that time, the scary event was over, and the robot needed to concentrate on its work. Hence, the robot should show a concentration affect. The computed intensity for such an event was $I_a(t) = +0.5$ (see Table 3 for details), which could indicate the concentrating affect, as we explained for Event-2. Hence, the robot showed a concentration affect on its face during such situations.
Event-7: If the risk of the collision was not removed for any reason (e.g., the human could not understand the situation), the robot needed to hold its movement and wait for the contingency action of the supervisor (experimenter). The robot should remain scared in that situation, and thus the display of the scary affect should continue. The computed affect intensity for that situation was found to be the same as that found for Event-5, which could indicate the scary affect. After the supervisor’s action, the motion planning algorithm might run from the beginning.
Event-8: If at Event-2 the robot detected that the part was not the right one, the robot could be confused, and it would need to hold its movement. Confusion was a moderately negative affect: the robot was not positive because it faced some uncertainty about the part needed to reach its goal, but it was also not very negative at that stage because it could hope that the situation would improve if the wrong part were replaced by the right one. The affect intensity computed for that situation was −0.5 (see Table 3 for details), which could indicate the confusion affect, as shown in Figure 5e, because −0.5 was a moderately negative value. The robot held its movement for a while and allowed the supervisor (experimenter) to understand the situation and replace the wrong part. The detection system detected whether the wrong part was replaced by the experimenter with the right part within a specified time. The confusion affect continued.
Event-9: If the wrong part was replaced by the right one by the experimenter, the robot could be happy again following the same principle as explained for Event-3, and thus could express the happiness affect (see Figure 5c). The robot was happy as it had got the right part after waiting for a while. The affect intensity computed for that situation was +2 (see Table 3 for details), which could indicate the happiness affect as explained in Event-3. Then, the motion planning proceeded through Event-3.
Event-10: If the wrong part was not replaced by the right one, the robot neither found the right part nor could hope for that and thus became angry. The robot did not pick up the part and stopped working. The robot’s affect intensity computed for that situation was −1.5 (see Table 3 for details), which was a very large negative value. The value could indicate the anger affect (see Figure 5f) because anger was also a very large (high) negative affect [45,46,47]. The anger affect continued until intervened by the experimenter. After the experimenter’s intervention, the motion planning algorithm might run from the beginning.
Event-11: When the human assembled the parts for subtasks #3 and #5 (see Table 1), the wrong orientation detection algorithm detected whether the human made a wrong orientation. In that case, the robot held its movement and might want to remind the human to be cautious by showing a cautious affect on its face. Cautious was a large negative affect [45,46,47]. The computed affect intensity for that situation was −1 (see Table 3 for details), which could indicate a ‘cautious’ affect (Figure 5g) because −1 was also a large negative value for the model in (1). The cautious affect continued until the human corrected the wrong orientation, and then the robot resumed its previous movement. Event-11 is not shown in the motion planning strategy in Figure 6 as that event could occur at any time during the collaborative assembly.
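To illustrate how the decision-tree-type algorithm of Figure 6 ties the sensed events to the affect display and the robot’s motions, the following is a minimal, simulated sketch of one pick-and-place cycle. The event-to-affect table follows the intensities quoted for Events 1–11 above; the `SimulatedRobot` class and its methods are placeholders standing in for the actual sensing, display, and motion code on the robot, not Baxter SDK calls.

```python
import time

# Event -> (affect, nominal intensity) pairs follow the values quoted for
# Events 1-11 above; the robot's methods below are simulation placeholders,
# not Baxter SDK calls.
EVENT_AFFECT = {
    "idle": ("neutral", 0.0),
    "command_received": ("concentration", +0.5),
    "correct_part": ("happiness", +2.0),
    "manipulating": ("concentration", +0.5),
    "collision_risk": ("scared", -2.0),
    "risk_cleared": ("concentration", +0.5),
    "wrong_part": ("confusion", -0.5),
    "part_replaced": ("happiness", +2.0),
    "part_not_replaced": ("anger", -1.5),
    "wrong_orientation": ("cautious", -1.0),
}

class SimulatedRobot:
    """Stand-in for the real robot so the sketch runs end to end."""
    def display_affect(self, affect): print(f"[face screen] {affect}")
    def part_is_correct(self, part): return part == "faceplate"
    def wait_for_replacement(self, timeout_s): return True
    def collision_risk(self): return False
    def place_part_at_E(self, part): print(f"placed {part} at E")

def handle_event(event, robot):
    affect, intensity = EVENT_AFFECT[event]
    robot.display_affect(affect)   # communicate the situation to the human
    return affect, intensity

def pick_and_place_cycle(robot, part):
    handle_event("command_received", robot)            # Event-2
    if robot.part_is_correct(part):                    # Event-3
        handle_event("correct_part", robot)
    else:                                              # Event-8
        handle_event("wrong_part", robot)
        if robot.wait_for_replacement(timeout_s=10):   # Event-9
            handle_event("part_replaced", robot)
        else:                                          # Event-10
            handle_event("part_not_replaced", robot)
            return False                               # wait for the supervisor
    handle_event("manipulating", robot)                # Event-4
    while robot.collision_risk():                      # Events 5 and 7
        handle_event("collision_risk", robot)
        time.sleep(0.5)                                # hold motion, re-check
    handle_event("risk_cleared", robot)                # Event-6
    robot.place_part_at_E(part)
    return True

if __name__ == "__main__":
    pick_and_place_cycle(SimulatedRobot(), "faceplate")
```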
Remark 1: The estimation of affect intensity and the recognition of affect name based on the estimated intensity magnitude and sign (see Table 3) were subjective [45,46,47]. It was challenging because the expressions of affects could vary with individuals and situations, and similar intensity magnitudes could indicate multiple different affects. Hence, the generalization of affective expressions was also subjective and challenging. Careful estimation of affect intensities and recognition of affect names based on intensity magnitude and signs were very important.
Remark 2: The determination of affect colors was also subjective and challenging [45,46,47,48]. The affect colors could vary with individuals and situations. We received inspiration from [48] and determined the affect colors for different affects subjectively.

6. Experimental Methods

6.1. Experiment Design

The independent variables (IVs) were the types of robot expressions on its face to communicate various task situations to the human. We considered three types of expressions of the robot as follows: (i) no affective expression (“no affect”), (ii) dynamic affective expressions (“dynamic affect”), (iii) text message display on robot face (“text message”). The dependent variables (DVs) were physical HRI (pHRI), cognitive HRI (cHRI), and assembly performance. The speed of the robot was to be kept fixed (e.g., at 50%) for all cases. Therefore, there would be no impact of the robot speed on the experimental results of three different experimental conditions (three expressions).

6.2. Subjects

Twenty students (17 males, 3 females, mean age 24.39 ± 2.18 years) were selected to participate in the experiment. The subjects did not report any physical or mental problems. The subjects were provided with detailed information and instructions about the research objectives, experiment procedures, evaluation methods, evaluation criteria, and scales. The subjects supported the experiment voluntarily. The study was approved by the Institutional Review Board (IRB).

6.3. Evaluation Criteria and Evaluation Method

Based on our experience, we proposed a comprehensive evaluation scheme to evaluate HRI and performance for the human–robot system. In the evaluation scheme, we divided the HRI into pHRI and cHRI and proposed several criteria for each. We also proposed different criteria to evaluate performance. The proposed evaluation scheme adapts and integrates different relevant individual criteria available in the literature and forms a comprehensive scheme as follows:
The pHRI was expressed in terms of the criteria described below:
Safety: The safety criterion included injury, accident, damage, collision, and impact that could harm the human, robot, materials, and the environment during the assembly [11,16]. The number of collisions, injuries, and damages that occurred during the assembly were the objective criteria to evaluate safety. The collision detection system was used to measure the number of potential collisions objectively. Injuries, accidents, and damage were recorded manually if occurred.
Engagement: Engagement was how physically involved, connected, attuned, and synchronized the human was with the robot [49]. Engagement was assessed subjectively, and the subject was asked, “how engaged did you find yourself with the robot and the assembly task during the collaborative assembly?”, and the subject scored his/her response using a 7-point equal-interval subjective rating scale (Likert scale [50]) as shown in Figure 7.
Symbiosis: Symbiosis is mutualism or interdependence between the human and the robot for the assembly [51]. It was assessed subjectively, and the subject was asked, “how clearly could you realize the interdependence between you and the robot during the assembly?”, and the response was assessed using the Likert scale (Figure 7).
Cooperation: Cooperation is the sense of working together, partnership, or joint efforts for the assembly task [11,19]. It was assessed subjectively, and the subject was asked, “how clearly could you realize the cooperation level between you and the robot for the assembly?”, and the response was assessed using the scale in Figure 7.
Team Fluency: Team fluency is the coordinated meshing of joint efforts and synchronization between the human and the robot during the collaboration [52]. Four criteria were used to measure team fluency objectively [52], as follows:
  • Robot’s idle time and human’s idle time: the time that the robot and the human, respectively, waited for sensory input, information processing, computing, decision-making, etc.
  • Non-concurrent activity time: It was the amount of time during a trial (run) that was not concurrent between the human and the robot when it was supposed to be concurrent.
  • Functional delay: It was the time between the end of one agent’s action and the start of another agent’s action.
Each criterion was expressed as a % of the total trial time. The criteria might be interrelated.
The cHRI was expressed in terms of the following criteria:
Cognitive workload: Cognitive workload is the amount of information needed to be processed by the human brain for thinking, reasoning, memorizing, and deciding for the assembly [11,53]. NASA TLX was used to assess the workload [54]. Table 4 shows the six dimensions of the NASA TLX and their descriptions relevant to the assembly task.
Trust: Trust is the willingness of the human to rely on or to believe in the cooperation of the robot for the assembly [55]. The performance and faults of the robot influenced human’s trust in the robot [56]. The trust was subdivided into three relevant terms, as in Table 5. The subject assessed his/her own trust in the robot subjectively for each of the criteria of Table 5 using the Likert scale shown in Figure 7.
Communication: The communication criterion refers to the communication level between the robot and the human during the assembly. Here, communication meant sharing/conveying information [57]. The meanings of the term communication could include rating and understanding a scenario, the ability of an agent to convey a ground or a message to another agent, the ability of an agent to provide an understanding to another agent or correct a misunderstanding of another agent, etc. [57,58,59]. The robot’s nonverbal communication through affective expressions to the human could mean requesting help, informing something, and sharing attitudes and intentionality (e.g., joint attention, common ground) as a way of bonding [58]. Here, the common ground was the cooperation between the robot and the human for the selected common assembly task. The communication level was assessed subjectively, and the subject was asked, “what was the level of communication between the robot and you during the assembly?”, and the subject scored the response using the Likert scale (Figure 7).
Situation awareness: Situation awareness was whether the human possessed a constant, clear mental picture and track of what was going on, and was able to figure out situations, integrate the information with the past, and anticipate future needs/status of the assembly [60]. The situation awareness of the subject was assessed by the experimenter following “Situational Awareness Global Assessment Tool-SAGAT” [61]. Based on the goal directed-task analysis method (a cognitive task analysis method) [61], we identified the goals associated with the selected task and the major sub-goals necessary to meet each of the goals, understood the decisions needed to meet each of the sub-goals, and determined the queries related to the 3 levels of situation awareness (SA), which were perception, comprehension, and projection/prediction needed to make each decision and carry out each sub-goal [62].
The queries corresponded to the situation awareness requirements and were determined from the results of an SA requirements analysis [63]. There might be multiple queries for each SA level [64], and the queries could be asked randomly (verbally queried by the researcher) [62]. The queries were made in a cognitively compatible manner [62]. Table 6 shows the SA dimensions, sample queries, and assigned scores used to assess the situation awareness of each of the subjects. The expert experimenter’s knowledge of the real situation was treated as the ground truth. Operator perceptions were then compared to the real situation (ground truth) based on the expert experimenter’s responses to the same queries to provide an objective measure of SA. The responses were scored as correct or incorrect. For the queries in Table 6, the subjects obtained a score of 1 for a correct answer and a score of 0 for a wrong answer [64,65]. A high SA in SAGAT meant that the situation and the subjective image of the situation matched to a high degree [66].
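The sketch below shows one way such SAGAT scoring could be tallied across the three SA levels; the query identifiers, responses, and ground-truth answers are purely illustrative placeholders, not the actual queries of Table 6.

```python
# Illustrative SAGAT tally: each response is scored 1 if it matches the
# experimenter's ground truth and 0 otherwise; the query identifiers and
# answers below are placeholders, not the actual queries of Table 6.
def sagat_scores(responses, ground_truth):
    """responses / ground_truth: dicts keyed by (sa_level, query_id)."""
    per_level = {1: [], 2: [], 3: []}   # perception, comprehension, projection
    for (level, query_id), answer in responses.items():
        per_level[level].append(1 if answer == ground_truth[(level, query_id)] else 0)
    return {level: sum(marks) / len(marks) for level, marks in per_level.items() if marks}

truth = {(1, "part_at_E"): "yes",
         (2, "why_robot_stopped"): "wrong part",
         (3, "next_robot_action"): "pick I-drive"}
answers = {(1, "part_at_E"): "yes",
           (2, "why_robot_stopped"): "collision",
           (3, "next_robot_action"): "pick I-drive"}
print(sagat_scores(answers, truth))   # {1: 1.0, 2: 0.0, 3: 1.0}
```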
The assembly performance was expressed in terms of the following criteria:
Part correctness: Part correctness was whether the parts manipulated by the robot to the human were the right parts. The part correctness was related to assembly quality (accuracy). The IR range finder sensor of the robot was used to detect the wrong parts (if any) before the parts were picked by the robot, and then the robot communicated the situation to the human through its affective expressions on the face screen (Figure 5). The number of wrong parts detected by the detection system, reported by the subject, and replaced by the right parts by the experimenter (supervisor) guided by the robot’s affective expressions was used as the objective criterion to evaluate the correctness of the part manipulation, the effectiveness of the detection system, and the robot’s expressions.
Part orientation: Wrong orientation of parts was related to assembly quality (precision). The number of wrong orientations detected by the detection system during the assembly and reported by the subject and the number of wrong orientations identified in the finally assembled products by the experimenter were used as the objective criteria to evaluate the quality of the assembly and the effectiveness of the detection system.
Efficiency: Efficiency could be evaluated by the ratio between the average targeted time to finish an assembly and the average time required to finish the assembly as expressed in (3). It was related to assembly productivity.
$$\text{Efficiency} = \frac{\text{Targeted time to finish an assembly}}{\text{Measured time to finish an assembly}} \times 100\% \qquad (3)$$
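A minimal sketch of how (3) and the team-fluency criteria of Section 6.3 could be computed from the stopwatch readings is given below; the timing values in the example are made up for illustration.

```python
def efficiency(targeted_time_s, measured_time_s):
    """Equation (3): assembly efficiency as a percentage."""
    return targeted_time_s / measured_time_s * 100.0

def fluency_percentages(trial_time_s, human_idle_s, robot_idle_s,
                        non_concurrent_s, functional_delay_s):
    """Team-fluency criteria, each expressed as a % of the total trial time."""
    as_pct = lambda t: t / trial_time_s * 100.0
    return {"human_idle_%": as_pct(human_idle_s),
            "robot_idle_%": as_pct(robot_idle_s),
            "non_concurrent_%": as_pct(non_concurrent_s),
            "functional_delay_%": as_pct(functional_delay_s)}

# Example with made-up stopwatch readings (seconds)
print(efficiency(targeted_time_s=180, measured_time_s=225))   # 80.0
print(fluency_percentages(225, human_idle_s=20, robot_idle_s=10,
                          non_concurrent_s=15, functional_delay_s=8))
```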
We used the above evaluation criteria for HRC because those criteria directly addressed the expected HRI and assembly performance in real assembly cases in industries [44]. We used the subjective assessments as complementary to the objective measurements of HRI and assembly performance. We believe that the evaluation scheme was complete and balanced, as it included the evaluation of pHRI, cHRI, and assembly performance separately, using both subjective and objective assessment methods.

6.4. Experimental Procedures

The subtask allocation with the subtask sequence (Table 1) and the assembly procedures were instructed for each subject separately. Each subject practiced the assembly task manually six times before he/she participated in the actual experiment, which could raise the subject’s skills to steady-state levels and balance the learning effects. However, in the actual experiments, each human subject performed a subtask only once in each condition. The prospective affective expressions (Figure 5) were introduced to each subject, and the meanings (contexts) of the expressions were also explained to them. Then, the actual experiment started, and the human and the robot collaboratively performed the selected assembly task in the hybrid cell (Figure 3) for three separate experiment conditions (IVs): (i) no-affect display, (ii) dynamic affect display, and (iii) text message display. For each trial in each experimental condition, the human subjects were selected randomly.
For the dynamic affect condition, the human and the robot performed the subtasks allocated to them (Table 1) sequentially, keeping pace with each other, and produced center console products. In each trial, three similar center console products were assembled consecutively by each subject, as shown in Figure 3. The motion planning algorithm in Figure 6 was implemented to enable the robot to perform the subtasks assigned to it while keeping pace with the human, and the robot also expressed its affective states on its face screen based on task situations to communicate the situations to the human collaborator. For example, Figure 8 shows how the robot produced affective expressions in various situations. The time required to finish each assembly, the idle time for the human and the robot, the non-concurrent activity time, and the functional delay time were recorded using stopwatches. Then, the pHRI, cHRI, and assembly performance were evaluated following the evaluation methods and criteria described in Section 6.3. For the SAGAT, just at the end of the physical activity of an experimental trial, we quickly separated the subject from the experimental site so that the subject could not see the experimental scenario anymore, and then asked the queries to assess the SA of the subjects, and the subjects responded based on their very recent memories [61,64].
The same procedures were followed for no affect and text message conditions. However, for the no affect condition, the algorithm in Figure 6 was implemented excluding the robot’s affective expressions, i.e., nothing was expressed on the robot’s face as illustrated in Figure 5h. For the text message condition, the algorithm in Figure 6 was implemented, but the affective expressions were replaced by relevant text messages and the text messages were displayed on the robot face screen dynamically with task situations. For example, the message ‘I am confused’ was displayed on the robot’s face as illustrated in Figure 5i instead of the ‘confused’ affect shown in Figure 5e if the robot did not find any right part to manipulate. We used text message display as a case where the robot expressed the situations, but in contrast with the affective expressions, the text messages might not have cognitive impacts.
Each subject separately participated in each condition of the experiment. The conditions were randomized for the subjects so that the learning effects were balanced. There were no other differences in the three experiment conditions except the differences in robot’s expressions on its face for different task situations. Hence, any difference in evaluation results for the three conditions could be considered as the effects due to differences in the robot’s expressions of task situations.

7. Experimental Results

7.1. Evaluation of Assembly Performance

As Table 7 shows, the average cycle time to finish an assembly task for the dynamic affect case was about 20% less than that for the no-affect case, which indicates a correspondingly higher assembly efficiency (productivity) for the dynamic affect case based on (3). The table also shows a long idle time for the human and the robot for the no-affect case, but minor idle time for the dynamic affect case (in the ideal case, there was no idle time for the robot and the human). For the no-affect case, the detection systems could detect any wrong part before it was manipulated to the human, as well as any unsafe incident or any wrong orientation of the part attempted by the human during the assembly, and withhold the robot’s movement until the issues were resolved. However, there was no expression from the robot to communicate the situation to the human, and thus the human was confused, took a longer time to realize what had happened, and then took corrective actions to resolve the problems, which increased the idle time for the human and the robot (as the robot also withheld its motion until the problems were resolved). On the contrary, the idle time was small for the dynamic affective expressions, as the human could promptly understand the situations through the affective expressions of the robot and resolve the issues quickly. This was related to transparency in HRC [67,68]. We posited that the higher transparency in the dynamic affect case contributed to resolving problems faster, reduced idle time, and thus increased efficiency.
We noticed that there was still a short idle time for the human in the dynamic affect case. Analysis showed that it occurred at the beginning of the assembly cycle because the robot arm moved toward the parts from its initial position and then picked the part and manipulated it for the human, and the human waited for the part until it arrived. The small idle time for the robot for the dynamic affect case occurred when the robot detection system detected a wrong part, showed confused affective expressions, and then waited for a few seconds to allow the experimenter to replace the wrong part with the right part. The robot also stopped for a short time after a collision detection, or a wrong orientation detection occurred. However, such a wait time was very small, as the humans could understand the situations quickly due to improved transparency and take corrective actions according to the dynamic affective expressions. Again, the assembly efficiency for the dynamic affect case was higher (the cycle time and the idle time were lower) than that for the text message case, though the text message display also made the situation transparent. It happened because the transparency provided through affective expressions produced more cognitive effects on robot-human communication than that provided through text messages [69]. Analysis of Variances, ANOVAs (subjects, IVs) conducted separately for the cycle time and the idle time showed that the variations in the cycle time and the idle time among the IVs (no affect, text message display, affect display) were statistically significant (p < 0.05 for both the cycle time and the idle time), which indicated the differential effects of the IVs on the task performance. The variations between the subjects were not statistically significant (p > 0.05 in both cases), which indicated the generality of the findings.
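For reference, the reported ANOVAs (subjects, IVs) on the cycle time and idle time can be reproduced with a standard two-way analysis of variance without interaction; the sketch below uses statsmodels on a hypothetical long-format table whose column names and values are illustrative only, not the experimental data.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format table: one cycle-time value per subject per
# expression condition (column names and numbers are illustrative only).
df = pd.DataFrame({
    "subject":    ["s1", "s1", "s1", "s2", "s2", "s2", "s3", "s3", "s3"],
    "condition":  ["no_affect", "text", "affect"] * 3,
    "cycle_time": [240, 225, 195, 255, 230, 200, 248, 228, 198],
})

# Two-way ANOVA without interaction (factors: subjects and IVs); the condition
# effect is tested after removing between-subject variation.
model = ols("cycle_time ~ C(subject) + C(condition)", data=df).fit()
print(anova_lm(model))
```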
Table 8 shows that no wrong part was manipulated by the subject under any of the three expression conditions; instead, the wrong parts were detected before the manipulation occurred. Table 9 similarly shows that there was no wrong orientation in the finally assembled product under any of the three expression conditions; instead, the wrong orientations were detected during the assembly. The results thus demonstrated the effectiveness of the detection systems. The tables show that the detected wrong parts (Table 8) and wrong orientations (Table 9) per assembly (on average) were not identical across the three expression conditions because the conditions were implemented independently. Nevertheless, the results were very similar, as the detection systems were equally active in all three conditions. However, the number of detected wrong parts and wrong orientations correctly reported by the subject per assembly was much lower (nil) for the no-affect case than for the dynamic affect case, which showed the effectiveness of the dynamic affective expressions.

7.2. Evaluation of pHRI

Table 10 shows that collisions, injuries, and damage did not occur under any of the three expression conditions (IVs) because the detection system detected potential collisions before they occurred. We believe that this was also related to transparency in HRC [67,68] and the cognitive effects of the IVs on robot–human communication [69]. In the no-affect case, the detection systems detected potential collisions as well as wrong parts (Table 8) and wrong orientations (Table 9), and the robot suspended its motion. However, as the robot displayed no expression of the situation, the human was confused and could not tell whether the robot had suspended its motion because of a potential collision, a potential wrong orientation, or some other reason, and thus the human could not report the incidents correctly. In contrast, for the dynamic affective expressions, the numbers of detected potential collisions, wrong parts, and wrong orientations matched the numbers of incidents correctly reported by the subjects.
We believe that this happened because the dynamic affective expressions increased transparency in HRC and helped humans understand the detected incidents and report them promptly and correctly. The results thus demonstrated the effectiveness of the detection systems and of the transparency provided by the dynamic affective expressions over the no-affect case. The results in Table 8, Table 9 and Table 10 also show that the safety (in terms of detected potential collisions) and quality (in terms of detected wrong parts and wrong orientations) for the dynamic affect case were significantly better than those for the text message case. Both the text messages and the affective expressions communicated the robot's responses to the human on task situations. However, we believe that the affective expressions produced stronger cognitive effects on humans, which produced better performance than in the text message case [24,69].
Figure 9 shows that a human's perception of his/her engagement with the robot increased due to the dynamic affective expressions. In fact, humans were engaged with the robot even when there were no affective expressions. However, the affective expressions of the robot could influence human cognition, attention, and concentration and thus helped the human feel more physically and mentally engaged with the robot [49]. The realization of symbiosis and cooperation between the human and the robot also increased due to the dynamic affective expressions of the robot. In fact, the symbiosis and cooperation level mainly depended on the task allocation between the human and the robot [43,44], which was pre-decided for the selected task. However, the affective expressions made the collaboration transparent, which helped the human realize the prevailing interdependence and cooperation between the human and the robot [67,68]. The affective expressions could also enhance the perceived capabilities of the collaborating agents. We assume that the better pHRI results for the dynamic affect case over the text message display case were due to the stronger cognitive influence of the affective expressions on humans in the HRC [69]. ANOVAs (subjects, IVs) conducted separately for each of the remaining pHRI criteria (engagement, symbiosis, cooperation) showed that the variations in pHRI among the three IVs (no affect, text message display, affect display) were statistically significant for all three pHRI criteria (p < 0.05), which indicated the differential effects of the IVs on pHRI. The variations between subjects were not statistically significant (p > 0.05 in all three cases), which indicated the generality of the findings.
Table 11 compares team fluency among the three expression conditions (IVs), where less idle time, less non-activity time, and less functional delay indicate higher team fluency. We believe that the higher engagement, the transparency in collaboration, and the improved cooperation between the human and the robot in the dynamic affect case reduced the idle, non-activity, and delay times and thus produced better team fluency than the no-affect case [52]. In addition, the stronger cognitive effects of the affective states could explain the better fluency relative to the text message case [69].
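The team-fluency measures in Table 11 can be derived directly from timestamped activity intervals; the sketch below is a minimal illustration in Python in which the interval data, the sampling step, and the function names are assumptions for demonstration rather than the logging format used in the experiments.

```python
from typing import List, Tuple

Interval = Tuple[float, float]  # (start_s, end_s) of one agent activity

def total_active(intervals: List[Interval]) -> float:
    """Total time an agent spent acting."""
    return sum(end - start for start, end in intervals)

def idle_time(task_duration: float, intervals: List[Interval]) -> float:
    """Time an agent spent not acting during the task."""
    return task_duration - total_active(intervals)

def non_activity_time(task_duration: float, human: List[Interval],
                      robot: List[Interval], step: float = 0.1) -> float:
    """Time during which neither the human nor the robot was active
    (coarse time sampling, fine enough for cycle-level comparisons)."""
    def active_at(t: float, intervals: List[Interval]) -> bool:
        return any(start <= t < end for start, end in intervals)
    t, total = 0.0, 0.0
    while t < task_duration:
        if not active_at(t, human) and not active_at(t, robot):
            total += step
        t += step
    return total

# Hypothetical single-cycle activity log (seconds), for illustration only.
human = [(5.0, 30.0), (38.0, 70.0)]
robot = [(0.0, 6.0), (30.0, 37.0), (70.0, 78.0)]
T = 80.0
print(idle_time(T, human), idle_time(T, robot), non_activity_time(T, human, robot))
```

Functional delay can be computed analogously as the gap between the moment one agent finishes a subtask and the moment the partner starts the dependent subtask.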

7.3. Evaluation of cHRI

As per Figure 10a, trust (reliability, confidence, dependability) was higher for the affective expressions than for the no-affect case. The human's trust in the robot depends on the robot's performance (e.g., efficiency) and fault status (e.g., wrong part manipulation) [56]. The higher performance (Table 7) and the lower number of reported faults (Table 9) might have enhanced the human's trust in the robot for the dynamic affect case over the no-affect case. Differences in cognitive effects between affective expressions and text message displays might explain the different trust levels between those two expression conditions [69]. ANOVAs (subjects, IVs) conducted separately for each of the trust dimensions (reliability, confidence, dependability) showed that the variations in trust among the three IVs (no affect, text message display, affect display) were statistically significant for all three trust criteria (p < 0.05), which indicated the differential effects of the IVs on trust. The variations between the subjects were not statistically significant (p > 0.05 in all three cases), which indicated the generality of the findings.
Figure 10b shows that communication between the robot and the human during the assembly increased significantly due to the affective expressions of the robot in dynamic contexts. We posited that the affective expressions served as visual guides or interfaces for humans in dynamic contexts that made communication easier and clearer. ANOVA also showed statistically significant variations in communication among no affect, text message display, and affect display cases (p < 0.05), but no variation among the subjects (p > 0.05).
For the SAGAT, we determined the percentage of accuracy in the responses made by the subjects to the SA queries for each SA dimension and each expression condition (IV) [62]. Figure 11 shows the SA results in terms of accuracy (%) in the responses to the SA queries [65]. The results show that situation awareness increased with the affective expressions. In the no-affect condition, the human had some situational perception but little situational comprehension and prediction. This means that the human could perceive the situation based on his/her own observations but faced difficulty comprehending and predicting it due to a lack of, or confusion about, contextual information. We posited that this happened because the communication between the human and the robot was poor with no affective expressions, owing to the absence of a transparent visual interface. In contrast, due to the high-level communication, transparency, and predictability of robot behaviors provided through the affective expressions, with their favorable effects on human cognition, situation awareness, especially situation comprehension and prediction, was high for the dynamic affect case. The weaker cognitive effects of text messages could explain the lower situation awareness for the text message case compared with the dynamic affect case [69]. ANOVAs (IVs, SA dimensions) showed that the variations in situation awareness (accuracy %) among the three IVs (no affect, text message display, affect display) were statistically significant (p < 0.05), which indicated the differential effects of the IVs on situation awareness. The variations in situation awareness (accuracy %) among the SA dimensions were also statistically significant (p < 0.05), which indicated that the subjects' abilities to perceive, comprehend, and predict situations differed, which is rational [62].
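The accuracy metric described above can be computed with a simple routine; the sketch below is a minimal illustration in which the query identifiers, the response records, and the tolerance handling are assumptions, not the exact query set of Table 6.

```python
from collections import defaultdict

def sagat_accuracy(responses, ground_truth, tolerance=0.0):
    """Percentage of correct SA-query responses per (condition, SA level).

    responses: iterable of dicts with keys 'condition', 'sa_level',
               'query_id', and 'answer'.
    ground_truth: dict mapping query_id -> correct answer.
    Numeric answers are accepted if within `tolerance` of the ground truth;
    other answers must match exactly.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for r in responses:
        key = (r["condition"], r["sa_level"])
        truth = ground_truth[r["query_id"]]
        if isinstance(truth, (int, float)) and isinstance(r["answer"], (int, float)):
            correct = abs(r["answer"] - truth) <= tolerance
        else:
            correct = r["answer"] == truth
        totals[key] += 1
        hits[key] += int(correct)
    return {key: 100.0 * hits[key] / totals[key] for key in totals}

# Hypothetical example: one perception-level query in the affect display condition.
gt = {"q_parts_remaining": 2}
resp = [{"condition": "affect_display", "sa_level": "perception",
         "query_id": "q_parts_remaining", "answer": 2}]
print(sagat_accuracy(resp, gt))  # {('affect_display', 'perception'): 100.0}
```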
As Figure 12 shows, the mental-demand workload for the dynamic affective expressions was lower than that for the no-affect case. This might have resulted from the better communication, smoother workflow, improved transparency of the situations, and clearer cooperation between the human and the robot, as well as the favorable cognitive effects of the affective expressions on the human [24]. The physical-demand workload was low in all cases, which indicated that the assembly was light (low weight). Still, the dynamic affect case produced a lower physical-demand workload than the no-affect case. In the no-affect case, the human sometimes attempted a wrong part orientation and then had to rework the assembly because he/she could not recognize the mistake easily and promptly in the absence of a visual interface between the robot and the human to display information about it. We think that this additional rework/corrective work increased the physical-demand workload for the no-affect case.
For the dynamic affect case, the temporal-demand workload was higher than the other workload dimensions. This happened because the dynamic affective expressions resulted in better communication, a smoother and faster workflow, and clearer cooperation between the human and the robot, which decreased the idle or lost time and pushed the worker to assemble quickly to keep pace with the robot, thus imposing more temporal load on the human. For the dynamic affective expressions, the human's faults (attempts at wrong orientations) were reduced and the work cycle time decreased, which helped the human meet the expected performance easily and thereby reduced the performance-related workload.
In contrast, for no affect, the idle time and the faults were high, which imposed a high performance-related workload. For dynamic affect, the human had real-time communication with the detection systems through the affective expressions of the robot, which reduced the idle time, the uncertainty about the input parts, and the risk of wrong manipulation or wrong orientation, and thus reduced the frustration-related workload. In contrast, for no affect, the high idle time, uncertainties, and risks imposed a high frustration-related workload. Better communication, a smoother workflow, and clearer cooperation between the human and the robot through the affective expressions reduced hardship for the human, which imposed a lower effort-related workload than that imposed by the no-affect case. The text message displays produced a higher cognitive workload than the affective expressions, probably due to the lower cognitive, perceptual, and attentional effects of text messages on the human compared with those of the affective expressions [24,69]. ANOVAs (subjects, IVs) conducted separately for each of the six cognitive workload dimensions showed that the variations in workload among the three IVs (no affect, text message display, affect display) were statistically significant for all six workload criteria (p < 0.05), which indicated the differential effects of the IVs on cognitive workload. The variations between subjects were not statistically significant (p > 0.05 for all six criteria), which indicated the generality of the results.
The text messages were less salient in drawing human attention, which might explain the higher cognitive workload and lower situation awareness in the text message case. We believe that the better cognitive, perceptual, and attentional effects, superior situation awareness, and favorable workload associated with the dynamic affective expressions resulted in better trust (Figure 10) and pHRI (Figure 9, Table 10 and Table 11) for the dynamic affect case than for the text message case. We also believe that the favorable workload, higher situation awareness, and improved pHRI for the dynamic affect case contributed to better efficiency (Table 7), safety (Table 10), and quality (Table 8 and Table 9) than the text message case or the no-affect case.

8. Discussion

The proposed research contributed to the integration of several innovative aspects, such as understanding human affect in collaborative tasks, modeling human affects, deriving an affect-based motion planning strategy (Figure 6), and evaluating the effectiveness of the strategy through a comprehensive evaluation scheme. The selected assembly task was simple; it could be made more complex, e.g., the robot could hand over the parts to the human instead of only manipulating them, the robot itself could attach the parts using screws and a screwdriver, and so forth. However, our objective was not to show how complex an assembly the Baxter robot and the human could perform collaboratively. Instead, the objective was to investigate the roles of the robot's situation-based dynamic affective expressions, acting as human–robot visual cognitive interfaces, in HRI and assembly performance. We believe that the presented approaches should be applicable to all types of HRC in assembly, including changes in assembly processes and the addition of new subtasks, without any loss of generality. However, the events (Section 5) and the types of affects could vary with changes in assembly types, processes, subtasks, etc. The generality of the findings could be further enhanced by increasing the number of subjects, interaction modes, tasks, etc. In addition, to make the findings applicable in industrial settings, it might be necessary to include subjects of different demographics (e.g., different ages, cognitive capacities, etc.) instead of similar individuals.
Different types of expressions of situations by the robot in dynamic contexts (e.g., text message displays on the robot's face, robot voice/sound and speech, robot gestures, body language, affective expressions, etc.) might play different roles in conveying the task situations to the human, and their effects on HRI and assembly performance might also differ. We found the affective expressions superior to the text message displays. However, the results were intended only to benchmark the benefits of dynamic affective expressions against another potential mode of feedback or expression of situations (e.g., text message displays) by the robot. In our study, the text message was comparable with the affect, but text and affect had different strengths and weaknesses: the meaning of the text was clear, but it had no emotional value, whereas the affect impacted the emotion of the human but could be ambiguous. It is not clear whether combining text and affect would ease the bilateral communication or make it more complex. In real applications, the proper integration of other modalities with dynamic affects, such as combining text, affect, and sound, might produce better results than any individual mode.
We used subjective criteria to measure HRI and assembly performance. A few other objective criteria were also used. We believe that the subjective evaluations were reliable because (i) they followed standard methods and metrics such as SAGAT, NASA TLX, Likert scale, etc., (ii) the objective results were in line with the subjective results, and (iii) the subjective evaluations were proven effective and reliable in many state-of-the-art works [70], etc.
Robot–human communication occurred during the collaboration, i.e., communication was part of the collaboration, but the study was not focused only on communication. The robot communicated through affective expressions, which were nonverbal messages conveying emotional meanings, and such messages could establish and maintain relationships [59]. For example, a happiness sign could convey pleasure or satisfaction [59]. It was not mandatory that the communication always be two-way [58,59]. In most cases, the communication between the robot and the human presented in this article was one-way, i.e., from the robot to the human. It is true that the robot displaying different faces could represent some error conditions, but those error conditions could also convey grounding or a message to the human from the robot [58,59]. Here, the robot had the role of information or message provider through affective expressions, and the human co-worker had the role of information or message receiver and action taker based on what the human perceived from the affective expressions. Such information flow (providing and receiving information) could be treated as communication [58,59], and it had an impact on HRI and assembly performance, as the results showed.
Computing the affective states of the robot based on affect intensities did not mean sensing affects objectively. More specifically, we did not mean “objectively sensed affective states”; instead, we meant “objectively computed affective states”. It was an issue of computation, not an issue of sensing or feeling. In addition, the affective states could certainly be coded with numeric values in terms of affect intensity, though those numbers might be nominal values that were simply added and subtracted. In fact, the computed values of the affective states obtained through the computational model could not give actual feelings of affect either to the human or to the robot. The computed affective state values could be used to map the affective states of humans for different scenarios, which could be convenient to use with different algorithms, control systems, etc. for artificial systems that aim for human-like affects [39]. The effort to derive computational models of human affects was not new; the literature shows strong interest in such computational models [45,46,47,48]. We were inspired by such existing initiatives, but we applied the affect model to the motion planning of the robot and verified its effectiveness in real scenarios, which was novel. As alternatives, we could use a data-driven learning and control approach [71] or a fuzzy approach [72] to heuristically determine and control the affective states of the robot instead of the proposed subjective-type deterministic affect model. The data-driven and fuzzy approaches are broader in scope, but the proposed deterministic affect model simplified the solution of the problem in the presented scenario with its limited set of affective states.
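For concreteness, the following is a minimal sketch of this deterministic computation, assuming the weighted-sum form implied by the footnote of Table 3 (unit weights, zero decay); the intensity-to-affect lookup only reproduces the magnitude/sign pattern of Table 3 and is an illustrative assumption, not the paper's equation (1) verbatim.

```python
def total_affect_intensity(i_m, i_s_terms, i_u_terms,
                           w_m=1.0, w_s=1.0, w_u=1.0, w_d=0.0, decay=0.0):
    """Weighted-sum affect intensity with unit weights and zero decay,
    as assumed in the footnote of Table 3."""
    return w_m * i_m + w_s * sum(i_s_terms) + w_u * sum(i_u_terms) - w_d * decay

# Illustrative mapping from total intensity to affect name; the values mirror
# the magnitude/sign pattern tabulated in Table 3.
AFFECT_BY_INTENSITY = {
    +2.0: "Happy", +0.5: "Concentrating", 0.0: "Neutral",
    -0.5: "Confused", -1.0: "Cautious", -1.5: "Angry", -2.0: "Scared",
}

def name_affect(intensity, table=AFFECT_BY_INTENSITY):
    # Pick the tabulated intensity level closest to the computed value.
    nearest = min(table, key=lambda level: abs(level - intensity))
    return table[nearest]

# Event-5 of Table 3: i_m = -1, no second-term contributions, and one -1
# third-term contribution give a total intensity of -2, which maps to "Scared".
print(name_affect(total_affect_intensity(-1.0, [], [-1.0])))
```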
The state-of-the-art works on SAGAT [62,63,64,65,66,73,74,75,76,77,78] show that its implementation needs major facilities as follows:
  • A real-time human in the loop in the simulation environment with the ability to randomly pause/freeze the simulation and blank all simulation windows (so that the subject cannot see what was going on but can recollect from the memory).
  • A set of queries corresponding to the 3 levels of SA for the task at the time of freeze that are determined from the results of a SA requirements analysis.
  • Knowledge of the real situation (ground truth) based on computer simulation databases or on expert experimenters’ responses to the same queries.
  • A scoring system to objectively measure/evaluate the situation awareness level comparing (within an acceptable range of tolerance) the actual responses to the ground truth.
However, the requirements and scope of SAGAT listed above were quite restrictive and needed to be adapted to the task situation to broaden SAGAT's applicability. For example, the simulation required expensive simulators, it was sometimes difficult to have the capability to pause the simulation [77], extensive preparation was required for the simulation [1], disruptive freezes could interrupt the natural flow of the task [73], and the simulation could not be applied to real tasks in real fields [78]. Hence, instead of using a simulation environment, freezing a physical activity at random times could help apply SAGAT to real activities [61,63]. For example, the paper-and-pencil version of SAGAT, as we implemented here, was used before the computer simulation for SAGAT was developed [64].
For SAGAT, freezing in the middle of the task could affect task performance, and it might not fit real applications [61]. Here, we stopped the physical activity at the end of an experimental trial (equivalent to freezing a simulation at a random time), quickly separated the subject from the experimental site so that the subject could not see the experimental scenario (equivalent to blanking all simulation windows), and then asked queries to assess the SA of the subject based on his/her very recent memories. Each subject conducted the experiment separately, and each subject kept the experimental experience, including the SA queries, confidential from everyone else [62]. The subject did not lose the memories necessary to respond to the SA queries at the end of the trial [61,63]. Administering SAGAT at the end of the trial did not reduce its effectiveness [61,64]. Instead, it might have allowed the subject to build a complete mental picture of the situation, which might ultimately have helped the subject respond to the SA queries better [62]. There could be multiple queries for each SA level [64], and the queries could be asked in random order [62], though Table 6 contains only a few sample SA queries.
A scoring system was needed for the SAGAT. However, as the literature shows [62,65], the scoring system could be customized to fit the specific needs of a SAGAT application. Expressing the SAGAT scoring results in terms of the percentage of correct SA query responses (accuracy) reported by the subjects for a particular task seemed suitable for the task presented herein [62,65]. We customized the SAGAT application to fit the needs and situation of the selected task, which was neither a limitation of our work nor a deviation from the SAGAT principles [62,63,64,65,66,73,74,75,76,77,78]. Instead, the approach could be treated as an innovation toward reducing the limitations of state-of-the-art SAGAT implementations and making the technique more practical, useful, and diversified.

9. Conclusions and Future Work

A hybrid cell was developed in which a human and a humanoid robot collaborated to assemble a few parts into a final product based on an optimized subtask allocation strategy. A computational model of the robot's affect was derived, inspired by that of humans, and an affect-based motion planning strategy was developed to help the robot generate its motions and change its affective expressions in dynamic contexts as a means of expressing feedback to the collaborating human about the situations it experienced during the assembly task. Three detection systems were developed to detect wrong part manipulation, potential collisions between the human and the robot, and wrong orientation of the assembly parts, and the detected situations were also expressed by the robot through its affective expressions in dynamic contexts. To benchmark the effects of the dynamic affect-based robot motion planning, the robot's responses to the human on task situations were alternatively expressed by displaying relevant text messages on the robot's face screen. A comprehensive evaluation scheme for HRI and assembly performance was developed. The results justified the effectiveness of the subtask allocation strategy, the dynamic affect-based motion planning strategy, the detection systems, and the evaluation scheme. The results showed that the dynamic affect-based robot motion planning produced significantly better HRI and assembly performance than the motion planning strategy that expressed the robot's responses through text message displays on the robot's face screen or that used no affective expressions at all. The results are novel and important, and they encourage the application of dynamic affect-based motion planning algorithms to robots collaborating with humans in flexible light assembly in manufacturing, which could improve assembly performance in terms of productivity, quality, and safety.
In the near future, we will enable the robot to perceive the human's affective states to improve the bilateral communication between the human and the robot, and we will investigate its impacts on HRI and assembly performance. To investigate human–robot bilateral affects, we will design and evaluate different novel data-driven and fuzzy control approaches for the robot. We will also include subjects of different demographics to make the results more reliable and general.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board of Clemson University (approved code is 2014-241).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The author acknowledges the support of BMW Manufacturing Co., Greenville, SC, USA, for the Baxter robot used in the experiments. The author also acknowledges the support of the subjects of the experiments for their voluntary participation.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. The Manufacturing Institute. The Facts about Modern Manufacturing, 8th ed.; The Manufacturing Institute: Manchester, UK, 2009. [Google Scholar]
  2. Baily, M.N.; Bosworth, B.P. U.S. manufacturing: Understanding its past and its potential future. J. Econ. Perspect. 2015, 28, 3–26. [Google Scholar] [CrossRef]
  3. Aziz, R.; Rani, M.; Rohani, J.; Adeyemi, A.; Omar, N. Relationship between working postures and MSD in different body regions among electronics assembly workers in Malaysia. In Proceedings of the IEEE International Conference on Industrial Engineering and Engineering Management, Bangkok, Thailand, 10–13 December 2013; pp. 512–516. [Google Scholar]
  4. Patel, R.; Hedelind, M.; Lozan, P. Enabling robots in small-part assembly lines: The ROSETTA approach—An industrial perspective. In Proceedings of the 7th German Conference on Robotics, Munich, Germany, 21–22 May 2012; pp. 1–5. [Google Scholar]
  5. Rethink Robotics. Available online: http://www.rethinkrobotics.com (accessed on 10 February 2024).
  6. Kinova Robot. Available online: https://www.kinovarobotics.com (accessed on 10 February 2024).
  7. KUKA Robotics. Available online: www.kuka-robotics.com (accessed on 10 February 2024).
  8. Fryman, J.; Matthias, B. Safety of industrial robots: From conventional to collaborative applications. In Proceedings of the 7th German Conference on Robotics, Munich, Germany, 21–22 May 2012; pp. 1–5. [Google Scholar]
  9. Schmidbauer, C.; Zafari, S.; Hader, B.; Schlund, S. An empirical study on workers’ preferences in human–robot task assignment in industrial assembly systems. IEEE Trans. Human-Mach. Syst. 2023, 53, 293–302. [Google Scholar] [CrossRef]
  10. Zhang, M.; Li, C.; Shang, Y.; Liu, Z. Cycle time and human fatigue minimization for human-robot collaborative assembly cell. IEEE Robot. Autom. Lett. 2022, 7, 6147–6154. [Google Scholar] [CrossRef]
  11. Tan, J.; Duan, F.; Zhang, Y.; Watanabe, K.; Kato, R.; Arai, T. Human-robot collaboration in cellular manufacturing: Design and development. In Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA, 10–15 October 2009; pp. 29–34. [Google Scholar]
  12. Wilcox, R.; Shah, J. Optimization of multi-agent workflow for human-robot collaboration in assembly manufacturing. In Proceedings of the AIAA Infotech@Aerospace, Garden Grove, CA, USA, 19–21 June 2012. [Google Scholar]
  13. Wilcox, R.; Nikolaidis, S.; Shah, J. Optimization of temporal dynamics for adaptive human-robot interaction in assembly manufacturing. In Proceedings of the Robotics: Science and Systems Conference, Sydney, NSW, Australia, 9–13 July 2012. [Google Scholar]
  14. Nikolaidis, S.; Lasota, P.; Rossano, G.; Martinez, C.; Fuhlbrigge, T.; Shah, J. Human-robot collaboration in manufacturing: Quantitative evaluation of predictable, convergent joint action. In Proceedings of the 44th International Symposium on Robotics, Seoul, Republic of Korea, 24–26 October 2013. [Google Scholar]
  15. Hawkins, K.; Bansal, S.; Vo, N.; Bobick, A. Modeling structured activity to support human-robot collaboration in the presence of task and sensor uncertainty. In Proceedings of the IEEE/RSJ IROS Workshop on Cognitive Robotics Systems, Tokyo, Japan, 3 November 2013. [Google Scholar]
  16. Kaipa, K.; Morato, C.; Liu, J.; Gupta, S. Human-robot collaboration for bin-picking tasks to support low-volume assemblies. In Proceedings of the Workshop on Human-Robot Collaboration for Industrial Manufacturing, Robotics: Science and Systems Conference, Hong Kong, China, 31 May–7 June 2014. [Google Scholar]
  17. Gadensgaard, D.; Bourne, D. Human/robot multi-initiative setups for assembly cells. In Proceedings of the Seventh International Conference on Autonomic and Autonomous Systems (IARIA 2011), Venice, Italy, 22–27 May 2011; pp. 1–6. [Google Scholar]
  18. Sauppe, A.; Mutlu, B. Effective task training strategies for instructional robots. In Proceedings of the Robotics: Science and Systems Conference, Rome, Italy, 13–15 July 2014. [Google Scholar]
  19. Sun, Y.; Wang, W.; Chen, Y.; Jia, Y. Learn how to assist humans through human teaching and robot learning in human–robot collaborative assembly. IEEE Trans. Syst. Man Cybern. Syst. 2022, 52, 728–738. [Google Scholar] [CrossRef]
  20. Li, S.; Zheng, P.; Fan, J.; Wang, L. Toward proactive human–robot collaborative assembly: A multimodal transfer-learning-enabled action prediction approach. IEEE Trans. Ind. Electron. 2022, 69, 8579–8588. [Google Scholar] [CrossRef]
  21. Gleeson, B.; Maclean, K.; Haddadi, A.; Croft, E. Gestures for industry intuitive human-robot communication from human observation. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Tokyo, Japan, 3–6 March 2013; pp. 349–356. [Google Scholar]
  22. Malchus, K.; Jaecks, P.; Damm, O.; Stenneken, P.; Meyer, C.; Wrede, B. The role of emotional congruence in human-robot interaction. In Proceedings of the ACM/IEEE International Conference on Human-Robot Interaction, Tokyo, Japan, 3–6 March 2013; pp. 191–192. [Google Scholar]
  23. Kawai, S.; Takano, H.; Nakamura, K. Pupil diameter variation in positive and negative emotions with visual stimulus. In Proceedings of the 2013 IEEE International Conference on Systems, Man and Cybernetics, Manchester, UK, 13–16 October 2013; pp. 4179–4183. [Google Scholar]
  24. Hudlicka, E.; Broekens, J. Foundations for modelling emotions in game characters: Modelling emotion effects on cognition. In Proceedings of the 2009 International Conference on Affective Computing and Intelligent Interaction and Workshops, Amsterdam, The Netherlands, 10–12 September 2009; pp. 1–6. [Google Scholar]
  25. Breazeal, C. Sociable Machines: Expressive social exchange between humans and robots. Ph.D. Thesis, EECS Department, Massachusetts Institute of Technology, Cambridge, MA, USA, 2000. [Google Scholar]
  26. Breemen, A.; Yan, X.; Meerbeek, B. iCat: An animated user-interface robot with personality. In Proceedings of the International Joint Conference on Autonomous Agents and Multiagent Systems, Amsterdam, The Netherlands, 25–29 July 2005; pp. 143–144. [Google Scholar]
  27. Wachsmuth, S.; Schulz, S.; Lier, F.; Siepmann, F. The robot head ‘Flobi’: A research platform for cognitive interaction technology. In Proceedings of the German Conference on Artificial Intelligence, Saarbrücken, Germany, 24–27 September 2012; pp. 3–7. [Google Scholar]
  28. Murray, J.; Canamero, L.; Hiolle, A. Towards a model of emotion expression in an interactive robot head. In Proceedings of the 2009 IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 27 September–2 October 2009; pp. 627–632. [Google Scholar]
  29. Zecca, M.; Endo, N.; Momoki, S.; Itoh, K.; Takanishi, A. Design of the humanoid robot KOBIAN—Preliminary analysis of facial and whole-body emotion expression capabilities. In Proceedings of the 2008 IEEE-RAS International Conference on Humanoid Robots, Daejeon, Republic of Korea, 1–3 December 2008; pp. 487–492. [Google Scholar]
  30. Nanty, A.; Gelin, R. Fuzzy controlled PAD emotional state of a NAO robot. In Proceedings of the 2013 Conference on Technologies and Applications of Artificial Intelligence, Taipei, Taiwan, 6–8 December 2013; pp. 90–96. [Google Scholar]
  31. Hashimoto, M.; Yamano, M.; Usui, T. Effects of emotional synchronization in human-robot KANSEI communications. In Proceedings of the 2009 IEEE International Symposium on Robot and Human Interactive Communication, Toyama, Japan, 27 September–2 October 2009; pp. 52–57. [Google Scholar]
  32. Jimenez, F.T.; Yoshikawa, T.; Furuhashi, T.; Kanon, M. Psychological effects of educational-support robots using an emotional expression model. In Proceedings of the 2014 Joint 7th International Conference on Advanced Intelligent Systems and 15th International Symposium on Soft Computing and Intelligent Systems, Kitakyushu, Japan, 3–6 December 2014; pp. 95–99. [Google Scholar]
  33. Takanishi, A.; Sato, K.; Segawa, K.; Takanobu, H.; Miwa, H. An anthropomorphic head-eye robot expressing emotions based on equations of emotion. In Proceedings of the 2000 IEEE International Conference on Robotics and Automation, San Francisco, CA, USA, 24–28 April 2000; pp. 2243–2249. [Google Scholar]
  34. Rahman, S. Generating human-like social motion in a human-looking humanoid robot: The biomimetic approach. In Proceedings of the 2013 IEEE International Conference on Robotics and Biomimetics, Shenzhen, China, 12–14 December 2013; pp. 1377–1383. [Google Scholar]
  35. Nishio, S.; Ishiguro, H.; Hagita, N. Geminoid: Teleoperated android of an existing person. In Humanoid Robots: New Developments; I-Tech Publishing: Vienna, Austria, 2007; pp. 343–352. [Google Scholar]
  36. Eyssel, F.; Kuchenbrandt, D.; Hegel, F.; Ruiter, L. Activating elicited agent knowledge: How robot and user features shape the perception of social robots. In Proceedings of the 2012 IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, 9–13 September 2012; pp. 851–857. [Google Scholar]
  37. Hegel, F.; Krach, S.; Kircher, T.; Wrede, B.; Sagerer, G. Understanding social robots: A user study on anthropomorphism. In Proceedings of the 2008 IEEE International Symposium on Robot and Human Interactive Communication, Munich, Germany, 1–3 August 2008; pp. 574–579. [Google Scholar]
  38. Kitagawa, Y.; Ishikura, T.; Song, W.; Mae, Y.; Minami, M.; Tanaka, K. Human-like patient robot with chaotic emotion for injection training. In Proceedings of the 2009 ICCAS-SICE, Fukuoka, Japan, 18–21 August 2009; pp. 4635–4640. [Google Scholar]
  39. Rahman, S.; Wang, Y. Dynamic affection-based motion control of a humanoid robot to collaborate with human in flexible assembly in manufacturing. In Proceedings of the ASME Dynamic Systems and Controls Conference, Columbus, OH, USA, 28–30 October 2015; Paper No. DSCC2015-9841. p. V003T40A005. [Google Scholar]
  40. Rahman, S.M.M. Bioinspired dynamic affect-based motion control of a humanoid robot to collaborate with human in manufacturing. In Proceedings of the 2019 12th International Conference on Human System Interaction (HSI), Richmond, VA, USA, 25–27 June 2019; pp. 76–81. [Google Scholar]
  41. Rethink Robotics Videos. Available online: https://robotsguide.com/robots/baxter (accessed on 24 December 2023).
  42. Rethink Robotics Baxter. Available online: https://spectrum.ieee.org/rethink-robotics-baxter-robot-factory-worker (accessed on 24 December 2023).
  43. Chen, F.; Sekiyama, K.; Cannella, F.; Fukuda, T. Optimal subtask allocation for human and robot collaboration within hybrid assembly system. IEEE Trans. Autom. Sci. Eng. 2014, 11, 1065–1075. [Google Scholar] [CrossRef]
  44. Rahman, S.; Sadr, B.; Wang, Y. Trust-based optimal subtask allocation and model predictive control for human-robot collaborative assembly in manufacturing. In Proceedings of the ASME Dynamic Systems and Controls Conference, Columbus, OH, USA, 28–30 October 2015; Paper No. DSCC2015-9850. p. V002T32A004. [Google Scholar]
  45. Godfrey, W.; Nair, S.; Hwa, K. Towards a dynamic emotional model. In Proceedings of the 2009 IEEE International Symposium on Industrial Electronics, Seoul, Republic of Korea, 5–8 July 2009; pp. 1932–1936. [Google Scholar]
  46. Belhaj, M.; Kebair, F.; Said, L. A computational model of emotions for the simulation of human emotional dynamics in emergency situations. Int. J. Comput. Theory Eng. 2014, 6, 227–233. [Google Scholar] [CrossRef]
  47. Becker, C.; Kopp, S.; Wachsmuth, I. Simulating the Emotion Dynamics of a Multimodal Conversational Agent; Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2004; Volume 3068, pp. 154–165. [Google Scholar]
  48. Ryoo, S. Emotion affective color transfer. Int. J. Softw. Eng. Appl. 2014, 8, 227–232. [Google Scholar]
  49. Sidner, C.; Lee, C.; Kidd, C.; Lesh, N.; Rich, C. Explorations in engagement for humans and robots. Artif. Intell. 2005, 166, 140–164. [Google Scholar] [CrossRef]
  50. Carifio, J.; Rocco, J.P. Ten common misunderstandings, misconceptions, persistent myths and urban legends about Likert scales and Likert response formats and their antidotes. J. Soc. Sci. 2007, 3, 106–116. [Google Scholar] [CrossRef]
  51. Kawamura, K.; Rogers, T.; Hambuchen, K.; Erol, D. Towards a human–robot symbiotic system. Robot. Comput. Integr. Manuf. 2003, 19, 555–565. [Google Scholar] [CrossRef]
  52. Hoffman, G. Evaluating fluency in human-robot collaboration. In Proceedings of the RSS Workshop on Human-Robot Collaboration, Berlin, Germany, 24–28 June 2013. [Google Scholar]
  53. Carlson, T.; Demiris, Y. Collaborative control for a robotic wheelchair: Evaluation of performance, attention, and workload. IEEE Trans. Syst. Man Cybern. Part B 2012, 42, 876–888. [Google Scholar] [CrossRef]
  54. Hart, S.; Staveland, L. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Adv. Psychol. 1988, 52, 139–183. [Google Scholar]
  55. Desai, M.; Medvedev, M.; Vazquez, M.; McSheehy, S.; Gadea-Omelchenko, S.; Bruggeman, C.; Steinfeld, A.; Yanco, H. Effects of changing reliability on trust of robot systems. In Proceedings of the 2012 IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 5–8 March 2012; pp. 73–80. [Google Scholar]
  56. Lee, J.D.; Moray, N. Trust, control strategies and allocation of function in human-machine systems. Ergonomics 1992, 35, 1243–1270. [Google Scholar] [CrossRef]
  57. Cobley, P. Communication: Definitions and concepts. In The International Encyclopedia of Communication; Donsbach, W., Ed.; Wiley Publishing: Hoboken, NJ, USA, 2008. [Google Scholar] [CrossRef]
  58. Tomasello, M. Origins of Human Communication; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  59. De Vito, J. Human Communication: The Basic Course, 13th ed.; Pearson Publishers: London, UK, 2014. [Google Scholar]
  60. Larochelle, B.; Kruijff, G.; Smets, N.; Mioch, T.; Groenewegen, P. Establishing human situation awareness using a multi-modal operator control unit in an urban search & rescue human-robot team. In Proceedings of the 2011 IEEE International Symposium on Robot and Human Interactive Communication, Atlanta, GA, USA, 31 July–3 August 2011; pp. 229–234. [Google Scholar]
  61. Cooper, S.; Porter, J.; Peach, L. Measuring situation awareness in emergency settings: A systematic review of tools and outcomes. Open Access Emerg. Med. 2014, 6, 1–7. [Google Scholar] [CrossRef]
  62. Endsley, M. Direct measurement of situation awareness: Validity and use of SAGAT. In Situation Awareness Analysis and Measurement; Endsley, M.R., Garland, D.J., Eds.; Lawrence Erlbaum Associates: Mahwah, NJ, USA, 2000; pp. 147–173. [Google Scholar]
  63. Endsley, M.; Selcon, S.; Hardiman, T.; Croft, D. A comparative analysis of SAGAT and SART for evaluations of situation awareness. In Proceedings of the Human Factors and Ergonomics Society 42nd Annual Meeting, Chicago, IL, USA, 5–9 October 1998; pp. 82–86. [Google Scholar]
  64. Endsley, M. Situation awareness global assessment technique (SAGAT). In Proceedings of the 1988 IEEE National Aerospace and Electronics Conference, Dayton, OH, USA, 23–27 May 1988; Volume 3, pp. 789–795. [Google Scholar]
  65. Gartenberg, D.; Breslow, L.; McCurry, J.; Trafton, J.G. Situation awareness recovery. Hum. Factors J. Hum. Factors Ergon. Soc. 2014, 56, 710–727. [Google Scholar] [CrossRef]
  66. Jeannot, E.; Kelly, C.; Thompson, D. The Development of Situation Awareness Measures in ATM Systems; EUROCONTROL: Brussels, Belgium, 2003. [Google Scholar]
  67. Kim, T.; Hinds, P. Who should I blame? effects of autonomy and transparency on attributions in human-robot interaction. In Proceedings of the 15th IEEE International Symposium on Robot and Human Interactive Communication, Hatfield, UK, 6–8 September 2006; pp. 80–85. [Google Scholar]
  68. Sanders, T.L.; Wixon, T.; Schafer, K.E.; Chen, J.Y.C.; Hancock, P.A. The influence of modality and transparency on trust in human-robot interaction. In Proceedings of the 2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), San Antonio, TX, USA, 3–6 March 2014; pp. 156–159. [Google Scholar]
  69. Jägemar, M.; Dodig-Crnkovic, G. Cognitively Sustainable ICT with ubiquitous mobile services—Challenges and opportunities. In Proceedings of the 2015 IEEE/ACM 37th International Conference on Software Engineering, Florence, Italy, 16–24 May 2015; pp. 531–540. [Google Scholar]
  70. Ikeura, R.; Inooka, H.; Mizutani, K. Subjective evaluation for maneuverability of a robot cooperating with humans. J. Robot. Mechatron. 2002, 14, 514–519. [Google Scholar] [CrossRef]
  71. Wang, M.; Qiu, J.; Yan, H.; Tian, Y.; Li, Z. Data-driven control for discrete-time piecewise affine systems. Automatica 2023, 155, 111168. [Google Scholar] [CrossRef]
  72. Wang, M.; Lam, H.-K.; Qiu, J.; Yan, H.; Li, Z. Fuzzy-affine-model-based filtering design for continuous-time Roesser-Type 2-D nonlinear systems. IEEE Trans. Cybern. 2023, 53, 3220–3230. [Google Scholar] [CrossRef]
  73. Loft, S.; Bowden, V.; Braithwaite, J.; Morrell, D.B.; Huf, S.; Durso, F.T. Situation awareness measures for simulated submarine track management. Hum. Factors 2015, 57, 298–310. [Google Scholar] [CrossRef]
  74. Endsley, M. Measurement of situation awareness in dynamic systems. Hum. Factors 1995, 37, 65–84. [Google Scholar] [CrossRef]
  75. Endsley, M. Level of automation effects on performance, situation awareness and workload in a dynamic control task. Ergonomics 1999, 42, 462–492. [Google Scholar] [CrossRef]
  76. Kaber, D.; Onal, E.; Endsley, M. Design of automation for telerobots and the effect on performance, operator situation awareness and subjective workload. Hum. Factors Ergon. Manuf. 2000, 10, 409–430. [Google Scholar] [CrossRef]
  77. Salmon, P.; Stanton, N.; Walker, G.; Green, D.D. Situation awareness measurement: A review of applicability for C4i environments. Appl. Ergon. 2006, 37, 225–238. [Google Scholar] [CrossRef]
  78. Stanton, N.; Salmon, P.; Walker, G.; Baber, C.; Jenkins, D. Human Factors Methods: A Practical Guide for Engineering and Design, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
Figure 1. Assembly of automotive center console, (a) the components (parts) to be assembled, (b) back views, and (c) front views of the finally assembled product (center console) [courtesy: BMW Manufacturing Co., Spartanburg, SC, USA].
Figure 2. Current practices of center console assembly, (a) the parts are stored on different shelves in the assembly area so that a human worker can pick them up for assembly, and (b) the human worker standing in front of a small table in the assembly area assembles the parts manually using screws and a screwdriver [courtesy: BMW Manufacturing Co., Spartanburg, SC, USA].
Figure 3. The hybrid cell for human–robot collaborative assembly. The left arm of the robot was used for the assembly, but the right arm was unused. The components/parts required for three finally assembled products (center consoles) are shown together.
Figure 4. A simple human-human collaborative assembly task to understand humans’ affective expressions for different task situations. In (a), the first human sequentially manipulated 3 objects from ‘A’ to very close to the second human and placed the objects near the second human at ‘B’. In (b), the second human sequentially assembled the 3 objects at ‘B’ following specified instructions and used a screw and a screwdriver to fasten the objects (parts) to produce a finished product (assembled product).
Figure 5. (a–g) Various affective expressions that the robot might show based on task situations/events during HRC in assembly, (h) expression of no affect, and (i) expression of a situation/event through a text message instead of an affect.
Figure 6. Flowchart of the robot's motion planning associated with its affective expressions in dynamic contexts (different assembly situations/events) for the pick-up and place operations to help the human co-worker assemble the parts and produce finished products. The affective expressions were estimated based on the subjective estimations of affect intensities (Table 3) and verified by the subjective evaluation results (Table 2). The discrete state-machine-type diagram explains which affective state transitions were feasible/rational in various situations during HRC in assembly and what triggered them.
Figure 7. The 7-point equal-interval rating scale (Likert scale) used by the subject (human co-worker) to assess the HRI subjectively.
Figure 8. (a) The robot expressed the happiness affect when it detected that the part (green circled) it was going to pick was the right (correct) part, (b) the robot expressed the confusion affect when it detected that the part (yellow circled) it was going to pick was the wrong part, (c) the robot expressed the anger affect if the wrong part (red circled) was not replaced by the right one within a specified time, (d) the robot expressed the scared affect when it detected that the human hand (red circled) was close to the robot’s arm and there was a potential collision, (e) the robot expressed the cautious affect when it detected that the human made a mistake (e.g., assembled a part with the wrong orientation, red circled).
Figure 9. Comparison of pHRI evaluation results among the three expression conditions of the robot for the collaborative assembly task (the higher pHRI the better).
Figure 10. (a) Comparison of human’s trust in the robot among three expression conditions (IVs) of the robot for the collaborative assembly (the higher trust was the better, the highest trust score was limited to 7 as Figure 7 shows, over-trust was not considered), and (b) communication between the robot and the human worker for the robot’s three expression conditions.
Figure 11. Comparison of human’s situation awareness in terms of accuracy (%) in the responses of the subjects to the SA queries among the three expression conditions (IVs) of the robot for the collaborative assembly (the higher situation awareness was the better).
Figure 12. Comparison of human’s cognitive workload (out of maximum score 100) among the three expression conditions (IVs) of the robot for the collaborative assembly (the lower workload was the better).
Table 1. Subtasks, subtask sequence (serial), and subtask allocation between the human and the robot in the hybrid cell to produce a center console.
Subtasks | Serial No. | Allocation to Agent
Picking a faceplate from “A” and placing it at “B” | 1 | Human
Picking a correct switch row from “C” and placing it at “E” | 2 | Robot
Attaching the switch rows to the faceplate correctly using screws and a screwdriver | 3 | Human
Picking a correct I-drive from “C” and placing it at “E” | 4 | Robot
Attaching the I-drive to the faceplate correctly using screws and a screwdriver | 5 | Human
Checking if the human attaches parts correctly | 6 | Robot
Dispatching the final assembled product to “D” on the table | 7 | Human
Table 2. The observed situations/contexts/events and the corresponding affective states of the collaborating humans as perceived by the subjects.
Observed Situations/Contexts/Events | Observed Affective States of the First Human Perceived by the Subjects
The first human was ready to receive a command for the task from the second human, but the command had not been made yet, and thus the first human was doing nothing but waiting and expecting to start following a command soon. The same happened to the second human, who waited to receive parts from the first human to assemble. | Neutral
The first human received a command to manipulate a part. The human needed to concentrate on his actions, e.g., detect the correct part to pick and manipulate it for the second human. The second human also concentrated on the assembly when he/she received the necessary parts from the first human. | Concentrating
While detecting the correct part by the first human before manipulating it for the second human, if the part was the correct one, the human expressed happiness as reflected on his/her face. It happened because the human found the right part that enabled him/her to move forward. Happiness was expressed on the second human’s face when he/she found the correct input part and the assembled product was found correct. | Happy
While working, sometimes one human (or human’s body parts) might come very close to another human. For example, the first human’s head could come very close to the second human, and the two heads might be about to collide with each other, or one human might unconsciously move his/her hand, which was about to hit another human or the assembly parts, etc. All those could cause potential harm or risk to humans and the task environment. | Scared
The first human checked the correctness of an input part, identified it as the right part, and then manipulated the part to the second human for assembly. However, the second human identified that the manipulated part was wrong. In addition, the second human might make mistakes during the assembly, e.g., he/she made the wrong orientation of the parts, etc. | Cautious
The part was available to the first human. The human needed to manipulate the part to the second human, who would assemble the part with other parts. However, the first human could not decide whether the available part would be the right one or not. A similar phenomenon might also happen if the second human could not decide how to orient the part during the assembly or whether the assembled product would be correct. | Confused
Before manipulating parts to the second human, the first human found the parts wrong, and then waited with the hope that the experimenter would correct it, but the wrong part was not replaced by the right part. Thus, the first human could not proceed until the problem was solved, and he/she did not see any hope to find the solution immediately, etc. | Angry
Table 3. Estimation 1 of affect intensities using a computational model and recognition (naming) of the robot’s affect for each event for the collaborative assembly task based on affect intensity magnitude and sign.
Events | $I_{m}^{a}(t)$ | $\sum_{l=1}^{N} I_{sl}^{a}(t)$ | $\sum_{l=1}^{N} I_{ul}^{a}(t)$ | $I^{a}(t)$ | Affect Name
Event-1 | 0 | 0 | 0 | 0 | Neutral
Event-2 | +0.5 | 0 | 0 | +0.5 | Concentrating
Event-3 | 1 | 1 | 0 | +2 | Happy
Event-4 | +0.5 | 0 | 0 | +0.5 | Concentrating
Event-5 | −1 | 0 | −1 | −2 | Scared
Event-6 | +0.5 | 0 | 0 | +0.5 | Concentrating
Event-7 | −1 | 0 | −1 | −2 | Scared
Event-8 | −0.5 | 0.5 | −0.5 | −0.5 | Confused
Event-9 | 1 | 1 | 0 | +2 | Happy
Event-10 | −1 | 0 | −0.5 | −1.5 | Angry
Event-11 | −0.5 | 0 | −0.5 | −1 | Cautious
1 $I_{m}^{a}(t)$ was estimated subjectively using (2). $I_{sl}^{a}(t)$ and $I_{ul}^{a}(t)$ were estimated subjectively based on the events. We ignored the decay in the intensity, i.e., we used $w_{d} = 0$, because the duration of each assembly event was small and thus the potential decay in the intensity could also be small. We assumed all the non-zero weights to be unity, i.e., $w_{m} = w_{s} = w_{u} = 1$, which meant that the intensity terms on the right-hand side of (1) carried equal importance/priority. The non-zero weights could be estimated through additional experiments or learned through machine learning algorithms; however, we kept them at unity to keep the algorithm simple and practical for the selected human–robot collaborative task.
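To make the bookkeeping in Table 3 concrete, the following is a minimal Python sketch of the weighted-sum intensity model summarized in the footnote, with the decay term dropped ($w_d = 0$) and unity weights. The class, the thresholds in name_affect(), and all identifiers are illustrative assumptions chosen to reproduce the rows of Table 3; they are not the paper's exact implementation of (1) and (2).

```python
# Minimal sketch of the affect-intensity bookkeeping summarized in the Table 3
# footnote: the total intensity is a weighted sum of the event-related term,
# the summed situation-related terms, and the summed uncertainty-related terms,
# with the decay term ignored (w_d = 0) and unity weights. The thresholds in
# name_affect() are illustrative assumptions, not the paper's code.

from dataclasses import dataclass
from typing import Sequence


@dataclass
class AffectModel:
    w_m: float = 1.0  # weight on the event-related intensity (assumed unity)
    w_s: float = 1.0  # weight on the summed situation intensities (assumed unity)
    w_u: float = 1.0  # weight on the summed uncertainty intensities (assumed unity)
    w_d: float = 0.0  # decay weight; set to zero here, as in the footnote

    def total_intensity(self, i_m: float, i_s: Sequence[float],
                        i_u: Sequence[float], decay: float = 0.0) -> float:
        """Weighted sum of the intensity components minus the (ignored) decay."""
        return (self.w_m * i_m + self.w_s * sum(i_s)
                + self.w_u * sum(i_u) - self.w_d * decay)


def name_affect(total: float) -> str:
    """Hypothetical sign/magnitude lookup consistent with the Table 3 rows."""
    if total == 0.0:
        return "Neutral"
    if total >= 2.0:
        return "Happy"
    if total > 0.0:
        return "Concentrating"
    if total <= -2.0:
        return "Scared"
    if total <= -1.5:
        return "Angry"
    if total <= -1.0:
        return "Cautious"
    return "Confused"


if __name__ == "__main__":
    model = AffectModel()
    # Event-8 of Table 3: i_m = -0.5, situation terms sum to 0.5,
    # uncertainty terms sum to -0.5, giving a total of -0.5 -> "Confused".
    total = model.total_intensity(-0.5, [0.5], [-0.5])
    print(total, name_affect(total))
```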
Table 4. Six dimensions of the NASA TLX and the descriptions relevant to the selected assembly task.
Dimensions | Questions on the Dimension | Description Specific to the Selected Assembly Task
Mental Demand | How much mental and perceptual activity was required? | The major sources of mental demand were ensuring that the correct parts were assembled with the correct orientation while avoiding any unsafe events.
Physical Demand | How much physical activity was required? | The major source of physical demand was the strength required to move the human's own arm to pick the parts, assemble them with the correct orientation, tighten the screws, and dispatch the finished products.
Temporal Demand | How much time pressure did you feel during the assembly? | The time pressure came from keeping pace with the speed of the robot and finishing the task on time, avoiding idle time and bottlenecks while meeting the targeted efficiency.
Performance | How successful were you in meeting the performance criteria for the assembly? | The mental pressure the subjects felt was to maintain the overall performance (e.g., efficiency, precision of the assembly) at a satisfactory level.
Frustration Level | How irritated, stressed, and annoyed did you feel during the assembly? | Potential frustrating phenomena during the assembly were as follows: (i) the human's mistake(s), e.g., the wrong orientation of parts; (ii) the robot's poor performance, e.g., the robot could not pick the part(s) for some reason, or the part(s) fell from the robot's end-effector during manipulation and thus did not reach the human at the right time; (iii) the robot temporarily losing its connection to the operating system due to poor connectivity, sensor data being transmitted with delays, etc.; (iv) safety issues such as collisions between the human and the robot; and (v) noises and disturbances in the work environment.
Effort | How hard did you have to work (mentally and physically) to accomplish your performance? | The level of effort or endeavor required to achieve the performance level by satisfying the mental, physical, and temporal demands and overcoming the frustrating phenomena.
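Since the six NASA TLX dimensions above are ultimately combined into a single workload score, a brief sketch of that aggregation may help. The version below assumes the common raw (unweighted) TLX, i.e., the mean of the six ratings on a 0–100 scale; the study may instead have used the weighted variant based on pairwise comparisons, and the function and rating names are illustrative.

```python
# Minimal sketch of combining the six NASA TLX dimension ratings into one
# workload score. This assumes the raw (unweighted) TLX, i.e., the mean of
# the six ratings on a 0-100 scale; a weighted variant based on pairwise
# comparisons is also common but is not reproduced here.

TLX_DIMENSIONS = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)


def raw_tlx(ratings: dict) -> float:
    """Unweighted mean of the six dimension ratings (each expected in 0-100)."""
    missing = [d for d in TLX_DIMENSIONS if d not in ratings]
    if missing:
        raise ValueError(f"missing TLX ratings: {missing}")
    return sum(ratings[d] for d in TLX_DIMENSIONS) / len(TLX_DIMENSIONS)


if __name__ == "__main__":
    # Illustrative ratings for one subject in one condition (not study data).
    example = {
        "mental_demand": 55, "physical_demand": 30, "temporal_demand": 45,
        "performance": 25, "effort": 50, "frustration": 20,
    }
    print(f"Raw TLX workload: {raw_tlx(example):.1f}")
```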
Table 5. Trust dimensions with descriptions relevant to the assembly.
Dimensions | Descriptions Specific to the Selected Assembly Task
Reliability (of the robot) | Whether the subject, based on the robot's performance, skills, and precision, could rely on or believe in the robot as a collaborator to maintain the required assembly performance and quality level.
Confidence (in the robot) | The human's confidence that, even if his/her own performance declined or he/she made mistakes, the robot's performance would remain high and it would not make any mistakes. This was not the human's self-confidence.
Dependability (on the robot) | The human's intention to collaborate only with the robot (neither working alone nor collaborating with another human) for all similar assembly tasks in the future.
Table 6. SAGAT: Situational awareness dimensions, queries relevant to the assembly task, and the possible evaluation scores.
SA Dimensions | Example Queries Asked to the Subjects by the Experimenter for the Selected Assembly Task | Score (Correct Answer) | Score (Wrong Answer)
(Situation) Perception | Did the robot suddenly stop its motion for a short time during the assembly? | 1 | 0
(Situation) Comprehension | What might be the reason for the sudden stop of the robot motion during the assembly? | 1 | 0
(Situation) Prediction | What might happen if the robot did not stop its motion during the assembly? | 1 | 0
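The SAGAT scoring in Table 6 is a simple tally of correct answers to the freeze-probe queries. The sketch below assumes binary scoring per query and aggregates the results into a percentage of correct answers per SA level; the tuple format and level names are illustrative, not the study's exact bookkeeping.

```python
# Minimal sketch of tallying the freeze-probe SAGAT queries with the binary
# scoring shown in Table 6 (1 for a correct answer, 0 otherwise), aggregated
# into a percentage of correct answers per SA level.

from collections import defaultdict


def sagat_scores(responses):
    """responses: iterable of (sa_level, is_correct) pairs from the queries."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for level, is_correct in responses:
        totals[level] += 1
        if is_correct:
            correct[level] += 1
    return {level: 100.0 * correct[level] / totals[level] for level in totals}


if __name__ == "__main__":
    demo = [("perception", True), ("comprehension", False), ("prediction", True)]
    print(sagat_scores(demo))  # {'perception': 100.0, 'comprehension': 0.0, 'prediction': 100.0}
```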
Table 7. Comparison of assembly efficiencies among three experiment conditions (the values in parentheses indicate standard deviations).
Efficiency Parameters | No Affect | Text Message | Dynamic Affect
Average cycle time in seconds to finish an assembly | 117.44 (3.51) | 102.37 (2.86) | 94.11 (3.06)
Average idle time in seconds per assembly: idle time for human | 9.98 (0.39) | 6.94 (0.26) | 1.277 (0.0917)
Average idle time in seconds per assembly: idle time for robot | 13.35 (0.53) | 8.13 (0.31) | 1.0067 (0.0569)
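The efficiency figures in Table 7 are averages over the logged assemblies. As a rough illustration of how such figures could be computed, the sketch below assumes each assembly log records a start time, an end time, and lists of idle intervals for the human and the robot; the logging format and field names are assumptions, not the study's instrumentation.

```python
# Rough illustration of computing Table 7-style efficiency figures from logged
# assemblies. Each log is assumed to record a start time, an end time, and
# lists of (start, end) idle intervals for the human and the robot, in seconds.

from statistics import mean, stdev


def interval_total(intervals):
    """Total duration (s) covered by a list of (start, end) intervals."""
    return sum(end - start for start, end in intervals)


def efficiency_summary(assembly_logs):
    cycle_times = [log["end"] - log["start"] for log in assembly_logs]
    human_idle = [interval_total(log["human_idle"]) for log in assembly_logs]
    robot_idle = [interval_total(log["robot_idle"]) for log in assembly_logs]
    return {
        "avg_cycle_time_s": (mean(cycle_times), stdev(cycle_times)),
        "avg_human_idle_s": (mean(human_idle), stdev(human_idle)),
        "avg_robot_idle_s": (mean(robot_idle), stdev(robot_idle)),
    }


if __name__ == "__main__":
    logs = [  # two illustrative assemblies, not study data
        {"start": 0.0, "end": 95.2, "human_idle": [(10.0, 11.2)], "robot_idle": [(40.0, 41.1)]},
        {"start": 0.0, "end": 93.4, "human_idle": [(12.0, 13.5)], "robot_idle": [(38.0, 38.9)]},
    ]
    print(efficiency_summary(logs))
```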
Table 8. Comparison of wrong part detection among three experiment conditions.
Part Correctness Parameters | No Affect | Text Message | Dynamic Affect
Wrong part was manipulated to the human subject per assembly | 0 | 0 | 0
Wrong part detected per assembly | 0.05 | 0.033 | 0.033
Detected wrong part was correctly reported by the subject per assembly | 0 | 0.017 | 0.033
Table 9. Comparison of wrong orientation detection among three experiment conditions.
Part Orientation Parameters | No Affect | Text Message | Dynamic Affect
Wrong orientation identified in the final product per assembly | 0 | 0 | 0
Wrong orientation detected per assembly | 0.150 | 0.133 | 0.117
Wrong orientation correctly reported by the subject per assembly | 0 | 0.083 | 0.117
Table 10. Comparison of safety incidents among three experiment conditions.
Safety Incidents | No Affect | Text Message | Dynamic Affect
Collision occurred per assembly | 0 | 0 | 0
Injury occurred per assembly | 0 | 0 | 0
Damage occurred per assembly | 0 | 0 | 0
Potential collision detected per assembly 2 | 0.10 | 0.10 | 0.083
Detected potential collision correctly reported by the subject per assembly | 0 | 0.067 | 0.083
2 For example, a subject assembled three products (three assemblies) in each condition. There were 20 selected subjects, and hence the total number of assemblies in each condition was 3 × 20 = 60. Out of the 60 total assemblies, the total number of detected potential collisions was 6 for the no-affect condition, which resulted in 0.1 detected potential collisions per assembly, and so forth.
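The footnote's per-assembly rates are simple ratios of total incident counts to total assemblies, as in the short check below (using the no-affect example from the footnote: 6 detections over 60 assemblies).

```python
# Short worked check of the per-assembly rates described in the footnote:
# each rate is a total incident count divided by the total number of
# assemblies (subjects x assemblies per subject). The count below is the
# no-affect example from the footnote, not new data.

def per_assembly_rate(total_count: int, n_subjects: int = 20,
                      assemblies_per_subject: int = 3) -> float:
    total_assemblies = n_subjects * assemblies_per_subject  # 60 in the study
    return total_count / total_assemblies


print(per_assembly_rate(6))  # 6 detected potential collisions / 60 assemblies = 0.1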
Table 11. Comparison of team fluency among the three expression conditions (IVs).
Fluency Measure (Mean % of Total Trial Time; Standard Deviation in Parentheses) | No Affect | Text Message | Dynamic Affect
Human idle time | 8.5 (0.39) | 6.78 (0.26) | 1.36 (0.09)
Robot idle time | 11.37 (0.53) | 7.94 (0.31) | 1.07 (0.06)
Non-concurrent activity time | 5.23 (0.18) | 2.86 (0.09) | 0.67 (0.003)
Functional delay | 4.71 (0.08) | 2.43 (0.07) | 0.36 (0.001)
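The team fluency measures in Table 11 are each expressed as a percentage of total trial time. The sketch below shows that normalization for a single trial; how the idle, non-concurrent, and functional-delay intervals were segmented in the study is not reproduced here, and the function and key names are illustrative.

```python
# Minimal sketch of expressing the Table 11 fluency measures as percentages of
# total trial time for a single trial. It assumes the human-idle, robot-idle,
# non-concurrent-activity, and functional-delay durations have already been
# measured in seconds.

def fluency_percentages(trial_time_s: float, human_idle_s: float,
                        robot_idle_s: float, non_concurrent_s: float,
                        functional_delay_s: float) -> dict:
    """Return each fluency measure as a percentage of total trial time."""
    def as_pct(seconds: float) -> float:
        return 100.0 * seconds / trial_time_s

    return {
        "human_idle_pct": as_pct(human_idle_s),
        "robot_idle_pct": as_pct(robot_idle_s),
        "non_concurrent_activity_pct": as_pct(non_concurrent_s),
        "functional_delay_pct": as_pct(functional_delay_s),
    }


if __name__ == "__main__":
    # Illustrative 100 s trial; values chosen to mimic the dynamic-affect column.
    print(fluency_percentages(100.0, 1.36, 1.07, 0.67, 0.36))
```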