Usability Study through a Human-Robot Collaborative Workspace Experience

The use of collaborative robots (cobots) in industrial and academic settings facilitates physical and cognitive interaction with operators. In this framework, it is a challenge to determine how measures of concepts such as usability can be adapted to these new environments. Usability is a quality attribute prevalent in the field of human-computer interaction, concerning the context of use and the measurement of the effectiveness, efficiency, and satisfaction of products and systems. In this work, the importance of the role of benchmarking usability with collaborative robots is discussed. The introduced approach is part of a general methodology for studying the performance of people and robots in collaboration. It is designed and developed around a concrete experience in a human-robot collaborative workspace. Outcomes from the study include a list of steps, resources, recommendations, and customized questionnaires to obtain cobot-oriented usability analyses and case study results.


Introduction
The Fourth Industrial Revolution is a new paradigm favoring the introduction of collaborative and autonomous robots in industrial environments. Introducing new technologies means redesigning the workplace. Manual tasks carried out by human operators in industrial environments are being transformed into tasks now shared with collaborative robots, usually called cobots [1]. Cobots are complex machines working hand-in-hand with human operators. Their main task is to help and support human operators in the activities deemed necessary. These machines do not replace human labor; rather, they complement human capabilities [2].
The introduction of cobots is substantially modifying workplaces in industry. It is advisable to analyze which usability engineering methods can be useful to measure this workplace modification, such as those in the ISO/TR 16982 standard [3]. Elements such as ergonomics, safety, robot acceptance, the human ability to understand the behavior of complex systems, trust, and task assignment between operator and robot are all modified [4,5]. Improvement in system performance should be demonstrated through the design of human-robot frameworks, new standard regulations, methods, metrics, and experimental tests [6-9].
Taking into account the project management cycle (requirements, specification, solution design, implementation, test, and validation), a human-centered design approach should allow introducing ergonomics guidelines and recommendations in the early stages of the life cycle, and usability methods throughout the entire product cycle management. At this point, a question arises about the cost of adding usability into project cycle management. However, once the decision has been made to transform the task and workspace with a collaborative robot, it is a key issue to focus on the human-centered approach. When adapting a small or medium enterprise for human-robot collaborative tasks, some usability methods should be taken into account: focus groups, thinking aloud, questionnaires, and expert evaluation. Human factors experts can choose and adapt the methods best suited to a specific project. For instance, the thinking aloud method turns out to be a highly efficient method to capture aspects related to the cognitive/mental activities of the potential users of the evaluated system. By inviting operators to verbalize their opinions, the symbiosis between operator and robot is promoted. Moreover, in the academic context, creating a new teaching/research laboratory with collaborative robots contributes to the synergy between disciplines and to the promotion of creativity and innovation.
User research is the systematic study of the goals, needs, and capabilities of users. Usability testing is a basic element for determining whether users are accomplishing their goals. Following the definition in the international standard ISO 9241, Part 11 [10], usability is the extent to which a product can be used by specified users to achieve specified goals with (i) effectiveness, (ii) efficiency, and (iii) satisfaction in a specified context of use. Effectiveness refers to the number of errors and the number of successfully completed activities; efficiency relates to task time, physical effort, fatigue, and cognitive workload; finally, satisfaction is usually measured using subjective questionnaires. It is worth noting that other researchers [11] include usability in a broader methodological framework focused on user experience. A pleasurable user experience is an essential design target of human-collaborative robot interaction, and further goals could be added, such as fellowship, sympathy, inspiration, and accomplishment.
The main objective of this research is to present a specific usability test plan to evaluate the usability of human-robot collaborative workspaces. It is developed through a concrete human-robot collaborative workspace experience (HRCWE) to illustrate how this usability test plan can be applied in a real environment. A description of the experience is provided, specifying the objectives, the roles and responsibilities of everyone involved, and the associated timelines. It should be noted that the main outcome is not the results obtained for this illustrative usability test plan, but the design of the plan itself.
This work is structured as follows. Section 2 introduces the purpose of the usability test in the HRCWE. Turn to this section to read the objectives of the test and understand why these objectives were selected. Next, the experience under test is described, together with the context within which the experience operates. Then, the participants and the responsibilities of everyone involved in the usability test are described, including the testing team, observers, and test participants. Moreover, the evaluation procedure of the usability test is presented, including location and dates, test facilities, and how the test sessions are organized. Section 3 presents the data collected during the usability sessions and contains the agreed usability metrics being tested. Next, the results of the experiment and the statistics for evaluation and subsequent discussion are presented. A discussion of the findings of the experiment is provided in Section 4. Finally, Section 5 establishes some conclusions. The appendices contain useful working documents for the usability test, such as the recruitment screener, letters to participants, and proposed questionnaires.

The Proposed Usability Test
When a new collaborative robot is introduced into production, tasks under development change from the point of view of the human operator. It is necessary to evaluate how these changes affect human operator behavior, as well as measure and analyze human-robot collaboration tasks in detail.

Purpose of the Usability Test
The purpose of a usability test is, given a specific context of use, to measure the performance of a system in terms of task effectiveness, efficiency, and satisfaction. Thus, the necessary feedback is provided to help decision-making in the redesign of systems. In particular, in this work, a collaborative workspace is considered where the operator is performing a main task. It is assumed that the operator has some experience in this main task and that the workspace is correctly designed. Next, the operator is required to perform a secondary task, implying collaboration with a robot. The general objective of the usability test in this scenario is to evaluate the usability of the workspace when a secondary collaborative task with cobots is added to the operator's workload.
The usability of the proposed human-robot collaborative workspace experience (HRCWE) is evaluated on the basis of the international usability standard ISO 9241-11 [10], which takes into account both objective and subjective metrics. According to this standard, the effectiveness and efficiency components of usability are evaluated through the measurement of Time to Task, i.e., the time in seconds to complete a task, and the Task Completion rate, i.e., the percentage of tasks that users complete correctly, recorded for each participant while performing the tasks. In addition to these objective measures, a questionnaire has been developed to evaluate subjective usability using the System Usability Scale (SUS), whose definition is based on psychometric methods [12].
A usability test is a standard human-computer interaction test. However, human-robot interaction (HRI) differs from usual human-computer interaction (HCI) in several dimensions. HRI differs from HCI and human-machine interaction (HMI) because it concerns systems with complex, dynamic control procedures that exhibit autonomy and cognition and operate in changing real-world environments [13]. Our proposed usability test is not oriented to evaluate the design characteristics of the implemented prototype, nor its productive performance. Moreover, this test is focused on the early stages of design, not on the final launch of products to the market. Human factors and ergonomics adopt the same orientation toward the early stages of design. The aim is to use the collaborative robot as a partner, not as a substitute for the human operator. Thus, the context implies the formation of human-robot teams, each member contributing their best skills [14].

Process under Experiment
The first step in a usability study is to specify its characteristics: the context of use where the experimental study will be carried out, the description of the process, the type of participants, and the academic purpose of the workspace [15,16].

Context of Use
In this case, the experience is designed in a University laboratory, with participants recruited among students and teaching staff working in this environment. The laboratory is equipped with two identical robot stations and a set of computer workplaces. It is designed to introduce students to the management of collaborative robots in two training steps: the first is understanding the robot and how to program it, and the second is adopting the role of the human operator in a realistic scenario.
The human-robot collaborative workspace experience (HRCWE) is based on a prototype implemented in the laboratory with the aim of developing teaching and research on the relationships between humans and robots, in particular in collaborative mode, with a focus on the cognitive and mental tasks of the human operator. Certainly, the use of a collaborative robot facilitates physical interaction with humans. However, the cognitive and mental aspects should not be underestimated. The perception the human has of the complexity of the task, or the trust that the human places in the robot, are also relevant elements.

Procedure
To evaluate the effects on the human operator when a collaborative human-robot task is added to the original workspace, a workplace composed of two tasks is defined, with a particular focus on assembly tasks. The workspace tasks are:
• Task 1: Tower of Hanoi, performed only by the human operator;
• Task 2: Collaborate with a cobot in the assembly of a product.

Task 1: Tower of Hanoi
The original task for the human operator consists of solving the Tower of Hanoi with five pieces (TOH5). The puzzle consists of five perforated disks of increasing radius stacked by inserting them onto one of three posts fixed to a board, as seen in Figure 1. The objective is to move the whole stack from the first post to another one, making only one move at a time and never placing a bigger disk on top of a smaller one. The Tower of Hanoi puzzle was established as a robotics challenge as part of the EU Robotics coordination action in 2011 and the IEEE IROS Conference in 2012. In our experiment, the Tower of Hanoi is performed only by the human operator. A digital version of the TOH is used, available on Google Play as HANOI 3D. This program allows manipulating the disks and records the number of moves and the total time in seconds required to complete the task. No experimental cognitive variation exists whether wooden pieces or a digital version is used [17].
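The optimal TOH5 solution follows from the classic recursive algorithm. The sketch below (function names are our own, not part of the experimental software) lists the moves and confirms that five disks require 2^5 - 1 = 31 moves, the reference value against which participant performance can be compared:

```python
def hanoi_moves(n, source="A", target="C", spare="B", moves=None):
    """Recursively solve the Tower of Hanoi, recording each move."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi_moves(n - 1, source, spare, target, moves)  # clear the n-1 smaller disks
    moves.append((source, target))                    # move the largest disk
    hanoi_moves(n - 1, spare, target, source, moves)  # rebuild the stack on top
    return moves

# For the five-disk TOH5 used in the experiment, the optimal
# solution takes 2**5 - 1 = 31 moves.
optimal_moves = len(hanoi_moves(5))
```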
Task 2: Collaborative Assembly of a Product
The introduced secondary task consists of the collaborative assembly (CA) of a product composed of three components: a base, a bearing, and a cap, as shown in Figure 2. The task is classified as adding a low cognitive workload from a human-centered perspective, because eye-hand coordination skills are the most relevant in this task. Moreover, the assembly task presents a low level of physical risk for the human, and no action is necessary to decrease this risk.

The Human-Robot Collaboration Workspace Experience, HRCWE
The HRCWE shown in Figure 3 is a workspace composed of two working areas: work area one, called Tower of Hanoi with 5 disks (TOH5), is considered the main task to be developed by the human operator; work area two, called Assembly, is a secondary added task where the collaboration with the cobot takes place (CA). In particular, the implemented assembly process area, using a collaborative robot from the company Universal Robots, model UR3, is shown in Figure 4. The table where the cobot is anchored is divided into different work sub-areas: on the top right, a sub-area for parts feeding; on the bottom middle, a sub-area where the human executes actions on the teach pendant of the cobot; and on the top left, a sub-area where the human operator receives visual feedback of the cobot working, a red light in a light tower.

Participants and Responsibilities
The roles involved in a usability test are as follows. It is worth noting that an individual may play multiple roles and tests may not require all roles.

Participants
Participants are the University's bachelor students and some teaching staff. The participants' responsibilities are to attempt to complete a set of representative task scenarios presented to them in as efficient and timely a manner as possible, and to provide feedback regarding the usability and acceptability of the experience. The participants are asked to provide honest opinions regarding the usability of the application, and to participate in post-session subjective questionnaires. These participants have good skills in engineering methods, computer science, and programming. They have no previous knowledge about collaborative robots or how to manage human-robot activities.
The form Participants in Appendix A contains a recruitment form to be used to recruit suitable participants; the results are summarized in a usability report.
The facilitator must supervise the ethical and psychological consequences for research participants. Any foreseeable effect on their psychological well-being, health, values, or dignity must be considered and, if judged negative, even to a minimal degree, eliminated [18]. From a robotics perspective, roboethics has as its objective the development of technical tools that can be shared by different social groups. These tools aim to promote and encourage the development of robotics and to help prevent its misuse against humankind [19].
The facilitator must ensure that the test can be carried out effectively. To do this, they must previously set the level of difficulty of the Tower of Hanoi solution (in this case, 5 disks), adjust the speed of the collaborative robot's movement, and program the cobot's task.

Ethics
All participants involved with the usability test are required to adhere to the following ethical guidelines:
• The performance of any test participant must not be individually attributable. The individual participant's name should not be used in references outside the testing session.
• A description of the participant's performance should not be reported to his or her manager.
Considering that this study involves work with humans, the usability plan has been endorsed by the Ethics Committee of the UPC with the identification code 2021-06.

Evaluation Procedure
Elements for the evaluation procedure are now listed.

Location and Dates
The address of the test facility is: Automatic Control Department Laboratory C5-202, FIB Faculty, Universitat Politècnica de Catalunya Barcelona Tech. The authors plan to test participants according to the schedule shown in Table 1.

The purpose of the pilot test is to identify and reduce potential sources of error and fix any technical issue with the recording equipment or with the experiment that might cause delays to the actual experiment. It is expected that the pilot test will take two hours at most. Any problems found will be fixed immediately. Observers are not invited, due to the nature of the pilot.

Usability Sessions
Each participant session will be organized in the same way to facilitate consistency. Users will be interviewed at the end of the tasks.

Introduction
The main facilitator begins by emphasizing that the testing is being carried out by a Ph.D. student. This means that users can be critical of the experience without feeling that they are criticizing the designer. The main facilitator does not ask leading questions. The main facilitator explains that the purpose of the testing is to obtain measures of usability, such as effectiveness, efficiency, and user satisfaction, when working with collaborative robots. It is made clear that it is the system, not the user, that is being tested, so that, if they have trouble, it is the workspace's problem, not theirs.

Pre-Test Interview
At the beginning of the experiment, the following are explained to the participant: the details of the experience, his/her tasks, how the experiment will develop (Appendix C is used for this), and that, at the end of the experiment, there is one questionnaire to be answered.
Participants will sign an informed consent form that acknowledges that participation is voluntary, that participation can cease at any time, and that the session will be videotaped but their privacy of identification will be safeguarded. The facilitator will ask the participant if they have any questions. The form Consent form in Appendix B is used for this aim. The most relevant aspects of this form are:
• a description of the objectives of the experiment,
• a safety explanation for the participant, and
• the participant's rights.
Next, the facilitator explains that the amount of time taken to complete the test task is measured and that exploratory behavior outside the task flow should not occur until after task completion. Time-on-task measurement begins when the participant starts the task.
The facilitator presents a demonstration to the user according to the guide Demonstrations in Appendix D. The most relevant aspects of this form are:
• a demonstration of the use of the Tower of Hanoi game on the tablet;
• a demonstration of operator involvement in product assembly.

After all tasks are attempted, the participant completes the post-test satisfaction questionnaire.

Case Study
The experimental scenario is composed of two tasks within the HRCWE. The main task is Task 1, TOH5; a secondary task is added, Task 2, Collaborative Assembly (CA), as an additional human-robot collaboration task (see Figure 5). Table 3 shows the performance conditions for the tasks. The time allocated for this scenario is 15 min.

Figure 5. Scenario of the experience. On the left, the TOH5 task, the main one, is performed. On the right, the CA secondary collaborative assembly task is being developed.

The objective for the participant in the TOH5 task is to perform as many replays as possible. The secondary task, Collaborative Assembly, consists of responding to requests for collaboration from the cobot, which are indicated by the green light of the beacon in the assembly area. The time that the human takes to place the caps is saved as Wait Time and Cycle Time and recorded in a data table, like the one shown in Table 4, jointly with the figures for Task 1 while the operator is in the experimental scenario. In the collaborative assembly task, the activities of the participant are:
• performing quality control of the assembly process,
• placing the caps in the sub-assembly zone, and
• feeding the base and bearing warehouses.

Adapted Post-Test Questionnaire
At the end of the experience, the participant answers the adapted System Usability Scale (SUS) satisfaction questionnaire shown in Table 5. As has been shown [20], the SUS can be applied to a wide range of technologies. This feature allows the questionnaire to be adapted to this particular experiment. In the SUS standard questionnaire, the word 'system' has been changed to 'human-robot collaborative workspace' because it is not a human-computer task but a human-robot task. A medium or low SUS score means that discussion and redesign effort on the experiment are necessary.

Experimental Results
In a benchmark test, the usability of products is made measurable, enabling a comparison with the competition. Based on different metrics, the usability dimensions, i.e., effectiveness, efficiency, and user satisfaction [10], are assessed and summarized into a meaningful overall score.

Data Collection and Metrics
A dataset with the data collected from the experiments is organized as shown in Figure 6. A set of statistical measures and tests available in the tool Usability Statistics Package (Jeff Sauro's formulation, available online at http://www.measuringusability.com/products/statsPak, accessed on 8 May 2021) [21] is used for the dataset analysis.

Task Effectiveness and Efficiency
The quantitative component involves the handling of numerical variables and the use of statistical techniques to guard against random events. This quantitative component includes information about the statistical significance of the results.

Effectiveness
The effectiveness is evaluated using the Task Completion rate measure. For this binary variable, a maximum error of 10% over the optimal number of moves needed to solve the problem is allowed. Hence, the variable is coded as 1 (pass) for participants who solve the task and 0 (fail) for those who do not.
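As an illustration, the pass/fail coding described above can be sketched as follows, assuming the 10% margin is applied to the 31-move optimal solution of TOH5 (the study's exact coding rule may differ in detail):

```python
OPTIMAL_MOVES = 31  # 2**5 - 1 for the five-disk puzzle


def completion_code(moves_used, solved, tolerance=0.10):
    """Code a trial as 1 (pass) or 0 (fail).

    A trial passes when the puzzle was solved using at most 10% more
    moves than the optimal solution. The tolerance parameter and the
    function name are our own, for illustration only.
    """
    limit = OPTIMAL_MOVES * (1 + tolerance)
    return 1 if solved and moves_used <= limit else 0
```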

Efficiency
To evaluate efficiency, the Time to Task measure is obtained from the dataset associated with TOH5. Only participants who completed the task are considered, and the analysis is made with the mean values obtained from each participation.

Satisfaction
The System Usability Scale (SUS) is used to evaluate the level of user satisfaction. The advantage of using the SUS is that it is comparatively quick, easy, and inexpensive, whilst still being a reliable way of gauging usability. Moreover, the SUS questionnaire provides a measure of people's subjective perceptions of the usability of the experience in the very short time available during evaluation sessions.
However, interpreting the SUS score can be complex. The participant's scores for each question are converted to a new number, added together, and then multiplied by 2.5 to convert the original range of 0-40 to 0-100. Though the scores span 0-100, they are not percentages and should be considered only in terms of their percentile ranking [22,23]. One way to interpret a SUS score is to convert it into a grade or adjective [22], as shown in Figure 7.
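The standard SUS scoring procedure just described can be sketched as follows (a generic implementation, not the study's own script):

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items are scored as (response - 1), even-numbered items
    as (5 - response); the raw sum (0-40) is multiplied by 2.5 to give
    a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    raw = sum((r - 1) if i % 2 == 0 else (5 - r)
              for i, r in enumerate(responses))
    return raw * 2.5
```

For example, the most favorable response pattern (5 on positively worded odd items, 1 on negatively worded even items) yields the maximum score of 100.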

Key Performance Indicators for Cobot
To evaluate the CA task, i.e., the secondary task in the experimental scenario, some Key Performance Indicators (KPIs) have been collected, based on the KPIs defined for cobots in Reference [24]. Table 6 shows the definitions used in this work.
The data gathering procedure to calculate the cobot's KPIs has been implemented through a tool that obtains the values of the variables recorded in the robot via the communication protocol with an external desktop computer. The values for Cycle Time, Wait Time, Products (the number of assembled products), and Bases (the number of bases) are acquired using this tool. Figure 8 shows an example employing the Visual Components software; this information is saved as a spreadsheet for KPI analysis. The CapTime in the figure is the operator's time to place the cap. This value is considered the idle time of the robot, and also the Human Operator Time, since the robot is stopped.
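A minimal sketch of one KPI calculation from the logged controller variables follows, assuming Per Utilization is the share of the cycle during which the cobot is not waiting for the human (the study's authoritative formulas are those of its Table 6):

```python
def per_utilization(cycle_time, wait_time):
    """Percentage of the cycle during which the cobot is busy.

    cycle_time: duration of one assembly cycle, in seconds
    wait_time:  cobot idle time waiting for the human operator, in seconds
    (This definition is an assumption for illustration.)
    """
    busy_time = cycle_time - wait_time
    return 100.0 * busy_time / cycle_time
```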

Video Recording
Experimental sessions are recorded on video; hence, the facilitator can later measure events such as the operator's travel time from one task to the other, how many times the operator checks whether the robot has finished, and the correct execution of user actions in the assembly task.
While the user is performing the Tower of Hanoi task, the robot is working, placing bases and bearings (red light). The vertical column of lights indicates with a green light when the presence of the user is required. The video recording can show whether the user is paying attention to the column of lights or concentrating on the Tower of Hanoi task.
As a further analysis in collaborative human-robot stations, video recording allows the observation of repetitive hand and arm movements, enabling risk analysis and physical ergonomic assessment of the task.

Experimental Study
According to the usability test plan, the experiment dataset contains information on Time to Task and Task Completion rate, as well as results from the SUS questionnaire and KPI values, collected from 17 participants. Among them, 12 participants (70.6%) are undergraduate students, 2 (11.8%) are vocational students, and 3 (17.6%) belong to the teaching staff.

Results
Of the seventeen participants, three are discarded because they did not correctly complete the proposed questionnaires. For this reason, results for fourteen participants are presented (n = 14). For the analysis, the following values are configured: the statistical significance level is set at p < 0.05, and the confidence level is 95%.

Effectiveness
The following 'pass and fail' histograms show the results in solving the tasks within the HRCWE: the histogram in Figure 9 shows 11 participants solving Task 1 (TOH5), and the histogram in Figure 10 shows the results of the human-robot (H-R) team in solving Task 2. Values for the Task Completion rate are calculated as

Task Completion rate = (number of participants who successfully complete the task) / (total number of participants),

according to the experimental results shown in Table 7. Sauro and Lewis's experience states that a Task Completion rate lower than 56% could indicate a very low level for this variable. Results show a superior level in both tasks (p-value = 0.044 and 0.002).
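The Task Completion rate calculation, together with one plausible way of testing an observed rate against the 56% benchmark (an exact one-sided binomial tail; the paper does not state which test produced its p-values, so this is an assumption), can be sketched as:

```python
from math import comb


def completion_rate(passes, total):
    """Task Completion rate = successful participants / total participants."""
    return passes / total


def binom_tail(k, n, p0):
    """P(X >= k) for X ~ Binomial(n, p0).

    A one-sided exact test of whether an observed number of successes k
    out of n participants exceeds a benchmark rate p0.
    """
    return sum(comb(n, i) * p0**i * (1 - p0)**(n - i) for i in range(k, n + 1))
```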
Results for the Task Completion rate for TOH5 have a mean value of 78.6%. Based on Jeff Sauro's percentile benchmark, this value is located at the 50th percentile; the value is acceptable for a participant's first experience, although for continuous work it would be necessary to improve it. The Task Completion rate of 70% for the CA could be considered standard given the characteristics of human participation in the human-robot task.

Efficiency
To evaluate the efficiency, the Time to Task variable is analyzed. First, with all the data obtained from the experiment, a percentile scale is generated, with five ranges defined for the variable, as shown in Table 8, according to the time spent in solving the task. The Time to Task for Task 1 in Table 9, with a mean value of 56.2 s, corresponds to the 50th percentile. Table 9 also shows values of the coefficient of variation (CV) (see Equation (2)) for the Time to Task. Task 1 (TOH5), with a value of 0.42, represents a high level of variation, as a result of solving the task only with human participation.

CV = standard deviation / mean value. (2)

For Task 2, the Time to Task is given by Equation (3). This time is composed of the human's time (t_H) and the cobot's time (t_C):

Time to Task = t_H + t_C. (3)

Figure 11 shows the results obtained by each participant, where the time composition of the task can be observed. For the statistical analysis, we consider Time to Task equal to the Cycle Time of the cobot, and t_H equal to the Wait Time, obtained directly from the cobot controller.
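The coefficient of variation of Equation (2) is straightforward to compute; a minimal sketch (using the sample standard deviation, which is an assumption since the paper does not specify sample vs. population):

```python
import statistics as st


def coefficient_of_variation(values):
    """CV = sample standard deviation / mean value (Equation (2))."""
    return st.stdev(values) / st.mean(values)
```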
Results in Table 10 show a Time to Task of 108.9 s as the mean value. This corresponds to a percentile of 50%. The CV value is 0.04, equivalent to a 4% variation, which is considered low. For t_H, the mean value is 17.05 s, with a CV of 0.25, equivalent to 25%; this is considered a high variation, typical for human tasks. Finally, for t_C, the mean value is 90.58 s, and the CV is down to 0.027, equivalent to 2.7%, a minimal variation, as expected considering the high stability of the cobot.

Table 11 provides an overview of mean values, standard deviations, incomplete questionnaires, and the coding checks of the questionnaires, as well as reliability index values (Cronbach's α) for the obtained measures.

Table 12 shows the results for the SUS, their interpretation in percentiles, and a descriptive adjective for the value. The value of 81.1 qualifies the HRCWE as Excellent and its degree of acceptability as Acceptable. Considering the HRCWE as a hardware system, following the Sauro and Lewis classification, a raw SUS score of 81.1 is higher than the SUS scores of 88.14% of the hardware products in the benchmark.
The main value from the SUS is providing the single total score. However, it is still necessary to look in detail at the individual score for each statement [22]. This information is presented in Table 5. Caglarca [25] suggested taking individual evaluations into account by verifying the shape of a "five-pointed star" visualization. Hence, the raw scores from Table 5 have been transformed into a radial chart, as shown in Figure 12. Caglarca also concluded that the closer the visualization is to a regular five-pointed star, the more positive the usability. Although this tends to be a subjective assessment, it is worth noting that in this study the five-pointed star shape is almost in perfect form.
The statistical analysis in Table 13, with a mean value of 83%, shows a Per Utilization greater than 80% in the use of the cobot for the collaborative assembly task. The percentage value of the efficiency (Per Efficiency) is calculated considering the total time of a work cycle; in this case, it was set at 900 s.
The statistical analysis of the Per Efficiency in Table 14 shows, with a mean value of 84%, a Per Efficiency higher than the 75% reference established in this experiment (p-value = 0.03).
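The comparison of the mean Per Efficiency against the 75% reference can be illustrated with a one-sample t statistic (the paper does not specify which test it used, so this is an assumption; the p-value would then come from a t distribution with n - 1 degrees of freedom):

```python
import statistics as st


def t_stat_vs_reference(sample, reference):
    """One-sample t statistic for testing whether the sample mean
    exceeds a reference value (H0: mean <= reference)."""
    n = len(sample)
    se = st.stdev(sample) / n ** 0.5  # standard error of the mean
    return (st.mean(sample) - reference) / se
```

For fourteen participants, the resulting statistic would be compared against the one-sided critical value of a t distribution with 13 degrees of freedom (about 1.77 at p = 0.05).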

Discussion
The feasibility of extrapolating the usability experience from HCI to HRI is clearly demonstrated throughout this study: the context of use, requirements, workspace design, task allocation between human and robot, experimental testing, and validation steps.
In this study, with a Task Completion rate of 78.6% for the effectiveness of Task 1, it can be considered that the human operator can effectively solve Task 1 in the HRCWE. To increase the effectiveness in this task, a first alternative is the incorporation of a training stage; a second alternative could be the use of an assistant that supports the operator when the real-time value of the Task Completion rate falls below a minimum set value. For the second task, the value of the Task Completion rate shows that the human-robot team effectively solves Task 2. However, a redesign of the physical architecture of the HRCWE, in which the human operator is closer to the work area, could improve the efficiency of the work team.
The efficiency, measured through the Time to Task with mean values of 56.2 s for Task 1 and 108.9 s for Task 2, lies between the low and standard levels, with a higher accumulation towards the standard level. Hence, it can be considered that the human operator is able to efficiently solve the tasks in the HRCWE.
The SUS score shows that the collaborative workspace is perceived as acceptable for working with humans, and the star chart proves that the performance of its components is balanced, as expected.
The evaluation of the HRCWE through the KPIs corroborates the capacity of the human-robot team, with values higher than 80% for Per Utilization, higher than 75% for Per Efficiency, and a Task Completion rate over 80%. The variability analysis shows that the system is able to absorb the variability introduced by the human operator.
To improve the efficiency and effectiveness results obtained for the tasks within the HRCWE, a real-time Task Difficulty variable, combining the Task Completion rate and the Time to Task, could be added and used by an assistant to guide a strategy for solving the tasks.
Overall, the usability benchmark also demonstrates the flexibility of the human operator to work in conjunction with a cobot in collaborative assembly tasks within the HRCWE.

Conclusions
This article introduces a methodological and systematic guide for conducting and evaluating experiments on human-robot interaction in an assembly task workspace. Taking advantage of usability experience in human-computer interaction, this experience has been expanded and adapted to the field of collaborative human-robot interaction to provide a solid and well-founded basis for evaluating the collaborative workspace, where the human operator shares tasks with a robot. Reviewing and incorporating best practices from related areas can reduce the number of testing iterations required and save time and money when developing and evaluating a process or system.
In the future, this guide is expected to be expanded to the assembly of products with a greater number of components and/or with component variants. Further experimentation will evaluate different forms of human-robot collaboration with other workspace architectures. It is also expected to be used for other types of human-robot collaborative work applications, such as picking, packing, and palletizing.
If usability is considered as part of the broader user experience, further research work remains. It would be convenient to expand the registry of variables to be measured and to take into account aspects of trust, acceptance of technology, and empathy [26].
Funding: This work has been co-financed by the European Regional Development Fund of the European Union in the framework of the ERDF Operational Program of Catalonia 2014-2020, grant number 001-P-001643. Prof. Cecilio Angulo has been partly supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 825619 (AI4EU). Prof. Pere Ponsa has been partly supported by the Project EXPLainable Artificial INtelligence systems for health and well-beING (EXPLAINING) (PID2019-104829RA-I00/AEI/10.13039/501100011033).

Informed Consent Statement: Not applicable.
Data Availability Statement: Not applicable.

Conflicts of Interest:
The authors declare no conflict of interest.

Appendix A. Participants Selection
We need 30 participants in total. Each session lasts 25 min. We must have a candidate's completed screener at least one day in advance of the session, so we know which experimental group to assign him or her to.

• Introduction: This experiment is part of a research project related to Human-Robot Interaction (HRI); the basic objective is to determine the variation of mental load on the operator when a collaborative task with a robot is added.
• Selection questions:
1. The participant is familiar with information and communication technologies.

Appendix C. Case Study
Two different tasks are defined in this scenario, with different conditions and operating characteristics, as shown in Table A1. Each participant takes part in both tasks, with Task 1 always performed first. One iteration of the scenario is performed by each operator for 15 min. The objective for the participant in the TOH5 game is to perform as many repetitions as possible. The number of movements and the time of each repetition are recorded by the participant in a data table such as Table A2.
The second task, Assembly, consists of responding to requests for collaboration from the robot, which are indicated by the green light of the beacon in the assembly area. The times the human takes to place caps, defined as Wait Time and Cycle Time, are recorded in a data table such as Table A2, jointly with figures for Task 1 when the operator is in Scenario 2. In the assembly task, the activities of the participant are:
• performing quality control of the assembly process,
• placing the caps in the sub-assembly zone, and
• refilling the base and bearing warehouses.
At the end of the experiment, the participant answers the System Usability Scale (SUS) as a satisfaction questionnaire.

Appendix D. Demonstrations
The main facilitator shows the participant the two areas and how the tasks are performed, in particular highlighting the activities that the operator must perform.

Appendix D.1. TOH5
By using the app's own functions, the facilitator shows once how to solve the game with the least number of moves; see Figure A1.
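Assuming TOH5 denotes the five-disk Tower of Hanoi game, the optimal solution takes 2^5 − 1 = 31 moves; a minimal sketch of the standard recursive solver (the peg names are illustrative, not taken from the app):

```python
def hanoi(n, src="A", aux="B", dst="C", moves=None):
    """Return the optimal move sequence for n disks from src to dst."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)  # move n-1 disks out of the way
        moves.append((src, dst))            # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)  # stack the n-1 disks on top
    return moves

print(len(hanoi(5)))  # → 31 (minimum number of moves: 2**5 - 1)
```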