Article

Evaluating Mental Workload and Productivity in Manufacturing: A Neuroergonomic Study of Human–Robot Collaboration Scenarios

1 Faculty of Engineering, Free University of Bolzano, Via Bruno Buozzi 1, 39100 Bolzano, Italy
2 Faculty of Engineering, University of Kragujevac, Sestre Janjic 6, 34000 Kragujevac, Serbia
* Author to whom correspondence should be addressed.
Machines 2025, 13(9), 783; https://doi.org/10.3390/machines13090783
Submission received: 10 July 2025 / Revised: 25 August 2025 / Accepted: 27 August 2025 / Published: 1 September 2025
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

Abstract

The field of human–robot collaboration (HRC) still lacks studies that evaluate mental workload (MWL) through objective measurements to assess the mental state of operators in assembly tasks. This study presents a comparative neuroergonomic analysis of mental workload and productivity in three laboratory experimental conditions: in the first, the participant assembles a component without the intervention of the robot (standard scenario); in the second, the participant performs the same activity in collaboration with the robot (collaborative scenario); in the third, the participant is fully guided in the task in collaboration with the robot (collaborative guided scenario) through a system of guiding labels based on Poka-Yoke principles. Participants’ mental workload was assessed through a combined analysis of subjective (NASA-TLX) and objective (electroencephalogram, EEG) measurements. Objective MWL was quantified as the brainwave power ratio β/α (Beta: stress indicator; Alpha: relaxation indicator). Furthermore, observational measurements were used to calculate a productivity index in terms of accurately assembled components across the three scenarios. Repeated-measures ANOVA showed that mental workload decreased significantly in the activities involving the cobot. Productivity also increased when shifting from the manual scenario to the cobot-assisted one (18.4%) and to the collaborative guided scenario supported by Poka-Yoke principles (33.87%).

1. Introduction

From the 18th century to today, industrial development has evolved through successive phases known as Industrial Revolutions (IRX.0). IR1.0 (1760–1840) marked the shift from agrarian to industrial economies [1]. IR2.0 (1870–1914) brought rapid industrialization, urbanization, and the rise of large corporations [2]. IR3.0 (1950–2000s), or the Digital Revolution, introduced electronics, telecommunications, computers, and the internet [3]. IR4.0 (2000s–2020) advanced interconnectivity and data-driven production through cyber–physical systems [4].
The recent fifth Industrial Revolution (2020–ongoing), or IR5.0, has been considered a successor or complement to IR4.0. While IR4.0 highlighted the high level of interconnectedness that crossed the barriers between the physical, digital, and biological spheres, IR5.0 has emphasized human collaboration and engagement with modern technologies [5]. Demir and colleagues proposed that humans collaborate with robotic machines in all feasible scenarios and contexts, resulting in the widespread integration of robots into organizations [6]. Despite some writers’ criticism that IR5.0 has not yet begun, both IR4.0 and, more importantly, IR5.0, stress human–robot collaboration (HRC) as a critical feature to support industrial operators and promote their occupational well-being [7]. In this context, assessing mental workload (MWL) in human–robot interaction (HRI) becomes crucial. While collaborative robots, or cobots, can serve as valuable resources to enhance efficiency and reduce strain, they may also introduce safety risks or cognitive overload if not properly integrated. Understanding MWL therefore ensures that collaboration remains both productive and safe for human operators [8].
This study builds upon the previous work of some of the authors [9]. Compared to that investigation, which focused on two experimental conditions, this study extends the framework by incorporating a third scenario with collaborative guided modules. This addition enables a more comprehensive evaluation of MWL by directly comparing standard, collaborative, and collaborative guided settings, thereby offering new insights into how different levels of robot involvement influence operator performance and cognitive load.
The structure of this paper is organized as follows. Section 2 presents a State-of-the-Art analysis, reviewing relevant contributions and highlighting current research gaps in the assessment of MWL within HRI. Section 3 describes the Materials and Methods, including the experimental design, sample definition, experimental framework, and the overview of selected metrics. Section 4 introduces the case study, detailing the assembly process and the three workstation configurations: standard, collaborative, and collaborative guided. Section 5 reports the experimental results, encompassing EEG-based MWL analysis, subjective evaluation through NASA-TLX, and observational measurements of task performance and user experience. Section 6 provides a critical discussion of the findings with reference to prior research, emphasizing both the contributions and limitations of the study. Finally, Section 7 outlines the conclusions and suggests future directions for advancing neuroergonomic evaluations in collaborative industrial environments.

2. State-of-the-Art Analysis

IR5.0 considers humans as the core of production processes, requiring the design of anthropocentric workspaces [10,11]. The introduction of cobots in manufacturing environments has increased productivity and reduced human error in industrial manufacturing processes by combining the advantages of robots, such as high precision, endurance, and repeatability, with human abilities such as problem solving, awareness, and manual dexterity [12,13]. Thus, human–robot collaboration (HRC) has become of paramount importance in IR5.0, which aims at human-centered manufacturing and production. It has allowed the creation of advanced manufacturing systems in which humans can be supported by intelligent and adaptable machines. In that regard, different applications/modalities of human–robot collaboration are determined by criteria such as task complexity, safety and ergonomic issues, and the desired amount of human involvement [14]. However, the involvement of cobots in safety-critical systems has introduced a new level of complexity in which the MWL of operators, understood as the amount of mental effort needed to complete an activity, might change [15].
In this sense, neuroergonomics, as a branch of neuroscience applied to ergonomics, investigates how the brain functions in connection with job activities, utilizing neuroscientific tools to better understand MWL. Among the innovative neuroergonomic tools, the electroencephalogram (EEG) cap has paved the way to a new methodology of objective ergonomic assessment, monitoring, and evaluation of brainwave activity to further analyze the MWL of operators while performing tedious, repetitive, and stressful activities [16].
Various EEG brainwave power ratios have been investigated to determine whether the human is in a relaxation (Alpha waves) or stress/engagement (Beta waves) phase [17,18].
MWL has a substantial impact on productivity when humans collaborate with cobots [19]. The deployment of cobots is frequently meant to offload labor from the operator to the cobot; yet the introduction of cobots, if not properly managed, might raise the cognitive load on the user by introducing more complicated tasks or requiring greater situation awareness and cognitive resources to complete the activity, especially in the most advanced HRI applications. In HRC scenarios, some authors, using subjective measurements, claimed that the cobot’s intervention reduces the operator’s MWL while performing assembly tasks [20,21]. Other studies, however, indicated that cobots increased the operator’s MWL [22,23]. In [22], the author reported a higher level of MWL in repetitive tasks shared between human and robot. This is in line with [23], in which participants reported high MWL, assessed through NASA-TLX, during an HRI experience. However, these studies relied mostly on subjective assessments obtained through surveys or qualitative approaches. Recent studies showed promising results regarding the usability and reliability of EEG devices in HRC scenarios [24]. Thus, there is still a need to explore MWL in HRC tasks by integrating qualitative with quantitative data [25,26]. In this regard, EEG could be suitable for analyzing MWL to acknowledge the effective contribution of cobots in terms of MWL in manufacturing contexts [24]. Unlike other wearable sensors, EEG measures electrical activity in the brain directly to provide real-time insights into cognitive processes rather than relying on physiological or behavioral proxies [27].
This study focuses on the β/α ratio to analyze the MWL perceived by participants working in collaboration with a cobot. The purpose of this research is to evaluate the different responses of participants in terms of MWL and task performance according to three different levels of support provided by the collaborative robotic system. The research questions (RQs) are the following:
- RQ1: What is the impact, considering mixed qualitative and quantitative (i.e., EEG-based) measures, of different levels of robotic assistance in terms of operator’s mental workload (MWL) during collaborative assembly activities?
- RQ2: What is the impact of different levels of robotic assistance in terms of productivity in collaborative assembly activities?

3. Materials and Methods

3.1. Experimental Design

The research presents a comparative evaluation of three successive experimental conditions involving different HRI modalities. The process adopted for this study is a wire harness assembly (see Section 4.2 for details). The experimental design closely followed the protocol described in [9], with the addition of the third scenario involving collaborative guidance. The purpose of this analysis is to comprehensively evaluate the different responses of participants in terms of MWL and productivity. Based on changes in the workstation layout and integrated elements, as well as in the level of robotic support, the three scenarios varied as follows. More details about each scenario are reported in Section 4.4.
  • Standard work (Scenario 1, named SS): Manual assembly activities are completed without any specific intervention or enhancement at the workplace. Work is carried out without any support from other systems. This condition is used as the baseline to assess the contribution of the cobotic system introduced in the following scenarios.
  • Collaborative work (Scenario 2, named CS): Participants complete work activities collaborating with a cobot, which performs repetitive, uncomplicated tasks that do not involve thinking or decision-making.
  • Collaborative guided work (Scenario 3, named CGS): Participants complete the same work activities as in the second scenario, but with the addition of Poka-Yoke (P-Y) solutions [24]. The P-Y solutions guide operators through the repeated process of assembling parts and components from operation to operation, triggering the start of each subsequent phase in a predetermined sequence of steps and thereby preventing human errors. The idea is to strengthen the role of the collaborative robotic system by implementing a guidance module in the workplace.
Each participant performed the assembly tasks in the three conditions, starting with Scenario 1 and concluding with Scenario 3. The overall duration of each experiment was fixed at 90 min (i.e., Time_experiments).
This within-subject design allowed each participant to serve as their own control, enabling direct comparison across conditions.
The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Faculty of Medical Sciences, University of Kragujevac (Decision number: 01-6471, based on submitted study protocol no. 01-5578, on 3 June 2021). The participants provided their written informed consent to participate in this study.

3.2. Sample Definition

The minimum number of participants for this experiment was defined according to the sample size criteria computed by the software tool G*Power—version 3.1.9.7—with an analysis of variance for repeated measures (ANOVA RM) [28]. This statistical approach is appropriate for experimental designs in which the same participants are exposed to all levels of the independent variable, allowing for the evaluation of differences across multiple conditions or time points while accounting for inter-subject variability [29]. The trials included three scenarios (i.e., standard—SS, collaborative—CS, and collaborative guided—CGS; number of conditions = 3), each with three observations (number of periods observed during the task = 3). The statistical test selected is the F-test, which compares the variances between conditions to identify significant differences. The following statistical input parameters were selected for the G*Power computation: effect size f = 0.4 (i.e., moderate magnitude range); error probability α = 0.05; power (1 − β) = 0.8; number of groups = 1 (the experiments involved a single group of participants); number of measurements = 9 (number of periods observed during the task multiplied by the number of conditions).
Thus, the minimum required number of participants, as calculated through ANOVA RM with the G*Power tool, was 9 (Figure A1, Appendix A). In this study, a total of 10 participants were recruited, slightly exceeding the minimum threshold. The decision to include 10 participants was motivated by both methodological and practical considerations. From a methodological standpoint, recruiting one additional participant helped to increase the robustness of the dataset and to mitigate the risk of potential data loss (e.g., due to technical issues with EEG recordings or incomplete sessions). From a practical perspective, the time limitations of the recruitment period constrained the possibility of enlarging the sample further. Therefore, the final sample size of 10 participants was considered appropriate, as it fulfilled the statistical power requirements while also providing an additional margin of reliability. This resulted in a total of N_experiments = 30 (number of participants × number of scenarios). Participants were all engineering students, all male and right-handed, with a mean age of 23.3 ± 3.3 years. None of the subjects had previous experience in product assembly or with cobots.
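For readers who wish to check the order of magnitude of such a calculation, the sketch below computes the power of a one-group repeated-measures ANOVA from the noncentral F distribution and increases N until the target power is reached. It is a minimal sketch, not a reproduction of the G*Power output: the assumed correlation among repeated measures (ρ = 0.5) and the sphericity correction (ε = 1) are G*Power defaults that are not reported above, and a different effect-size convention will shift the resulting minimum N.

```python
# Minimal sketch: power of a one-group repeated-measures ANOVA via the
# noncentral F distribution, increasing N until the target power is reached.
# rho and eps are assumed G*Power defaults, not values reported in the paper.
from scipy.stats import f as f_dist, ncf

def rm_anova_power(n, m=9, f_eff=0.4, alpha=0.05, rho=0.5, eps=1.0):
    """Approximate power for one group of n subjects measured m times."""
    df1 = (m - 1) * eps                           # numerator degrees of freedom
    df2 = (n - 1) * (m - 1) * eps                 # denominator degrees of freedom
    lam = n * m * eps * f_eff ** 2 / (1.0 - rho)  # noncentrality parameter
    f_crit = f_dist.ppf(1.0 - alpha, df1, df2)    # critical F under H0
    return 1.0 - ncf.cdf(f_crit, df1, df2, lam)   # P(F > F_crit | H1)

n = 2
while rm_anova_power(n) < 0.80:  # target power of 0.8
    n += 1
print(f"Minimum N under these assumptions: {n}")
```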

3.3. Experimental Framework and Measurement Overview

To provide a clear and structured overview of the experimental design, Table 1 summarizes all key measures evaluated in this study.
MWL was assessed both objectively, through EEG recordings (β/α ratio), and subjectively, via the NASA Task Load Index (NASA-TLX), with a rating scale from 0 to 10 [30]. The NASA-TLX also covered further subscales: Physical Demand, Temporal Demand, Effort, Frustration, and Fluency (adapted to evaluate the smoothness of task execution). These subscales provided insights into the participants’ perceived task demands under each condition.
Task performance was quantified using a standardized checklist based on the number of correctly assembled components, serving as an indicator for productivity.
Additionally, qualitative user feedback was collected through open-ended questions following the CS and CGS conditions, i.e., the ones involving HRI. Participants were asked, at the end of the collaborative scenario, about their impressions of working with and without the robot, the quality of interaction, perceived safety and comfort during interaction, and their workspace preference in terms of layout. These responses added depth to the quantitative findings, offering insights into the acceptability and perceived benefits of HRC.
Together, the combination of neurophysiological, psychometric, performance-based, and qualitative data allowed for a comprehensive evaluation of mental workload, user experience, and production performance across different collaborative assembly scenarios.

4. Case Study

4.1. Overview

The experimental setup imitates standard working conditions and allows for testing participants’ behavior during different assembly activities. It encompasses a quasi-realistic industrial workplace for the assembly of connection plates as part of wire harnesses (see Figure 1).
The experimental setup included three different areas. The first one (highlighted as “1” in Figure 1) is the modular assembly workstation. It has a touchscreen industrial PC used to manage the experiment and monitor process workflow. A height-adjustable work chair is used to respect the different anthropometric characteristics of the participants [31]. This area was designed to replicate a realistic industrial workstation, while at the same time providing the flexibility needed to test and record experimental conditions in a controlled manner.
The second area (highlighted as “2” in Figure 1) is the collaborative robotic workstation. It comprises the Mitsubishi Electric MELFA ASSISTA industrial cobot used for the tests. This workstation was located 1000 mm from the area where participants accomplished the tasks, in accordance with [32]. This layout ensured that the cobot could efficiently assist the participant without obstructing their working space, while also maintaining a safe and comfortable distance for interaction. The design of this area emphasized both safety and quality of interaction, reflecting the practical requirements of HRC in real industrial contexts.
The third area (highlighted as “3” in Figure 1) is dedicated to the quality check. It comprises the Mitsubishi Electric industrial robot RV-2FRL-D-S25 added next to the Melfa Assista cobot to carry the component during the quality check before the assembly is carried out by the participant. This ensured that only correctly prepared parts were used, reducing potential errors and guaranteeing uniformity across trials. From an experimental standpoint, the integration of a dedicated quality-check station added an additional layer of realism to the setup, since quality assurance is a fundamental step in industrial workflows.

4.2. Assembly Process

The component used in this study is represented in Figure 2. It is a sub-assembly composed of a metal sheet base with built-in threaded elements and a transparent acrylic cover connected by an aluminum hinge (three materials combined). For practical reasons, only the white upper flat plate of the sub-assembly of the connection component has been chosen as it is lightweight, has no sharp edges, and is made of plastic. This task aims to replicate typical operations of wire harness assembly activities. These activities necessitate a combination of manual dexterity, attention to detail, and the ability to understand complex wiring diagrams. Some components of wire harness installation might become repetitious, resulting in mental fatigue and decreased attention over time [33]. The combination of these features makes the product/process suitable for collaborative assembly. In the study, the total number of components assembled by every participant for each scenario was 75 (i.e., N_components).
Before the experiments, the participants received a five-minute training session to familiarize themselves with the workplace, the product, and the equipment involved in the experiments. The training time was defined according to preliminary pre-tests. The experiments took place from the winter of 2021–2022 to the end of 2023, with a minimum interval of four months between scenarios for each participant. The reason was to avoid recall bias when comparing the cognitive effort in the three scenarios [34]. The assembly tasks accomplished by the participants consisted of different steps performed in the three scenarios:
  • The participants take the component from the right side of the workstation and place it in front of them. In the first scenario (SS), the sub-components are grouped and located on the operator’s right side of the manual assembly desk. In the other scenarios (CS and CGS), the cobot delivered the component to the operator from the right side, then entered the manual assembly area and waited for the participant to finish the work. The cobot arranged the component for the participant to take. Throughout this phase, ergonomic concepts were employed to allow participants to grip the component without overextending their arms [35].
  • The participants take seven wires, one by one, from the container placed in the assembly area and connect them to the sub-component. The connections were illustrated by visual instructions provided through a touchscreen. The participant did not know in advance in which order the connections would be displayed on the monitor, to avoid memory retention during the task. Also, to eliminate bias in the results, the succession of wire connections randomly differs across all the repetitions in every scenario. In the first scenario (SS), the participant accomplished the task without any external assistance in the assembly. In the collaborative scenario (CS), while the operator prepared the component given by the robotic system, the robot picked and placed the next component at the location where the participant would retrieve it. In both the first and second scenarios, the participant followed the assembly instructions given by the touchscreen. In the collaborative guided scenario (CGS), on the other hand, the participant was guided through the tasks by labels applied on the sub-assembly to avoid errors, thus applying P-Y principles. Such labels are attached by the industrial robot module, as explained in Section 4.4.
  • In all the scenarios, at the end of the assembly tasks, the participant places the final component on the slide located on the left side and uses the touchscreen to progress to the next product.

4.3. Experimental Setup: Standard and Collaborative Workstation

In the design of the standard scenario (SS), the developed workstation is adjustable in height and tailored to the anthropometric characteristics of the participants. In this scenario, the participant accomplished the task without any intervention of the robot in the workplace.
In the collaborative scenario (CS), the participant accomplished the task with the assistance of the Mitsubishi Electric MELFA ASSISTA cobot [36]. The manual and collaborative workstations are reported in Figure 3a and Figure 3b, respectively.
The robot’s speed was set considering that the participant and the cobot worked in close proximity at the same time. Considering that the robotic system was located 1000 mm from the operator, the cobot speed was set at 250 mm/s. This value was defined based on the literature review and risk assessment. The end-effector used to handle the pieces was the VGC10 Electrical Vacuum Gripper, suitable for HRI activities [37]. The gripper is compact and lightweight, making it ideal for use with smaller industrial robots or cobots. It was selected to pick and position light components with a thin coating. To detach the components from the gripper, a minimum detachment pressure of 20 kPa was selected. This is the minimum value that allows the cobot to carry the component without dropping it during movements, while letting the participant take the component without any physical effort.

4.4. Experimental Setup: Collaborative Guided Workstation

In this scenario (CGS), in addition to the activities performed in the first (SS) and second scenario (CS), the assembly task is also supported by an automatic quality control phase in which a second robot carries the component in a dedicated area to inspect whether the labels printed on the component are correctly placed or not, as shown in Figure 4.
In particular, the CGS setup involved the implementation of other modules:
  • The Mitsubishi Electric industrial robot RV-2FRL-D-S25 (see “1” in Figure 4): This module is added next to the Melfa Assista cobot for the quality-check phase.
  • The Domino A100 inkjet printer (see “2” in Figure 4): This module is added as the end-effector of the Mitsubishi Electric industrial robot RV-2FRL-D-S25. It is used to print the labels (P-Y solutions) to be attached to the piece to guide the operator during the assembly activities. It uses inkjet technology to spray droplets of ink onto the components. The labels are composed of a combinative sequence of images for the wire connections of the components. In this way, participants are not required to follow the illustrations on the touchscreen, but only have to focus on the components, checking the correct wire assembly through the visual combination of images presented on the labels.
  • The SICK Inspector 611 (see “3” in Figure 4): This module is added to the workplace to support the quality-check phase of the printed labels. The inspection is visible on the touchscreen device mounted on the robotic work desk. Furthermore, as the industrial robot RV-2FRL-D-S25 is not collaborative, for safety reasons, the S300 SICK Safety Laser Scanner has been integrated to implement a Safety-Rated Monitored Stop modality.
An illustration of the CGS is presented in Figure 5.

4.5. EEG Sensor System

The SMARTING wireless EEG system is designed for capturing brainwave data without the restrictions of traditional, wired EEG systems [38]. This system offers a more comfortable and flexible way to record EEG data, making it suitable for various research and clinical applications. One of the primary features of SMARTING is its wireless capability, which allows greater mobility for the user. The system communicates with the recording computer via Bluetooth. The electrodes are positioned according to the 10–20 System [39]. In this study, an mBrainTrain EEG device was used.
The pre-processing phase of EEG data is a crucial step in EEG analysis, in which raw data are cleaned and prepared for further analysis. In this phase, a sampling rate of 250 Hz was used. Pre-processing EEG signals typically entails filtering the signal to remove artifacts such as eye movements, muscle tension, and noise. This study evaluated the MWL using the power ratio (β/α) [40,41]. Noise was reduced using a band-pass filter with a frequency range of 1–40 Hz. Bad-channel detection enabled intervention on channels that were not collecting high-quality signals. Because the EEG cap included additional channels distributed over the scalp, it was possible to interpolate these channels with those near the scalp area of interest. Then, the artifact subspace reconstruction (ASR) technique for EEG pre-processing, implemented in a MATLAB library [41], was used to detect and eliminate artifacts such as eye movements and muscle strain. Finally, independent component analysis (ICA) was performed to separate the signals into additive and independent components [42].
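As an illustration of this pipeline, the following minimal MNE-Python sketch applies the band-pass filtering, bad-channel interpolation, and ICA steps described above. The file name, the marked bad channel, and the excluded component indices are hypothetical, and the MATLAB-based ASR step is not reproduced here.

```python
# Minimal MNE-Python sketch of the described cleaning steps. "session.fif",
# the bad channel "T7", and the excluded component indices are hypothetical;
# the MATLAB-based ASR step is not reproduced here.
import mne

raw = mne.io.read_raw_fif("session.fif", preload=True)
raw.resample(250)                           # 250 Hz sampling rate, as reported
raw.filter(l_freq=1.0, h_freq=40.0)         # band-pass filter, 1-40 Hz

raw.info["bads"] = ["T7"]                   # mark a low-quality channel (example)
raw.interpolate_bads(reset_bads=True)       # interpolate from neighbouring sites

ica = mne.preprocessing.ICA(n_components=20, random_state=0)
ica.fit(raw)                                # decompose into independent components
ica.exclude = [0, 1]                        # indices of ocular/muscle components
raw_clean = ica.apply(raw.copy())           # remove the flagged components
```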
The MWL was analyzed in three consecutive parts of 30 min each. The choice of 30 min periods for evaluating MWL through EEG enabled meaningful within-subject comparisons across experiment time (beginning, middle, and end) and scenarios (SS, CS, and CGS). The analysis could have been affected by EEG spikes arising from the interaction with the cobot, due to an instinctive reaction from the participants. However, we assume that participants did not feel any fear from the interaction with the cobot, as they familiarized themselves with it in the training phase before the experiments [43]. These time observations reflected a balance between scientific validity, participant endurance, and data stability.
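A compact sketch of how such a windowed β/α index can be computed from a cleaned single-channel signal is given below. The alpha (8–13 Hz) and beta (13–30 Hz) band limits are conventional values assumed here, since the exact ranges used in the study are not stated.

```python
# Sketch of the windowed beta/alpha MWL index on one cleaned EEG channel.
# Band limits (alpha 8-13 Hz, beta 13-30 Hz) are conventional assumed values.
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(x, fs, lo, hi):
    freqs, psd = welch(x, fs=fs, nperseg=int(4 * fs))
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[mask], freqs[mask])      # integrate PSD over the band

def mwl_index(signal, fs=250, window_min=30):
    """Beta/alpha ratio for consecutive 30-minute segments of one channel."""
    win = window_min * 60 * fs
    ratios = []
    for start in range(0, len(signal) - win + 1, win):
        seg = signal[start:start + win]
        alpha = band_power(seg, fs, 8.0, 13.0)    # relaxation indicator
        beta = band_power(seg, fs, 13.0, 30.0)    # stress/engagement indicator
        ratios.append(beta / alpha)
    return ratios                                 # e.g., three values for 90 min
```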

5. Results

5.1. Mental Workload

The MWL index of the participants in the three scenarios was evaluated using the beta/alpha ratio in three consecutive parts of the tests, and is presented in Appendix A (Table A1, Table A2 and Table A3) and in Figure 6 below:
Figure 6 illustrates the MWL index values, measured using the β/α EEG ratio, across 10 participants under three experimental conditions: standard scenario (SS—highlighted in solid lines), collaborative scenario (CS—highlighted in long dash dot), and collaborative guided scenario (CGS highlighted in dashed lines), each assessed at three time intervals (0–30 min—highlighted in blue color, 30–60 min—highlighted in red, 60–90 min—highlighted in green). Overall, a clear trend emerged across participants showing that MWL was highest during the initial phase of the experiment (0–30 min), particularly under the standard scenario (SS), where participants had to complete tasks without any collaborative support or guidance. On the other hand, participants engaged in the collaborative scenario (CS) and, more notably, in the collaborative guided scenario (CGS), demonstrated lower β/α ratios, indicating reduced cognitive strain.
From the statistical analysis for the SS, the following results emerged: ANOVA RM—α = 0.05—yielded p-value = 0.194, F = 2.459, and F_crit = 3.633. In the SS, the MWL between the three test parts is not substantially different (p-value > α), hence the null hypothesis cannot be rejected (H = H0). In contrast, in the CS, all participants’ MWL dropped along the activity. In the CS, the ANOVA RM analysis (α = 0.05) yielded a p-value of 0.00005, an F value of 19.32, and an F_crit of 3.633. The CS resulted in a substantial drop (p-value < α) in MWL across the three test parts, rejecting the null hypothesis (H ≠ H0). Finally, in the CGS, the MWL of the participants is the lowest compared with the other scenarios. Indeed, the ANOVA RM analysis (α = 0.05) yielded p-value = 0.00003, F = 15.42, F_crit = 2.633. In the CGS, the MWL significantly decreased (p-value < α) along the three parts of the tests observed and the null hypothesis is rejected (H ≠ H0).
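A minimal sketch of how such a repeated-measures ANOVA can be run in Python is shown below, using statsmodels’ AnovaRM on the SS values from Table A1 (Appendix A). The values are rounded to three decimals here, so the resulting F and p may differ slightly from the figures reported above.

```python
# Sketch of the repeated-measures ANOVA on the SS mental-workload index using
# statsmodels' AnovaRM, with the beta/alpha values of Table A1 rounded to
# three decimals (so F and p may differ slightly from the reported figures).
import pandas as pd
from statsmodels.stats.anova import AnovaRM

ss = {  # participant -> beta/alpha ratio in the three 30-min periods (Table A1)
    1: (0.772, 0.693, 0.704), 2: (1.042, 0.992, 0.975), 3: (1.010, 1.021, 1.029),
    4: (1.282, 1.365, 1.369), 5: (0.736, 0.681, 0.657), 6: (1.165, 1.129, 1.138),
    7: (1.061, 1.003, 0.926), 8: (1.010, 1.021, 1.029), 9: (1.034, 1.053, 1.027),
    10: (1.165, 1.129, 1.138),
}
rows = [{"participant": p, "period": t, "mwl": v}
        for p, vals in ss.items() for t, v in enumerate(vals, start=1)]
df = pd.DataFrame(rows)

res = AnovaRM(data=df, depvar="mwl", subject="participant", within=["period"]).fit()
print(res)  # F statistic and p-value for the within-subject factor "period"
```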

5.2. NASA-TLX Results

Figure 7 presents participants’ subjective NASA-TLX ratings for each scenario, using a standardized 0–10 scale. The NASA-TLX t-test comparing the three scenarios (α = 0.05) revealed the following results: (a) Mental Demand, p-value = 0.0004; (b) Physical Demand, p-value = 0.08; (c) Temporal Demand, p-value = 0.088; (d) Performance, p-value = 0.046; (e) Effort, p-value = 0.0085; and (f) Frustration, p-value = 0.01. The NASA-TLX data show no significant difference in Physical and Temporal Demand (p-value > α) between the standard (SS) and collaborative situations (CS and CGS), supporting the null hypothesis (H = H0). The t-test analyses of Mental Demand, Effort, Performance, and Frustration revealed significant differences (p-value < α) across the three situations, leading to the rejection of the null hypothesis (H ≠ H0). In keeping with the EEG data, the NASA-TLX demonstrated a lower level of MWL of the participants in the scenarios with the cobot (CS and CGS) than in the conventional scenario without the cobot (SS).
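For illustration, the sketch below shows how a paired comparison of one NASA-TLX subscale between two scenarios could be computed with SciPy. The rating vectors are hypothetical placeholders, since the individual questionnaire scores are not reported in the text.

```python
# Illustrative paired comparison of one NASA-TLX subscale between two scenarios.
# The rating vectors are hypothetical placeholders (individual scores are not
# reported in the text); scipy.stats.ttest_rel performs the paired t-test.
from scipy.stats import ttest_rel

mental_demand_ss = [8, 7, 9, 8, 7, 8, 9, 7, 8, 8]  # hypothetical 0-10 ratings, SS
mental_demand_cs = [5, 4, 6, 5, 5, 4, 6, 5, 4, 5]  # hypothetical 0-10 ratings, CS

t_stat, p_value = ttest_rel(mental_demand_ss, mental_demand_cs)
print(f"Mental Demand, SS vs CS: t = {t_stat:.2f}, p = {p_value:.4f}")
```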

5.3. User Feedback

In addition to the NASA TLX, at the conclusion of the experiments, participants were asked additional direct open questions about their experience with and without the robot, its fluency (quality of interaction) and trajectory (i.e., whether predictable or not), how safe and comfortable the interaction with it was, and whether the workplace setting was better in the standard or collaborative scenario. According to the answers, the assembly task supported by the robot (i.e., in CS and CGS) resulted in a safer and more comfortable way to pick the plate from the gripper. In terms of motion, the participants did not feel stressed when the robot moved, nor by its response when they grasped the plates from the gripper. Furthermore, regarding the layouts, the absence of plates on the workstation (where the participant installed the component) was regarded more positively in the collaborative situation. The participants had more room to assemble the component and were more confident in their ability to complete the assignment, as they felt less distracted. Overall, the participants felt most at ease executing the work in the CGS.

5.4. Task Performance

Data about task performance are summarized in Table A4 in Appendix A. Figure 8 presents the productivity index as the number of components accomplished by each of the 10 participants during the three assembly scenarios. A consistent trend was observed across all participants: the number of assembled components improved progressively from SS to CS to CGS. In SS (blue bars), participants generally achieved the lowest number of completed components. With the introduction of the collaborative robotic system, in the CS condition (red bars), performance improved for most participants, suggesting that shared task execution had a positive impact on throughput.
The most significant performance gain was observed in the CGS condition (green bars), where participants received both physical support and visual guidance (i.e., through the PY solution). This scenario resulted in the highest number of components completed for every participant.
The t-test examination of performance comparing the three scenarios yielded a p-value of 0.00018 (<α = 0.05). In conclusion, the participants accomplished the tasks more successfully in the CS and CGS than in the SS, assembling more components during the activity. On average, an increase of 18.4% from SS to CS was observed, and an increase of 15.5% from CS to CGS. Thus, the overall increase in the number of components accomplished correctly from SS to CGS was 33.9%.
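The reported percentages can be reproduced from the counts in Table A4 (Appendix A) if the increases are expressed relative to the maximum achievable output of 75 components per participant (750 in total); the short sketch below illustrates this calculation under that assumption.

```python
# Reproducing the reported productivity gains from Table A4, assuming the
# percentages are expressed relative to the maximum achievable output
# (75 components x 10 participants = 750).
ss  = [48, 39, 60, 49, 52, 40, 34, 45, 65, 43]   # components assembled in SS
cs  = [62, 64, 72, 54, 61, 46, 65, 55, 74, 60]   # components assembled in CS
cgs = [75, 75, 70, 73, 73, 75, 69, 75, 75, 69]   # components assembled in CGS

max_total = 75 * 10
gain_ss_cs  = (sum(cs)  - sum(ss)) / max_total * 100   # ~18.4%
gain_cs_cgs = (sum(cgs) - sum(cs)) / max_total * 100   # ~15.5%
gain_ss_cgs = (sum(cgs) - sum(ss)) / max_total * 100   # ~33.9%
print(f"SS->CS: {gain_ss_cs:.1f}%, CS->CGS: {gain_cs_cgs:.1f}%, SS->CGS: {gain_ss_cgs:.1f}%")
```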

6. Discussion

6.1. Results Discussion

According to the data, MWL was significantly lower in the CS and CGS compared to the SS. In the first scenario (SS), the MWL did not change considerably after three consecutive periods (i.e., every 30 min). In the other two cases, MWL reduced with time. In the second part (30–60 min) and third part (60–90 min) of the CS and CGS, the reduction in the MWL was larger compared to the SS. These effects became more pronounced over time, with a general decrease in MWL observed from the first to the third time interval across all conditions. The CGS condition consistently yielded the lowest MWL values, particularly in the final interval (60–90 min), highlighting the effectiveness of combining HRC with additional real-time guidance in mitigating cognitive demands. Additionally, inter-individual differences were evident: some participants (e.g., Participants 4 and 5) exhibited elevated workload levels regardless of the condition, whereas others maintained lower and more stable β/α ratios. These variations may reflect differences in task engagement, stress reactivity, or cognitive strategies, and underscore the importance of accounting for user-specific factors in neuroergonomic evaluations.
These findings are consistent with earlier research on industrial HRI, where a lower level of MWL was perceived during (well-designed) collaborative tasks compared to purely manual tasks [44]. This suggested that structured physical assistance not only reduced cognitive effort but potentially facilitated task execution.
In addition, MWL results are aligned with the subjective analysis carried out using the NASA TLX. The measurements of Mental Demand, Effort, and Frustration indicated a significant decrease in these parameters pointed out by the participant during tasks with the robot (i.e., CS and CGS). Results are in line with previous research about the analysis of performance in HRC [45].
According to the answers to the open questions, in SS, participants felt more stressed during the tasks, desiring to move and have some stretch on the arms potentially due to a lack of physical support. The tasks supported by the cobot (in CS and CGS) and guided by PY solutions (in CGS) were perceived better, having the feeling of being aided during the whole activity. However, considering the NASA TLX results, there was no significant variation in Physical and Time Demands between the three scenarios. Since the task was not physically demanding, one probable reason is that the participants accomplished the activity in a seated and comfortable position throughout the test. In addition, the physical workload was almost equal in all three scenarios.
Finally, the checklist revealed a higher level of performance in terms of successfully assembled components in the CS and CGS. To corroborate this, in the NASA TLX, participants reported being more satisfied in accomplishing the task in the CS and CGS.
Regarding the highest-level productivity in the last scenario (CGS), the label guidance helped reduce uncertainty, improved task flow, and reduced the MWL of participants. The consistency of this improvement across all participants highlights the effectiveness of combining HRC with real-time guidance for enhancing assembly performance.
Taken together, these findings support the hypothesis that well-structured collaborative and guided interaction with robots can reduce MWL, improve production performance, and promote user adaptation over time. As a result, participants performed better in completing the work in the collaborative settings in terms of ergonomics and task performance.

6.2. Answers to RQs

Manual assembly operations, such as wire harnessing, remain demanding and mentally stressful tasks in industrial processes. This has prompted the investigation and study of collaborative systems that enable operators to work alongside robots. Thus, understanding the MWL in HRI is crucial [23]. However, the use of wireless, real-time, objective metrics such as EEG in industrial collaborative applications to characterize MWL in terms of brainwave activity is still in its early phases [46]. In previous works, authors have provided strategies for measuring MWL in smart factories; however, these tests revealed no significant difference between scenarios with and without HRI [47,48].
In this study, the power ratio β/α was found to be suitable in predicting participants’ MWL. Other studies employed other brainwaves, such as Theta waves, to evaluate the MWL [49]. The suitable MWL index in HRI scenarios and for the type of task is still the object of debate [50,51]. This is why, in this study, the authors combined the quantitative analysis via the EEG with subjective assessments to define the fluctuation of the operator’s cognitive traits in the three scenarios.
Regarding RQ1, the findings indicate that a proper implementation of cobots in industrial assembly workstations leads to a significant reduction in mental workload (MWL). This outcome was confirmed by repeated-measures ANOVA, combining objective (EEG-based β/α ratio) and subjective (NASA-TLX) assessments. Moreover, the results demonstrate that the degree of robotic assistance played a decisive role in shaping MWL levels. Specifically, while the collaborative scenario (CS) already alleviated part of the cognitive demand compared to the standard condition, the collaborative guided scenario (CGS) produced the lowest MWL values across participants. This suggests that structured robotic support, combined with task guidance, can substantially reduce the operator’s cognitive burden during collaborative assembly activities.
Regarding RQ2, the results suggest that operators’ performance improved thanks to the introduction of collaborative robotic systems and the related P-Y solutions, and potentially also because of the reduction in MWL that these may have brought about. The number of components accomplished by the participants over time was shown to be higher when participants worked alongside the cobot. Specifically, participants accomplished 18.4% more components in the collaborative scenario (CS) compared to the standard scenario (SS), and a further 15.5% increase was observed in the collaborative guided scenario (CGS) compared to CS.
In addition, subjective evaluations such as the NASA-TLX and answers to open questions revealed that the deployment of cobots had a favorable impact on participants during the assembly activity. According to participants’ responses, the assembly activity involving the robot enhanced the ease and pleasantness of retrieving the plate from the gripper. Moreover, the robot’s motion was perceived as predictable and smooth, and its response during the handover was considered non-intrusive.

6.3. Limitations of This Work

This study presents some limitations.
Firstly, the analysis was carried out on a limited sample of homogeneous participants (N = 10). Even though the sample size was determined a priori using the G*Power approach [28] to ensure statistically robust within-subject comparisons, the authors acknowledge that the number of participants remains limited in scope. Moreover, the sample consisted solely of male engineering students, which restricts the generalizability of the findings. Cognitive responses to collaborative robotics may differ across genders, skills, and cultural backgrounds. Individuals with technical and analytical backgrounds might be more enthusiastic or comfortable engaging with novel technologies such as cobots and EEG-based neuroergonomic equipment [52], potentially resulting in lower resistance and faster adaptation compared to a more diverse or industrially experienced workforce. Furthermore, the novelty of the task and tools might have elicited higher initial stress levels that diminished quickly due to rapid learning curves, a pattern that may differ in older, less tech-savvy, or more experienced operators. Indeed, participants who completed the collaborative scenario second may have benefited from familiarity with the task gained during the standard condition, possibly skewing results. These factors limit the generalizability of the findings to real-world industrial environments. Future research should therefore include a larger and more heterogeneous participant pool, including female participants and industrial workers of different ages, professional backgrounds, sectors, and geographic regions, to validate and extend the present findings in real-world operational settings.
Secondly, to adhere to the stipulated comparative evaluation, the same participants completed the task in all three scenarios over a minimum of four months. While research in the field of neuroergonomics has indicated that this timeframe is suitable for avoiding memory bias of an activity performed by humans, this evidence has only been shown empirically [53].
Thirdly, regarding the task performed by participants, as wire harness assembly is known for its precision requirements and mid/high cognitive demand, the authors acknowledge that relying on a single task type may limit the breadth of conclusions that can be drawn regarding MWL across varied industrial contexts. Although the study employed three distinct scenarios to simulate varying levels of support and complexity, all scenarios were based on the same core task. Future research could extend this work by incorporating a broader spectrum of assembly tasks, varying in cognitive, perceptual, and motor demands, to evaluate how task characteristics influence mental workload. Including more complex or error-prone tasks (e.g., multi-stage mechanical assembly, real-time inspection, or time-critical decision-making) would strengthen the representativeness of the findings and support more robust generalization in diverse manufacturing environments.

7. Conclusions

The purpose of this research is to evaluate the different responses of participants in terms of MWL and task performance (i.e., productivity) according to different levels of support provided by a collaborative robotic system. This involves the analysis of subjective (NASA TLX), objective (EEG-based), and observational measurements, comparing participants’ results in three distinct scenarios: a standard scenario in which participants had to carry out the manual assembly tasks without any external intervention; a collaborative scenario in which the participant had to carry out the assembly task along with the robot; and a collaborative guided scenario in which the participants worked alongside the cobot and received instruction through Poka-Yoke labels.
Results show a significant reduction in MWL, as measured both objectively (EEG—β/α ratio) and subjectively (NASA-TLX and open questions), alongside a clear improvement in task performance in terms of productivity: participants accomplished 18.4% more components in the collaborative scenario (CS) compared to the standard scenario (SS), and a further 15.5% increase was observed in the collaborative guided scenario (CGS) compared to CS. These findings highlight the added value of combining collaboration with real-time guidance to enhance productivity.
The research also contributed to developing an EEG-based neuroergonomic assessment framework for MWL, integrating qualitative and quantitative measurement techniques. The study was conducted across three distinct but structurally identical scenarios, with an equal number of participants. This setup enabled a controlled comparison of cognitive- and performance-related effects when working with or without robotic assistance. Overall, the results suggest that guided collaboration not only reduces cognitive effort but also supports more effective task execution in assembly settings.
This research opens the door for the neuroergonomic analysis of MWL in industrial HRI. Further studies should address the evaluation of MWL through other measurements to provide a comprehensive analysis of its trend. The combination of qualitative and quantitative measurements would be suitable for a thorough understanding of the cognitive demands of operators in collaborative tasks. Ongoing research will consider the relationship between mental workload and physical workload in industrial HRI. For this analysis, the deployment of electromyography (EMG) sensors would be suitable. Like the EEG, this measurement offers a real-time, efficient, and compact analysis of the physical demand of the operator while performing the task. The design of the system necessary to use, record, and interpret EMG data will be crucial to define the level of physical fatigue and to assess its relationship with the MWL.

Author Contributions

Conceptualization, C.C., M.S., and M.D.; methodology, C.C., M.D., and M.S.; software, C.C., A.V., and M.D.; validation, C.C., M.S., M.D., and L.G.; formal analysis, C.C. and M.D.; investigation, C.C.; resources, M.D. and D.M.; data curation, C.C. and A.V.; writing—original draft preparation, C.C.; writing—review and editing, C.C., M.D., and L.G.; visualization, C.C., M.S., and A.V.; supervision, M.D. and L.G.; project administration, M.D.; and funding acquisition, M.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Faculty of Medical Sciences, University of Kragujevac (Decision number: 01-6471, based on submitted study protocol no. 01-5578, on 3 June 2021).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data used in this study were not released publicly due to restrictions imposed by the University and the company involved.

Acknowledgments

This study was supported by the project Collaborative Intelligence for Safety Critical Systems, CISC, funded by the European Commission, HORIZON 2020 Marie Sklodowska-Curie International Training Network (Grant Agreement ID: 955901).

Conflicts of Interest

The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Appendix A

Figure A1. Selected input parameters and related results of the G*Power Tool Statistical Analysis.
Table A1. Mental Workload (MWL)—Standard Scenario (SS).

Candidate Number   1st Part SS (0–30 min)   2nd Part SS (30–60 min)   3rd Part SS (60–90 min)
1                  0.772474606              0.692860351               0.704460397
2                  1.041756482              0.99235386                0.975106769
3                  1.009762146              1.021124855               1.028503097
4                  1.281920772              1.364712446               1.369240439
5                  0.735837045              0.680604662               0.656824445
6                  1.164515278              1.128745483               1.13843409
7                  1.060649624              1.002879283               0.926139392
8                  1.009762146              1.021124855               1.028503097
9                  1.033699144              1.052905738               1.026548916
10                 1.164515278              1.128745483               1.13843409
Table A2. Mental Workload (MWL)—Collaborative Scenario (with robot—CS).

Candidate Number   1st Part CS (0–30 min)   2nd Part CS (30–60 min)   3rd Part CS (60–90 min)
1                  0.769286117              0.683612302               0.613492
2                  0.693132066              0.677324776               0.5973241
3                  1.061111957              1.045100041               1.036104363
4                  1.289545341              1.259335851               1.131426059
5                  0.47350241               0.408151724               0.399456098
6                  1.213856242              1.163920243               1.15775252
7                  0.851468278              0.83068285                0.794087422
8                  0.961111957              0.845100041               0.836104363
9                  0.930194153              0.922699171               0.918183756
10                 1.013856242              1.003920243               0.95775252
Table A3. Mental Workload (MWL)—Collaborative Guided Scenario (with robot and guidance module—CGS).

Candidate Number   1st Part CGS (0–30 min)   2nd Part CGS (30–60 min)   3rd Part CGS (60–90 min)
1                  0.46818577                0.449082462                0.414646928
2                  0.495741994               0.483022635                0.451628618
3                  0.86263258                0.806475383                0.78148261
4                  0.875375896               0.765949765                0.759658133
5                  0.29292757                0.279212989                0.257896826
6                  0.96223297                0.862404235                0.821719726
7                  0.718651768               0.654736041                0.631027896
8                  0.807322625               0.781813941                0.734006918
9                  0.685569252               0.680237227                0.630932719
10                 0.706210034               0.675140146                0.647749013
Table A4. Number of components accomplished by the participants in the three scenarios (standard scenario—SS, collaborative scenario with robot—CS, collaborative guided scenario with robot and Poka-Yoke design—CGS).

Candidate Number   N. Components Accomplished in SS   N. Components Accomplished in CS   N. Components Accomplished in CGS
1                  48                                 62                                 75
2                  39                                 64                                 75
3                  60                                 72                                 70
4                  49                                 54                                 73
5                  52                                 61                                 73
6                  40                                 46                                 75
7                  34                                 65                                 69
8                  45                                 55                                 75
9                  65                                 74                                 75
10                 43                                 60                                 69

References

  1. Mokyr, J. The Enlightened Economy: An Economic History of Britain 1700–1850; Yale University Press: New Haven, CT, USA, 2010. [Google Scholar]
  2. Mokyr, J.; Strotz, R.H. The Second Industrial Revolution, 1870–1914; Northwestern University: Evanston, IL, USA, 2000. [Google Scholar]
  3. Teixeira de Azevedo, M.; Martins, A.B.; Kofuji, S.T. Digital Transformation in the Utilities Industry. In Research Anthology on Digital Transformation, Organizational Change, and the Impact of Remote Work; IGI Global: Hershey, PA, USA, 2019. [Google Scholar]
  4. Heath, J. The Fourth Industrial Revolution. Teaching and Learning in the 21st Century; Brill Academic Publishers: Leiden, The Netherlands, 2016. [Google Scholar]
  5. Tiwari, S.; Bahuguna, P.C.; Walker, J. Industry 5.0. In Handbook of Research on Innovative Management Using AI in Industry 5.0; IGI Global: Hershey, PA, USA, 2022. [Google Scholar]
  6. Demir, K.A.; Döven, G.; Sezen, B. Industry 5.0 and Human-Robot Co-working. Procedia Comput. Sci. 2019, 158, 688–695. [Google Scholar] [CrossRef]
  7. Mourtzis, D.; Angelopoulos, J.D.; Panopoulos, N. A Literature Review of the Challenges and Opportunities of the Transition from Industry 4.0 to Society 5.0. Energies 2022, 15, 6276. [Google Scholar] [CrossRef]
  8. Nahavandi, S. Industry 5.0—A Human-Centric Solution. Sustainability 2019, 11, 4371. [Google Scholar] [CrossRef]
  9. Caiazzo, C.; Savković, M.; Pusica, M.; Milojevic, D.; Leva, M.C.; Djapan, M. Development of a Neuroergonomic Assessment for the Evaluation of Mental Workload in an Industrial Human–Robot Interaction Assembly Task: A Comparative Case Study. Machines 2023, 11, 995. [Google Scholar] [CrossRef]
  10. Ivanov, D.A. The Industry 5.0 framework: Viability-based integration of the resilience, sustainability, and human-centricity perspectives. Int. J. Prod. Res. 2022, 61, 1683–1695. [Google Scholar] [CrossRef]
Figure 1. Experimental setup: (1) modular assembly workstation; (2) collaborative robotic workstation; (3) robotic-assisted quality-check area.
Figure 2. Sub-assembly used for the experiments.
Figure 3. Testing scenarios: (a) manual (used in the SS), and (b) collaborative (used in the CS and CGS).
Figure 4. Collaborative guided module: the components are checked in this module through a quality inspection phase. In order: (1) the Mitsubishi Electric RV-2FRL-D-S25 industrial robot; (2) the Domino A100 inkjet printer; (3) the SICK Inspector 611.
Figure 5. Collaborative guided scenario (a): the participant accomplishes the task in collaboration with the cobot and is guided during the task by a set of labels (i.e., the Poka-Yoke system) (b).
Figure 6. Mental workload (β/α, Y-axis) of the participants (X-axis) in three consecutive parts (30 min each), analyzed in the standard scenario (SS, long dash-dot line), the collaborative scenario (CS, dotted line), and the collaborative guided scenario (CGS, dashed line).
Figure 7. NASA TLX results (Y-axis; rating scale from 0 to 10) in the three scenarios over the participants (X-axis). In order: (a) Mental Demand; (b) Physical Demand; (c) Temporal Demand; (d) Performance; (e) Effort; (f) Frustration.
Figure 8. Number of assembly components produced correctly in the three scenarios (Y-axis) over the participants (X-axis).
Table 1. Overview summary of parameters and evaluation methods.

| Metric | Description | Measurement Method | Tool/Instrument |
| --- | --- | --- | --- |
| Mental Workload (EEG) | Neurophysiological indicator of cognitive load | Brain β/α wave ratio analysis (quantitative) | EEG headset and dedicated software for data processing |
| Mental Demand | Cognitive effort required to complete the task | NASA-TLX subscale (subjective) | NASA-TLX self-reported questionnaire |
| Physical Demand | Physical effort required to perform the task | NASA-TLX subscale (subjective) | NASA-TLX self-reported questionnaire |
| Temporal Demand | Time pressure or urgency perceived during the task | NASA-TLX subscale (subjective) | NASA-TLX self-reported questionnaire |
| Effort | Overall exertion to accomplish task goals | NASA-TLX subscale (subjective) | NASA-TLX self-reported questionnaire |
| Frustration | Emotional response to task complexity and robot interaction | NASA-TLX subscale (subjective) | NASA-TLX self-reported questionnaire |
| Fluency of Task | Smoothness and ease of task execution | NASA-TLX/adapted subscale (subjective) | NASA-TLX self-reported questionnaire |
| Productivity | Number of correctly assembled components | Checklist-based accuracy scoring (quantitative) | Manual observation and post-process product verification |
| User Perception | Impressions about the interaction with the robot, safety, comfort, and layout preference | Open-ended questions (qualitative) | Post-task written responses, thematic analysis |
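
For readers interested in how a β/α mental-workload index of the kind listed in Table 1 can be derived from EEG recordings, the sketch below is a minimal, hypothetical illustration and not the processing pipeline used in this study: it estimates band power with Welch's method and returns the beta-to-alpha power ratio for a single channel. The sampling rate, window length, and band limits (alpha 8–12 Hz, beta 13–30 Hz) are assumptions chosen for the example only.

```python
# Illustrative sketch (not the authors' pipeline): beta/alpha mental-workload
# index for one EEG channel, using Welch power spectral density estimates.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid


def band_power(freqs, psd, low, high):
    """Integrate the power spectral density over a frequency band [low, high] Hz."""
    mask = (freqs >= low) & (freqs <= high)
    return trapezoid(psd[mask], freqs[mask])


def beta_alpha_ratio(eeg_channel, fs=250.0, alpha=(8.0, 12.0), beta=(13.0, 30.0)):
    """Return the beta/alpha power ratio (higher values read as higher workload).

    eeg_channel: 1-D array of samples from one electrode (assumed pre-cleaned).
    fs: sampling rate in Hz (250 Hz is an assumption, not taken from the paper).
    """
    freqs, psd = welch(eeg_channel, fs=fs, nperseg=int(fs * 2))  # 2-s windows
    return band_power(freqs, psd, *beta) / band_power(freqs, psd, *alpha)


if __name__ == "__main__":
    # Example with synthetic data: 30 s of noise standing in for a cleaned trace.
    rng = np.random.default_rng(0)
    fake_eeg = rng.standard_normal(int(250.0 * 30))
    print(f"beta/alpha index: {beta_alpha_ratio(fake_eeg):.2f}")
```

In practice such an index would be computed per participant and per 30-min segment (as in Figure 6), after artifact removal and channel selection appropriate to the headset used.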