Article

The Human–Robot Multistation System—Visual Task Guidance and Human Initiative Scheduling for Collaborative Work Cells

1 PROFACTOR GmbH, 4407 Steyr-Gleink, Austria
2 Independent Researcher, 4040 Linz, Austria
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(22), 12230; https://doi.org/10.3390/app152212230
Submission received: 17 October 2025 / Revised: 7 November 2025 / Accepted: 16 November 2025 / Published: 18 November 2025
(This article belongs to the Special Issue Human–Robot Collaboration and Its Applications)

Abstract

In this paper, we present enabling concepts for Zero Defect Manufacturing (ZDM) based on flexible human–robot interaction. We introduce the Human–Robot Multistation System (HRMS) as a novel framework for flexible, human-initiated task allocation across multiple workstations. An HRMS is defined as one or more workstations that support human–robot collaborative task execution and integrate intelligent perception and interaction systems with coordination logic, enabling alternating or collaborative task execution. These components allow human workers to interact with the system through a gesture-based modality and to receive task assignments. An agent-based task scheduler responds to human-initiated ‘Ready’ signals to pace activities ergonomically. We built a laboratory demonstrator for an Industry 5.0 ZDM final inspection/rework use case and conducted a first pilot study (n = 5, internal participants) to evaluate system usability (SUS), perception (Godspeed), mental workload (NASA-TLX), completion times, and error rates. Results indicated technical feasibility under laboratory conditions and acceptable usability, with SUS 70.5 ± 22 (‘OK’ toward ‘Good’), overall GQS 3.2 ± 0.8, RAW NASA-TLX 37 ± 16.3, mean job throughput time 232.5 ± 46.5 s, and errors in 9/10 jobs (E1–E4). In simulation, a proximity-aware shortest-path heuristic reduced walking distance by up to 70% versus FIFO without productivity loss. We conclude that HRMS is feasible with acceptable user experience under lab constraints, while recurrent task-level failures require mitigation and larger-scale validation.

1. Introduction

In Industry 5.0 and Zero Defect Manufacturing (ZDM), human–robot collaboration (HRC) is becoming increasingly important [1,2,3]. Where possible, robots should relieve humans of repetitive and burdensome tasks [4,5]. This naturally raises the question of how best to organize HRC. According to [6], trust, usability, and acceptance are key requirements for the successful introduction of HRC. For humans and robots to work together effectively, it is often necessary to redesign workplaces accordingly [7]. Important criteria in this context include safety, anxiety, and fear of surveillance, as well as concepts for user guidance and for achieving the highest possible level of acceptance among the workforce. According to [7,8,9], different forms of human–robot interaction (HRI) exist, such as collaboration, synchronization, cooperation, and co-existence, given particular capabilities of robotic systems. Based on data reported by [7], in 2021, 32.5% (113) of 348 publications in collaborative robotics addressed human–robot collaboration, but only 3.7% (13) considered task allocation.

1.1. Ergonomics-Oriented Manufacturing Scheduling

In a human-centered approach to manufacturing (as proposed in the context of Industry 5.0), the assignment of work, in particular the organization of work in human–robot and human-system collaborative settings, requires the consideration of human factors in planning and scheduling. This includes the incorporation of worker ergonomics into scheduling, balancing productivity with workers’ well-being. Human factors, such as fatigue, physical workload, and skill proficiency, are considered part of a collection of optimization criteria used to schedule the workload of both workers and robotic systems. In [10], an approach to human-centered workforce scheduling is presented. In [11], a dynamic scheduling concept for a hybrid flow shop [12] (similar to the approach outlined in this paper) is developed based on a self-organizing multiagent system (MAS) and reinforcement learning; worker fatigue and skill levels are considered in this scheduling approach. Human fatigue is also addressed in [13], where the presented task scheduling approach plans micro-breaks during job cycles to avoid fatigue accumulation. In [14], worker fatigue and productivity are addressed with a simulation-based approach that focuses on task scheduling in the context of human–robot collaboration. The digital panopticon [15] describes continuous, persistent data collection that makes workers feel constantly visible and self-disciplined.
In the context of the work presented in this paper (see Section 2.2), the concept of human initiative is regarded as a basic principle of the scheduling approach developed. Task allocation is driven by human initiative and paced according to worker preferences and ergonomic needs. This is expected to minimize the human discomfort frequently reported in collaborative human–robot activities, while still allowing effective human–robot collaboration.

1.2. Agent-Based Manufacturing Scheduling

Agent-based manufacturing scheduling has seen considerable advancements. As pointed out in [16], a MAS approach allows the distribution of scheduling control among autonomous agents, with the goal of enhancing flexibility, scalability, and responsiveness in complex production settings. Effects such as reduced production makespan and improved overall performance under disturbance conditions are reported in [17]. The work of [18] considers a make-to-stock problem in the construction industry. A combination of functional and physical manufacturing system decomposition is employed to obtain agent types, e.g., a schedule agent controls schedule generation (functional agent), while machine agents represent physical machines on the shop floor. For schedule generation, a heuristic algorithm has been developed. A job shop scheduling problem (JSSP) in the context of re-manufacturing is solved in [19]. The JSSP is extended with the scheduling of automated guided vehicles (AGVs) that transport material between machines, and the extended JSSP is solved with Constraint Programming. The scheduler component is embedded in an agent-based hybrid control architecture comprising centralized and decentralized components.
Recent work integrates MASs with AI methods, such as embedding deep reinforcement learning to enable schedule optimization in a flexible job shop integrating AGVs [20]. A dynamic JSSP is tackled in [21], with stochastic process times and machine breakdowns as dynamic events. Scheduling policies are trained with multiagent reinforcement learning, employing a knowledge graph that represents dynamic machines in a factory. The knowledge graph encodes machine-specific job preference information and has been trained with historical manufacturing records.
In this paper, an agent-based scheduling approach is used that considers a final end-of-line inspection scenario to achieve a zero-defect situation for products leaving a multistation work cell. A particular scheduling problem arises from this scenario due to properties inherent to ZDM.

1.3. Zero Defect Manufacturing

ZDM is a quality management method to maintain both product and process quality at a level of zero defect output, ensuring that defective products do not leave a production facility [22]. ZDM is made possible through the ongoing digitalization of manufacturing environments, using data-driven and artificial intelligence (AI) approaches in line with Industry 4.0 concepts. ZDM is an important means of achieving sustainability goals in manufacturing and aligning them with Industry 5.0 objectives. An early proposal for ZDM was made in [23]. In [24], it is stated that zero defect output in manufacturing is achieved through corrective, preventive, and predictive techniques. The authors outline a ZDM framework that defines four basic activities for achieving ZDM, namely, detecting or predicting defects (Detect, Predict), followed by repair activities (Repair), as well as implementing measures to prevent failure situations (Prevent). In this model, Detect and Predict identify quality issues and are regarded as the triggering strategies, whereas Repair and Prevent are considered actions to remedy or prevent a defect. The model proposes three so-called ZDM strategy pairs: Detect–Repair, Detect–Prevent, and Predict–Prevent. Modern data-driven approaches may go beyond a pure Detect–Repair activity and use the data to improve production after defect detection (Detect–Prevent). Specifically, predicting potential defects before they occur (Predict–Prevent) utilizes modern AI-driven methods (see, for example, [25]). According to [3], humans are now regarded as assets rather than sources of error. However, human-centric aspects of ZDM have received little attention and are often overlooked.

1.4. Research Gap and Contributions

This paper addresses the following research question: ‘How to implement a human-centric approach to task scheduling in an end-of-line quality assurance situation with human–robot collaboration?’ None of the existing work provides a comprehensive answer to that question. Approaches to task scheduling in HRC scenarios predominantly address assembly use cases and avoid the dynamic character of quality assurance, where important information, such as the number of rework cycles, is not available to the scheduler in advance. Moreover, existing HRC approaches rarely unify multistation resource sharing, human-initiated task assignment, and agent-based scheduling.
Addressing the above research question, a Human–Robot Multistation System (HRMS) framework has been implemented as a laboratory setup to address ZDM problems with a human–robot collaborative scheduling approach. The ZDM scenario shown is the end-of-line quality inspection of products with possible repair activities, in order to satisfy a zero defect requirement for a particular production process. In terms of the model proposed by [22,24], the ZDM approach implemented with our HRMS framework can be categorized as a Detect–Repair strategy, where repetitive inspection and rework tasks are assigned to teams of human workers and robots. One or more workstations are operated flexibly by both humans and robots, and close interaction between human workers and robots is enabled through a multiagent scheduling approach integrated with intelligent perception and interaction systems, allowing alternating or collaborative task execution in a multistation setting.
The HRMS framework builds on prior research [26,27]. In [26], the agent-based approach to task scheduling used in the framework is described, and in [27] that scheduling approach is evaluated in simulation, focusing on scalability and performance aspects. The simulation model used for evaluation implements the ZDM scenario shown in this paper, and it realizes a virtual HRMS environment with up to three workers, two robots, and six workstations. The work presented in this paper extends the prior research with a real-world, laboratory-scale implementation of the HRMS concept, integrating the agent-based scheduler from [26] and introducing a human-initiated human–robot interaction scheme.
The contributions of this paper can be summarized as follows:
1. Conceptual contribution: We define and introduce the Human–Robot Multistation System (HRMS) as a novel framework for flexible human–robot collaboration. HRMS generalizes existing human–robot work cells towards multistation environments with shared robotic resources.
2. Integration and validation of scheduling concepts for human-driven work assignments: Building on our previous work on agent-based scheduling [26,27], we integrate the scheduling mechanism into the HRMS framework and validate it in a ZDM laboratory setting.
3. Interaction contribution: We introduce a human-initiated human–robot interaction scheme that integrates gesture recognition, helmet-based identity detection, and projection-based task guidance for the HRMS context, enabling workers to initiate, control, and coordinate tasks intuitively across multiple stations.
4. Laboratory proof-of-concept: We implement a real-world HRMS demonstrator with a gantry-style robot and two human workstations and conduct a pilot study assessing technical feasibility, task performance, and user experience.
The remainder of this paper first introduces the HRMS concept as a novel integration of prior research streams in Section 2. In Section 2.2, the task scheduler as a central enabler of HRMS is then presented. Section 3 describes the laboratory demonstrator as the first proof-of-concept implementation, followed by a user study assessing task performance and worker experience. Our findings are then described in detail in Section 4. Section 5 discusses the methods and results. Finally, our paper is concluded in Section 6.

2. Human–Robot Multistation System (HRMS)

We introduce the concept of the Human–Robot Multistation System (HRMS) as a new framework for flexible human–robot collaboration. The HRMS approach focuses on multistation environments, where human workers and robots share several workplaces in a dynamic and ergonomic way.
Figure 1 shows the conceptual overview of the HRMS. It is characterized by the following three pillars:
1. Multistation Architecture: One or more robots serve multiple workstations where human workers can also operate. Robots are not permanently bound to a single station, but act as shared resources that flexibly support human workers in manufacturing tasks (e.g., polishing).
2. Interaction and Guidance Modalities: Humans interact with the system via human-initiated, natural, and intuitive modalities, such as gesture recognition (‘Ready’ and ‘Done’), helmet-based identity detection, and projection-based guidance directly onto the workspace. These modalities reduce cognitive load and allow workers to remain in control of their tasks while seamlessly collaborating with robotic resources.
3. Agent-Based Scheduling: A decentralized scheduling mechanism coordinates task allocation between human and robotic agents. Unlike traditional push-based task allocation, HRMS follows the principle of human initiative: human workers actively signal ‘Ready’, and the scheduling mechanism responds by presenting appropriate tasks. This prevents cognitive overload, respects autonomy, and embeds ergonomic heuristics (e.g., minimal walking distances).
By integrating these elements, HRMS goes beyond conventional HRC setups in three ways:
  • Scaling from a single workstation to multistation environments.
  • Balancing human-initiated action with robotic assistance to avoid rigid top-down allocation.
  • Emphasizing ergonomic acceptance and intuitiveness, both essential for Industry 5.0.
In the remainder of this paper, we use the HRMS framework as the conceptual foundation.

2.1. Use Case

As a particular use case, the final inspection (and rework) of high-value ceramic components was chosen. In this use case, achieving zero defects involves human workers inspecting the product to detect and mark defects, and robots polishing the areas on the ceramic component that are marked as defective.
Figure 2 illustrates the quality assurance process plan. All tasks, except the robotic polishing step, are performed manually. A quality assurance job, corresponding to one ceramic component, executes that process plan. The job starts with loading the component into a workstation and the job ends with unloading the component from the workstation after all surface defects have been reworked. During the inspection task, the human worker decides whether the quality is acceptable. If no critical defects are found, the final human task is unload. If defects remain, the worker must mark identified defective regions on the component during task configure. The robot then polishes the marked areas to remove the surface defects. This is repeated until no defects remain.
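To make the control flow of the process plan concrete, the following minimal Python sketch reproduces the inspection–rework loop of Figure 2; all class and function names are illustrative and not taken from the demonstrator’s code base.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    defects: list = field(default_factory=list)   # surface defects still present

class Worker:
    def load(self, c): print("load component into workstation")
    def inspect(self, c): return bool(c.defects)  # True if defects were found
    def configure(self, c): return list(c.defects)  # mark defective regions
    def unload(self, c): print("unload component")

class Robot:
    def polish(self, c, regions):
        for r in regions:
            c.defects.remove(r)   # rework removes the marked defect

def run_qa_job(component, worker, robot):
    worker.load(component)
    # Repeat inspection and rework until the worker finds no more defects.
    while worker.inspect(component):
        regions = worker.configure(component)
        robot.polish(component, regions)
    worker.unload(component)

run_qa_job(Component(defects=["scratch"]), Worker(), Robot())
```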

2.2. Task Scheduling

An essential component of the HRMS concept is a scheduling mechanism for allocating tasks to workstations, human workers, and robots in a collaborative setting. In [26,27], such an approach is described for task scheduling in end-of-line quality assurance (QA) processes, with a focus on achieving ZDM through human–robot cooperation. End-of-line QA with the goal of achieving zero defects is characterized by the need to repeat inspection and rework multiple times. In particular, neither the number of these inspection–rework cycles nor their duration can be predicted in advance. This unpredictability poses a major challenge to task scheduling in ZDM. To address this specific problem, a decentralized, agent-based task scheduler [26,27] has been developed that dynamically allocates tasks to workstations, human workers, and robots based on real-time conditions, while respecting human initiative and ergonomic principles in the generated workload. It is used in the presented HRMS concept to allow the execution of human–robot collaborative ZDM tasks in a multistation setting.
To implement a ZDM inspection cell in an HRMS setting, the approach outlined in [26,27] is followed: digital avatars or agents are created that act as the digital counterparts of HRMS resources (workers/robots/workstations) and comprise the control element of the system. These resources are managed in their actions and their collaboration on ZDM task execution by a dedicated agent-based task scheduler, whose concept is outlined in Figure 3. As a core concept, job and operation pools are maintained from which the next activities are pulled on demand. A job pool holds quality assurance jobs, and an operation pool holds the next pending operations of these jobs. A job consists of a sequence of operations; when an operation is executed by a human or the robot, we refer to it as a task. When a workstation agent recognizes that the physical station in the HRMS is free (without assigned work), it pulls a job from the job pool, which leads to the job’s first operations being added to the operation pool according to a process plan that describes the operations necessary to complete the job (see Figure 2). The worker and robot agents then pull operations from the operation pool when the corresponding workers and robots are ready to perform a task.
When an operation on a workstation has been finished by a robot/worker, the next operation of the job being processed on the workstation is put by the workstation agent into the operation pool, according to the job’s process plan.
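The pull mechanics of the job and operation pools can be summarized in a few lines of Python. The sketch below uses simple in-memory queues and a linear process plan (the branching on inspection outcomes is omitted for brevity); all identifiers are illustrative rather than taken from the implementation.

```python
from collections import deque

job_pool = deque(["job-1", "job-2"])   # pending QA jobs, one per component
operation_pool = deque()               # next pending operation of each active job

# Simplified linear process plan; the real plan loops on the inspection outcome.
PROCESS_PLAN = ["load", "inspect", "configure", "polish", "unload"]

def workstation_pull(station):
    """A free workstation agent pulls a job; its first operation enters the pool."""
    if job_pool:
        job = job_pool.popleft()
        operation_pool.append((job, station, PROCESS_PLAN[0]))

def resource_pull():
    """A ready worker/robot agent pulls the next pending operation (FIFO here)."""
    return operation_pool.popleft() if operation_pool else None

def operation_finished(job, station, op):
    """The workstation agent enqueues the job's next operation per the plan."""
    idx = PROCESS_PLAN.index(op)
    if idx + 1 < len(PROCESS_PLAN):
        operation_pool.append((job, station, PROCESS_PLAN[idx + 1]))

workstation_pull("ws-1")
print(resource_pull())   # -> ('job-1', 'ws-1', 'load')
```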
Crucially, task assignment is not pushed onto workers by the system. Instead, workers themselves signal ‘Ready’ (via a gesture recognition system, see Section 3) to their corresponding avatars to receive the next operation of a ZDM job. This principle of human initiative (see [26]) forms part of an effort to avoid disruptive system-driven task allocation and grant workers greater autonomy over their workload. Activities in such an HRMS are paced according to worker preferences and ergonomic needs.
The scheduling system also explicitly integrates ergonomic considerations through heuristics that guide how job operations are pulled from the operation pool. A ‘shortest distance’ heuristic has been implemented, prioritizing the assignment of tasks that are geographically closer to the current location of the worker, thereby reducing unnecessary physical movement on the shop floor. This human-centered design aligns with the principles of Industry 5.0, which emphasize the well-being of human workers alongside productivity.
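A minimal version of such a ‘shortest distance’ pull, assuming planar station coordinates and Euclidean distance (the actual heuristic is described in [26,27]), could look as follows:

```python
from collections import deque

def pull_nearest_operation(worker_pos, operation_pool, station_pos):
    """Pull the pending operation whose station is closest to the worker."""
    def distance(op):
        _, station, _ = op
        sx, sy = station_pos[station]
        wx, wy = worker_pos
        return ((sx - wx) ** 2 + (sy - wy) ** 2) ** 0.5
    if not operation_pool:
        return None
    best = min(operation_pool, key=distance)
    operation_pool.remove(best)
    return best

ops = deque([("job-1", "ws-1", "inspect"), ("job-2", "ws-2", "inspect")])
positions = {"ws-1": (0.0, 4.0), "ws-2": (1.0, 1.0)}
print(pull_nearest_operation((0.0, 0.0), ops, positions))  # nearer ws-2 wins
```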
Figure 3 illustrates the architecture of the task scheduler, showing the interplay between the job pool, operation pool, workstation agents, worker/robot agents, and the data blackboard that manages communication with the shop floor.
The task scheduler was realized as real-time simulation in Python (version 3.10.9) using SimPy (version 4.1.1) [28], with Redis (version 5.2.1) [29] serving as a communication blackboard for exchanging task and status updates between the agents and the shop floor system.
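The following sketch illustrates how a SimPy real-time process can be coupled to a Redis blackboard in the spirit of this setup; the key names and polling scheme are assumptions for illustration, not the actual interface.

```python
import simpy.rt
import redis

r = redis.Redis(decode_responses=True)           # blackboard (assumes a local server)
env = simpy.rt.RealtimeEnvironment(factor=1.0)   # 1 simulated second = 1 wall-clock second

def worker_agent(env, worker_id):
    while True:
        # Poll the blackboard for a human-initiated 'Ready' signal ...
        if r.get(f"{worker_id}:ready") is not None:
            r.delete(f"{worker_id}:ready")         # consume the event
            r.set(f"{worker_id}:task", "inspect")  # ... and publish an assignment
        yield env.timeout(0.5)                     # poll twice per second

env.process(worker_agent(env, "worker-1"))
env.run(until=60)   # runs for 60 (real-time) seconds in this sketch
```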
In our HRMS environment, human workers can interact with the task scheduling system via cameras and software that recognizes characteristics such as helmet wearing, helmet color, and human gestures. This allows human workers who want to participate in the ZDM process to be recognized, with specific gestures triggering the ‘Ready’ and ‘Done’ signals, indicating the worker’s willingness to take on a new task or task completion. The assignment of appropriate tasks to human workers is indicated using Spatial Augmented Reality (SAR) by projecting information directly into the work area (e.g., displaying text such as ‘inspection’ onto the component). The projector uses the helmet color to show the next task to the workers. An illustrative overview of the lab setting is given in Figure 4.

2.3. Testing by Means of a Digital Twin

Prior to the lab setup, we developed a Webots-based [30] digital twin that uses the same Redis blackboard (Ready/Done signals, job/operation pools, status updates) as the physical setup. The simulation serves as a software-in-the-loop substitute for hardware, so no scheduler logic or Redis interfaces had to be changed.
This digital twin developed in [27] allowed us to explore HRMS configurations also beyond physical lab constraints (up to six workstations, three workers, and two robots), calculate appropriate key performance indicators (KPIs), and verify the robustness of agent-oriented scheduling under controlled repeatable conditions. A sample view of the running simulation and the scheduler is depicted in Figure 5.
In the simulation, humans, robots, and workstations are modeled as autonomous agents that interact with the scheduler via Ready and Done signals, thus reproducing the principle of human initiative. Worker agents pull operations from the operation pool using two policies: a simple First-In-First-Out (FIFO) strategy and a shortest-path algorithm (SPA) heuristic that prioritizes nearby operations to reduce walking distances. This setup enables systematic comparison of ergonomic and throughput effects across many system sizes. Since the implementation of polishing robots in concrete HRMS can vary, the first experiments modeled them as mobile units capable of serving any workstation, ensuring flexibility in the system design. The computational experiments showed that the SPA heuristic reduced worker walking distances by up to 70% compared to FIFO, particularly in larger multistation settings, without compromising productivity. In the largest configuration tested (three workers, two robots, six workstations), productivity improved by 175% compared to the configuration with one worker, one robot, and two workstations. A detailed description of the model and the complete statistical evaluation of 36 configurations are reported in [27].
For the laboratory demonstration, we adopted a ceiling-mounted gantry-style robot assigned to fixed workstations. Accordingly, both the scheduler and the digital twin were updated to reflect this setup. The enhanced simulation model, shown in Figure 6, now also introduces the concept of different helmet colors and the use of color codes to emulate the planned worker guidance. A video illustrating the simulation is provided in [32].
That said, the current simulation still lacks several important technical aspects. For example, it cannot recognize gestures, since the digital twin does not yet include internal cameras or the necessary software modules. The main advantages of the current simulation lie in its ability to support preliminary tests of the Redis interface and the associated heuristics. This is particularly valuable when interfaces are modified, and it also supports customers in what-if sizing, i.e., determining the optimal number of workers, robots, and workstations, by allowing arbitrary task execution times to be set and tested in new scenarios.

3. Materials and Methods

For the laboratory HRMS demonstrator, a setup with two workstations and one robot mounted on an overhead gantry was built, allowing inspection and rework activities to be performed by humans in collaboration with the robot. Our HRMS laboratory setup is depicted in Figure 7a,b. The cell provides space for up to six workstations that can be flexibly arranged; for the current study, two workstations and one robot were used.
In the real-world implementation, the following hardware components are integrated:
  • Ceiling-mounted robotic system: A UR10e collaborative robotic arm (Universal Robots A/S, Odense, Denmark), mounted at a height of 1.9 m above the floor on a 4 m linear axis (OH-au2mate, Ponitz, Germany), forms a hybrid system with a simplified gantry-style configuration that enables access to all workstations (driving speed of the robot: ≈0.5 m/s).
  • Projection system: LCD laser projectors PT-MZ880 and PT-VMZ51S (Panasonic Corporation, Osaka, Japan) provide visualization and projection of task-relevant information for the workstations, with each projector capable of serving two workstations. For the pickup/delivery station, a cost-effective Epson EF-11 (Seiko Epson Corporation, Suwa, Japan) is used, providing sufficient brightness for this use case.
  • Wide-angle cameras: Multiple Genie Nano cameras (Teledyne DALSA Inc., Waterloo, ON, Canada) with a 2/3″ sensor and a short focal length lens are used for scene understanding.
For marking defect positions, six OptiTrack PrimeX 13W cameras (NaturalPoint, Inc., Corvallis, OR, USA) were additionally integrated during the project to enable marker-based tracking of a handheld 3D pen (tracked 6-DoF stylus) [33]. These hardware components are integrated with their corresponding systems in the overall architecture, which are interconnected via Redis, enabling perception, projection, and robotic control. Figure 8 shows an overview of the system architecture and the Redis topic-based communication, indicating which systems publish and subscribe to specific keys, excluding the OptiTrack defect position marking. The systems are configured to delete the entries once the corresponding events have been processed.
The resource/skill agent layer acts as the execution bridge between the task scheduler and the physical systems. The layer runs software agents for each worker and robot; these are responsible for controlling the UR10e on the linear axis, which is equipped with a Mirka AIROS 650CV robotic sanding unit (150 mm pad diameter, 5 mm orbit) that is also suitable for polishing applications (Mirka Ltd., Jeppo, Finland). Moreover, the resource/skill agent layer generates appropriate task guidance information through the projection system described in [34]. For the vision system, gesture recognition using wide-angle cameras was implemented as an interaction modality. Humans wearing helmets are identified, and the corresponding helmet color is used in the projections on the workstation to which they are assigned for tasks. Both capabilities benefit from previous work [35,36]. Tasks are assigned only to helmet-wearing humans (Figure 9a) who actively signal ‘Ready’ by raising a hand, reflecting the human-initiative approach. Three types of gestures are supported (Figure 9b–d): hand raising to signal ‘Ready’ for the next task, swipe gestures to confirm task completion (‘Done’), and contextual swipes to answer yes/no questions, e.g., whether an inspection was acceptable. The task scheduler, described in detail in Section 2.2, coordinates the consistent execution of the process plans among all participants.
Following initial tests, two issues were observed: workers could not reliably tell whether their gestures were correctly interpreted, and they sometimes forgot to raise a hand to request a new task after completing the previous one. To address this, an audio feedback module was added to the projection system. The module subscribes to the same Redis topics as the visual cues and emits brief text-to-speech messages or tones on events. Specifically, the system now plays in- and out-tones for ‘Ready’ and ‘Done’ events to confirm that the vision system has recognized the workers’ gestures and reminds workers to raise a hand to request the next task.
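A minimal sketch of such an audio feedback module is shown below; it polls the blackboard and voices events with the off-the-shelf pyttsx3 text-to-speech engine. The key names, messages, and polling interval are illustrative assumptions, not the module’s actual configuration.

```python
import time
import redis
import pyttsx3   # offline text-to-speech engine (one off-the-shelf option)

r = redis.Redis(decode_responses=True)
tts = pyttsx3.init()

EVENT_MESSAGES = {                      # illustrative keys, not the real topics
    "event:ready": "Ready signal recognized.",
    "event:done": "Task completion recognized.",
}

while True:
    for key, message in EVENT_MESSAGES.items():
        if r.get(key) is not None:      # same blackboard entries as the visual cues
            r.delete(key)               # consume the event once it has been voiced
            tts.say(message)
            tts.runAndWait()
    time.sleep(0.2)
```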

3.1. Task and Interaction Specification for HRMS

Unlike conventional single-station HRC setups, the HRMS is characterized by multiple stations, shared robotic resources, human-initiative task allocation, and perception-driven events. This requires a precise specification of resources, preconditions, projections, and tasks to ensure consistent coordination across the perception, projection, scheduling, and robotic subsystems. To implement the process plan in Figure 2, the individual steps were detailed with respect to the required resources, dependencies, projections, and tasks assigned to humans and robots. Table 1 summarizes these specifications. For each step, it lists (i) the resources (Human, Robot, Workstation, Feeder/Exit), (ii) the preconditions (e.g., human wearing helmet in color, human signaled ‘Ready’ (raised hand) for task, robot ready, flag set if the previous inspection was signaled as Ok or Nok), (iii) the projected cues (project color on feeder, workstation or exit, contextual swipe detection), and (iv) the task actions. The preconditions reflect the principle of human-initiative interaction (e.g., human signaled ‘Ready’), while projections provide context-sensitive SAR support at the stations.
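Such a specification lends itself to a simple machine-readable encoding. The following dictionary sketches how one row of Table 1 (the load step) might be represented; the field names are chosen for illustration and are not the demonstrator’s actual schema.

```python
# Illustrative encoding of one process step from Table 1 (field names assumed).
LOAD_STEP = {
    "step": "load",
    "resources": ["human", "workstation", "feeder"],
    "preconditions": [
        "human wearing helmet in color",
        "human signaled 'Ready' (raised hand)",
    ],
    "projections": ["project helmet color on feeder and workstation"],
    "task": "transport basin from feeder to workstation",
}
```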
An illustrative example of the process step loading is shown in Figure 10a–d. The helmet-wearing worker first signals ‘Ready’ (Figure 10a) and is then guided to load a basin from a feeder (Figure 10b) into a workstation (Figure 10c). During insertion of the basin into the workstation, the text load is displayed at the basin (Figure 10d). After successful insertion, the end of the load task is confirmed by a swipe gesture (Figure 10e). Additionally, Figure 10f shows the Ok/Nok display for contextual swipe detection, which asks for the result of the inspection task that always follows the load task.
For defect marking, we build on prior work: 3D positions can be specified either with a 3D pen [37,38] (Figure 11a–c) or by freehand finger drawing using SketchGuide [39,40] (Figure 11d,e). For our lab study, we used the previously developed 3D-pen-based interface because it is integrated with the existing OptiTrack setup and requires no additional sensors, minimizing setup time. After a worker defines and confirms regions on the workpiece, the projection mapping visualizes the suggested polishing regions on the workpiece. Finally, after the worker has confirmed all work regions, the scheduler assigns the polishing task to the robot. In the next step of the process, the confirmed regions are translated into robot paths, and the robot executes the polishing accordingly (see Figure 11f), following the projection-based approach described in [37]. The inspection–setup–polish cycle repeats until no defects remain.

3.2. Study Design

We conducted an observational within-subjects pilot study of a human–robot collaboration (HRC) workflow with five target users (n = 5; PROFACTOR employees) in the laboratory setup described above. The study was designed as a formative pilot in a safety-critical setting, intended to establish technical feasibility, interaction flow, and failure modes under controlled laboratory conditions. For safety and rapid iteration, we recruited internal volunteers to enable a low-risk first validation of the system. The age of the participants (4 males, 1 female) ranged from 22 to 42; only one participant had no robotics experience. Recruitment followed an internal call; participation was not compensated. Inclusion criteria: normal or corrected-to-normal vision, ability to stand and manipulate small parts. Exclusion criterion: direct prior participation in designing the HRMS prototype evaluated. The task all participants had to perform was to execute two jobs (Job A, Job B) together with the polishing robot with the help of the HRMS, which allowed us to examine Job A vs. Job B for learning effects. For the task scheduler, we used only the SPA heuristic. All sessions began with a briefing/informed consent and a safety induction. The participants then received scripted, standardized instructions on the study procedure (task sequence, timing, permitted interactions, pause/stop rules) and completed a practice trial. An emergency stop was within reach, and predefined stop criteria were applied.
For the two jobs, participants had to first fetch a basin on a tray from a feeder and then transport it to an appropriate workstation at the HRMS. Then they had to work together with the polishing robot, as defined by the process plan given in Figure 2, until no more defects were found. Finally, the basin had to be transported to the exit area. After both jobs were completed, the total session ended. Figure 12 shows the placement of workstations, feeders, and jobs in the laboratory. A video illustrating the task setup of the pilot study is provided in [41].
For defects, we chose to create a single issue (defect) with a removable marker on each basin. Participants were told to mark these issues with the 3D pen so that the robot could polish them. This guaranteed that exactly one rework cycle occurred, as the robot polished the marked (removable) issue away in the first cycle. For each session, performance-related metrics were recorded by analyzing the videos taken. Job throughput time was measured from the start signal (first confirmed user event) to each job completion. The following errors were recorded: (E1) safety-critical/procedural errors (for example, the tray was not secured to the base with the safety clips before the robot started polishing, or the basin was fetched/moved from/to the wrong station); (E2) the need for intervention by the study management (an instruction had to be given so that the process could continue); (E3) system misbehavior (for example, button not clickable, correct gesture not recognized); and (E4) technical interruptions/stop events (e.g., emergency stop). The analyses are descriptive. For continuous metrics, we report the mean ± SD and 95% CI per job (A/B) and overall. For count measures, we report totals (and percentages). Inferential tests are not reported.
After the session, the participants completed the System Usability Scale (SUS) [42], the Godspeed Questionnaire Series (GQS) [43] to capture robot-related perceptions (Anthropomorphism, Animacy, Likeability, Perceived Intelligence, Perceived Safety), and the RAW NASA Task Load Index (NASA-TLX) to estimate human workload [44,45]. The German translations followed established sources: SUS [46], GQS [47], and NASA-TLX [48]. To obtain feasible improvement ideas, we added brief post-task open-ended prompts (for example, ‘The first change you would make’, ‘Lack of feedback’, ‘Moments with low perceived safety’).

3.3. Statistical Analysis

Given the pilot sample (n = 5), analyses were descriptive. Continuous outcomes are summarized as mean ± SD with two-sided 95% CIs computed via the bias-corrected and accelerated (BCa) bootstrap. Unless stated otherwise, we used 10,000 resamples with participant-level resampling. For job-specific metrics (Job A, Job B), we report mean ± SD and 95% CIs based on participant-level resampling. For the overall time, we first compute a within-subject average for each participant, m_i = (A_i + B_i)/2, and then obtain its BCa 95% CI by applying the same participant-level bootstrap to the set {m_i}.
Questionnaire scores are treated as continuous summaries; scale ranges are SUS (0–100), GQS (1–5; higher is better), and RAW NASA-TLX (0–100; higher indicates greater workload).
Implementation details: analyses were performed in R (v4.0.3, 2020-10-10) using the boot package (v1.3-25); random seed 1234. We did not perform hypothesis tests and do not draw confirmatory inferences. Adjective mapping for SUS [49] and RAW NASA–TLX [50] was used only as orientation. This pilot serves to explore feasibility and signaling.
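For readers who prefer Python, an equivalent participant-level BCa bootstrap can be computed with scipy.stats.bootstrap (SciPy ≥ 1.7); note that the study’s analysis itself was performed in R with the boot package, and the numbers below are placeholder values, not study data.

```python
import numpy as np
from scipy import stats

# Placeholder paired throughput times in seconds (NOT the study data).
job_a = np.array([310.0, 240.0, 255.0, 220.0, 267.0])
job_b = np.array([250.0, 190.0, 210.0, 180.0, 203.0])

# Within-subject average per participant: m_i = (A_i + B_i) / 2
m = (job_a + job_b) / 2

# Participant-level BCa bootstrap of the mean with 10,000 resamples.
res = stats.bootstrap(
    (m,), np.mean,
    n_resamples=10_000, confidence_level=0.95,
    method="BCa", random_state=1234,
)
print(f"mean = {m.mean():.1f} s, 95% BCa CI = "
      f"[{res.confidence_interval.low:.1f}, {res.confidence_interval.high:.1f}]")
```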

3.4. Protocol Notes and Rationale

We (a) normalized Godspeed Safety anchors (safe on the right) and added anchor instructions to avoid reverse-coding; (b) moved NASA-TLX Performance to the end and labeled its anchors; and (c) avoided concurrent/retrospective think-aloud in favor of brief post-task open-ended prompts to minimize interference in a safety-critical setting. Scale ranges are as given in Section 3.3. The overall GQS is computed as the simple unweighted mean of the subscales. Counts (e.g., error classes E1–E4) are reported as raw frequencies (and percentages) without CIs.

4. Results

In view of the pilot design (n = 5), outcomes for technical performance and usability were summarized descriptively (mean ± SD; 95% BCa CIs); see Section 3.3 for statistical details. Questionnaire outcomes are reported in Table 2 and performance metrics in Table 3. Details are as follows:

4.1. System Usability (SUS)

Overall usability was SUS M = 70.5, SD = 22; 95% CI = [46, 83] (Table 2). CIs are computed as specified in Section 3.3. This is typically interpreted as ‘OK’ bordering ‘Good’ [49]. Interpretive bands are used for orientation only.

4.2. User Perceptions (Godspeed, 1–5)

Overall GQS was 3.2 ± 0.8 with 95% CI = [2.9, 3.6] (Table 2). Mean (SD) subscale scores were Anthropomorphism 2.4 (±0.7), Animacy 3.1 (±0.6), Likeability 3.7 (±0.6), Perceived Intelligence 3.7 (±0.7), and Perceived Safety 3.5 (±0.9); the corresponding 95% CIs were [1.6, 2.8], [2.5, 3.5], [3.3, 4.2], [3.2, 4.4], and [2.5, 4.0]. CIs were computed as specified in Section 3.3.

4.3. Workload (NASA-TLX, RAW)

The RAW NASA-TLX total averaged M = 37, SD = 16.3; 95% CI = [25.7, 51.7] (Table 2). CIs were computed as specified in Section 3.3. This is typically interpreted as moderate (‘Somewhat high’) workload [50]. Totals used the standard RAW-TLX computation (unweighted mean of the six dimensions).
Dimension means (0–100; lower = less load) were Mental 38, Physical 29, Temporal 46, Performance 45 (higher = worse perceived performance), Effort 30, and Frustration 34. Temporal (46) and Performance (45) were highest; Physical (29) was lowest.

4.4. Performance Metrics

Mean throughput time was 232.5 ± 46.5 s (95% CI = [205.7, 268.2]) overall, with Job A at 258.4 ± 46.8 s (95% CI = [225.0, 297.6]) and Job B at 206.6 ± 31.7 s (95% CI = [182.6, 234.6]). CIs were computed as specified in Section 3.3. To compare Job A and Job B throughput times (paired data, n = 5), we used the exact Wilcoxon signed-rank test and report p-values for both the directed and two-sided alternatives (V = 15; one-sided (A > B) p = 0.031; two-sided p = 0.063). The Hodges–Lehmann (HL) estimator indicated that Job A exceeded Job B by a median of 53.0 s (95% BCa CI = [35.5, 72.0]), consistent with the Wilcoxon signed-rank test, although with considerable uncertainty due to n = 5. However, because Job A experienced systematic disadvantages due to scheduling interactions with Job B (a longer block when switching from A to B; for details, see Section 5), processing times cannot be interpreted as evidence of a learning effect.
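For transparency, this paired comparison can be reproduced in Python as sketched below: an exact Wilcoxon signed-rank test via scipy.stats.wilcoxon (method="exact" requires SciPy ≥ 1.8) and the Hodges–Lehmann estimate as the median of Walsh averages of the paired differences. The input values are placeholders, not the study data.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy import stats

# Placeholder paired throughput times in seconds (NOT the study data).
job_a = np.array([310.0, 240.0, 255.0, 220.0, 267.0])
job_b = np.array([250.0, 190.0, 210.0, 180.0, 203.0])
d = job_a - job_b                       # paired differences (A - B)

# Exact Wilcoxon signed-rank test, directed and two-sided alternatives.
one_sided = stats.wilcoxon(d, alternative="greater", method="exact")
two_sided = stats.wilcoxon(d, alternative="two-sided", method="exact")
print(f"V = {one_sided.statistic}, one-sided p = {one_sided.pvalue:.3f}, "
      f"two-sided p = {two_sided.pvalue:.3f}")

# Hodges-Lehmann estimate for paired data: median of Walsh averages of d.
walsh = [(d[i] + d[j]) / 2
         for i, j in combinations_with_replacement(range(len(d)), 2)]
print(f"HL estimate = {np.median(walsh):.1f} s")
```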

4.5. Errors

We instructed all participants to lock the wheels after placing the basin in the workstation. Surprisingly, this was forgotten in only one out of ten cases (E1). Most errors concerned the handling of swipe gestures. In total, five situations occurred in which participants forgot to carry out a final swipe gesture (E2), and in one case a user asked whether a swipe gesture was necessary (E2). Only one participant forgot to take the 3D pen (E2), and one participant forgot to raise a hand and could not proceed (E2), whereas several participants raised their hands once they noticed that no new task had appeared for a while. Among the (E3) errors, the most disappointing system behavior was a misconfiguration of the green button on the basin at workstation 1: the location in space where the gesture was expected appeared to be about 20 cm to the right of where the green projection suggested. Pressing the red button on workstation 1 was sometimes not recognized by the system. Most users used their whole right or left hand for the button gesture; only two used a single finger, which surprisingly also worked. At workstation 2, a situation occurred in which a participant had to repeat the swipe gesture seven times until the system recognized it correctly. A technical restart (E4) was necessary when the system did not assign new tasks due to blocked gesture recognition.
In total, E2 errors (procedural/intervention) occurred seven times for Job A and once for Job B, while E3 errors (system misbehavior) were counted three times for Job A and three times for Job B. No Job A run and only one Job B run were completed without errors; in total, only 10% of the jobs were error-free.
Across five E2 paired observations, the second condition produced fewer errors in 3/5 pairs, with two ties and none worse. To compare E2 for Job A and Job B (paired data, n = 5), we used the exact Wilcoxon signed-rank test with ties excluded (effective n = 3). We reported p-values for both the directed and two-sided alternatives: (V = 6; one-sided (A > B) p = 0.125; two-sided p = 0.250). The HL estimate (ties excluded) indicated that Job A exceeded Job B by a median of two errors (95% BCa CI = [1.0, 2.75]), which is consistent with the Wilcoxon test, but represents only weak evidence given the very small effective sample size (n = 3).

4.6. Open-Ended Post-Task Feedback + Observed Issues from the Videos

Thematic coding (descriptive counts; no inferential statistics) yielded the three most recurrent themes: (i) improve the audio signal’s sound level (3/5 mentions), (ii) improve the display output (3/5), and (iii) reduce unnecessary gestures (for example, after configuring the task, users prefer to mark failures immediately without the additional ‘inspection’ step) (3/5). The videos showed that participants sometimes started to perform tasks before the display on the basin showed the corresponding step text. Three of the five users used our internal central debugging monitor to find out which gestures they were expected to perform; this would not work in multi-user settings. Two users appeared to become nervous when they heard the robot moving, and the sound of the moving robot masked the in- and out-audio signals.

5. Discussion

The HRMS implemented in the laboratory worked well for the most part. For example, it consistently recognized gestures to determine whether people had arrived at their workstations or not. The task scheduler worked well in the laboratory; the SPA algorithm always scheduled the tasks closest to the participants. In tests, the robot arm sometimes obscured the cameras, making gesture recognition more difficult. One finding is that, where possible, at least two cameras should always be used to cover each relevant area in order to achieve good detection.
Our objectives were technical feasibility, usable task guidance, and human-initiative scheduling in a multistation ZDM cell. The pilot confirms feasibility under lab conditions but also reveals recurrent task-level failures (notably E2/E3) that currently limit robust operation and must be addressed before field deployment.
Despite the very narrow testing conditions in our laboratory, all test subjects were able to recognize where their next task was located because of the flashing LED light on the projectors, and there was not a single wrong transport. Nevertheless, although the light from the inexpensive Epson projector used in the tests at the feeder/storage locations was sufficient, a more powerful projector should be used in the future.
The SUS mean of 70.5, which can be rated as ‘OK’ near the ‘Good’ range, indicates basic acceptability despite remaining systemic errors, with room to improve confirmation flows and cue clarity, consistent with participant comments.
As part of a more detailed comparison of the throughput times for Job A and Job B, a subsequent review of the session videos revealed that Job A was systematically disadvantaged by the sequence of tasks. After switching from Job A to Job B, while the robot executed the polish task for Job A, the SPA algorithm kept the operator at Job B, resulting in a longer blockage of Job A. After later returning to Job A, while the robot executed the polish task for Job B, only short finishing tasks were required for Job A, so Job B was not comparably blocked. This asymmetry between jobs, caused by their sequence and interdependence, distorts the time measurements. Accordingly, the observed job throughput time differences between Job A and Job B (including the Wilcoxon tests and the HL estimate) should not be interpreted as a learning or acceleration effect; they reflect planning/interaction effects rather than changes in participants’ performance.
The fact that only one out of ten jobs was completed entirely error-free was due to the still excessive number of system errors. Addressing these specific failure modes and confirming usability and throughput in larger and more diverse samples therefore remain essential next steps. The contextual swipe generally worked well, regardless of whether users pressed the button with their entire hand or with just one finger, except for the incorrectly configured green buttons on workstation 1. Contextual swipe recognition occasionally underperformed because changing laboratory conditions (due to other activities) required repeated, time-consuming recalibration of the vision system; unfortunately, this led to challenges on the day of the tests. Most E3 problems should become avoidable once these setup conditions are met. The swipe gesture, on the other hand, needs to be significantly revised: in many cases it did not work as desired, or users forgot to perform it altogether, which accounts for more than half of the E2 errors. The problem is that, unlike the contextual swipe, the projector does not display information indicating that a swipe gesture is expected. Replacing it with a simple contextual button could lead to better results in the medium term.
Currently, error marking is not sufficiently integrated. For example, there is no signal to indicate when all errors have been marked. Instead, the system listens for a final swipe gesture while marking can be performed with the 3D pen. A major change in the architecture may become necessary to remedy this situation, also because users would have preferred to perform the inspection and configure tasks in a single step.
The E2 data (procedural/intervention) consistently favor fewer errors in the second job (no pair worsened), but inference is weak due to the very small effective sample size (n = 3) and the many tie/floor effects. The exact tests and the BCa interval point in the same direction, yet they are not significant, so small or null effects cannot be ruled out. Practically, the trend is encouraging but preliminary.
On the NASA–TLX, Physical Demand received the lowest score, likely because the trays simplify basin handling. Temporal Demand was comparatively higher, indicating some perceived time pressure. The comparably high Performance score suggests that some participants were not fully satisfied with their own performance. This perception may be partly attributable to system-related issues; for example, one participant had to repeat the swipe gesture seven times. Although this was not the fault of the participant but a technical limitation, such incidents may have contributed to lower perceived performance in this pilot. The moderate RAW NASA-TLX scores point to cognitive/temporal load from confirmations and ambiguous cues. Reducing confirmation steps and increasing the frequency of cues are expected to reduce perceived workload. Scheduling tasks based on proximity and providing clear signaling in the workplace can also reduce unnecessary walking and shifts in attention.
In current tests, only a single low-cost central speaker was used to generate the acoustic in- and out-signals. As a result, user feedback criticized the low volume. Future use of industrial ceiling speakers, for example, could improve this. Some of these devices also have microphones, which would enable voice input.
Regarding the Godspeed questionnaire, Perceived Safety was the main focus. The Perceived Safety score averaged 3.5 (SD = 0.9), suggesting generally favorable perceptions; interpretation remains descriptive given the pilot design. The robot’s driving noise may have contributed to the signs of nervousness observed in the video recordings, which raises the question of whether the volume can be reduced. Accordingly, for safety reasons and as part of standard commissioning, the maximum permissible speed should be confirmed; if necessary, the driving speed should be reduced, which will also reduce the noise level.
The small sample size (n = 5) reflects the exploratory objective, namely, to identify system and interaction problems at an early stage, and is not intended to support general conclusions. The laboratory environment at a single location may cause location-specific effects (calibration, special features of the facility).
Related work shows that effective HRC requires specially tailored workplace adjustments [7]. Our HRMS meets these requirements through projection-based guidance, camera calibration procedures, and agent-based schedules that adjust access and workflow. The scheduler and the implemented task allocation are designed to take into account changes in HRMS actors and tasks. For our use case, for example, the basins could be delivered by AGVs that dock under the trays and position them at the workstations, which would also reduce E1 errors (tray not secured before polishing). Similarly, human inspection and error marking could be preceded by robotic AI-based pre-inspection and defect detection, followed by targeted human checks. More broadly, the HRMS concept may extend beyond ZDM to production processes requiring HRC, e.g., assembly processes.
For future industrial applications of the HRMS framework, several issues require consideration. In real-world manufacturing settings, carefully positioned cameras and sensor fusion across the multi-camera system will be essential for the proper functioning of the camera-based gesture recognition system. In multi-robot settings, it might be necessary to introduce scheduling constraints specifying the workstations that are accessible to a robot. Depending on the installed multi-robot system and HRMS topology, that constraint can be dynamic if the accessibility of workstations is influenced by other robots. In larger HRMS topologies, an extended projection system will be required to ensure intuitive worker guidance, e.g., adding projected arrows on the shop floor.

Future Work

In the next phase, we aim to eliminate as much of the system’s detected misbehavior as possible and to extend the interaction beyond gestures with multi-modal inputs and safety-rated confirmation. Assessment of either safety-rated tactile skin (AIRSKIN; Blue Danube Robotics GmbH, Vienna, Austria) or on-robot proximity sensing (Roundpeg; Aschheim/Munich, Germany) is planned as an added layer for speed-and-separation monitoring, with effects on cycle time, safety margins, and user acceptance to be evaluated, while verifying compliance with ISO 10218 [51,52] and ISO 13849 [53] within the HRMS.
Currently, the laboratory setting is limited to recognizing only yellow helmets and is thus not capable of multi-human operation. Updating the system accordingly and testing scalability with multiple humans is on the near-term roadmap. Expected challenges include reliably recognizing different helmet colors under a wide variety of lighting conditions. The use of multiple speakers at workstations could lead to audio cues, such as the in- and out-tones played for ‘Ready’ and ‘Done’ events, being misattributed by workers at other stations. To enable human-initiative scheduling, workers must reliably know whether the system has already scheduled them. Whether alternatives to loudspeaker output are necessary, or whether projected instructions in different colors suffice, can only be determined in concrete multi-human tests.
Simulation KPIs (e.g., jobs per hour) measured in our previous simulation runs are not comparable to the lab setting due to the simplified lab setup and different operation times. We plan to extract operation-time distributions from the videos and use them to calibrate the simulation, allowing predictions for HRMS configurations with multiple humans and robots and more than two workstations.
After implementing the necessary system improvements identified in this pilot, we plan to conduct a larger study with an externally recruited, more diverse participant sample to increase statistical significance and reduce confidence intervals. The primary outcomes will be SUS and job throughput time. Secondary outcomes will include error rates (E1–E4), NASA-TLX, and Godspeed Safety.

6. Conclusions

We asked, ‘How to implement a human-centric approach to task scheduling in an end-of-line quality assurance situation with human–robot collaboration?’ We answered this by specifying and building the Human–Robot Multistation System framework that integrates (i) multistation resource sharing, (ii) human-initiated task allocation via gestures and identity, (iii) projection-based guidance, and (iv) agent-based scheduling, and by validating technical feasibility in a ZDM lab demonstrator.
A gantry-style laboratory demonstrator for the final inspection process in a ZDM use case was realized and evaluated in an exploratory pilot (n = 5) focused on ergonomics and other human-factor outcomes. SUS, GQS, and RAW NASA–TLX and brief open-ended comments were collected to capture usability, perceptions, workload, and improvement ideas. The study confirms the technical feasibility of HRMS under laboratory conditions while still exposing recurrent task-level failures. Addressing these specific failure modes and confirming usability and throughput in larger, more diverse samples remain essential next steps.
Quantified outcomes were SUS 70.5 ± 22 (95% CI = [46, 83]); overall GQS 3.2 ± 0.8 (95% CI = [2.9, 3.6]); RAW NASA-TLX 37 ± 16.3 (95% CI = [25.7, 51.7]); mean job throughput time 232.5 ± 46.5 s (95% CI = [205.7, 268.2]); and errors in 9/10 jobs (E1 = 1, E2 = 8, E3 = 6, E4 = 1; 1/10 jobs error-free), which should be addressed in future work. In simulation, a proximity-aware shortest-path heuristic reduced walking distance by up to 70% versus FIFO without reducing productivity.
Usability was acceptable, with a SUS mean of 70.5 (interpreted as ‘OK’ nearing ‘Good’), and perceived workload was moderate. Observational analyses and participant feedback identified concrete opportunities to streamline confirmations, strengthen visual and audio feedback, and clarify projected cues at the workstations. Procedure- and recognition-related errors revealed where robustness, calibration routines, and interaction details should be refined to support better task execution.
Specific recommendations for industrial applications are to use carefully positioned and routinely calibrated multi-camera vision, provide redundant visual/audio feedback with reliable input modalities (e.g., buttons for critical confirmations), employ proximity-aware scheduling to reduce walking, and ensure adequate projector coverage at stations.
The work is preliminary by design. The single-site laboratory setting, small sample size, and the specific gantry configuration limit generality to other HRMS topologies and field conditions. All inferences are descriptive.

Author Contributions

Conceptualization, A.H., M.J.K., M.I. and G.E.; methodology, A.H., M.J.K., M.I. and G.E.; software, A.H., F.S., S.F., F.W. and H.Z.; validation, A.H., M.J.K. and H.Z.; formal analysis, S.F., A.H. and H.Z.; investigation, A.H., S.F. and H.Z.; resources, A.H., M.I. and M.J.K.; data curation, S.F. and H.Z.; writing—original draft preparation, H.Z., A.H., M.J.K. and G.E.; writing—review and editing, H.Z., M.J.K., A.H., S.F. and A.P.; visualization, H.Z., A.H. and S.F.; supervision, M.I. and A.P.; project administration, M.J.K. and A.H.; funding acquisition, A.P. and M.I. All authors have read and agreed to the published version of the manuscript.

Funding

This research was financed by research subsidies granted by the government of Upper Austria (grant number Wi-2021-299762) as well as through financial support provided by the Austrian Institute of Technology (AIT).

Institutional Review Board Statement

Ethical review and approval were waived for this study as only internal employees voluntarily participated in the first technical validation experiments that did not involve personal or health-related data collection. Procedures adhered to institutional guidelines.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study, including consent for pseudonymous handling of study records and secure storage of raw files.

Data Availability Statement

The data presented in this study are available on request from the corresponding author due to privacy and institutional policy constraints. Raw study files are stored on secure institutional servers under role-based access control and are retained only for the required period before secure deletion. De-identified aggregate results are reported, and non-identifying metadata or analysis specifications may be shared with qualified researchers on reasonable request, subject to institutional review and a data-processing agreement.

Acknowledgments

The authors would like to thank Lauton Elias for preparing the CAD drawing of the HRMS laboratory setup and Sezgin Kutlu for providing helpful hints for organizing the pilot study. The authors used ChatGPT (OpenAI, San Francisco, CA, USA; Version GPT-5) to generate an illustrative figure of the planned HRMS laboratory setup (Figure 4) based on a text prompt. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

Authors Helmut Zörrer, Alexander Hämmerle, Gerhard Ebenhofer, Florian Steiner, Markus Ikeda, Stefan Fixl, Fabian Widmoser and Andreas Pichler were employed by the company PROFACTOR GmbH. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
3D         Three Dimensional
AGV        Automated Guided Vehicle
CAD        Computer-Aided Design
FIFO       First In First Out
GQS        Godspeed Questionnaire Series
HL         Hodges–Lehmann estimator
HRC        Human–Robot Collaboration
HRI        Human–Robot Interaction
HRMS       Human–Robot Multistation System
JSSP       Job Shop Scheduling Problem
KPI        Key Performance Indicator
LCD        Liquid Crystal Display
MAS        Multiagent System
MPJPE      Mean Per Joint Position Error
NASA-TLX   National Aeronautics and Space Administration Task Load Index
QA         Quality Assurance
SAR        Spatial Augmented Reality
SPA        Shortest-Path Algorithm
SUS        System Usability Scale
ZDM        Zero-Defect Manufacturing

References

1. Demir, K.A.; Döven, G.; Sezen, B. Industry 5.0 and Human-Robot Co-working. Procedia Comput. Sci. 2019, 158, 688–695.
2. Othman, U.; Yang, E. Human–Robot Collaborations in Smart Manufacturing Environments: Review and Outlook. Sensors 2023, 23, 5663.
3. Wan, P.K.; Leirmo, T.L. Human-centric zero-defect manufacturing: State-of-the-art review, perspectives, and challenges. Comput. Ind. 2023, 144, 103792.
4. Dhanda, M.; Rogers, B.A.; Hall, S.; Dekoninck, E.; Dhokia, V. Reviewing human-robot collaboration in manufacturing: Opportunities and challenges in the context of industry 5.0. Robot. Comput.-Integr. Manuf. 2025, 93, 102937.
5. Merlo, E.; Lamon, E.; Fusaro, F.; Lorenzini, M.; Carfì, A.; Mastrogiovanni, F.; Ajoudani, A. An ergonomic role allocation framework for dynamic human–robot collaborative tasks. J. Manuf. Syst. 2023, 67, 111–121.
6. Simone, V.D.; Pasquale, V.D.; Giubileo, V.; Miranda, S. Human-Robot Collaboration: An analysis of worker’s performance. Procedia Comput. Sci. 2022, 200, 1540–1549.
7. Weidemann, C.; Mandischer, N.; van Kerkom, F.; Corves, B.; Hüsing, M.; Kraus, T.; Garus, C. Literature Review on Recent Trends and Perspectives of Collaborative Robotics in Work 4.0. Robotics 2023, 12, 84.
8. Müller, R.; Vette, M.; Geenen, A. Skill-based Dynamic Task Allocation in Human-Robot-Cooperation with the Example of Welding Application. Procedia Manuf. 2017, 11, 13–21.
9. Pichler, A.; Akkaladevi, S.C.; Ikeda, M.; Hofmann, M.; Plasch, M.; Wögerer, C.; Fritz, G. Towards Shared Autonomy for Robotic Tasks in Manufacturing. Procedia Manuf. 2017, 11, 72–82.
10. Burgert, F.L.; Windhausen, M.; Kehder, M.; Steireif, N.; Mütze-Niewöhner, S.; Nitsch, V. Workforce scheduling approaches for supporting human-centered algorithmic management in manufacturing: A systematic literature review and a conceptual optimization model. Procedia Comput. Sci. 2024, 232, 1573–1583.
11. Liu, Y.; Fan, J.; Zhao, L.; Shen, W.; Zhang, C. Integration of deep reinforcement learning and multi-agent system for dynamic scheduling of re-entrant hybrid flow shop considering worker fatigue and skill levels. Robot. Comput.-Integr. Manuf. 2023, 84, 102605.
12. Ruiz, R.; Vázquez-Rodríguez, J.A. The hybrid flow shop scheduling problem. Eur. J. Oper. Res. 2010, 205, 1–18.
13. Zhang, M.; Li, C.; Shang, Y.; Huang, H.; Zhu, W.; Liu, Y. A task scheduling model integrating micro-breaks for optimisation of job-cycle time in human-robot collaborative assembly cells. Int. J. Prod. Res. 2022, 60, 4766–4777.
14. Baratta, A.; Cimino, A.; Longo, F.; Mirabelli, G.; Nicoletti, L. Task Allocation in Human-Robot Collaboration: A Simulation-based approach to optimize Operator’s Productivity and Ergonomics. Procedia Comput. Sci. 2024, 232, 688–697.
15. Manokha, I. The Implications of Digital Employee Monitoring and People Analytics for Power Relations in the Workplace. Surveill. Soc. 2020, 18, 540–554.
16. Siatras, V.; Bakopoulos, E.; Mavrothalassitis, P.; Nikolakis, N.; Alexopoulos, K. Production Scheduling Based on a Multi-Agent System and Digital Twin: A Bicycle Industry Case. Information 2024, 15, 337.
17. Mezgebe, T.T.; Bril El Haouzi, H.; Demesure, G.; Pannequin, R.; Thomas, A. Multi-agent systems negotiation to deal with dynamic scheduling in disturbed industrial context. J. Intell. Manuf. 2020, 31, 1367–1382.
18. Madi, F.; Al-Bazi, A.; Buckley, S.; Smallbone, J.; Foster, K. An agent-based heuristics optimisation model for production scheduling of make-to-stock connector plates manufacturing systems. Soft Comput. 2024, 28, 5899–5919.
19. Groß, S.; Gerke, W.; Plapper, P. Agent-based, hybrid control architecture for optimized and flexible production scheduling and control in remanufacturing. J. Remanuf. 2024, 14, 17–43.
20. Wang, J.; Li, Y.; Zhang, Z.; Wu, Z.; Wu, L.; Jia, S.; Peng, T. Dynamic Integrated Scheduling of Production Equipment and Automated Guided Vehicles in a Flexible Job Shop Based on Deep Reinforcement Learning. Processes 2024, 12, 2423.
21. Qin, Z.; Lu, Y. Knowledge graph-enhanced multi-agent reinforcement learning for adaptive scheduling in smart manufacturing. J. Intell. Manuf. 2024, 36, 5943–5966.
22. Psarommatis, F.; May, G.; Dreyfus, P.A.; Kiritsis, D. Zero defect manufacturing: State-of-the-art review, shortcomings and future directions in research. Int. J. Prod. Res. 2020, 58, 1–17.
23. US Assistant Secretary of Defense. Guide to Zero Defects. Manpower Installations and Logistics. In Quality and Reliability Assurance Handbook; US Assistant Secretary of Defense: Washington, DC, USA, 1965.
24. Psarommatis, F.; Sousa, J.; Mendonça, J.P.; Kiritsis, D. Zero-defect manufacturing the approach for higher manufacturing sustainability in the era of industry 4.0: A position paper. Int. J. Prod. Res. 2022, 60, 73–91.
25. Deshpande, K.; Holzweber, J.; Thalhuber, S.; Hämmerle, A.; Mayrhofer, M.; Pichler, A.; de Paula Filho, P.L. Manufacturing Line Ablation, an approach to perform reliable early prediction. Procedia Comput. Sci. 2024, 232, 752–765.
26. Hämmerle, A.; Kollingbaum, M.J.; Steiner, F.; Ebenhofer, G.; Widmoser, F.; Ikeda, M.; Bauer, H.; Pichler, A. An approach to task scheduling in an end-of-line quality assurance situation with human-robot cooperation. Procedia Comput. Sci. 2025, 253, 524–532.
27. Hämmerle, A.; Zörrer, H.; Pichler, A. Simulation-Based Evaluation of Agent-Driven Scheduling in Human-Robot Teams for Zero-Defect Manufacturing. In Procedia Computer Science, Proceedings of the 7th International Conference on Industry of the Future and Smart Manufacturing (ISM 2025), Msida, Malta, 12–14 November 2025; Elsevier: Amsterdam, The Netherlands, 2025.
28. Matloff, N. Introduction to Discrete-Event Simulation and the Simpy Language; Department of Computer Science, University of California at Davis: Davis, CA, USA, 2008; Volume 2, pp. 1–33.
29. Carlson, J. Redis in Action; Simon and Schuster: New York, NY, USA, 2013.
30. Michel, O. Webots: Professional Mobile Robot Simulation. Int. J. Adv. Robot. Syst. 2004, 1, 39–42.
31. Ferner, J. Redis Commander. Available online: https://github.com/joeferner/redis-commander (accessed on 3 November 2025).
32. Zörrer, H.; Hämmerle, A. Example Video of a Running Webots Simulation for the Research Project Zero0P. Available online: https://figshare.com/articles/media/Example_video_of_a_running_Webots_simulation_for_the_research_project_Zero0P/30543338 (accessed on 5 November 2025).
33. Profanter, S.; Perzylo, A.; Somani, N.; Rickert, M.; Knoll, A. Analysis and semantic modeling of modality preferences in industrial human-robot interaction. In Proceedings of the 2015 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany, 28 September–2 October 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 1812–1818.
34. Stübl, G.; Ebenhofer, G.; Bauer, H.; Pichler, A. Lessons Learned from Industrial Augmented Reality Applications. Procedia Comput. Sci. 2022, 200, 1218–1226.
35. Heindl, C.; Stübl, G.; Pönitz, T.; Pichler, A.; Scharinger, J. Visual Large-Scale Industrial Interaction Processing. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and the 2019 ACM International Symposium on Wearable Computers, London, UK, 9–13 September 2019; pp. 280–283.
36. Deshpande, K.; Heindl, C.; Stübl, G.; Kollingbaum, M.J.; Pichler, A. Novel First Person View for Human 3D Pose Estimation in Robotic Applications Using Fisheye Cameras. In Proceedings of the 2024 10th International Conference on Automation, Robotics and Applications (ICARA), Athens, Greece, 22–24 February 2024; pp. 112–116.
37. Ikeda, M.; Widmoser, F.; Stübl, G.; Zafari, S.; Sackl, A.; Pichler, A. An Interactive Workplace for Improving Human Robot Collaboration: Sketch Workpiece Interface for Fast Teaching (SWIFT). In Adjunct Proceedings of the 2023 ACM International Joint Conference on Pervasive and Ubiquitous Computing & the 2023 ACM International Symposium on Wearable Computing, Cancun, Quintana Roo, Mexico, 8–12 October 2023; pp. 191–194.
38. Puthenkalam, J.; Zafari, S.; Sackl, A.; Gallhuber, K.; Ebenhofer, G.; Ikeda, M.; Tscheligi, M. Where should I put my Mark? VR-based Evaluation of HRI Modalities for Industrial Assistance Systems for Spot Repair. In Proceedings of the 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), Busan, Republic of Korea, 28–31 August 2023; pp. 2148–2155.
39. Heindl, C.; Ebenhofer, G.; Kutlu, S.; Widmoser, F.; Pichler, A. SketchGuide: A Baseline Vision-Based Model for Rapid Robot Programming via Freehand Sketching on Any Surface. In European Robotics Forum 2025; Huber, M., Verl, A., Kraus, W., Eds.; Springer Proceedings in Advanced Robotics; Springer Nature: Cham, Switzerland, 2025; Volume 36, pp. 174–179.
40. Heindl, C.; Ebenhofer, G.; Kutlu, S.; Widmoser, F.; Pichler, A. SketchGuide+: Enabling Parametric Freehand Robot Programming via Wearable-Assisted Sketching. In Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), Espoo, Finland, 11–15 October 2025.
41. Zörrer, H.; Hämmerle, A.; Fixl, S. Example Video of a HRC ZDM Laboratory Test for the Research Project Zero0P. Available online: https://figshare.com/articles/media/Example_video_of_a_HRC_ZDM_laboratory_test_for_the_research_project_Zero0P/30543455 (accessed on 5 November 2025).
42. Brooke, J. SUS: A quick and dirty usability scale. Usability Eval. Ind. 1996, 189, 4–7.
43. Bartneck, C.; Kulić, D.; Croft, E.; Zoghbi, S. Measurement Instruments for the Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and Perceived Safety of Robots. Int. J. Soc. Robot. 2009, 1, 71–81.
44. Hancock, P.A.; Meshkati, N. Human Mental Workload; North-Holland Publishing Company: Amsterdam, The Netherlands, 1988; Volume 52.
45. Hart, S.G. NASA-Task Load Index (NASA-TLX); 20 Years Later. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2006, 50, 904–908.
46. Gao, M.; Kortum, P.; Oswald, F.L. Multi-Language Toolkit for the System Usability Scale. Int. J. Hum.–Comput. Interact. 2020, 36, 1883–1901.
47. Bartneck, C. Godspeed questionnaire series: Translations and usage. In International Handbook of Behavioral Health Assessment; Springer: Berlin/Heidelberg, Germany, 2023; pp. 1–35.
48. Galler, B. NASA-Task Load Index: Ein Instrument, um sich der Komplexität von Beratungsanlässen in der Allgemeinmedizin zu nähern. Ph.D. Thesis, Zentrale Hochschulbibliothek Lübeck, Lübeck, Germany, 2021.
49. Bangor, A.; Kortum, P.; Miller, J. Determining what individual SUS scores mean: Adding an adjective rating scale. J. Usability Stud. 2009, 4, 114–123.
50. Prabaswari, A.D.; Basumerda, C.; Utomo, B.W. The Mental Workload Analysis of Staff in Study Program of Private Educational Organization. IOP Conf. Ser. Mater. Sci. Eng. 2019, 528, 012018.
51. ISO 10218-1:2011; Robots and Robotic Devices—Safety Requirements for Industrial Robots—Part 1: Robots. International Organization for Standardization: Geneva, Switzerland, 2011.
52. ISO 10218-2:2011; Robots and Robotic Devices—Safety Requirements for Industrial Robots—Part 2: Robot Systems and Integration. International Organization for Standardization: Geneva, Switzerland, 2011; Volume 3.
53. EN ISO 13849-1:2023; Safety of Machinery—Safety-Related Parts of Control Systems—Part 1: General Principles for Design. European Committee for Standardization (CEN): Brussels, Belgium; International Organization for Standardization (ISO): Geneva, Switzerland, 2023; European adoption of ISO 13849-1:2023.
Figure 1. Conceptual overview of the three pillars of the Human–Robot Multistation System (HRMS). 1. Multistation Architecture: One or more robots (1.2) serve multiple workstations (1.3), at which also human workers (1.1) operate. 2. Interaction and Guidance Modalities: The vision system (2.4) observes humans wearing helmets, their gestures and locations, and emits human-related events (Ready, Done, Location, Contextual swipe result) to the data blackboard (3.3). Resource/Skill Interaction Agents (2.1) pull the next human/robot tasks from the data blackboard (3.3) to (i) coordinate robot control (2.3) for the robot(s) (1.2), (ii) provide appropriate projections and audio output (2.2) for the workstations (1.3), and (iii) generate robot events back to the data blackboard (3.3). 3. Agent-Based Scheduling: Resource/skill scheduler agents (3.2) (i) add to and pull from job and operation pools (3.1), (ii) allocate jobs to workstations, (iii) schedule the next tasks for humans and robots, and (iv) write those tasks and receive Ready, Done, Location, and Result events via the data blackboard (3.3).
Figure 2. Process plan for quality assurance in human–robot cooperation (HRC): all steps are performed by the human except Polish, which is executed by the robot. During inspection, the human decides quality: OK → Unload; Not OK → Configure → Polish (by Robot) → (repeat) until no defects remain.
Figure 3. Agent-oriented scheduling architecture. Interplay between the job pool, operation pool, workstation agents, worker/robot agents, and the data blackboard. A job consists of a sequence of operations. When an operation is executed by a human or the robot, we refer to it as a task.
Figure 4. Illustration of the HRMS laboratory layout concept for the planned human–robot collaboration (HRC) on the polishing use case: The human marks surface defects at the first workstation, while the robot polishes at the third workstation (workstations are numbered from left to right). Helmet colors encode worker IDs and are projected as guidance in the work area.
Figure 5. Development and debugging setup for the first Webots-based HRMS simulation. Left: running Webots scene with workers, workstations, and mobile robot units. Top right: scheduler/resource-agent code in a VS Code debug session with live logs. Bottom right: Redis Commander [31] inspecting the shared Redis blackboard (keys for workers, robots, and job queues with timestamps).
Figure 6. Webots simulation with humans in color-coded helmets; displays show the next workstation in matching colors. Feeder and Exit stations are visible at the top center. The floor uses a grid with a tile size of 0.5 m × 0.5 m. Left: Webots scene tree showing the static setup of the scene. Center: 3D simulation view. Right: Example of a human-controller program. Bottom: console window displaying log output from controllers.
Figure 7. HRMS laboratory setup. Modular T-slot aluminum framing built from 100 mm × 100 mm extrusions. External dimensions: 4.2 m (L) × 3.0 m (W) × 2.5 m (H). Sliding tray external dimensions: 0.95 m (L) × 0.80 m (W) × 0.91 m (H). (a) CAD model showing the ceiling-mounted UR10e, a polishing tool, workstations with basins in sliding trays, projectors, and an operator desk. (b) Real setup with additional OptiTrack cameras mounted at the edges for defect marking using a 3D pen.
Figure 8. System architecture and Redis-based publish/subscribe interface used in the HRMS demonstrator. The task scheduler publishes jobs/operations and consumes worker/robot states (Ready, Done, Location, Contextual swipe results); the resource/skill interaction agent layer observes operation events for each worker and the robot to coordinate the robot control and the projection/audio system, and publishes robot events (Ready, Done); the vision system (helmet ID, gestures) publishes worker events (Ready, Done, Location, Contextual swipe results); OptiTrack 3D-pen marking is omitted. Abbreviations: <WorkerID> = ID corresponding to the human worker’s helmet color (yellow: ‘worker1’; green: ‘worker2’; red: ‘worker3’; blue: ‘worker4’); <RobotID> = the robot ID; in the lab setting, only ‘robot1’.
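The event flow of Figure 8 can be illustrated with a small redis-py publish/subscribe sketch. The channel naming (events:<WorkerID>) and the JSON payload shape are assumptions inspired by the figure, not the demonstrator's actual key scheme.

```python
import json
import redis

# Assumed channel/payload conventions modeled after Figure 8; the real
# demonstrator's key scheme is not published.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)

def publish_worker_event(worker_id, event, payload=None):
    """Publish a worker event (Ready/Done/Location/Result) to the blackboard."""
    message = {"event": event, **(payload or {})}
    r.publish(f"events:{worker_id}", json.dumps(message))

# Scheduler side: subscribe to all worker channels and react to 'Ready'.
pubsub = r.pubsub()
pubsub.psubscribe("events:worker*")
publish_worker_event("worker1", "Ready")
for msg in pubsub.listen():
    if msg["type"] != "pmessage":
        continue  # skip subscription confirmations
    data = json.loads(msg["data"])
    if data["event"] == "Ready":
        print(f"{msg['channel']}: schedule next task")
        break
```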
Figure 9. Vision module for human-initiated interaction in the HRMS (gesture detection frame rate 18 fps; latencies: end of gesture → gesture recognition 0.3 s; gesture recognition → projection 2.4 s; end of gesture → robot starts polishing (end-to-end) 4.2 s; mean per joint position error (MPJPE) 11 cm for arm joints—wrist, shoulder, elbow): (a) Only helmet-wearing humans are tracked; helmet color encodes the worker ID. (b) Hand raising issues a Ready event to request the next task; the red 3D box indicates the allowed volume for the upcoming swipe gesture. (c) Swipe detection confirms task completion (Done); in this debug view, the 3D interaction box turns green after a successful swipe. (d) Contextual swipe provides a binary return value (yes/no), to report whether the surface quality of the basin assessed in the last inspection is Ok/Nok; the green 3D box marks the OK region, while the red box marks the NOK region.
Figure 10. Example of the Load process step with gesture-driven human initiative and projected guidance: (a) The helmet-wearing worker signals Ready by raising the right hand. (b) Assigned feeder is visualized. (c) Human fetches basin from feeder to workstation. (d) Operation text (Load) is projected onto the basin. (e) Human signals load completion by using a swipe gesture (Done). (f) After loading, a contextual swipe detection projects appropriate fields to ask for the inspection result decision (Ok/Nok).
Figure 11. Defect marking for process step configure—alternative input modalities: (a–c) 3D pen: first marked region (diameter d ≈ 12 cm) confirmed and second region processed. (d–f) SketchGuide: freehand drawing to mark a defect and process parameter adjustment. (f) The subsequent step is polish—robot polishes the marked defect area.
Figure 12. Laboratory layout of the HRMS. The HRMS frontside contains the robot area (home position on the left) and two workstations. The worker area in the center is 1.1 m wide due to lab constraints. The feeder area with trays and basins is at the bottom, the exit is on the right, and the worker start position is near workstation 1. In the experiments, workers had to fetch a basin on a tray from the feeder, carry it to the assigned workstation, collaborate with the polishing robot through the process plan (Figure 2), and, once all defects were eliminated, transport the basin to the exit area. With shortest-path algorithm (SPA) scheduling and the given human start position in the user study, Job A always used Feeder 1/Workstation 1 and Job B always used Feeder 2/Workstation 2.
Table 1. Summary of HRMS task and interaction specifications per process step. Abbreviations: H, human; R, robot; Ws, workstation; HwhC, human wearing helmet in color C; Hr, human signaled ‘Ready’ (raised hand) for task; HaF/HaWs/HaE, human arrived at feeder, workstation, or exit; Rr, robot ready; PCoF/PCoWs/PCoE, project color on feeder, workstation, or exit; PPsWs, project process step on workstation; Cs, contextual swipe detection (Ok/Nok); InspOk/InspNok, flag set if the previous inspection was signaled as Ok or Nok.
Process Step | H | R | Ws | Feeder/Exit | Precondition | Projection | Task
Load | X | – | X | X | HwhC, Hr | Initial: PCoF; HaF: PCoWs; HaWs: PPsWs | Fetch basin from feeder and insert into workstation.
Inspection | X | – | X | – | HwhC, Hr | Initial: PCoWs; HaWs: PPsWs; Cs | Inspect if there are surface defects → InspOk/InspNok.
Configure | X | – | X | – | HwhC, Hr, InspNok | Initial: PCoWs; HaWs: PPsWs | Define rework location.
Polish | – | X | X | – | Rr | – | Polish at all given defect locations.
Unload | X | – | X | X | HwhC, Hr, InspOk | Initial: PCoWs; HaWs: PPsWs, PCoE | Unload basin from Ws to exit.
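A minimal Python encoding of this process plan as a transition table may help clarify the Inspection branch. The representation is illustrative only and is not the demonstrator's internal data structure.

```python
# Illustrative encoding of the Table 1 / Figure 2 process plan as a
# transition table (assumption; not the demonstrator's implementation).
PROCESS_PLAN = {
    "Load":       {"next": "Inspection"},
    "Inspection": {"ok": "Unload", "nok": "Configure"},
    "Configure":  {"next": "Polish"},
    "Polish":     {"next": "Inspection"},   # re-inspect after robot polishing
    "Unload":     {"next": None},           # job complete
}

def next_step(step, inspection_ok=None):
    """Advance the process plan; Inspection branches on the Ok/Nok swipe."""
    if step == "Inspection":
        return PROCESS_PLAN[step]["ok" if inspection_ok else "nok"]
    return PROCESS_PLAN[step]["next"]

# Example trace: first inspection finds a defect (Nok), second is Ok.
step, trace = "Load", ["Load"]
inspection_results = iter([False, True])
while step != "Unload":
    ok = next(inspection_results) if step == "Inspection" else None
    step = next_step(step, ok)
    trace.append(step)
print(" -> ".join(trace))
# Load -> Inspection -> Configure -> Polish -> Inspection -> Unload
```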
Table 2. Summary of SUS, Godspeed, and NASA–TLX. Values are mean ± SD (SUS out of 100; GQS 1–5; RAW NASA-TLX 0–100). 95% CIs via BCa bootstrap (10,000 resamples; participant-level); see Section 3.3. Abbreviations: CI, confidence interval; BCa, bias-corrected and accelerated; SUS, System Usability Scale; GQS, Godspeed Questionnaire Series; RAW NASA-TLX, Raw NASA Task Load Index.
Measure | Value | 95% CI
SUS total | 70.5 ± 22 | [46, 83]
GQS | 3.2 ± 0.8 | [2.9, 3.6]
RAW NASA-TLX total | 37 ± 16.3 | [25.7, 51.7]
Table 3. Performance metrics per job (n = 5 per job; 10 executions total). Times are mean ± SD in seconds; 95% CIs via BCa bootstrap (10,000 resamples; participant-level). For overall, the mean is computed from per-participant averages mi = (Ai + Bi) / 2 and its CI is obtained with the same BCa procedure (participant-level resampling). See Section 3.3 for statistical details. Error counts are raw frequencies (no CI). Abbreviations: BCa, bias-corrected and accelerated; #, count; n/N, number with denominator. Throughput time: measured from first confirmed user event to job process plan completion; E1 = safety-critical errors (e.g., tray not secured with safety clips before polishing); E2 = procedural/intervention errors (e.g., gesture forgotten); E3 = system misbehavior errors (e.g., button not clickable); E4 = technical interruption errors (e.g., restart necessary, emergency stop).
Metric | Job A (n = 5) | Job B (n = 5) | Overall (n = 10)
Throughput time, s | 258.4 ± 46.8 [225, 297.6] | 206.6 ± 31.7 [182.6, 234.6] | 232.5 ± 46.5 [205.7, 268.2]
E1 safety-critical, # | 0 | 1 | 1
E2 procedural/intervention, # | 7 | 1 | 8
E3 system misbehavior, # | 3 | 3 | 6
E4 technical interruption, # | 0 | 1 | 1
Jobs with any error, n/N (%) | 5/5 (100%) | 4/5 (80%) | 9/10 (90%)
Jobs with no error, n/N (%) | 0/5 (0%) | 1/5 (20%) | 1/10 (10%)
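The confidence intervals in Tables 2 and 3 are described as BCa bootstrap intervals with 10,000 participant-level resamples. The following SciPy sketch reproduces that procedure on placeholder data; the per-participant raw scores are not published, so the input values below are invented for illustration.

```python
import numpy as np
from scipy.stats import bootstrap

# Placeholder per-participant SUS scores (n = 5); the real raw values are
# not published, so these numbers are illustrative only.
rng = np.random.default_rng(0)
sus_scores = np.array([92.5, 85.0, 72.5, 67.5, 35.0])

# BCa bootstrap CI of the mean, as described in the table captions.
res = bootstrap(
    (sus_scores,), np.mean,
    n_resamples=10_000, confidence_level=0.95,
    method="BCa", random_state=rng,
)
print(f"mean = {sus_scores.mean():.1f}, "
      f"95% CI = [{res.confidence_interval.low:.1f}, "
      f"{res.confidence_interval.high:.1f}]")
```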