Review

Mind, Machine, and Meaning: Cognitive Ergonomics and Adaptive Interfaces in the Age of Industry 5.0

by Andreea-Ruxandra Ioniță 1, Daniel-Constantin Anghel 2,* and Toufik Boudouh 3,4

1 Doctoral School of Industrial Engineering, National University of Science and Technology Politehnica Bucharest, Pitesti University Center, Târgul din Vale Street, 110040 Pitesti, Romania
2 Department of Manufacturing and Industrial Management, National University of Science and Technology Politehnica Bucharest, Pitesti University Center, Târgul din Vale Street, 110040 Pitesti, Romania
3 Centre National de la Recherche Scientifique—CNRS, Laboratoire Interdisciplinaire Carnot de Bourgogne ICB UMR 6303, Université de Technologie de Belfort Montbéliard—UTBM, Université Marie et Louis Pasteur, 90010 Belfort, France
4 Centre National de la Recherche Scientifique—CNRS, Laboratoire Interdisciplinaire Carnot de Bourgogne ICB UMR 6303, Université Bourgogne Europe, 21000 Dijon, France
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(14), 7703; https://doi.org/10.3390/app15147703
Submission received: 14 June 2025 / Revised: 4 July 2025 / Accepted: 8 July 2025 / Published: 9 July 2025

Abstract

In the context of rapidly evolving industrial ecosystems, the human–machine interaction (HMI) has shifted from basic interface control toward complex, adaptive, and human-centered systems. This review explores the multidisciplinary foundations and technological advancements driving this transformation within Industry 4.0 and the emerging paradigm of Industry 5.0. Through a comprehensive synthesis of the recent literature, we examine the cognitive, physiological, psychological, and organizational factors that shape operator performance, safety, and satisfaction. A particular emphasis is placed on ergonomic interface design, real-time physiological sensing (e.g., EEG, EMG, and eye-tracking), and the integration of collaborative robots, exoskeletons, and extended reality (XR) systems. We further analyze methodological frameworks such as RULA, OWAS, and Human Reliability Analysis (HRA), highlighting their digital extensions and applicability in industrial contexts. This review also discusses challenges related to cognitive overload, trust in automation, and the ethical implications of adaptive systems. Our findings suggest that an effective HMI must go beyond usability and embrace a human-centric philosophy that aligns technological innovation with sustainability, personalization, and resilience. This study provides a roadmap for researchers, designers, and practitioners seeking to enhance interaction quality in smart manufacturing through cognitive ergonomics and intelligent system integration.

1. Introduction

1.1. Importance of the Human–Machine Interaction in Automated Industrial Environments

In the context of profound transformations driven by digitalization and the rise of emerging technologies, the human–machine interaction (HMI) is becoming a strategically relevant concept within automated industrial environments. The HMI encompasses all the ways in which humans communicate with, control, and collaborate with technological systems, ranging from basic graphical interfaces to direct cooperation with intelligent robots [1]. The shift from Industry 3.0 to Industry 4.0 and, more recently, toward Industry 5.0 has led to a paradigm change—moving from classical automation to a symbiotic relationship between humans and machines [2,3].
An effective interaction between operators and automated systems significantly contributes to increased productivity, improved product quality, and the reduction of operational errors and risks [4]. Moreover, an ergonomic interface design and adaptation to users’ cognitive and physiological capacities enhance work satisfaction and safety [5].
In the era of Industry 5.0, the emphasis shifts from mere efficiency and autonomy toward personalization, sustainability, and the promotion of human well-being [6]. Furthermore, the HMI is no longer merely a technical interface element but a transformative tool for redefining the human–technology relationship [7].

1.2. Main Objectives of This Research

This study aims to offer an in-depth overview of the current developments in the human–machine interaction within industrial settings. This paper pursues the following specific objectives:
- To identify recent studies and trends in the field of HMIs, with a focus on applications within Industry 4.0 and 5.0;
- To analyze ergonomic techniques applied in optimizing the design of interfaces and the interactions between operators and automated systems;
- To explore interaction models and the cognitive, psychological, and physiological factors that influence operator performance, efficiency, and safety;
- To define future research directions, emphasizing the development of intelligent, adaptive, and sustainable HMI solutions.
Through this work, we aim to contribute to the scientific understanding of the industrial HMI by offering a valuable theoretical and applied framework for researchers, engineers, system designers, and decision-makers involved in the digital transformation and humanization of manufacturing processes.

2. Human–Machine Interaction: Definitions and Key Concepts

2.1. Evolution of the Human–Machine Interaction

2.1.1. From Mechanization to Intelligent Automation

The development of the human–machine interaction has followed the trajectory of industrial evolution, starting from mechanization and culminating in the emergence of intelligent, human-centered systems. In the early industrial revolutions, machines were designed primarily to extend physical labor and operated based on mechanical or rudimentary electromechanical principles. Human interaction was minimal and mostly limited to manual controls and monitoring. Automation became increasingly important during the 20th century, and programmable logic controllers (PLCs) allowed machines to execute predefined sequences without continuous human input. However, this approach remained rigid and lacked adaptability [8].
The shift toward Industry 4.0 introduced smart manufacturing systems based on cyber–physical systems (CPS), real-time data analytics, and cloud connectivity. These technologies enabled dynamic decision-making and more sophisticated interaction patterns between humans and machines. Today, Industry 5.0 continues this transformation by emphasizing not only productivity and efficiency but also sustainability, resilience, and a human-centric design approach [8,9].

2.1.2. The Roles of Artificial Intelligence and Autonomous Systems

Artificial intelligence (AI) has played a pivotal role in evolving the HMI from static, preprogrammed interactions to context-aware, adaptive systems. Machine learning algorithms such as deep learning, computer vision, and natural language processing allow machines to understand human behavior, gestures, speech, and emotions, leading to more natural and seamless interactions [10].
Using these capabilities, autonomous systems can make decisions in real time and adapt to dynamic environments with limited human involvement. Such systems are now used not only in industrial robotics but also in areas like telemedicine, autonomous vehicles, and service robots. The convergence of AI with the HMI has also brought ethical and cognitive challenges, particularly in ensuring trust, maintaining human oversight, and preserving transparency in decision-making processes [10,11].
Furthermore, using immersive technologies like augmented reality (AR), virtual reality (VR), and extended reality (XR) improves interactions by offering operators clear visual guidance, realistic simulations, and digital feedback while performing complex tasks. These tools are particularly valuable in training, maintenance, and safety-critical operations [10].

2.1.3. Industrial Examples and Use Cases

The application of advanced HMI systems is widespread across various industrial domains. In smart manufacturing, collaborative robots (cobots) work side by side with human operators using sensor-based and vision-guided systems to adjust actions in real time. These robots are commonly deployed in assembly lines, reducing physical strain and improving precision [8].
In the automotive industry, AR-based interfaces guide workers through maintenance procedures, while wearable exoskeletons support ergonomically challenging tasks. Digital twins are used to simulate production scenarios, enabling predictive maintenance and operational optimization. Moreover, wearable sensors and AI-driven analytics track physiological indicators of fatigue, workload, and posture, creating closed-loop systems that protect worker health and enhance performance [9,11].
Recent studies have also highlighted the importance of maintaining the “sense of agency” (SoA) in HMI systems. SoA—the subjective experience of initiating and controlling one’s actions—has become critical in systems where humans supervise or intermittently interact with autonomous technologies. Maintaining SoA ensures that users remain cognitively engaged, which is essential for safety and accountability in applications like automated driving and remote robotic surgery [12].

2.2. Models of the Human–Machine Interaction

2.2.1. Classic Models of the Human–Machine Interaction

Norman’s Model—The model of the seven stages of action proposed by Norman provides a conceptual framework for analyzing how users interact with systems. It includes stages such as goal formation, intention, action specification, execution, perception, interpretation, and evaluation. This model is essential in interface design as it emphasizes feedback loops and the alignment between user intentions and system responses [13].
SHERPA—The Systematic Human Error Reduction and Prediction Approach (SHERPA) is a method of cognitive task analysis that helps detect possible human errors and understand their root causes. By combining a hierarchical task analysis with error prediction, SHERPA aims to make systems safer and easier to use. A recent application integrated SHERPA with Fuzzy FMEA to identify and prioritize UX-related failure modes in interactive systems, demonstrating the model’s utility in contemporary system design [14].
OODA Loop—The Observe–Orient–Decide–Act loop, originally developed for military applications, has been adapted for human–machine interactions to model decision-making in dynamic and uncertain environments. It supports the development of responsive systems that adapt to user input and environmental changes [15].

2.2.2. Modern Models Based on Intelligent Technologies: AI and Machine Learning Applications

The rise of Industry 4.0 and 5.0 has transformed human–machine interactions by integrating machine learning (ML) methods that allow for behavioral predictions, interface adaptation, and customized decision support. ML is increasingly used to extract knowledge from user behavior logs to enhance interface design and system efficiency [16].
Advanced HMIs supported by AI allow for more intuitive alarm management, content prioritization, and context-aware user guidance. One example demonstrated how ML can improve human-centric design in process industries by optimizing workflow displays based on historical event patterns [10].
Furthermore, trust modeling between users and intelligent agents is becoming a foundational aspect of modern HMI systems. Trust-aware models incorporate cognitive reasoning about how users perceive system reliability and vice versa, enabling more effective and safe collaborations [17].

2.2.3. User Interface and User Experience in Industrial Environments

Designing effective user interfaces (UIs) and user experiences (UX) is essential in modern industrial environments where human operators interact with robotic or automated systems. UX is defined as the user’s overall perception and emotional response when interacting with a system, incorporating usability, effectiveness, and satisfaction [14].
In collaborative human–robot industrial settings, AR-supported interfaces have been shown to enhance task efficiency, operator satisfaction, and safety. A systematic review emphasized the importance of integrating human-centered design principles to address usability challenges and cognitive workload [17].
Additionally, UX improvement through failure mode analysis is gaining ground. A hybrid approach combining fuzzy logic and SHERPA enables the identification and ranking of UX-related failure modes, helping developers optimize the interaction design [14].

2.3. Factors Influencing Human–Machine Interactions

Human–machine interactions in industrial settings are influenced by a wide array of factors that impact usability, safety, efficiency, and user satisfaction. These factors can be categorized into cognitive, physiological/biomechanical, psychological, and environmental domains.

2.3.1. Cognitive Factors

Cognitive factors such as mental workload (MWL), attentional resources, and multi-sensory integration critically influence the quality and safety of human–machine interactions. Excessive information input may exceed users’ processing capacity, leading to cognitive overload and performance degradation, especially in high-demand environments such as automated vehicles or shared control systems [18,19]. Adaptive HMIs (A-HMIs) mitigate this risk by dynamically adjusting the presentation of information in response to the operator’s cognitive state [20]. Mental fatigue, resulting from prolonged cognitive engagement, has been shown to reduce proprioceptive integration and increase reliance on vestibular and cerebellar inputs, affecting postural stability and task performance [21]. Neurophysiological studies confirm that sustained cognitive tasks lead to reduced activation in control-related brain areas and lower task accuracy [22].
Furthermore, real-time sensing of MWL using physiological signals (e.g., EEG and eye-tracking) enhances the system’s ability to adapt interface complexity and timing [23]. Multimodal interaction, combining visual, auditory, and haptic channels, has been found to reduce visual dependency and improve cognitive ergonomics, contributing to a safer and more intuitive HMI design [24].
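To make this adaptation loop concrete, the following minimal Python sketch shows one way an A-HMI could map a normalized mental-workload index onto interface detail levels. It is an illustration only: the thresholds, the averaging scheme, and the detail levels are assumptions, not parameters taken from any cited system.

```python
from dataclasses import dataclass

@dataclass
class WorkloadSample:
    """Normalized workload estimate in [0, 1], e.g., fused from an EEG
    theta/alpha ratio and pupil diameter (the fusion step is out of scope)."""
    value: float

def select_detail_level(workload: float) -> str:
    """Map a workload index to an interface detail level.
    The thresholds are illustrative, not validated values."""
    if workload < 0.3:
        return "full"      # all panels and secondary alarms visible
    if workload < 0.7:
        return "reduced"   # hide secondary info, keep task-critical cues
    return "minimal"       # safety-critical alerts only, enlarged targets

def adapt_interface(samples: list[WorkloadSample]) -> str:
    # Smooth over recent samples to avoid oscillating layouts.
    avg = sum(s.value for s in samples) / len(samples)
    return select_detail_level(avg)

if __name__ == "__main__":
    recent = [WorkloadSample(v) for v in (0.62, 0.71, 0.78)]
    print(adapt_interface(recent))  # -> "minimal"
```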

2.3.2. Physiological and Biomechanical Factors

Physiological and biomechanical factors play a crucial role in designing effective human–machine interaction systems, particularly within Industry 5.0 frameworks that emphasize human-centered work environments. Musculoskeletal disorders (MSDs), primarily resulting from repetitive tasks, static postures, and high biomechanical loads, remain one of the most prevalent occupational health issues in Europe, affecting up to 60% of workers and contributing significantly to absenteeism and early retirement [25]. Digital human modeling and wearable motion tracking technologies are increasingly employed to assess biomechanical loads and ergonomic indices in real time, supporting preventive interventions and improved workplace design [26,27]. The integration of collaborative robots (cobots) has shown promising results in reducing the physical workload and enhancing postural ergonomics by sharing tasks involving heavy lifting or repetitive motion [28]. Physiological monitoring, including heart rate variability and electromyography, is also gaining traction as a method for detecting physical strain and fatigue in industrial settings, allowing adaptive systems to respond in real time [29].
In addition, virtual reality and digital simulations enable proactive ergonomic assessments during the design phase, facilitating the optimization of tasks and tools before implementation [26]. Altogether, the coupling of physiological sensing and biomechanical modeling contributes to safer, more efficient, and human-centric HMI systems aligned with Industry 5.0 values [30].
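As a concrete illustration of the physiological-monitoring side, the sketch below computes RMSSD, a standard time-domain heart-rate-variability statistic often used as a strain or fatigue proxy. The RR-interval series and the alert threshold are invented for the example; a deployed system would calibrate thresholds against a person-specific baseline.

```python
import math

def rmssd(rr_intervals_ms: list[float]) -> float:
    """Root mean square of successive differences between RR intervals (ms).
    Lower RMSSD is commonly read as higher physiological strain."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical RR series (ms) from a wearable ECG/PPG sensor.
rr = [812, 798, 805, 770, 760, 755, 748]
score = rmssd(rr)
print(f"RMSSD = {score:.1f} ms")
# An adaptive workstation might suggest a micro-break or task rotation
# below a baseline threshold (the 20 ms figure is an assumption).
if score < 20.0:
    print("Strain indicator raised: suggest task rotation or break")
```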

2.3.3. Psychological Factors

Psychological factors such as trust, stress, emotional state, and user acceptance significantly shape human–machine interactions, particularly in the context of semi- and fully automated systems. Trust in automation is essential for effective HMI design; users must perceive the system as predictable and safe to engage with it confidently, especially in shared control environments like autonomous or rail vehicles [24]. Emotional and cognitive states—such as stress, mental fatigue, or emotional discomfort—can alter decision-making and situation awareness, potentially leading to errors or a loss of control [31]. Emerging technologies now enable real-time stress detection using physiological signals, including electrooculography (EOG) via smart glasses, allowing a dynamic adaptation of HMI elements to users’ psychological status [32]. Emotional recognition through facial expression analysis is also gaining traction as a non-invasive way to assess workers’ cognitive states and support ergonomic interventions [33].
Furthermore, user acceptance remains a pivotal psychological construct that influences HMI success, shaped by perceived safety, ease of use, behavioral intention, and social influence—factors formalized in models such as TAM (Technology Acceptance Model), UTAUT (Unified Theory of Acceptance and Use of Technology), and AVAM (Automated Vehicle Acceptance Model) [34]. Poorly accepted HMIs may generate anxiety, hesitation, or a rejection of otherwise functional technologies, which is why standardized testing procedures and user-centered design are crucial in validating psychological alignment [35]. In addition, visual legibility, interface aesthetics, and message clarity influence users’ perceptions, reducing mental workload and increasing comfort during interactions with automated systems [35].

2.3.4. Environmental and Organizational Factors

Environmental and organizational factors play a critical role in shaping human–machine interactions, influencing both ergonomic safety and psychological acceptance. Elements such as noise, lighting, and spatial layout affect cognitive performance and user comfort in automated or semi-automated environments [30]. At the organizational level, supportive leadership, clear communication, and participatory ergonomics reduce resistance to technological change and mitigate stress responses in highly digitalized contexts [24,26]. Moreover, a workplace culture that promotes emotional safety, autonomy, and psychological flexibility enables workers to better adapt to the rapid evolution of HMI systems [34]. Tools such as wearable sensors and digital simulations further enhance this adaptation by allowing personalized ergonomic adjustments and real-time monitoring of user well-being [32]. Finally, integrating sustainable design principles and ensuring work–life balance are essential for reducing burnout and fostering long-term acceptance of intelligent technologies [33].

3. Ergonomic Techniques in Human–Machine Interactions

3.1. Principles of Ergonomics Applied to Human–Machine Interactions

3.1.1. Physical Ergonomics (Posture, Fatigue, and Physical Effort)

Physical ergonomics focuses on the anatomical and biomechanical compatibility between the human body and the physical demands of tasks in human–machine interactions. A poor working posture, prolonged physical effort, and muscle fatigue are primary factors associated with work-related musculoskeletal disorders (WMSDs), which are prevalent in repetitive and manual handling operations across industrial environments [36]. Ergonomic misalignment can lead to high physical strain, reduced productivity, and long-term health consequences, especially in aging workforces or those performing highly repetitive tasks [37].
Recent advancements in sensor technology and wearable systems have enabled more precise monitoring of posture and physical exertion in real time. Devices such as inertial measurement units (IMUs), EMG, and pressure sensors allow for the objective identification of biomechanical risks and physical overload by tracking joint angles, movement velocity, and muscular activation during tasks [38]. These data can be processed through machine learning algorithms to estimate ergonomic indices and support preventive interventions [38,39]. In addition, haptic feedback systems integrated into smart workwear are proving effective at training workers to adopt a better posture and reduce the time spent in harmful positions, thereby reducing local fatigue and long-term strain [40].
The integration of collaborative robots (cobots) further enhances physical ergonomics by reducing human physical effort through task sharing. In hybrid workstations, cobots assist with heavy lifting, repetitive motions, or awkward positioning, enabling a redistribution of physical demands and lowering the incidence of musculoskeletal injuries [41]. When designed with anthropometric considerations and real-time adaptability, such systems contribute to sustainable productivity and improved occupational well-being [42].
Table 1 provides a structured overview of the main physical ergonomic factors that influence the human–machine interaction (HMI). It highlights how elements such as posture, fatigue, and physical load impact operator performance, safety, and task execution [39,40]. Table 1 also emphasizes the role of technological solutions, such as wearable sensors, haptic feedback systems, and collaborative robots (cobots), in mitigating ergonomic risks by enabling real-time monitoring, posture correction, and physical workload redistribution [41]. By linking each factor to its ergonomic implications and referencing validated sources, the table offers a concise framework for identifying critical intervention points in HMI system design [41].

3.1.2. Cognitive Ergonomics

Cognitive ergonomics, as a discipline, addresses how mental processes such as perception, information processing, and decision-making affect performance and safety in human–machine interactions. In high-complexity work environments, operators are exposed to an increased informational density, requiring rapid perception and filtering of relevant cues—particularly under time pressure and stress, which can impair attention and lead to cognitive overload [43]. Studies confirm that mental workload correlates strongly with decision accuracy and task execution time, especially when perceptual ambiguity or multitasking is present [44]. Informational assistance systems, such as augmented reality (AR) and adaptive user interfaces, mitigate this by presenting visual guidance that reduces the burden on working memory and shortens decision paths [45].
The design of collaborative robotic systems must integrate cognitive ergonomics from the early stages to align machine behavior with human cognitive capabilities. Systems that adapt based on the user’s workload—measured through physiological signals such as eye movement, heart rate variability, or EEG (electroencephalography)—can enhance situational awareness and reduce reaction times [29]. Decision-making in human–robot collaboration is influenced not only by task complexity but also by the level of system transparency and the avoidance of cognitive biases, such as confirmation or overconfidence bias, which have historically led to catastrophic errors in aviation and industry [46,47]. Thus, cognitive ergonomics is not merely a safety factor but a foundation for trust, adaptability, and sustainable interaction in relation to Industry 5.0 developments.
Figure 1 illustrates the sequential cognitive flow underpinning human–machine interactions, highlighting how humans perceive, interpret, and respond to information in real time. The process begins with external stimuli, such as visual or auditory cues, entering the perceptual system, where selective attention filters relevant inputs. These are then cognitively processed, evaluated, and interpreted based on the task context, memory, and experience. Once sufficient information is integrated, the user selects and executes a decision, which may involve a motor response (e.g., pressing a button) or a verbal command. This action, in turn, generates feedback either from the machine or the environment, which the user reinterprets, restarting the cognitive loop. Understanding this cycle is essential for ergonomic system design, as overload at any stage (especially perception or decision-making) can result in delayed reactions or critical errors [43].

3.1.3. Organizational Ergonomics: Workplace Design and Collaboration

As manufacturing advances toward Industry 5.0, the human–machine interaction is no longer limited to task automation but involves collaborative intelligence between humans and adaptive systems. This evolution places increased emphasis on organizational ergonomics—the optimization of structures, workflows, and policies to improve both system performance and worker well-being [9,48].
Workplace design significantly influences the quality of the HMI. Ergonomically optimized environments reduce musculoskeletal risks and mental strain, while promoting productivity and satisfaction. Elements such as task variation, job rotation, and clear communication channels contribute to lowering the incidence of fatigue and repetitive stress injuries [49,50].
Cyclic job rotation schemes, especially when optimized through algorithmic planning, have shown promise in reducing the ergonomic load and supporting sustainable work rhythms [51]. Moreover, motion capture technologies allow for a detailed analysis of posture and movement, offering objective data for ergonomic improvements [52].
In physically demanding sectors, passive exoskeletons can alleviate biomechanical stress, though their effectiveness is conditioned by task fit and worker perception. Field studies show mixed acceptance, emphasizing the need for participatory design approaches [53].
In collaborative workspaces with cobots, trust and system transparency are critical. Overtrust or distrust can impair performance and safety. Human-centered HMI design—featuring adjustable autonomy and intuitive feedback—helps calibrate user trust and enhances acceptance of intelligent systems [54,55].
Virtual reality (VR) has emerged as a valuable tool for prototyping collaborative stations. By simulating human–robot interactions and assessing ergonomic comfort in early design stages, VR supports more efficient implementation and reduces design errors [56].
Beyond workstation design, organizational ergonomics involves participatory strategies and knowledge-based tools for layout optimization. Simulation environments, digital twins, and AI-supported design platforms are increasingly integrated in ergonomic planning to balance efficiency with human comfort [50]. Flexible, human-centered layouts—those that can adapt to anthropometric diversity and cognitive needs—are fundamental in Industry 5.0 environments. Here, the human is viewed not as an operator, but as a creative problem-solver, supported by intelligent machines [9,48].

3.2. Methods Used in Ergonomic Evaluations

3.2.1. Rapid Upper Limb Assessment (RULA): Methodological Framework and Applications

The Rapid Upper Limb Assessment is an observational screening tool developed by McAtamney and Corlett in 1993 to evaluate biomechanical and postural strain affecting the upper limbs, neck, and trunk during work tasks. It is especially useful for identifying factors that contribute to work-related musculoskeletal disorders (WMSDs) in environments where workers adopt repetitive, static, or awkward postures. The method has been widely adopted in both industrial and healthcare settings due to its efficiency, low cost, and practicality [57].
RULA divides the body into two groups for posture analysis:
- Group A—upper arm, lower arm, and wrist (Figure 2). The upper arm, for example, is scored as follows:
  - One for 20° of extension to 20° of flexion;
  - Two for extension greater than 20° or 20–45° of flexion;
  - Three for 45–90° of flexion;
  - Four for 90° or more of flexion.
- Group B—neck, trunk, and legs (Figure 3). The neck, for example, is scored as follows:
  - One for 0–10° of flexion;
  - Two for 10–20° of flexion;
  - Three for 20° or more of flexion;
  - Four if in extension.
Each segment is scored based on the angular deviation from a neutral posture, combined with modifiers for force, repetition, and muscle use. The intermediate scores (A and B) are then cross-referenced in tables to yield a final Grand Score ranging from one to seven. Table 2 shows a structured scoring approach that allows practitioners to prioritize interventions based on risk severity [58].
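To make the scoring logic explicit, the sketch below encodes only the two angle bands listed above (upper arm and neck). The force and muscle-use modifiers and the cross-reference tables that produce the Grand Score are deliberately omitted, so this is a partial illustration rather than a complete RULA implementation.

```python
def upper_arm_score(flexion_deg: float) -> int:
    """Group A upper-arm score from the angle bands listed above.
    Negative values denote extension; posture modifiers (raised shoulder,
    abduction, supported arm) are omitted from this sketch."""
    if -20 <= flexion_deg <= 20:
        return 1  # 20 deg of extension to 20 deg of flexion
    if flexion_deg < -20 or flexion_deg <= 45:
        return 2  # extension beyond 20 deg, or 20-45 deg of flexion
    if flexion_deg <= 90:
        return 3  # 45-90 deg of flexion
    return 4      # 90 deg of flexion or more

def neck_score(flexion_deg: float) -> int:
    """Group B neck score from the angle bands listed above."""
    if flexion_deg < 0:
        return 4  # neck in extension
    if flexion_deg <= 10:
        return 1
    if flexion_deg <= 20:
        return 2
    return 3

# Example: 60 deg upper-arm flexion and 15 deg neck flexion.
print(upper_arm_score(60), neck_score(15))  # -> 3 2
# The full method then cross-references the Group A and Group B scores,
# plus force/muscle-use modifiers, in lookup tables (cf. Table 2) to
# yield a Grand Score from 1 to 7; those tables are not reproduced here.
```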
Strengths: RULA’s key advantage lies in its simplicity, speed of use, and focus on an upper limb evaluation—an anatomical area commonly affected by WMSDs. It does not require special equipment and is thus practical for on-site assessments in real work environments. Studies have validated its sensitivity and correlation with discomfort reports, EMG activity, and kinematic data [59,60].
RULA has demonstrated strong reliability and accuracy when used with digital pose estimation technologies, including single-view 3D pose estimation models that track worker postures using RGB video footage. These systems, such as the DyWHSE dataset used by Kwon et al., have improved the accuracy of the RULA-based wrist posture analysis in complex manufacturing environments [61].
Limitations: Despite its widespread adoption, RULA presents several drawbacks. It is limited in its capacity to assess lower extremity postures, temporal exposure to risk, or postural variability throughout a task. Furthermore, assessments are typically unilateral, which may overlook asymmetrical loads. Some studies have noted inter-rater reliability challenges due to subjective scoring, though training improves consistency [58].
The method also tends to overestimate risks for temporary or transitional postures and can underestimate dynamic loading, which is more accurately captured by tools such as EMG or biomechanical modeling [62].
Applied Industrial Case Studies
- Wire Harness Assembly with Cobots: Navas-Reascos et al. applied RULA to assess ergonomic improvements in a wire harness assembly when a collaborative robot (cobot) was introduced. In the original manual process, high scores of six and seven were recorded, primarily due to upper arm elevation and wrist twisting. After implementation, these scores dropped to three, confirming a significant reduction in risk exposure [62].
- Three-Dimensional Pose Estimation in Manufacturing: Paudel et al. developed a computer vision-based RULA evaluation framework using 3D human pose estimation. Their system showed 93% alignment with expert RULA ratings in non-occluded views and 82% accuracy in occluded postures. This method proves valuable in automating large-scale ergonomic assessments, especially in settings with numerous workers or real-time monitoring needs [59].
- Inertial Sensor-Based RULA Automation: Gómez-Galán et al. proposed a wearable inertial sensor system that automates RULA and REBA scoring. Their results achieved over 88% classification accuracy compared to expert assessments. This method is suitable for high-risk, dynamic tasks where visual scoring is impractical, such as in heavy industry or confined workspaces [58].
- Agriculture Sector: Paudel et al. conducted a bibliometric review showing that agriculture, healthcare, and manufacturing are the top domains for RULA use. In agricultural contexts, such as fruit harvesting or rubber tapping, high RULA scores have led to equipment redesign and optimized workflows. The authors noted RULA’s consistent tendency to detect high risk in repetitive, bent-forward upper-limb postures [59,60].
Effectiveness and Reliability
- Validation Accuracy: In recent experimental setups using digital sensors or pose estimation, RULA scores matched expert evaluations with over 90% accuracy [59,60].
- Statistical Reliability: Intraclass correlation coefficients (ICCs) often exceed 0.80 in well-trained evaluators, though variability remains for complex tasks [58].
- Comparative Sensitivity: RULA was shown to yield higher risk ratings than OWAS and REBA in static or upper-limb focused tasks, but lower sensitivity in dynamic whole-body postures [63].
In conclusion, the RULA remains an indispensable tool in ergonomic analysis, especially when assessing upper-body static or repetitive work. Its application has been enhanced by emerging technologies such as collaborative robots, computer vision systems, and wearable sensors, which improve scoring accuracy and allow for large-scale deployment. However, for full-body assessments or highly dynamic work scenarios, RULA should be complemented with other tools such as REBA (Rapid Entire Body Assessment) or biomechanical simulation models.

3.2.2. Ovako Working Posture Analysis System (OWAS): Ergonomic Assessment Methodology and Applications

The Ovako Working Posture Analysis System is a method developed in the 1970s by the steel surface treatment company Ovako Oy for the evaluation of workers’ postures and the identification of musculoskeletal disorder (MSD) risk factors in industrial environments. The system was originally designed for tasks involving heavy manual handling, particularly in steel manufacturing, but has since been widely adopted across various sectors such as agriculture, logistics, healthcare, and manufacturing [64,65].
OWAS assesses postural load by classifying body positions and providing action categories to determine the urgency of corrective ergonomic measures. It is widely used due to its simplicity, rapid application, and ability to cover the entire body’s postural configuration during work tasks [63].
Method Description and Scoring Principles
OWAS evaluates the worker’s posture based on four components [66]:
- Back posture (four categories) (Figure 4);
- Arm postures (three categories) (Figure 5);
- Leg postures (seven categories) (Figure 6);
- Load/force handled (three categories) (Figure 7).
Each body part is assigned a code based on its position during the task. The combination of these codes results in a unique posture code (e.g., 2121) which is then linked to one of four Action Categories (ACs) (Table 3).
The method is typically applied through a visual observation or video analysis of work tasks. The frequency and duration of each posture code are documented by analysts to evaluate the overall risk associated with the task [64,67].
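The posture-coding logic can be illustrated with the short sketch below. The four-digit code structure (back, arms, legs, load) follows the description above, but the code-to-category entries are a small illustrative subset, since the full action-category mapping (Table 3) is not reproduced here; the 4232 → AC4 pairing is taken from the automotive case study cited further below.

```python
# Digits encode back (1-4), arms (1-3), legs (1-7), and load (1-3),
# as described above. Only a few illustrative codes are mapped; the
# complete OWAS action-category table must be used in practice.
ILLUSTRATIVE_AC_TABLE = {
    "1121": 1,  # neutral back, arms down, standing: posture acceptable
    "2121": 2,  # bent back while standing: corrective action needed soon
    "4232": 4,  # bent and twisted back, heavy load: act immediately
}

def action_category(posture_code: str) -> int:
    """Return the Action Category (AC1-AC4) for a four-digit OWAS code."""
    if len(posture_code) != 4 or not posture_code.isdigit():
        raise ValueError("OWAS code must be four digits: back/arms/legs/load")
    try:
        return ILLUSTRATIVE_AC_TABLE[posture_code]
    except KeyError:
        raise KeyError("Code not covered by this illustrative subset") from None

print(action_category("2121"))  # -> 2
```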
Strengths are indicated as follows [59]:
- Applicable in real time with no specialized equipment;
- Entire-body evaluation, unlike RULA or AULA, which are more focused on upper limbs;
- Suitable for physical work in logistics, agriculture, construction, and retail sectors.
Limitations can be defined as follows [30]:
- Does not account for the duration or repetition of a posture;
- Limited ability to differentiate between subtle variations in joint angles;
- Less sensitive to dynamic tasks involving frequent movement transitions.
Case Studies and Industry Applications
- Automotive Manual Assembly: Loske et al. applied OWAS to evaluate postures during manual seat assembly in the automotive sector [65]. The study showed that workers performing repetitive overhead and forward-bending tasks accumulated high-risk OWAS postures (code 4232, AC4). The use of AR-based training and ergonomic redesign reduced exposure to harmful postures and significantly improved cerebral oxygenation, indicating reduced fatigue.
- Retail Intralogistics: Loske et al. (2021) analyzed logistics workers in warehouse picking and packaging using OWAS and CUELA (a digital strain analysis system) [65]. In four standard task configurations, OWAS identified a high frequency of AC3 and AC4 scores, primarily associated with trunk flexion and static leg postures. Recommendations included redesigning shelf heights and rotating tasks more frequently to reduce the lumbar load.
- Agriculture Sector: In a comparative study by Choi et al. (2020), OWAS was used alongside RULA and REBA to evaluate 196 farm tasks [63]. While OWAS was less sensitive to the upper-limb load than RULA, it proved effective at identifying risky full-body postures, particularly during squatting and bent-knee tasks such as planting and harvesting.
- Manufacturing Simulations: Ojstersek et al. (2020) employed OWAS within a digital human modeling platform (Tecnomatix Jack) to simulate ergonomic scenarios using data from a 360° spherical camera [67]. Postural risk profiles derived through OWAS were used to assess improvements achieved by collaborative workstations. The integration of OWAS in simulated environments validated its applicability in Industry 4.0 contexts.
Digital Integration and Future Directions
Modern OWAS applications increasingly rely on pose estimation and digital human modeling tools. For example, Paudel et al. (2022) reported an OWAS classification accuracy of 94% using 3D pose estimations for well-visible postures and 83% for occluded views, supporting its feasibility in vision-based automated systems [59].
Future developments include:
- Combining OWAS with cognitive workload indicators;
- Real-time risk alerts through wearable motion sensors;
- Integration in digital twin environments and augmented reality simulations.
In conclusion, OWAS remains a robust and versatile tool for whole-body ergonomic risk assessments and is particularly suitable for physically intensive and static posture-dominant work environments. While it has limitations in detecting fine-grained postural shifts and task dynamics, its integration with Industry 4.0 technologies, such as AR, VR, and 3D motion tracking, enhances its precision and applicability. The system provides actionable insights that support proactive workplace design and long-term health sustainability.

3.2.3. Human Reliability Analysis (HRA): Principles, Techniques, and Industrial Applications

Human Reliability Analysis (HRA) is a methodological framework used to detect, analyze, and control human errors in complex socio-technical systems. It is particularly vital in domains where the consequences of human error are severe, such as nuclear power, aerospace, healthcare, and, increasingly, industrial manufacturing [48]. With the expansion of Industry 4.0 and the introduction of digital tools and cyber–physical systems, HRA is also being recontextualized for modern industrial environments, emphasizing its importance in ensuring quality, efficiency, and safety [27,68].
HRA typically supports broader risk assessment frameworks such as Probabilistic Risk Assessment (PRA) and is essential for predicting human performance under varying operational conditions [69].
Methodological Frameworks in HRA
HRA encompasses a set of qualitative and quantitative techniques designed to evaluate the likelihood and impact of human errors. Common methodologies include:
- HEART (Human Error Assessment and Reduction Technique)—assigns error-producing conditions to specific tasks to estimate human error probability (HEP) [70];
- SHERPA (Systematic Human Error Reduction and Prediction Approach)—developed by Embrey, this method is regarded as a highly effective approach for examining human reliability in task execution, with a focus on the cognitive factors behind human error [71];
- CREAM (Cognitive Reliability and Error Analysis Method)—introduced by E. Hollnagel in 1998, it builds upon a cognitive process framework, the Contextual Control Model (COCOM), which describes the relationships between cognitive processes occurring in the human brain [72].
These methods differ in complexity and application domains, with HEART and SHERPA often used in manufacturing contexts and CREAM more prevalent in safety-critical industries.
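As a worked illustration of the quantitative side, the sketch below applies the standard HEART calculation, in which a generic task error probability is multiplied, for each error-producing condition (EPC), by a factor derived from the EPC’s maximum effect and the analyst’s assessed proportion of affect. The nominal probability, multipliers, and proportions below are chosen for illustration only.

```python
def heart_hep(nominal_hep: float, epcs: list[tuple[float, float]]) -> float:
    """HEART human-error probability estimate.

    nominal_hep: generic error probability for the task type.
    epcs: (max_effect, assessed_proportion) pairs, where max_effect is the
    EPC multiplier at full effect and assessed_proportion (0..1) is the
    analyst's judged proportion of affect for this task.
    """
    hep = nominal_hep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)  # a probability cannot exceed 1

# Illustrative case: a routine task (nominal HEP 0.003) affected by time
# shortage (x11 at full effect, judged 40% applicable) and operator
# inexperience (x3, judged 60% applicable).
hep = heart_hep(0.003, [(11.0, 0.4), (3.0, 0.6)])
print(f"Estimated HEP = {hep:.4f}")  # -> 0.0330
```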
HRA in Manufacturing: Types of Human Error
Torres et al. (2021) applied SHERPA and HEART to a complex manual assembly process and grouped the predicted errors into thirty-six action errors, nine selection errors, eight information retrieval errors, and six checking errors [73]. The study revealed that geometry-sensitive components (like cushioned loop clamps) had the most significant risk of human error. Perceptually enhanced instructions and feedback loops were found to be effective at mitigating such errors.
In another context, Barosz et al. (2020) discussed reliability differences between human-operated and robot-assisted machining lines [74]. Simulation models in FlexSim demonstrated that human operators are more likely to introduce errors during fatigue or when handling non-repetitive tasks. The use of HRA allowed the identification of systemic weaknesses where training or automation could reduce human-induced failures.
Embedding wearable sensors, digital twins, and machine learning into the system has allowed for real-time human reliability monitoring. For example, Hovanec et al. (2024) utilized Tecnomatix JACK software and motion-capture data to model maintenance operations for aircraft brakes and wheels [27]. Their digital ergonomic model accounted for anthropometric variation and predicted high-risk postures contributing to reliability issues.
Young et al. (2023) developed a CNN-based posture recognition model using inertial sensors to estimate wrist flexion/extension [69]. Their system achieved 65% agreement with the gold-standard optoelectronic systems and demonstrated the feasibility of using AI for HRA in tasks with repetitive hand movements.
These digital tools allow the proactive identification of failure points before errors manifest, extending HRA from a diagnostic tool to a predictive one.
HRA Applications: Industrial Case Studies
- Aircraft Maintenance (Yu et al., 2025): A biomechanical model incorporating motion capture data was proposed for aircraft maintenance tasks [68]. By integrating inverse trigonometric algorithms, the model quantifies joint stress and task difficulty, directly supporting HRA metrics. The model proved capable of real-time feedback, enabling ergonomic and reliability-based scheduling.
- Assembly Line Guidance (Torres et al., 2021): In a manual bracket installation process, the HEART method predicted the highest HEP values where operators had to choose among geometrically similar components [73]. Visual guidance systems and color-coded interfaces reduced the probability of action errors significantly.
- Human–Robot Collaboration (Barosz et al., 2020): In simulation scenarios comparing human vs. robot operator reliability, HRA-informed scenarios demonstrated that mixed-mode (human and robot) setups outperformed human-only cells in stability and quality under high-volume settings [74]. The insights allowed for process redesign where robots handled repetitive tasks and humans were assigned tasks requiring judgment.
Limitations and Future Directions
Despite its maturity, HRA faces the following challenges:
- Subjectivity—traditional HRA methods depend on expert judgment, leading to inter-analyst variability;
- Data gaps—the lack of granular error data in real industrial settings limits HRA precision;
- Complex environments—as systems grow more complex, mapping error pathways becomes harder.
Future directions include:
- Integration with AI and ML to create dynamic human error prediction models;
- The use of digital twins to emulate and correct operator behaviors;
- Enhanced human-in-the-loop interfaces for predictive workload and reliability modeling.
In conclusion, Human Reliability Analysis is evolving from a static risk assessment tool to an active component in industrial process design. When paired with simulation, sensor data, and AI, HRA becomes instrumental in supporting quality assurance, reducing rework, and improving worker safety. The combined application of traditional methods (HEART and SHERPA) with digital tools represents a robust framework for addressing both legacy and emerging reliability challenges.

3.2.4. Eye-Tracking and EEG-Based Techniques in Ergonomic and Cognitive Research

In recent years, advancements in neurophysiological technology have enabled researchers to objectively evaluate human behavior and mental states through tools such as electroencephalography (EEG) and eye-tracking (ET). These methods provide a more accurate and non-invasive approach to studying cognitive processes, overcoming the limitations of subjective questionnaires or verbal feedback [75,76]. Their application in ergonomics, user experience, human–computer interaction (HCI), and Industry 5.0 environments is growing, especially in tasks requiring attention monitoring, stress assessment, and decision-making analysis [77].
Overview of EEG Technologies
Electroencephalography (EEG) is a method for monitoring brain activity through electrodes positioned on the surface of the scalp. It is known for its high temporal resolution, making it suitable for the real-time monitoring of mental states such as attention, fatigue, or cognitive load [75,78]. EEG frequency bands reflect different cognitive or emotional states (Table 4).
Specific components, such as Event-Related Potentials (ERPs), are often used to analyze responses to stimuli. For example, alpha desynchronization and theta synchronization are both used as markers of an increased cognitive load [79].
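A minimal sketch of how such band-power markers can be quantified is given below, using Welch’s method to estimate spectral power in the theta and alpha bands. The signal is synthetic, constructed to mimic the theta-up/alpha-down pattern associated with high load, and the sampling parameters are arbitrary choices for the example.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Approximate absolute power of the signal in [lo, hi) Hz via Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sum(psd[mask]) * (freqs[1] - freqs[0]))

# Synthetic 10 s "EEG" at 256 Hz: strong 6 Hz (theta) plus weak 10 Hz (alpha).
fs = 256.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
eeg += 0.2 * rng.standard_normal(t.size)  # measurement noise

theta = band_power(eeg, fs, 4, 8)
alpha = band_power(eeg, fs, 8, 13)
print(f"theta/alpha ratio = {theta / alpha:.2f}")  # > 1 suggests higher load
```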
Overview of Eye-Tracking Technologies
Eye-tracking technology records fixation, saccades, and gaze trajectories to understand visual attention and interaction (Table 5). It helps determine where, when, and how long a person looks at specific areas, making it ideal for product design, interface usability, and mental workload estimation [77].
While eye-tracking reveals visual behavior, it does not capture internal cognition. Hence, the combination of EEG and ET yields deeper insights [75,77].
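As an illustration of how fixations are typically extracted from raw gaze samples, the sketch below implements a basic dispersion-threshold (I-DT) detector, one of the standard fixation-identification approaches. The dispersion and duration thresholds, sampling rate, and gaze trace are placeholders rather than recommended settings.

```python
def detect_fixations(gaze, fs=60.0, max_disp=30.0, min_dur=0.1):
    """Dispersion-threshold (I-DT) fixation detection.

    gaze: list of (x, y) screen coordinates in pixels, sampled at fs Hz.
    max_disp: maximum dispersion (x-range + y-range, px) within a fixation.
    min_dur: minimum fixation duration in seconds.
    Returns (start_index, end_index) sample pairs; all thresholds here are
    placeholders, not validated values.
    """
    def dispersion(a, b):  # dispersion over gaze[a:b]
        xs = [p[0] for p in gaze[a:b]]
        ys = [p[1] for p in gaze[a:b]]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    min_len = int(min_dur * fs)
    fixations, i = [], 0
    while i + min_len <= len(gaze):
        j = i + min_len
        if dispersion(i, j) <= max_disp:
            # Grow the window until dispersion would exceed the threshold.
            while j < len(gaze) and dispersion(i, j + 1) <= max_disp:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # slide the window forward one sample
    return fixations

# Hypothetical gaze trace: a stable fixation, then a saccade to a new target.
trace = [(100, 100)] * 12 + [(300, 250)] * 12
print(detect_fixations(trace))  # -> [(0, 11), (12, 23)]
```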
Integrated EEG and Eye-Tracking Research
Studies show that combining EEG with ET offers a multimodal understanding of user experience by aligning eye gaze with concurrent brain responses [77,78]. This fusion allows researchers to
-
Detect emotional and cognitive states in real time;
-
Quantify mental workload during complex tasks;
-
Improve HCIs by customizing feedback based on user states.
Zhu & Lv [75] highlight that EEG and ET integration improves the interpretability of gaze data and strengthens conclusions about the underlying thought processes. Ricci et al. [76] emphasize that in Industry 5.0, EEG (used in 45% of neurophysiological studies) and ET (30%) are critical for human-centric interface and system design.
Applications in Ergonomics and Industrial Design
The integration of EEG and eye-tracking technologies has enabled researchers to gain a clearer view of the ways users interact with designed systems, interfaces, and spatial environments in various applied contexts. In the field of product evaluation and aesthetics, these tools reveal both visual attention patterns and the underlying neural responses associated with user preferences, enabling the optimization of product designs for enhanced appeal and usability. For instance, changes in gaze trajectories and alpha brainwave activity have been correlated with engagement levels when assessing symmetrical product features or interface layouts [75].
In usability testing and human–computer interactions (HCIs), the combined use of EEG and ET facilitates the detection of moments when users experience cognitive strain or confusion. By analyzing gaze patterns alongside brainwave responses, designers can identify problematic interface elements and adjust them accordingly to reduce mental workload and improve efficiency [76]. Such methods also prove invaluable in assistive system design. Paing et al. demonstrated the feasibility of a gaze-controlled system that allowed users with severe motor impairments to control home appliances and communicate through a visual interface, highlighting the accessibility potential of these technologies [80].
In addition, the study of design cognition has benefited significantly from biometric techniques. Researchers such as Yu et al. have shown how EEG can be used to trace neural changes during different stages of creative thinking, with alpha suppression observed during ideation phases and increased theta activity during problem evaluation [78]. These results support a more detailed comprehension of the mental mechanisms involved in creative design and innovation. EEG and eye-tracking are also applied in the evaluation of cognitive loads during navigation and educational tasks. In the context of map learning, for example, Keskin et al. found that while fixation data alone may not distinguish between expert and novice users, saccade metrics and an EEG spectral analysis revealed elevated theta activity and decreased alpha activity during complex tasks, both recognized indicators of increased cognitive effort [77]. These insights help educators and interface developers tailor content to users’ cognitive capacities, ultimately enhancing learning outcomes and user satisfaction.
Limitations and Challenges
Despite its strengths, this approach has the following technical and interpretative challenges [75,76,79,80]:
- EEG preparation time and movement sensitivity may hinder field deployment;
- Eye-tracking accuracy can be affected by lighting, screen size, or user posture;
- It requires multidisciplinary expertise in signal processing, design, and psychology;
- There is a risk of data misinterpretation if results are not triangulated with behavioral and contextual inputs.
Nonetheless, newer wearable EEG devices and mobile ET systems are overcoming such barriers, promoting use in real-world environments.
In conclusion, EEG and eye-tracking are complementary tools that enrich our understanding of human cognition, attention, and emotion in ergonomic and design contexts. Their integrated application allows researchers and engineers to decode not only what users focus on but also how they think and feel. As Industry 5.0 moves toward human-centered solutions, these techniques offer vital pathways to enhance safety, usability, and satisfaction in work systems.

3.2.5. Summary of Key Differences

RULA (Rapid Upper Limb Assessment) focuses on the ergonomic risks related to the upper body (arms, neck, and trunk) and is particularly suited for identifying static or awkward postures in manufacturing and office environments. It is simple, fast, and widely used, but is limited in assessing dynamic tasks or whole-body postures.
OWAS (Ovako Working Posture Analysis System) analyzes full-body postures during physical tasks, especially in logistics, agriculture, and manual labor. It codes body postures, including those of the back, arms, and legs, along with the weight being handled, giving action categories based on urgency. Although comprehensive in body coverage, it lacks details on time exposure and repetitive strain.
HRA (Human Reliability Analysis) examines the likelihood and types of human error in tasks, integrating cognitive and procedural factors. It is essential in high-risk domains like aviation, nuclear energy, and, increasingly, Industry 4.0 contexts. It is more abstract and predictive, often using methods like HEART, SHERPA, or CREAM. It evaluates not postures, but error likelihood and system vulnerabilities.
Eye-tracking and EEG are physiological techniques used to assess visual attention, cognitive workload, and emotional responses. They are ideal for a fine-tuned ergonomic, interface, or behavioral analysis. While not direct ergonomic scoring tools, they provide deep insights into user experiences and cognitive processes, especially in research or advanced product development.

3.3. Modern Solutions for Improving Human–Machine Interactions

3.3.1. Exoskeletons and Assistive Devices

Exoskeletons are wearable robotic systems designed to augment, assist, or restore humans’ physical capabilities. These devices consist of a mechanical structure aligned with the human body’s joints and typically integrate actuators, sensors, and controllers. Exoskeletons can be classified based on their purpose: therapeutic recovery, support of daily living tasks, enhancement of human performance, or interaction through haptic feedback. They can also be categorized by anatomical region: upper limb, lower limb, full body, or specific segments such as the hand or trunk [81].
Assistive devices include both active (powered) and passive systems designed to support individuals with reduced mobility or those exposed to physically demanding tasks. In industrial contexts, for example, low-back exoskeletons are employed to reduce lumbar strain and prevent work-related musculoskeletal disorders [82].
Technological Components
Modern exoskeletons rely on advanced sensing and actuation technologies. Common actuation systems include electric motors, pneumatic artificial muscles, and hydraulic cylinders. Sensor systems frequently incorporate inertial measurement units (IMUs), surface electromyography (sEMG), and pressure or force sensors, which enable real-time recognition of the user’s movement intentions [81,83].
sEMG-based intention recognition has become particularly important, allowing for the decoding of muscle activity into motion commands. Recent developments have applied algorithms based on machine learning, including bidirectional long short-term memory (BiLSTM) networks, optimized using quantum-behaved particle swarm optimization (QPSO) to predict joint kinematics from sEMG signals with high accuracy [84].
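A minimal PyTorch sketch of the kind of recurrent regressor described above is shown below. The channel count, window length, and layer sizes are illustrative assumptions, and the QPSO hyperparameter optimization reported in [84] is omitted.

```python
import torch
import torch.nn as nn

class EMGBiLSTM(nn.Module):
    """Maps a window of multi-channel sEMG samples to a joint angle.
    Sizes are illustrative; in [84] hyperparameters were tuned with
    QPSO, a step omitted from this sketch."""
    def __init__(self, n_channels: int = 8, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # predicted joint angle (deg)

    def forward(self, x):             # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # regression from the last time step

model = EMGBiLSTM()
emg_window = torch.randn(4, 200, 8)  # 4 windows, 200 samples, 8 electrodes
angles = model(emg_window)
print(angles.shape)                  # -> torch.Size([4, 1])

# Standard supervised training step (real data loading omitted):
loss = nn.MSELoss()(angles, torch.zeros(4, 1))
loss.backward()
```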
Applications
- Medical Rehabilitation: Exoskeletons like Lokomat and ReWalk have transformed gait rehabilitation for individuals recovering from strokes, spinal cord injuries, or cerebral palsy. These devices enable repetitive, task-specific training that is often superior in consistency and intensity compared to manual therapy sessions [81].
- Industrial Ergonomics: Industrial exoskeletons, especially those designed for lumbar support, help reduce strain during lifting or overhead tasks. Studies show reductions of up to 30% in muscle activation during assisted lifting tasks. However, user comfort and adaptability remain challenges, necessitating further design personalization [82,85].
- Human Intention Recognition: Human–exoskeleton cooperation relies on accurate motion intention recognition. sEMG is currently the most widely used modality, despite variability among users due to physiology and electrode placement. Deep learning techniques are being developed to improve the generalization and robustness of control models [83,84,86].
Illustrative Examples of Exoskeletons
Representative examples of exoskeletons, together with images and mechanical schematics, are presented in Table 6 and Figure 8.
Comparative Analysis of Industrial Exoskeletons in Automotive and Ergonomic Applications
The integration of exoskeletons in industrial environments, particularly in the automotive sector, has gained momentum as a solution to mitigate musculoskeletal strain, enhance worker safety, and improve productivity. Among the devices reviewed, a distinction can be made between passive and active systems, each offering specific benefits and limitations.
Passive upper-limb exoskeletons such as the Paexo Shoulder, Levitate Airframe, EksoVest, SuitX ShoulderX, and Comau MATE are designed to reduce fatigue during overhead or repetitive tasks by redistributing the weight of the arms to the hips or the ground. These systems are lightweight, require no power source, and are highly suitable for tasks like ceiling assembly, painting, or cable installation. Their main advantage lies in simplicity, affordability, and ease of integration, making them popular in automotive assembly lines. However, they may offer limited adaptability, with support levels that are static and not responsive to real-time user needs.
Lower-back passive exoskeletons such as BackX, Paexo Back, and Chairless Chair 2.0 primarily aim to alleviate lumbar strain during lifting, bending, or prolonged standing. These exoskeletons are highly valued in logistics and material handling sectors within automotive manufacturing, where repetitive lifting is frequent. Their ergonomic benefits include spinal load reduction and posture correction. Nonetheless, mobility can be restricted in tight or fast-paced environments, and support may deactivate during walking or dynamic movement, limiting versatility.
In contrast, active exoskeletons like HAL, MyoPro, EksoNR, and IX BACK VOLTON provide motorized or AI-driven assistance based on real-time feedback (e.g., sEMG or motion sensors). These systems are more adaptable and suitable for rehabilitation, high-precision tasks, or heavy-duty industrial roles. For instance, HAL and MyoPro interpret bioelectrical signals to assist voluntary movement, which is advantageous in both medical recovery and demanding factory environments. The IX BACK VOLTON stands out for its AI integration, adapting to user behavior and providing up to 17 kg of lifting support. However, these devices are heavier, more expensive, and require battery maintenance, which may limit their practical deployment in some automotive lines.
Modular exoskeletons such as the SuitX MAX combine components like back, shoulder, and leg supports into a single adaptable platform. These systems are highly flexible and can be customized based on task requirements, offering a comprehensive ergonomic solution across various workstations. Their modularity is a key advantage in dynamic industrial environments, but training and adjustment complexity can be a barrier to widespread adoption.
In terms of application, upper-limb exoskeletons dominate automotive assembly due to the prevalence of overhead and fine motor tasks. Lower-limb or back-focused systems are more prominent in logistics and supply chain roles. Tools like ExoPA play a strategic role in identifying tasks where exoskeletons could provide the most benefit, ensuring targeted deployment and cost efficiency.
In conclusion, while passive systems excel in simplicity and user comfort, active and modular exoskeletons offer enhanced functionality for more complex or variable tasks. The choice between these systems should be driven by task specificity, risk exposure, and cost constraints, with the potential to combine different models across the automotive production line to maximize ergonomic benefits and operational efficiency; a minimal weighted-scoring sketch of this selection logic is given below.
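As a purely illustrative aid to that selection logic, the sketch below ranks the exoskeleton classes discussed above with a weighted-scoring matrix. The criteria weights and the 1–5 scores are hypothetical placeholders that a practitioner would replace with site-specific assessments (e.g., obtained with a tool such as ExoPA).

```python
# Hypothetical weighted-scoring matrix for choosing an exoskeleton class.
# Weights and scores are illustrative assumptions, not values from the cited studies.
CRITERIA_WEIGHTS = {"task_specificity": 0.40, "risk_exposure": 0.35, "cost": 0.25}

CANDIDATES = {
    # scores on a 1-5 scale (higher = better fit for the task at hand)
    "passive upper-limb":   {"task_specificity": 4, "risk_exposure": 3, "cost": 5},
    "passive lower-back":   {"task_specificity": 3, "risk_exposure": 4, "cost": 4},
    "active (sEMG-driven)": {"task_specificity": 5, "risk_exposure": 5, "cost": 2},
    "modular platform":     {"task_specificity": 4, "risk_exposure": 4, "cost": 3},
}

def score(candidate: dict) -> float:
    """Weighted sum of the criterion scores."""
    return sum(CRITERIA_WEIGHTS[c] * candidate[c] for c in CRITERIA_WEIGHTS)

for name, scores in sorted(CANDIDATES.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name:22s} weighted score = {score(scores):.2f}")
```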

3.3.2. Collaborative Robots and Human Assistance in Industry 5.0

Collaborative robots (cobots) represent one of the most impactful technologies in modern manufacturing, as they enhance productivity, safety, and flexibility by working directly alongside humans. These systems are central to the emerging concept of Industry 5.0, which emphasizes not just automation and efficiency but also sustainability, resilience, and human-centric values [104].
Whereas Industry 4.0 emphasized automation and the integration of cyber–physical systems, Industry 5.0 shifts the focus toward restoring human involvement and promoting intelligent human–machine collaboration [105]. Collaborative robotics is a key enabler of this synergy.
Classification and Modes of Human–Robot Interactions (HRIs)
According to Matheson et al. [106], human–robot interactions can be classified into four primary modes (Table 7):
These modes describe increasing levels of integration and real-time dependency between human and robotic agents. The goal of collaborative robotics is to achieve seamless collaboration, where safety, ergonomics, and performance are harmoniously integrated. Standards such as ISO/TS 15066 support the safe implementation of collaborative applications [107].
Cobots are also classified, based on design principles and safety mechanisms, into the following categories [107]:
- Power and force limiting—low-powered and with rounded edges to ensure safety;
- Hand-guided teaching—robots that learn motions by manual demonstration;
- Speed and separation monitoring—cobots that slow or stop based on human proximity (see the sketch after this list);
- Safety-rated monitored stop—temporary stops when humans enter the workspace.
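To illustrate the speed-and-separation-monitoring mode, the following minimal sketch scales the allowed robot speed with the measured human–robot separation. The distance thresholds and maximum speed are illustrative placeholders, not the protective-separation-distance formula of ISO/TS 15066.

```python
def cobot_speed_limit(separation_m: float,
                      stop_dist: float = 0.5,
                      full_speed_dist: float = 2.0,
                      v_max: float = 1.0) -> float:
    """Simplified speed-and-separation-monitoring rule.

    Returns an allowed tool speed (m/s) as a function of the measured
    human-robot separation: below 'stop_dist' the robot performs a
    safety-rated monitored stop; beyond 'full_speed_dist' it runs at
    full speed; in between, speed scales linearly. All thresholds are
    illustrative assumptions.
    """
    if separation_m <= stop_dist:
        return 0.0
    if separation_m >= full_speed_dist:
        return v_max
    return v_max * (separation_m - stop_dist) / (full_speed_dist - stop_dist)

for d in (0.3, 0.8, 1.5, 2.5):  # example separations from a safety scanner
    print(f"separation {d:.1f} m -> allowed speed {cobot_speed_limit(d):.2f} m/s")
```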
Design Principles and Methods
Designing human–robot collaboration requires ergonomic integration, cognitive modeling, and ethical design.
- Joint Cognitive Systems (JCS): A JCS is formed when humans and machines function as co-agents to achieve shared goals. Rather than treating the human and robot as isolated systems, the focus is on coordination patterns, adaptability, and co-agency [6]. Woods and Hollnagel emphasize that these systems should be resilient and functionally adaptive in real time [108].
- Actor–Network Theory (ANT): ANT views both humans and machines as actors (“actants”) in a sociotechnical network. It offers tools to map how dynamic roles evolve in human–machine teams, treating machines not merely as tools but as integral agents [109].
- Concept of Operations (ConOps): ConOps provides a framework to define how humans and robots interact, collaborate, and share tasks within a system, specifying roles, responsibilities, operational scenarios, and performance expectations throughout the system’s lifecycle [110].
- Human System Integration (HSI): HSI focuses on aligning system functionality, design, and operation with human capabilities, limitations, and work contexts by systematically integrating human factors throughout all stages of the system’s lifecycle [111].
Applications and Case Studies
Table 8 presents key applications of human–robot interactions across various industries, highlighting their roles and benefits:
Human-Centric Design Challenges
- Cognitive load and mental models play a crucial role in human-centric design, as they influence how users perceive, interpret, and interact with automated systems. Effective human–machine collaboration requires minimizing unnecessary cognitive effort while supporting the development of accurate mental representations of the system’s behavior, capabilities, and intentions [112];
- Resilience and role flexibility are fundamental to human-centric design, as they enable systems to adapt dynamically to unexpected changes, distribute tasks flexibly between human and machine agents, and maintain performance under varying conditions [113]. As highlighted by Madni and Madni (2018), such adaptive architectures help reduce cognitive load, support the formation of shared mental models, and increase the overall robustness and effectiveness of human–machine collaboration [113];
- Ethical and social concerns in human-centric system design become especially relevant when evaluating how cognitive load influences users’ decision-making. As highlighted by Zheng et al. (2025), increased cognitive effort significantly alters moral judgments, making individuals more likely to adopt utilitarian reasoning and prioritize collective over individual outcomes under mental strain [114]. This raises concerns about the fairness and reliability of decisions made in cognitively demanding human–machine interaction scenarios. Designers must therefore ensure that system architectures minimize unnecessary cognitive load and preserve users’ ability to make ethically balanced decisions, avoiding unintended biases and safeguarding individual autonomy in collaborative settings [114].
Future Directions and Industry 5.0 Implications
The future of cobots is closely linked to the maturation of adjacent technologies, which collectively shape the evolution of human-centric, intelligent, and adaptive industrial systems.
- AI-driven Personalization: In the context of Industry 5.0, artificial intelligence is increasingly tailored to human needs, enabling cobots to learn from human behavior and dynamically adapt to operator preferences. This leads to more personalized and inclusive human–robot collaboration, supporting worker well-being and task efficiency [115].
- Digital Twins: Digital twins, particularly those driven by AI, provide a synchronized virtual representation of physical processes. They enhance predictive maintenance, optimize performance, and allow immersive human-in-the-loop simulations, which are crucial for training and decision support in dynamic environments [116,117].
- Augmented Reality (AR): AR technologies contribute to improved human–machine interactions by visualizing robot trajectories, task sequences, and real-time guidance. This transparency fosters trust, reduces cognitive load, and improves accuracy during collaborative operations [118].
- Edge Computing and 5G: The deployment of 5G networks and edge computing technologies enables ultra-low-latency and high-bandwidth communication between cobots and control systems. This infrastructure is essential for real-time decision-making and safe human–robot collaboration in smart factories [119,120] (a latency-gating sketch follows this list).
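As a simple illustration of why the latency budget matters for safety, the sketch below gates teleoperation commands on a measured round-trip time. The 10 ms budget and the random RTT probe are illustrative stand-ins, not values taken from the 5G or tactile-internet literature.

```python
import random
import time

LATENCY_BUDGET_MS = 10.0  # illustrative safety budget, not a standardized value

def measure_rtt_ms() -> float:
    """Stand-in for a real round-trip-time probe to an edge controller."""
    return random.uniform(2.0, 15.0)

def dispatch_command(cmd: str) -> None:
    """Send a motion command only if the measured latency fits the budget."""
    rtt = measure_rtt_ms()
    if rtt <= LATENCY_BUDGET_MS:
        print(f"RTT {rtt:.1f} ms within budget: sending '{cmd}' to cobot")
    else:
        print(f"RTT {rtt:.1f} ms over budget: holding '{cmd}', degrading to safe stop")

for c in ["move_to(bin_A)", "grasp()", "move_to(fixture)"]:
    dispatch_command(c)
    time.sleep(0.01)
```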
These advancements point to a future in which cobots are not only co-workers but intelligent companions in an Industry 5.0 ecosystem that prioritizes human agency, safety, and sustainability.
Real-World Examples of Collaborative Robots
The deployment of collaborative robots in various industries has resulted in measurable improvements in efficiency, safety, and worker satisfaction. Below are examples of widely adopted cobots and their documented use cases (Table 9).
In conclusion, collaborative robots are not just tools but co-workers within a broader sociotechnical system. Their successful deployment depends on thoughtful, human-centered design grounded in both cognitive engineering and ethical responsibility. Integrating methods such as ANT, ConOps, and JCS yields systems that are not only efficient but also resilient and adaptive to human needs.

3.3.3. Augmented and Virtual Reality in Operator Training: Applications, Methods, and Examples

Augmented reality (AR) and virtual reality (VR) are transforming the field of operator training in industrial and manufacturing environments. These immersive technologies allow users to experience realistic simulations of machines, tasks, and processes, enabling effective skill acquisition, decision-making under pressure, and safe practice in hazardous or complex environments. Recent developments show their growing integration in education, military training, shipbuilding, construction, and chemical industries [139,140,141,142,143,144,145,146].
The emergence of Industry 4.0 and the transition toward Industry 5.0 underscore the need for adaptive, human-centric, and digitally enhanced training solutions. AR and VR bridge the gap between theoretical instruction and hands-on practice, offering engaging, responsive, and measurable learning environments. According to market analyses, the global market for AR/VR-based training is expected to exceed USD 20 billion by 2030, driven by industrial applications. As product life cycles shorten and production systems become more complex, immersive training methods are proving essential for sustainable workforce development [143,144,145,146].
Technological Foundations and Classifications
- Augmented Reality (AR): AR displays digital content within a real-world context, providing real-time assistance during training or operations. Devices include tablets, smartphones, and head-mounted displays (HMDs) such as the Microsoft HoloLens and Vuzix Blade. AR allows step-by-step task guidance, superimposed safety alerts, and dynamic visualizations of internal machine components. For instance, a maintenance technician using AR can access exploded 3D diagrams and service checklists directly in their field of view, reducing the dependency on paper manuals [139,142].
- Virtual Reality (VR): VR immerses the user in a fully synthetic environment, where they operate digital replicas of machinery and systems using devices such as the HTC Vive Pro, Oculus Rift, or Valve Index. It isolates trainees from distractions and enables the safe exploration of dangerous or high-stakes environments. In training for a chemical-accident response, for example, VR simulations allow operators to rehearse emergency shutdown procedures without putting equipment or lives at risk. Some VR systems also integrate haptic feedback to simulate resistance, vibration, or material texture [140,143,144].
- Mixed and Extended Reality (MR/XR): Mixed reality (MR) combines real and virtual elements, enabling interactions with both simultaneously. Devices like the Microsoft HoloLens 2 or Magic Leap provide spatial anchoring, object occlusion, and shared workspaces for collaborative tasks. Extended reality (XR) encompasses the full spectrum of immersive technologies—AR, VR, and MR—providing flexible deployment options based on context. CAVE (Cave Automatic Virtual Environment) systems, which project imagery onto the walls and floor of a room, are valuable for multi-user training and simulation but carry high setup costs and space demands [142,144].
Integration with IoT and Digital Twins further enhances these platforms. Real-time sensor data can be streamed into VR environments to create dynamic, up-to-date representations of production systems, while AR can show the live machine status or predictive maintenance alerts superimposed on equipment.
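As a minimal illustration of this coupling, the sketch below mirrors simulated sensor readings into a digital-twin object and renders the kind of status text an AR overlay might display. The machine name, data fields, and alert threshold are hypothetical, and a real deployment would subscribe to an MQTT or OPC UA stream rather than the stand-in reader used here.

```python
import random
import time
from dataclasses import dataclass, field

@dataclass
class MachineTwin:
    """Minimal digital-twin state mirror for one machine (illustrative)."""
    machine_id: str
    temperature_c: float = 0.0
    vibration_rms: float = 0.0
    history: list = field(default_factory=list)

    def ingest(self, reading: dict) -> None:
        """Synchronize the twin with the latest sensor reading."""
        self.temperature_c = reading["temperature_c"]
        self.vibration_rms = reading["vibration_rms"]
        self.history.append(reading)

    def ar_overlay(self) -> str:
        """Text an AR headset might superimpose next to the machine."""
        status = "ALERT: schedule maintenance" if self.vibration_rms > 0.8 else "OK"
        return f"[{self.machine_id}] {self.temperature_c:.1f} °C | {status}"

def read_sensor() -> dict:
    """Stand-in for an MQTT/OPC UA subscription delivering live data."""
    return {"temperature_c": 60 + random.gauss(0, 2),
            "vibration_rms": random.uniform(0.2, 1.0),
            "ts": time.time()}

twin = MachineTwin("press-07")
for _ in range(3):
    twin.ingest(read_sensor())
    print(twin.ar_overlay())
```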
Methods and Implementation Strategies
- Simulation-Based Training: VR simulation offers risk-free environments in which to rehearse both routine and emergency operations. In a Slovak study, trainees performed electrical plug assembly in VR using the HTC Vive Pro, improving their speed and reducing errors. Training scenarios can be adapted for stress induction, multitasking, or fault diagnosis. Gamified modules enhance motivation, and performance metrics such as time-on-task or error frequency can be logged automatically [139,144].
- Augmented Task Guidance: AR can deliver contextual assistance directly within the operator’s field of vision. In industrial maintenance, AR headsets provide overlays with torque values, replacement instructions, or real-time sensor readings. For example, a worker replacing a valve may see animated instructions guiding hand motion, reducing the training time. Studies show that AR reduces cognitive load by offloading memorization and minimizing task-switching [139].
- Mixed Reality Collaboration: MR supports collaborative, spatially aware training. Trainees can work in synchronized environments, manipulating virtual objects anchored in physical space. In a shipbuilding case, multi-user AR setups reduced communication latency and improved spatial awareness during structural assembly [139]. CAVE systems and MR telepresence also allow remote trainers to provide real-time corrections.
- Deep Learning with 3D Models: AR guidance systems can be enhanced using convolutional neural networks (CNNs) trained to recognize parts from CAD models. For instance, small parts like bolts or washers, which are often difficult to track visually, can be identified and labeled in real time. The automated generation of 2D datasets from 3D models streamlines the CNN training process, reducing the manual annotation effort [146]; a simplified sketch of this idea follows this list.
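The sketch below illustrates the underlying idea of generating labeled 2D training views automatically from 3D geometry: toy point clouds stand in for CAD models of a bolt and a washer, random rotations produce orthographic projections, and a linear classifier stands in for the CNN to keep the example dependency-light. None of this reproduces the specific pipeline of [146]; it is only a minimal demonstration of synthetic-view generation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def part_cloud(kind: str, n: int = 400) -> np.ndarray:
    """Toy 3D point clouds standing in for CAD models of small parts."""
    theta = rng.uniform(0, 2 * np.pi, n)
    if kind == "bolt":  # elongated cylinder
        return np.column_stack([0.2 * np.cos(theta), 0.2 * np.sin(theta),
                                rng.uniform(-1, 1, n)])
    r = rng.uniform(0.7, 1.0, n)  # "washer": flat ring
    return np.column_stack([r * np.cos(theta), r * np.sin(theta),
                            rng.uniform(-0.05, 0.05, n)])

def render_view(cloud: np.ndarray, px: int = 16) -> np.ndarray:
    """Random rotation + orthographic projection + rasterization to a 2D image."""
    a, b = rng.uniform(0, 2 * np.pi, 2)
    rx = np.array([[1, 0, 0], [0, np.cos(a), -np.sin(a)], [0, np.sin(a), np.cos(a)]])
    rz = np.array([[np.cos(b), -np.sin(b), 0], [np.sin(b), np.cos(b), 0], [0, 0, 1]])
    pts = cloud @ rx @ rz
    img, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=px, range=[[-1.2, 1.2]] * 2)
    return (img > 0).astype(float).ravel()

# Automatically generated, automatically labeled synthetic dataset.
X = np.array([render_view(part_cloud(k)) for k in ("bolt", "washer") for _ in range(200)])
y = np.array([0] * 200 + [1] * 200)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy on synthetic views:", clf.score(X, y))
```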
Application Areas and Case Studies
- Industrial Assembly and Maintenance: AR/VR tools are widely used in manufacturing for assembly training and predictive maintenance. In the aerospace industry, Boeing uses AR to assist technicians in wiring aircraft, reducing error rates by 90%. In electronics, operators can rehearse fine-motor tasks repeatedly before touching physical parts, reducing scrap rates and downtime [141,142].
- Chemical Industry Accident Simulation: Lee and Ma developed an Operator Training Simulator (OTS) for chemical accidents using a combination of VR and AR. The simulation connected operators in the field and in the control room through a Distributed Control System (DCS). Operators reported a 4.5× improvement in procedural familiarity and reaction time compared to traditional tabletop exercises [140].
- Collaborative AR in Shipbuilding: Navantia and the Universidade da Coruña deployed the Microsoft HoloLens (Microsoft, Redmond, WA, USA) in shipyards to facilitate multi-user AR training. Participants experienced synchronized visualizations of complex assemblies, significantly reducing spatial misalignment and communication delays in team operations [139].
- Engineering and University Education: The University of Žilina implemented AR/VR in its Digital Factory curriculum. Students trained with virtual replicas of machinery using Unity 3D and Autodesk Maya models. Time studies (chronometry) and error tracking revealed substantial improvements in performance, demonstrating AR/VR’s value for technical education [144].
- Object Recognition for Assembly: CNNs trained on synthetic image datasets allowed AR systems to recognize components and suggest actions during assembly. This method is well suited to small-batch production, where frequent part changes make traditional automation costly [146].
- Construction Safety and Risk Training: Afolabi et al. highlighted AR/VR’s potential for improving safety awareness in construction. Simulations recreated fall risks, structural collapse, and equipment failure. While immersive technology increased engagement, implementation was limited by high costs, a lack of training, and initial resistance from the workforce [141].
- XR in Collaborative Education: Mourtzis and Angelopoulos introduced an XR platform for engineering education. Students collaborated in real time across locations to perform virtual repairs and maintenance, enhancing teamwork, spatial understanding, and technical skill retention [142].
Evaluation and Effectiveness
Assessment techniques for AR/VR training include pre- and post-training knowledge tests; the NASA-TLX for cognitive workload; performance metrics such as task execution time, success rate, and total number of errors; and eye-tracking and physiological sensors for attention and stress (Table 10). A worked example of the NASA-TLX weighted score is sketched below.
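In the NASA-TLX, six subscales (mental demand, physical demand, temporal demand, performance, effort, and frustration) are each rated 0–100; each weight is the number of times that dimension is chosen across the 15 pairwise comparisons, so the weights sum to 15, and the overall workload is the weighted sum divided by 15. The ratings and weights in the sketch below are hypothetical.

```python
# NASA-TLX weighted workload with hypothetical ratings (0-100) and weights.
ratings = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 50}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}

assert sum(weights.values()) == 15  # 15 pairwise comparisons in total

overall = sum(ratings[d] * weights[d] for d in ratings) / 15
print(f"overall weighted workload: {overall:.1f} / 100")  # 58.0 for these values
```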
The University of Žilina study showed measurable performance gains in AR/VR groups versus traditional instruction. In corporate HR trials, VR onboarding modules improved user retention and reduced time-to-productivity by over 30% [144].
Key challenges reported in recent studies are as follows [145,147,148]:
- Cost—high upfront investments in hardware, software licenses, and content creation;
- Scalability—difficulties in deploying across geographically distributed teams;
- Resistance to change—especially among older employees unfamiliar with digital tools;
- Technical limitations—VR sickness, display resolution, latency issues, and limited field of view;
- Privacy and security—the use of cameras and sensors raises concerns in sensitive environments;
- Content maintenance—updating 3D models and procedures as processes evolve requires dedicated resources.
In conclusion, combining augmented and virtual reality in operator training represents a leap forward in industrial education. From immersive learning to real-time assistance, AR/VR offers scalable, efficient, and engaging methods for upskilling workers. Future research should focus on integrating these technologies with AI, digital twins, and IoT systems to unlock their full potential. Tailored, adaptive, and data-driven training platforms powered by immersive tech may become the new standard in industry, ensuring continuous learning and workforce resilience.

4. Challenges and Future Directions in Cognitive Ergonomics for HMIs

Cognitive ergonomics in human–machine interactions focuses on the optimization of systems for human cognitive capacities, such as perception, memory, reasoning, and motor response. With rapid developments in artificial intelligence, automation, and ubiquitous computing, the cognitive demands placed on users are changing dramatically. This section explores key challenges and outlines future research directions in cognitive ergonomics within HMI contexts [18].

4.1. Major Challenges in Cognitive Ergonomics for HMIs

- Increasing System Complexity: Modern interfaces integrate vast amounts of data and functionality, potentially overwhelming users. Cognitive overload can reduce task performance, increase error rates, and impact mental well-being [149]. For instance, smart manufacturing systems in Industry 4.0 often require users to interpret complex data visualizations and make real-time decisions [150].
- Multimodal and Adaptive Interfaces: These interfaces increasingly incorporate voice, gesture, haptic, and biometric inputs. Designing intuitive, seamless, and cognitively ergonomic multimodal systems remains a major challenge. Zheng et al. [151] reviewed EMG-, FMG-, and EIT-based biosensors, highlighting the difficulty of harmonizing input accuracy and user comfort.
- Situational Awareness in Shared Control Environments: Maintaining shared control between the human and the machine is critical, especially in safety-sensitive domains like autonomous driving. Brill et al. [152] emphasized the importance of external HMIs (eHMIs) for communicating AV intentions to vulnerable road users (VRUs) in shared spaces.
- Latency and Tactile Feedback in Real-Time Systems: Real-time applications such as teleoperation and VR/AR demand ultra-low latency. Mourtzis et al. [150] describe how 5G and the tactile internet promise to enhance cognitive ergonomics by enabling haptic feedback and low-latency control, but such systems also raise new cognitive demands and safety concerns.

4.2. Emerging Trends and Research Directions

- Personalization through Adaptive Cognitive Models: Cognitive ergonomics is moving toward adaptive systems that respond to individual users’ cognitive states. Biosensors and physiological monitoring (e.g., EEG and EMG) can be used to detect workload and adjust interface complexity dynamically [151] (a minimal adaptation loop is sketched after this list).
- Standardization of External Human–Machine Interfaces: Brill et al. [152] emphasize the need to standardize eHMI designs to ensure consistent communication between AVs and VRUs. This is critical for developing shared cognitive models and reducing ambiguity.
- Integration of Wearable and Embedded Biosensors: The integration of EMG-, FMG-, and EIT-based wearables allows for more natural human–machine interactions, such as gesture recognition and prosthetic control [151]. Future research should focus on improving robustness, data fusion, and interpretability.
- Ethical and Societal Implications: As HMIs increasingly mediate decisions and actions, ethical issues arise regarding autonomy, consent, and surveillance. Cognitive ergonomics must integrate ethical foresight into design methodologies [153].
- Cross-Disciplinary Methodologies: Future research will benefit from combining insights from psychology, neuroscience, data science, and design. New evaluation methodologies should consider both subjective experience and objective performance metrics [154].
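The minimal sketch below, referenced in the first item of the list, uses a theta/alpha band-power ratio (a common EEG workload heuristic) and hysteresis thresholds to switch between a full and a simplified interface. The thresholds, simulated band powers, and UI levels are illustrative assumptions; in practice the thresholds would be calibrated per user during a baseline session.

```python
import random

def workload_index(theta_power: float, alpha_power: float) -> float:
    """Theta/alpha band-power ratio, a common EEG workload proxy."""
    return theta_power / max(alpha_power, 1e-9)

def adapt_interface(index: float, current_level: str,
                    high: float = 1.5, low: float = 1.0) -> str:
    """Hysteresis between 'full' and 'simplified' UI to avoid oscillation."""
    if current_level == "full" and index > high:
        return "simplified"  # hide secondary panels, enlarge key readouts
    if current_level == "simplified" and index < low:
        return "full"        # restore detailed views
    return current_level

level = "full"
for _ in range(5):  # stand-in for a streaming EEG processing pipeline
    idx = workload_index(random.uniform(3, 9), random.uniform(4, 7))
    level = adapt_interface(idx, level)
    print(f"workload index {idx:.2f} -> UI level: {level}")
```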
In conclusion, the evolution of HMI systems poses a multitude of cognitive ergonomic challenges, from complexity and transparency to trust and adaptivity. Advances in biosensor technologies, AI, and network infrastructure offer promising pathways forward. However, these must be accompanied by human-centered design and interdisciplinary research to ensure that systems enhance, rather than hinder, human cognition.

5. Conclusions and Recommendations

5.1. Summary of the Findings

This study has explored the evolution, models, influencing factors, and evaluation methods relevant to human–machine interactions (HMIs) within the context of Industry 4.0 and 5.0. The research has revealed that
- The transition from mechanized control to intelligent, adaptive HMI systems has been driven by advancements in AI, machine learning, and multimodal interfaces;
- Cognitive, physiological, psychological, and organizational factors play a critical role in shaping operator performance, safety, and satisfaction;
- Ergonomic methodologies such as RULA, OWAS, and Human Reliability Analysis (HRA) are essential tools for assessing physical and cognitive strain in industrial environments;
- Emerging technologies such as exoskeletons, eye-tracking, EEG, and digital twins are reshaping HMI design, enabling real-time feedback and predictive interventions.

5.2. Contributions of This Study

This report provides a comprehensive theoretical and applied framework for understanding modern HMIs. The main contributions include
- A multidisciplinary synthesis of ergonomic, cognitive, and psychological elements that influence HMI effectiveness;
- A comparative analysis of evaluation methodologies (e.g., RULA, OWAS, HRA) in industrial settings, including their strengths, limitations, and digital integrations;
- The integration of modern assistive technologies and sensing tools as pathways for improving safety, usability, and adaptability in smart factories;
- The proposal of a human-centric paradigm in alignment with the Industry 5.0 goals of sustainability, personalization, and operator empowerment.

5.3. Practical Implications

The findings of this report have several practical implications for industrial stakeholders:
- Design guidelines—ergonomic and cognitive principles should inform the design of user interfaces, collaborative robotics, and wearable systems to enhance operator performance and reduce injury risks;
- Risk mitigation—the systematic application of RULA, OWAS, and HRA can identify high-risk tasks and guide targeted interventions, especially in repetitive, awkward, or high-load work;
- Technological integration—industries should embrace neuroergonomic tools (e.g., EEG and eye-tracking), digital simulations, and AI-based monitoring to enable adaptive and responsive HMIs;
- Workplace culture—organizational ergonomics must support user acceptance, trust, and well-being, fostering a resilient and engaged workforce in technologically intensive environments.

5.4. Final Remarks

As industries evolve toward greater automation and intelligence, the role of the human operator remains central. This study confirms that the success of modern manufacturing systems depends not only on technological performance but also on their ergonomic, psychological, and organizational compatibility with human users. Human–machine interaction, once viewed as a technical necessity, now represents a strategic asset in designing safer, more productive, and humane work environments. Continued research and investment in adaptive, human-centered technologies will be fundamental in realizing the full potential of Industry 5.0.

5.5. Future Research Directions

While this study provides a comprehensive overview of current HMI systems and ergonomic evaluation methods, several areas remain open for future investigation:
- Multimodal cognitive adaptation—Further research is needed on how to dynamically adapt interfaces to users’ real-time cognitive states through multi-sensor fusion (e.g., combining EEG, eye-tracking, and galvanic skin responses). Understanding how these inputs influence task performance in real-world environments can lead to more intuitive and adaptive HMI systems.
- Trust calibration and emotional AI—The development of emotionally responsive systems capable of interpreting and responding to operator stress, fatigue, and trust levels remains in its infancy. Future studies could explore how emotional AI can foster safer and more reliable human–machine collaboration.
- Longitudinal ergonomic impacts—Most ergonomic assessments, including those using RULA or OWAS, are applied in short-term studies. Long-term studies are necessary to measure the potential health benefits or risks linked to human–machine interactions in repetitive or physically demanding industrial tasks.
- Human–digital twin co-simulation—Integrating human digital twins into cyber–physical systems opens new avenues for predictive ergonomics and task optimization. Research can focus on how digital representations of human behavior can be used to pre-validate workplace design and training scenarios.
- AI-enhanced Human Reliability Analysis—Expanding Human Reliability Analysis (HRA) frameworks to include machine learning models for error prediction and prevention can significantly improve safety in high-risk environments. Future research should explore how AI can reduce inter-analyst subjectivity and improve real-time reliability monitoring.
- Inclusivity and personalization—As Industry 5.0 emphasizes human centrality, more studies are required on how HMIs can be personalized for diverse user populations, including aging workers, people with disabilities, and those with varying cognitive styles.
These directions aim to bridge the gap between technological innovation and human-centric values, enabling the next generation of safe, inclusive, and efficient industrial systems.

Author Contributions

A.-R.I. and D.-C.A., conceptualization; A.-R.I., D.-C.A., and T.B., methodology; A.-R.I., D.-C.A., and T.B., data analysis and writing of the paper; D.-C.A., review and editing. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Parasuraman, R.; Sheridan, T.B.; Wickens, C.D. A model for types and levels of human interaction with automation. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2000, 30, 286–297. [Google Scholar] [CrossRef]
  2. Romero, D.; Stahre, J.; Wuest, T.; Noran, O.; Bernus, P.; Fast-Berglund, Å.; Gorecky, D. Towards an Operator 4.0 Typology: A Human-Centric Perspective on the Fourth Industrial Revolution Technologies. In Proceedings of the International Conference on Computers & Industrial Engineering (CIE46), Tianjin, China, 29–31 October 2016; pp. 29–31. [Google Scholar]
  3. Nahavandi, S. Industry 5.0—A human-centric solution. Sustainability 2019, 11, 4371. [Google Scholar] [CrossRef]
  4. Zhang, J.; Walji, M.F. TURF: Toward a unified framework of EHR usability. J. Biomed. Inform. 2011, 44, 1056–1067. [Google Scholar] [CrossRef] [PubMed]
  5. Stanton, N.A.; Salmon, P.M.; Walker, G.H.; Baber, C.; Jenkins, D.P. Human Factors Methods; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar] [CrossRef]
  6. Breque, M.; De Nul, L.; Petridis, A. Industry 5.0: Towards a Sustainable, Human-Centric and Resilient European Industry; Publications Office of the European Union: Luxembourg, 2021. [Google Scholar]
  7. Weyer, S.; Schmitt, M.; Ohmer, M.; Gorecky, D. Towards industry 4.0—Standardization as the crucial challenge for highly modular, multi-vendor production systems. IFAC-PapersOnLine 2015, 48, 579–584. [Google Scholar] [CrossRef]
  8. Trstenjak, M.; Gregurić, P.; Janić, Ž.; Salaj, D. Integrated Multilevel Production Planning Solution According to Industry 5.0 Principles. Appl. Sci. 2023, 14, 160. [Google Scholar] [CrossRef]
  9. Rani, S.; Jining, D.; Shoukat, K.; Shoukat, M.U.; Nawaz, S.A. A Human–Machine Interaction Mechanism: Additive Manufacturing for Industry 5.0—Design and Management. Sustainability 2024, 16, 4158. [Google Scholar] [CrossRef]
  10. Solanes, J.E.; Gracia, L.; Miro, J.V. Advances in Human–Machine Interaction, Artificial Intelligence, and Robotics. Electronics 2024, 13, 3856. [Google Scholar] [CrossRef]
  11. Villalba-Diez, J.; Ordieres-Meré, J. Human–Machine integration in processes within industry 4.0 management. Sensors 2021, 21, 5928. [Google Scholar] [CrossRef]
  12. Yu, H.; Du, S.; Kurien, A.; van Wyk, B.J.; Liu, Q. The Sense of Agency in Human–Machine Interaction Systems. Appl. Sci. 2024, 14, 7327. [Google Scholar] [CrossRef]
  13. Jiang, J.; Xiao, Y.; Zhan, W.; Jiang, C.; Yang, D.; Xi, L.; Zhang, L.; Hu, H.; Zou, Y.; Liu, J. An HRA Model Based on the Cognitive Stages for a Human-Computer Interface in a Spacecraft Cabin. Symmetry 2022, 14, 1756. [Google Scholar] [CrossRef]
  14. Li, Y.; Zhu, L. Failure Modes Analysis Related to User Experience in Interactive System Design Through a Fuzzy Failure Mode and Effect Analysis-Based Hybrid Approach. Appl. Sci. 2025, 15, 2954. [Google Scholar] [CrossRef]
  15. Sapienza, A.; Cantucci, F.; Falcone, R. Modeling Interaction in Human–Machine Systems: A Trust and Trustworthiness Approach. Automation 2022, 3, 242–257. [Google Scholar] [CrossRef]
  16. Bántay, L.; Abonyi, J. Machine Learning-Supported Designing of Human–Machine Interfaces. Appl. Sci. 2024, 14, 1564. [Google Scholar] [CrossRef]
  17. Khamaisi, R.K.; Prati, E.; Peruzzini, M.; Raffaeli, R.; Pellicciari, M. ux in ar-supported industrial human–Robot collaborative tasks: A systematic review. Appl. Sci. 2021, 11, 10448. [Google Scholar] [CrossRef]
  18. Grobelna, I.; Mailland, D.; Horwat, M. Design of Automotive HMI: New Challenges in Enhancing User Experience, Safety, and Security. Appl. Sci. 2025, 15, 5572. [Google Scholar] [CrossRef]
  19. Tufano, F.; Bahadure, S.W.; Tufo, M.; Novella, L.; Fiengo, G.; Santini, S. An Optimization Framework for Information Management in Adaptive Automotive Human–Machine Interfaces. Appl. Sci. 2023, 13, 10687. [Google Scholar] [CrossRef]
  20. Planke, L.J.; Lim, Y.; Gardi, A.; Sabatini, R.; Kistan, T.; Ezer, N. A cyber-physical-human system for one-to-many uas operations: Cognitive load analysis. Sensors 2020, 20, 5467. [Google Scholar] [CrossRef]
  21. Sun, Y.; Sun, Y.; Zhang, J.; Ran, F. Sensor-Based Assessment of Mental Fatigue Effects on Postural Stability and Multi-Sensory Integration. Sensors 2025, 25, 1470. [Google Scholar] [CrossRef]
  22. Ren, L.; Wu, L.; Feng, T.; Liu, X. A New Method for Inducing Mental Fatigue: A High Mental Workload Task Paradigm Based on Complex Cognitive Abilities and Time Pressure. Brain Sci. 2025, 15, 541. [Google Scholar] [CrossRef]
  23. Gardner, M.; Castillo, C.S.M.; Wilson, S.; Farina, D.; Burdet, E.; Khoo, B.C.; Atashzar, S.F.; Vaidyanathan, R. A multimodal intention detection sensor suite for shared autonomy of upper-limb robotic prostheses. Sensors 2020, 20, 6097. [Google Scholar] [CrossRef]
  24. Enjalbert, S.; Gandini, L.M.; Baños, A.P.; Ricci, S.; Vanderhaegen, F. Human–machine interface in transport systems: An industrial overview for more extended rail applications. Machines 2021, 9, 36. [Google Scholar] [CrossRef]
  25. Onofrejova, D.; Andrejiova, M.; Porubcanova, D.; Pacaiova, H.; Sobotova, L. A Case Study of Ergonomic Risk Assessment in Slovakia with Respect to EU Standard. Int. J. Environ. Res. Public Health 2024, 21, 666. [Google Scholar] [CrossRef]
  26. da Silva, A.G.; Gomes, M.V.M.; Winkler, I. Virtual Reality and Digital Human Modeling for Ergonomic Assessment in Industrial Product Development: A Patent and Literature Review. Appl. Sci. 2022, 12, 1084. [Google Scholar] [CrossRef]
  27. Hovanec, M.; Korba, P.; Al-Rabeei, S.; Vencel, M.; Racek, B. Digital Ergonomics—The Reliability of the Human Factor and Its Impact on the Maintenance of Aircraft Brakes and Wheels. Machines 2024, 12, 203. [Google Scholar] [CrossRef]
  28. Lorenzini, M.; Lagomarsino, M.; Fortini, L.; Gholami, S.; Ajoudani, A. Ergonomic human-robot collaboration in industry: A review. Front. Robot. AI 2023, 9, 813907. [Google Scholar] [CrossRef]
  29. Antonaci, F.G.; Olivetti, E.C.; Marcolin, F.; Jimenez, I.A.C.; Eynard, B.; Vezzetti, E.; Moos, S. Workplace Well-Being in Industry 5.0: A Worker-Centered Systematic Review. Sensors 2024, 24, 5473. [Google Scholar] [CrossRef]
  30. Trstenjak, M.; Benešova, A.; Opetuk, T.; Cajner, H. Human Factors and Ergonomics in Industry 5.0—A Systematic Literature Review. Appl. Sci. 2025, 15, 2123. [Google Scholar] [CrossRef]
  31. Aguiñaga, A.R.; Realyvásquez-Vargas, A.; Lopez Ramirez, M.A.; Quezada, A. Cognitive ergonomics evaluation assisted by an intelligent emotion recognition technique. Appl. Sci. 2020, 10, 1736. [Google Scholar] [CrossRef]
  32. Papetti, A.; Ciccarelli, M.; Manni, A.; Caroppo, A.; Rescio, G. Investigating the Use of Electrooculography Sensors to Detect Stress During Working Activities. Sensors 2025, 25, 3015. [Google Scholar] [CrossRef]
  33. Komeijani, M.; Ryen, E.G.; Babbitt, C.W. Bridging the Gap between Eco-Design and the Human Thinking System. Challenges 2016, 7, 5. [Google Scholar] [CrossRef]
  34. Yan, M.; Rampino, L.; Caruso, G. Comparing User Acceptance in Human–Machine Interfaces Assessments of Shared Autonomous Vehicles: A Standardized Test Procedure. Appl. Sci. 2024, 15, 45. [Google Scholar] [CrossRef]
  35. Rettenmaier, M.; Schulze, J.; Bengler, K. How much space is required? Effect of distance, content, and color on external human–machine interface size. Information 2020, 11, 346. [Google Scholar] [CrossRef]
  36. Cardoso, A.; Colim, A.; Bicho, E.; Braga, A.C.; Menozzi, M.; Arezes, P. Ergonomics and Human factors as a requirement to implement safer collaborative robotic workstations: A literature review. Safety 2021, 7, 71. [Google Scholar] [CrossRef]
  37. Gualtieri, L.; Palomba, I.; Merati, F.A.; Rauch, E.; Vidoni, R. Design of human-centered collaborative assembly workstations for the improvement of operators’ physical ergonomics and production efficiency: A case study. Sustainability 2020, 12, 3606. [Google Scholar] [CrossRef]
  38. Donisi, L.; Cesarelli, G.; Pisani, N.; Ponsiglione, A.M.; Ricciardi, C.; Capodaglio, E. Wearable Sensors and Artificial Intelligence for Physical Ergonomics: A Systematic Review of Literature. Diagnostics 2022, 12, 3048. [Google Scholar] [CrossRef]
  39. Stefana, E.; Marciano, F.; Rossi, D.; Cocca, P.; Tomasoni, G. Wearable devices for ergonomics: A systematic literature review. Sensors 2021, 21, 777. [Google Scholar] [CrossRef]
  40. Lind, C.M.; Diaz-Olivares, J.A.; Lindecrantz, K.; Eklund, J. A wearable sensor system for physical ergonomics interventions using haptic feedback. Sensors 2020, 20, 6010. [Google Scholar] [CrossRef]
  41. Colim, A.; Faria, C.; Cunha, J.; Oliveira, J.; Sousa, N.; Rocha, L.A. Physical Ergonomic Improvement and Safe Design of an Assembly Workstation through Collaborative Robotics. Safety 2021, 7, 14. [Google Scholar] [CrossRef]
  42. Maruyama, T.; Ueshiba, T.; Tada, M.; Toda, H.; Endo, Y.; Domae, Y.; Nakabo, Y.; Mori, T.; Suita, K. Digital twin-driven human robot collaboration using a digital human. Sensors 2021, 21, 8266. [Google Scholar] [CrossRef]
  43. Bläsing, D.; Bornewasser, M. Influence of Increasing Task Complexity and Use of Informational Assistance Systems on Mental Workload. Brain Sci. 2021, 11, 102. [Google Scholar] [CrossRef]
  44. Brunzini, A.; Peruzzini, M.; Grandi, F.; Khamaisi, R.K.; Pellicciari, M. A preliminary experimental study on the workers’ workload assessment to design industrial products and processes. Appl. Sci. 2021, 11, 12066. [Google Scholar] [CrossRef]
  45. Méndez, G.M.; Velázquez, F.d.C. Adaptive Augmented Reality Architecture for Optimising Assistance and Safety in Industry 4.0. Big Data Cogn. Comput. 2025, 9, 133. [Google Scholar] [CrossRef]
  46. Murata, A.; Nakamura, T.; Karwowski, W. Influence of cognitive biases in distorting decision making and leading to critical unfavorable incidents. Safety 2015, 1, 44–58. [Google Scholar] [CrossRef]
  47. Kistan, T.; Gardi, A.; Sabatini, R. Machine learning and cognitive ergonomics in air traffic management: Recent developments and considerations for certification. Aerospace 2018, 5, 103. [Google Scholar] [CrossRef]
  48. Hasanain, B. The Role of Ergonomic and Human Factors in Sustainable Manufacturing: A Review. Machines 2024, 12, 159. [Google Scholar] [CrossRef]
  49. Diego-Mas, J.A. Designing cyclic job rotations to reduce the exposure to ergonomics risk factors. Int. J. Environ. Res. Public Health 2020, 17, 1073. [Google Scholar] [CrossRef]
  50. Faez, E.; Zakerian, S.A.; Azam, K.; Hancock, K.; Rosecrance, J. An assessment of ergonomics climate and its association with self-reported pain, organizational performance and employee well-being. Int. J. Environ. Res. Public Health 2021, 18, 2610. [Google Scholar] [CrossRef]
  51. Sorensen, G.; Peters, S.; Nielsen, K.; Nagler, E.; Karapanos, M.; Wallace, L.; Burke, L.; Dennerlein, J.T.; Wagner, G.R. Improving working conditions to promote worker safety, health, and wellbeing for low-wage workers: The workplace organizational health study. Int. J. Environ. Res. Public Health 2019, 16, 1449. [Google Scholar] [CrossRef]
  52. Salisu, S.; Ruhaiyem, N.I.R.; Eisa, T.A.E.; Nasser, M.; Saeed, F.; Younis, H.A. Motion Capture Technologies for Ergonomics: A Systematic Literature Review. Diagnostics 2023, 13, 2593. [Google Scholar] [CrossRef]
  53. Bennett, S.T.; Han, W.; Mahmud, D.; Adamczyk, P.G.; Dai, F.; Wehner, M.; Veeramani, D.; Zhu, Z. Usability and Biomechanical Testing of Passive Exoskeletons for Construction Workers: A Field-Based Pilot Study. Buildings 2023, 13, 822. [Google Scholar] [CrossRef]
  54. de Souza, D.F.; Sousa, S.; Kristjuhan-Ling, K.; Dunajeva, O.; Roosileht, M.; Pentel, A.; Mõttus, M.; Özdemir, M.C.; Gratšjova, Ž. Trust and Trustworthiness from Human-Centered Perspective in Human–Robot Interaction (HRI)—A Systematic Literature Review. Electronics 2025, 14, 1557. [Google Scholar] [CrossRef]
  55. Sun, Z.; Zhu, M.; Lee, C. Progress in the Triboelectric Human–Machine Interfaces (HMIs)-Moving from Smart Gloves to AI/Haptic Enabled HMI in the 5G/IoT Era. Nanoenergy Adv. 2021, 1, 81–120. [Google Scholar] [CrossRef]
  56. Prati, E.; Villani, V.; Peruzzini, M.; Sabattini, L. An approach based on VR to design industrial human-robot collaborative workstations. Appl. Sci. 2021, 11, 11773. [Google Scholar] [CrossRef]
  57. McAtamney, L.; Corlett, E.N. RULA: A survey method for the investigation of work-related upper limb disorders. Appl. Ergon. 1993, 24, 91–99. [Google Scholar] [CrossRef]
  58. Gómez-Galán, M.; Callejón-Ferre, Á.-J.; Pérez-Alonso, J.; Díaz-Pérez, M.; Carrillo-Castrillo, J.-A. Musculoskeletal risks: RULA bibliometric review. Int. J. Environ. Res. Public Health 2020, 17, 4354. [Google Scholar] [CrossRef]
  59. Paudel, P.; Kwon, Y.-J.; Kim, D.-H.; Choi, K.-H. Industrial Ergonomics Risk Analysis Based on 3D-Human Pose Estimation. Electronics 2022, 11, 3403. [Google Scholar] [CrossRef]
  60. Huang, C.; Kim, W.; Zhang, Y.; Xiong, S. Development and validation of a wearable inertial sensors-based automated system for assessing work-related musculoskeletal disorders in the workspace. Int. J. Environ. Res. Public Health 2020, 17, 1–15. [Google Scholar] [CrossRef]
  61. Kwon, Y.-J.; Kim, D.-H.; Son, B.-C.; Choi, K.-H.; Kwak, S.; Kim, T. A Work-Related Musculoskeletal Disorders (WMSDs) Risk-Assessment System Using a Single-View Pose Estimation Model. Int. J. Environ. Res. Public Health 2022, 19, 9803. [Google Scholar] [CrossRef]
  62. Navas-Reascos, G.E.; Romero, D.; Rodriguez, C.A.; Guedea, F.; Stahre, J. Wire Harness Assembly Process Supported by a Collaborative Robot: A Case Study Focus on Ergonomics. Robotics 2022, 11, 131. [Google Scholar] [CrossRef]
  63. Choi, K.-H.; Kim, D.-M.; Cho, M.-U.; Park, C.-W.; Kim, S.-Y.; Kim, M.-J.; Kong, Y.-K. Application of AULA risk assessment tool by comparison with other ergonomic risk assessment tools. Int. J. Environ. Res. Public Health 2020, 17, 1–9. [Google Scholar] [CrossRef]
  64. Mao, W.; Yang, X.; Wang, C.; Hu, Y.; Gao, T. A Physical Fatigue Evaluation Method for Automotive Manual Assembly: An Experiment of Cerebral Oxygenation with ARE Platform. Sensors 2023, 23, 9410. [Google Scholar] [CrossRef]
  65. Loske, D.; Klumpp, M.; Keil, M.; Neukirchen, T. Logistics Work, Ergonomics and Social Sustainability: Empirical Musculoskeletal System Strain Assessment in Retail Intralogistics. Logistics 2021, 5, 89. [Google Scholar] [CrossRef]
  66. Al-Zuheri, A.; Ketan, H.S. Correcting Working Postures in Water Pump Assembly Tasks using the OVAKO Work Analysis System (OWAS). Al-Khwarizmi Eng. J. 2008, 4, 8–17. [Google Scholar]
  67. Ojstersek, R.; Buchmeister, B.; Herzog, N.V. Use of data-driven simulation modeling and visual computing methods for workplace evaluation. Appl. Sci. 2020, 10, 1–17. [Google Scholar] [CrossRef]
  68. Yu, M.; Zhao, D.; Zhang, Y.; Chen, J.; Shan, G.; Cao, Y.; Ye, J. Development of a Novel Biomechanical Framework for Quantifying Dynamic Risks in Motor Behaviors During Aircraft Maintenance. Appl. Sci. 2025, 15, 5390. [Google Scholar] [CrossRef]
  69. Young, C.; Hamilton-Wright, A.; Oliver, M.L.; Gordon, K.D. Predicting Wrist Posture during Occupational Tasks Using Inertial Sensors and Convolutional Neural Networks. Sensors 2023, 23, 942. [Google Scholar] [CrossRef] [PubMed]
  70. Swain, A.D.; Guttmann, H.E. Handbook of Human-Reliability Analysis with Emphasis on Nuclear Power Plant Applications; Final Report (No. NUREG/CR-1278; SAND-80-0200); Sandia National Labs.: Albuquerque, NM, USA, 1983. [Google Scholar]
  71. Fargnoli, M.; Lombardi, M.; Puri, D. Applying hierarchical task analysis to depict human safety errors during pesticide use in vineyard cultivation. Agriculture 2019, 9, 158. [Google Scholar] [CrossRef]
  72. Żywiec, J.; Tchórzewska-Cieślak, B.; Sokolan, K. Assessment of Human Errors in the Operation of the Water Treatment Plant. Water 2024, 16, 2399. [Google Scholar] [CrossRef]
  73. Torres, Y.; Nadeau, S.; Landau, K. Classification and quantification of human error in manufacturing: A case study in complex manual assembly. Appl. Sci. 2021, 11, 1–23. [Google Scholar] [CrossRef]
  74. Barosz, P.; Gołda, G.; Kampa, A. Efficiency analysis of manufacturing line with industrial robots and human operators. Appl. Sci. 2020, 10, 2862. [Google Scholar] [CrossRef]
  75. Zhu, L.; Lv, J. Review of Studies on User Research Based on EEG and Eye Tracking. Appl. Sci. 2023, 13, 6502. [Google Scholar] [CrossRef]
  76. Ricci, A.; Ronca, V.; Capotorto, R.; Giorgi, A.; Vozzi, A.; Germano, D.; Borghini, G.; Di Flumeri, G.; Babiloni, F.; Aricò, P. Understanding the Unexplored: A Review on the Gap in Human Factors Characterization for Industry 5.0. Appl. Sci. 2025, 15, 1822. [Google Scholar] [CrossRef]
  77. Keskin, M.; Ooms, K.; Dogru, A.O.; De Maeyer, P. Exploring the Cognitive Load of Expert and Novice Map Users Using EEG and Eye Tracking. ISPRS Int. J. Geo-Inf. 2020, 9, 429. [Google Scholar] [CrossRef]
  78. Yu, R.; Schubert, G.; Gu, N. Biometric Analysis in Design Cognition Studies: A Systematic Literature Review. Buildings 2023, 13, 630. [Google Scholar] [CrossRef]
  79. Slanzi, G.; Balazs, J.; Velasquez, J.D. Predicting Web User Click Intention Using Pupil Dilation and Electroencephalogram Analysis. In Proceedings of the 2016 IEEE/WIC/ACM International Conference on Web Intelligence, Omaha, NE, USA, 13–16 October 2016; pp. 417–420. [Google Scholar]
  80. Paing, M.P.; Juhong, A.; Pintavirooj, C. Design and Development of an Assistive System Based on Eye Tracking. Electronics 2022, 11, 535. [Google Scholar] [CrossRef]
  81. Tiboni, M.; Borboni, A.; Vérité, F.; Bregoli, C.; Amici, C. Sensors and Actuation Technologies in Exoskeletons: A Review. Sensors 2022, 22, 884. [Google Scholar] [CrossRef]
  82. Pesenti, M.; Antonietti, A.; Gandolla, M.; Pedrocchi, A. Towards a functional performance validation standard for industrial low-back exoskeletons: State of the art review. Sensors 2021, 21, 808. [Google Scholar] [CrossRef]
  83. Zhang, X.; Qu, Y.; Zhang, G.; Wang, Z.; Chen, C.; Xu, X. Review of sEMG for Exoskeleton Robots: Motion Intention Recognition Techniques and Applications. Sensors 2025, 25, 2448. [Google Scholar] [CrossRef]
  84. Song, Z.; Zhao, P.; Wu, X.; Yang, R.; Gao, X. An Active Control Method for a Lower Limb Rehabilitation Robot with Human Motion Intention Recognition. Sensors 2025, 25, 713. [Google Scholar] [CrossRef]
  85. Kian, A.; Widanapathirana, G.; Joseph, A.M.; Lai, D.T.H.; Begg, R. Application of Wearable Sensors in Actuation and Control of Powered Ankle Exoskeletons: A Comprehensive Review. Sensors 2022, 22, 2244. [Google Scholar] [CrossRef]
  86. Lobov, S.; Krilova, N.; Kastalskiy, I.; Kazantsev, V.; Makarov, V.A. Latent factors limiting the performance of sEMG-interfaces. Sensors 2018, 18, 1122. [Google Scholar] [CrossRef]
  87. Lokomat—Exoskeleton Report. Available online: https://exoskeletonreport.com/product/lokomat/ (accessed on 6 May 2025).
  88. Szondy, D. Lockheed Martin’s Fortis Tool Arm takes the load to cut fatigue. Available online: https://newatlas.com/lockheed-martin-fortis-tool-arm/49137/ (accessed on 6 May 2025).
  89. HAL Senses Bio-electrical Signals and Complete Intended Motion of Wearers. Cyberdyne Care Robotics GmbH. Available online: https://www.cyberdyne.eu/en/products/medical-device/hal-motion-principal/ (accessed on 6 May 2025).
  90. Myosuit. Available online: https://exoskeletonreport.com/product/myosuit/ (accessed on 6 May 2025).
  91. Laevo V2—Exoskeleton Report. Available online: https://exoskeletonreport.com/product/laevo/ (accessed on 6 May 2025).
  92. Ottobock Paexo Shoulder. Available online: https://corporate.ottobock.com/en/company/newsroom/media-information/exoskeletons (accessed on 6 May 2025).
  93. MyoPro—Exoskeleton Report. Available online: https://exoskeletonreport.com/product/myopro/ (accessed on 6 May 2025).
  94. Kuber, P.M.; Rashedi, E. Training and Familiarization with Industrial Exoskeletons: A Review of Considerations, Protocols, and Approaches for Effective Implementation. Biomimetics 2024, 9, 520. [Google Scholar] [CrossRef] [PubMed]
  95. Willmott Dixon Trials High-Tech Robotic Vest That Could Revolutionise Construction. Willmott Dixon. Available online: https://www.willmottdixon.co.uk/news/willmott-dixon-trials-high-tech-robotic-vest-that-could-revolutionise-construction (accessed on 6 May 2025).
  96. Noonee’s Wearable Chairless Chair 2.0 Boasts Improved Comfort. Available online: https://www.homecrux.com/second-generation-wearable-chairless-chair/135899/ (accessed on 6 May 2025).
  97. EXOPA. Available online: https://www.ergonomiesite.be/exoskeleton-potential-assessment-tool-exopa/ (accessed on 6 May 2025).
  98. Digitalisierung: Technik Für die Arbeitswelt von Morgen. Available online: https://industrieanzeiger.industrie.de/technik/entwicklung/technik-fuer-die-arbeitswelt-von-morgen/ (accessed on 6 May 2025).
  99. Exoskeletons: State-of-the-Art, Design Challenges, and Future Directions—Scientific Figure on ResearchGate. Available online: https://www.researchgate.net/figure/Upper-limb-exoskeletons-for-rehabilitation-and-assistive-purposes-A-ARMin-III-B_fig1_330631170 (accessed on 6 May 2025).
  100. Exoskeleton Arm 3D Model 3D Printable Rigged. CGTrader. Available online: https://www.cgtrader.com/3d-print-models/science/engineering/exoarm (accessed on 6 May 2025).
  101. SUITX MAX. Available online: https://www.industrytap.com/suitx-selected-winner-chairmans-choice-annual-big-innovation-awards/40856 (accessed on 6 May 2025).
  102. IX BACK VOLTON. Available online: https://exoskeletonreport.com/product/ix-back/ (accessed on 6 May 2025).
  103. MATE-XB Exoskeleton—Comau. Available online: https://www.comau.com/en/our-offer/products-and-solutions/wearable-robotics-exoskeletons/mate-xb-exoskeleton/ (accessed on 6 May 2025).
  104. Kaasinen, E.; Anttila, A.-H.; Heikkilä, P.; Laarni, J.; Koskinen, H.; Väätänen, A. Smooth and Resilient Human–Machine Teamwork as an Industry 5.0 Design Challenge. Sustainability 2022, 14, 2773. [Google Scholar] [CrossRef]
  105. Alves, J.; Lima, T.M.; Gaspar, P.D. Is Industry 5.0 a Human-Centred Approach? A Systematic Review. Processes 2023, 11, 193. [Google Scholar] [CrossRef]
  106. Matheson, E.; Minto, R.; Zampieri, E.G.G.; Faccio, M.; Rosati, G. Human–robot collaboration in manufacturing applications: A review. Robotics 2019, 8, 100. [Google Scholar] [CrossRef]
  107. Matthias, B. ISO/TS 15066-Collaborative Robots Present Status. In Proceedings of the European Robotics Forum 2015, Vienna, Austria, 11–13 March 2015. [Google Scholar]
  108. Hollnagel, E.; Woods, D.D. Resilience Engineering: Concepts and Precepts; CRC Press: Boca Raton, FL, USA, 2006. [Google Scholar]
  109. Belliger, A.; Krieger, D.J. Organizing Networks: An Actor-Network Theory of Organizations; Transcript Verlag: Bielefeld, Germany, 2016. [Google Scholar]
  110. Purcărea, A.; Albulescu, S.; Negoiță, O.D.; Dănălache, F.; Corocăescu, M. Modeling the human resource development process in the automotive industry services. UPB Sci. Bull. Ser. D Mech. Eng. 2016, 78, 263–275. [Google Scholar]
  111. Schirmer, F.; Kranz, P.; Rose, C.G.; Schmitt, J.; Kaupp, T. Towards Dynamic Human–Robot Collaboration: A Holistic Framework for Assembly Planning. Electronics 2025, 14, 190. [Google Scholar] [CrossRef]
  112. Boschetti, G.; Minto, R.; Trevisani, A. Experimental investigation of a cable robot recovery strategy. Robotics 2021, 10, 35. [Google Scholar] [CrossRef]
  113. Madni, A.M.; Madni, C.C. Architectural framework for exploring adaptive human-machine teaming options in simulated dynamic environments. Systems 2018, 6, 44. [Google Scholar] [CrossRef]
  114. Zheng, M.; Wang, L.; Tian, Y. Does Cognitive Load Influence Moral Judgments? The Role of Action–Omission and Collective Interests. Behav. Sci. 2025, 15, 361. [Google Scholar] [CrossRef]
  115. Martini, B.; Bellisario, D.; Coletti, P. Human-Centered and Sustainable Artificial Intelligence in Industry 5.0: Challenges and Perspectives. Sustainability 2024, 16, 5448. [Google Scholar] [CrossRef]
  116. Alfaro-Viquez, D.; Zamora-Hernandez, M.; Fernandez-Vega, M.; Garcia-Rodriguez, J.; Azorin-Lopez, J. A Comprehensive Review of AI-Based Digital Twin Applications in Manufacturing: Integration Across Operator, Product, and Process Dimensions. Electronics 2025, 14, 646. [Google Scholar] [CrossRef]
  117. Krupas, M.; Kajati, E.; Liu, C.; Zolotova, I. Towards a Human-Centric Digital Twin for Human–Machine Collaboration: A Review on Enabling Technologies and Methods. Sensors 2024, 24, 2232. [Google Scholar] [CrossRef] [PubMed]
  118. Alojaiman, B. Technological Modernizations in the Industry 5.0 Era: A Descriptive Analysis and Future Research Directions. Processes 2023, 11, 1318. [Google Scholar] [CrossRef]
  119. Fraga-Lamas, P.; Barros, D.; Lopes, S.I.; Fernández-Caramés, T.M. Mist and Edge Computing Cyber-Physical Human-Centered Systems for Industry 5.0: A Cost-Effective IoT Thermal Imaging Safety System. Sensors 2022, 22, 8500. [Google Scholar] [CrossRef]
  120. Hassan, M.A.; Zardari, S.; Farooq, M.U.; Alansari, M.M.; Nagro, S.A. Systematic Analysis of Risks in Industry 5.0 Architecture. Appl. Sci. 2024, 14, 1466. [Google Scholar] [CrossRef]
  121. Universal Robots—UR5e Case Studies. Available online: https://www.universal-robots.com/products/ur5e/ (accessed on 14 May 2025).
  122. KUKA—LBR iiwa in Automotive. Available online: https://www.kuka.com/ (accessed on 14 May 2025).
  123. FANUC—CRX Collaborative Robots. Available online: https://www.fanuc.eu/ (accessed on 14 May 2025).
  124. Rethink Robotics—Sawyer Applications. Available online: https://robotsguide.com/robots/sawyer (accessed on 14 May 2025).
  125. ABB Robotics—YuMi Collaborative Robot. Available online: https://new.abb.com/products/robotics/robots/collaborative-robots/yumi (accessed on 14 May 2025).
  126. Universal Robots—UR10e Applications. Available online: https://www.universal-robots.com/products/ur10e/ (accessed on 14 May 2025).
  127. ABB—GoFa CRB 15000. Available online: https://new.abb.com/products/robotics/robots/collaborative-robots/crb-15000 (accessed on 14 May 2025).
  128. FANUC—CR-35iA Collaborative Robot. Available online: https://www.fanucamerica.com/products/robots/series/collaborative-robot (accessed on 14 May 2025).
  129. Techman Robot—TM5 Series. Available online: https://www.tm-robot.com/ (accessed on 14 May 2025).
  130. Yaskawa Motoman—HC10. Available online: https://www.motoman.com/ (accessed on 14 May 2025).
  131. Doosan Robotics—M Series. Available online: https://www.doosanrobotics.com/ (accessed on 14 May 2025).
  132. AUBO Robotics—AUBO-i5. Available online: https://www.aubo-robotics.com/ (accessed on 14 May 2025).
  133. Kinova Robotics—Link 6. Available online: https://www.kinovarobotics.com/ (accessed on 14 May 2025).
  134. AKKA Technologies—Air-Cobot. Available online: https://www.akka-technologies.com/en/innovation/projects/aircobot (accessed on 14 May 2025).
  135. Dobot—Magician Robot. Available online: https://www.dobot.cc/ (accessed on 14 May 2025).
  136. Rethink Robotics—Baxter. Available online: https://robotsguide.com/robots/baxter (accessed on 14 May 2025).
  137. Neura Robotics—Product Portfolio. Available online: https://neura-robotics.com/products/ (accessed on 14 May 2025).
  138. Siasun Robotics—SR Series. Available online: https://www.siasun.com/ (accessed on 14 May 2025).
  139. Vidal-Balea, A.; Blanco-Novoa, O.; Fraga-Lamas, P.; Vilar-Montesinos, M.; Fernández-Caramés, T.M. Creating collaborative augmented reality experiences for industry 4.0 training and assistance applications: Performance evaluation in the shipyard of the future. Appl. Sci. 2020, 10, 9073. [Google Scholar] [CrossRef]
  140. Lee, J.; Ma, B. An Operator Training Simulator to Enable Responses to Chemical Accidents through Mutual Cooperation between the Participants. Appl. Sci. 2023, 13, 1382. [Google Scholar] [CrossRef]
  141. Afolabi, A.O.; Nnaji, C.; Okoro, C. Immersive Technology Implementation in the Construction Industry: Modeling Paths of Risk. Buildings 2022, 12, 363. [Google Scholar] [CrossRef]
  142. Mourtzis, D.; Angelopoulos, J. Development of an Extended Reality-Based Collaborative Platform for Engineering Education: Operator 5. Electronics 2023, 12, 3663. [Google Scholar] [CrossRef]
  143. Badia, S.B.i.; Silva, P.A.; Branco, D.; Pinto, A.; Carvalho, C.; Menezes, P.; Almeida, J.; Pilacinski, A. Virtual Reality for Safe Testing and Development in Collaborative Robotics: Challenges and Perspectives. Electronics 2022, 11, 1726. [Google Scholar] [CrossRef]
  144. Gabajová, G.; Furmannová, B.; Medvecká, I.; Grznár, P.; Krajčovič, M.; Furmann, R. Virtual training application by use of augmented and virtual reality under university technology enhanced learning in Slovakia. Sustainability 2019, 11, 6677. [Google Scholar] [CrossRef]
  145. Zhao, H.; Zhao, Q.H.; Ślusarczyk, B. Sustainability and digitalization of corporate management based on augmented/virtual reality tools usage: China and other world IT companies’ experience. Sustainability 2019, 11, 4717. [Google Scholar] [CrossRef]
  146. Židek, K.; Lazorík, P.; Piteľ, J.; Hošovský, A. An automated training of deep learning networks by 3d virtual models for object recognition. Symmetry 2019, 11, 496. [Google Scholar] [CrossRef]
  147. Bahubalendruni, M.V.A.R.; Putta, B. Assembly Sequence Validation with Feasibility Testing for Augmented Reality Assisted Assembly Visualization. Processes 2023, 11, 2094. [Google Scholar] [CrossRef]
  148. Lanyi, C.S.; Withers, J.D.A. Striving for a safer and more ergonomic workplace: Acceptability and human factors related to the adoption of AR/VR glasses in industry 4.0. Smart Cities 2020, 3, 289–307. [Google Scholar] [CrossRef]
  149. Othman, U.; Yang, E. Human–Robot Collaborations in Smart Manufacturing Environments: Review and Outlook †. Sensors 2023, 23, 5663. [Google Scholar] [CrossRef]
  150. Mourtzis, D.; Angelopoulos, J.; Panopoulos, N. Smart manufacturing and tactile internet based on 5G in industry 4.0: Challenges, Applications and New Trends. Electronics 2021, 10, 3175. [Google Scholar] [CrossRef]
  151. Zheng, Z.; Wu, Z.; Zhao, R.; Ni, Y.; Jing, X.; Gao, S. A Review of EMG-, FMG-, and EIT-Based Biosensors and Relevant Human–Machine Interactivities and Biomedical Applications. Biosensors 2022, 12, 516. [Google Scholar] [CrossRef]
  152. Brill, S.; Payre, W.; Debnath, A.; Horan, B.; Birrell, S. External Human–Machine Interfaces for Automated Vehicles in Shared Spaces: A Review of the Human–Computer Interaction Literature. Sensors 2023, 23, 4454. [Google Scholar] [CrossRef]
  153. Panchetti, T.; Pietrantoni, L.; Puzzo, G.; Gualtieri, L.; Fraboni, F. Assessing the Relationship between Cognitive Workload, Workstation Design, User Acceptance and Trust in Collaborative Robots. Appl. Sci. 2023, 13, 1720. [Google Scholar] [CrossRef]
  154. Iarlori, S.; Perpetuini, D.; Tritto, M.; Cardone, D.; Tiberio, A.; Chinthakindi, M.; Filippini, C.; Cavanini, L.; Freddi, A.; Ferracuti, F.; et al. An Overview of Approaches and Methods for the Cognitive Workload Estimation in Human–Machine Interaction Scenarios through Wearables Sensors. BioMedInformatics 2024, 4, 1155–1173. [Google Scholar] [CrossRef]
Figure 1. Cognitive processing flow in human–machine interactions.
Figure 2. The body posture of Group A (upper arm, lower arm, and wrist).
Figure 3. The body posture of Group B (neck, trunk, and legs).
Figure 4. OWAS evaluation of back posture.
Figure 5. OWAS evaluation of arm postures.
Figure 6. OWAS evaluation of leg postures.
Figure 7. OWAS evaluation of the external load for men.
Figure 8. Exoskeleton examples for each picture number (1—Exoskeleton Lokomat [87], 2—Exoskeleton Levitate AirFrame [88], 3—Exoskeleton HAL with bio-electrical signals [89], 4—Exoskeleton MYOSUIT [90], 5—Exoskeleton Laevo V2 [91], 6—Exoskeleton Ottobock Paexo Shoulder [92], 7—Exoskeleton MyoPro [93], 8—Exoskeleton XoTrunk [94], 9—Exoskeleton EksoVest [95], 10—Exoskeleton Chairless Chair [96], 11—Exoskeleton Lockheed Martin's Fortis Tool Arm [88], 12—Exoskeleton ExoPA [97], 13—Exoskeleton Stuttgart Exo-Jacket [98], 14—Exoskeleton Exo-UL7 [99], 15—Exoskeleton ExoArm [100], 16—Exoskeleton SuitX MAX [101], 17—Exoskeleton IX Back VOLTON [102], 18—Exoskeleton Comau MATE [103]).
Table 1. Key physical ergonomic factors and their HMI relevance.
Factor | Description | Impact on HMI
Working Posture | Misaligned or static body positions during tasks | Increases musculoskeletal strain and task inaccuracy
Muscle Fatigue | Local or systemic fatigue due to repetitive or high-load movements | Reduces performance and increases the risk of errors
Physical Load | Manual handling, lifting, and force exertion | Leads to overexertion and decreased task tolerance
Wearable Sensors | IMU, EMG, and pressure-based feedback for real-time assessments | Enable posture monitoring and ergonomic corrections
Haptic Feedback | Vibrotactile cues guiding movement correction | Supports self-training and ergonomic behavior learning
Cobots and HRC | Collaborative robotics used to reduce the biomechanical workload | Prevent WMSDs and increase safety and comfort
Table 2. RULA structured scoring.
Grand Score | Action Level | Interpretation
1–2 | Level 1 | Acceptable posture
3–4 | Level 2 | Further investigation needed
5–6 | Level 3 | Investigation and changes required soon
7 | Level 4 | Immediate investigation and changes
Table 3. OWAS structured scoring.
Action Category | Interpretation | Required Action
1 | Normal posture, no action needed | None
2 | Slightly risky body position | Correct in the near future
3 | High-risk body position | Correct as soon as possible
4 | Severely harmful posture | Urgent correction is needed
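Both scoring schemes reduce to simple lookup rules, which is what makes them straightforward to embed in the digital ergonomic-assessment tools discussed earlier. The following minimal Python sketch is purely illustrative (the function names are our own, not part of any cited tool); it maps a RULA grand score and an OWAS action category onto the action levels of Tables 2 and 3.

```python
def rula_action_level(grand_score: int) -> tuple[int, str]:
    """Return the RULA action level and interpretation for a grand score (Table 2)."""
    if 1 <= grand_score <= 2:
        return 1, "Acceptable posture"
    if 3 <= grand_score <= 4:
        return 2, "Further investigation needed"
    if 5 <= grand_score <= 6:
        return 3, "Investigation and changes required soon"
    if grand_score >= 7:
        return 4, "Immediate investigation and changes"
    raise ValueError("RULA grand scores start at 1")


def owas_required_action(category: int) -> str:
    """Return the required action for an OWAS action category (Table 3)."""
    actions = {
        1: "None",
        2: "Correct in the near future",
        3: "Correct as soon as possible",
        4: "Urgent correction is needed",
    }
    return actions[category]


print(rula_action_level(5))     # (3, 'Investigation and changes required soon')
print(owas_required_action(4))  # 'Urgent correction is needed'
```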
Table 4. EEG frequency bands reflecting different cognitive or emotional states [75].
Band | Frequency (Hz) | Associated State
Delta | <4 | Deep sleep
Theta | 4–8 | Drowsiness, cognitive effort
Alpha | 8–12 | Relaxation, alertness
Beta | 13–30 | Active concentration
Gamma | >30 | High-level cognition
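Because the bands in Table 4 are defined purely by frequency ranges, a band label can be assigned with a simple threshold cascade. The sketch below is an illustrative assumption rather than a standard library routine; note that Table 4 leaves the 12–13 Hz interval unassigned, so the boundary chosen here (alpha up to 12 Hz, beta above it) is our own convention.

```python
def eeg_band(frequency_hz: float) -> str:
    """Classify a spectral peak (Hz) into the EEG bands of Table 4."""
    if frequency_hz < 4:
        return "Delta"   # deep sleep
    if frequency_hz < 8:
        return "Theta"   # drowsiness, cognitive effort
    if frequency_hz <= 12:
        return "Alpha"   # relaxation, alertness
    if frequency_hz <= 30:
        return "Beta"    # active concentration
    return "Gamma"       # high-level cognition


print(eeg_band(10))  # Alpha
```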
Table 5. Eye-tracking metrics and their significance.
Metric | Significance
Fixation Duration | Indicates cognitive processing [77]
Number of Fixations | Related to the attention distribution [77]
Saccade Velocity/Amplitude | Associated with cognitive effort or stress [77]
Heat Maps and AOIs | Highlight key areas of visual attention [77]
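Fixation-based metrics such as those in Table 5 are typically derived from raw gaze samples by a dispersion-threshold (I-DT style) algorithm. The sketch below is a simplified, hypothetical implementation, not the method used in [77]; the threshold values and the growing-window logic are illustrative assumptions. It returns two of the metrics from the table: the number of fixations and their mean duration.

```python
import numpy as np


def fixation_metrics(t, x, y, max_dispersion=1.0, min_duration=0.1):
    """Simplified dispersion-based (I-DT) fixation detection.

    t: sample timestamps in seconds; x, y: gaze coordinates (e.g., degrees).
    Returns (number of fixations, mean fixation duration in seconds).
    """
    t, x, y = map(np.asarray, (t, x, y))
    durations, start = [], 0
    for end in range(len(t)):
        # Window dispersion: (max x - min x) + (max y - min y)
        disp = np.ptp(x[start:end + 1]) + np.ptp(y[start:end + 1])
        if disp > max_dispersion:
            # Close the current window; keep it if it lasted long enough.
            if t[end - 1] - t[start] >= min_duration:
                durations.append(t[end - 1] - t[start])
            start = end
    if t[-1] - t[start] >= min_duration:  # trailing fixation at end of data
        durations.append(t[-1] - t[start])
    n = len(durations)
    return n, (float(np.mean(durations)) if n else 0.0)
```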
Table 6. Current examples of exoskeletons: types, applications, and functionality principles.
Name | Type | Application | Operating Principle Description
Lokomat | Lower limb, active | Gait rehabilitation | Treadmill-based system with actuated hip/knee joints for repeated motion training [87]
Levitate AirFrame | Upper limb, passive | Overhead work fatigue reduction | A passive exoskeleton that uses a pulley-based mechanism to transfer the weight of the arms to the hips, relieving strain on the shoulders and upper back [88]
HAL | Full body, active | Rehabilitation and industrial support | Uses sEMG to predict user motion, driving actuators accordingly [89]
Myosuit | Lower limb, soft | Mobility assistance | Textile suit with cable-driven actuation for lower-limb extension [90]
Laevo V2 | Trunk, passive | Industrial lumbar support | Spring-based lumbar support that redistributes the load during bending [91]
Ottobock Paexo Shoulder | Upper limb, passive | Overhead industrial work | Spring-assisted support that reduces strain on the shoulders by transferring the load to the hips [92]
MyoPro | Upper limb, active | Stroke rehab/assistance | EMG-based elbow and hand orthosis that assists with volitional movement [93]
MATE-XB | Trunk, passive | Postural stabilization | Passive support structure targeting lower back stability [94]
EksoVest | Upper limb, passive | Overhead work reduction | Reduces arm fatigue via passive spring/hinge mechanisms [95]
Chairless Chair | Lower limb, passive | Industrial fatigue reduction for standing workers | A passive leg-worn exoskeleton that locks at the knees to provide seated support anywhere, reducing fatigue by shifting weight to the heels [96]
Lockheed Martin's Fortis Tool Arm | Upper limb, passive | Industrial tool support and fatigue reduction | A passive, waist-mounted arm support that redirects the weight of heavy tools (up to 50 lbs) to the ground, reducing user fatigue and increasing productivity [88]
ExoPA | Assessment tool | Evaluating exoskeleton suitability for overhead tasks | Analyzes task demands to assess the ergonomic benefit of using an exoskeleton [97]
Stuttgart Exo-Jacket | Upper limb, hybrid | Industrial cable installation | Active shoulder/elbow support with a gas spring and force grounding [98]
Exo-UL7 | Upper limb, active | General haptic assistance | Electric actuators on shoulder/elbow; suitable for ADL support [99]
ExoArm | Upper limb, active | Assistive/industrial use | Pneumatically actuated shoulder and elbow joints [100]
SuitX MAX | Modular industrial, passive | Lifting, overhead work, injury prevention | Combines back, shoulder, and leg support modules to reduce strain and fatigue during physically demanding tasks [101]
IX Back VOLTON | Lower back, active | Dynamic heavy load handling | AI-driven; provides up to 17 kg of support and 8 h of battery life [102]
Comau MATE | Upper limb, passive | Overhead work, assembly tasks | A passive upper-limb exoskeleton that reduces fatigue and improves posture during repetitive tasks [103]
Table 7. Human–robot interaction (HRI) classification.
Type of Interaction | Description
Coexistence | The human and robot operate in the same area but without overlapping tasks [106]
Synchronization | Shared space, but tasks are performed at different times [106]
Cooperation | Simultaneous task execution in a shared space with different goals [106]
Collaboration | Joint execution of shared tasks, requiring real-time interaction [106]
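The four classes in Table 7 can be separated by three yes/no questions: do the human and robot occupy overlapping working zones, do they act at the same time, and do they pursue a shared task? The sketch below is one possible operationalization of the table; the flag names and decision order are our own assumptions, not a formalism from [106].

```python
from enum import Enum


class HRIType(Enum):
    COEXISTENCE = "Coexistence"
    SYNCHRONIZATION = "Synchronization"
    COOPERATION = "Cooperation"
    COLLABORATION = "Collaboration"


def classify_interaction(shared_workspace: bool, simultaneous: bool,
                         shared_task: bool) -> HRIType:
    """Map workspace/time/task overlap onto the HRI classes of Table 7.

    shared_workspace: the working zones overlap (beyond merely sharing the
    same area, which holds for all four classes).
    """
    if shared_workspace and simultaneous and shared_task:
        return HRIType.COLLABORATION      # joint execution of shared tasks
    if shared_workspace and simultaneous:
        return HRIType.COOPERATION        # same space and time, different goals
    if shared_workspace:
        return HRIType.SYNCHRONIZATION    # shared space, used at different times
    return HRIType.COEXISTENCE            # same area, no overlapping tasks


print(classify_interaction(True, True, False).value)  # Cooperation
```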
Table 8. Applications of human–robot interactions.
Use Case | Industry | Description
Assembly Line Cobots | Automotive | Cobots assist in part fitting and inspection alongside human workers [104]
Precision Welding | Aerospace | Human–robot shared control ensures accuracy and adaptability [105]
Warehouse Sorting | Logistics | Cobots carry out repetitive lifting tasks to reduce worker fatigue [106]
Human–AI Teams | Smart Factories | Cognitive agents assist humans in real-time decision-making [108]
Table 9. Collaborative robots used in various industries.
Cobot Name | Manufacturer | Application Area | Key Functions | Sustainability in Industry 5.0 | Documented Benefits
UR5e | Universal Robots | Electronic assembly | Screwdriving, quality inspection | Energy-efficient, reduces scrap through precision | 30% cycle time reduction and high repeatability [121]
LBR iiwa | KUKA | Automotive manufacturing | Precision joining, welding, HRC | Supports lightweight, precise processes; lower emissions | Enhanced precision, improved worker safety [122]
CRX-10iA | FANUC | Logistics, warehousing | Box handling, sorting | Optimizes logistics for energy efficiency | Reduced operator fatigue, incident-free deployment [123]
Sawyer | Rethink Robotics | Packaging, plastics | Machine tending, injection molding | Reduces plastic waste through precise handling | ROI in under 12 months, flexible deployment [124]
YuMi | ABB | Small part assembly | Dual-arm pick-and-place, testing | Supports lean production and waste reduction | Ideal for close collaboration, compact design [125]
UR10e | Universal Robots | Palletizing, welding | Heavy-load handling, MIG welding | Increases welding efficiency, reduces material waste | Up to 50% productivity increase, reduced errors [126]
GoFa CRB 15000 | ABB | Assembly, logistics | Material handling, quality inspection | Enhances process optimization, reduces transportation impact | High operation speed, easy to program [127]
CR-35iA | FANUC | Heavy-duty industry | Large part manipulation | Enables safe, efficient heavy tasks, reducing energy use | 35 kg payload, certified safety [128]
TM5 | Techman Robot | Visual inspection | Integrated camera, pick-and-place | Reduces inspection waste and improves defect detection | Reduced inspection costs, fast deployment [129]
HC10 | Yaskawa Motoman | Assembly, material handling | Manual guidance, flexible workspaces | Improves the reusability of production lines | Easy reprogramming, compact [130]
Doosan M0609 | Doosan Robotics | Electronics, packaging | Soldering, gluing, box packing | Promotes precise soldering, minimizing waste | High precision, intuitive interface [131]
AUBO-i5 | AUBO Robotics | Research, education | Pick-and-place, small part assembly | Encourages local, small-scale innovation and reuse | Low cost, ideal for prototyping and training [132]
Kinova Link 6 | Kinova Robotics | Assembly, inspection | Accurate handling, visual inspection | Low energy use, promotes a modular, flexible design | Compact, easy integration [133]
Air-Cobot | AKKA Technologies | Aerospace | Aircraft visual inspection | Supports predictive maintenance, reduces travel emissions | Time-saving, improved accuracy [134]
Dobot Magician | Dobot | Education, prototyping | 3D printing, laser engraving, pick-and-place | Enables circular economy practices via prototyping | Versatile, great for labs and training [135]
Baxter | Rethink Robotics | Light assembly, education | Guided teaching, dual-arm collaboration | Flexible for multiple tasks, supports reuse | Simple programming, safe around humans [136]
LARA | Neura Robotics | Manufacturing, logistics | Material handling, adaptive HRC | Facilitates adaptive production, reduces energy waste | AI-integrated, highly responsive [137]
SR6C | Siasun Robotics | Precision assembly | High-precision tasks, inspection | High precision minimizes material loss and defects | Compact and accurate design [138]
Table 10. Comparative table of methods [142,144].
Method | Hardware Used | Benefits | Measured Outcomes | Reference
VR Simulator | HTC Vive Pro | Safe, immersive learning | Reduced errors, faster completion | [142]
AR Step Guidance | Microsoft HoloLens | Hands-free, real-time support | Improved accuracy and efficiency | [142]
MR Collaborative | CAVE System | Multi-user training scenarios | Enhanced coordination | [144]
CNN Object Detection | Epson Moverio AR | Automated guidance, fast training | Reliable identification in AR tasks | [144]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.