Search Results (124)

Search Parameters:
Keywords = NASA-TLX

25 pages, 4608 KB  
Article
Comparison of Multi-View and Merged-View Mining Vehicle Teleoperation Systems Through Eye-Tracking
by Alireza Kamran Pishhesari, Mahdi Shahsavar, Amin Moniri-Morad and Javad Sattarvand
Mining 2026, 6(1), 3; https://doi.org/10.3390/mining6010003 - 12 Jan 2026
Viewed by 105
Abstract
While multi-view visualization systems are widely used for mining vehicle teleoperation, they often impose high cognitive load and restrict operator attention. To explore a more efficient alternative, this study evaluated a merged-view interface that integrates multiple camera perspectives into a single coherent display. In a controlled experiment, 35 participants navigated a teleoperated robot along a 50 m lab-scale path representative of an underground mine under both multi-view and merged-view conditions. Task performance and eye-tracking data—including completion time, path adherence, and speed-limit violations—were collected for comparison. The merged-view system enabled 6% faster completion times, 21% higher path adherence, and 28% fewer speed-limit violations. Eye-tracking metrics indicated more efficient and distributed attention: blink rate decreased by 29%, fixation duration shortened by 18%, saccade amplitude increased by 11%, and normalized gaze-transition entropy rose by 14%, reflecting broader and more adaptive scanning. NASA-TLX scores further showed a 27% reduction in perceived workload. Regression-based sensitivity analysis revealed that gaze entropy was the strongest predictor of efficiency in the multi-view condition, while fixation duration dominated under merged-view visualization. For path adherence, blink rate was most influential in the multi-view setup, whereas fixation duration became key in merged-view operation. Overall, the results indicated that merged-view visualization improved visual attention distribution and reduced cognitive tunneling indicators in a controlled laboratory teleoperation task, offering early-stage, interface-level insights motivated by mining-relevant teleoperation challenges. Full article
(This article belongs to the Special Issue Mine Automation and New Technologies, 2nd Edition)
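The normalized gaze-transition entropy reported above can be sketched as follows. This is a generic implementation of the standard formulation (conditional entropy of AOI-to-AOI transitions weighted by visit frequency, normalized by log2 of the number of AOIs), not the authors' code; the function and variable names are illustrative.

```python
import numpy as np

def normalized_gaze_transition_entropy(aoi_sequence, n_aois):
    """Entropy of transitions between areas of interest (AOIs), in [0, 1]."""
    # Count transitions between consecutive AOI fixations.
    counts = np.zeros((n_aois, n_aois))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[a, b] += 1
    row_totals = counts.sum(axis=1)
    pi = row_totals / row_totals.sum()  # empirical visit distribution
    H = 0.0
    for i in range(n_aois):
        if row_totals[i] == 0:
            continue
        p = counts[i] / row_totals[i]   # transition probabilities out of AOI i
        p = p[p > 0]
        H -= pi[i] * np.sum(p * np.log2(p))
    return H / np.log2(n_aois)          # normalize by the maximum entropy
```

Values near 1 indicate broad, distributed scanning; values near 0 indicate rigid, repetitive transition patterns such as those associated with cognitive tunneling.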

29 pages, 5168 KB  
Article
Effects of Dual-Operator Modes on Team Situation Awareness: A Non-Dyadic HMI Perspective in Intelligent Coal Mines
by Xiaofang Yuan, Xinxiang Zhang, Jiawei He and Linhui Sun
Appl. Sci. 2025, 15(24), 13222; https://doi.org/10.3390/app152413222 - 17 Dec 2025
Viewed by 265
Abstract
Under the context of non-dyadic human–machine interaction in intelligent coal mines, this study investigates the impact of different dyadic collaboration modes on Team Situation Awareness (TSA). Based on a simulated coal mine monitoring task, the experiment compares four working modes—Individual Operation, Supervised Operation, Cooperative Operation, and Divided-task Operation—across tasks of varying complexity. TSA was assessed using both objective (SAGAT) and subjective (SART) measures, alongside parallel evaluations of task performance and workload (NASA-TLX). The results demonstrate that, compared to Individual or Supervised Operation, both Cooperative and Divided-task Operation significantly enhance TSA and task performance. Cooperative Operation improves information integration and comprehension, while Divided-task Operation enhances response efficiency by enabling focused attention on role-specific demands. Moreover, dyadic collaboration reduces cognitive workload, with the task-sharing mode showing the lowest cognitive and temporal demands. The findings indicate that clear task structuring and real-time information exchange can alleviate cognitive bottlenecks and promote accurate environmental perception. Theoretically, this study extends the application of non-dyadic interaction theory to intelligent coal mine scenarios and empirically validates a “Collaboration Mode–TSA–Performance” model. Practically, it provides design implications for adaptive collaboration frameworks in high-risk, high-complexity industrial systems, highlighting the value of dynamic role allocation in optimizing cognitive resource utilization and enhancing operational safety. Full article

18 pages, 1259 KB  
Article
Acute Effects of Physical, Cognitive, and Combined Exercise on Executive Functions During Postural Challenge in Older Adults with Mild Cognitive Impairment
by Eirini Giatagani, Konstantina Dipla, Dimitrios A. Patikas and Lida Mademli
Appl. Sci. 2025, 15(23), 12649; https://doi.org/10.3390/app152312649 - 28 Nov 2025
Viewed by 561
Abstract
Mild cognitive impairment (MCI) compromises executive control, yet brief exercise may yield acute cognitive benefits. We tested whether a single ~10 min bout of physical, cognitive, or combined physical-cognitive exercise modulates Stroop performance in older adults with MCI (MoCA) under heightened postural demand (tandem stance). In a within-subject design (n = 28), participants completed three sessions: physical, cognitive, and combined physical-cognitive exercise protocols. Stroop performance was tested seated and in tandem stance pre-exercise (order counterbalanced) and again in tandem stance post-exercise. Pre- and post-exercise physiological markers (heart rate, blood pressure, SpO2, glucose) and workload (NASA-TLX) were recorded; balance was assessed with a 30 s Tandem Stance Test. Posture (seated vs. tandem) did not affect baseline Stroop levels. Across sessions, Stroop performance improved acutely after the physical and cognitive exercises—most robustly after the physical session—whereas the combined condition produced the smallest changes. The physical and combined sessions increased heart rate and systolic pressure; SpO2, diastolic pressure, glucose, NASA-TLX, and tandem-stance balance were unchanged across sessions. These results indicate that a single light- to moderate-intensity session can acutely enhance executive function in individuals with MCI and that a challenging posture does not impose a dual-task cost. Furthermore, single-task exercise may be more effective for rapid cognitive gains than combined mental and physical protocols in individuals with MCI. Full article

25 pages, 7782 KB  
Article
The Human–Robot Multistation System—Visual Task Guidance and Human Initiative Scheduling for Collaborative Work Cells
by Helmut Zörrer, Alexander Hämmerle, Martin J. Kollingbaum, Gerhard Ebenhofer, Florian Steiner, Markus Ikeda, Stefan Fixl, Fabian Widmoser and Andreas Pichler
Appl. Sci. 2025, 15(22), 12230; https://doi.org/10.3390/app152212230 - 18 Nov 2025
Viewed by 585
Abstract
In this paper, we present enabling concepts for Zero Defect Manufacturing (ZDM) based on flexible human–robot interaction. We introduce the Human–Robot Multistation System (HRMS) as a novel framework for flexible, human-initiated task allocation across multiple workstations. An HRMS is defined as one or more workstations that support human–robot collaborative task execution, integrating intelligent perception and interaction systems with coordination logic to enable alternating or collaborative work. These components allow human workers to interact with the system through a gesture-based modality and to receive task assignments. An agent-based task scheduler responds to human-initiated ‘Ready’ signals to pace activities ergonomically. We built a laboratory demonstrator for an Industry 5.0 ZDM final inspection/rework use case and conducted a first pilot study (n = 5, internal participants) to evaluate system usability (SUS), perception (Godspeed), mental workload (NASA-TLX), completion times, and error rates. Results indicated technical feasibility under laboratory conditions and acceptable usability, with SUS 70.5 ± 22 (‘OK’ toward ‘Good’), overall GQS 3.2 ± 0.8, raw NASA-TLX 37 ± 16.3, mean job throughput time 232.5 ± 46.5 s, and errors in 9/10 jobs (E1–E4). In simulation, a proximity-aware shortest-path heuristic reduced walking distance by up to 70% versus FIFO without productivity loss. We conclude that the HRMS is feasible with acceptable user experience under lab constraints, while recurrent task-level failures require mitigation and larger-scale validation. Full article
(This article belongs to the Special Issue Human–Robot Collaboration and Its Applications)
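The raw NASA-TLX score reported above (37 ± 16.3) is conventionally the unweighted mean of the six subscale ratings, each on a 0–100 scale. A minimal, generic sketch (not tied to the authors' implementation):

```python
def raw_tlx(mental, physical, temporal, performance, effort, frustration):
    """Raw NASA-TLX: unweighted mean of the six subscale ratings (each 0-100).

    The original (weighted) TLX instead weights each subscale by the number of
    times it wins in 15 pairwise importance comparisons; the raw variant skips
    that weighting step.
    """
    return (mental + physical + temporal + performance + effort + frustration) / 6
```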

15 pages, 2942 KB  
Article
Development and Evaluation of a Next-Generation Medication Safety Support System Based on AI and Mixed Reality: A Study from South Korea
by Nathan Lucien Vieira, Su Jin Kim, Sangah Ahn, Ji Sim Yoon, Sook Hyun Park, Jeong Hee Hong, Min-Jeoung Kang, Il Kim, Meong Hi Son, Won Chul Cha and Junsang Yoo
Appl. Sci. 2025, 15(22), 12002; https://doi.org/10.3390/app152212002 - 12 Nov 2025
Viewed by 1122
Abstract
Medication errors pose a significant threat to patient safety. Although Bar-Code Medication Administration (BCMA) has reduced error rates, it is constrained by handheld devices, workflow interruptions, and incomplete safeguards against wrong patients, wrong doses, or drug incompatibility. In this study, we developed and evaluated a next-generation BCMA system by integrating artificial intelligence and mixed reality technologies for real-time safety checks: Optical Character Recognition verifies medication–label concordance, facial recognition confirms patient identity, and a rules engine evaluates drug–diluent compatibility. Computer vision models achieved high recognition accuracy for drug vials (100%), medication labels (90%), QR codes (90%), and patient faces (90%), with slightly lower performance for intravenous fluids (80%). A mixed-methods evaluation was conducted in a simulated environment using the System Usability Scale (SUS), Reduced Instructional Materials Motivation Survey (RIMMS), Virtual Reality Sickness Questionnaire (VRSQ), and NASA Task Load Index (NASA-TLX). The results indicated excellent usability (median SUS = 82.5/100), strong user motivation (RIMMS = 3.7/5), minimal cybersickness (VRSQ = 0.4/6), and manageable cognitive workload (NASA-TLX = 31.7/100). Qualitative analysis highlighted the system’s potential to streamline workflow and serve as a digital “second verifier.” These findings suggest strong potential for clinical integration, enhancing medication safety at the point of care. Full article
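The "median SUS = 82.5/100" above follows the standard System Usability Scale scoring rule, which can be sketched generically (this is the published SUS procedure, not code from the study):

```python
def sus_score(responses):
    """System Usability Scale score from 10 item ratings, each 1-5.

    Odd-numbered items are positively worded and contribute (rating - 1);
    even-numbered items are negatively worded and contribute (5 - rating).
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))  # i is 0-based, so even i = odd item
    return total * 2.5
```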

19 pages, 2278 KB  
Article
Virtual Reality and Digital Twins for Mechanical Engineering Lab Education: Applications in Composite Manufacturing
by Ali Darejeh, Guy Chilcott, Ebrahim Oromiehie and Sara Mashayekh
Educ. Sci. 2025, 15(11), 1519; https://doi.org/10.3390/educsci15111519 - 10 Nov 2025
Viewed by 962
Abstract
This study investigates the effectiveness of a virtual reality (VR) simulation for teaching the hand lay-up process in composite manufacturing within mechanical engineering education. A within-subjects experiment involving 17 undergraduate mechanical engineering students compared the VR-based training with conventional physical laboratory instruction. Task performance, cognitive load, and learner perceptions were measured using procedural accuracy scores, completion times, NASA-TLX workload ratings, and post-task interviews. Results indicated that while participants required more time to complete the task in VR, procedural accuracy was comparable between VR and physical labs. VR significantly reduced mental, physical, and effort-related demands but elicited higher frustration levels, primarily due to navigation challenges and motion discomfort. Qualitative feedback showed strong learner preference for VR, citing its hazard-free environment, repeatability, and step-by-step guidance. These findings suggest that VR offers a viable and pedagogically effective alternative or complement to traditional composite-manufacturing training, particularly in contexts where access to physical facilities is limited. Future work should examine long-term skill retention, incorporate haptic feedback for tactile realism, and explore hybrid models combining VR and physical practice to optimise learning outcomes. Full article

22 pages, 8968 KB  
Article
A Comparative Study of Authoring Performances Between In-Situ Mobile and Desktop Tools for Outdoor Location-Based Augmented Reality
by Komang Candra Brata, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, Prismahardi Aji Riyantoko, Noprianto and Mustika Mentari
Information 2025, 16(10), 908; https://doi.org/10.3390/info16100908 - 16 Oct 2025
Viewed by 737
Abstract
In recent years, Location-Based Augmented Reality (LAR) systems have been increasingly implemented in various applications for tourism, navigation, education, and entertainment. Unfortunately, LAR content creation with conventional desktop-based authoring tools has become a bottleneck, as it requires time-consuming, skilled work. Previously, we proposed an in-situ mobile authoring tool as an efficient solution to this problem, offering direct authoring interactions in real-world environments using a smartphone. However, existing comparisons between our tool and conventional ones are insufficient to demonstrate its superiority, particularly in terms of interaction, authoring performance, and cognitive workload: our tool uses 6DoF device movement for spatial input, while desktop tools rely on mouse pointing. In this paper, we present a comparative study of authoring performance between the tools across three authoring phases: (1) Point of Interest (POI) location acquisition, (2) AR object creation, and (3) AR object registration. For the conventional tool, we adopt Unity and the ARCore SDK. As a real-world application, we target LAR content creation for pedestrian landmark annotation across campus environments at Okayama University, Japan, and Brawijaya University, Indonesia, and identify task-level bottlenecks in both tools. In our experiments, we asked 20 participants aged 22 to 35 with different levels of LAR development experience to complete equivalent authoring tasks in an outdoor campus environment, creating various LAR contents. We measured task completion time, phase-wise contribution, and cognitive workload using NASA-TLX. The results show that our tool enabled faster content creation with 60% lower cognitive load, whereas the desktop tool demanded greater mental effort for manual data input and object verification. Full article
(This article belongs to the Section Information Applications)

13 pages, 962 KB  
Article
Enhancing Cyber Situational Awareness Through Dynamic Adaptive Symbology: The DASS Framework
by Nicholas Macrino, Sergio Pallas Enguita and Chung-Hao Chen
Sensors 2025, 25(20), 6300; https://doi.org/10.3390/s25206300 - 11 Oct 2025
Viewed by 865
Abstract
The static nature of traditional military symbology, such as MIL-STD-2525D, hinders effective real-time threat detection and response in modern cybersecurity operations. This research introduces the Dynamic Adaptive Symbol System (DASS), a novel framework enhancing cyber situational awareness in military and enterprise environments. The DASS addresses static symbology limitations by employing a modular Python 3.10 architecture that uses machine learning-driven threat detection to dynamically adapt symbol visualization based on threat severity and context. Empirical testing assessed the DASS against a MIL-STD-2525D baseline using active cybersecurity professionals. Results show that the DASS significantly improves threat identification rates by 30% and reduces response times by 25%, while achieving 90% accuracy in symbol interpretation. Although the current implementation focuses on virus-based scenarios, the DASS successfully prioritizes critical threats and reduces operator cognitive load. Full article

22 pages, 2739 KB  
Article
An Evaluation of VR Games and Haptic Feedback for Upper Limb Rehabilitation for Individuals with Parkinson’s Disease
by Meldin Bektic, Brittany E. Smith, Angela L. Ridgel and Kwangtaek Kim
Machines 2025, 13(10), 930; https://doi.org/10.3390/machines13100930 - 9 Oct 2025
Cited by 1 | Viewed by 1116
Abstract
Three-dimensional virtual reality (VR) games incorporating haptic feedback were developed to support upper-limb rehabilitation in individuals with Parkinson’s disease (PD). Three interactive games (fishing, archery, and mining) were designed to simulate resistance, tension, and vibration using a haptic device, thereby encouraging motor tasks such as pulling, lifting, and lateral maneuvers. Both individuals with PD and healthy participants completed structured sessions, with performance measured through task completion time, scores, and movement trajectories, alongside perceived workload via the NASA-TLX. Results showed that higher haptic resistance levels reduced tremor amplitude by up to 10.55% in participants with PD and improved task completion efficiency by an average of 12.4% across games. These findings demonstrate the potential of personalized haptic feedback to stabilize motor control and enhance performance in VR-based rehabilitation. Importantly, individuals with PD demonstrated improved motor control under moderate haptic resistance, indicating the potential of adjustable haptic feedback for tailoring rehabilitation. These findings underscore the value of VR-haptic games as engaging and adaptable rehabilitation tools, supporting personalized interventions for individuals with PD. Full article

18 pages, 747 KB  
Article
Aurea EDEN: A 3D Visualization Approach for E-Commerce Customer Journey Analytics
by Robert Waszkowski
J. Theor. Appl. Electron. Commer. Res. 2025, 20(4), 279; https://doi.org/10.3390/jtaer20040279 - 9 Oct 2025
Viewed by 1183
Abstract
This paper introduces and empirically evaluates Aurea EDEN, a novel 3D visualization library for e-commerce customer journey analytics. Traditional 2D dashboards often fragment performance data from the structural context of the conversion funnel, increasing analysts’ cognitive load. To address this, Aurea EDEN provides a method for programmatically generating 3D funnel diagrams where Key Performance Indicators (KPIs) are encoded directly as geometric properties. This approach is validated through two empirical studies: a comparative user study and a technical performance benchmark. The user study demonstrates that, for common analytical tasks, the 3D visualization significantly reduces task completion time and perceived cognitive workload (measured via NASA-TLX) while improving usability (measured via SUS) compared to a traditional 2D dashboard baseline. The benchmark confirms the library’s scalability for complex, real-world scenarios. These findings indicate that integrating KPIs as geometric attributes in a 3D model offers a more effective and efficient alternative for analyzing e-commerce funnels, providing a tangible contribution to the fields of information visualization and e-commerce analytics. Full article
(This article belongs to the Section Data Science, AI, and e-Commerce Analytics)

18 pages, 1703 KB  
Article
Driver Distraction Detection in Conditionally Automated Driving Using Multimodal Physiological and Ocular Signals
by Yang Zhou, Yunxing Chen and Yixi Zhang
Electronics 2025, 14(19), 3811; https://doi.org/10.3390/electronics14193811 - 26 Sep 2025
Viewed by 1065
Abstract
The deployment of conditionally automated vehicles raises safety concerns, as drivers often engage in non-driving-related tasks (NDRTs), delaying takeover responses. This study investigates driver state monitoring (DSM) using multimodal physiological and ocular signals from the TD2D (Takeover during Distracted L2 Automated Driving) dataset, which includes synchronized electrocardiogram (ECG), photoplethysmography (PPG), electrodermal activity (EDA), and eye-tracking data from 50 participants across ten task conditions. Tasks were reassigned into three workload-based categories informed by NASA-TLX ratings. A unified preprocessing and feature extraction pipeline was applied, and 25 informative features were selected. Random Forest outperformed Support Vector Machine and Multilayer Perceptron models, achieving 0.96 accuracy in within-subject evaluation and 0.69 in cross-subject evaluation with subject-disjoint splits. Sensitivity analysis showed that temporal overlap had a stronger effect than window length, with moderately long windows (5–8 s) and partial overlap providing the most robust generalization. SHAP (Shapley Additive Explanations) analysis confirmed ocular features as the dominant discriminators, while EDA contributed complementary robustness. Additional validation across age strata confirmed stable performance beyond the training cohort. Overall, the results highlight the effectiveness of physiological and ocular measures for distraction detection in automated driving and the need for strategies to further improve cross-driver robustness. Full article
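The gap between within-subject (0.96) and cross-subject (0.69) accuracy above hinges on subject-disjoint splits: no participant's data may appear in both the training and test folds. A minimal splitter illustrating the idea (pure Python; the round-robin fold assignment is an illustrative choice, not the paper's protocol):

```python
def subject_disjoint_splits(subjects, n_splits):
    """Yield (train_idx, test_idx) pairs where no subject spans both sides.

    `subjects` gives the subject ID of each sample, in sample order.
    """
    unique = sorted(set(subjects))
    # Assign whole subjects to folds round-robin, then hold out one fold at a time.
    folds = [unique[i::n_splits] for i in range(n_splits)]
    for held_out in folds:
        held = set(held_out)
        test = [i for i, s in enumerate(subjects) if s in held]
        train = [i for i, s in enumerate(subjects) if s not in held]
        yield train, test
```

Libraries such as scikit-learn provide the same behavior via `GroupKFold` with `groups=subjects`; evaluating with plain shuffled folds instead silently leaks each driver's data into training and inflates accuracy.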

15 pages, 5189 KB  
Article
Assembly Complexity Index (ACI) for Modular Robotic Systems: Validation and Conceptual Framework for AR/VR-Assisted Assembly
by Kartikeya Walia and Philip Breedon
Machines 2025, 13(10), 882; https://doi.org/10.3390/machines13100882 - 24 Sep 2025
Viewed by 954
Abstract
The growing adoption of modular robotic systems presents new challenges in ensuring ease of assembly, deployment, and reconfiguration, especially for end-users with varying technical expertise. This study proposes and validates an Assembly Complexity Index (ACI) framework, combining subjective workload (NASA Task Load Index) and task complexity (Task Complexity Index) into a unified metric to quantify assembly difficulty. Twelve participants performed modular manipulator assembly tasks under supervised and unsupervised conditions, enabling evaluation of learning effects and assembly complexity dynamics. Statistical analyses, including Cronbach’s alpha, correlation studies, and paired t-tests, demonstrated the framework’s internal consistency, sensitivity to user learning, and ability to capture workload-performance trade-offs. Additionally, we propose an augmented reality (AR) and virtual reality (VR) integration workflow to further mitigate assembly complexity, offering real-time guidance and adaptive assistance. The proposed framework not only supports design iteration and operator training but also provides a human-centered evaluation methodology applicable to modular robotics deployment in Industry 4.0 environments. The AR/VR-assisted workflow presented here is proposed as a conceptual extension and will be validated in future work. Full article

22 pages, 3918 KB  
Article
Evaluating Mental Workload and Productivity in Manufacturing: A Neuroergonomic Study of Human–Robot Collaboration Scenarios
by Carlo Caiazzo, Marko Djapan, Marija Savkovic, Djordje Milojevic, Arso Vukicevic and Luca Gualtieri
Machines 2025, 13(9), 783; https://doi.org/10.3390/machines13090783 - 1 Sep 2025
Cited by 1 | Viewed by 1980
Abstract
The field of human–robot collaboration (HRC) still lacks research on evaluating mental workload (MWL) through objective measurement to assess the mental state of operators in assembly tasks. This study presents a comparative neuroergonomic analysis of mental workload and productivity in three laboratory experimental conditions: in the first, the participant assembles a component without the intervention of the robot (standard scenario); in the second, the participant performs the same activity in collaboration with the robot (collaborative scenario); in the third, the participant is fully guided through the task in collaboration with the robot (collaborative guided scenario) via a system of guiding labels designed according to Poka-Yoke principles. Participants' mental workload was assessed through a combined analysis of subjective (NASA-TLX) and objective (electroencephalogram, EEG) measures. Objective MWL was quantified as the β/α power ratio (beta: a stress indicator; alpha: a relaxation indicator). Furthermore, observational measurements were used to calculate a productivity index in terms of accurately assembled components across the three scenarios. Repeated-measures ANOVA showed that mental workload decreased significantly in the activities involving the cobot. Productivity also increased from the manual scenario to the cobot-assisted one (18.4%) and to the collaborative guided scenario supported by Poka-Yoke principles (33.87%). Full article
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)
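The β/α index above is a ratio of EEG band powers. A generic sketch using an FFT periodogram (numpy only; the band edges of 8–13 Hz for alpha and 13–30 Hz for beta are the conventional ranges and an assumption here, as is the sampling rate):

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Mean power spectral density in [f_lo, f_hi) via an FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].mean()

def beta_alpha_ratio(eeg, fs=250):
    """Objective MWL index: beta (13-30 Hz) over alpha (8-13 Hz) power."""
    return band_power(eeg, fs, 13, 30) / band_power(eeg, fs, 8, 13)
```

In practice a windowed estimator such as Welch's method (e.g. `scipy.signal.welch`) would be preferred over a single raw periodogram to reduce variance; the ratio itself is computed the same way.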

31 pages, 2118 KB  
Article
Leveraging Multimodal Information for Web Front-End Development Instruction: Analyzing Effects on Cognitive Behavior, Interaction, and Persistent Learning
by Ming Lu and Zhongyi Hu
Information 2025, 16(9), 734; https://doi.org/10.3390/info16090734 - 26 Aug 2025
Cited by 1 | Viewed by 1637
Abstract
This study focuses on the mechanisms of behavior and cognition, providing a comprehensive analysis of how multimodal learning theory can innovate teaching practice in the “Web Front-end Development” course. The study integrates different sensory modes, such as vision, hearing, and haptic feedback, with the core objective of exploring how this multi-sensory integration affects students' cognitive engagement, classroom interaction styles, and long-term learning behavior. We employed a mixed-methods approach: a quasi-experiment involving 120 undergraduate students, complemented by behavioral coding, in-depth interviews, and longitudinal tracking. Results show that multimodal teaching significantly reduces cognitive load (a 34.9% reduction measured by NASA-TLX), increases the frequency of collaborative interactions (2.3 times per class), and extends voluntary practice time (8.5 h per week). Mechanistically, these effects are mediated by enhanced embodied cognition (strengthening motor-sensory memory), optimized cognitive load distribution (reducing extraneous mental effort), and the fulfillment of intrinsic motivational needs (autonomy, competence, relatedness) as framed by self-determination theory. This study bridges educational technology and behavioral science, offering a comprehensive framework with practical guidance for designing technology-enhanced learning environments in which learners not only master technical skills more smoothly but also sustain their enthusiasm and engagement over time. Full article
(This article belongs to the Special Issue Digital Systems in Higher Education)

24 pages, 6883 KB  
Article
A Human-in-the-Loop Study of Eye-Movement-Based Control for Workload Reduction in Delayed Teleoperation of Ground Vehicles
by Qiang Zhang, Aiping Zhao, Feng Zhao and Wangyu Wu
Machines 2025, 13(8), 735; https://doi.org/10.3390/machines13080735 - 18 Aug 2025
Cited by 2 | Viewed by 1650
Abstract
Teleoperated ground vehicles (TGVs) are widely applied in hazardous and dynamic environments, where communication delay and low transparency increase operator workload and reduce control performance. This study explores the cognitive and physiological workload associated with such conditions and evaluates the effectiveness of an eye-movement-based predicted trajectory guidance control (ePTGC) framework in alleviating operator burden. A human-in-the-loop teleoperation experiment was conducted using a 2 × 2 within-subject design, incorporating subjective ratings (NASA-TLX), objective performance metrics from a dual-task paradigm (one-back memory task), and multimodal physiological indicators (ECG and EDA). Results show that delay and low transparency significantly elevated subjective, objective, and physiological workload levels. Compared to direct control (DC), the ePTGC framework significantly reduced workload across all three dimensions, particularly under high-delay conditions, while maintaining or even improving task performance. Notably, ePTGC enabled even lower workload levels under low-delay conditions than the baseline condition. These findings demonstrate the potential of the ePTGC framework to enhance teleoperation stability and reduce operator burden in delay-prone and low-transparency scenarios. Full article
