Search Results (39)

Search Parameters:
Keywords = teleoperated driving

29 pages, 78455 KB  
Article
End-To-End Teleoperated Driving Video Transmission Under 6G with AI and Blockchain
by Ignacio Benito Frontelo, Pablo Pérez, Nuria Oyaga and Marta Orduna
Sensors 2026, 26(2), 571; https://doi.org/10.3390/s26020571 - 14 Jan 2026
Abstract
Intelligent vehicle networks powered by machine learning, AI, and blockchain are transforming various sectors beyond transportation. In this context, being able to remotely drive a vehicle is key to enhancing autonomous driving systems. After deploying end-to-end teleoperated driving systems under 5G networks, the need arises to address complex challenges in other critical areas. These challenges span the different technologies that must be integrated into such a system: video transmission and visualization technologies, artificial intelligence techniques, and network optimization features, incorporating haptic devices and critical data security. This article explores how these technologies can enhance teleoperated driving experiences already executed in real-life environments by analyzing the quality of the video transmitted over the network, exploring its correlation with current state-of-the-art AI object detection algorithms, analyzing the extended reality and digital twin paradigms, obtaining the maximum possible performance from forthcoming 6G networks, and proposing a decentralized security scheme to ensure the privacy and safety of the end users of teleoperated driving infrastructures. An integrated set of conclusions and recommendations is given to outline the design of future teleoperated driving systems in the forthcoming years. Full article
(This article belongs to the Special Issue Advances in Intelligent Vehicular Networks and Communications)
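As a rough illustration of the kind of analysis the abstract describes, the sketch below computes per-frame PSNR for transmitted video and pairs it with object-detector confidences so the two can be correlated; the frame source and the detector are placeholders, not the authors' pipeline.

```python
# Minimal sketch: correlate per-frame video quality (PSNR) with object-detector
# confidence. Purely illustrative; the frame pairs and the detector are stand-ins,
# not the pipeline used in the article.
import numpy as np

def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a reference and a decoded frame."""
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def quality_vs_detection(frame_pairs, detector):
    """Return (PSNR, mean detection confidence) per frame for correlation analysis."""
    samples = []
    for ref, dec in frame_pairs:
        confidences = [d["score"] for d in detector(dec)]  # detector is a placeholder callable
        samples.append((psnr(ref, dec), np.mean(confidences) if confidences else 0.0))
    return np.array(samples)

# np.corrcoef(samples[:, 0], samples[:, 1]) then gives the PSNR/confidence correlation.
```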
22 pages, 2616 KB  
Article
Safety, Efficiency, and Mental Workload of Predictive Display in Simulated Teledriving
by Oren Musicant, Alexander Kuperman and Rotem Barachman
Sensors 2026, 26(1), 221; https://doi.org/10.3390/s26010221 - 29 Dec 2025
Viewed by 238
Abstract
Vehicle remote driving services are increasingly used in urban settings. Yet, vehicle-operator communication time delays may pose a challenge for teleoperators in maintaining safety and efficiency. The purpose of this study was to examine whether Predictive Displays (PDs), which show the vehicle’s predicted real-time position, improve performance, safety, and mental workload under moderate time delays typical of 4G/5G networks. Twenty-nine participants drove a simulated urban route containing pedestrian crossings, overtaking, gap acceptance, and traffic light challenges under three conditions: 50 ms delay (baseline), 150 ms delay without PD, and 150 ms delay with PD. We analyzed the counts of crashes and navigation errors, task completion times, and the probability and intensity of braking and steering events, as well as self-reports of workload and usability. Results indicate that although descriptive trends showed slightly sharper steering and braking under the 150 ms delay conditions, the 150 ms delay did not significantly degrade performance or increase workload compared with the 50 ms baseline. In addition, the PD neither improved performance nor reduced workload. Overall, participants demonstrated tolerance to typical 4G/5G network time delays, leaving little room for improvement and calling the necessity of PDs into question. Full article
(This article belongs to the Special Issue Sensors and Sensor Fusion for Decision Making for Autonomous Driving)
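To make the predictive-display idea concrete, here is a minimal sketch that extrapolates the vehicle's last reported state forward by the communication delay using a kinematic bicycle model; the model choice and the 2.7 m wheelbase are assumptions for illustration, not the study's implementation.

```python
# Minimal sketch of the idea behind a predictive display: extrapolate the last
# reported vehicle state forward by the communication delay so the operator sees
# an estimate of where the vehicle is "now". Kinematic bicycle model and the
# 2.7 m wheelbase are assumptions, not the study's implementation.
import math
from dataclasses import dataclass

@dataclass
class VehicleState:
    x: float        # m
    y: float        # m
    heading: float  # rad
    speed: float    # m/s

def predict_pose(state: VehicleState, steer_angle: float, delay_s: float,
                 wheelbase: float = 2.7, dt: float = 0.01) -> VehicleState:
    """Dead-reckon the state forward by delay_s using a kinematic bicycle model."""
    x, y, yaw = state.x, state.y, state.heading
    t = 0.0
    while t < delay_s:
        x += state.speed * math.cos(yaw) * dt
        y += state.speed * math.sin(yaw) * dt
        yaw += state.speed / wheelbase * math.tan(steer_angle) * dt
        t += dt
    return VehicleState(x, y, yaw, state.speed)

# Example: at 10 m/s with a 150 ms delay the predicted pose is ~1.5 m ahead of
# the last reported position.
print(predict_pose(VehicleState(0.0, 0.0, 0.0, 10.0), steer_angle=0.0, delay_s=0.150))
```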
18 pages, 2109 KB  
Article
Development and Application of a Vertical-Agnostic Methodological Assessment Framework for Evaluation of 5G-Based Use Cases
by Maximilian Brochhaus, Pierre Kehl, Dennis Grunert, Niels König, Robert H. Schmitt, Marit Zöcklein, Sigrid Brell-Cokcan and Jad Nasreddine
Electronics 2025, 14(23), 4771; https://doi.org/10.3390/electronics14234771 - 4 Dec 2025
Viewed by 351
Abstract
This paper addresses the industrial adoption gap of 5G/6G by presenting a novel, vertical-agnostic Methodological Assessment Framework (MAF). The MAF bridges the Network Key Performance Indicators (KPIs) of 5G networks with user-centric User-KPIs and User-KVIs (Key Value Indicators) to quantify the techno-economic and societal value propositions of industrial 5G use cases from an end-user perspective. First, a detailed description of the MAF and its underlying principles is given, explaining how a use case’s value proposition can be captured. Second, the MAF is applied to three industrial use cases from the manufacturing, construction, and automotive verticals, using the individual User-KPIs and User-KVIs to quantify their respective value propositions. The results show that the use of 5G can lead to enhanced process capability and reproducibility as well as increased insight into different processes. In addition, it is shown that the MAF objectively quantifies user value across diverse verticals and strengthens interdisciplinary alignment between them. Full article
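Purely as a hypothetical illustration of the bridging the MAF performs, the sketch below rolls measured Network KPIs up into example User-KPIs and User-KVIs; all indicator names and the scoring rule are invented, since the paper defines its own indicators per use case.

```python
# Hypothetical sketch of the kind of mapping the MAF formalizes: network-level
# measurements (Network KPIs) rolled up into user-centric indicators (User-KPIs)
# and value indicators (User-KVIs). All names and thresholds are invented for
# illustration; the paper defines its own indicators per use case.
from dataclasses import dataclass, field

@dataclass
class NetworkKPIs:
    latency_ms: float
    throughput_mbps: float
    packet_loss_pct: float

@dataclass
class UseCaseAssessment:
    network: NetworkKPIs
    user_kpis: dict = field(default_factory=dict)   # e.g. process capability
    user_kvis: dict = field(default_factory=dict)   # e.g. worker safety, sustainability

def assess_teleoperation(net: NetworkKPIs) -> UseCaseAssessment:
    """Derive example User-KPIs/KVIs from measured Network KPIs (illustrative only)."""
    return UseCaseAssessment(
        network=net,
        user_kpis={"control_loop_ok": net.latency_ms < 150 and net.packet_loss_pct < 1.0},
        user_kvis={"operator_removed_from_hazard_zone": True},
    )

print(assess_teleoperation(NetworkKPIs(latency_ms=45.0, throughput_mbps=80.0, packet_loss_pct=0.2)))
```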
2413 KB  
Proceeding Paper
Enhanced Teleoperation for Manual Remote Driving: Extending ADAS Remote Control Towards Full Vehicle Operation
by İsa Karaböcek, Ege Özdemir and Batıkan Kavak
Eng. Proc. 2025, 118(1), 40; https://doi.org/10.3390/ECSA-12-26609 - 7 Nov 2025
Viewed by 224
Abstract
This study advances prior work on the remote control of Advanced Driver Assistance Systems (ADASs) by introducing a full manual teleoperation mode that enables remote control over both longitudinal and lateral vehicle dynamics via accelerator, brake, and steering inputs. The core contribution is a flexible, dual-mode teleoperation architecture that allows seamless switching between assisted ADAS control and full manual operation, depending on driving context or system limitations. While teleoperation has been explored primarily for autonomous fallback or direct remote driving, few existing systems integrate dynamic mode-switching in a unified, real-time control framework. Our system leverages a wireless game controller and a Robot Operating System (ROS)-based vehicle software stack to translate remote human inputs into low-latency vehicle actions, supporting robust and adaptable remote driving. This design maintains a human-in-the-loop approach, offering improved responsiveness in complex environments, edge-case scenarios, or during autonomous system fallback. The proposed solution extends the applicability of teleoperation to a broader range of use cases, including remote assistance, fleet management, and emergency response. Its novelty lies in the integration of dual-mode teleoperation within a modular architecture, bridging the gap between ADAS-enhanced autonomy and full remote manual control. Full article
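A minimal sketch of the dual-mode idea, assuming ROS 1 with sensor_msgs/Joy and geometry_msgs/Twist: one gamepad feeds either ADAS setpoint adjustments or direct longitudinal/lateral commands, toggled by a button. Topic names, button/axis indices, and scaling are placeholders rather than the authors' stack.

```python
#!/usr/bin/env python
# Sketch of the dual-mode idea: one gamepad, two command paths (ADAS setpoints vs.
# direct longitudinal/lateral control), toggled at runtime. Assumes ROS 1 with
# sensor_msgs/Joy and geometry_msgs/Twist; topic names, button/axis indices, and
# scaling are placeholders, not the authors' software stack.
import rospy
from sensor_msgs.msg import Joy
from geometry_msgs.msg import Twist
from std_msgs.msg import Float32

class DualModeTeleop:
    def __init__(self):
        self.manual_mode = False
        self.cmd_pub = rospy.Publisher("/teleop/cmd_vel", Twist, queue_size=1)
        self.cruise_pub = rospy.Publisher("/adas/cruise_setpoint", Float32, queue_size=1)
        rospy.Subscriber("/joy", Joy, self.on_joy)

    def on_joy(self, msg):
        if msg.buttons[0]:                    # placeholder mode-switch button (no debounce, for brevity)
            self.manual_mode = not self.manual_mode
        if self.manual_mode:                  # full manual: throttle/brake + steering
            cmd = Twist()
            cmd.linear.x = 5.0 * msg.axes[1]  # accelerator/brake axis -> target speed
            cmd.angular.z = 0.5 * msg.axes[0] # steering axis -> yaw rate
            self.cmd_pub.publish(cmd)
        else:                                 # assisted: nudge the ADAS cruise setpoint
            self.cruise_pub.publish(Float32(data=max(0.0, 10.0 + 5.0 * msg.axes[1])))

if __name__ == "__main__":
    rospy.init_node("dual_mode_teleop")
    DualModeTeleop()
    rospy.spin()
```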
15 pages, 1297 KB  
Review
Haircutting Robots: From Theory to Practice
by Shuai Li
Automation 2025, 6(3), 47; https://doi.org/10.3390/automation6030047 - 18 Sep 2025
Viewed by 6022
Abstract
The field of haircutting robots is poised for a significant transformation, driven by advancements in artificial intelligence, mechatronics, and humanoid robotics. This perspective paper examines the emerging market for haircutting robots, propelled by decreasing hardware costs and a growing demand for automated grooming services. We review foundational technologies, including advanced hair modeling, real-time motion planning, and haptic feedback, and analyze their application in both teleoperated and fully autonomous systems. Key technical requirements and challenges in safety certification are discussed in detail. Furthermore, we explore how cutting-edge technologies like direct-drive systems, large language models, virtual reality, and big data collection can empower these robots to offer a human-like, personalized, and efficient experience. We propose a business model centered on supervised autonomy, which enables early commercialization and sets a path toward future scalability. This perspective paper provides a theoretical and technical framework for the future deployment and commercialization of haircutting robots, highlighting their potential to create a new sector in the automation industry. Full article
(This article belongs to the Section Robotics and Autonomous Systems)
18 pages, 15231 KB  
Article
Stereo Vision-Based Underground Muck Pile Detection for Autonomous LHD Bucket Loading
by Emilia Hennen, Adam Pekarski, Violetta Storoschewich and Elisabeth Clausen
Sensors 2025, 25(17), 5241; https://doi.org/10.3390/s25175241 - 23 Aug 2025
Viewed by 1220
Abstract
To increase the safety and efficiency of underground mining processes, it is important to advance automation. An important part of that is to achieve autonomous material loading using load–haul–dump (LHD) machines. To be able to autonomously load material from a muck pile, it is crucial to first detect and characterize it in terms of spatial configuration and geometry. Currently, the technologies available on the market that do not require an operator at the stope are only applicable in specific mine layouts or use 2D camera images of the surroundings that can be observed from a control room for teleoperation. However, due to missing depth information, estimating distances is difficult. This work presents a novel approach to muck pile detection developed as part of the EU-funded Next Generation Carbon Neutral Pilots for Smart Intelligent Mining Systems (NEXGEN SIMS) project. It uses a stereo camera mounted on an LHD to gather three-dimensional data of the surroundings. By applying a topological algorithm, a muck pile can be located and its overall shape determined. This system can detect and segment muck piles while driving towards them at full speed. The detected position and shape of the muck pile can then be used to determine an optimal attack point for the machine. This sensor solution was then integrated into a complete system for autonomous loading with an LHD. In two different underground mines, it was tested and demonstrated that the machines were able to reliably load material without human intervention. Full article
(This article belongs to the Section Sensing and Imaging)
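The following is a deliberately simplified illustration, not the topological algorithm from the paper: given a stereo point cloud in the LHD frame, it builds a ground-height profile along the driving direction and flags the range at which the floor starts rising, as a rough estimate of the muck pile toe.

```python
# Simplified illustration only -- not the topological algorithm from the paper.
# Given a 3D point cloud from the stereo camera (x forward, y left, z up, in the
# LHD frame), build a ground-height profile along the driving direction and flag
# the range bin where the floor starts rising, as a rough muck-pile "toe".
import numpy as np

def muck_pile_toe(points: np.ndarray, bin_m: float = 0.25,
                  max_range_m: float = 20.0, rise_thresh_m: float = 0.4):
    """points: (N, 3) array; returns distance to the first sustained height rise, or None."""
    mask = (points[:, 0] > 0.0) & (points[:, 0] < max_range_m)
    pts = points[mask]
    bins = (pts[:, 0] // bin_m).astype(int)
    n_bins = int(max_range_m / bin_m)
    profile = np.full(n_bins, np.nan)
    for b in range(n_bins):
        z = pts[bins == b, 2]
        if z.size:
            profile[b] = np.median(z)          # median height per range bin
    floor = np.nanmin(profile)                 # approximate tunnel floor height
    rising = np.where(profile - floor > rise_thresh_m)[0]
    return float(rising[0] * bin_m) if rising.size else None
```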
18 pages, 8744 KB  
Article
A User-Centered Teleoperation GUI for Automated Vehicles: Identifying and Evaluating Information Requirements for Remote Driving and Assistance
by Maria-Magdalena Wolf, Henrik Schmidt, Michael Christl, Jana Fank and Frank Diermeyer
Multimodal Technol. Interact. 2025, 9(8), 78; https://doi.org/10.3390/mti9080078 - 31 Jul 2025
Cited by 5 | Viewed by 2449
Abstract
Teleoperation has emerged as a promising fallback for situations beyond the capabilities of automated vehicles. Nevertheless, teleoperation still faces challenges, such as reduced situational awareness. Since situational awareness is primarily built through the remote operator’s visual perception, the graphical user interface (GUI) design is critical. In addition to the video feed, supplemental informational elements are crucial, not only for the predominantly studied remote driving but also for emerging desk-based remote assistance concepts. This work develops a GUI for different teleoperation concepts by identifying key informational elements in the teleoperation process through expert interviews (N = 9). Subsequently, a static and a dynamic GUI prototype were developed and evaluated in a click-dummy study (N = 36), where the dynamic GUI adapts the number of displayed elements to the current teleoperation phase. Results show that both GUIs achieve good System Usability Scale (SUS) ratings, with the dynamic GUI significantly outperforming the static version in both usability and task completion time. However, the results might be attributable to a learning effect due to the lack of randomization. The User Experience Questionnaire (UEQ) score shows potential for improvement. To enhance the user experience, the GUI should be evaluated in a follow-up study that includes interaction with a real vehicle. Full article
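As a small configuration sketch of the dynamic-GUI idea, the snippet below maps teleoperation phases to the informational elements shown in each phase; the phase and element names are invented for illustration, whereas the paper derives its own set from the expert interviews.

```python
# Tiny configuration sketch of a phase-dependent GUI: only the elements relevant
# to the current teleoperation phase are shown. Phase and element names here are
# invented for illustration; the paper derives its own set from expert interviews.
BASE_ELEMENTS = {"video_feed", "connection_quality", "vehicle_speed"}

PHASE_ELEMENTS = {
    "connect":       {"vehicle_id", "system_health"},
    "assess":        {"map_overview", "planned_route", "stop_reason"},
    "remote_drive":  {"steering_angle", "latency_indicator", "object_overlay"},
    "remote_assist": {"path_suggestions", "approve_reject_buttons"},
    "handback":      {"automation_status", "confirmation_dialog"},
}

def visible_elements(phase: str) -> set:
    """Elements to render for the given teleoperation phase."""
    return BASE_ELEMENTS | PHASE_ELEMENTS.get(phase, set())

print(sorted(visible_elements("remote_drive")))
```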
32 pages, 6323 KB  
Article
Design, Implementation and Evaluation of an Immersive Teleoperation Interface for Human-Centered Autonomous Driving
by Irene Bouzón, Jimena Pascual, Cayetana Costales, Aser Crespo, Covadonga Cima and David Melendi
Sensors 2025, 25(15), 4679; https://doi.org/10.3390/s25154679 - 29 Jul 2025
Cited by 2 | Viewed by 2080
Abstract
As autonomous driving technologies advance, the need for human-in-the-loop systems becomes increasingly critical to ensure safety, adaptability, and public confidence. This paper presents the design and evaluation of a context-aware immersive teleoperation interface that integrates real-time simulation, virtual reality, and multimodal feedback to support remote interventions in emergency scenarios. Built on a modular ROS2 architecture, the system allows seamless transition between simulated and physical platforms, enabling safe and reproducible testing. The experimental results show a high task success rate and user satisfaction, highlighting the importance of intuitive controls, gesture recognition accuracy, and low-latency feedback. Our findings contribute to the understanding of human-robot interaction (HRI) in immersive teleoperation contexts and provide insights into the role of multisensory feedback and control modalities in building trust and situational awareness for remote operators. Ultimately, this approach is intended to support the broader acceptability of autonomous driving technologies by enhancing human supervision, control, and confidence. Full article
(This article belongs to the Special Issue Human-Centred Smart Manufacturing - Industry 5.0)
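A minimal rclpy sketch of the sim-to-real modularity described above: the operator-side node publishes generic commands and is pointed at the simulator or the physical platform purely through topic remapping at launch. Topic and package names are placeholders, not the system from the article.

```python
# Minimal rclpy sketch of the sim/real modularity idea: the operator-side node
# publishes generic commands and is pointed at the simulator or the physical
# platform purely through topic remapping at launch time. Topic and package
# names are placeholders, not the system described in the article.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class TeleopBridge(Node):
    def __init__(self):
        super().__init__("teleop_bridge")
        # Remap /teleop/cmd_vel to the simulator's or the real vehicle's input topic, e.g.:
        #   ros2 run my_pkg teleop_bridge --ros-args -r /teleop/cmd_vel:=/sim/ego/cmd_vel
        self.pub = self.create_publisher(Twist, "/teleop/cmd_vel", 10)
        self.timer = self.create_timer(0.05, self.tick)   # 20 Hz command stream

    def tick(self):
        cmd = Twist()
        cmd.linear.x = 1.0        # placeholder: would come from VR controllers/gestures
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(TeleopBridge())

if __name__ == "__main__":
    main()
```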
18 pages, 16108 KB  
Article
Development of roCaGo for Forest Observation and Forestry Support
by Yoshinori Kiga, Yuzuki Sugasawa, Takumi Sakai, Takuma Nemoto and Masami Iwase
Forests 2025, 16(7), 1067; https://doi.org/10.3390/f16071067 - 26 Jun 2025
Cited by 1 | Viewed by 616
Abstract
This study addresses the 'last-mile' transportation challenges that arise in steep and narrow forest terrain by proposing a novel robotic palanquin system called roCaGo. It is inspired by the mechanical principles of two-wheel-steering and two-wheel-drive (2WS/2WD) bicycles. The roCaGo system integrates front- and rear-wheel-drive mechanisms, as well as a central suspension structure for carrying loads. Unlike conventional forestry machinery, which requires wide, well-maintained roads or permanent rail systems, the roCaGo system enables flexible, operator-assisted transport along narrow, unprepared mountain paths. A dynamic model of the system was developed to design a stabilization control strategy, enabling roCaGo to maintain transport stability and assist the operator during navigation. Numerical simulations and preliminary physical experiments demonstrate its effectiveness in challenging forest environments. Furthermore, the applicability of roCaGo has been extended to include use as a mobile third-person viewpoint platform to support the remote operation of existing forestry equipment, specifically the LV800 crawler vehicle equipped with a front-mounted mulcher. Field tests involving LiDAR sensors mounted on roCaGo were conducted to verify its ability to capture the environmental data necessary for non-line-of-sight teleoperation. The results show that roCaGo is a promising solution for improving labor efficiency and ensuring operator safety in forest logistics and remote-controlled forestry operations. Full article
19 pages, 5486 KB  
Article
The Development of Teleoperated Driving to Cooperate with the Autonomous Driving Experience
by Nuksit Noomwongs, Krit T.Siriwattana, Sunhapos Chantranuwathana and Gridsada Phanomchoeng
Automation 2025, 6(3), 26; https://doi.org/10.3390/automation6030026 - 25 Jun 2025
Cited by 1 | Viewed by 2882
Abstract
Autonomous vehicles are increasingly being adopted, with manufacturers competing to enhance automation capabilities. While full automation eliminates human input, lower levels still require driver intervention under specific conditions. This study presents the design and development of a prototype vehicle featuring both low- and high-level control systems, integrated with a 5G-based teleoperation interface that enables seamless switching between autonomous and remote-control modes. The system includes a malfunction surveillance unit that monitors communication latency and obstacle conditions, triggering a hardware-based emergency braking mechanism when safety thresholds are exceeded. Field experiments conducted over four test phases around Chulalongkorn University demonstrated stable performance under both driving modes. Mean lateral deviations ranged from 0.19 m to 0.33 m, with maximum deviations up to 0.88 m. Average end-to-end latency was 109.7 ms, with worst-case spikes of 316.6 ms. The emergency fallback system successfully identified all predefined fault conditions and responded with timely braking. Latency-aware stopping analysis showed an increase in braking distance from 1.42 m to 2.37 m at 3 m/s. In scenarios with extreme latency (>500 ms), the system required operator steering input or fallback to autonomous mode to avoid obstacles. These results confirm the platform’s effectiveness in real-world teleoperation over public 5G networks and its potential scalability for broader deployment. Full article
(This article belongs to the Section Smart Transportation and Autonomous Vehicles)
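A quick back-of-the-envelope check of the latency-aware stopping figures, under the simple assumption that the vehicle holds its speed during the extra latency window and then brakes as before (not necessarily the authors' model): the reported increase from 1.42 m to 2.37 m at 3 m/s corresponds to roughly 0.32 s of additional delay, which lines up with the reported worst-case latency spike of 316.6 ms.

```python
# Worked check of the latency-aware stopping numbers under a simple model:
# the vehicle keeps its speed during the end-to-end latency window, then brakes
# as before, so extra stopping distance = speed * extra_latency. This is an
# illustrative point-mass view, not the analysis used in the paper.
def extra_stopping_distance(speed_mps: float, extra_latency_s: float) -> float:
    return speed_mps * extra_latency_s

speed = 3.0                               # m/s, as in the reported test
reported_increase = 2.37 - 1.42           # m, from 1.42 m to 2.37 m
implied_delay = reported_increase / speed # ~0.317 s
print(f"implied extra delay: {implied_delay * 1000:.0f} ms")  # ~317 ms, consistent
# with the reported worst-case latency spike of 316.6 ms.
print(f"distance added by a 316.6 ms spike: {extra_stopping_distance(speed, 0.3166):.2f} m")
```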
25 pages, 5088 KB  
Article
Improved Perceptual Quality of Traffic Signs and Lights for the Teleoperation of Autonomous Vehicle Remote Driving via Multi-Category Region of Interest Video Compression
by Itai Dror and Ofer Hadar
Entropy 2025, 27(7), 674; https://doi.org/10.3390/e27070674 - 24 Jun 2025
Viewed by 1361
Abstract
Autonomous vehicles are a promising solution to traffic congestion, air pollution, accidents, and wasted time and resources. However, remote driver intervention may be necessary in extreme situations to ensure safe roadside parking or complete remote takeover. In these cases, high-quality real-time video streaming is crucial for remote driving. In a preliminary study, we presented a region of interest (ROI) High-Efficiency Video Coding (HEVC) method in which the image was segmented into two categories: ROI and background. This involved allocating more bandwidth to the ROI, which improved the visibility of classes essential for driving while transmitting the background at a lower quality. However, migrating the bandwidth to the large ROI portion of the image did not substantially improve the quality of traffic signs and lights. This study proposes a method that categorizes ROIs into three tiers: background, weak ROI, and strong ROI. To evaluate this approach, we utilized a photo-realistic driving scenario database created with the Cognata self-driving car simulation platform. We used semantic segmentation to categorize the compression quality of a Coding Tree Unit (CTU) according to its pixel classes. A background CTU contains only sky, tree, vegetation, or building classes. Classes essential for remote driving, such as pedestrians, road marks, and cars, form the weak ROI. Difficult-to-recognize classes, such as traffic signs (especially textual ones) and traffic lights, are categorized as the strong ROI. We applied thresholds to determine whether the number of pixels of a particular category in a CTU was sufficient to classify it as a strong or weak ROI and then allocated bandwidth accordingly. Our results demonstrate that this multi-category ROI compression method significantly enhances the perceptual quality of traffic signs (especially textual ones) and traffic lights by up to 5.5 dB compared to a simpler two-category (background/foreground) partition. This improvement in critical areas is achieved by reducing the fidelity of less critical background elements, while the visual quality of other essential driving-related classes (weak ROI) is at least maintained. Full article
(This article belongs to the Special Issue Information Theory and Coding for Image/Video Processing)
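The CTU classification step described above can be sketched as follows: count the semantic-segmentation pixels inside each CTU, assign the CTU to background, weak ROI, or strong ROI by per-tier thresholds, and map the tier to a quantization-parameter (QP) offset. The class groupings follow the abstract; the thresholds and QP offsets here are placeholder values.

```python
# Sketch of the CTU classification step: count semantic-segmentation pixels inside
# each CTU, assign a tier (background / weak ROI / strong ROI) with per-tier
# thresholds, and map the tier to a quantization-parameter (QP) offset. Class
# groupings follow the abstract; thresholds and QP offsets are placeholders.
import numpy as np

STRONG = {"traffic_sign", "traffic_light"}
WEAK = {"pedestrian", "road_mark", "car"}
QP_OFFSET = {"strong_roi": -8, "weak_roi": -4, "background": +6}   # placeholder values

def classify_ctu(seg_labels: np.ndarray, strong_min_px: int = 16, weak_min_px: int = 64) -> str:
    """seg_labels: array of per-pixel class names for one CTU (e.g. 64x64)."""
    counts = {str(cls): int(np.sum(seg_labels == cls)) for cls in np.unique(seg_labels)}
    if sum(counts.get(c, 0) for c in STRONG) >= strong_min_px:
        return "strong_roi"
    if sum(counts.get(c, 0) for c in WEAK) >= weak_min_px:
        return "weak_roi"
    return "background"

def qp_for_ctu(seg_labels: np.ndarray, base_qp: int = 32) -> int:
    """Lower QP (higher quality) for ROI tiers, higher QP for background."""
    return base_qp + QP_OFFSET[classify_ctu(seg_labels)]
```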
20 pages, 2524 KB  
Article
Adaptive Nonlinear Friction Compensation for Pneumatically Driven Follower in Force-Projecting Bilateral Control
by Daisuke Haraguchi and Yuki Monden
Actuators 2025, 14(3), 151; https://doi.org/10.3390/act14030151 - 18 Mar 2025
Viewed by 903
Abstract
Force-projecting bilateral control is an effective method for enhancing the positioning rigidity and stability of teleoperation systems equipped with compliant pneumatically driven followers. However, friction in the pneumatic actuation mechanism degrades force reproducibility between the leader and the follower. To solve this problem, this study proposes a practical method of nonlinear friction compensation in force-projecting bilateral control to improve force reproducibility. The proposed method generates two friction compensation forces: one based on the target admittance velocity from the leader and the other based on the actual velocity of the follower. These forces are seamlessly switched according to the dynamic state of the system to compensate for the follower’s driving force. This improves force reproducibility in all motion states of the system while maintaining the advantage of force-projecting bilateral control, which eliminates the need for external force measurement on the follower side. Experiments were conducted using a 1-DOF bilateral control device consisting of an electric linear motor and a pneumatic cylinder, including free motion and contact operations with two types of environments, demonstrating the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Intelligent Control for Pneumatic Servo System)
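A simplified sketch of the switching idea, assuming a Coulomb-plus-viscous friction model with placeholder parameters: one compensation force is built from the leader-side target admittance velocity, another from the follower's measured velocity, and the two are blended according to whether the follower is actually moving.

```python
# Simplified sketch of the switching idea: one friction-compensation force is built
# from the leader-side target admittance velocity, the other from the follower's
# measured velocity, and the two are blended according to whether the follower is
# actually moving. Coulomb + viscous friction model; gains and thresholds are
# placeholders, not the identified parameters from the paper.
import math

F_COULOMB = 8.0     # N, placeholder Coulomb friction of the pneumatic cylinder
B_VISCOUS = 15.0    # N*s/m, placeholder viscous coefficient

def friction_force(v: float) -> float:
    """Coulomb + viscous friction estimate at velocity v."""
    return F_COULOMB * math.copysign(1.0, v) + B_VISCOUS * v if v != 0.0 else 0.0

def compensation(v_target: float, v_actual: float, v_eps: float = 0.002) -> float:
    """Blend target-velocity-based and measured-velocity-based compensation terms."""
    f_from_target = friction_force(v_target)     # anticipates motion commanded by the leader
    f_from_actual = friction_force(v_actual)     # tracks the follower's real motion state
    w = min(abs(v_actual) / v_eps, 1.0)          # 0 near standstill, 1 once moving
    return (1.0 - w) * f_from_target + w * f_from_actual
```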
11 pages, 3982 KB  
Proceeding Paper
Remote Control of ADAS Features: A Teleoperation Approach to Mitigate Autonomous Driving Challenges
by İsa Karaböcek, Batıkan Kavak and Ege Özdemir
Eng. Proc. 2024, 82(1), 36; https://doi.org/10.3390/ecsa-11-20449 - 25 Nov 2024
Cited by 2 | Viewed by 2922
Abstract
This paper presents a novel approach to enhancing the safety of Advanced Driver Assistance Systems (ADAS) by integrating teleoperation for the remote control of ADAS features in a vehicle. The primary contribution of this research is the development and implementation of a teleoperation system that allows human operators to take control of the vehicle’s ADAS features, enabling timely intervention in critical situations where autonomous functions may be insufficient. While the concept of teleoperation has been explored in the literature, with several implementations focused on the direct control of vehicles, there are relatively few examples of teleoperation systems designed specifically to utilize ADAS features. This research addresses this gap by exploring teleoperation as a supplementary mechanism that allows human intervention in critical driving situations, particularly where autonomous systems may encounter limitations. The teleoperation system was tested under two critical ADAS scenarios, cruise control and lane change assist, chosen for their importance in real-world driving conditions. These scenarios demonstrate how teleoperation can complement and enhance the performance of ADAS features. The experiments reveal the effectiveness of remote control in providing precise control, allowing for swift and accurate responses in scenarios where the autonomous system might face challenges. The novelty of this work lies in its application of teleoperation to ADAS features, offering a new perspective on how human intervention can enhance vehicle safety. The findings provide valuable insights into optimizing teleoperation for real-world driving scenarios. As a result of the experiments, it was demonstrated that integrating teleoperation with ADAS features offers a more reliable solution compared to standalone ADAS driving. Full article
20 pages, 12356 KB  
Article
Quantifying the Remote Driver’s Interaction with 5G-Enabled Level 4 Automated Vehicles: A Real-World Study
by Shuo Li, Yanghanzi Zhang, Simon Edwards and Phil Blythe
Electronics 2024, 13(22), 4366; https://doi.org/10.3390/electronics13224366 - 7 Nov 2024
Cited by 1 | Viewed by 2284
Abstract
This real-world investigation aimed to quantify the human–machine interaction between remote drivers of teleoperation systems and the Level 4 automated vehicle in a real-world setting. The primary goal was to investigate the effects of disengagement and distraction on remote driver performance and behaviour. Key findings revealed that mental disengagement, achieved through distraction via a reading task, significantly slowed the remote driver’s reaction time by an average of 5.309 s when the Level 4 automated system required intervention. Similarly, disengagement resulted in a 4.232 s delay in decision-making time for remote drivers when they needed to step in and make critical strategic decisions. Moreover, mental disengagement affected the remote drivers’ attention focus on the road and increased their cognitive workload compared to constant monitoring. Furthermore, when actively controlling the vehicle remotely, drivers experienced a higher cognitive workload than in both “monitoring” and “disengagement” conditions. The findings emphasize the importance of designing teleoperation systems that keep remote drivers actively engaged with their environment, minimise distractions, and reduce disengagement. Such designs are essential for enhancing safety and effectiveness in remote driving scenarios, ultimately supporting the successful deployment of Level 4 automated vehicles in real-world applications. Full article
(This article belongs to the Special Issue Advanced Technologies in Intelligent Transport Systems)
18 pages, 1924 KB  
Article
Safety, Efficiency, and Mental Workload in Simulated Teledriving of a Vehicle as Functions of Camera Viewpoint
by Oren Musicant, Assaf Botzer and Bar Richmond-Hacham
Sensors 2024, 24(18), 6134; https://doi.org/10.3390/s24186134 - 23 Sep 2024
Cited by 2 | Viewed by 1595
Abstract
Teleoperation services are expected to operate on-road and often in urban areas. In current teleoperation applications, teleoperators gain a higher viewpoint of the environment from a camera on the vehicle’s roof. However, it is unclear how this viewpoint compares to a conventional viewpoint in terms of safety, efficiency, and mental workload. In the current study, teleoperators (n = 148) performed driving tasks in a simulated urban environment with a conventional viewpoint (i.e., the simulated camera was positioned inside the vehicle at the height of a driver’s eyes) and a higher viewpoint (the simulated camera was positioned on the vehicle roof). The tasks required negotiating road geometry and other road users. At the end of the session, participants completed the NASA-TLX questionnaire. Results showed that participants completed most tasks faster with the higher viewpoint and reported lower frustration and mental demand. The camera position did not affect collision rates nor the probability of hard braking and steering events. We conclude that a viewpoint from the vehicle roof may improve teleoperation efficiency without compromising driving safety, while also lowering the teleoperators’ mental workload. Full article
(This article belongs to the Special Issue On-Board and Remote Sensors in Intelligent Vehicles-2nd Edition)