Search Results (394)

Search Parameters:
Keywords = teleoperation

2 pages, 129 KB  
Correction
Correction: Wang et al. Adaptive Neural Network Control of Time Delay Teleoperation System Based on Model Approximation. Sensors 2021, 21, 7443
by Yaxiang Wang, Jiawei Tian, Yan Liu, Bo Yang, Shan Liu, Lirong Yin and Wenfeng Zheng
Sensors 2026, 26(6), 1763; https://doi.org/10.3390/s26061763 - 11 Mar 2026
Viewed by 98
Abstract
In the original publication [...] Full article
(This article belongs to the Section Intelligent Sensors)
17 pages, 4699 KB  
Article
Interactive Teleoperation of an Articulated Robotic Arm Using Vision-Based Human Hand Tracking
by Marius-Valentin Drăgoi, Aurel-Viorel Frimu, Andrei Postelnicu, Roxana-Adriana Puiu, Gabriel Petrea and Alexandru Hank
Biomimetics 2026, 11(2), 151; https://doi.org/10.3390/biomimetics11020151 - 19 Feb 2026
Viewed by 501
Abstract
Interactive teleoperation offers an intuitive pathway for human–robot interaction, yet many existing systems rely on dedicated sensors or wearable devices, limiting accessibility and scalability. This paper presents a vision-based teleoperation framework that enables real-time control of an articulated robotic arm (five joints plus a gripper actuator) using human hand tracking from a single, typical laptop camera. Hand pose and gesture information are extracted using a real-time landmark estimation pipeline, and a set of compact kinematic descriptors—palm position, apparent hand scale, wrist rotation, hand pitch, and pinch gesture—is mapped to robotic joint commands through a calibration-based control strategy. Commands are transmitted over a lightweight network interface to an embedded controller that executes synchronized servo actuation. To enhance stability and usability, temporal smoothing and rate-limited updates are employed to mitigate jitter while preserving responsiveness. In a human-in-the-loop evaluation with 42 participants, the system achieved an 88% success rate (37/42), with a completion time of 53.48 ± 18.51 s, a placement error of 6.73 ± 3.11 cm for successful trials (n = 37), and an ease-of-use score of 2.67 ± 1.20 on a 1–5 scale. Results indicate that the proposed approach enables feasible interactive teleoperation without specialized hardware, supporting its potential as a low-cost platform for robotic manipulation, education, and rapid prototyping. Full article
(This article belongs to the Special Issue Recent Advances in Bioinspired Robot and Intelligent Systems)
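The jitter-mitigation step is the most reusable detail in this abstract. Below is a minimal sketch of temporal smoothing combined with rate-limited updates, assuming an exponential filter and per-update clamping; the class name and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

class SmoothedRateLimiter:
    """Exponential smoothing plus per-update rate limiting for noisy
    joint commands, a common pattern for suppressing hand-tracking jitter."""

    def __init__(self, n_joints, alpha=0.3, max_step_deg=4.0):
        self.alpha = alpha                # smoothing factor in (0, 1]
        self.max_step = max_step_deg      # max change per update, degrees
        self.state = np.zeros(n_joints)   # last commanded joint angles

    def update(self, raw_target):
        # Low-pass filter the raw tracked target to remove jitter.
        smoothed = self.alpha * np.asarray(raw_target) + (1 - self.alpha) * self.state
        # Rate-limit the commanded change to keep servo motion gentle.
        step = np.clip(smoothed - self.state, -self.max_step, self.max_step)
        self.state = self.state + step
        return self.state

limiter = SmoothedRateLimiter(n_joints=5)
command = limiter.update([30.0, -12.0, 45.0, 5.0, 0.0])  # degrees
```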
19 pages, 1689 KB  
Article
Bio-Adaptive Robot Control: Integrating Biometric Feedback and Gesture-Based Interfaces for Intuitive Human–Robot Interaction (HRI)
by Antonio Di Tecco, Daniele Leonardis, Edoardo Ragusa, Antonio Frisoli and Claudio Loconsole
Robotics 2026, 15(2), 45; https://doi.org/10.3390/robotics15020045 - 17 Feb 2026
Cited by 1 | Viewed by 394
Abstract
AI-driven assistance can help the user perform complex teleoperated tasks, introduce autonomous patterns, or adapt the workbench to objects of interest. On the other hand, the level of assistance should respond to the user’s state and adapt accordingly to promote a positive and effective experience. With this goal in mind, this article investigates whether physiological signals can be used to estimate the user’s performance and response in a teleoperation setup, with and without AI-driven assistance. In more detail, a teleoperated pick-and-place task was performed with or without AI-driven assistance during the grasping phase. A deep-learning algorithm for affordance detection provided assistance, helping participants align the robotic hand with the target object. Physiological and kinematic data were measured and processed by machine learning models to predict the effects of AI assistance on task performance during teleoperation. Results showed that AI-driven assistance, as expected, affected pick-and-place performance. Beyond this, the assistance affected participants’ fatigue levels, which the machine learning models could predict with an average accuracy of 84% based on the physiological response. In addition, the success or failure of the pick-and-place task could be predicted with an average accuracy of 88%. These findings highlight the potential of integrating deep learning with biometric feedback and gesture-based control to create more intuitive and adaptive HRI systems. Full article
(This article belongs to the Special Issue AI for Robotic Exoskeletons and Prostheses, 2nd Edition)
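The abstract does not name the specific machine learning models used. Below is a minimal sketch of the general pipeline—windowed physiological features classified into fatigue levels—using a generic scikit-learn classifier; the feature layout and data are placeholders, not the study's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per task window, columns are
# physiological summaries (e.g., mean heart rate, skin-conductance peaks).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))          # placeholder data, not the study's
y = rng.integers(0, 2, size=200)       # 1 = high fatigue, 0 = low fatigue

clf = RandomForestClassifier(n_estimators=100, random_state=0)
# Random data scores at chance here; the paper reports ~84% on real signals.
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```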
30 pages, 21507 KB  
Article
A Robotic Eye Gaze Mirroring System for Human–Robot Interaction: Evaluating Response Time Across Proxemic Distances
by Jun Wei Mok, Prabakaran Veerajagadheswar and Mohan Rajesh Elara
Systems 2026, 14(2), 206; https://doi.org/10.3390/systems14020206 - 15 Feb 2026
Viewed by 360
Abstract
Robots are increasingly becoming part of everyday social environments. Among the different types of communication cues, eye gaze in the context of human–robot interactions (HRIs) fosters connection and engagement. Although gaze behavior has been extensively studied, most existing research assumes a fixed interaction distance and often does not account for variations in proxemic distance that influence the perceptions of gaze. While prior studies have developed robotic eye gaze systems capable of producing natural gaze behaviors, these systems are usually evaluated at fixed interaction distances. Comparatively, less attention has been given to measuring the impact of proxemic distance on gaze mirroring. This study introduces a gaze mirroring system that integrates 3D robotic eyes with a mobile robot to track human gaze across various proxemic distances. This paper presents the system’s mechanical design and implementation, as well as the evaluation of its tracking performance. Experiments on the system were conducted across Hall’s proxemic zones (intimate, personal, and social) under static, teleoperated, and integrated movement conditions. The results demonstrate that the proposed system achieves highly efficient tracking with response times that fall within established thresholds for natural gaze timing in human–robot interaction. Full article
(This article belongs to the Section Artificial Intelligence and Digital Systems Engineering)
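Hall's proxemic zones can be operationalized as simple distance bands. A minimal sketch using the textbook boundaries (≈0.45 m, 1.2 m, 3.6 m); the study may define its own thresholds.

```python
def proxemic_zone(distance_m: float) -> str:
    """Classify a human-robot distance into Hall's proxemic zones.
    Boundary values are the textbook figures (Hall, 1966); the study
    may use its own thresholds."""
    if distance_m < 0.45:
        return "intimate"
    if distance_m < 1.2:
        return "personal"
    if distance_m < 3.6:
        return "social"
    return "public"

print(proxemic_zone(0.8))  # -> "personal"
```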
20 pages, 4999 KB  
Article
Beyond Visual and Force Feedback: Role of Vibrotactile and Auditory Cues in Robot Teleoperated Assembly
by Kaoru Ohno, Hikaru Nagano and Yasuyoshi Yokokohji
Robotics 2026, 15(2), 39; https://doi.org/10.3390/robotics15020039 - 9 Feb 2026
Viewed by 383
Abstract
Reliable detection of contact states, such as the “mating” of connectors, is crucial for high-quality teleoperated assembly. Conventional systems relying solely on visual and continuous force feedback often fail to convey these discrete high-frequency transients due to their limited high-frequency rendering capabilities. This study investigates the effectiveness of augmenting visual and force feedback with vibrotactile and auditory cues for detecting connector mating. We conducted three experiments: (1) a mating detection task using recorded multimodal data (N=10), (2) a modality contribution analysis (N=10), and (3) a real-time robot connector insertion task (N=10). Results from the real-time task demonstrated that the proposed multimodal feedback significantly reduced the maximum contact force exerted after mating compared to the baseline visual-force condition (p<0.001), thereby enhancing physical safety. Furthermore, vibrotactile and auditory cues were found to be redundant yet complementary, providing robust cues even when one modality is compromised. Although subjective mental workload increased due to sensory integration, the significant improvement in detection clarity and safety justifies the multimodal approach. We conclude that providing transient vibrotactile and auditory cues is a highly effective strategy for compensating for the limitations of conventional force feedback in teleoperated assembly. Full article
(This article belongs to the Special Issue Embodied Intelligence: Physical Human–Robot Interaction)
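The abstract does not detail how mating transients are extracted from the force signal. One plausible sketch is to high-pass filter the measured force and threshold the residual to trigger a vibrotactile or auditory cue; the sampling rate, cutoff, and threshold below are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 1000.0  # force sampling rate in Hz (assumed)

def detect_mating_transient(force, cutoff_hz=50.0, threshold=0.5):
    """Return sample indices where a high-frequency force transient
    (candidate connector-mating event) exceeds the threshold."""
    sos = butter(4, cutoff_hz, btype="highpass", fs=FS, output="sos")
    hf = sosfilt(sos, force)            # keep only high-frequency content
    return np.flatnonzero(np.abs(hf) > threshold)

# Synthetic example: slowly rising contact force plus a sharp click at 0.5 s.
t = np.arange(0, 1.0, 1 / FS)
force = 2.0 * t
force[500:505] += 1.5
print(detect_mating_transient(force)[:5])  # indices near sample 500
```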
23 pages, 5683 KB  
Article
Optimizing RTAB-Map Viewability to Reduce Cognitive Workload in VR Teleoperation: A User-Centric Approach
by Hojin Yoon, Haegyeom Choi, Jaehoon Jeong and Donghun Lee
Mathematics 2026, 14(3), 579; https://doi.org/10.3390/math14030579 - 6 Feb 2026
Viewed by 391
Abstract
In industrial environments, providing intuitive spatial information via 3D maps is essential for maximizing the efficiency of teleoperation. However, existing SLAM algorithms generating 3D maps predominantly focus on improving robot localization accuracy, often neglecting the optimization of viewability required for human operators to clearly perceive object depth and structure in virtual environments. To address this, this study proposes a methodology to optimize the viewability of RTAB-Map-based 3D maps using the Taguchi method, aiming to enhance VR teleoperation efficiency and reduce cognitive workload. We identified eight key parameters that critically affect visual quality and utilized an L18 orthogonal array to derive an optimal combination that controls point cloud density and noise levels. Experimental results from a target object picking task demonstrated that the optimized 3D map reduced task completion time by approximately 9 s compared to the RGB image condition, achieving efficiency levels approaching those of the physical-world baseline. Furthermore, evaluations using NASA-TLX confirmed that intuitive visual feedback minimized situational awareness errors and substantially alleviated cognitive workload. This study suggests a new direction for constructing high-efficiency teleoperation interfaces from a Human–Robot Interaction perspective by expanding SLAM optimization criteria from geometric precision to user-centric visual quality. Full article
(This article belongs to the Special Issue Advances in Machine Learning and Intelligent Systems)
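Taguchi analysis scores each run of the orthogonal array with a signal-to-noise (S/N) ratio before averaging per parameter level. Below is a minimal sketch of the two standard S/N forms; which form the authors applied to viewability is not stated in the abstract.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio for a 'larger-is-better' response, e.g. a
    viewability score: SN = -10 * log10(mean(1 / y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def sn_smaller_is_better(y):
    """'Smaller-is-better' form, e.g. for point-cloud noise level:
    SN = -10 * log10(mean(y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Illustrative scores from three repeats of one L18 row (not the paper's data).
print(f"{sn_larger_is_better([7.2, 6.8, 7.5]):.2f} dB")
```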
14 pages, 3718 KB  
Article
Miniature Magnetorheological Fluid Device Using Cylindrical Rotor for Handheld Haptic Interface
by Asahi Higashiguchi, Isao Abe and Takehito Kikuchi
Actuators 2026, 15(2), 101; https://doi.org/10.3390/act15020101 - 4 Feb 2026
Viewed by 441
Abstract
Magnetorheological (MR) fluids are composite materials composed of ferromagnetic particles, medium oils, and several types of additives. Because their rheological properties can be rapidly, stably, and reversibly controlled using an applied magnetic field, MR fluids are particularly suitable for haptic applications. Moreover, with recent advances in virtual reality technologies, handheld haptic interfaces that offer high portability and operability, owing to their lightweight and compact design, have become increasingly important for enhancing immersion in teleoperation systems. In this study, we design and develop a miniature MR fluid device for handheld haptic interfaces using a cylindrical rotor. The proposed device is compact and lightweight, and exhibits a high output. We analyzed the magnetic field distribution inside the device using an analytical model and confirmed that the serpentine magnetic flux path effectively increased the magnetic flux density in the MR fluid working region. According to the experimental characterization, the device generated a maximum torque of 0.3 Nm. The resulting interface had a total mass of 122 g and provided a maximum force of 4.5 N to the user, demonstrating its suitability for teleoperation and virtual reality applications. Full article
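As a quick consistency check on the reported figures, a 0.3 N·m maximum torque delivering 4.5 N at the user implies an effective moment arm of roughly 6.7 cm, assuming a simple rigid-lever transmission (the actual linkage is not described in the abstract).

```python
max_torque_nm = 0.3   # reported maximum device torque
max_force_n = 4.5     # reported maximum force at the user

# For a rigid lever, F = T / r, so the implied effective moment arm is:
lever_arm_m = max_torque_nm / max_force_n
print(f"implied lever arm: {lever_arm_m * 100:.1f} cm")  # ~6.7 cm
```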
26 pages, 48080 KB  
Article
Teleoperation of Dual-Arm Manipulators via VR Interfaces: A Framework Integrating Simulation and Real-World Control
by Alejandro Torrejón, Sergio Eslava, Jorge Calderón, Pedro Núñez and Pablo Bustos
Electronics 2026, 15(3), 572; https://doi.org/10.3390/electronics15030572 - 28 Jan 2026
Viewed by 550
Abstract
We present a virtual reality (VR) framework for controlling dual-arm robotic manipulators through immersive interfaces, integrating both simulated and real-world platforms. The system combines the Webots robotics simulator with Unreal Engine 5.6.1 to provide real-time visualization and interaction, enabling users to manipulate each arm’s tool point via VR controllers with natural depth perception and motion tracking. The same control interface is seamlessly extended to a physical dual-arm robot, enabling teleoperation within the same VR environment. Our architecture supports real-time bidirectional communication between the VR layer and both the simulator and hardware, enabling responsive control and feedback. We describe the system design and performance evaluation in both domains, demonstrating the viability of immersive VR as a unified interface for simulation and physical robot control. Full article
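The wire format of the VR-to-robot link is not specified in the abstract. Below is a minimal sketch of the kind of per-frame tool-point command such a bidirectional channel might carry; the field names, JSON encoding, and UDP transport are all assumptions, not the authors' protocol.

```python
import json
import socket
from dataclasses import dataclass, asdict

@dataclass
class ToolPointCommand:
    """Hypothetical per-frame command from the VR layer to the
    simulator or physical robot: one target pose per arm."""
    arm: str                 # "left" or "right"
    position: tuple          # (x, y, z) in meters, robot base frame
    orientation: tuple       # quaternion (x, y, z, w)
    gripper_closed: bool

def send_command(cmd: ToolPointCommand, host="127.0.0.1", port=9000):
    # Serialize and send one command; the receiver echoes state back
    # on a separate channel in a bidirectional setup.
    payload = json.dumps(asdict(cmd)).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))

send_command(ToolPointCommand("right", (0.4, -0.1, 0.3), (0, 0, 0, 1), False))
```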
29 pages, 78456 KB  
Article
End-to-End Teleoperated Driving Video Transmission Under 6G with AI and Blockchain
by Ignacio Benito Frontelo, Pablo Pérez, Nuria Oyaga and Marta Orduna
Sensors 2026, 26(2), 571; https://doi.org/10.3390/s26020571 - 14 Jan 2026
Viewed by 476
Abstract
Intelligent vehicle networks powered by machine learning, AI, and blockchain are transforming various sectors beyond transportation. In this context, being able to remotely drive a vehicle is key to enhancing autonomous driving systems. After deploying end-to-end teleoperated driving systems under 5G networks, complex challenges in other critical areas must be addressed. These challenges span the different technologies that must be integrated into such a system: video transmission and visualization, artificial intelligence techniques, and network optimization features, together with haptic devices and critical data security. This article explores how these technologies can enhance teleoperated driving experiences already demonstrated in real-life environments: it analyzes the quality of the video transmitted over the network, explores its correlation with state-of-the-art AI object detection algorithms, examines the extended reality and digital twin paradigms, considers how to extract the maximum performance from forthcoming 6G networks, and proposes a decentralized security scheme to ensure the privacy and safety of end users of teleoperated driving infrastructures. An integrated set of conclusions and recommendations outlines the design of future teleoperated driving systems. Full article
(This article belongs to the Special Issue Advances in Intelligent Vehicular Networks and Communications)
26 pages, 3863 KB  
Article
A Pre-Industrial Prototype for a Tele-Operable Drone-Mountable Electrical Sensor
by Khaled Osmani, Marc Florian Meyer and Detlef Schulz
J. Sens. Actuator Netw. 2026, 15(1), 9; https://doi.org/10.3390/jsan15010009 - 13 Jan 2026
Viewed by 616
Abstract
This paper presents a pre-industrial, laboratory-stage version of an innovative sensor box designed to enable remote measurement of electrical currents. The proposed prototype functions as a drone-mounted payload that can be deployed onto overhead transmission lines. Utilizing Hall-effect sensors, electronic signal processing through filtering, and digital data transmission via Arduino and Bluetooth, the instantaneous line currents are visualized in MATLAB (R2023a) as time-based curves. The sensor box can also be remotely released from the transmission line once measurements are complete, allowing a fully autonomous mode of operation. Laboratory tests demonstrated promising results for real-world applications, with measurement efficiencies ranging from 92% to 98% under various test conditions, including stress tests involving harmonics and total harmonic distortion up to 40%. Future work will focus on implementing effective shielding against high electric fields to further enhance reliability and advance the sensor’s industrialization as a novel solution for power grid digitalization. Full article
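Converting a Hall-effect reading to line current follows the usual linear sensor model, I = (V − V_zero) / sensitivity. A minimal sketch with illustrative constants typical of a 30 A Hall IC; the prototype's actual sensor and calibration are not given in the abstract.

```python
def adc_to_current(adc_value, vref=5.0, adc_bits=10,
                   v_zero=2.5, sensitivity_v_per_a=0.066):
    """Map a raw ADC reading from a Hall-effect sensor to current in
    amperes. Constants are illustrative (typical of a 30 A Hall IC),
    not the prototype's calibration."""
    volts = adc_value * vref / (2**adc_bits - 1)
    return (volts - v_zero) / sensitivity_v_per_a

print(f"{adc_to_current(614):.2f} A")  # 614/1023 * 5 V = 3.0 V -> ~7.6 A
```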
25 pages, 4608 KB  
Article
Comparison of Multi-View and Merged-View Mining Vehicle Teleoperation Systems Through Eye-Tracking
by Alireza Kamran Pishhesari, Mahdi Shahsavar, Amin Moniri-Morad and Javad Sattarvand
Mining 2026, 6(1), 3; https://doi.org/10.3390/mining6010003 - 12 Jan 2026
Viewed by 349
Abstract
While multi-view visualization systems are widely used for mining vehicle teleoperation, they often impose high cognitive load and restrict operator attention. To explore a more efficient alternative, this study evaluated a merged-view interface that integrates multiple camera perspectives into a single coherent display. In a controlled experiment, 35 participants navigated a teleoperated robot along a 50 m lab-scale path representative of an underground mine under both multi-view and merged-view conditions. Task performance and eye-tracking data—including completion time, path adherence, and speed-limit violations—were collected for comparison. The merged-view system enabled 6% faster completion times, 21% higher path adherence, and 28% fewer speed-limit violations. Eye-tracking metrics indicated more efficient and distributed attention: blink rate decreased by 29%, fixation duration shortened by 18%, saccade amplitude increased by 11%, and normalized gaze-transition entropy rose by 14%, reflecting broader and more adaptive scanning. NASA-TLX scores further showed a 27% reduction in perceived workload. Regression-based sensitivity analysis revealed that gaze entropy was the strongest predictor of efficiency in the multi-view condition, while fixation duration dominated under merged-view visualization. For path adherence, blink rate was most influential in the multi-view setup, whereas fixation duration became key in merged-view operation. Overall, the results indicated that merged-view visualization improved visual attention distribution and reduced cognitive tunneling indicators in a controlled laboratory teleoperation task, offering early-stage, interface-level insights motivated by mining-relevant teleoperation challenges. Full article
(This article belongs to the Special Issue Mine Automation and New Technologies, 2nd Edition)
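Normalized gaze-transition entropy is computed from the matrix of fixation transitions between areas of interest (AOIs). A minimal sketch of the standard first-order formulation; the AOI labels in the example are illustrative, not the study's.

```python
import numpy as np

def normalized_gaze_transition_entropy(aoi_sequence):
    """First-order transition entropy of a fixation sequence over
    areas of interest (AOIs), normalized by log2(number of AOIs).
    Higher values indicate broader, less repetitive scanning."""
    states = sorted(set(aoi_sequence))
    idx = {s: i for i, s in enumerate(states)}
    n = len(states)
    counts = np.zeros((n, n))
    for a, b in zip(aoi_sequence[:-1], aoi_sequence[1:]):
        counts[idx[a], idx[b]] += 1
    row_p = counts.sum(axis=1) / counts.sum()       # stationary distribution
    entropy = 0.0
    for i in range(n):
        if counts[i].sum() == 0:
            continue
        p = counts[i] / counts[i].sum()             # transition probabilities
        h_i = -np.sum(p[p > 0] * np.log2(p[p > 0])) # row entropy
        entropy += row_p[i] * h_i
    return entropy / np.log2(n) if n > 1 else 0.0

print(normalized_gaze_transition_entropy(
    ["front_cam", "left_cam", "front_cam", "speed", "front_cam", "left_cam"]))
```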
20 pages, 4633 KB  
Article
Teleoperation System for Service Robots Using a Virtual Reality Headset and 3D Pose Estimation
by Tiago Ribeiro, Eduardo Fernandes, António Ribeiro, Carolina Lopes, Fernando Ribeiro and Gil Lopes
Sensors 2026, 26(2), 471; https://doi.org/10.3390/s26020471 - 10 Jan 2026
Viewed by 564
Abstract
This paper presents an immersive teleoperation framework for service robots that combines real-time 3D human pose estimation with a Virtual Reality (VR) interface to support intuitive, natural robot control. The operator is tracked using MediaPipe for 2D landmark detection and an Intel RealSense D455 RGB-D (Red-Green-Blue plus Depth) camera for depth acquisition, enabling 3D reconstruction of key joints. Joint angles are computed using efficient vector operations and mapped to the kinematic constraints of an anthropomorphic arm on the CHARMIE service robot. A VR-based telepresence interface provides stereoscopic video and head-motion-based view control to improve situational awareness during manipulation tasks. Experiments in real-world object grasping demonstrate reliable arm teleoperation and effective telepresence; however, vision-only estimation remains limited for axial rotations (e.g., elbow and wrist yaw), particularly under occlusions and unfavorable viewpoints. The proposed system provides a practical pathway toward low-cost, sensor-driven, immersive human–robot interaction for service robotics in dynamic environments. Full article
(This article belongs to the Section Intelligent Sensors)
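A joint angle from three reconstructed 3D landmarks reduces to the arccosine of a normalized dot product. A minimal sketch of the elbow case; the authors' full pipeline computes more than this single angle.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for the elbow flexion angle."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

shoulder, elbow, wrist = (0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (0.3, -0.25, 0.0)
print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} deg")  # 90.0
```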
18 pages, 3565 KB  
Article
Whole-Body Tele-Operation for Mobile Manipulator Based on Linear and Angular Motion Decomposition
by Ji-Wook Kwon, Ji-Hyun Park, Taeyoung Uhm, Jongdeuk Lee, Jungwoo Lee and Young-Ho Choi
Appl. Sci. 2026, 16(2), 712; https://doi.org/10.3390/app16020712 - 9 Jan 2026
Viewed by 342
Abstract
This paper proposes an end-effector (EE)-driven whole-body tele-operation framework based on linear and angular motion decomposition. The proposed EE-driven tele-operation method enables intuitive control of a mobile manipulator using only EE commands, unlike conventional systems where the mobile base and manipulator are controlled by separate interfaces that directly map user inputs to each component. The proposed linear and angular motion decomposition mechanism significantly reduces the computational burden compared to conventional optimization-based whole-body control algorithms. In addition, the EE position is evaluated relative to the manipulator’s workspace (WS), and control authority is automatically switched between the manipulator and mobile base to ensure feasible motion. A blending-based transition strategy is introduced to prevent discontinuous switching and chattering near WS boundaries. Simulation results confirm that the method accurately reproduces tele-operation commands while maintaining stable whole-body coordination, demonstrating smooth transitions between control authorities, effective WS regulation, and the feasibility of the proposed approach. Future work will focus on experimental validation using a physical mobile manipulator. Full article
(This article belongs to the Special Issue Advancements in Industrial Robotics and Automation)
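The blending-based transition can be pictured as a smooth weight that hands authority from the manipulator to the mobile base as the commanded EE position nears the workspace boundary. A sketch under assumed details (smoothstep profile, 10% blend band); the paper's actual blending law may differ.

```python
def base_authority(ee_reach, ws_radius, blend_band=0.1):
    """Weight in [0, 1] given to the mobile base: 0 while the commanded
    end-effector reach is well inside the workspace (radius ws_radius),
    rising smoothly to 1 across a blend band at the boundary. The
    smoothstep profile avoids the chattering a hard switch would cause."""
    start = ws_radius * (1.0 - blend_band)
    if ee_reach <= start:
        return 0.0
    if ee_reach >= ws_radius:
        return 1.0
    t = (ee_reach - start) / (ws_radius - start)
    return t * t * (3.0 - 2.0 * t)   # smoothstep: C1-continuous blend

for r in (0.50, 0.74, 0.78, 0.82):
    print(r, round(base_authority(r, ws_radius=0.8), 2))
```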
22 pages, 2616 KB  
Article
Safety, Efficiency, and Mental Workload of Predictive Display in Simulated Teledriving
by Oren Musicant, Alexander Kuperman and Rotem Barachman
Sensors 2026, 26(1), 221; https://doi.org/10.3390/s26010221 - 29 Dec 2025
Viewed by 407
Abstract
Vehicle remote driving services are increasingly used in urban settings. Yet, vehicle-operator communication time delays may pose a challenge for teleoperators in maintaining safety and efficiency. The purpose of this study was to examine whether Predictive Displays (PDs), which show the vehicle’s predicted real-time position, improve performance, safety, and mental workload under moderate time delays typical of 4G/5G networks. Twenty-nine participants drove a simulated urban route containing pedestrian crossings, overtaking, gap acceptance, and traffic light challenges under three conditions: 50 ms delay (baseline), 150 ms delay without PD, and 150 ms delay with PD. We analyzed the counts of crashes and navigation errors, task completion times, and the probability and intensity of braking and steering events, as well as self-reports of workload and usability. Results indicate that although descriptive trends pointed to slightly sharper steering and braking under the 150 ms delay conditions, the 150 ms delay did not significantly degrade performance or increase workload compared with the 50 ms baseline. In addition, the PD neither improved performance nor reduced workload. Overall, participants tolerated typical 4G/5G network time delays, leaving little room for improvement and calling the necessity of PDs into question. Full article
(This article belongs to the Special Issue Sensors and Sensor Fusion for Decision Making for Autonomous Driving)
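A predictive display typically dead-reckons the vehicle pose forward by the communication delay from the last known speed and yaw rate. A minimal constant-turn-rate sketch; the study's actual prediction model is not specified in the abstract.

```python
import math

def predict_pose(x, y, heading, speed, yaw_rate, delay_s=0.15):
    """Dead-reckon a vehicle pose forward by the communication delay
    (here 150 ms) assuming constant speed and yaw rate, as a predictive
    display might do to draw the vehicle's estimated current position."""
    if abs(yaw_rate) < 1e-6:              # straight-line motion
        return (x + speed * delay_s * math.cos(heading),
                y + speed * delay_s * math.sin(heading),
                heading)
    new_heading = heading + yaw_rate * delay_s
    r = speed / yaw_rate                  # turn radius
    return (x + r * (math.sin(new_heading) - math.sin(heading)),
            y - r * (math.cos(new_heading) - math.cos(heading)),
            new_heading)

print(predict_pose(0.0, 0.0, 0.0, speed=10.0, yaw_rate=0.2))
```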
19 pages, 6380 KB  
Article
Design and Analysis of an Anti-Collision Spacer Ring and Installation Robot for Overhead Transmission Lines
by Tianlei Wang, Huize Lian and Tianhui Cheng
Machines 2026, 14(1), 23; https://doi.org/10.3390/machines14010023 - 24 Dec 2025
Viewed by 442
Abstract
Overhead transmission lines often suffer from mutual collisions between adjacent conductors in windy weather, which can cause power failures in villages. To solve this problem, this paper introduces a spacer ring and a teleoperated robot for the installation and retrieval of the ring. The spacer ring and robot address the installation challenges of anti-collision devices and enhance transmission line maintenance. Fixed by the locking mechanism, the spacer ring can isolate adjacent conductors to avoid collisions. The structure and working principle of the spacer ring and installation robot are introduced. Static analysis and finite element analysis (FEA) are conducted to analyze the output force of the locking mechanism, which is then validated through experiments. Experimental results show that the locking mechanism can generate a strong output force of up to 2000 N with about 6.0 N·m of input torque, providing a secure installation for the spacer ring. Diverse installation tests have validated the robot’s capability for live-line operations on transmission lines. Field tests indicate that the installation robot can travel at 0.3 m/s on a 15° slope and successfully install the spacer rings. Full article
(This article belongs to the Section Machine Design and Theory)