Search Results (45)

Search Parameters:
Keywords = human-teleoperated demonstration

26 pages, 6831 KiB  
Article
Human–Robot Interaction and Tracking System Based on Mixed Reality Disassembly Tasks
by Raúl Calderón-Sesmero, Adrián Lozano-Hernández, Fernando Frontela-Encinas, Guillermo Cabezas-López and Mireya De-Diego-Moro
Robotics 2025, 14(8), 106; https://doi.org/10.3390/robotics14080106 - 30 Jul 2025
Abstract
Disassembly is a crucial process in industrial operations, especially in tasks requiring high precision and strict safety standards when handling components with collaborative robots. However, traditional methods often rely on rigid and sequential task planning, which makes it difficult to adapt to unforeseen changes or dynamic environments. This rigidity not only limits flexibility but also leads to prolonged execution times, as operators must follow predefined steps that do not allow for real-time adjustments. Although techniques like teleoperation have attempted to address these limitations, they often hinder direct human–robot collaboration within the same workspace, reducing effectiveness in dynamic environments. In response to these challenges, this research introduces an advanced human–robot interaction (HRI) system leveraging a mixed-reality (MR) interface embedded in a head-mounted device (HMD). The system enables operators to issue real-time control commands using multimodal inputs, including voice, gestures, and gaze tracking. These inputs are synchronized and processed via the Robot Operating System (ROS 2), enabling dynamic and flexible task execution. Additionally, the integration of deep learning algorithms ensures precise detection and validation of disassembly components, enhancing accuracy. Experimental evaluations demonstrate significant improvements, including reduced task completion times, enhanced operator experience, and strict adherence to safety standards. This scalable solution offers broad applicability for general-purpose disassembly tasks, making it well-suited for complex industrial scenarios.
(This article belongs to the Special Issue Robot Teleoperation Integrating with Augmented Reality)
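
For readers curious how such multimodal command fusion might look in practice, here is a minimal, hypothetical ROS 2 (rclpy) sketch; the topic names, message type, and fusion rule are illustrative assumptions, not the authors' published interface.

```python
# Hypothetical sketch: fusing voice, gesture, and gaze inputs into a single
# robot command with ROS 2 (rclpy). Topic names and message contents are
# assumptions; the paper does not publish its interface definitions.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class MultimodalCommandFuser(Node):
    def __init__(self):
        super().__init__('multimodal_command_fuser')
        self.latest = {'voice': None, 'gesture': None, 'gaze': None}
        for channel in self.latest:
            self.create_subscription(
                String, f'/hmd/{channel}',  # hypothetical topic names
                lambda msg, c=channel: self.update(c, msg.data), 10)
        self.cmd_pub = self.create_publisher(String, '/robot/command', 10)
        self.create_timer(0.05, self.fuse)  # 20 Hz fusion loop

    def update(self, channel, value):
        self.latest[channel] = value

    def fuse(self):
        # Issue a command only when a spoken verb and a gaze target agree;
        # a gesture acts as confirmation, mirroring the multimodal idea.
        voice, gesture, gaze = (self.latest[k] for k in ('voice', 'gesture', 'gaze'))
        if voice and gaze and gesture == 'confirm':
            self.cmd_pub.publish(String(data=f'{voice} {gaze}'))
            self.latest = dict.fromkeys(self.latest)  # consume the inputs

def main():
    rclpy.init()
    rclpy.spin(MultimodalCommandFuser())

if __name__ == '__main__':
    main()
```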

23 pages, 3542 KiB  
Article
An Intuitive and Efficient Teleoperation Human–Robot Interface Based on a Wearable Myoelectric Armband
by Long Wang, Zhangyi Chen, Songyuan Han, Yao Luo, Xiaoling Li and Yang Liu
Biomimetics 2025, 10(7), 464; https://doi.org/10.3390/biomimetics10070464 - 15 Jul 2025
Abstract
Although artificial intelligence technologies have significantly enhanced autonomous robots’ capabilities in perception, decision-making, and planning, their autonomy may still fail when faced with complex, dynamic, or unpredictable environments. Therefore, it is critical to enable users to take over robot control in real-time and efficiently through teleoperation. The lightweight, wearable myoelectric armband, due to its portability and environmental robustness, provides a natural human–robot gesture interaction interface. However, current myoelectric teleoperation gesture control faces two major challenges: (1) poor intuitiveness due to visual-motor misalignment; and (2) low efficiency from discrete, single-degree-of-freedom control modes. To address these challenges, this study proposes an integrated myoelectric teleoperation interface. The interface integrates the following: (1) a novel hybrid reference frame aimed at effectively mitigating visual-motor misalignment; and (2) a finite state machine (FSM)-based control logic designed to enhance control efficiency and smoothness. Four experimental tasks were designed using different end-effectors (gripper/dexterous hand) and camera viewpoints (front/side view). Compared to benchmark methods, the proposed interface demonstrates significant advantages in task completion time, movement path efficiency, and subjective workload. This work demonstrates the potential of the proposed interface to significantly advance the practical application of wearable myoelectric sensors in human–robot interaction.
(This article belongs to the Special Issue Intelligent Human–Robot Interaction: 4th Edition)
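
As a rough illustration of the FSM-based control logic the abstract describes, the sketch below switches control modes on discrete recognized gestures; all state and gesture names are assumptions, not the authors' definitions.

```python
# A minimal finite-state-machine sketch: discrete myoelectric gestures
# switch control modes, and unknown gestures leave the mode unchanged.
TRANSITIONS = {
    ('idle',      'fist'): 'translate',
    ('translate', 'fist'): 'rotate',
    ('rotate',    'fist'): 'grasp',
    ('grasp',     'fist'): 'idle',
    # an open hand aborts back to idle from any active mode
    ('translate', 'open'): 'idle',
    ('rotate',    'open'): 'idle',
    ('grasp',     'open'): 'idle',
}

def step(state: str, gesture: str) -> str:
    """Return the next control mode given the recognized gesture."""
    return TRANSITIONS.get((state, gesture), state)  # unknown input: stay put

state = 'idle'
for g in ['fist', 'wave_in', 'fist', 'open']:  # a recognized gesture stream
    state = step(state, g)
    print(g, '->', state)
```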

19 pages, 5486 KiB  
Article
The Development of Teleoperated Driving to Cooperate with the Autonomous Driving Experience
by Nuksit Noomwongs, Krit T. Siriwattana, Sunhapos Chantranuwathana and Gridsada Phanomchoeng
Automation 2025, 6(3), 26; https://doi.org/10.3390/automation6030026 - 25 Jun 2025
Abstract
Autonomous vehicles are increasingly being adopted, with manufacturers competing to enhance automation capabilities. While full automation eliminates human input, lower levels still require driver intervention under specific conditions. This study presents the design and development of a prototype vehicle featuring both low- and high-level control systems, integrated with a 5G-based teleoperation interface that enables seamless switching between autonomous and remote-control modes. The system includes a malfunction surveillance unit that monitors communication latency and obstacle conditions, triggering a hardware-based emergency braking mechanism when safety thresholds are exceeded. Field experiments conducted over four test phases around Chulalongkorn University demonstrated stable performance under both driving modes. Mean lateral deviations ranged from 0.19 m to 0.33 m, with maximum deviations up to 0.88 m. Average end-to-end latency was 109.7 ms, with worst-case spikes of 316.6 ms. The emergency fallback system successfully identified all predefined fault conditions and responded with timely braking. Latency-aware stopping analysis showed an increase in braking distance from 1.42 m to 2.37 m at 3 m/s. In scenarios with extreme latency (>500 ms), the system required operator steering input or fallback to autonomous mode to avoid obstacles. These results confirm the platform’s effectiveness in real-world teleoperation over public 5G networks and its potential scalability for broader deployment.
(This article belongs to the Section Smart Transportation and Autonomous Vehicles)
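
The reported stopping distances are consistent with a constant-deceleration model: the back-of-envelope check below infers the effective deceleration from the 1.42 m zero-latency stop and adds the travel during the worst-case 316.6 ms latency.

```python
# Back-of-envelope check of the latency-aware stopping figures, assuming
# constant deceleration. The vehicle's actual braking model is not given;
# we infer an effective deceleration from the reported 1.42 m stop at 3 m/s.
v = 3.0                      # speed, m/s
d_base = 1.42                # reported braking distance with no latency, m
a = v**2 / (2 * d_base)      # implied deceleration, ~3.17 m/s^2
tau = 0.3166                 # worst-case end-to-end latency, s (316.6 ms)

d_total = v * tau + v**2 / (2 * a)   # travel during latency + braking distance
print(f'deceleration: {a:.2f} m/s^2')
print(f'stopping distance with latency: {d_total:.2f} m')  # ~2.37 m, matching the paper
```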

17 pages, 1929 KiB  
Article
Bio-Signal-Guided Robot Adaptive Stiffness Learning via Human-Teleoperated Demonstrations
by Wei Xia, Zhiwei Liao, Zongxin Lu and Ligang Yao
Biomimetics 2025, 10(6), 399; https://doi.org/10.3390/biomimetics10060399 - 13 Jun 2025
Abstract
Robot learning from human demonstration pioneers an effective mapping paradigm for endowing robots with human-like operational capabilities. This paper proposes a bio-signal-guided robot adaptive stiffness learning framework grounded in the finding that muscle activation of the human arm is positively correlated with endpoint stiffness. First, we propose a human-teleoperated demonstration platform enabling real-time modulation of robot end-effector stiffness by human tutors during operational tasks. Second, we develop a dual-stage probabilistic modeling architecture employing the Gaussian mixture model and Gaussian mixture regression to model, in turn, the temporal–motion correlation and the motion–sEMG relationship. Third, a real-world experiment was conducted to validate the effectiveness of the proposed skill transfer framework, demonstrating that the robot achieves online adaptation of Cartesian impedance characteristics in contact-rich tasks. This paper provides a simple and intuitive way to plan Cartesian impedance parameters, transcending classical methods that require complex identification of human arm endpoint stiffness before demonstration, or compensation for differences in human–robot operational effects afterwards.
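
A minimal Gaussian mixture regression (GMR) sketch of the dual-stage idea, shown on a generic 1-D toy problem; the paper chains a time → motion stage and a motion → sEMG/stiffness stage, which are not reproduced here.

```python
# Compact GMR: fit a GMM on joint (input, output) data, then use the
# conditional mean E[y|x] as the regression. The toy data is a stand-in.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def gmr(gmm, x, d_in):
    """Condition a fitted GaussianMixture on its first d_in dimensions."""
    w, mu_out = np.empty(gmm.n_components), []
    for k in range(gmm.n_components):
        mu, S = gmm.means_[k], gmm.covariances_[k]
        mi, mo = mu[:d_in], mu[d_in:]
        Sii, Soi = S[:d_in, :d_in], S[d_in:, :d_in]
        w[k] = gmm.weights_[k] * multivariate_normal.pdf(x, mi, Sii)
        mu_out.append(mo + Soi @ np.linalg.solve(Sii, x - mi))  # conditional mean
    w /= w.sum()
    return sum(wk * m for wk, m in zip(w, mu_out))

# toy demonstration data: 1-D input t, 1-D output y = sin(t) + noise
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 500)
y = np.sin(t) + 0.05 * rng.standard_normal(500)
gmm = GaussianMixture(n_components=5, covariance_type='full').fit(np.c_[t, y])
print(gmr(gmm, np.array([1.0]), d_in=1))  # close to sin(1.0)
```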

19 pages, 28961 KiB  
Article
Human-like Dexterous Grasping Through Reinforcement Learning and Multimodal Perception
by Wen Qi, Haoyu Fan, Cankun Zheng, Hang Su and Samer Alfayad
Biomimetics 2025, 10(3), 186; https://doi.org/10.3390/biomimetics10030186 - 18 Mar 2025
Cited by 2
Abstract
Dexterous robotic grasping with multifingered hands remains a critical challenge in non-visual environments, where diverse object geometries and material properties demand adaptive force modulation and tactile-aware manipulation. To address this, we propose the Reinforcement Learning-Based Multimodal Perception (RLMP) framework, which integrates human-like grasping intuition through operator-worn gloves with tactile-guided reinforcement learning. The framework’s key innovation lies in its Tactile-Driven DCNN architecture—a lightweight convolutional network achieving 98.5% object recognition accuracy using spatiotemporal pressure patterns—coupled with an RL policy refinement mechanism that dynamically correlates finger kinematics with real-time tactile feedback. Experimental results demonstrate reliable grasping performance across deformable and rigid objects while maintaining force precision critical for fragile targets. By bridging human teleoperation with autonomous tactile adaptation, RLMP eliminates dependency on visual input and predefined object models, establishing a new paradigm for robotic dexterity in occlusion-rich scenarios.
(This article belongs to the Special Issue Biomimetic Innovations for Human–Machine Interaction)
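
For illustration only, a lightweight tactile CNN in the spirit of the paper's Tactile-Driven DCNN; the input shape, layer sizes, and class count are assumptions, as the abstract does not specify the architecture.

```python
# Illustrative lightweight CNN classifying objects from stacked tactile
# pressure frames. All dimensions here are assumptions.
import torch
import torch.nn as nn

class TactileCNN(nn.Module):
    def __init__(self, n_classes=16, t_frames=8):
        super().__init__()
        # a stack of pressure frames is treated as input channels: (B, T, H, W)
        self.features = nn.Sequential(
            nn.Conv2d(t_frames, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        return self.head(self.features(x))

net = TactileCNN()
pressure = torch.randn(1, 8, 16, 16)   # one sample: 8 frames of a 16x16 taxel grid
print(net(pressure).shape)             # torch.Size([1, 16]) class logits
```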

25 pages, 13905 KiB  
Article
A Framework for Real-Time Autonomous Robotic Sorting and Segregation of Nuclear Waste: Modelling, Identification and Control of Dexter™ Robot
by Mithun Poozhiyil, Omer F. Argin, Mini Rai, Amir G. Esfahani, Marc Hanheide, Ryan King, Phil Saunderson, Mike Moulin-Ramsden, Wen Yang, Laura Palacio García, Iain Mackay, Abhishek Mishra, Sho Okamoto and Kelvin Yeung
Machines 2025, 13(3), 214; https://doi.org/10.3390/machines13030214 - 6 Mar 2025
Abstract
Robots are essential for carrying out tasks in settings such as the nuclear industry, where direct human involvement is limited. However, present-day nuclear robots are not versatile due to limited autonomy and high costs. This research presents the transformation of Dexter™, a purely teleoperated nuclear robot, into an autonomous manipulator for nuclear sort and segregation tasks. The Dexter™ system comprises an arm client manipulator designed to operate in extreme radiation environments and a similar single/dual-arm local manipulator. In this paper, a kinematic model and a convex-optimization-based dynamic model identification of a single-arm Dexter™ manipulator are first presented. This model is used for autonomous Dexter™ control through the Robot Operating System (ROS). A new integration framework incorporating vision, AI-based grasp generation and an intelligent radiological surveying method for enhancing the performance of autonomous Dexter™ is presented. The efficacy of the framework is demonstrated on a mock-up nuclear waste test-bed using waste materials similar to those found in the nuclear industry. The experiments performed show the potency, generality and applicability of the proposed framework in overcoming the entry barriers for autonomous systems in regulated domains like the nuclear industry.
(This article belongs to the Special Issue New Trends in Industrial Robots)
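
The convex identification step rests on the standard property that manipulator dynamics are linear in the inertial parameters, tau = Y(q, dq, ddq) θ; the sketch below shows the unconstrained least-squares core with a stand-in regressor (a real Dexter™ regressor would be derived from its kinematics).

```python
# Regressor-based dynamic parameter identification: joint torques are
# linear in the unknown inertial parameters, so theta follows from least
# squares over logged trajectories. The regressor Y here is a stand-in.
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_params = 200, 6
Y = rng.standard_normal((n_samples, n_params))   # stacked regressor rows (stand-in)
theta_true = np.array([2.0, 0.5, 0.1, 1.2, 0.3, 0.05])
tau = Y @ theta_true + 0.01 * rng.standard_normal(n_samples)  # measured torques

theta_hat, *_ = np.linalg.lstsq(Y, tau, rcond=None)  # unconstrained LS solution
print(np.round(theta_hat, 3))  # close to theta_true
# In a convex formulation, physical-consistency constraints (e.g., positive
# masses and inertias) would be added, for example via cvxpy.
```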

18 pages, 4465 KiB  
Article
A Semi-Autonomous Telemanipulation Order-Picking Control Based on Estimating Operator Intent for Box-Stacking Storage Environments
by Donggyu Min, Hojin Yoon and Donghun Lee
Sensors 2025, 25(4), 1217; https://doi.org/10.3390/s25041217 - 17 Feb 2025
Abstract
Teleoperation-based order picking in logistics warehouse environments has been advancing steadily. However, the accuracy of such operations varies depending on the type of human–robot interface (HRI) employed. Immersive HRI, which uses a head-mounted display (HMD) and controllers, can significantly reduce task accuracy due to the limited field of view in virtual environments. To address this limitation, this study proposes a semi-autonomous telemanipulation order-picking control method based on operator intent estimation using intersection points between the end-effector and the target logistics plane in box-stacking storage environments. The proposed method consists of two stages. The first stage involves operator intent estimation, which approximates the target logistics plane using objects identified through camera vision and calculates the intersection points by intersecting the end-effector heading vector with the plane. These points are accumulated and modeled as a Gaussian distribution, with the probability density function (PDF) of each target object treated as its likelihood. Bayesian probability filtering is then applied to estimate target probabilities, and predefined conditions are used to switch control between autonomous and manual controllers. Results show that the proposed operator intent estimation method identified the correct target in 74.6% of the task’s duration. The proposed semi-autonomous control method successfully transferred control to the autonomous controller within 32.2% of the total task duration using a combination of three parameters. This approach inferred operator intent based solely on manipulator motion and reduced operator fatigue. This method demonstrates potential for broad application in teleoperation systems, offering high operational efficiency regardless of operator expertise or training level.
(This article belongs to the Special Issue Applications of Body Worn Sensors and Wearables)
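
A condensed sketch of the intent-estimation loop the abstract outlines (ray–plane intersection, Gaussian modeling of the accumulated hit points, Bayesian update over candidate boxes); all geometry and parameters are illustrative.

```python
# Intent estimation from end-effector heading: intersect the heading ray
# with the box-face plane, model accumulated hits as a Gaussian, score each
# candidate box center by its likelihood, and renormalize (Bayes update).
import numpy as np
from scipy.stats import multivariate_normal

def ray_plane(origin, direction, p0, normal):
    """Intersection of a ray with the plane through p0 with given normal."""
    t = np.dot(p0 - origin, normal) / np.dot(direction, normal)
    return origin + t * direction

rng = np.random.default_rng(3)
plane_p0 = plane_n = np.array([0.0, 0.0, 1.0])           # box-face plane z = 1
targets = {'box_a': np.array([0.1, 0.0]), 'box_b': np.array([0.5, 0.4])}
belief = {k: 0.5 for k in targets}                        # uniform prior
hits = []

for _ in range(20):                                       # simulated operator motion
    direction = np.array([0.1, 0.02, 1.0]) + 0.01 * rng.standard_normal(3)
    hits.append(ray_plane(np.zeros(3), direction, plane_p0, plane_n)[:2])
    if len(hits) < 2:
        continue                                          # need >= 2 points for a covariance
    H = np.array(hits)
    mu, cov = H.mean(axis=0), np.cov(H.T) + 1e-4 * np.eye(2)
    for k, pos in targets.items():                        # likelihood of each box center
        belief[k] *= multivariate_normal.pdf(pos, mu, cov)
    z = sum(belief.values())
    belief = {k: v / z for k, v in belief.items()}

print(belief)  # probability mass concentrates on box_a
```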

22 pages, 7758 KiB  
Article
Haptic Guidance System for Teleoperation Based on Trajectory Similarity
by Hikaru Nagano, Tomoki Nishino, Yuichi Tazaki and Yasuyoshi Yokokohji
Robotics 2025, 14(2), 15; https://doi.org/10.3390/robotics14020015 - 30 Jan 2025
Cited by 1
Abstract
Teleoperation technology enables remote control of machines, but often requires complex manoeuvres that pose significant challenges for operators. To mitigate these challenges, assistive systems have been developed to support teleoperation. This study presents a teleoperation guidance system that provides assistive force feedback to help operators align more accurately with desired trajectories. Two key issues remain: (1) the lack of a flexible, real-time approach to defining desired trajectories and calculating assistive forces, and (2) uncertainty about the effects of forward motion assistance within the assistive forces. To address these issues, we propose a novel approach that captures the posture trajectory of the local control interface, statistically generates a reference trajectory, and incorporates forward motion as an adjustable parameter. In Experiment 1, which involved simulating an object transfer task, the proposed method significantly reduced the operator’s workload compared to conventional techniques, especially in dynamic target scenarios. Experiment 2, which involved more complex paths, showed that assistive forces with forward assistance significantly improved manoeuvring performance.
(This article belongs to the Special Issue Robot Teleoperation Integrating with Augmented Reality)
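
A simplified sketch of assistive-force generation with an adjustable forward-assistance term, following the abstract's description; the reference trajectory and gains are illustrative.

```python
# Assistive force = corrective pull toward the nearest reference sample
# plus an adjustable push along the local trajectory tangent.
import numpy as np

ref = np.c_[np.linspace(0, 1, 100), np.sin(np.linspace(0, np.pi, 100))]  # reference path

def assistive_force(p, k_pull=5.0, k_forward=1.0):
    i = np.argmin(np.linalg.norm(ref - p, axis=1))        # closest reference sample
    pull = k_pull * (ref[i] - p)                          # corrective component
    tangent = ref[min(i + 1, len(ref) - 1)] - ref[i]      # local direction of travel
    n = np.linalg.norm(tangent)
    forward = k_forward * tangent / n if n > 0 else 0.0   # forward-assistance component
    return pull + forward

print(assistive_force(np.array([0.30, 0.60])))  # force steering back toward the path
```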

21 pages, 8220 KiB  
Article
Network Congestion Control Algorithm for Image Transmission—HRI and Visual Light Communications of an Autonomous Underwater Vehicle for Intervention
by Salvador López-Barajas, Pedro J. Sanz, Raúl Marín-Prades, Juan Echagüe and Sebastian Realpe
Future Internet 2025, 17(1), 10; https://doi.org/10.3390/fi17010010 - 1 Jan 2025
Cited by 2
Abstract
In this study, the challenge of teleoperating robots in harsh environments such as underwater or in tunnels is addressed. In these environments, wireless communication networks are prone to congestion, leading to potential mission failures. Our approach integrates a Human–Robot Interface (HRI) with a network congestion control algorithm at the application level for conservative transmission of images using the Robot Operating System (ROS) framework. The system was designed to avoid network congestion by adjusting the image compression parameters and the transmission rate depending on the real-time network conditions. To evaluate its performance, the algorithm was tested in two wireless underwater use cases: pipe inspection and an intervention task. An Autonomous Underwater Vehicle for Intervention (I-AUV) equipped with a Visual Light Communication (VLC) modem was used. Characterization of the VLC network was performed while the robot performed trajectories in the tank. The results demonstrate that our approach allows an operator to perform wireless missions where teleoperation requires images and the network conditions are variable. This solution provides a robust framework for image transmission and network control in the application layer, which allows for integration with any ROS-based system.
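
An AIMD-flavored sketch of the kind of application-level adaptation described (degrade image quality and rate when measured delay rises, recover when the link is healthy); the thresholds and step sizes are assumptions, as the paper's exact policy is not given in the abstract.

```python
# Adapt JPEG quality and frame rate to measured round-trip time:
# back off multiplicatively under congestion, recover additively otherwise.
def adapt(quality, fps, rtt_ms, rtt_target=200.0):
    if rtt_ms > rtt_target:            # congestion: back off
        quality = max(10, int(quality * 0.7))
        fps = max(1.0, fps * 0.7)
    else:                              # healthy link: recover slowly
        quality = min(90, quality + 2)
        fps = min(10.0, fps + 0.25)
    return quality, fps

quality, fps = 80, 5.0
for rtt in [120, 150, 400, 600, 350, 180, 150, 140]:   # simulated VLC-link RTTs, ms
    quality, fps = adapt(quality, fps, rtt)
    print(f'rtt={rtt:4d} ms -> quality={quality:2d}, fps={fps:.2f}')
```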

34 pages, 3047 KiB  
Article
Stability Analysis and Experimental Validation of Standard Proportional-Integral-Derivative Control in Bilateral Teleoperators with Time-Varying Delays
by Marco A. Arteaga, Evert J. Guajardo-Benavides and Pablo Sánchez-Sánchez
Algorithms 2024, 17(12), 580; https://doi.org/10.3390/a17120580 - 16 Dec 2024
Cited by 1
Abstract
The control of bilateral teleoperation systems with time-varying delays is a challenging problem that is frequently addressed with advanced control techniques. Widely known controllers, like Proportional-Derivative (PD) and Proportional-Integral-Derivative (PID), are seldom employed independently and are typically combined with other approaches, or at least with gravity compensation. This work aims to address a gap in the analysis of bilateral systems by demonstrating that the standard PID control law alone can achieve regulation in these systems when a human operator moves any of the robots while exchanging delayed positions. Experimental results are consistent with the theoretical analysis. Additionally, to illustrate the high degree of robustness of the standard PID, further experiments are conducted in constrained motion, both with and without force feedback.
(This article belongs to the Special Issue Algorithms for PID Controller 2024)
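
A one-DoF numerical sketch of the analyzed scheme: each side runs a standard PID law on the error between its own position and the delayed position received from the partner; the dynamics, gains, and delay are illustrative, not the paper's experimental setup.

```python
# Two coupled 1-DoF joints, each under standard PID toward the partner's
# delayed position. With suitable gains both positions converge together.
import numpy as np

dt, T, delay_steps = 0.001, 5.0, 100           # 100 ms one-way delay
n = int(T / dt)
q = np.zeros((2, n)); dq = np.zeros((2, n)); integ = np.zeros(2)
kp, ki, kd, m = 20.0, 5.0, 8.0, 1.0            # illustrative gains, unit mass
q[0, 0] = 1.0                                   # operator displaces the local robot

for t in range(1, n):
    for i, j in ((0, 1), (1, 0)):
        q_remote = q[j, max(0, t - delay_steps)]        # delayed partner position
        e = q[i, t - 1] - q_remote
        integ[i] += e * dt
        tau = -kp * e - ki * integ[i] - kd * dq[i, t - 1]  # standard PID law
        dq[i, t] = dq[i, t - 1] + (tau / m) * dt           # double-integrator joint
        q[i, t] = q[i, t - 1] + dq[i, t] * dt

print(q[:, -1])  # both positions settle near a common value
```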

11 pages, 3982 KiB  
Proceeding Paper
Remote Control of ADAS Features: A Teleoperation Approach to Mitigate Autonomous Driving Challenges
by İsa Karaböcek, Batıkan Kavak and Ege Özdemir
Eng. Proc. 2024, 82(1), 36; https://doi.org/10.3390/ecsa-11-20449 - 25 Nov 2024
Cited by 1
Abstract
This paper presents a novel approach to enhancing the safety of Advanced Driver Assistance Systems (ADAS) by integrating teleoperation for the remote control of ADAS features in a vehicle. The primary contribution of this research is the development and implementation of a teleoperation system that allows human operators to take control of the vehicle’s ADAS features, enabling timely intervention in critical situations where autonomous functions may be insufficient. While the concept of teleoperation has been explored in the literature, with several implementations focused on the direct control of vehicles, there are relatively few examples of teleoperation systems designed specifically to utilize ADAS features. This research addresses this gap by exploring teleoperation as a supplementary mechanism that allows human intervention in critical driving situations, particularly where autonomous systems may encounter limitations. The teleoperation system was tested under two critical ADAS scenarios, cruise control and lane change assist, chosen for their importance in real-world driving conditions. These scenarios demonstrate how teleoperation can complement and enhance the performance of ADAS features. The experiments reveal the effectiveness of remote operation in providing precise control, allowing for swift and accurate responses in scenarios where the autonomous system might face challenges. The novelty of this work lies in its application of teleoperation to ADAS features, offering a new perspective on how human intervention can enhance vehicle safety. The findings provide valuable insights into optimizing teleoperation for real-world driving scenarios. As a result of the experiments, it was demonstrated that integrating teleoperation with ADAS features offers a more reliable solution compared to standalone ADAS driving.
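
A hypothetical sketch of such a supervisory interface: the operator sends high-level ADAS setpoints rather than direct steering inputs, and the vehicle rejects stale commands; all message fields and the timeout value are assumptions.

```python
# High-level ADAS setpoint message with a staleness check, so that network
# delay cannot apply an outdated operator command.
import time
from dataclasses import dataclass

@dataclass
class AdasCommand:
    cruise_speed_mps: float | None = None   # None leaves the current setpoint
    lane_change: str | None = None          # 'left', 'right', or None
    timestamp: float = 0.0

def apply(cmd: AdasCommand, max_age_s=0.5):
    if time.time() - cmd.timestamp > max_age_s:
        return 'rejected: stale command (network delay too high)'
    actions = []
    if cmd.cruise_speed_mps is not None:
        actions.append(f'set cruise control to {cmd.cruise_speed_mps} m/s')
    if cmd.lane_change in ('left', 'right'):
        actions.append(f'request lane change assist: {cmd.lane_change}')
    return '; '.join(actions) or 'no-op'

print(apply(AdasCommand(cruise_speed_mps=20.0, lane_change='left',
                        timestamp=time.time())))
```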

18 pages, 9899 KiB  
Article
A Robotic Teleoperation System with Integrated Augmented Reality and Digital Twin Technologies for Disassembling End-of-Life Batteries
by Feifan Zhao, Wupeng Deng and Duc Truong Pham
Batteries 2024, 10(11), 382; https://doi.org/10.3390/batteries10110382 - 30 Oct 2024
Cited by 3
Abstract
Disassembly is a key step in remanufacturing, especially for end-of-life (EoL) products such as electric vehicle (EV) batteries, which are challenging to dismantle due to uncertainties in their condition and potential risks of fire, fumes, explosions, and electrical shock. To address these challenges, this paper presents a robotic teleoperation system that leverages augmented reality (AR) and digital twin (DT) technologies to enable a human operator to work away from the danger zone. By integrating AR and DTs, the system not only provides a real-time visual representation of the robot’s status but also enables remote control via gesture recognition. A bidirectional communication framework established within the system synchronises the virtual robot with its physical counterpart in an AR environment, which enhances the operator’s understanding of both the robot and task statuses. In the event of anomalies, the operator can interact with the virtual robot through intuitive gestures based on information displayed on the AR interface, thereby improving decision-making efficiency and operational safety. The application of this system is demonstrated through a case study involving the disassembly of a busbar from an EoL EV battery. Furthermore, the performance of the system in terms of task completion time and operator workload was evaluated and compared with that of AR-based control methods without informational cues and ‘smartpad’ controls. The findings indicate that the proposed system reduces operation time and enhances user experience, demonstrating its broad application potential in complex industrial settings.
(This article belongs to the Section Battery Processing, Manufacturing and Recycling)

27 pages, 28326 KiB  
Article
Full-Body Pose Estimation of Humanoid Robots Using Head-Worn Cameras for Digital Human-Augmented Robotic Telepresence
by Youngdae Cho, Wooram Son, Jaewan Bak, Yisoo Lee, Hwasup Lim and Youngwoon Cha
Mathematics 2024, 12(19), 3039; https://doi.org/10.3390/math12193039 - 28 Sep 2024
Cited by 1
Abstract
We envision a telepresence system that enhances remote work by facilitating both physical and immersive visual interactions between individuals. However, during robot teleoperation, communication often lacks realism, as users see the robot’s body rather than the remote individual. To address this, we propose a method for overlaying a digital human model onto a humanoid robot using XR visualization, enabling an immersive 3D telepresence experience. Our approach employs a learning-based method to estimate the 2D poses of the humanoid robot from head-worn stereo views, leveraging a newly collected dataset of full-body poses for humanoid robots. The stereo 2D poses and sparse inertial measurements from the remote operator are optimized to compute 3D poses over time. The digital human is localized from the perspective of a continuously moving observer, utilizing the estimated 3D pose of the humanoid robot. Our moving camera-based pose estimation method does not rely on any markers or external knowledge of the robot’s status, effectively overcoming challenges such as marker occlusion, calibration issues, and dependencies on headset tracking errors. We demonstrate the system in a remote physical training scenario, achieving real-time performance at 40 fps, which enables simultaneous immersive and physical interactions. Experimental results show that our learning-based 3D pose estimation method, which operates without prior knowledge of the robot, significantly outperforms alternative approaches requiring the robot’s global pose, particularly during rapid headset movements, achieving markerless digital human augmentation from head-worn views.
(This article belongs to the Topic Extended Reality: Models and Applications)
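
The geometric core of lifting stereo 2D joint detections to 3D is two-view triangulation; the DLT sketch below shows that step with synthetic camera matrices (the paper additionally fuses inertial measurements and optimizes over time, which is not reproduced here).

```python
# Linear (DLT) triangulation of one 3D point from two projection matrices.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Least-squares 3D point from two 2D observations and camera matrices."""
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1.0]])
P1 = K @ np.c_[np.eye(3), np.zeros(3)]                 # left camera at origin
P2 = K @ np.c_[np.eye(3), np.array([-0.06, 0, 0])]     # 6 cm stereo baseline

X_true = np.array([0.2, -0.1, 2.0, 1.0])               # a joint 2 m away
x1 = P1 @ X_true; x1 = x1[:2] / x1[2]                  # project into each view
x2 = P2 @ X_true; x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))                     # recovers [0.2, -0.1, 2.0]
```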

23 pages, 3808 KiB  
Article
Gesture Recognition Framework for Teleoperation of Infrared (IR) Consumer Devices Using a Novel pFMG Soft Armband
by Sam Young, Hao Zhou and Gursel Alici
Sensors 2024, 24(18), 6124; https://doi.org/10.3390/s24186124 - 22 Sep 2024
Abstract
Wearable technologies represent a significant advancement in facilitating communication between humans and machines. Powered by artificial intelligence (AI), human gestures detected by wearable sensors can provide people with seamless interaction with physical, digital, and mixed environments. In this paper, the foundations of a gesture-recognition framework for the teleoperation of infrared consumer electronics are established. This framework is based on force myography data of the upper forearm, acquired from a prototype novel soft pressure-based force myography (pFMG) armband. Here, the sub-processes of the framework are detailed, including the acquisition of infrared and force myography data; pre-processing; feature construction/selection; classifier selection; post-processing; and interfacing/actuation. The gesture recognition system is evaluated using 12 subjects’ force myography data obtained whilst performing five classes of gestures. Our results demonstrate inter-session and inter-trial average gesture recognition accuracies of approximately 92.2% and 88.9%, respectively. The gesture recognition framework was successfully able to teleoperate several infrared consumer electronics as a wearable, safe and affordable human–machine interface system. The contribution of this study centres around proposing and demonstrating a user-centred design methodology to allow direct human–machine interaction and interface for applications where humans and devices are in the same loop or coexist, as typified between users and infrared-communicating devices in this study.
(This article belongs to the Special Issue Intelligent Human-Computer Interaction Systems and Their Evaluation)
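
A schematic of the pipeline stages listed in the abstract (feature construction, classifier selection, inference), using generic time-domain features and an off-the-shelf classifier on synthetic data; the armband's real channel count, feature set, and classifier choice are assumptions.

```python
# Windowed time-domain features from multi-channel pressure data, fed to a
# generic classifier. The synthetic class offsets stand in for real pFMG data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(window):
    """Per-channel mean, std, and peak of a pressure window (T, channels)."""
    return np.r_[window.mean(0), window.std(0), window.max(0)]

rng = np.random.default_rng(2)
n_ch, classes = 8, ['rest', 'fist', 'point', 'swipe_l', 'swipe_r']
X, y = [], []
for ci, c in enumerate(classes):          # 40 training windows per gesture
    for _ in range(40):
        w = rng.standard_normal((50, n_ch)) + ci * 0.5  # stand-in pFMG signal
        X.append(features(w)); y.append(c)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)
test = rng.standard_normal((50, n_ch)) + 2 * 0.5        # resembles 'point'
print(clf.predict([features(test)]))                    # -> ['point']
```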

26 pages, 20338 KiB  
Article
Robust Adaptive-Sliding-Mode Control for Teleoperation Systems with Time-Varying Delays and Uncertainties
by Yeong-Hwa Chang, Cheng-Yuan Yang and Hung-Wei Lin
Robotics 2024, 13(6), 89; https://doi.org/10.3390/robotics13060089 - 13 Jun 2024
Cited by 5
Abstract
Master–slave teleoperation systems with haptic feedback enable human operators to interact with objects or perform tasks in remote environments. This paper presents a sliding-mode control scheme tailored for bilateral teleoperation systems operating in the presence of unknown uncertainties and time-varying delays. To address unknown but bounded uncertainties, adaptive laws are derived alongside controller design. Additionally, a linear matrix inequality is solved to determine the allowable bound of delays. Stability of the closed-loop system is ensured through Lyapunov–Krasovskii functional analysis. Two-degree-of-freedom mechanisms are self-built as haptic devices. Free-motion and force-perception scenarios are examined, with experimental results validating and comparing performances. The proposed adaptive-sliding-control method increases the position performance from 58.48% to 82.55% and the force performance from 83.48% to 99.77%. The proposed control scheme demonstrates enhanced position tracking and force perception in bilateral teleoperation systems.
(This article belongs to the Special Issue Adaptive and Nonlinear Control of Robotics)
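
A one-DoF sketch of adaptive sliding-mode regulation of the kind the paper builds on: a sliding surface s = ė + λe and a switching gain that adapts to dominate an unknown bounded disturbance; the plant, gains, and disturbance are illustrative, not the paper's teleoperation setup.

```python
# Adaptive sliding mode on a unit-mass joint: the switching gain k_hat grows
# with |s| until it dominates the (unknown, bounded) disturbance.
import numpy as np

dt, n, lam, gamma, m = 0.001, 5000, 5.0, 20.0, 1.0
q = dq = 0.0; q_des = 1.0; k_hat = 0.0

for t in range(n):
    e, de = q - q_des, dq
    s = de + lam * e                               # sliding surface
    k_hat += gamma * abs(s) * dt                   # adaptive switching gain
    u = -m * lam * de - k_hat * np.tanh(s / 0.01)  # smoothed sign() to cut chatter
    dist = 0.5 * np.sin(2 * np.pi * t * dt)        # unknown bounded disturbance
    ddq = (u + dist) / m
    dq += ddq * dt; q += dq * dt

print(round(q, 3), round(k_hat, 2))  # q approaches the 1.0 setpoint; gain settles
```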
