Search Results (13)

Search Parameters:
Keywords = proprioceptive state estimation

16 pages, 1978 KiB  
Article
Learning-Assisted Multi-IMU Proprioceptive State Estimation for Quadruped Robots
by Xuanning Liu, Yajie Bao, Peng Cheng, Dan Shen, Zhengyang Fan, Hao Xu and Genshe Chen
Information 2025, 16(6), 479; https://doi.org/10.3390/info16060479 - 9 Jun 2025
Viewed by 2163
Abstract
This paper presents a learning-assisted approach for state estimation of quadruped robots using observations from proprioceptive sensors, including multiple inertial measurement units (IMUs). Specifically, one body IMU and four additional IMUs, one attached to each calf link of the robot, are used to sense the dynamics of the body and legs, in addition to joint encoders. An extended Kalman filter (EKF) is employed to fuse the sensor data and estimate the robot's states in the world frame. To remove the dependence on measurements from a motion capture (mocap) system or other vision systems, the right-invariant EKF (RI-EKF) is extended to employ the foot IMU measurements for enhanced state estimation, and a learning-based approach is presented to estimate the vision-system measurements for the EKF and enhance its convergence. One-dimensional convolutional neural networks (CNNs) are leveraged to estimate the required measurements using only the available proprioception data. Experiments on real data from a quadruped robot demonstrate that proprioception can be sufficient for state estimation. The proposed learning-assisted approach, which does not rely on data from vision systems, achieves accuracy competitive with an EKF using mocap measurements and lower estimation errors than an RI-EKF using multi-IMU measurements.
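The prediction/correction cycle at the heart of such a filter can be illustrated with a minimal one-dimensional sketch: the IMU acceleration drives the prediction step, and an external velocity measurement (mocap in the baseline, a learned estimate in the proposed approach) drives the update. All values below are hypothetical; the paper's filter estimates full 3D states from five IMUs and joint encoders.

```python
def kf_predict(x, P, accel, dt, q):
    """Propagate the velocity estimate with a measured acceleration."""
    x = x + accel * dt        # integrate acceleration (process model)
    P = P + q                 # process noise inflates uncertainty
    return x, P

def kf_update(x, P, z, r):
    """Correct the estimate with a direct velocity measurement z."""
    K = P / (P + r)           # Kalman gain
    x = x + K * (z - x)       # weight the measurement residual
    P = (1.0 - K) * P
    return x, P

# Simulated run: constant 0.2 m/s^2 acceleration, slightly biased
# velocity readings standing in for the learned measurement.
x, P, true_v, dt = 0.0, 1.0, 0.0, 0.01
for _ in range(50):
    true_v += 0.2 * dt
    x, P = kf_predict(x, P, 0.2, dt, q=1e-5)
    x, P = kf_update(x, P, true_v + 0.001, r=0.01)
```

The same predict/update structure carries over to the matrix-valued case, with the gain computed from the innovation covariance.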
(This article belongs to the Special Issue Sensing and Wireless Communications)

18 pages, 23641 KiB  
Article
Dynamic Fall Recovery Control for Legged Robots via Reinforcement Learning
by Sicen Li, Yiming Pang, Panju Bai, Shihao Hu, Liquan Wang and Gang Wang
Biomimetics 2024, 9(4), 193; https://doi.org/10.3390/biomimetics9040193 - 22 Mar 2024
Cited by 2 | Viewed by 3126
Abstract
Falling is inevitable for legged robots deployed in unstructured and unpredictable real-world scenarios, such as uneven terrain in the wild. Therefore, to recover dynamically from a fall without unintended termination of locomotion, the robot must possess the complex motor skills required for recovery maneuvers. This is exceptionally challenging for existing methods, since it involves multiple unspecified internal and external contacts. To go beyond these limitations, we introduce a novel deep reinforcement learning framework to train a learning-based state estimator and a proprioceptive-history policy for dynamic fall recovery under external disturbances. The proposed framework applies to different fall cases indoors and outdoors. Furthermore, we show that the learned fall recovery policies are hardware-feasible and can be implemented on real robots. The approach is evaluated in extensive trials with a quadruped robot, demonstrating its effectiveness in recovering the robot after falls on flat surfaces and grassland.

30 pages, 5705 KiB  
Article
Length Modelling of Spiral Superficial Soft Strain Sensors Using Geodesics and Covering Spaces
by Abdullah Al-Azzawi, Peter Stadler, He Kong and Salah Sukkarieh
Robotics 2023, 12(6), 164; https://doi.org/10.3390/robotics12060164 - 1 Dec 2023
Viewed by 2616
Abstract
Piecewise constant curvature soft actuators can generate various types of movement: extension, bending, rotation, twist, or a combination of these. Proprioceptive sensing provides the ability to track their movement or estimate their state in 3D space, and several proprioceptive sensing solutions have been developed using soft strain sensors. However, current mathematical models can only describe the length of soft sensors attached to actuators undergoing extension, bending, and rotation; furthermore, they are limited to straight sensors and cannot model spiral sensors. In this study, for both spiral and straight sensors, we use concepts from geodesics and covering spaces to present a mathematical length model that includes twist. The study is limited to piecewise constant curvature actuators and demonstrates, among other things, the advantages of our model and its accuracy when including and excluding twist. We verify the model by comparison against a finite element analysis involving multiple simulation scenarios designed specifically for the verification process, validate the theoretical results against previously published experimental results, and then discuss the limitations and possible applications of our model using examples from the literature.
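The covering-space idea can be illustrated with its simplest instance: a helical path on a cylinder unrolls, in the planar covering space, to a straight line, so its length follows from the Pythagorean theorem. A minimal sketch with hypothetical dimensions (the paper's model additionally handles bending, rotation, and twist of the underlying actuator):

```python
import math

def spiral_length(radius, height, turns):
    """Length of a helical path on a cylinder of the given radius.

    Unrolling the cylinder into its planar covering space maps the
    helix to a straight line whose horizontal run is the unwrapped
    circumference times the number of turns.
    """
    unrolled_width = 2.0 * math.pi * radius * turns
    return math.hypot(unrolled_width, height)

# A zero-turn "spiral" degenerates to a straight axial sensor:
straight = spiral_length(0.01, 0.1, 0)
one_turn = spiral_length(0.01, 0.1, 1)
```

A straight sensor is recovered as the zero-turn special case, which is why a single model can cover both sensor layouts.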
(This article belongs to the Special Issue Editorial Board Members' Collection Series: "Soft Robotics")

13 pages, 1737 KiB  
Article
Experimental Evaluation of a Hybrid Sensory Feedback System for Haptic and Kinaesthetic Perception in Hand Prostheses
by Emre Sariyildiz, Fergus Hanss, Hao Zhou, Manish Sreenivasa, Lucy Armitage, Rahim Mutlu and Gursel Alici
Sensors 2023, 23(20), 8492; https://doi.org/10.3390/s23208492 - 16 Oct 2023
Cited by 4 | Viewed by 4931
Abstract
This study proposes a new hybrid multi-modal sensory feedback system for prosthetic hands that can provide not only haptic and proprioceptive feedback but also facilitate object recognition without the aid of vision. Modality-matched haptic perception was provided by a mechanotactile feedback system that proportionally applies the gripping force through a force controller. A vibrotactile feedback system was also employed to distinguish four discrete grip positions of the prosthetic hand. The system was evaluated with 32 participants in three experiments: (i) haptic feedback, (ii) proprioceptive feedback, and (iii) object recognition with hybrid haptic-proprioceptive feedback. The haptic feedback experiment showed that participants' ability to accurately perceive the applied force depended on its magnitude: as the feedback force increased, participants tended to underestimate the force levels, and force-estimation accuracy decreased. Of the three arm locations (forearm volar, forearm ventral, and bicep) and two muscle states (relaxed and tensed) tested, the highest accuracy was obtained for the bicep location in the relaxed state. The proprioceptive feedback experiment showed that participants could identify four different grip positions of the hand prosthesis (open hand, wide grip, narrow grip, and closed hand) without a single case of misidentification. In experiment 3, participants identified objects of different shapes and stiffnesses with a high overall success rate of 90.5% across all combinations of location and muscle state; feedback location and muscle state had no significant effect on object recognition accuracy. Overall, our results indicate that the hybrid feedback system may be a very effective way to enrich a prosthetic hand user's experience of the stiffness and shape of commonly manipulated objects.
(This article belongs to the Special Issue Sensor Technology for Improving Human Movements and Postures: Part II)

17 pages, 16909 KiB  
Article
Adaptive Locomotion Learning for Quadruped Robots by Combining DRL with a Cosine Oscillator Based Rhythm Controller
by Xiaoping Zhang, Yitong Wu, Huijiang Wang, Fumiya Iida and Li Wang
Appl. Sci. 2023, 13(19), 11045; https://doi.org/10.3390/app131911045 - 7 Oct 2023
Cited by 3 | Viewed by 3119
Abstract
Animals have evolved to adapt to complex and uncertain environments, acquiring locomotion skills for diverse surroundings. To endow a robot with animal-like locomotion ability, in this paper we propose a learning algorithm for quadruped robots based on deep reinforcement learning (DRL) and a rhythm controller built on cosine oscillators. Two cosine oscillators are used per leg, one at the hip joint and one at the knee joint, so eight oscillators together form the controller that generates the quadruped robot's locomotion rhythm. The coupling between the cosine oscillators is realized through phase differences, which is simpler and easier to implement than modeling the complex coupling relationships between different joints. DRL is used to learn the controller parameters and, in the reward function design, we address the challenge of terrain adaptation without relying on complex camera-based vision processing, using proprioceptive information instead: a state estimator is introduced to obtain the robot's posture and, ultimately, the foot-end coordinates. Experiments are carried out in CoppeliaSim under flat, uphill, and downhill conditions. The results show that the robot successfully accomplishes all of these skills while, with the designed reward function, keeping its pitch, yaw, and roll angles very small, meaning the robot remains relatively stable during walking. When the robot is transferred to a new, previously unencountered scene, it can still fulfill the task, demonstrating the effectiveness and robustness of the proposed method.
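Phase-coupled cosine oscillators of this kind can be sketched in a few lines. The trot phase assignment, knee lag, amplitudes, and frequency below are illustrative placeholders, not the parameters the paper's DRL stage actually learns:

```python
import math

# One hip and one knee oscillator per leg (8 total), coupled purely
# through fixed phase differences. Trot: diagonal legs move in phase.
LEG_PHASE = [0.0, math.pi, math.pi, 0.0]   # LF, RF, LH, RH
KNEE_LAG = math.pi / 2                     # knee trails its hip

def joint_targets(t, freq=2.0, hip_amp=0.4, knee_amp=0.6):
    """Return (hip, knee) joint-angle targets for all four legs at time t."""
    targets = []
    for phase in LEG_PHASE:
        hip = hip_amp * math.cos(2 * math.pi * freq * t + phase)
        knee = knee_amp * math.cos(2 * math.pi * freq * t + phase + KNEE_LAG)
        targets.append((hip, knee))
    return targets
```

Because the coupling lives entirely in the phase constants, changing gait amounts to changing `LEG_PHASE`, which is what makes this controller structure easy to parameterize for learning.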
(This article belongs to the Special Issue Intelligent Control and Robotics II)

13 pages, 1589 KiB  
Article
Dataset with Tactile and Kinesthetic Information from a Human Forearm and Its Application to Deep Learning
by Francisco Pastor, Da-hui Lin-Yang, Jesús M. Gómez-de-Gabriel and Alfonso J. García-Cerezo
Sensors 2022, 22(22), 8752; https://doi.org/10.3390/s22228752 - 12 Nov 2022
Cited by 3 | Viewed by 2895
Abstract
There are physical Human–Robot Interaction (pHRI) applications in which the robot has to grab the human body, such as rescue or assistive robotics. Being able to precisely estimate the grasping location when grabbing a human limb is crucial for manipulating the human safely. Computer vision methods provide pre-grasp information, but with strong constraints imposed by field environments, and force-based compliant control after grasping only limits the amount of applied strength. On the other hand, valuable tactile and proprioceptive information can be obtained from the pHRI gripper and used to better characterize the human and the contact state between the human and the robot. This paper presents a novel dataset of tactile and kinesthetic data obtained from a robot gripper that grabs a human forearm. The dataset is collected with a three-fingered gripper that has two underactuated fingers and a fixed finger with a high-resolution tactile sensor. A palpation procedure is performed to record the shape of the forearm and to recognize the bones and muscles in different sections. Moreover, an application of the dataset is included: a fusion approach estimates the actual grasped forearm section from both kinesthetic and tactile information using regression deep-learning neural networks. First, the tactile and kinesthetic data are trained separately with Long Short-Term Memory (LSTM) networks, since the data are sequential. Then, the outputs are fed to a fusion neural network to enhance the estimation. The experiments conducted show good results when training both sources separately, with superior performance when the fusion approach is used.

16 pages, 7877 KiB  
Article
Infrastructure-Aided Localization and State Estimation for Autonomous Mobile Robots
by Daniel Flögel, Neel Pratik Bhatt and Ehsan Hashemi
Robotics 2022, 11(4), 82; https://doi.org/10.3390/robotics11040082 - 18 Aug 2022
Cited by 12 | Viewed by 3140
Abstract
A slip-aware localization framework is proposed for mobile robots experiencing wheel slip in dynamic environments. The framework fuses infrastructure-aided visual tracking data (via fisheye lenses) with proprioceptive sensory data from a skid-steer mobile robot to enhance the accuracy and reduce the variance of the estimated states. The framework comprises two threads: a visual thread that detects and tracks the robot in the stereo image through computationally efficient 3D point cloud generation over a region of interest, and an ego-motion thread that uses slip-aware odometry to estimate the robot pose with a motion model accounting for wheel slip. Covariance intersection is used to fuse the pose prediction (from proprioceptive data) with the visual thread so that the updated estimate remains consistent. As confirmed by experiments on a skid-steer mobile robot, the designed framework addresses state estimation challenges for indoor/outdoor autonomous mobile robots that experience high slip, uneven torque distribution at each wheel (by the motion planner), or occlusion when observed by an infrastructure-mounted camera. The proposed system is real-time capable and scalable to multiple robots and multiple environmental cameras.
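Covariance intersection fuses two estimates whose cross-correlation is unknown by mixing their information matrices with a weight ω, chosen here to minimize the fused trace. A minimal 2-D sketch with hypothetical values (the framework itself fuses full pose estimates, and the coarse grid search over ω is only illustrative):

```python
def inv2(m):
    """Closed-form inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_add(p, q): return [[p[i][j] + q[i][j] for j in range(2)] for i in range(2)]
def mat_scale(p, s): return [[p[i][j] * s for j in range(2)] for i in range(2)]
def mat_vec(m, v): return [m[0][0] * v[0] + m[0][1] * v[1],
                           m[1][0] * v[0] + m[1][1] * v[1]]

def ci_fuse(xa, Pa, xb, Pb, omega):
    """Covariance intersection of (xa, Pa) and (xb, Pb) with weight omega."""
    info = mat_add(mat_scale(inv2(Pa), omega), mat_scale(inv2(Pb), 1.0 - omega))
    P = inv2(info)
    ya = mat_vec(mat_scale(inv2(Pa), omega), xa)
    yb = mat_vec(mat_scale(inv2(Pb), 1.0 - omega), xb)
    x = mat_vec(P, [ya[0] + yb[0], ya[1] + yb[1]])
    return x, P

# Odometry is confident in x, vision in y (hypothetical numbers);
# pick omega minimizing the fused trace via a coarse grid search.
xa, Pa = [1.0, 0.0], [[0.5, 0.0], [0.0, 2.0]]
xb, Pb = [1.2, 0.1], [[2.0, 0.0], [0.0, 0.5]]
x_fused, P_fused = min((ci_fuse(xa, Pa, xb, Pb, w / 10) for w in range(1, 10)),
                       key=lambda r: r[1][0][0] + r[1][1][1])
```

Unlike a naive Kalman update, the fused covariance stays consistent for any ω even when the two sources share unmodeled correlation, which is why CI suits fusing an odometry thread with an external camera thread.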
(This article belongs to the Special Issue Advances in Industrial Robotics and Intelligent Systems)

16 pages, 2275 KiB  
Article
A Factor-Graph-Based Approach to Vehicle Sideslip Angle Estimation
by Antonio Leanza, Giulio Reina and José-Luis Blanco-Claraco
Sensors 2021, 21(16), 5409; https://doi.org/10.3390/s21165409 - 10 Aug 2021
Cited by 8 | Viewed by 3656
Abstract
Sideslip angle is an important variable for understanding and monitoring vehicle dynamics, but there is currently no inexpensive method for its direct measurement; it is therefore typically estimated from onboard proprioceptive sensors using filtering methods from the Kalman filter family. As a novel alternative, this work models the problem directly as a graphical model (a factor graph), which can then be optimized with a variety of methods, such as whole-dataset batch optimization for offline processing or fixed-lag smoothing for online operation. Experimental results on real vehicle datasets validate the proposal, showing good agreement between estimated and actual sideslip angle and performance similar to state-of-the-art methods, but with greater potential for future extensions thanks to the more flexible mathematical framework. An open-source implementation of the proposed framework is available online.
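The factor-graph view can be illustrated with a toy batch problem: unary factors tie each sideslip state to a noisy observation, binary factors penalize jumps between consecutive states, and the summed squared residuals are minimized. All values, weights, and the plain gradient-descent solver below are hypothetical; the paper builds its factors from vehicle dynamics and uses dedicated batch/fixed-lag smoothers:

```python
# Toy factor graph over sideslip states beta_0..beta_4.
meas = [0.02, 0.03, 0.025, 0.04, 0.035]   # hypothetical observations (rad)
beta = [0.0] * len(meas)                  # states to optimize
w_meas, w_smooth, lr = 1.0, 2.0, 0.02

for _ in range(2000):
    grad = [0.0] * len(beta)
    for t in range(len(beta)):            # unary measurement factors
        grad[t] += 2 * w_meas * (beta[t] - meas[t])
    for t in range(len(beta) - 1):        # binary smoothness factors
        d = beta[t + 1] - beta[t]
        grad[t] -= 2 * w_smooth * d
        grad[t + 1] += 2 * w_smooth * d
    beta = [b - lr * g for b, g in zip(beta, grad)]
```

The optimum trades measurement fidelity against smoothness, so the estimated trajectory has less spread than the raw observations; fixed-lag smoothing simply restricts this optimization to a sliding window of recent states.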

10 pages, 3055 KiB  
Communication
Sensuator: A Hybrid Sensor–Actuator Approach to Soft Robotic Proprioception Using Recurrent Neural Networks
by Pornthep Preechayasomboon and Eric Rombokas
Actuators 2021, 10(2), 30; https://doi.org/10.3390/act10020030 - 7 Feb 2021
Cited by 18 | Viewed by 6365
Abstract
Soft robotic actuators are now being used in practical applications; however, they are often limited to open-loop control that relies on the inherent compliance of the actuator. Achieving human-like manipulation and grasping with soft robotic actuators requires at least some form of sensing, which often comes at the cost of complex fabrication and purposefully built sensor structures. In this paper, we utilize the actuating fluid itself as a sensing medium to achieve high-fidelity proprioception in a soft actuator. As our sensors are somewhat unstructured, their readings are difficult to interpret using linear models. We therefore present a proof of concept of a method for deriving the pose of the soft actuator using recurrent neural networks. We present the experimental setup and our learned state estimator to show that our method is viable for achieving proprioception and is also robust to common sensor failures.
(This article belongs to the Special Issue Smart and Soft Self-Sensing Actuators)

15 pages, 4040 KiB  
Article
Terrain Estimation for Planetary Exploration Robots
by Mauro Dimastrogiovanni, Florian Cordes and Giulio Reina
Appl. Sci. 2020, 10(17), 6044; https://doi.org/10.3390/app10176044 - 31 Aug 2020
Cited by 16 | Viewed by 4654
Abstract
A planetary exploration rover's ability to detect the type of supporting surface is critical to the successful accomplishment of the planned task, especially for long-range and long-duration missions. This paper presents a general approach to endowing a robot with the ability to sense the terrain being traversed, relying on the estimation of motion states and physical variables pertaining to the interaction of the vehicle with the environment. First, a comprehensive proprioceptive feature set is investigated to evaluate its informative content and ability to capture terrain properties. Then, a terrain classifier grounded in a Support Vector Machine (SVM) is developed that uses an optimal proprioceptive feature set. Following the same rationale, episodes of high slippage can also be treated as a particular terrain type and detected by a dedicated classifier. The proposed approach is tested and demonstrated in the field using the SherpaTT rover, owned by DFKI (the German Research Center for Artificial Intelligence), which uses an active suspension system to adapt to terrain unevenness.
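A hedged sketch of the kind of proprioceptive features such a classifier might consume: window statistics over a drive-torque signal. The signal values, window length, and feature choice here are hypothetical; the paper evaluates a comprehensive feature set and selects an optimal subset for the SVM.

```python
import math

def window_features(signal):
    """Mean, variance, and RMS of one window of a proprioceptive signal."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((s - mean) ** 2 for s in signal) / n
    rms = math.sqrt(sum(s * s for s in signal) / n)
    return {"mean": mean, "var": var, "rms": rms}

# Hypothetical torque windows: steady rolling vs. choppy interaction.
smooth_torque = [1.00, 1.02, 0.98, 1.01, 0.99]
rough_torque = [0.6, 1.5, 0.7, 1.4, 0.8]
f_smooth = window_features(smooth_torque)
f_rough = window_features(rough_torque)
```

Features like the window variance separate the two regimes even when the mean torque is identical, which is the property a terrain (or slippage) classifier exploits.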
(This article belongs to the Special Issue Modelling and Control of Mechatronic and Robotic Systems)

10 pages, 7777 KiB  
Article
Simpler Learning of Robotic Manipulation of Clothing by Utilizing DIY Smart Textile Technology
by Andreas Verleysen, Thomas Holvoet, Remko Proesmans, Cedric Den Haese and Francis wyffels
Appl. Sci. 2020, 10(12), 4088; https://doi.org/10.3390/app10124088 - 13 Jun 2020
Cited by 6 | Viewed by 3851
Abstract
Deformable objects such as ropes, wires, and clothing are omnipresent in society and industry but remain under-researched in robotics, owing to the infinite number of possible state configurations caused by their deformations. Engineered approaches try to cope with this by implementing highly complex operations to estimate the state of the deformable object. This complexity can be circumvented by learning-based approaches, such as reinforcement learning, which can deal with the intrinsically high-dimensional state space of deformable objects. However, the reward function in reinforcement learning needs to measure the state configuration of the highly deformable object, and vision-based reward functions are difficult to implement given the high dimensionality of the state and the complex dynamic behavior. In this work, we propose looking beyond vision and incorporating other modalities that can be extracted from deformable objects. By integrating tactile sensor cells into a textile piece, we gain proprioceptive capabilities that are valuable because they provide a reward function to a reinforcement learning agent. We demonstrate on a low-cost dual robotic arm setup that a physical agent can learn, on a single CPU core, to fold a rectangular patch of textile in the real world based on a reward function learned from tactile information.
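To make the idea of a tactile reward concrete, here is a hand-coded toy: a grid of tactile cells sewn into the patch, with a fold scored by mirrored contact between the two halves. The grid size, threshold, and mirrored-contact heuristic are illustrative assumptions; the paper learns its reward function from the tactile data rather than hand-coding one.

```python
def fold_reward(grid, threshold=0.5):
    """Fraction of mirrored cell pairs that both register contact.

    grid: list of rows of tactile pressures in [0, 1]. A fold along the
    vertical midline should press mirrored cells together.
    """
    score, count = 0.0, 0
    for row in grid:
        half = len(row) // 2
        for j in range(half):
            left, right = row[j], row[len(row) - 1 - j]  # mirrored pair
            count += 1
            if left > threshold and right > threshold:
                score += 1.0
    return score / count

flat = [[0.9, 0.1, 0.1, 0.1]] * 4      # contact on one side only
folded = [[0.9, 0.2, 0.3, 0.8]] * 4    # mirrored contact after a fold
```

A reward signal of this shape is dense and vision-free, which is what makes tactile sensing attractive for learning cloth manipulation in the real world.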
(This article belongs to the Special Issue Artificial Intelligence for Smart Systems)

24 pages, 4599 KiB  
Article
Robust Stereo Visual Inertial Navigation System Based on Multi-Stage Outlier Removal in Dynamic Environments
by Dinh Van Nam and Kim Gon-Woo
Sensors 2020, 20(10), 2922; https://doi.org/10.3390/s20102922 - 21 May 2020
Cited by 31 | Viewed by 5901
Abstract
Robotic mapping and odometry are the primary competencies of a navigation system for an autonomous mobile robot. However, the robot's state estimate typically drifts over time, and its accuracy degrades critically when only proprioceptive sensors are used in indoor environments. Moreover, the accuracy of ego-motion estimation is severely diminished in dynamic environments because of the influence of dynamic objects and light reflections. To this end, multi-sensor fusion is employed to bound the navigation error by exploiting the complementary nature of the Inertial Measurement Unit (IMU) and the bearing information of the camera. In this paper, we propose a robust tightly-coupled Visual-Inertial Navigation System (VINS) based on multi-stage outlier removal within the Multi-State Constraint Kalman Filter (MSCKF) framework. First, an efficient and lightweight VINS algorithm is developed for the robust state estimation of a mobile robot using a stereo camera and an IMU in dynamic indoor environments. Furthermore, we propose strategies to cope with the impact of dynamic objects through multi-stage outlier removal based on feedback from the estimated states. The proposed VINS is implemented and validated on public datasets; in addition, we develop a sensor system and evaluate the algorithm in dynamic indoor environments under different scenarios. The experimental results show better robustness and accuracy with low computational complexity compared to state-of-the-art approaches.
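One common form of outlier removal in MSCKF-style filters is chi-square gating on the innovation: features whose reprojection residual is too improbable under the predicted covariance (e.g., points on a moving person) are rejected before the update. A minimal sketch with hypothetical residuals and a diagonal innovation covariance; whether this matches the paper's specific stages is an assumption:

```python
# 95% chi-square quantile for 2 degrees of freedom (a 2-D pixel residual).
CHI2_95_2DOF = 5.991

def mahalanobis2(residual, S_diag):
    """Squared Mahalanobis distance for a diagonal innovation covariance."""
    return sum(r * r / s for r, s in zip(residual, S_diag))

def gate(residuals, S_diag):
    """Keep only residuals passing the chi-square consistency test."""
    return [r for r in residuals if mahalanobis2(r, S_diag) < CHI2_95_2DOF]

S = [1.0, 1.0]                               # innovation variances (px^2)
obs = [(0.5, -0.3), (0.1, 0.2), (4.0, 3.0)]  # last one: a dynamic object
inliers = gate(obs, S)
```

Multi-stage schemes repeat this kind of test at different points in the pipeline, using feedback from the current state estimate to tighten the gate.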

15 pages, 481 KiB  
Article
Multiple Vehicle Cooperative Localization with Spatial Registration Based on a Probability Hypothesis Density Filter
by Feihu Zhang, Christian Buckl and Alois Knoll
Sensors 2014, 14(1), 995-1009; https://doi.org/10.3390/s140100995 - 8 Jan 2014
Cited by 18 | Viewed by 8668
Abstract
This paper studies the problem of multiple-vehicle cooperative localization with spatial registration in the formulation of the probability hypothesis density (PHD) filter. Assuming the vehicles are equipped with proprioceptive and exteroceptive sensors (with biases) to cooperatively localize their positions, a simultaneous solution for joint spatial registration and state estimation is proposed, relying on a sequential Monte Carlo implementation of PHD filtering. In contrast to other methods, the concept of multiple-vehicle cooperative localization with spatial registration is here first formulated under Random Finite Set theory. The proposed solution also addresses the practical challenges of multiple-vehicle cooperative localization, e.g., limited communication bandwidth and data association uncertainty. Simulation results demonstrate its reliability and feasibility in large-scale environments.
(This article belongs to the Section Physical Sensors)