

Sensors for Robots II

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (30 November 2023) | Viewed by 17401

Special Issue Editors


Prof. Dr. Xin Zhao
Guest Editor
College of Computer and Control Engineering, Nankai University, Tianjin 300071, China
Interests: micro/nano robotic manipulation; micro/nano sensor fabrication; microscopy vision sensing

Dr. Mingzhu Sun
Guest Editor
College of Computer and Control Engineering, Nankai University, Tianjin 300071, China
Interests: micromanipulation robots; machine vision; biological pattern formation modeling and simulation

Dr. Qili Zhao
Guest Editor
College of Computer and Control Engineering, Nankai University, Tianjin 300071, China
Interests: robotic patch clamp for brain science; automated cell manipulation and measurement robot; ultra-micro injection system; microscopic vision

Special Issue Information

Dear Colleagues,

Nowadays, robots play significant roles in industry, life sciences, education, medicine, social services, the military, etc. As key components of robots, sensors are the basis of a robot's self-adaptive and automatic control abilities. Therefore, novel sensing techniques and advanced sensor applications for robots have received increasing interest worldwide. This Special Issue aims to showcase reviews or rigorous original papers describing current and expected challenges, along with potential solutions, for robotic sensing.

Potential topics include, but are not limited to:

  1. Novel sensing techniques for robots
  • novel force, vision, tactile, and auditory sensors
  • novel sensor design, fabrication, and calibration methods
  • sensor information integration and fusion
  • improvement and optimization of sensor information processing algorithms
  • wireless sensor networks
  • multi-sensors, reconfigurable sensors, and cyber–physical sensing systems
  • micro/nano sensor theory, design, and development
  2. Sensors for novel robotic applications
  • vision sensors and vision algorithms for moving object tracking and robot navigation
  • force, vision, and tactile sensors for precise manipulation, collision prediction, etc.
  • multi-sensing intelligent systems
  • sensor systems for human–robot interactions
  • sensors for robotics and automation at the micro/nano scales
  • applications of robot sensing in interdisciplinary areas, including biology, material sciences, physical sciences, etc.

Prof. Dr. Xin Zhao
Dr. Mingzhu Sun
Dr. Qili Zhao
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (11 papers)


Editorial

Jump to: Research

5 pages, 182 KiB  
Editorial
Sensors for Robots
by Xin Zhao, Mingzhu Sun and Qili Zhao
Sensors 2024, 24(6), 1854; https://doi.org/10.3390/s24061854 - 14 Mar 2024
Viewed by 491
Abstract
Currently, robots are playing significant roles in industry [...] Full article
(This article belongs to the Special Issue Sensors for Robots II)

Research

Jump to: Editorial

14 pages, 3979 KiB  
Article
Transparency-Aware Segmentation of Glass Objects to Train RGB-Based Pose Estimators
by Maira Weidenbach, Tim Laue and Udo Frese
Sensors 2024, 24(2), 432; https://doi.org/10.3390/s24020432 - 10 Jan 2024
Viewed by 578
Abstract
Robotic manipulation requires object pose knowledge for the objects of interest. In order to perform typical household chores, a robot needs to be able to estimate 6D poses for objects such as water glasses or salad bowls. This is especially difficult for glass objects, as for these, depth data are mostly disturbed, and in RGB images, occluded objects are still visible. Thus, in this paper, we propose to redefine the ground-truth for training RGB-based pose estimators in two ways: (a) we apply a transparency-aware multisegmentation, in which an image pixel can belong to more than one object, and (b) we use transparency-aware bounding boxes, which always enclose whole objects, even if parts of an object are formally occluded by another object. The latter approach ensures that the size and scale of an object remain more consistent across different images. We train our pose estimator, which was originally designed for opaque objects, with three different ground-truth types on the ClearPose dataset. Just by changing the training data to our transparency-aware segmentation, with no additional glass-specific feature changes in the estimator, the ADD-S AUC value increases by 4.3%. Such a multisegmentation can be created for every dataset that provides a 3D model of the object and its ground-truth pose. Full article
(This article belongs to the Special Issue Sensors for Robots II)
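
As a rough illustration of the two ground-truth changes described in this abstract, the sketch below builds a multi-label segmentation (a pixel may belong to more than one object) and amodal bounding boxes from per-object full masks. The function names and toy masks are assumptions for the sketch, not the authors' code or the ClearPose tooling.

```python
# Hypothetical sketch: transparency-aware multi-label segmentation and amodal
# bounding boxes built from per-object full masks (visible + occluded regions),
# as could be rendered from a 3D model and its ground-truth pose.
import numpy as np

def multisegmentation(masks):
    """Stack per-object masks into an HxWxN tensor; a pixel may be True
    for more than one object (e.g., a glass seen through another glass)."""
    return np.stack(masks, axis=-1)

def amodal_bbox(mask):
    """Bounding box enclosing the whole object, even its occluded parts,
    so object size and scale stay consistent across images."""
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()  # x_min, y_min, x_max, y_max

# Toy example with two overlapping synthetic masks
a = np.zeros((60, 80), bool); a[10:40, 10:50] = True
b = np.zeros((60, 80), bool); b[25:55, 30:70] = True
seg = multisegmentation([a, b])          # shape (60, 80, 2)
print(seg[30, 40])                       # this pixel belongs to both objects
print(amodal_bbox(a), amodal_bbox(b))
```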

25 pages, 6279 KiB  
Article
Optimizing Appearance-Based Localization with Catadioptric Cameras: Small-Footprint Models for Real-Time Inference on Edge Devices
by Marta Rostkowska and Piotr Skrzypczyński
Sensors 2023, 23(14), 6485; https://doi.org/10.3390/s23146485 - 18 Jul 2023
Cited by 1 | Viewed by 779
Abstract
This paper considers the task of appearance-based localization: visual place recognition from omnidirectional images obtained from catadioptric cameras. The focus is on designing an efficient neural network architecture that accurately and reliably recognizes indoor scenes on distorted images from a catadioptric camera, even in self-similar environments with few discernible features. As the target application is the global localization of a low-cost service mobile robot, the proposed solutions are optimized toward being small-footprint models that provide real-time inference on edge devices, such as Nvidia Jetson. We compare several design choices for the neural network-based architecture of the localization system and then demonstrate that the best results are achieved with embeddings (global descriptors) yielded by exploiting transfer learning and fine tuning on a limited number of catadioptric images. We test our solutions on two small-scale datasets collected using different catadioptric cameras in the same office building. Next, we compare the performance of our system to state-of-the-art visual place recognition systems on the publicly available COLD Freiburg and Saarbrücken datasets that contain images collected under different lighting conditions. Our system compares favourably to the competitors both in terms of the accuracy of place recognition and the inference time, providing a cost- and energy-efficient means of appearance-based localization for an indoor service robot. Full article
(This article belongs to the Special Issue Sensors for Robots II)
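
The retrieval step behind appearance-based localization can be sketched as nearest-neighbour search over global descriptors. The snippet below assumes the embeddings have already been computed by a network (not shown); `db_embeddings` and `db_places` are hypothetical placeholders, not the authors' system.

```python
# Minimal retrieval sketch: localize a query image by cosine similarity between
# its global descriptor and a database of reference descriptors.
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def localize(query_embedding, db_embeddings, db_places):
    """Return the place label of the most similar reference embedding."""
    q = l2_normalize(query_embedding)
    db = l2_normalize(db_embeddings)
    sims = db @ q                     # cosine similarity to every reference image
    best = int(np.argmax(sims))
    return db_places[best], float(sims[best])

# Toy usage with random 256-D descriptors for three places
rng = np.random.default_rng(0)
db = rng.normal(size=(3, 256))
place, score = localize(db[1] + 0.01 * rng.normal(size=256), db,
                        ["corridor", "office", "kitchen"])
print(place, round(score, 3))
```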

18 pages, 4315 KiB  
Article
An End-to-End Dynamic Posture Perception Method for Soft Actuators Based on Distributed Thin Flexible Porous Piezoresistive Sensors
by Jing Shu, Junming Wang, Kenneth Chik-Chi Cheng, Ling-Fung Yeung, Zheng Li and Raymond Kai-yu Tong
Sensors 2023, 23(13), 6189; https://doi.org/10.3390/s23136189 - 6 Jul 2023
Cited by 3 | Viewed by 1352
Abstract
This paper proposes a method for accurate 3D posture sensing of soft actuators, which could be applied to the closed-loop control of soft robots. To achieve this, the method employs an array of miniaturized sponge resistive materials along the soft actuator and uses long short-term memory (LSTM) neural networks to solve the end-to-end 3D posture of the soft actuator. The method takes into account the hysteresis of the soft robot and non-linear sensing signals from the flexible bending sensors. The proposed approach uses a flexible bending sensor made from a thin layer of conductive sponge material designed for posture sensing. The LSTM network is used to model the posture of the soft actuator. The effectiveness of the method has been demonstrated on a finger-sized 3-degree-of-freedom (DOF) pneumatic bellow-shaped actuator, with nine flexible sponge resistive sensors placed on the soft actuator’s outer surface. The sensor-characterizing results show that the maximum bending torque of the sensor installed on the actuator is 4.7 Nm, which has an insignificant impact on the actuator motion based on the working space test of the actuator. Moreover, the sensors exhibit a relatively low error rate in predicting the actuator tip position, with error percentages of 0.37%, 2.38%, and 1.58% along the x-, y-, and z-axes, respectively. This work is expected to contribute to the advancement of soft robot dynamic posture perception by using thin sponge sensors and LSTM or other machine learning methods for control. Full article
(This article belongs to the Special Issue Sensors for Robots II)
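
The general idea of an end-to-end LSTM posture model can be sketched as a sequence regressor from the nine sensor channels to the 3D tip position. Layer sizes, window length, and the PyTorch implementation below are illustrative assumptions, not the authors' architecture.

```python
# Sketch of an LSTM regressor mapping a window of flexible-sensor readings
# (9 channels, matching the sensor array in the paper) to the actuator tip
# position (x, y, z).
import torch
import torch.nn as nn

class PostureLSTM(nn.Module):
    def __init__(self, n_sensors=9, hidden=64, n_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, n_layers, batch_first=True)
        self.head = nn.Linear(hidden, 3)   # predict tip position (x, y, z)

    def forward(self, x):                  # x: (batch, time, n_sensors)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])       # regression from the last time step

model = PostureLSTM()
window = torch.randn(8, 50, 9)             # 8 sequences, 50 samples, 9 sensors
print(model(window).shape)                 # torch.Size([8, 3])
```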

11 pages, 3106 KiB  
Article
A Novel Active Device for Shoulder Rotation Based on Force Control
by Isabel M. Alguacil-Diego, Alicia Cuesta-Gómez, David Pont, Juan Carrillo, Paul Espinosa, Miguel A. Sánchez-Urán and Manuel Ferre
Sensors 2023, 23(13), 6158; https://doi.org/10.3390/s23136158 - 5 Jul 2023
Cited by 1 | Viewed by 1028
Abstract
This article describes a one-degree-of-freedom haptic device that can be applied to perform three different exercises for shoulder rehabilitation. The device is based on a force control architecture and an adaptive speed PI controller. It is a piece of portable equipment that is easy for any patient to use, and it was optimized for rehabilitating external rotation movements of the shoulder in patients in whom these were limited by muscle–skeletal injuries. The sample consisted of 12 shoulder rehabilitation sessions with patients presenting different shoulder pathologies that limited their range of shoulder mobility. The mean and standard deviations of the external rotation of the shoulder were 42.91 ± 4.53° for the pre-intervention measurements and 53.88 ± 4.26° for the post-intervention measurements. In addition, patients reported high levels of acceptance of the device. Scores on the SUS questionnaire ranged from 65 to 97.5, with an average score of 82.70 ± 9.21, indicating a high degree of acceptance. These preliminary results suggest that the use of this device and the incorporation of such equipment into rehabilitation services could be of great help for patients in their rehabilitation process and for physiotherapists in applying their therapies. Full article
(This article belongs to the Special Issue Sensors for Robots II)
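
A force control architecture wrapped around a speed PI loop, as mentioned in this abstract, can be sketched generically: the force error sets a speed reference, and a PI controller with error-dependent gains tracks it. The admittance constant and the gain-adaptation rule below are placeholders, not the device's actual control law.

```python
# Illustrative force-over-speed control sketch with a mildly adaptive PI loop.
class AdaptiveSpeedPI:
    def __init__(self, kp=0.8, ki=0.3, dt=0.01):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.integral = 0.0

    def step(self, speed_ref, speed_meas):
        err = speed_ref - speed_meas
        gain_scale = 1.0 + min(abs(err), 1.0)      # toy adaptation with error size
        self.integral += err * self.dt
        return gain_scale * (self.kp * err + self.ki * self.integral)

def force_to_speed_ref(force_ref, force_meas, admittance=0.05):
    """Outer force loop: convert force error into a speed reference."""
    return admittance * (force_ref - force_meas)

pi = AdaptiveSpeedPI()
speed_ref = force_to_speed_ref(force_ref=10.0, force_meas=7.5)
command = pi.step(speed_ref, speed_meas=0.0)
print(speed_ref, command)
```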

16 pages, 4794 KiB  
Article
Robotic Intracellular Pressure Measurement Using Micropipette Electrode
by Minghui Li, Jinyu Qiu, Ruimin Li, Yuzhu Liu, Yue Du, Yaowei Liu, Mingzhu Sun, Xin Zhao and Qili Zhao
Sensors 2023, 23(10), 4973; https://doi.org/10.3390/s23104973 - 22 May 2023
Cited by 1 | Viewed by 1730
Abstract
Intracellular pressure, a key physical parameter of the intracellular environment, has been found to regulate multiple cell physiological activities and impact cell micromanipulation results. The intracellular pressure may reveal the mechanism of these cells’ physiological activities or improve the micro-manipulation accuracy for cells. The involvement of specialized and expensive devices and the significant damage to cell viability that the current intracellular pressure measurement methods cause significantly limit their wide applications. This paper proposes a robotic intracellular pressure measurement method using a traditional micropipette electrode system setup. First, the measured resistance of the micropipette inside the culture medium is modeled to analyze its variation trend when the pressure inside the micropipette increases. Then, the concentration of KCl solution filled inside the micropipette electrode that is suitable for intracellular pressure measurement is determined according to the tested electrode resistance–pressure relationship; 1 mol/L KCl solution is our final choice. Further, the measurement resistance of the micropipette electrode inside the cell is modeled to measure the intracellular pressure through the difference in key pressure before and after the release of the intracellular pressure. Based on the above work, a robotic measurement procedure of the intracellular pressure is established based on a traditional micropipette electrode system. The experimental results on porcine oocytes demonstrate that the proposed method can operate on cells at an average speed of 20~40 cells/day with measurement efficiency comparable to the related work. The average repeated error of the relationship between the measured electrode resistance and the pressure inside the micropipette electrode is less than 5%, and no observable intracellular pressure leakage was found during the measurement process, both guaranteeing the measurement accuracy of intracellular pressure. The measured results of the porcine oocytes are in accordance with those reported in related work. Moreover, a 90% survival rate of operated oocytes was obtained after measurement, proving limited damage to cell viability. Our method does not rely on expensive instruments and is conducive to promotion in daily laboratories. Full article
(This article belongs to the Special Issue Sensors for Robots II)
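
The calibration idea behind a resistance-based pressure readout can be sketched as fitting electrode resistance against applied pressure and then converting a resistance difference (before vs. after pressure release) into a pressure estimate. The linear fit and all numbers below are toy assumptions, not the paper's resistance–pressure model or data.

```python
# Hypothetical calibration sketch for a micropipette-electrode pressure readout.
import numpy as np

pressure_kpa = np.array([0.0, 0.5, 1.0, 1.5, 2.0])          # applied pressures (toy)
resistance_mohm = np.array([5.00, 5.12, 5.25, 5.37, 5.50])  # measured resistances (toy)

slope, intercept = np.polyfit(pressure_kpa, resistance_mohm, 1)

def pressure_from_resistance(r_mohm):
    return (r_mohm - intercept) / slope

r_before_release = 5.31
r_after_release = 5.25
intracellular_pressure = (pressure_from_resistance(r_before_release)
                          - pressure_from_resistance(r_after_release))
print(round(intracellular_pressure, 3), "kPa (illustrative numbers only)")
```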

21 pages, 6417 KiB  
Article
Research on Surface Tracking and Constant Force Control of a Grinding Robot
by Xiaohua Shi, Mingyang Li, Yuehu Dong and Shangyu Feng
Sensors 2023, 23(10), 4702; https://doi.org/10.3390/s23104702 - 12 May 2023
Viewed by 1294
Abstract
To improve the quality and efficiency of robot grinding, a design and a control algorithm for a robot used for grinding the surfaces of large, curved workpieces with unknown parameters, such as wind turbine blades, are proposed herein. Firstly, the structure and motion mode of the grinding robot are determined. Secondly, in order to solve the problem of complexity and poor adaptability of the algorithm in the grinding process, a force/position hybrid control strategy based on fuzzy PID is proposed which greatly improves the response speed and reduces the error of the static control strategy. Compared with normal PID, fuzzy PID has the advantages of variable parameters and strong adaptability; the hydraulic cylinder used to adjust the angle of the manipulator can control the speed offset within 0.27 rad/s, and the grinding process can be carried out directly without obtaining the specific model of the surface to be machined. Finally, the experiments are carried out, the grinding force and feed speed are maintained within the allowable error range of the expected value, and the results verify the feasibility and effectiveness of the position tracking and constant force control strategy in this paper. The surface roughness of the blade is maintained within Ra = 2~3 μm after grinding, which proves that the grinding quality meets the requirements of the best surface roughness required for the subsequent process. Full article
(This article belongs to the Special Issue Sensors for Robots II)
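
A fuzzy-adapted PID force loop of the kind referenced here can be sketched with baseline PID gains scaled by coarse rules on the error and error rate. The membership rule below is a toy placeholder, not the paper's fuzzy design.

```python
# Generic sketch of a fuzzy-adapted PID force controller.
class FuzzyPID:
    def __init__(self, kp=1.0, ki=0.2, kd=0.05, dt=0.01):
        self.kp0, self.ki0, self.kd0, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, 0.0

    @staticmethod
    def _fuzzy_scale(x, small=0.5, large=2.0):
        """Crude membership: small inputs -> ~1, large inputs -> up to ~1.5."""
        m_large = min(abs(x) / large, 1.0)
        return 1.0 + 0.5 * m_large if abs(x) > small else 1.0

    def step(self, force_ref, force_meas):
        err = force_ref - force_meas
        derr = (err - self.prev_err) / self.dt
        kp = self.kp0 * self._fuzzy_scale(err)
        kd = self.kd0 * self._fuzzy_scale(derr, small=5.0, large=20.0)
        self.integral += err * self.dt
        self.prev_err = err
        return kp * err + self.ki0 * self.integral + kd * derr

ctrl = FuzzyPID()
print(ctrl.step(force_ref=20.0, force_meas=15.0))
```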

22 pages, 6249 KiB  
Article
Prehensile and Non-Prehensile Robotic Pick-and-Place of Objects in Clutter Using Deep Reinforcement Learning
by Muhammad Babar Imtiaz, Yuansong Qiao and Brian Lee
Sensors 2023, 23(3), 1513; https://doi.org/10.3390/s23031513 - 29 Jan 2023
Cited by 6 | Viewed by 4359
Abstract
In this study, we develop a framework for an intelligent and self-supervised industrial pick-and-place operation for cluttered environments. Our target is to have the agent learn to perform prehensile and non-prehensile robotic manipulations to improve the efficiency and throughput of the pick-and-place task. To achieve this target, we specify the problem as a Markov decision process (MDP) and deploy a deep reinforcement learning (RL) temporal difference model-free algorithm known as the deep Q-network (DQN). We consider three actions in our MDP; one is ‘grasping’ from the prehensile manipulation category and the other two are ‘left-slide’ and ‘right-slide’ from the non-prehensile manipulation category. Our DQN is composed of three fully convolutional networks (FCN) based on the memory-efficient architecture of DenseNet-121 which are trained together without causing any bottleneck situations. Each FCN corresponds to each discrete action and outputs a pixel-wise map of affordances for the relevant action. Rewards are allocated after every forward pass and backpropagation is carried out for weight tuning in the corresponding FCN. In this manner, non-prehensile manipulations are learnt which can, in turn, lead to possible successful prehensile manipulations in the near future and vice versa, thus increasing the efficiency and throughput of the pick-and-place task. The Results section shows performance comparisons of our approach to a baseline deep learning approach and a ResNet architecture-based approach, along with very promising test results at varying clutter densities across a range of complex scenario test cases. Full article
(This article belongs to the Special Issue Sensors for Robots II)
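
The action-selection step over pixel-wise affordance maps can be sketched as an argmax across the three per-action Q-value maps, returning both the chosen manipulation and the pixel at which to execute it. The random maps below stand in for the FCN outputs; this is not the authors' code.

```python
# Sketch of action selection over per-action affordance (Q-value) maps.
import numpy as np

ACTIONS = ["grasp", "left_slide", "right_slide"]

def select_action(affordance_maps):
    """affordance_maps: array of shape (3, H, W) -> (action name, (row, col), value)."""
    idx = np.unravel_index(np.argmax(affordance_maps), affordance_maps.shape)
    action, row, col = idx
    return ACTIONS[action], (int(row), int(col)), float(affordance_maps[idx])

maps = np.random.default_rng(1).random((3, 224, 224))   # stand-in FCN outputs
print(select_action(maps))
```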

20 pages, 7228 KiB  
Article
The Effective Coverage of Homogeneous Teams with Radial Attenuation Models
by Yuan-Rui Yang, Qiyu Kang and Rui She
Sensors 2023, 23(1), 350; https://doi.org/10.3390/s23010350 - 29 Dec 2022
Viewed by 1379
Abstract
For the area coverage (e.g., using a WSN), despite the comprehensive research works on full-plane coverage using a multi-node team equipped with the ideal constant model, only very few works have discussed the coverage of practical models with varying intensity. This paper analyzes the properties of the effective coverage of multi-node teams consisting of a given number of nodes. Each node is equipped with a radial attenuation disk model as its individual model of coverage, which conforms to the natural characteristics of devices in the real world. Based on our previous analysis of 2-node teams, the properties of the effective coverage of 3-node and n-node (n ≥ 4) teams in regular geometric formations are analyzed as generalized cases. Numerical analysis and simulations for 3-node and n-node teams (n ≥ 4) are conducted separately. For the 3-node cases, the relations between the side length of the equilateral triangle formation and the effective coverage of the team equipped with two different types of models are respectively inspected. For the n-node cases (n ≥ 4), the effective coverage of a team in three formations, namely regular polygon, regular star, and equilateral triangular tessellation (for n = 6), is investigated. The results can be applied to many scenarios, either dynamic (e.g., robots with sensors) or static, where a team of multiple nodes cooperates to produce a larger effective coverage. Full article
(This article belongs to the Special Issue Sensors for Robots II)
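
The kind of quantity studied in this paper can be approximated numerically: place n nodes on a regular polygon, give each a radially attenuating intensity, and estimate the area where the combined intensity is high enough. Both the attenuation profile and the "effective" threshold below are illustrative assumptions, not the paper's definitions.

```python
# Monte Carlo sketch: effective coverage of an n-node regular-polygon formation
# with a radial attenuation model per node.
import numpy as np

def node_positions(n, side):
    """Vertices of a regular n-gon with the given side length, centred at the origin."""
    r = side / (2.0 * np.sin(np.pi / n))        # circumradius from side length
    ang = 2.0 * np.pi * np.arange(n) / n
    return np.column_stack([r * np.cos(ang), r * np.sin(ang)])

def intensity(dist, radius=1.0):
    """Radial attenuation: 1 at the node, decaying linearly to 0 at `radius`."""
    return np.clip(1.0 - dist / radius, 0.0, None)

def effective_coverage(n, side, threshold=0.5, radius=1.0, samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    half = radius + side                         # bounding box around the formation
    pts = rng.uniform(-half, half, size=(samples, 2))
    nodes = node_positions(n, side)
    d = np.linalg.norm(pts[:, None, :] - nodes[None, :, :], axis=-1)
    covered = intensity(d, radius).sum(axis=1) >= threshold
    return covered.mean() * (2 * half) ** 2      # covered fraction * box area

for side in (0.5, 1.0, 1.5):
    print(side, round(effective_coverage(n=4, side=side), 3))
```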

12 pages, 3188 KiB  
Communication
A Reinforcement Learning-Based Strategy of Path Following for Snake Robots with an Onboard Camera
by Lixing Liu, Xian Guo and Yongchun Fang
Sensors 2022, 22(24), 9867; https://doi.org/10.3390/s22249867 - 15 Dec 2022
Cited by 2 | Viewed by 1716
Abstract
For path following of snake robots, many model-based controllers have demonstrated strong tracking abilities. However, a satisfactory performance often relies on precise modelling and simplified assumptions. In addition, visual perception is also essential for autonomous closed-loop control, which renders the path following of snake robots even more challenging. Hence, a novel reinforcement learning-based hierarchical control framework is designed to enable a snake robot with an onboard camera to realize autonomous self-localization and path following. Specifically, a path following policy is first trained in a hierarchical manner, in which the RL algorithm and gait knowledge are well combined. On this basis, the training efficiency is sufficiently optimized, and the path following performance of the control policy is greatly improved, which can then be implemented on a practical snake robot without any additional training. Subsequently, in order to promote visual self-localization during path following, a visual localization stabilization term is added to the reward function that trains the path following strategy, which endows a snake robot with smooth steering ability during locomotion, thereby guaranteeing the accuracy of visual localization and facilitating practical applications. Comparative simulations and experimental results are presented to demonstrate the superior performance of the proposed hierarchical path-following control method in terms of convergence speed and tracking accuracy. Full article
(This article belongs to the Special Issue Sensors for Robots II)
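
A reward of the flavour described above can be sketched as a path-tracking term plus a penalty on rapid head rotation, standing in for the visual localization stabilization term. The weights and the exact penalty form are assumptions for the sketch, not the paper's reward.

```python
# Illustrative reward shaping for camera-on-board path following.
import numpy as np

def reward(cross_track_error, heading_error, head_yaw_rate,
           w_track=1.0, w_heading=0.3, w_stab=0.2):
    r_track = -w_track * abs(cross_track_error)          # stay close to the path
    r_heading = -w_heading * abs(heading_error)          # point along the path
    r_stab = -w_stab * abs(head_yaw_rate)                # keep the camera steady
    return r_track + r_heading + r_stab

print(reward(cross_track_error=0.1, heading_error=np.deg2rad(5), head_yaw_rate=0.4))
```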

14 pages, 2341 KiB  
Article
Unmanned Aerial Vehicle (UAV) Robot Microwave Imaging Based on Multi-Path Scattering Model
by Zhihua Chen, Xinya Qiao, Pei Wu, Tiancai Zhang, Tao Hong and Linquan Fang
Sensors 2022, 22(22), 8736; https://doi.org/10.3390/s22228736 - 11 Nov 2022
Viewed by 1209
Abstract
Unmanned Aerial Vehicle (UAV) robot microwave imaging systems have attracted widespread attention. Compared with visible-light and infrared imaging systems, microwave imaging is not susceptible to weather. Active microwave imaging systems have already been realized on UAV robots. However, imaging from the scattering signals of geographical objects illuminated by satellite transmitting systems and received by UAV robots has rarely been studied, even though this approach reduces the payload weight required on the UAV robot. In this paper, a multi-path scattering model of vegetation on the Earth's surface is proposed, and a microwave imaging algorithm is then introduced to reconstruct images from the scattering data received by the UAV robot based on the multi-path model. In the image processing, it is assumed that the orbital altitude of the transmitter carried on the satellite remains unchanged, while the receiver-equipped UAV robot obtains reflective information from ground vegetation at different zenith angles. The imaging results show that the angle change has an impact on the imaging resolution. The combination of the electromagnetic scattering model and the image processing method helps to explain the imaging results and the multi-path scattering mechanisms of vegetation, providing a reference for the research and development of microwave imaging systems for UAV robot networks using satellite transmitting signals. Full article
(This article belongs to the Special Issue Sensors for Robots II)
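
The basic geometric quantity behind imaging with a spaceborne transmitter and a UAV-mounted receiver is the bistatic path length and its propagation delay; the sketch below computes these for toy coordinates and is not the paper's imaging algorithm.

```python
# Geometry sketch for a satellite-transmitter / UAV-receiver bistatic setup.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def bistatic_delay(tx, scatterer, rx):
    """Total path length (m) and delay (s) for transmitter -> scatterer -> receiver."""
    tx, scatterer, rx = map(np.asarray, (tx, scatterer, rx))
    path = np.linalg.norm(scatterer - tx) + np.linalg.norm(rx - scatterer)
    return path, path / C

# Satellite ~600 km up, scatterer on the ground, UAV at 200 m altitude (toy values)
path_m, delay_s = bistatic_delay(tx=(0.0, 0.0, 600e3),
                                 scatterer=(1_000.0, 500.0, 0.0),
                                 rx=(1_200.0, 450.0, 200.0))
print(round(path_m, 1), "m,", delay_s, "s")
```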
