
Special Issue "Sensors for Robots"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 December 2015).

Special Issue Editors

Prof. Dr. Lianqing Liu
Guest Editor
Shenyang Institute of Automation, Chinese Academy of Sciences, 114 Nanta Street, Shenhe District, Shenyang, Liaoning, China
Interests: nanorobotics, nanosensor fabrication, haptics, nanopositioning
Prof. Dr. Ning Xi
Guest Editor
Department of Electrical and Computer Engineering, Michigan State University, 2120 Engineering Building, East Lansing, MI 48824-1226, USA
Interests: nano sensors, haptic sensors, tactile sensors, force/torque sensors, photo detectors, infrared sensors
Prof. Dr. Wen-Jung Li
Guest Editor
Department of Mechanical and Biomedical Engineering, City University of Hong Kong, Kowloon, Hong Kong
Interests: micro/nano/bio sensors; MEMS/nano-based biotechnology; electrokinetics-based cancer/stem cell separation and identification
Prof. Dr. Xin Zhao
Guest Editor
College of Computer and Control Engineering, Nankai University, 94 Weijin Road, 300071 Tianjin, P.R. China
Interests: micro/nano robotic manipulation, micro/nano sensor fabrication, microscopy vision sensing
Dr. Yajing Shen
Guest Editor
Department of Biomedical Engineering, City University of Hong Kong, Kowloon, Hong Kong, China
Interests: robotics; micro-nano manipulation; cell assembly; DNA origami; nano characterization

Special Issue Information

Dear Colleagues,

Robotics research has already had a significant impact on our daily lives, in areas such as industry, education, medicine, social services, and the military. As key components of robots, sensors form the basis of a robot's self-adaptive abilities and artificial intelligence. Novel sensing techniques and advanced sensor applications for robots have therefore received increasing interest worldwide. This Special Issue aims to showcase review articles and rigorous original papers describing current and expected challenges, along with potential solutions, in robotic sensing.

Potential topics include, but are not limited to:

1. Novel sensing techniques for robots

  • novel force, vision, tactile, and auditory sensors
  • novel sensor design, fabrication, and calibration methods
  • sensor information integration and fusion
  • sensor information processing algorithm improvement and optimization
  • wireless sensor networks
  • multi-sensors, reconfigurable sensors, and cyber-physical sensing systems
  • micro/nano sensor theory, design, and development

2. Sensors for novel robotic applications

  • vision sensors and vision algorithms for moving object tracking and robot navigation
  • force, vision, and tactile sensors for precise manipulation, collision prediction, etc.
  • multi-sensing intelligent systems
  • sensor systems for human-robot interactions
  • sensors for robotics and automation at micro/nano scales
  • applications of robot sensing in interdisciplinary areas, including biology, material sciences, physical sciences, etc.

Prof. Dr. Lianqing Liu
Prof. Dr. Ning Xi
Prof. Dr. Wen-Jung Li
Prof. Dr. Xin Zhao
Dr. Yajing Shen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • robot force sensing
  • robot vision sensing
  • robot intelligent sensing
  • sensor applications for robot manipulation and control
  • integrated robot and sensor systems
  • sensors for cloud robots

Published Papers (25 papers)


Research

Open Access Article
Design and Characterization of a Three-Axis Hall Effect-Based Soft Skin Sensor
Sensors 2016, 16(4), 491; https://doi.org/10.3390/s16040491 - 07 Apr 2016
Cited by 48 | Viewed by 4763
Abstract
This paper presents an easy means of producing a 3-axis Hall effect-based skin sensor for robotic applications. It uses an off-the-shelf chip, is physically small, and provides digital output. Furthermore, the sensor has a soft exterior for safe interactions with the environment; in particular, it uses soft silicone about 8 mm thick. Tests were performed to evaluate the drift due to temperature changes, and a compensation using the integral temperature sensor was implemented. Furthermore, the hysteresis and the crosstalk between the 3-axis measurements were evaluated. The sensor is able to detect minimal forces of about 1 gf. The sensor was calibrated, and results with total forces up to 1450 gf in the normal and tangential directions of the sensor are presented. The tests revealed that the sensor is able to measure the different components of the force vector. Full article
(This article belongs to the Special Issue Sensors for Robots)
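The temperature compensation the abstract describes can be sketched as a first-order correction driven by the chip's integral temperature sensor. The function, coefficients, and units below are hypothetical illustrations, not the paper's actual calibration:

```python
import numpy as np

def compensate_drift(raw_xyz, temp_c, ref_temp_c=25.0,
                     drift_per_c=(0.8, 0.8, 1.5)):
    """Subtract a first-order temperature drift from a 3-axis field reading.

    raw_xyz     : measured (Bx, By, Bz) in raw chip counts
    temp_c      : reading from the chip's integral temperature sensor (deg C)
    drift_per_c : hypothetical per-axis drift coefficients (counts / deg C),
                  obtained by fitting unloaded readings at several temperatures
    """
    raw = np.asarray(raw_xyz, dtype=float)
    return raw - np.asarray(drift_per_c) * (temp_c - ref_temp_c)

# At 30 degC, remove 5 degC worth of drift on each axis.
corrected = compensate_drift([100.0, -40.0, 250.0], temp_c=30.0)
```

In a real calibration the coefficients would be identified per axis from no-load readings swept over temperature; higher-order fits are a straightforward extension.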

Open Access Article
Visual EKF-SLAM from Heterogeneous Landmarks
Sensors 2016, 16(4), 489; https://doi.org/10.3390/s16040489 - 07 Apr 2016
Cited by 14 | Viewed by 3499
Abstract
Many applications require the localization of a moving object, e.g., a robot, using sensory data acquired from embedded devices. Simultaneous localization and mapping from vision performs both the spatial and temporal fusion of these data on a map when a camera moves in an unknown environment. Such a SLAM process executes two interleaved functions: the front-end detects and tracks features from images, while the back-end interprets features as landmark observations and estimates both the landmarks and the robot positions with respect to a selected reference frame. This paper describes a complete visual SLAM solution, combining both point and line landmarks on a single map. The proposed method has an impact on both the back-end and the front-end. The contributions comprise heterogeneous landmark-based EKF-SLAM (the management of a map composed of both point and line landmarks), including a comparison between landmark parametrizations and an evaluation of how heterogeneity improves the accuracy of camera localization; the development of a front-end active-search process for linear landmarks integrated into SLAM; and the experimentation methodology. Full article
(This article belongs to the Special Issue Sensors for Robots)
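The predict/update cycle that the back-end runs can be illustrated with a deliberately tiny, hypothetical 1D EKF-SLAM (one robot coordinate, one point landmark); the paper's heterogeneous point-and-line parametrization is far richer, so this is only the skeleton of the filter:

```python
import numpy as np

# Minimal 1D EKF-SLAM: state = [robot position, landmark position].
x = np.array([0.0, 5.0])          # initial estimate
P = np.diag([0.01, 1.0])          # robot well known, landmark uncertain
Q = np.diag([0.05, 0.0])          # motion noise (the landmark is static)
R = 0.01                          # range-measurement noise

def predict(x, P, u):
    F = np.eye(2)                 # motion model is linear in this toy case
    x = x + np.array([u, 0.0])    # robot moves by u, landmark stays put
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    H = np.array([[-1.0, 1.0]])   # z = landmark - robot (relative range)
    y = z - (x[1] - x[0])         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T / S               # Kalman gain (S is 1x1)
    x = x + (K * y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = predict(x, P, u=1.0)       # robot drives 1 m forward
x, P = update(x, P, z=4.2)        # measures the landmark 4.2 m ahead
```

After one observation the landmark variance drops sharply, which is exactly the map-refinement behaviour the SLAM back-end relies on.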

Open Access Article
Fusion of Haptic and Gesture Sensors for Rehabilitation of Bimanual Coordination and Dexterous Manipulation
Sensors 2016, 16(3), 395; https://doi.org/10.3390/s16030395 - 18 Mar 2016
Cited by 13 | Viewed by 3518
Abstract
Disabilities after neural injury, such as stroke, bring a tremendous burden to patients, families, and society. Besides conventional constraint-induced training with a paretic arm, bilateral rehabilitation training involves both the ipsilateral and contralateral sides of the neural injury, fitting well with the fact that both arms are needed in common activities of daily living (ADLs), and can promote good functional recovery. In this work, the fusion of a gesture sensor and a haptic sensor with force feedback capabilities has enabled a bilateral rehabilitation training therapy. The Leap Motion gesture sensor detects the motion of the healthy hand, and the omega.7 device can detect and assist the paretic hand, according to the designed cooperative task paradigm, as much as needed, with active force feedback to accomplish the manipulation task. A virtual scenario has been built up, and the motion and force data facilitate instantaneous visual and audio feedback, as well as further analysis of the functional capabilities of the patient. This task-oriented bimanual training paradigm recruits the sensory, motor, and cognitive aspects of the patient into one loop, encourages the active involvement of patients in rehabilitation training, strengthens the cooperation of the healthy and impaired hands, challenges the dexterous manipulation capability of the paretic hand, suits ease of use at home or in centralized institutions, and thus holds promise for effective rehabilitation training. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
A Monocular Vision Sensor-Based Obstacle Detection Algorithm for Autonomous Robots
Sensors 2016, 16(3), 311; https://doi.org/10.3390/s16030311 - 01 Mar 2016
Cited by 17 | Viewed by 4210
Abstract
This paper presents a monocular vision sensor-based obstacle detection algorithm for autonomous robots. Each individual image pixel at the bottom region of interest is labeled as belonging either to an obstacle or the floor. While conventional methods depend on point tracking for geometric cues for obstacle detection, the proposed algorithm uses the inverse perspective mapping (IPM) method. This method is much more advantageous when the camera is not high off the floor, which makes point tracking near the floor difficult. Markov random field-based obstacle segmentation is then performed using the IPM results and a floor appearance model. Next, the shortest distance between the robot and the obstacle is calculated. The algorithm is tested by applying it to 70 datasets, 20 of which include nonobstacle images where considerable changes in floor appearance occur. The obstacle segmentation accuracies and the distance estimation error are quantitatively analyzed. For obstacle datasets, the segmentation precision and the average distance estimation error of the proposed method are 81.4% and 1.6 cm, respectively, whereas those for a conventional method are 57.5% and 9.9 cm, respectively. For nonobstacle datasets, the proposed method gives 0.0% false positive rates, while the conventional method gives 17.6%. Full article
(This article belongs to the Special Issue Sensors for Robots)
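Inverse perspective mapping, the core cue in the algorithm above, warps pixels through the ground-plane homography; a pixel is consistent with the floor when its warped position agrees with what is actually observed, while obstacle pixels (off the ground plane) violate that prediction. A minimal sketch with a made-up homography (in a real system it comes from camera calibration):

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 pixel coordinates through a 3x3 homography."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # back to inhomogeneous

def is_floor(pred_uv, obs_uv, tol=2.0):
    """IPM consistency check: a pixel whose warped position matches the
    observed position within tol pixels behaves like a ground-plane point."""
    return bool(np.linalg.norm(pred_uv - obs_uv) < tol)

# Hypothetical image-to-floor homography (affine here for readability).
H_img_to_floor = np.array([[0.02, 0.0, -3.0],
                           [0.0, 0.02, -2.0],
                           [0.0, 0.0,   1.0]])
floor_xy = apply_homography(H_img_to_floor, np.array([[160.0, 120.0]]))
```

The paper then feeds such per-pixel consistency evidence, together with a floor appearance model, into a Markov random field for the final segmentation.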

Open Access Article
Shape Reconstruction Based on a New Blurring Model at the Micro/Nanometer Scale
Sensors 2016, 16(3), 302; https://doi.org/10.3390/s16030302 - 27 Feb 2016
Cited by 2 | Viewed by 2612
Abstract
Real-time observation of three-dimensional (3D) information has great significance in nanotechnology. However, common nanometer-scale observation techniques, including transmission electron microscopy (TEM) and scanning probe microscopy (SPM), have difficulty obtaining 3D information because they lack non-destructive, intuitive, and fast imaging ability under normal conditions, and optical methods have not been widely used in micro/nanometer shape reconstruction due to practical requirements and imaging limitations in micro/nano manipulation. In this paper, a high-resolution shape reconstruction method based on a new optical blurring model is proposed. Firstly, the heat diffusion physics equation is analyzed and the optical diffraction model is modified to directly explain the basic principle of image blurring resulting from depth variation. Secondly, a blurring imaging model is proposed based on fitting a 4th-order polynomial curve. The heat diffusion equations are combined with the blurring imaging model, and their solution is transformed into a dynamic optimization problem. Finally, experiments with a standard nanogrid, an atomic force microscopy (AFM) cantilever, and a microlens were conducted. The experiments prove that the proposed method can reconstruct 3D shapes at the micro/nanometer scale, with a minimal reconstruction error of 3 nm. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
A Layered Approach for Robust Spatial Virtual Human Pose Reconstruction Using a Still Image
Sensors 2016, 16(2), 263; https://doi.org/10.3390/s16020263 - 20 Feb 2016
Cited by 1 | Viewed by 2447
Abstract
Pedestrian detection and human pose estimation are instructive for reconstructing a three-dimensional scenario and for robot navigation, particularly when large amounts of vision data are captured using various data-recording techniques. With an unrestricted capture scheme, which produces occlusions or blurring, the information describing each part of a human body and the relationship between parts, or even between different pedestrians, must be recovered from a still image. Using this framework, a multi-layered spatial virtual human pose reconstruction framework is presented in this study to recover any deficient information in planar images. In this framework, a hierarchical parts-based deep model is used to detect body parts from the available restricted information in a still image and is then combined with spatial Markov random fields to re-estimate the accurate joint positions in the deep network. Then, the planar estimation results are mapped onto a virtual three-dimensional space using multiple constraints to recover any deficient spatial information. The proposed approach can be viewed as a general pre-processing method to guide the generation of continuous, three-dimensional motion data. The experimental results of this study describe the effectiveness and usability of the proposed approach. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
A Rapid Coordinate Transformation Method Applied in Industrial Robot Calibration Based on Characteristic Line Coincidence
Sensors 2016, 16(2), 239; https://doi.org/10.3390/s16020239 - 18 Feb 2016
Cited by 13 | Viewed by 3216
Abstract
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3D measurement, and robotics. The widely applied methods of coordinate transformation are generally based on solving equations of point clouds. Despite their high accuracy, they might yield no solution due to ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines are made to coincide through a series of rotations and translations, and the transformation matrix can then be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method has the same high accuracy, but the operation is more convenient and flexible. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot's kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. Full article
(This article belongs to the Special Issue Sensors for Robots)
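The underlying bookkeeping of composing rotations and translations into a single transformation matrix can be sketched with homogeneous coordinates; the angle and translation below are arbitrary examples, not the paper's characteristic-line construction:

```python
import numpy as np

def rot_z(theta):
    """Rotation matrix about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def make_transform(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 homogeneous matrix."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Compose: rotate 90 degrees about z, then translate by (1, 0, 0).
T = make_transform(rot_z(np.pi / 2), [1.0, 0.0, 0.0])
p = T @ np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous point (1, 0, 0)
```

A sequence of such rotations and translations multiplies into one matrix, which is the "transformation matrix obtained using matrix transformation theory" that the abstract refers to.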

Open Access Article
Enhanced ICP for the Registration of Large-Scale 3D Environment Models: An Experimental Study
Sensors 2016, 16(2), 228; https://doi.org/10.3390/s16020228 - 15 Feb 2016
Cited by 22 | Viewed by 3076
Abstract
One of the main applications of mobile robots is the large-scale perception of the outdoor environment. One of the main challenges of this application is fusing environmental data obtained by multiple robots, especially heterogeneous robots. This paper proposes an enhanced iterative closest point (ICP) method for the fast and accurate registration of 3D environmental models. First, a hierarchical searching scheme is combined with the octree-based ICP algorithm. Second, an early-warning mechanism is used to perceive the local minimum problem. Third, a heuristic escape scheme based on sampled potential transformation vectors is used to avoid local minima and achieve optimal registration. Experiments involving one unmanned aerial vehicle and one unmanned surface vehicle were conducted to verify the proposed technique. The experimental results were compared with those of normal ICP registration algorithms to demonstrate the superior performance of the proposed method. Full article
(This article belongs to the Special Issue Sensors for Robots)
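The baseline that the enhanced method builds on, plain point-to-point ICP, alternates nearest-neighbour matching with a closed-form rigid alignment (Kabsch/SVD). A minimal brute-force sketch on synthetic data, without the paper's octree, hierarchical search, or escape scheme:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Closed-form least-squares rotation + translation (Kabsch / SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Plain point-to-point ICP with brute-force nearest neighbours."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(axis=-1)
        matched = dst[d2.argmin(axis=1)]   # closest dst point per src point
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur

# Synthetic test: a cloud and a slightly rotated + translated copy of it.
rng = np.random.default_rng(0)
dst = rng.random((50, 3))
a = 0.05                                  # small rotation about z (rad)
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
src = dst @ Rz.T + np.array([0.02, -0.01, 0.01])
init_err = np.abs(src - dst).max()
aligned = icp(src, dst)
err = np.abs(aligned - dst).max()         # residual after registration
```

The local-minimum problem the abstract mentions arises exactly when the nearest-neighbour step locks onto wrong correspondences; the paper's early-warning and heuristic escape mechanisms target that failure mode.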

Open Access Article
Design and Analysis of a Single-Camera Omnistereo Sensor for Quadrotor Micro Aerial Vehicles (MAVs)
Sensors 2016, 16(2), 217; https://doi.org/10.3390/s16020217 - 06 Feb 2016
Cited by 11 | Viewed by 3616
Abstract
We describe the design and 3D sensing performance of an omnidirectional stereo (omnistereo) vision system applied to Micro Aerial Vehicles (MAVs). The proposed omnistereo sensor employs a monocular camera that is co-axially aligned with a pair of hyperboloidal mirrors (a vertically-folded catadioptric configuration). We show that this arrangement provides a compact solution for omnidirectional 3D perception while mounted on top of propeller-based MAVs (not capable of large payloads). The theoretical single viewpoint (SVP) constraint helps us derive analytical solutions for the sensor's projective geometry and generate SVP-compliant panoramic images to compute 3D information from stereo correspondences (in a truly synchronous fashion). We perform an extensive analysis of various system characteristics, such as size, catadioptric spatial resolution, and field of view. In addition, we pose a probabilistic model for the uncertainty estimation of 3D information from triangulation of back-projected rays. We validate the projection error of the design using both synthetic and real-life images against ground-truth data. Qualitatively, we show 3D point clouds (dense and sparse) resulting from a single image captured in a real-life experiment. We expect our sensor to be reproducible, as its model parameters can be optimized to suit other catadioptric-based omnistereo vision systems under different circumstances. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Vision-Based Pose Estimation for Robot-Mediated Hand Telerehabilitation
Sensors 2016, 16(2), 208; https://doi.org/10.3390/s16020208 - 05 Feb 2016
Cited by 8 | Viewed by 3981
Abstract
Vision-based Pose Estimation (VPE) represents a non-invasive solution for allowing smooth and natural interaction between a human user and a robotic system, without requiring complex calibration procedures. Moreover, VPE interfaces are gaining momentum as they are highly intuitive, such that they can be used by untrained personnel (e.g., a generic caregiver) even in delicate tasks such as rehabilitation exercises. In this paper, we present a novel master-slave setup for hand telerehabilitation with an intuitive and simple interface for remote control of a wearable hand exoskeleton, named HX. While rehabilitative exercises are performed, the master unit evaluates the 3D position of a human operator's hand joints in real time using only an RGB-D camera and remotely commands the slave exoskeleton. Within the slave unit, the exoskeleton replicates hand movements, and an external grip sensor records interaction forces, which are fed back to the operator-therapist, allowing a direct real-time assessment of the rehabilitative task. Experimental data collected with an operator and six volunteers are provided to show the feasibility of the proposed system and its performance. The results demonstrate that, leveraging our system, the operator was able to directly control the volunteers' hand movements. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
An Imaging Sensor-Aided Vision Navigation Approach that Uses a Geo-Referenced Image Database
Sensors 2016, 16(2), 166; https://doi.org/10.3390/s16020166 - 28 Jan 2016
Cited by 3 | Viewed by 2872
Abstract
Vision navigation determines position and attitude via real-time processing of data collected from imaging sensors, without requiring a high-performance global positioning system (GPS) or inertial measurement unit (IMU). It is widely used in indoor navigation, far space navigation, and multi-sensor-integrated mobile mapping. This paper proposes a novel vision navigation approach, aided by imaging sensors, that uses a high-accuracy geo-referenced image database (GRID) for high-precision navigation of multi-sensor platforms in environments with poor GPS coverage. First, the framework of GRID-aided vision navigation is developed with sequence images from land-based mobile mapping systems that integrate multiple sensors. Second, a highly efficient GRID storage management model is established, based on the linear index of a road segment, for fast image search and retrieval. Third, a robust image matching algorithm is presented to search for and match a real-time image against the GRID. The image matched with the real-time scene is then used to calculate the 3D navigation parameters of the multi-sensor platform. Experimental results show that the proposed approach retrieves images efficiently and achieves navigation accuracies of 1.2 m in plane and 1.8 m in height over 5 min and 1500 m of GPS loss. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
An Online Continuous Human Action Recognition Algorithm Based on the Kinect Sensor
Sensors 2016, 16(2), 161; https://doi.org/10.3390/s16020161 - 28 Jan 2016
Cited by 42 | Viewed by 3610
Abstract
Continuous human action recognition (CHAR) is more practical in human-robot interactions. In this paper, an online CHAR algorithm is proposed based on skeletal data extracted from RGB-D images captured by Kinect sensors. Each human action is modeled by a sequence of key poses and atomic motions in a particular order. In order to extract key poses and atomic motions, feature sequences are divided into pose feature segments and motion feature segments using an online segmentation method based on potential differences of features. The likelihood probabilities that each feature segment can be labeled as one of the extracted key poses or atomic motions are computed in the online model-matching process. An online classification method with a variable-length maximal entropy Markov model (MEMM) is then performed on the likelihood probabilities to recognize continuous human actions. The variable-length MEMM method ensures the effectiveness and efficiency of the proposed CHAR method. Compared with published CHAR methods, the proposed algorithm does not need to detect the start and end points of each human action in advance. Experimental results on public datasets show that the proposed algorithm is effective and highly efficient at recognizing continuous human actions. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Design and Analysis of a Sensor System for Cutting Force Measurement in Machining Processes
Sensors 2016, 16(1), 70; https://doi.org/10.3390/s16010070 - 07 Jan 2016
Cited by 25 | Viewed by 3476
Abstract
Multi-component force sensors have infiltrated a wide variety of automation products since the 1970s. However, one seldom finds full-component sensor systems available on the market for cutting force measurement in machining processes. In this paper, a new six-component sensor system with a compact monolithic elastic element (EE) is designed and developed to detect the tangential cutting forces Fx, Fy and Fz (i.e., forces along the x-, y-, and z-axes) as well as the cutting moments Mx, My and Mz (i.e., moments about the x-, y-, and z-axes) simultaneously. Optimal structural parameters of the EE are carefully designed via simulation-driven optimization. Moreover, a prototype sensor system is fabricated and applied to a 5-axis parallel kinematic machining center. Calibration experiments demonstrate that the system is capable of measuring cutting forces and moments with good linearity while minimizing coupling error. Both the Finite Element Analysis (FEA) and the calibration experiments validate the high performance of the proposed sensor system, which is expected to be adopted in machining processes. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Real-Time Hand Posture Recognition for Human-Robot Interaction Tasks
Sensors 2016, 16(1), 36; https://doi.org/10.3390/s16010036 - 04 Jan 2016
Cited by 11 | Viewed by 2483
Abstract
In this work, we present a multiclass hand posture classifier useful for human-robot interaction tasks. The proposed system is based exclusively on visual sensors, and it achieves a real-time performance, whilst detecting and recognizing an alphabet of four hand postures. The proposed approach is based on the real-time deformable detector, a boosting trained classifier. We describe a methodology to design the ensemble of real-time deformable detectors (one for each hand posture that can be classified). Given the lack of standard procedures for performance evaluation, we also propose the use of full image evaluation for this purpose. Such an evaluation methodology provides us with a more realistic estimation of the performance of the method. We have measured the performance of the proposed system and compared it to the one obtained by using only the sampled window approach. We present detailed results of such tests using a benchmark dataset. Our results show that the system can operate in real time at about a 10-fps frame rate. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Sensor Fusion Based Model for Collision Free Mobile Robot Navigation
Sensors 2016, 16(1), 24; https://doi.org/10.3390/s16010024 - 26 Dec 2015
Cited by 19 | Viewed by 4249
Abstract
Autonomous mobile robots have become a very popular and interesting topic in the last decade. They are equipped with various types of sensors, such as GPS, camera, infrared and ultrasonic sensors, which are used to observe the surrounding environment. However, these sensors sometimes fail or produce inaccurate readings. Sensor fusion helps to solve this problem and enhance overall performance. This paper presents collision-free mobile robot navigation based on a fuzzy logic fusion model. Eight distance sensors and a range-finder camera are used for the collision avoidance approach, while three ground sensors are used for the line- or path-following approach. The fuzzy system is composed of nine inputs (the eight distance sensors and the camera), two outputs (the left and right velocities of the mobile robot’s wheels), and 24 fuzzy rules for the robot’s movement. The Webots Pro simulator is used for modeling the environment and the robot. The proposed methodology, which combines fuzzy-logic-based collision avoidance with line following, has been implemented and tested through simulation and real-time experiments. Various scenarios are presented with static and dynamic obstacles, using one robot and two robots, while avoiding obstacles of different shapes and sizes. Full article
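The fuzzy fusion idea above can be sketched in a few lines. The sensor layout, membership breakpoints and rule set below are illustrative assumptions, not the paper's actual nine-input, 24-rule base: each rule's firing strength weights a crisp wheel-speed consequent, and the outputs are combined by weighted-average defuzzification.

```python
# Minimal fuzzy-fusion sketch for differential-drive obstacle avoidance.
# Sensor layout, membership breakpoints and the four-rule base are
# illustrative assumptions, not the paper's fuzzy system.

def near(d, lo=0.05, hi=0.30):
    """Shoulder membership: 1 when very close, 0 when far, linear between."""
    if d <= lo:
        return 1.0
    if d >= hi:
        return 0.0
    return (hi - d) / (hi - lo)

def fuzzy_velocities(left_dist, front_dist, right_dist, v_max=0.5):
    """Combine three distance readings into (v_left, v_right) wheel
    speeds by weighted-average defuzzification over the rule base."""
    n_l, n_f, n_r = near(left_dist), near(front_dist), near(right_dist)
    # (firing strength, crisp consequent for the two wheels)
    rules = [
        (min(1 - n_l, 1 - n_f, 1 - n_r), (v_max, v_max)),          # all clear: straight
        (n_f,                            (v_max, -v_max)),          # wall ahead: spin right
        (min(n_l, 1 - n_f),              (v_max, 0.3 * v_max)),     # obstacle left: bear right
        (min(n_r, 1 - n_f),              (0.3 * v_max, v_max)),     # obstacle right: bear left
    ]
    total = sum(w for w, _ in rules) or 1.0
    v_l = sum(w * out[0] for w, out in rules) / total
    v_r = sum(w * out[1] for w, out in rules) / total
    return v_l, v_r
```

With nothing nearby the robot drives straight at full speed; a close frontal reading drives the wheels in opposite directions to turn away.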
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
A TSR Visual Servoing System Based on a Novel Dynamic Template Matching Method
Sensors 2015, 15(12), 32152-32167; https://doi.org/10.3390/s151229884 - 21 Dec 2015
Cited by 18 | Viewed by 2960
Abstract
The so-called Tethered Space Robot (TSR) is a novel active space debris removal system. To solve its problem of non-cooperative target recognition during short-distance rendezvous, this paper presents a framework for a real-time visual servoing system using a non-calibrated monocular CMOS (Complementary Metal Oxide Semiconductor) camera. When a small template is matched against a large scene, mismatches are common, so a novel template matching algorithm is presented to address this problem. First, the matching algorithm uses a hollow annulus structure, in the spirit of the FAST (Features from Accelerated Segment Test) detector, which makes the method rotation-invariant; the hollow structure also decreases the accumulated deviation. The matching function combines grey-level and gradient differences between the template and the object image, which reduces the effects of illumination changes and noise. Then, a dynamic template update strategy is designed to avoid tracking failures caused by wrong matches or occlusion. Finally, the system incorporates a least-squares integrated predictor, enabling online tracking in complex circumstances. Ground experiments show that the proposed algorithm reduces the computational burden and improves matching accuracy. Full article
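The grey-plus-gradient annulus matching idea can be illustrated with a minimal sketch. The ring radius, sample count and weight `lam` are assumptions, and the paper's hollow-annulus structure and rotation-invariance machinery are more elaborate; this only shows a combined grey/gradient score evaluated over ring samples:

```python
import numpy as np

# Illustrative annulus-based template matching: sample pixels on a ring
# around each candidate centre and score by combined grey-level and
# gradient-magnitude differences. Radius, sample count and `lam` are
# assumed values, not the paper's.

def ring_offsets(radius, n=16):
    ang = np.linspace(0, 2 * np.pi, n, endpoint=False)
    return (np.round(radius * np.sin(ang)).astype(int),
            np.round(radius * np.cos(ang)).astype(int))

def match_template(scene, template, radius=3, lam=0.5):
    gy_s, gx_s = np.gradient(scene.astype(float))
    gy_t, gx_t = np.gradient(template.astype(float))
    th, tw = template.shape
    cy, cx = th // 2, tw // 2
    dy, dx = ring_offsets(radius)
    t_grey = template[cy + dy, cx + dx].astype(float)
    t_grad = np.hypot(gx_t, gy_t)[cy + dy, cx + dx]
    g_s = np.hypot(gx_s, gy_s)
    best, best_pos = np.inf, None
    H, W = scene.shape
    for y in range(radius, H - radius):
        for x in range(radius, W - radius):
            s_grey = scene[y + dy, x + dx].astype(float)
            score = (np.abs(s_grey - t_grey).sum()
                     + lam * np.abs(g_s[y + dy, x + dx] - t_grad).sum())
            if score < best:
                best, best_pos = score, (y, x)
    return best_pos  # (row, col) of the best-matching centre
```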
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Multidirectional Image Sensing for Microscopy Based on a Rotatable Robot
Sensors 2015, 15(12), 31566-31580; https://doi.org/10.3390/s151229872 - 15 Dec 2015
Cited by 14 | Viewed by 2875
Abstract
Image sensing at small scales is essential in many fields, including micro-sample observation, defect inspection and material characterization. However, multi-directional imaging of micro objects remains very challenging due to the limited field of view (FOV) of microscopes. This paper reports a novel approach for multi-directional image sensing in microscopes based on a rotatable robot. First, a robot with endless rotation ability is designed and integrated with the microscope. Then, the micro object is aligned to the rotation axis of the robot automatically using the proposed forward-backward alignment strategy. After that, multi-directional images of the sample can be obtained by rotating the robot through one revolution under the microscope. To demonstrate the versatility of this approach, we view various types of micro samples from multiple directions in both optical and scanning electron microscopy, and panoramic images of the samples are produced as well. The proposed method paves a new way for microscopy image sensing, and we believe it could have significant impact in many fields, especially for sample detection, manipulation and characterization at small scales. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Bimodal Biometric Verification Using the Fusion of Palmprint and Infrared Palm-Dorsum Vein Images
Sensors 2015, 15(12), 31339-31361; https://doi.org/10.3390/s151229856 - 12 Dec 2015
Cited by 8 | Viewed by 3473
Abstract
In this paper, we present a reliable and robust biometric verification method based on bimodal physiological characteristics of palms: the palmprint and palm-dorsum vein patterns. The proposed method consists of five steps: (1) automatically aligning and cropping the same region of interest from different palm or palm-dorsum images; (2) applying the digital wavelet transform and inverse wavelet transform to fuse the palmprint and vein pattern images; (3) extracting line-like features (LLFs) from the fused image; (4) obtaining multiresolution representations of the LLFs by using a multiresolution filter; and (5) using a support vector machine to verify the multiresolution representations of the LLFs. The proposed method has four advantages. First, both modal images are captured in peg-free scenarios, which improves the user-friendliness of the verification device. Second, the palmprint and vein pattern images are captured using a low-resolution digital scanner and an infrared (IR) camera; the use of low-resolution images results in a smaller database, and because the vein pattern images are captured in the invisible IR spectrum, antispoofing is improved. Third, since the physiological characteristics of palmprint and vein pattern images differ, a hybrid fusing rule can be introduced to fuse the decomposition coefficients of different bands; the proposed method fuses decomposition coefficients at different decomposition levels, with different image sizes, captured from different sensor devices. Finally, the proposed method operates automatically, so no parameters need to be set manually. Three thousand palmprint images and 3000 vein pattern images were collected from 100 volunteers to verify the validity of the proposed method. The results show a false rejection rate of 1.20% and a false acceptance rate of 1.56%, demonstrating the validity and excellent performance of the proposed method compared with other methods. Full article
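Step (2), wavelet-domain image fusion, can be sketched with a hand-rolled one-level 2-D Haar transform. The max-magnitude rule for the detail bands and averaging for the approximation band are common illustrative choices, not the paper's hybrid multi-level rule:

```python
import numpy as np

# One-level 2-D Haar fusion sketch. The per-band rules (average the
# approximation, keep the larger-magnitude detail coefficient) are
# illustrative assumptions, not the paper's hybrid fusing rule.

def haar2d(img):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def ihaar2d(LL, LH, HL, HH):
    """Inverse of haar2d (perfect reconstruction)."""
    H, W = LL.shape
    out = np.empty((2 * H, 2 * W))
    out[0::2, 0::2] = (LL + LH + HL + HH) / 2
    out[0::2, 1::2] = (LL - LH + HL - HH) / 2
    out[1::2, 0::2] = (LL + LH - HL - HH) / 2
    out[1::2, 1::2] = (LL - LH - HL + HH) / 2
    return out

def fuse(img1, img2):
    """Fuse two equal, even-sized images: average the approximation
    band, take the larger-magnitude coefficient in each detail band."""
    c1, c2 = haar2d(img1), haar2d(img2)
    fused = [(c1[0] + c2[0]) / 2]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2d(*fused)
```

Fusing an image with itself reproduces it exactly, which is a quick sanity check that the transform pair is lossless.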
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Joint Temperature-Lasing Mode Compensation for Time-of-Flight LiDAR Sensors
Sensors 2015, 15(12), 31205-31223; https://doi.org/10.3390/s151229854 - 11 Dec 2015
Cited by 4 | Viewed by 2950
Abstract
We propose an expectation maximization (EM) strategy for improving the precision of time-of-flight (ToF) light detection and ranging (LiDAR) scanners. The novel algorithm statistically accounts not only for the bias induced by temperature changes in the laser diode, but also for the multi-modality of the measurement noise that is induced by mode-hopping effects. Instrumental to the proposed EM algorithm, we also describe a general thermal dynamics model that can be learned either from input-output data alone or from a combination of simple temperature experiments and information from the laser’s datasheet. We test the strategy on a SICK LMS 200 device and improve its average absolute error by a factor of three. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Tracking Multiple Video Targets with an Improved GM-PHD Tracker
Sensors 2015, 15(12), 30240-30260; https://doi.org/10.3390/s151229794 - 03 Dec 2015
Cited by 24 | Viewed by 3264
Abstract
Tracking multiple moving targets in video plays an important role in many vision-based robotic applications. In this paper, we propose an improved Gaussian mixture probability hypothesis density (GM-PHD) tracker with weight penalization to effectively and accurately track multiple moving targets in video. First, an entropy-based birth intensity estimation method is incorporated to eliminate the false positives caused by noisy video data. Then, a weight-penalized method with multi-feature fusion is proposed to accurately track targets moving in close proximity. For targets without occlusion, a weight matrix containing all updated weights between the predicted target states and the measurements is constructed, and a simple but effective method based on the total weight and the predicted target state is proposed to search for ambiguous weights in the weight matrix. The ambiguous weights are then penalized according to the fused target features, which include spatial-colour appearance, histogram of oriented gradients and target area, and are further re-normalized to form a new weight matrix. With this new weight matrix, the tracker can correctly track targets moving in close proximity without occlusion. For occluded targets, a robust game-theoretical method is used. Finally, experiments conducted on various video scenarios validate the effectiveness of the proposed penalization method and show the superior performance of our tracker over the state of the art. Full article
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Vision Sensor-Based Road Detection for Field Robot Navigation
Sensors 2015, 15(11), 29594-29617; https://doi.org/10.3390/s151129594 - 24 Nov 2015
Cited by 15 | Viewed by 3310
Abstract
Road detection is an essential component of field robot navigation systems. Vision sensors play an important role in road detection because of their great potential for environmental perception. In this paper, we propose a hierarchical vision sensor-based method for robust road detection in challenging road scenes. More specifically, for a given road image captured by an on-board vision sensor, we introduce a multiple population genetic algorithm (MPGA)-based approach for efficient road vanishing point detection. Superpixel-level seeds are then selected in an unsupervised way using a clustering strategy. Following the GrowCut framework, the seeds proliferate and iteratively try to occupy their neighbors; after convergence, the initial road segment is obtained. Finally, in order to achieve a globally-consistent road segment, the initial road segment is refined using the conditional random field (CRF) framework, which integrates high-level information into road detection. We perform several experiments to evaluate the common performance, scale sensitivity and noise sensitivity of the proposed method. The experimental results demonstrate that the proposed method exhibits high robustness compared to the state of the art. Full article
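The seed-proliferation step can be illustrated with a minimal GrowCut iteration on a scalar feature map. The 4-neighbourhood, the linear attack function and the pixel-level (rather than superpixel-level) cells are simplifying assumptions:

```python
import numpy as np

# Minimal GrowCut sketch: labelled cells "attack" their neighbours with
# a strength attenuated by feature dissimilarity; a cell is conquered
# when an attack exceeds its current defence strength. Pixel-level
# cells and the linear attenuation g are illustrative simplifications.

def growcut(feat, seeds, iters=50):
    """feat: HxW feature map; seeds: 0 = unlabeled, 1 = road, 2 = background.
    Returns the final label map."""
    H, W = feat.shape
    label = seeds.copy()
    strength = (seeds > 0).astype(float)   # seeds start at full strength
    fmax = np.ptp(feat) or 1.0
    for _ in range(iters):
        new_label, new_strength = label.copy(), strength.copy()
        for y in range(H):
            for x in range(W):
                for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and label[ny, nx]:
                        g = 1.0 - abs(feat[ny, nx] - feat[y, x]) / fmax
                        attack = g * strength[ny, nx]
                        if attack > new_strength[y, x]:
                            new_strength[y, x] = attack
                            new_label[y, x] = label[ny, nx]
        label, strength = new_label, new_strength
    return label
```

On a map with two homogeneous regions, each seed floods its own region but cannot cross the feature discontinuity, since the attack strength vanishes there.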
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
A Neural Network-Based Gait Phase Classification Method Using Sensors Equipped on Lower Limb Exoskeleton Robots
Sensors 2015, 15(11), 27738-27759; https://doi.org/10.3390/s151127738 - 30 Oct 2015
Cited by 54 | Viewed by 4947
Abstract
An exact classification of different gait phases is essential for controlling exoskeleton robots and detecting the intentions of users. We propose a gait phase classification method based on neural networks using sensor signals from lower limb exoskeleton robots. In such robots, foot sensors with force-sensing resistors are commonly used to classify gait phases. We describe classifiers that use the orientation of each lower limb segment and the angular velocities of the joints to output the current gait phase. Experiments to obtain the input signals and desired outputs for the learning and validation process are conducted, and two neural network methods (a multilayer perceptron and a nonlinear autoregressive model with external inputs (NARX)) are used to develop an optimal classifier. Offline and online evaluations using four criteria are used to compare the performance of the classifiers. The proposed NARX-based method exhibits sufficiently good performance to replace foot sensors as a means of classifying gait phases. Full article
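A NARX model is driven by tapped delay lines over the exogenous inputs and past outputs. The sketch below shows how such regressor/target pairs might be assembled from sensor features and phase labels; the window sizes `nu` and `ny` are illustrative, not the paper's:

```python
# Sketch of NARX training-data construction: each regressor stacks the
# current and `nu` past sensor-feature vectors with `ny` past phase
# labels; the target is the current phase label. Window sizes are
# assumed values, not the paper's configuration.

def narx_regressors(u, y, nu=2, ny=2):
    """u: list of per-timestep feature vectors (e.g. segment orientations
    and joint angular velocities); y: list of gait-phase labels.
    Returns (X, T) ready for supervised training."""
    X, T = [], []
    start = max(nu, ny)
    for t in range(start, len(u)):
        past_u = [v for k in range(nu + 1) for v in u[t - k]]  # u[t] .. u[t-nu]
        past_y = [y[t - k] for k in range(1, ny + 1)]          # y[t-1] .. y[t-ny]
        X.append(past_u + past_y)
        T.append(y[t])
    return X, T
```

In closed-loop (online) operation the past labels in the regressor are replaced by the network's own previous predictions.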
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
RGB-D SLAM Combining Visual Odometry and Extended Information Filter
Sensors 2015, 15(8), 18742-18766; https://doi.org/10.3390/s150818742 - 30 Jul 2015
Cited by 3 | Viewed by 3815
Abstract
In this paper, we present a novel RGB-D SLAM system based on visual odometry and an extended information filter, which does not require any other sensors or odometry. In contrast to graph optimization approaches, it is more suitable for online applications. A visual dead reckoning algorithm based on visual residuals is devised to estimate the motion control input. In addition, we use a novel descriptor called the binary robust appearance and normals descriptor (BRAND) to extract features from the RGB-D frame and use them as landmarks. Furthermore, by considering both the 3D positions and the BRAND descriptors of the landmarks, our observation model avoids explicit data association between the observations and the map by marginalizing the observation likelihood over all possible associations. Experimental validation is provided, comparing the proposed RGB-D SLAM algorithm with plain RGB-D visual odometry and a graph-based RGB-D SLAM algorithm on a publicly-available RGB-D dataset. The results demonstrate that our system is faster than the graph-based RGB-D SLAM algorithm. Full article
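The information-filter half of such a system admits a compact sketch: for a (linearised) observation, the measurement update is purely additive in information form, which is part of what makes the filter attractive for online use. The linear measurement model below is a simplification of the full SLAM observation model:

```python
import numpy as np

# Information-filter measurement update sketch for a linear observation
# z = H x + v, v ~ N(0, R). The state is kept as the information vector
# eta = Lam @ mean and information matrix Lam (inverse covariance).

def eif_update(eta, Lam, z, H, R):
    """Additive information-form update; no matrix inversion of the
    (possibly large) state covariance is needed."""
    Rinv = np.linalg.inv(R)
    Lam_new = Lam + H.T @ Rinv @ H
    eta_new = eta + H.T @ Rinv @ z
    return eta_new, Lam_new

def mean(eta, Lam):
    """Recover the state mean from the information form when needed."""
    return np.linalg.solve(Lam, eta)
```

For a standard-normal prior and a unit-noise direct observation, the posterior mean lands halfway between prior mean and measurement, as expected.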
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
A Cross Structured Light Sensor and Stripe Segmentation Method for Visual Tracking of a Wall Climbing Robot
Sensors 2015, 15(6), 13725-13751; https://doi.org/10.3390/s150613725 - 11 Jun 2015
Cited by 11 | Viewed by 4071
Abstract
In non-destructive testing (NDT) of metal welds, weld line tracking is usually performed outdoors, where structured light sources are always disturbed by various noise sources, such as sunlight, shadows, and reflections from the weld line surface. In this paper, we design a cross structured light (CSL) sensor to detect the weld line and propose a robust laser stripe segmentation algorithm to overcome the noise in structured light images. An adaptive monochromatic space is applied to preprocess the image against ambient noise. In the monochromatic image, the laser stripe is recovered as a multichannel signal by minimum entropy deconvolution. Lastly, the stripe centre points are extracted from the image. In experiments, the CSL sensor and the proposed algorithm are applied to guide a wall climbing robot inspecting the weld line of a wind power tower. The experimental results show that the CSL sensor can capture the 3D information of the welds with high accuracy, and that the proposed algorithm contributes to both weld line inspection and robot navigation. Full article
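Stripe centre extraction is commonly done with a per-column grey-gravity method; a minimal sketch follows. The fixed threshold and the assumption of a roughly horizontal stripe are illustrative, and the paper's minimum-entropy-deconvolution recovery is not reproduced:

```python
import numpy as np

# Per-column grey-gravity stripe-centre extraction sketch: each column's
# centre row is the intensity-weighted mean of pixels above a threshold.
# The threshold value is an assumption.

def stripe_centres(img, thresh=50.0):
    """img: greyscale image with a bright, roughly horizontal stripe.
    Returns a list of (column, sub-pixel centre row) pairs."""
    centres = []
    for x in range(img.shape[1]):
        col = img[:, x].astype(float)
        mask = col > thresh
        if mask.any():
            rows = np.nonzero(mask)[0]
            w = col[rows]
            centres.append((x, float((rows * w).sum() / w.sum())))
    return centres
```

The weighted mean gives sub-pixel accuracy when the stripe's intensity profile is roughly symmetric about its centre.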
(This article belongs to the Special Issue Sensors for Robots)

Open Access Article
Multiple Leader Candidate and Competitive Position Allocation for Robust Formation against Member Robot Faults
Sensors 2015, 15(5), 10771-10790; https://doi.org/10.3390/s150510771 - 06 May 2015
Cited by 12 | Viewed by 2864
Abstract
This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm that are applicable to various applications, including environmental sensing. Unlike previous formation structures, such as virtual-leader and actual-leader structures with rigid or optimization-based position allocation, a formation employing the proposed MLC structure and CPA algorithm is robust against faults (or the disappearance) of member robots and reduces the overall cost. In the MLC structure, the leader of the entire system is chosen from among leader candidate robots. The CPA algorithm is a decentralized position allocation algorithm that assigns robots to the vertices of the formation via competition among adjacent robots. Numerical simulations and experimental results are included to show the feasibility and performance of a multiple-robot system employing the proposed MLC structure and CPA algorithm. Full article
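A toy version of competitive slot allocation can be sketched as a greedy global competition for formation vertices. This centralised greedy pass is only an illustration; the actual CPA algorithm is decentralized and handles member-robot faults:

```python
import math

# Greedy competition sketch: repeatedly grant the globally closest
# (robot, vertex) pair until every robot (or vertex) is assigned.
# This centralised pass only mimics the outcome of robots competing
# for slots; it is not the paper's decentralized CPA algorithm.

def competitive_allocation(robots, vertices):
    """robots, vertices: lists of 2-D positions.
    Returns {robot_index: vertex_index}."""
    pairs = sorted(
        ((math.dist(r, v), i, j)
         for i, r in enumerate(robots)
         for j, v in enumerate(vertices)),
        key=lambda t: t[0])
    assigned, taken, out = set(), set(), {}
    for _, i, j in pairs:
        if i not in assigned and j not in taken:
            assigned.add(i)
            taken.add(j)
            out[i] = j
    return out
```

If a member robot disappears, re-running the competition over the surviving robots reassigns the vacated vertex, which is the kind of fault tolerance the MLC/CPA formation aims for.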
(This article belongs to the Special Issue Sensors for Robots)
