Special Issue "Intelligent Systems and Sensors for Robotics"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: 31 May 2020.

Special Issue Editors

Prof. Paolo Gastaldo
Guest Editor
Department of Electrical, Electronic, Telecommunication Engineering and Naval Architecture (DITEN), University of Genoa
Interests: machine learning; embedded systems; intelligent systems for robotics
Dr. Lin Wang
Guest Editor
School of Electronic Engineering and Computer Science, Queen Mary University of London
Interests: signal processing; machine learning; robot perception

Special Issue Information

Dear Colleagues,

The effective performance of advanced robotic systems greatly depends on two components: a sensing system that can provide valuable and accurate information about the environment, and an intelligent processing system that can properly utilize such information to improve the ability of robots to handle ever more complex tasks.

Machine learning (ML) models provide an enabling technology for such intelligent processing systems. The ability of ML to learn an inference function from data is a key strength for developing robots that are expected to become autonomous and make real-time decisions. This ability, in turn, enhances the role of sensors in empowering robotics, from industrial robotic systems to humanoid robots.

Bringing ML to embedded systems has therefore become a requirement for building the next generation of robots. At the same time, given the constraints that robotics imposes in terms of power consumption, latency, size, and cost, deploying an ML model on an embedded system poses major challenges. The primary goal is to obtain efficient inference functions that can run on resource-constrained edge devices; under this paradigm, training may in principle be delegated to a different, more powerful platform. A more demanding goal is to complete the training itself on resource-constrained devices.
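
As an illustration of this paradigm, the minimal sketch below trains a small network on a powerful platform and converts it into a compact, quantized model that needs only a lightweight interpreter on the embedded target. It uses TensorFlow Lite purely as an example toolchain; the model architecture, input shape, and file name are placeholders, not prescriptions of this Special Issue.

```python
import tensorflow as tf

# Train a small model on a powerful platform (workstation or server).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),                      # placeholder sensor-feature vector
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),  # placeholder decision classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x_train, y_train, epochs=10)  # training data assumed to exist elsewhere

# Convert to a compact, quantized model for the resource-constrained edge device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_model = converter.convert()

with open("robot_policy.tflite", "wb") as f:  # hypothetical file name
    f.write(tflite_model)

# On the embedded target, only the lightweight TFLite interpreter is needed for inference.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
```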

This Special Issue will focus on machine-learning-based models and methodologies for real-time decision making in advanced robotic systems. The aim is to collect the most recent advances in machine learning research for low-resource embedded systems. Accordingly, the Special Issue welcomes methods and ideas that emphasize the impact of embedded machine learning on robotic technologies.

The topics of interest for this special issue include, but are not limited to:

  • embedded machine learning
  • low-power inference engines
  • software/hardware techniques for machine learning
  • online learning on resource-constrained edge devices
  • power-efficient machine learning implementations on FPGAs
  • on-chip training of deep neural networks
  • high-performance, low-power computing for deep learning and computer vision
  • high-performance, low-power computing for deep learning-based audio and speech processing
  • intelligent sensors
  • machine learning for sensing and perception
  • machine learning for intelligent autonomous systems

Prof. Paolo Gastaldo
Dr. Lin Wang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • embedded machine learning
  • intelligent systems
  • robot sensing and perception
  • machine vision
  • autonomous robots
  • edge computing

Published Papers (3 papers)

Research

Open Access Article
Analysis and Improvements in AprilTag Based State Estimation
Sensors 2019, 19(24), 5480; https://doi.org/10.3390/s19245480 - 12 Dec 2019
Abstract
In this paper, we analyze in detail the accuracy and precision of AprilTag as a visual fiducial marker. We analyze error propagation along the two horizontal axes, together with the effect of angular rotation about the vertical axis. We identify the angular rotation of the camera (yaw angle) about its vertical axis as the primary source of error, degrading precision to the point where the marker system is not viable for sub-decimeter-precision tasks. Other factors are the distance and viewing angle of the camera with respect to the AprilTag. Based on these observations, three improvements are proposed. The first is a trigonometric correction of the yaw angle to point the camera towards the center of the tag. The second is a custom-built yaw-axis gimbal that tracks the center of the tag in real time. The third is, for the first time, a pose-indexed probabilistic sensor error model of the AprilTag, obtained through Gaussian-process regression of experimental data and validated by particle-filter tracking. Our proposed approach, which can be deployed with all three improvements, increases the system's overall accuracy and precision manifold, with a slight trade-off in execution time compared to the commonly available AprilTag library. These improvements make AprilTag suitable for use as a precision localization system in outdoor and indoor applications.
(This article belongs to the Special Issue Intelligent Systems and Sensors for Robotics)
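
The pose-indexed error model described in the abstract can be illustrated with a generic Gaussian-process regression sketch. The snippet below is not the authors' implementation: it assumes, purely for illustration, that the detection error is indexed by camera distance and viewing angle (the factors highlighted above) and uses scikit-learn's GaussianProcessRegressor on synthetic placeholder data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder observations: (distance [m], viewing angle [rad]) -> measured position error [m].
rng = np.random.default_rng(0)
poses = rng.uniform([0.5, 0.0], [3.0, np.pi / 3], size=(200, 2))
errors = 0.01 + 0.05 * poses[:, 0] * np.abs(np.sin(poses[:, 1]))  # synthetic error surface

# Smooth kernel over the pose space plus a white-noise term for measurement noise.
kernel = RBF(length_scale=[1.0, 0.5]) + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(poses, errors)

# Query the expected error (and its uncertainty) at a new camera pose; such a model
# could then feed the measurement-noise term of a particle filter.
mean, std = gp.predict(np.array([[1.5, 0.2]]), return_std=True)
```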

Open Access Article
Robust and Accurate Hand–Eye Calibration Method Based on Schur Matric Decomposition
Sensors 2019, 19(20), 4490; https://doi.org/10.3390/s19204490 - 16 Oct 2019
Abstract
To improve the accuracy and robustness of hand–eye calibration, a hand–eye calibration method based on Schur matric decomposition is proposed in this paper. The accuracy of such methods depends strongly on the quality of the observation data, so preprocessing the observation data is essential. As with traditional two-step hand–eye calibration methods, we first solve for the rotation parameters, after which the translation vector can be determined immediately. A general solution is obtained from a single observation through Schur matric decomposition, reducing the degrees of freedom from three to two. Observation-data preprocessing is one of the basic unresolved problems of hand–eye calibration; a discriminant equation for deleting outliers is deduced based on Schur matric decomposition. Finally, the preprocessing problem is solved using outlier detection, which significantly improves robustness. The proposed method is validated by both simulations and experiments. The results show that the prediction errors in rotation and translation are 0.06 arcmin and 1.01 mm, respectively, and that the proposed method performs much better in outlier detection. A minimal configuration for the unique solution is proven from a new perspective.
(This article belongs to the Special Issue Intelligent Systems and Sensors for Robotics)
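
For readers unfamiliar with the two-step scheme mentioned in the abstract, the sketch below recalls the standard hand–eye relation it builds on (the authors' Schur-decomposition solution itself is not reproduced here). A and B denote the relative motions of the camera and the robot hand, and X the unknown hand–eye transform.

```latex
% Standard hand–eye relation: A, B are relative camera and hand motions, X the unknown transform.
\[
  AX = XB, \qquad
  A = \begin{bmatrix} R_A & t_A \\ \mathbf{0}^{\top} & 1 \end{bmatrix}, \quad
  X = \begin{bmatrix} R_X & t_X \\ \mathbf{0}^{\top} & 1 \end{bmatrix}, \quad
  B = \begin{bmatrix} R_B & t_B \\ \mathbf{0}^{\top} & 1 \end{bmatrix}.
\]
% Two-step solution: solve the rotation equation first, then recover the translation.
\[
  R_A R_X = R_X R_B, \qquad (R_A - I)\, t_X = R_X\, t_B - t_A .
\]
```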

Open Access Article
Methods for Simultaneous Robot-World-Hand–Eye Calibration: A Comparative Study
Sensors 2019, 19(12), 2837; https://doi.org/10.3390/s19122837 - 25 Jun 2019
Cited by 2
Abstract
In this paper, we propose two novel methods for robot-world-hand–eye calibration and provide a comparative analysis against six state-of-the-art methods. We examine the calibration problem from two alternative geometrical interpretations, called ‘hand–eye’ and ‘robot-world-hand–eye’, respectively. The study analyzes the effect of formulating the objective function as a pose-error or a reprojection-error minimization problem. We provide three real and three simulated datasets with rendered images as part of the study. In addition, we propose a robotic-arm error-modeling approach to be used with the simulated datasets for generating realistic responses. The tests on simulated data are performed both in ideal conditions and with pseudo-realistic robotic-arm pose and visual noise. Our methods show significant improvement and robustness on many metrics in various scenarios compared with the state-of-the-art methods.
(This article belongs to the Special Issue Intelligent Systems and Sensors for Robotics)
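
As background to the two interpretations compared in this paper, the sketch below gives one widespread way of writing them; the notation is generic and is not claimed to match the authors' exact formulation.

```latex
% Hand–eye interpretation: relative motions A_i, B_i and a single unknown X.
\[
  A_i X = X B_i .
\]
% Robot-world-hand–eye interpretation: two unknowns X and Y.
\[
  A_j X = Y B_j ,
\]
% where, in one common convention, A_j is the camera-to-target transform, B_j the hand-to-base
% transform, X the hand–eye transform, and Y the robot–world transform.
```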
