
Special Issue "Autonomous Mobile Robots: Real-Time Sensing, Navigation, and Control"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: 15 October 2021.

Special Issue Editor

Dr. Carlos J. Pérez Del Pulgar
Website
Guest Editor
Department of Systems Engineering and Automation, Universidad de Málaga, Andalucía Tech, 29071 Málaga, Spain
Interests: space robotics; machine learning; path and motion planning; control systems for space

Special Issue Information

Dear Colleagues,

Autonomous mobile robots are receiving increasing attention because they can be used in applications such as precision agriculture, field robotics, search and rescue, and planetary exploration. Sensors, together with navigation and control algorithms, improve autonomy in several ways. On the one hand, exteroceptive sensors such as LiDARs, stereo cameras, ultrasonic devices, and IR cameras provide mobile robots with rich information about the surrounding environment, which supports robot navigation in combination with path and motion planning algorithms. On the other hand, proprioceptive sensors, such as current sensors, IMUs, vibration sensors, and wheel sinkage sensors, improve the robot's awareness of the surface it is traversing. Combining both kinds of sensors with artificial intelligence algorithms would further improve the autonomous navigation and control of robots in the aforementioned applications.

Therefore, this Special Issue includes but is not limited to the following topics:

  • Novel perception systems for robot navigation and localization;
  • Novel sensors for robot localization;
  • Robot localization without GNSS;
  • Novel proprioceptive sensors onboard mobile robots;
  • Path planning for mobile robots;
  • Motion planning for mobile manipulators;
  • Field tests with autonomous mobile robots;
  • Applications of mobile robots.

Dr. Carlos J. Pérez Del Pulgar
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, go to the submission form to submit a manuscript. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • mobile robots
  • rovers
  • exteroceptive sensors
  • proprioceptive sensors
  • path planning
  • motion planning
  • field tests

Published Papers (4 papers)


Research


Open Access Article
A Compressed Sensing Approach for Multiple Obstacle Localisation Using Sonar Sensors in Air
Sensors 2020, 20(19), 5511; https://doi.org/10.3390/s20195511 - 26 Sep 2020
Abstract
Methods for autonomous navigation systems using sonars in air traditionally use the time-of-flight technique for obstacle detection and environment mapping. However, this technique suffers from constructive and destructive interference of ultrasonic reflections from multiple obstacles in the environment, requiring several acquisitions for proper mapping. This paper presents a novel approach for obstacle detection and localisation using inverse problems and compressed sensing concepts. Experiments were conducted with multiple obstacles present in a controlled environment using a hardware platform with four transducers, which was specially designed for sending, receiving and acquiring raw ultrasonic signals. A comparison between the performance of compressed sensing using Orthogonal Matching Pursuit and two traditional image reconstruction methods was conducted. The reconstructed 2D images representing the cross-section of the sensed environment were quantitatively assessed, showing promising results for robotic mapping tasks using compressed sensing.
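The reconstruction step in this kind of approach casts obstacle localisation as recovering a sparse reflectivity map from a small number of ultrasonic measurements. As a rough illustration of the solver the paper compares against traditional methods, the sketch below is a generic Orthogonal Matching Pursuit implementation in Python/NumPy; the measurement matrix A, the measurement vector y, and the sparsity level are placeholders and do not correspond to the authors' acoustic model.

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal Matching Pursuit: recover a sparse x such that y ≈ A @ x.

    A        -- measurement/dictionary matrix of shape (m, n)
    y        -- observed measurements of shape (m,)
    sparsity -- maximum number of non-zero entries to recover
    """
    m, n = A.shape
    residual = y.copy()
    support = []                      # indices of selected dictionary atoms
    x_hat = np.zeros(n)

    for _ in range(sparsity):
        # Pick the column most correlated with the current residual.
        correlations = np.abs(A.T @ residual)
        correlations[support] = 0.0   # do not reselect an already chosen atom
        k = int(np.argmax(correlations))
        support.append(k)

        # Re-fit the signal on the selected support by least squares.
        A_s = A[:, support]
        coeffs, *_ = np.linalg.lstsq(A_s, y, rcond=None)
        residual = y - A_s @ coeffs

    x_hat[support] = coeffs
    return x_hat
```

In a 2D mapping setting, x would be a discretised reflectivity grid stacked into a vector and A would encode the round-trip delays from each transducer to each grid cell; both are problem-specific and not reproduced here.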

Open Access Article
Representations and Benchmarking of Modern Visual SLAM Systems
Sensors 2020, 20(9), 2572; https://doi.org/10.3390/s20092572 - 30 Apr 2020
Abstract
Simultaneous Localisation And Mapping (SLAM) has long been recognised as a core problem to be solved within countless emerging mobile applications that require intelligent interaction or navigation in an environment. Classical solutions to the problem primarily aim at localisation and reconstruction of a geometric 3D model of the scene. More recently, the community increasingly investigates the development of Spatial Artificial Intelligence (Spatial AI), an evolutionary paradigm pursuing a simultaneous recovery of object-level composition and semantic annotations of the recovered 3D model. Several interesting approaches have already been presented, producing object-level maps with both geometric and semantic properties rather than just accurate and robust localisation performance. As such, they require much broader ground truth information for validation purposes. We discuss the structure of the representations and optimisation problems involved in Spatial AI, and propose new synthetic datasets that, for the first time, include accurate ground truth information about the scene composition as well as individual object shapes and poses. We furthermore propose evaluation metrics for all aspects of such joint geometric-semantic representations and apply them to a new semantic SLAM framework. It is our hope that the introduction of these datasets and proper evaluation metrics will be instrumental in the evaluation of current and future Spatial AI systems and as such contribute substantially to the overall research progress on this important topic.
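The metrics proposed in the paper target joint geometric-semantic representations and are not reproduced here. As a baseline point of comparison for the geometric side of SLAM benchmarking, the sketch below computes the widely used absolute trajectory error (ATE RMSE) after a rigid alignment of estimated and ground-truth camera positions; it assumes the two trajectories are already associated frame by frame.

```python
import numpy as np

def ate_rmse(est_xyz, gt_xyz):
    """Absolute trajectory error (RMSE) after a rigid (Kabsch/Umeyama-style) alignment.

    est_xyz, gt_xyz -- (N, 3) arrays of matched camera positions.
    """
    # Centre both trajectories.
    mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
    E, G = est_xyz - mu_e, gt_xyz - mu_g

    # Optimal rotation from the SVD of the cross-covariance matrix.
    U, _, Vt = np.linalg.svd(E.T @ G)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = Vt.T @ S @ U.T                # maps the estimated frame onto ground truth
    t = mu_g - R @ mu_e

    aligned = est_xyz @ R.T + t
    errors = np.linalg.norm(aligned - gt_xyz, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))
```

In practice, the estimated and ground-truth poses would be associated by timestamp before calling this function.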

Review


Open Access Review
A Review of Visual-LiDAR Fusion based Simultaneous Localization and Mapping
Sensors 2020, 20(7), 2068; https://doi.org/10.3390/s20072068 - 7 Apr 2020
Cited by 8
Abstract
Autonomous navigation requires both a precise and robust mapping and localization solution. In this context, Simultaneous Localization and Mapping (SLAM) is a very well-suited solution. SLAM is used for many applications including mobile robotics, self-driving cars, unmanned aerial vehicles, or autonomous underwater vehicles. In these domains, both visual and visual-IMU SLAM are well studied, and improvements are regularly proposed in the literature. However, LiDAR-SLAM techniques seem to be relatively the same as ten or twenty years ago. Moreover, few research works focus on vision-LiDAR approaches, whereas such a fusion would have many advantages. Indeed, hybridized solutions offer improvements in the performance of SLAM, especially with respect to aggressive motion, lack of light, or lack of visual features. This study provides a comprehensive survey on visual-LiDAR SLAM. After a summary of the basic idea of SLAM and its implementation, we give a complete review of the state-of-the-art of SLAM research, focusing on solutions using vision, LiDAR, and a sensor fusion of both modalities.
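One common building block of vision-LiDAR hybridisation is projecting LiDAR points into the camera image so that visual features can be assigned metric depth. The sketch below is a generic illustration of that step, assuming a known camera intrinsic matrix K and a LiDAR-to-camera extrinsic transform T_cam_lidar; it is not taken from any specific system covered by the review.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K, image_shape):
    """Project LiDAR points into a camera image to attach depth to pixels.

    points_lidar -- (N, 3) points in the LiDAR frame
    T_cam_lidar  -- (4, 4) rigid transform from the LiDAR frame to the camera frame
    K            -- (3, 3) camera intrinsic matrix
    image_shape  -- (height, width) of the image
    Returns pixel coordinates (M, 2) and their depths (M,) for points in view.
    """
    # Transform points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the camera.
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]

    # Perspective projection with the pinhole model.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]

    h, w = image_shape
    in_view = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[in_view], pts_cam[in_view, 2]
```

The depths returned for in-view pixels can then seed or constrain visual feature triangulation, which is one of the advantages of hybrid pipelines discussed in the survey.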

Other


Open Access Letter
Reactive Navigation on Natural Environments by Continuous Classification of Ground Traversability
Sensors 2020, 20(22), 6423; https://doi.org/10.3390/s20226423 - 10 Nov 2020
Abstract
Reactivity is a key component for autonomous vehicles navigating on natural terrains in order to safely avoid unknown obstacles. To this end, it is necessary to continuously assess traversability by processing on-board sensor data. This paper describes the case study of mobile robot Andabata that classifies traversable points from 3D laser scans acquired in motion of its vicinity to build 2D local traversability maps. Realistic robotic simulations with Gazebo were employed to appropriately adjust reactive behaviors. As a result, successful navigation tests with Andabata using the robot operating system (ROS) were performed on natural environments at low speeds.
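Andabata's actual classifier is described in the paper; as a simplified illustration of turning 3D scan points into a 2D local traversability map, the sketch below bins points into a horizontal grid and marks a cell traversable when the height spread of its points stays below a step threshold. The grid resolution, map extent, and threshold values are illustrative assumptions, not the robot's parameters.

```python
import numpy as np

def traversability_map(points, cell_size=0.2, extent=10.0, max_step=0.15):
    """Build a 2D local traversability grid from a 3D point cloud.

    A cell is marked traversable (1) when the height spread of its points stays
    below max_step, non-traversable (0) otherwise; empty cells stay unknown (-1).
    points    -- (N, 3) scan points in the robot frame (x forward, y left, z up)
    cell_size -- grid resolution in metres
    extent    -- half-size of the square local map in metres
    max_step  -- maximum tolerated height variation within a cell, in metres
    """
    n_cells = int(2 * extent / cell_size)
    grid = -np.ones((n_cells, n_cells), dtype=np.int8)

    # Map x/y coordinates to grid indices and keep points inside the map.
    idx = np.floor((points[:, :2] + extent) / cell_size).astype(int)
    valid = np.all((idx >= 0) & (idx < n_cells), axis=1)
    idx, z = idx[valid], points[valid, 2]

    # Classify each occupied cell from the height spread of its points.
    for (i, j) in {tuple(c) for c in idx}:
        cell_z = z[(idx[:, 0] == i) & (idx[:, 1] == j)]
        spread = cell_z.max() - cell_z.min()
        grid[i, j] = 1 if spread <= max_step else 0
    return grid
```

A reactive controller would then steer towards free cells in this local grid, replanning as each new scan arrives.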
