Special Issue "Deep Learning Techniques for the Mapping and Localization of Mobile Robotics"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: 30 November 2020.

Special Issue Editors

Prof. Dr. Oscar Reinoso Garcia
Guest Editor
Department of Systems and Automation Engineering, Universidad Miguel Hernández, Avinguda de la Universitat d'Elx, s/n, 03202 Elche, Alicante, Spain
Interests: computer vision; robotics; cooperative robotics
Prof. Dr. Luis Payá
Guest Editor
System Engineering and Automation Department, Miguel Hernandez University, Elche (Alicante) 03202, Spain
Interests: computer vision; omnidirectional imaging; appearance descriptors; image processing; mobile robotics; environment modeling; visual localization

Special Issue Information

Dear colleagues,

Over the past few years, deep learning techniques have made it possible to tackle a number of problems in scene recognition and classification from visual information and other sensors, with promising results. More recently, their ability to analyze and detect patterns in large amounts of data has broadened their applications, and they can be used to solve a variety of problems in robotics, such as mapping and localization. Deep learning techniques may constitute a powerful alternative either when the main sensory system is a vision system that provides enormous amounts of information, or when this visual information is merged with data provided by other types of sensors, such as range sensors.

In this sense, different techniques have recently emerged, either supervised or unsupervised, such as convolutional neural networks (CNNs), autoencoders, recurrent neural networks, generative adversarial networks, and Siamese networks. These techniques have shown excellent performance in a variety of tasks, usually related to classification. Beyond classification, deep learning techniques also hold great promise for the mapping and/or localization of any type of mobile robot (drones, ground vehicles, underwater robots, etc.). Recognizing, identifying, and modeling scenarios from the information provided by multiple sensory systems is essential in order to carry out autonomous navigation tasks with mobile robots. In this field, deep learning techniques provide a novel alternative for describing the environments in which the robot operates.

The aim of this Special Issue is to present different approaches and applications that address the problem of mapping and/or localization of a mobile robot by means of deep learning techniques. State-of-the-art reviews of a specific problem in these fields, and of how it has been addressed through deep learning, are also welcome.

This Special Issue invites contributions on the following topics (the list is not exhaustive):

  • Place recognition and/or image retrieval by means of deep learning techniques.
  • Metric, topological, or hybrid mapping through deep learning techniques.
  • Modeling environments through long short-term memory (LSTM) networks.
  • Long-term mapping and localization.
  • Computer vision and deep learning in mapping and localization.
  • Fusion of information in multi-sensor systems.
  • Movement estimation using deep learning techniques.
  • Human–robot interaction in mapping and localization.
  • Navigation in social and/or crowded environments.
  • Path following in challenging environments.
  • Simultaneous localization and mapping (SLAM).
  • Visual odometry and trajectory estimation.
  • Loop closure detection.
  • Deep learning and autonomous driving.

Prof. Dr. Oscar Reinoso
Prof. Dr. Luis Payá
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (1 paper)

Open Access Article
An End-to-End Trainable Multi-Column CNN for Scene Recognition in Extremely Changing Environment
Sensors 2020, 20(6), 1556; 11 Mar 2020
Scene recognition is an essential part of the vision-based robot navigation domain. The successful application of deep learning technology has triggered more extensive preliminary studies on scene recognition, all of which use features extracted from networks trained for recognition tasks. In this paper, we interpret scene recognition as a region-based image retrieval problem and present a novel approach for scene recognition with an end-to-end trainable multi-column convolutional neural network (MCNN) architecture. The proposed MCNN utilizes filters with receptive fields of different sizes to achieve multi-level and multi-layer image perception, and consists of three components: front-end, middle-end, and back-end. The first seven layers of VGG16 are taken as the front-end for two-dimensional feature extraction, Inception-A is taken as the middle-end for deeper feature representation, and Large-Margin Softmax Loss (L-Softmax) is taken as the back-end to enhance intra-class compactness and inter-class separability. Extensive experiments have been conducted to evaluate performance by comparing the proposed network to existing state-of-the-art methods. Experimental results on three popular datasets demonstrate the robustness and accuracy of the approach. To the best of our knowledge, the presented approach has not previously been applied to scene recognition in the literature.
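The L-Softmax back-end mentioned in the abstract modifies the target-class logit by enlarging the angular margin between classes. The following is a minimal NumPy sketch of that margin formula; it is a hypothetical re-implementation for illustration (the function name and interface are our own, not the authors' code), assuming a bias-free linear classifier:

```python
import numpy as np

def l_softmax_logits(x, W, y, m=2):
    """Apply the Large-Margin Softmax (L-Softmax) margin to the
    logit of target class y.

    x: (D,) feature vector; W: (C, D) class weight matrix (no biases);
    m: integer margin (m=1 recovers the standard softmax logits).
    """
    logits = W @ x                          # standard inner-product logits
    w_norms = np.linalg.norm(W, axis=1)
    x_norm = np.linalg.norm(x)
    # angle between x and the target-class weight vector
    cos_y = np.clip(logits[y] / (w_norms[y] * x_norm + 1e-12), -1.0, 1.0)
    theta = np.arccos(cos_y)
    # psi(theta) = (-1)^k cos(m*theta) - 2k, for theta in [k*pi/m, (k+1)*pi/m]
    k = int(np.floor(theta * m / np.pi))
    psi = (-1.0) ** k * np.cos(m * theta) - 2.0 * k
    out = logits.copy()
    out[y] = w_norms[y] * x_norm * psi      # tightened target-class logit
    return out
```

Because psi(theta) <= cos(theta) for m > 1, the target logit can only shrink, which forces the training loss to pull same-class features closer to their class weight vector (intra-class compactness) and push classes further apart (inter-class separability).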