Special Issue "Advances in Mobile Mapping Technologies"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: 30 June 2021.

Special Issue Editors

Prof. Dr. Ville Lehtola
Guest Editor
Department of Earth Observation Science, ITC Faculty, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands
Interests: mobile mapping; perception; LIDAR; 3D point clouds; indoor; multi-sensor systems; edge computing; autonomous navigation and positioning; computational science
Prof. Dr. Andreas Nüchter
Guest Editor
Robotics and Telematics, Julius Maximilian University of Würzburg, Am Hubland, 97074 Würzburg, Germany
Interests: 3D robot vision (3D point cloud processing); robotics and automation; telematics/geomatics; sensing and perception; semantics; machine vision; cognition; artificial intelligence
Prof. Dr. François Goulette
Guest Editor
Robotics Lab, MINES ParisTech - PSL University, 60 boulevard Saint Michel, 75272 PARIS Cedex 06, France
Interests: mobile mapping systems; 3D point cloud acquisition; 3D modeling; Lidar

Special Issue Information

Dear Colleagues,

Mobile mapping is widely applied in society, for example in asset management, fleet management, construction planning, road safety, and maintenance optimization. Yet further advances in these technologies are called for. Advances can be radical, such as changes to the prevailing paradigms in mobile mapping, or incremental, such as improvements to state-of-the-art mobile mapping methods.

With current multi-sensor systems in mobile mapping, laser-scanned data are often registered into point clouds with the aid of global navigation satellite system (GNSS) positioning or simultaneous localization and mapping (SLAM) techniques, and then labeled and colored with the aid of machine learning methods and digital camera data. These multi-sensor platforms are beginning to see further advances via the addition of multi-spectral and other sensors and via the development of machine learning techniques for processing these multi-modal data.
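As a minimal illustration of the registration step described above (a generic sketch, not any particular system's pipeline; the function name and data layout are assumed for the example), per-scan laser points can be merged into a single point cloud using externally estimated poses:

```python
# Generic sketch: merge per-scan LIDAR points into a common world frame using
# poses estimated externally, e.g. by GNSS/INS processing or SLAM.
import numpy as np

def register_scans(scans, poses):
    """scans: list of (N_i, 3) arrays in the sensor frame;
    poses: list of (R, t) tuples mapping sensor to world, R is 3x3, t is (3,)."""
    registered = [pts @ R.T + t for pts, (R, t) in zip(scans, poses)]
    return np.vstack(registered)  # one merged point cloud, ready for labeling and coloring
```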

Embedded systems and minimalistic system designs are also attracting attention, from both academic and commercial perspectives. These systems typically aim for radically novel innovations or for specific applications. For example, single-photon technologies have great potential in the miniaturization of mobile laser scanning (MLS) systems.

We would like to invite you to contribute to this Special Issue of Remote Sensing by submitting original manuscripts, experimental work, and/or reviews in the field of Advances in Mobile Mapping Technologies. Open data, open source, and open hardware contributions are especially welcome. Contributions may address, but are not limited to, the following topics:

  • Novel multi-sensor fusion techniques
  • System design and development
  • Multispectral implementations and advances
  • Robust calibration techniques
  • New open data sets
  • Improvements in positioning, including seamless indoor-outdoor 3D mapping
  • Low-cost solutions
  • Novel application cases in different challenging environments, e.g., indoor, urban, forest, underground, maritime, and underwater
  • Solutions that generalize for different environments
  • Simultaneous localization and mapping
  • Point cloud processing
  • Machine learning
  • Deep learning
  • Accuracy, precision, and quality assessment
  • Autonomous navigation
  • Post-disaster assessment

Prof. Dr. Ville Lehtola
Prof. Dr. Andreas Nüchter
Prof. Dr. François Goulette
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Mobile mapping
  • Laser scanning
  • Machine learning
  • Autonomous navigation
  • SLAM
  • Point cloud
  • Multi-sensor
  • Low-cost
  • Multispectral
  • Indoor 3D

Published Papers (6 papers)


Research

Open Access Article
Robust Loop Closure Detection Integrating Visual–Spatial–Semantic Information via Topological Graphs and CNN Features
Remote Sens. 2020, 12(23), 3890; https://doi.org/10.3390/rs12233890 - 27 Nov 2020
Cited by 1 | Viewed by 542
Abstract
Loop closure detection is a key module for visual simultaneous localization and mapping (SLAM). Most previous methods for this module have not made full use of the information provided by images, i.e., they have only used the visual appearance or have only considered the spatial relationships of landmarks; the visual, spatial and semantic information have not been fully integrated. In this paper, a robust loop closure detection approach integrating visual–spatial–semantic information is proposed by employing topological graphs and convolutional neural network (CNN) features. Firstly, to reduce mismatches under different viewpoints, semantic topological graphs are introduced to encode the spatial relationships of landmarks, and random walk descriptors are employed to characterize the topological graphs for graph matching. Secondly, dynamic landmarks are eliminated by using semantic information, and distinctive landmarks are selected for loop closure detection, thus alleviating the impact of dynamic scenes. Finally, to ease the effect of appearance changes, the appearance-invariant descriptor of the landmark region is extracted by a pre-trained CNN without specially designed manual features. The proposed approach weakens the influence of viewpoint changes and dynamic scenes, and extensive experiments conducted on open datasets and a mobile robot demonstrate that it achieves more satisfactory performance than state-of-the-art methods.
(This article belongs to the Special Issue Advances in Mobile Mapping Technologies)
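The following is one plausible, simplified reading of the random-walk descriptor idea sketched in the abstract, using assumed data structures (plain adjacency and label dictionaries); it is not the authors' implementation:

```python
# Sketch: describe each landmark node of a semantic topological graph by the
# multiset of label sequences visited by short random walks starting from it,
# then compare nodes/graphs by histogram intersection of their walk multisets.
import random
from collections import Counter

def random_walk_descriptor(adjacency, labels, node, num_walks=20, walk_len=4, seed=0):
    """adjacency: dict node -> list of neighbour nodes; labels: dict node -> semantic label."""
    rng = random.Random(seed)
    walks = Counter()
    for _ in range(num_walks):
        current, sequence = node, [labels[node]]
        for _ in range(walk_len):
            neighbours = adjacency.get(current, [])
            if not neighbours:
                break
            current = rng.choice(neighbours)
            sequence.append(labels[current])
        walks[tuple(sequence)] += 1
    return walks

def descriptor_similarity(d1, d2):
    """Histogram intersection between two walk-descriptor multisets, normalised to [0, 1]."""
    shared = sum((d1 & d2).values())
    total = max(sum(d1.values()), sum(d2.values()))
    return shared / total if total else 0.0
```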

Open Access Article
Manhole Cover Detection on Rasterized Mobile Mapping Point Cloud Data Using Transfer Learned Fully Convolutional Neural Networks
Remote Sens. 2020, 12(22), 3820; https://doi.org/10.3390/rs12223820 - 20 Nov 2020
Cited by 1 | Viewed by 602
Abstract
Large-scale spatial databases contain information on different objects in the public domain and are of great importance for many stakeholders. These data are not only used to inventory the different assets of the public domain but also for project planning, construction design, and to create prediction models for disaster management or transportation. The use of mobile mapping systems instead of traditional surveying techniques for the data acquisition of these datasets is growing. However, while some objects can be (semi)automatically extracted, the mapping of manhole covers is still primarily done manually. In this work, we present a fully automatic manhole cover detection method to extract and accurately determine the position of manhole covers from mobile mapping point cloud data. Our method rasterizes the point cloud data into ground images with three channels: intensity value, minimum height, and height variance. These images are processed by a transfer-learned fully convolutional neural network to generate the spatial classification map. This map is then fed to a simplified class activation mapping (CAM) location algorithm to predict the center position of each manhole cover. The work assesses the influence of different backbone architectures (AlexNet, VGG-16, Inception-v3, and ResNet-101) and that of the geometric information channels in the ground image, where commonly only the intensity channel is used. Our experiments show that the most consistent architecture is VGG-16, achieving a recall, precision, and F2-score of 0.973, 0.973, and 0.973, respectively, in terms of detection performance. In terms of location performance, our approach achieves a horizontal 95% confidence interval of 16.5 cm using the VGG-16 architecture.
(This article belongs to the Special Issue Advances in Mobile Mapping Technologies)
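A rough sketch of the rasterization step described in the abstract, assuming a simple grid layout and cell size; the actual pipeline may handle empty cells and normalization differently:

```python
# Sketch: rasterize a ground-level point cloud into the three channels named
# above (mean intensity, minimum height, height variance) per ground-plan cell.
import numpy as np

def rasterize(points, intensity, cell=0.05):
    """points: (N, 3) array of x, y, z; intensity: (N,) array; returns an (H, W, 3) ground image."""
    xy = points[:, :2]
    origin = xy.min(axis=0)
    n_cols, n_rows = (np.ceil((xy.max(axis=0) - origin) / cell).astype(int) + 1)
    image = np.zeros((n_rows, n_cols, 3), dtype=np.float32)
    ij = ((xy - origin) / cell).astype(int)        # per-point cell indices (col, row)
    flat = ij[:, 1] * n_cols + ij[:, 0]            # flattened cell id
    for cell_id in np.unique(flat):
        mask = flat == cell_id
        r, c = divmod(int(cell_id), int(n_cols))
        image[r, c, 0] = intensity[mask].mean()    # channel 0: mean intensity
        image[r, c, 1] = points[mask, 2].min()     # channel 1: minimum height
        image[r, c, 2] = points[mask, 2].var()     # channel 2: height variance
    return image
```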

Open Access Article
Improved Point–Line Visual–Inertial Odometry System Using Helmert Variance Component Estimation
Remote Sens. 2020, 12(18), 2901; https://doi.org/10.3390/rs12182901 - 07 Sep 2020
Cited by 1 | Viewed by 861
Abstract
Visual image sequences captured from mobile platforms inevitably contain large areas with various types of weak texture, which hinder accurate pose estimation as the platform moves. Visual–inertial odometry (VIO) that uses both point features and line features as visual information performs well in weak-texture environments and can solve these problems to a certain extent. However, the extraction and matching of line features are time consuming, and reasonable weights between the point and line features are hard to estimate, which makes it difficult to accurately track the pose of the platform in real time. To overcome these deficiencies, an improved and efficient point–line visual–inertial odometry system is proposed in this paper, which exploits the geometric information of line features and combines it with the pixel correlation coefficient to match the line features. Furthermore, the system uses the Helmert variance component estimation method to adjust the weights between point features and line features. Comprehensive experimental results on the EuRoC MAV and PennCOSYVIO datasets demonstrate that the point–line visual–inertial odometry system developed in this paper achieves significant improvements in both localization accuracy and efficiency compared with several state-of-the-art VIO systems.
(This article belongs to the Special Issue Advances in Mobile Mapping Technologies)
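A simplified sketch of the reweighting idea behind Helmert variance component estimation for two observation groups (points and lines); the redundancy numbers are assumed to be supplied by the adjustment, and `solve_vio` is a hypothetical placeholder for re-solving the estimator between updates:

```python
# Sketch: estimate per-group variance factors from residuals and rescale the
# relative weights of the point and line observation groups accordingly.
import numpy as np

def helmert_update(res_point, res_line, w_point, w_line, r_point, r_line):
    """One variance-component update for two observation groups.
    res_*: residual vectors; w_*: current relative weights; r_*: redundancy numbers."""
    s2_point = w_point * np.sum(res_point**2) / r_point   # estimated variance factor, point group
    s2_line = w_line * np.sum(res_line**2) / r_line       # estimated variance factor, line group
    sigma0 = 0.5 * (s2_point + s2_line)                    # common reference variance
    # Groups whose variance factor exceeds the reference get down-weighted, and vice versa.
    return w_point * sigma0 / s2_point, w_line * sigma0 / s2_line

# In a VIO back end this update would alternate with re-solving the estimator:
# w_pt, w_ln = 1.0, 1.0
# for _ in range(max_iters):
#     res_pt, res_ln, r_pt, r_ln = solve_vio(w_pt, w_ln)   # hypothetical optimizer call
#     w_pt, w_ln = helmert_update(res_pt, res_ln, w_pt, w_ln, r_pt, r_ln)
```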

Open Access Article
Design and Evaluation of a Permanently Installed Plane-Based Calibration Field for Mobile Laser Scanning Systems
Remote Sens. 2020, 12(3), 555; https://doi.org/10.3390/rs12030555 - 07 Feb 2020
Cited by 3 | Viewed by 1079
Abstract
Mobile laser scanning has become an established measuring technique that is used for many applications in the fields of mapping, inventory, and monitoring. Due to the increasing operationality of such systems, quality control w.r.t. calibration and evaluation of the systems becomes more and more important and is subject to ongoing research. This paper contributes to this topic by using tools from geodetic configuration analysis in order to design and evaluate a plane-based calibration field for determining the lever arm and boresight angles of a 2D laser scanner w.r.t. a GNSS/IMU unit (Global Navigation Satellite System, Inertial Measurement Unit). In this regard, the impact of random, systematic, and gross observation errors on the calibration is analyzed, leading to a plane setup that provides accurate and controlled calibration parameters. The designed plane setup is realized in the form of a permanently installed calibration field. The applicability of the calibration field is tested with a real mobile laser scanning system by frequently repeating the calibration. Empirical standard deviations of <1 ... 1.5 mm for the lever arm and <0.005 for the boresight angles are obtained, which was defined beforehand to be the goal of the calibration. In order to independently evaluate the mobile laser scanning system after calibration, an evaluation environment is realized, consisting of a network of control points as well as TLS (Terrestrial Laser Scanning) reference point clouds. Based on the control points, both the horizontal and vertical accuracy of the system are found to be <10 mm (root mean square error). This is confirmed by comparisons to the TLS reference point clouds, indicating a well-calibrated system. Both the calibration field and the evaluation environment are permanently installed and can be used for arbitrary mobile laser scanning systems.
(This article belongs to the Special Issue Advances in Mobile Mapping Technologies)
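A generic sketch of the point-to-plane residual that underlies this kind of lever-arm/boresight calibration; parameter names are illustrative and the paper's exact functional model may differ:

```python
# Sketch: map a scanner point through the mounting parameters and the GNSS/IMU
# pose into the world frame and take its signed distance to a known plane.
import numpy as np

def point_to_plane_residual(p_scanner, R_bore, lever_arm, R_nav, t_nav, plane_n, plane_d):
    """Signed distance of a transformed scan point to a reference plane n·x = d (unit normal n)."""
    p_body = R_bore @ p_scanner + lever_arm    # scanner frame -> IMU/body frame (calibration parameters)
    p_world = R_nav @ p_body + t_nav           # body frame -> world frame (GNSS/IMU trajectory)
    return float(plane_n @ p_world - plane_d)  # residual minimised over all calibration observations
```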

Open Access Article
Ideal Angular Orientation of Selected 64-Channel Multi Beam Lidars for Mobile Mapping Systems
Remote Sens. 2020, 12(3), 510; https://doi.org/10.3390/rs12030510 - 05 Feb 2020
Viewed by 799
Abstract
Lidar technology is thriving nowadays for different applications, mainly autonomous navigation, mapping, and smart city technology. Lidars vary in many aspects: they can be multi-beam or single-beam, spinning or solid-state, have a full 360° field of view (FOV), provide single or multiple returns per pulse, and differ in many other geometric and radiometric characteristics. Users and developers in the mapping industry are continuously looking for newly released Lidars that offer high output density, coverage, and accuracy at a lower cost. Accordingly, every Lidar type should be thoroughly evaluated for its intended mapping purpose. This evaluation is not easy to implement in practice because all the investigated Lidars need to be available in hand and integrated into a ready-to-use mapping system. Furthermore, a fair comparison requires that the tests be carried out in the same environment and along the same travelling path, among other conditions. In this paper, we evaluate two state-of-the-art multi-beam Lidar types, the Ouster OS-1-64 and the Hesai Pandar64, for mapping applications. The evaluation is carried out in a simulation environment that approximates reality. The paper determines the ideal orientation angle for the two Lidars by assessing density, coverage, and accuracy, and presents clear performance quantifications and conclusions.
(This article belongs to the Special Issue Advances in Mobile Mapping Technologies)
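A small sketch of the kind of grid-based coverage and density metrics used to compare mounting angles in simulation; the definitions below are assumptions for illustration, not the paper's exact metrics:

```python
# Sketch: coverage as the fraction of ground cells that receive at least one
# simulated return, and density as returns per square metre over the hit cells.
import numpy as np

def coverage_and_density(points_xy, area_min, area_max, cell=0.5):
    """points_xy: (N, 2) ground-plane coordinates of simulated returns;
    area_min/area_max: (x, y) corners of the evaluation area."""
    area_min, area_max = np.asarray(area_min, float), np.asarray(area_max, float)
    bins = np.ceil((area_max - area_min) / cell).astype(int)
    hist, _, _ = np.histogram2d(points_xy[:, 0], points_xy[:, 1],
                                bins=bins.tolist(),
                                range=[(area_min[0], area_max[0]),
                                       (area_min[1], area_max[1])])
    covered = hist > 0
    coverage = covered.mean()                                           # fraction of cells hit
    density = hist[covered].sum() / max(covered.sum() * cell**2, 1e-9)  # returns per m^2 in hit cells
    return coverage, density
```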

Open Access Article
Novel Approach to Automatic Traffic Sign Inventory Based on Mobile Mapping System Data and Deep Learning
Remote Sens. 2020, 12(3), 442; https://doi.org/10.3390/rs12030442 - 01 Feb 2020
Cited by 5 | Viewed by 1275
Abstract
Traffic signs are a key element in driver safety. Governments invest a great amount of resources in maintaining traffic signs in good condition, for which a correct inventory is necessary. This work presents a novel method for mapping traffic signs based on data acquired with a mobile mapping system (MMS): images and point clouds. On the one hand, images are faster to process, and artificial intelligence techniques, specifically convolutional neural networks, are more optimized for them than for point clouds. On the other hand, point clouds allow a more exact positioning than the exclusive use of images. First, traffic signs are detected in the images obtained by the 360° camera of the MMS through RetinaNet, and they are classified by their corresponding InceptionV3 network. The signs are then positioned in the georeferenced point cloud by means of a projection from the images according to the pinhole model. Finally, duplicate geolocalized signs detected in multiple images are filtered. The method has been tested in two real case studies with 214 images, where 89.7% of the signs have been correctly detected, of which 92.5% have been correctly classified and 97.5% have been located with an error of less than 0.5 m. The false positive rate per image is only 0.004. This sequence, which uses images for detection and classification and point clouds for georeferencing, in this order, optimizes processing time and allows this method to be included in a company's production process. The method is conducted automatically and takes advantage of the strengths of each data type.
(This article belongs to the Special Issue Advances in Mobile Mapping Technologies)
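A rough sketch (with hypothetical helpers, not the authors' code) of the georeferencing and de-duplication steps described above: the point cloud is projected into the camera with a pinhole model, points inside a detection's bounding box vote for the sign position, and nearby detections are merged:

```python
# Sketch: pinhole projection of the georeferenced cloud into a detection box,
# robust 3D position from the median of the hit points, greedy duplicate merge.
import numpy as np

def locate_sign(points_world, K, R_cam, t_cam, box):
    """box = (u_min, v_min, u_max, v_max) in pixels; returns a 3D sign position or None."""
    p_cam = points_world @ R_cam.T + t_cam                 # world -> camera frame
    in_front = p_cam[:, 2] > 0                             # keep points in front of the camera
    uvw = p_cam[in_front] @ K.T
    uv = uvw[:, :2] / uvw[:, 2:3]                          # pinhole projection to pixel coordinates
    u_min, v_min, u_max, v_max = box
    hit = (uv[:, 0] >= u_min) & (uv[:, 0] <= u_max) & (uv[:, 1] >= v_min) & (uv[:, 1] <= v_max)
    if not hit.any():
        return None
    return np.median(points_world[in_front][hit], axis=0)  # robust position estimate

def merge_duplicates(positions, radius=0.5):
    """Greedily merge detections of the same sign seen from multiple images (within `radius` metres)."""
    merged = []
    for p in positions:
        if all(np.linalg.norm(p - q) > radius for q in merged):
            merged.append(p)
    return merged
```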
