

Advances in Mobile Mapping Technologies

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: closed (30 June 2021) | Viewed by 48420

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors


Prof. Dr. Ville Lehtola
Guest Editor
Department of Earth Observation Science, ITC Faculty, University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands
Interests: mobile mapping; perception; LIDAR; 3D point clouds; indoor; multi-sensor systems; edge computing; autonomous navigation and positioning; computational science

Prof. Dr. Andreas Nüchter
Guest Editor
Robotics and Telematics, Julius Maximilian University of Würzburg, Am Hubland, 97074 Würzburg, Germany
Interests: 3D robot vision (3D point cloud processing); robotics and automation; telematics/geomatics; sensing and perception; semantics; machine vision; cognition; artificial intelligence

Prof. Dr. François Goulette
Guest Editor
Robotics Lab, MINES ParisTech - PSL University, 60 boulevard Saint Michel, 75272 Paris, CEDEX 06, France
Interests: mobile mapping systems; 3D point cloud acquisition; 3D modeling; Lidar

Special Issue Information

Dear Colleagues,

Mobile mapping is applied widely in society, for example in asset management, fleet management, construction planning, road safety, and maintenance optimization. Yet further advances in these technologies are called for. Advances can be radical, such as changes to the prevailing paradigms in mobile mapping, or incremental, such as improvements to state-of-the-art mobile mapping methods.

With current multi-sensor systems in mobile mapping, laser-scanned data are often registered into point clouds with the aid of global navigation satellite system (GNSS) positioning or simultaneous localization and mapping (SLAM) techniques and then labeled and colored with the aid of machine learning methods and digital camera data. These multi-sensor platforms are beginning to see further advances via the addition of multi-spectral and other sensors and via the development of machine learning techniques used in processing this multi-modal data.
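The registration-and-coloring pipeline described above can be sketched concretely: once laser points are registered in a world frame by GNSS or SLAM, each point is projected into a synchronized camera image with a pinhole model to pick up its color. A minimal NumPy sketch (the function name and interface are illustrative, not taken from any particular system):

```python
import numpy as np

def colorize_points(points, image, K, R, t):
    """Assign RGB colors to 3D points by projecting them into a camera image.

    points : (N, 3) points in the world frame
    image  : (H, W, 3) RGB image
    K      : (3, 3) camera intrinsic matrix
    R, t   : rotation (3, 3) and translation (3,) taking world to camera frame
    """
    cam = points @ R.T + t                     # world -> camera frame
    in_front = cam[:, 2] > 0                   # only points in front of the camera
    proj = cam @ K.T                           # apply intrinsics
    uv = proj[:, :2] / proj[:, 2:3]            # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]  # sample the image at the pixel
    return colors, valid
```

Real systems additionally handle lens distortion, occlusion, and time synchronization between the scanner and the camera.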

Embedded systems and minimalistic system designs are also attracting attention, from both academic and commercial perspectives. These systems typically aim for radically novel innovations or for specific applications. For example, single-photon technologies have great potential for the miniaturization of mobile laser scanning (MLS) systems.

We would like to invite you to contribute to this Special Issue of Remote Sensing by submitting original manuscripts, experimental work, and/or reviews in the field of Advances in Mobile Mapping Technologies. Open data, open source, and open hardware contributions are especially welcome. Contributions may address, but are not limited to, the following topics:

  • Novel multi-sensor fusion techniques
  • System design and development
  • Multispectral implementations and advances
  • Robust calibration techniques
  • New open data sets
  • Improvements in positioning, including seamless indoor-outdoor 3D mapping
  • Low-cost solutions
  • Novel application cases in different challenging environments, e.g., indoor, urban, forest, underground, maritime, and underwater
  • Solutions that generalize for different environments
  • Simultaneous localization and mapping
  • Point cloud processing
  • Machine learning
  • Deep learning
  • Accuracy, precision, and quality assessment
  • Autonomous navigation
  • Post-disaster assessment

Prof. Dr. Ville Lehtola
Prof. Dr. Andreas Nüchter
Prof. Dr. François Goulette
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Mobile mapping
  • Laser scanning
  • Machine learning
  • Autonomous navigation
  • SLAM
  • Point cloud
  • Multi-sensor
  • Low-cost
  • Multispectral
  • Indoor 3D

Published Papers (11 papers)


Research

24 pages, 82153 KiB  
Article
Paris-CARLA-3D: A Real and Synthetic Outdoor Point Cloud Dataset for Challenging Tasks in 3D Mapping
by Jean-Emmanuel Deschaud, David Duque, Jean Pierre Richa, Santiago Velasco-Forero, Beatriz Marcotegui and François Goulette
Remote Sens. 2021, 13(22), 4713; https://doi.org/10.3390/rs13224713 - 21 Nov 2021
Cited by 31 | Viewed by 7349
Abstract
Paris-CARLA-3D is a dataset of several dense colored point clouds of outdoor environments built by a mobile LiDAR and camera system. The data are composed of two sets: synthetic data from the open source CARLA simulator (700 million points) and real data acquired in the city of Paris (60 million points), hence the name Paris-CARLA-3D. One advantage of this dataset is that the same LiDAR and camera platform used to produce the real data was also simulated in the open source CARLA simulator. In addition, manual annotation of the classes using the semantic tags of CARLA was performed on the real data, allowing the testing of transfer methods from the synthetic to the real data. The objective of this dataset is to provide a challenging benchmark for evaluating and improving methods on difficult vision tasks for the 3D mapping of outdoor environments: semantic segmentation, instance segmentation, and scene completion. For each task, we describe the evaluation protocol as well as the experiments carried out to establish a baseline.
(This article belongs to the Special Issue Advances in Mobile Mapping Technologies)

26 pages, 8305 KiB  
Article
Information-Based Georeferencing of an Unmanned Aerial Vehicle by Dual State Kalman Filter with Implicit Measurement Equations
by Rozhin Moftizadeh, Sören Vogel, Ingo Neumann, Johannes Bureick and Hamza Alkhatib
Remote Sens. 2021, 13(16), 3205; https://doi.org/10.3390/rs13163205 - 12 Aug 2021
Viewed by 1861
Abstract
Georeferencing a kinematic Multi-Sensor-System (MSS) within crowded areas, such as inner cities, is a challenging task that should be conducted in the most reliable way possible. In such areas, Global Navigation Satellite System (GNSS) data either contain inevitable errors or are not continuously available. Regardless of the environmental conditions, an Inertial Measurement Unit (IMU) is always subject to drift and therefore cannot be fully trusted over time. Consequently, suitable filtering techniques are required that can compensate for such deficits and subsequently improve the georeferencing results. Sometimes it is also possible to improve the filter quality by incorporating additional complementary information. This information can be taken from the surrounding environment of the MSS, where it usually appears in the form of geometrical constraints. Since an environment of interest can contain a large amount of such information, considering all of it could lead to an inefficient filtering procedure. Hence, suitable methodologies need to be added to the filtering framework to increase efficiency while preserving filter quality. In the current paper, we propose a Dual State Iterated Extended Kalman Filter (DSIEKF) that can efficiently georeference an MSS by taking into account additional geometrical information. The proposed methodology is based on implicit measurement equations and nonlinear geometrical constraints, and is applied to a real case scenario to further evaluate its performance.

23 pages, 5807 KiB  
Article
Outdoor Mobile Mapping and AI-Based 3D Object Detection with Low-Cost RGB-D Cameras: The Use Case of On-Street Parking Statistics
by Stephan Nebiker, Jonas Meyer, Stefan Blaser, Manuela Ammann and Severin Rhyner
Remote Sens. 2021, 13(16), 3099; https://doi.org/10.3390/rs13163099 - 5 Aug 2021
Cited by 17 | Viewed by 4762
Abstract
A successful application of low-cost 3D cameras in combination with artificial intelligence (AI)-based 3D object detection algorithms to outdoor mobile mapping would offer great potential for numerous mapping, asset inventory, and change detection tasks in the context of smart cities. This paper presents a mobile mapping system mounted on an electric tricycle and a procedure for creating on-street parking statistics, which allow government agencies and policy makers to verify and adjust parking policies in different city districts. Our method combines georeferenced red-green-blue-depth (RGB-D) imagery from two low-cost 3D cameras with state-of-the-art 3D object detection algorithms for extracting and mapping parked vehicles. Our investigations demonstrate the suitability of the latest generation of low-cost 3D cameras for real-world outdoor applications with respect to supported ranges, depth measurement accuracy, and robustness under varying lighting conditions. In an evaluation of suitable algorithms for detecting vehicles in the noisy and often incomplete 3D point clouds from RGB-D cameras, the 3D object detection network PointRCNN, which extends region-based convolutional neural networks (R-CNNs) to 3D point clouds, clearly outperformed all other candidates. The results of a mapping mission with 313 parking spaces show that our method is capable of reliably detecting parked cars with a precision of 100% and a recall of 97%. It can be applied to unslotted and slotted parking and different parking types including parallel, perpendicular, and angle parking.

20 pages, 12428 KiB  
Article
A V-SLAM Guided and Portable System for Photogrammetric Applications
by Alessandro Torresani, Fabio Menna, Roberto Battisti and Fabio Remondino
Remote Sens. 2021, 13(12), 2351; https://doi.org/10.3390/rs13122351 - 16 Jun 2021
Cited by 17 | Viewed by 4189
Abstract
Mobile and handheld mapping systems are becoming widely used nowadays as fast and cost-effective data acquisition systems for 3D reconstruction purposes. While most research and commercial systems are based on active sensors, solutions employing only cameras and photogrammetry are attracting more and more interest due to their significantly lower cost, size and power consumption. In this work we propose an ARM-based, low-cost and lightweight stereo vision mobile mapping system based on a Visual Simultaneous Localization And Mapping (V-SLAM) algorithm. The prototype system, named GuPho (Guided Photogrammetric System), also integrates an in-house guidance system which enables optimized image acquisition, robust management of the cameras and feedback on positioning and acquisition speed. The presented results show the effectiveness of the developed prototype in mapping large scenarios, enabling motion blur prevention, robust camera exposure control and accurate 3D results.

28 pages, 6849 KiB  
Article
An Imaging Network Design for UGV-Based 3D Reconstruction of Buildings
by Ali Hosseininaveh and Fabio Remondino
Remote Sens. 2021, 13(10), 1923; https://doi.org/10.3390/rs13101923 - 14 May 2021
Cited by 7 | Viewed by 2888
Abstract
Imaging network design is a crucial step in most image-based 3D reconstruction applications based on Structure from Motion (SfM) and multi-view stereo (MVS) methods. This paper proposes a novel photogrammetric algorithm for imaging network design for building 3D reconstruction purposes. The proposed methodology consists of two main steps: (i) the generation of candidate viewpoints and (ii) the clustering and selection of vantage viewpoints. The first step includes identifying initial candidate viewpoints, selecting the candidate viewpoints in the optimum range, and defining viewpoint directions. In the second step, four approaches—named façade pointing, centre pointing, hybrid, and combined centre & façade pointing—are proposed. The entire methodology is implemented and evaluated in both simulation and real-world experiments. In the simulation experiment, a building and its environment are computer-generated in the ROS (Robot Operating System) Gazebo environment, and a map is created with the Gmapping Simultaneous Localization and Mapping (SLAM) algorithm using a simulated Unmanned Ground Vehicle (UGV). In the real-world experiment, the proposed methodology is evaluated for all four approaches on a real building against two common approaches, called continuous image capturing and continuous image capturing & clustering and selection. The results of both evaluations reveal that the combined centre & façade pointing approach is more efficient than all other approaches in terms of both accuracy and completeness.
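To illustrate the candidate-viewpoint generation of step (i), viewpoints around a simplified rectangular footprint can be placed on lines parallel to each façade at the optimum range, each with a façade-pointing and a centre-pointing viewing direction. A hedged NumPy sketch under our own simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def candidate_viewpoints(corners, offset, spacing):
    """Candidate viewpoints around a rectangular footprint (corners in CCW order).

    Viewpoints lie on lines parallel to each facade at distance `offset`;
    each entry is (position, facade-pointing dir, centre-pointing dir).
    """
    centre = corners.mean(axis=0)
    views = []
    for a, b in zip(corners, np.roll(corners, -1, axis=0)):
        edge = b - a
        length = np.linalg.norm(edge)
        tangent = edge / length
        normal = np.array([tangent[1], -tangent[0]])   # outward for CCW corners
        n_samples = max(int(length // spacing), 1) + 1
        for s in np.linspace(0.0, length, n_samples):
            pos = a + s * tangent + offset * normal
            facade_dir = -normal                       # look straight at the facade
            centre_dir = centre - pos                  # look at the footprint centre
            centre_dir = centre_dir / np.linalg.norm(centre_dir)
            views.append((pos, facade_dir, centre_dir))
    return views
```

The clustering-and-selection step would then thin this candidate set down to vantage viewpoints.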

26 pages, 9905 KiB  
Article
Robust Loop Closure Detection Integrating Visual–Spatial–Semantic Information via Topological Graphs and CNN Features
by Yuwei Wang, Yuanying Qiu, Peitao Cheng and Xuechao Duan
Remote Sens. 2020, 12(23), 3890; https://doi.org/10.3390/rs12233890 - 27 Nov 2020
Cited by 21 | Viewed by 3366
Abstract
Loop closure detection is a key module for visual simultaneous localization and mapping (SLAM). Most previous methods for this module have not made full use of the information provided by images, i.e., they have only used the visual appearance or have only considered the spatial relationships of landmarks; the visual, spatial and semantic information have not been fully integrated. In this paper, a robust loop closure detection approach integrating visual–spatial–semantic information is proposed by employing topological graphs and convolutional neural network (CNN) features. Firstly, to reduce mismatches under different viewpoints, semantic topological graphs are introduced to encode the spatial relationships of landmarks, and random walk descriptors are employed to characterize the topological graphs for graph matching. Secondly, dynamic landmarks are eliminated by using semantic information, and distinctive landmarks are selected for loop closure detection, thus alleviating the impact of dynamic scenes. Finally, to ease the effect of appearance changes, the appearance-invariant descriptor of the landmark region is extracted by a pre-trained CNN without the specially designed manual features. The proposed approach weakens the influence of viewpoint changes and dynamic scenes, and extensive experiments conducted on open datasets and a mobile robot demonstrated that the proposed method has more satisfactory performance compared to state-of-the-art methods.

21 pages, 17601 KiB  
Article
Manhole Cover Detection on Rasterized Mobile Mapping Point Cloud Data Using Transfer Learned Fully Convolutional Neural Networks
by Lukas Mattheuwsen and Maarten Vergauwen
Remote Sens. 2020, 12(22), 3820; https://doi.org/10.3390/rs12223820 - 20 Nov 2020
Cited by 14 | Viewed by 4269
Abstract
Large-scale spatial databases contain information on different objects in the public domain and are of great importance for many stakeholders. These data are not only used to inventory the different assets of the public domain but also for project planning, construction design, and to create prediction models for disaster management or transportation. The use of mobile mapping systems instead of traditional surveying techniques for the data acquisition of these datasets is growing. However, while some objects can be (semi)automatically extracted, the mapping of manhole covers is still primarily done manually. In this work, we present a fully automatic manhole cover detection method to extract and accurately determine the position of manhole covers from mobile mapping point cloud data. Our method rasterizes the point cloud data into ground images with three channels: intensity value, minimum height and height variance. These images are processed by a transfer-learned fully convolutional neural network to generate a spatial classification map. This map is then fed to a simplified class activation mapping (CAM) location algorithm to predict the center position of each manhole cover. The work assesses the influence of different backbone architectures (AlexNet, VGG-16, Inception-v3 and ResNet-101) and of the geometric information channels in the ground image, where commonly only the intensity channel is used. Our experiments show that the most consistent architecture is VGG-16, achieving a recall, precision and F2-score of 0.973, 0.973 and 0.973, respectively, in terms of detection performance. In terms of location performance, our approach achieves a horizontal 95% confidence interval of 16.5 cm using the VGG-16 architecture.
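The rasterization step described here can be sketched as follows: points are binned into a 2D grid, and each cell stores an intensity statistic, the minimum height, and the height variance. This is a simplified sketch; the paper's exact channel definitions and cell size may differ, and the per-cell mean intensity is our assumption:

```python
import numpy as np

def rasterize_ground(points, intensity, cell=0.05):
    """Rasterize a point cloud into a 3-channel ground image:
    mean intensity, minimum height, and height variance per cell.

    points    : (N, 3) array of x, y, z coordinates
    intensity : (N,) array of reflectance values
    cell      : grid cell size in the same units as the coordinates
    """
    xy = points[:, :2]
    origin = xy.min(axis=0)
    idx = np.floor((xy - origin) / cell).astype(int)
    h, w = idx[:, 1].max() + 1, idx[:, 0].max() + 1
    flat = idx[:, 1] * w + idx[:, 0]                     # linear cell index
    n = np.bincount(flat, minlength=h * w)
    safe_n = np.maximum(n, 1)                            # avoid division by zero
    mean_i = np.bincount(flat, weights=intensity, minlength=h * w) / safe_n
    z = points[:, 2]
    min_z = np.full(h * w, np.inf)
    np.minimum.at(min_z, flat, z)                        # per-cell minimum height
    min_z[np.isinf(min_z)] = 0.0                         # empty cells -> 0
    mean_z = np.bincount(flat, weights=z, minlength=h * w) / safe_n
    var_z = np.bincount(flat, weights=z * z, minlength=h * w) / safe_n - mean_z ** 2
    img = np.stack([mean_i, min_z, np.maximum(var_z, 0.0)], axis=-1)
    return img.reshape(h, w, 3)
```

The resulting image can then be fed to the fully convolutional network as a stand-in for a conventional RGB input.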

20 pages, 11093 KiB  
Article
Improved Point–Line Visual–Inertial Odometry System Using Helmert Variance Component Estimation
by Bo Xu, Yu Chen, Shoujian Zhang and Jingrong Wang
Remote Sens. 2020, 12(18), 2901; https://doi.org/10.3390/rs12182901 - 7 Sep 2020
Cited by 8 | Viewed by 2707
Abstract
Visual image sequences from mobile platforms inevitably contain large areas with various types of weak texture, which hampers accurate pose estimation as the platform moves. Visual–inertial odometry (VIO) using both point features and line features as visual information performs well in weak-texture environments and can solve these problems to a certain extent. However, the extraction and matching of line features are time consuming, and reasonable weights between the point and line features are hard to estimate, which makes it difficult to accurately track the pose of the platform in real time. To overcome these deficiencies, an improved, efficient point–line visual–inertial odometry system is proposed in this paper, which exploits the geometric information of line features and matches them using pixel correlation coefficients. Furthermore, the system uses the Helmert variance component estimation method to adjust the weights between point features and line features. Comprehensive experimental results on the EuRoc MAV and PennCOSYVIO datasets demonstrate that the point–line visual–inertial odometry system developed in this paper achieves significant improvements in both localization accuracy and efficiency compared with several state-of-the-art VIO systems.
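Helmert variance component estimation rebalances the weights of heterogeneous observation groups (here, point and line features) by iteratively re-estimating each group's variance from its residuals and redundancy. A simplified sketch for observation groups sharing one linear parameter vector; the function and its interface are illustrative, not the paper's implementation:

```python
import numpy as np

def helmert_weights(A_groups, l_groups, iters=10):
    """Simplified Helmert variance component estimation.

    A_groups : list of (n_i, u) design matrices, one per observation group
    l_groups : list of (n_i,) observation vectors
    Returns the parameter estimate and the variance component of each group.
    """
    sigma2 = np.ones(len(A_groups))
    for _ in range(iters):
        # assemble normal equations with current group weights 1 / sigma_i^2
        N = sum(A.T @ A / s2 for A, s2 in zip(A_groups, sigma2))
        b = sum(A.T @ l / s2 for A, l, s2 in zip(A_groups, l_groups, sigma2))
        Ninv = np.linalg.inv(N)
        x = Ninv @ b
        # update each variance component from its residuals and redundancy
        for i, (A, l) in enumerate(zip(A_groups, l_groups)):
            v = A @ x - l
            r = len(l) - np.trace(A @ Ninv @ A.T) / sigma2[i]  # redundancy share
            sigma2[i] = (v @ v) / r
    return x, sigma2
```

At the fixed point each group's a posteriori variance factor equals one, so the relative weights between point and line residuals are data-driven rather than hand-tuned.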

29 pages, 6437 KiB  
Article
Design and Evaluation of a Permanently Installed Plane-Based Calibration Field for Mobile Laser Scanning Systems
by Erik Heinz, Christoph Holst, Heiner Kuhlmann and Lasse Klingbeil
Remote Sens. 2020, 12(3), 555; https://doi.org/10.3390/rs12030555 - 7 Feb 2020
Cited by 17 | Viewed by 4131
Abstract
Mobile laser scanning has become an established measuring technique that is used for many applications in the fields of mapping, inventory, and monitoring. Due to the increasing operationality of such systems, quality control w.r.t. calibration and evaluation of the systems becomes more and more important and is subject to ongoing research. This paper contributes to this topic by using tools from geodetic configuration analysis to design and evaluate a plane-based calibration field for determining the lever arm and boresight angles of a 2D laser scanner w.r.t. a GNSS/IMU unit (Global Navigation Satellite System, Inertial Measurement Unit). In this regard, the impact of random, systematic, and gross observation errors on the calibration is analyzed, leading to a plane setup that provides accurate and controlled calibration parameters. The designed plane setup is realized in the form of a permanently installed calibration field. The applicability of the calibration field is tested with a real mobile laser scanning system by frequently repeating the calibration. Empirical standard deviations of 1–1.5 mm for the lever arm and <0.005° for the boresight angles are obtained, which had been defined beforehand as the goal of the calibration. To independently evaluate the mobile laser scanning system after calibration, an evaluation environment is realized, consisting of a network of control points as well as TLS (Terrestrial Laser Scanning) reference point clouds. Based on the control points, both the horizontal and vertical accuracy of the system are found to be <10 mm (root mean square error). This is confirmed by comparisons to the TLS reference point clouds, indicating a well calibrated system. Both the calibration field and the evaluation environment are permanently installed and can be used for arbitrary mobile laser scanning systems.
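At the heart of such a plane-based calibration is a point-to-plane residual: each scanner point is mapped into the world frame through the boresight rotation, lever arm, and GNSS/IMU pose, and its signed distance to the assigned reference plane is what the adjustment minimizes over the calibration parameters. A minimal residual function (names and interface are illustrative):

```python
import numpy as np

def point_to_plane_residuals(scan_pts, R_bs, lever, poses, planes):
    """Signed point-to-plane distances for a scanner-to-GNSS/IMU calibration.

    scan_pts : (N, 3) points in the scanner frame
    R_bs     : (3, 3) boresight rotation (scanner -> IMU body frame)
    lever    : (3,) lever arm from IMU to scanner origin, in the body frame
    poses    : list of (R_wb, t_wb) world poses of the IMU body, one per point
    planes   : (N, 4) plane parameters (nx, ny, nz, d), unit normals, n.x = d
    """
    res = np.empty(len(scan_pts))
    for i, (p, (R_wb, t_wb)) in enumerate(zip(scan_pts, poses)):
        body = R_bs @ p + lever          # scanner frame -> IMU body frame
        world = R_wb @ body + t_wb       # body frame -> world frame
        n, d = planes[i, :3], planes[i, 3]
        res[i] = n @ world - d           # signed distance to the assigned plane
    return res
```

A nonlinear least-squares solver over `R_bs` and `lever` would drive these residuals toward zero; the configuration analysis in the paper concerns choosing plane orientations so that all six parameters are well determined.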

20 pages, 13324 KiB  
Article
Ideal Angular Orientation of Selected 64-Channel Multi Beam Lidars for Mobile Mapping Systems
by Bashar Alsadik
Remote Sens. 2020, 12(3), 510; https://doi.org/10.3390/rs12030510 - 5 Feb 2020
Cited by 12 | Viewed by 5647
Abstract
Lidar technology is thriving nowadays in different applications, mainly autonomous navigation, mapping, and smart city technology. Lidars vary in many aspects: they can be multi-beam or single-beam, spinning or solid state, offer a full 360° field of view (FOV), return single or multiple pulses, and differ in many other geometric and radiometric aspects. Users and developers in the mapping industry are continuously looking for newly released Lidars offering high output density, coverage, and accuracy at a lower cost. Accordingly, every Lidar type should be carefully evaluated for the intended mapping aim. This evaluation is not easy to implement in practice because all the investigated Lidars would need to be available in hand and integrated into a ready-to-use mapping system. Furthermore, a fair comparison requires ensuring that the tests are applied in the same environment and along the same travelling path, among other conditions. In this paper, we evaluate two state-of-the-art multi-beam Lidar types, the Ouster OS-1-64 and the Hesai Pandar64, for mapping applications. The evaluation is carried out in a simulation environment that approximates reality. The paper determines the ideal orientation angle for the two Lidars by assessing density, coverage, and accuracy, and presents clear performance quantifications and conclusions.

15 pages, 4662 KiB  
Article
Novel Approach to Automatic Traffic Sign Inventory Based on Mobile Mapping System Data and Deep Learning
by Jesús Balado, Elena González, Pedro Arias and David Castro
Remote Sens. 2020, 12(3), 442; https://doi.org/10.3390/rs12030442 - 1 Feb 2020
Cited by 25 | Viewed by 5035
Abstract
Traffic signs are a key element in driver safety. Governments invest a great amount of resources in maintaining traffic signs in good condition, for which a correct inventory is necessary. This work presents a novel method for mapping traffic signs based on data acquired with a Mobile Mapping System (MMS): images and point clouds. On the one hand, images are faster to process, and artificial intelligence techniques, specifically Convolutional Neural Networks, are more optimized for them than for point clouds. On the other hand, point clouds allow more exact positioning than the exclusive use of images. First, traffic signs are detected in the images obtained by the 360° camera of the MMS through RetinaNet, and they are classified by a corresponding InceptionV3 network. The signs are then positioned in the georeferenced point cloud by means of a projection from the images according to the pinhole model. Finally, duplicate geolocalized signs detected in multiple images are filtered. The method has been tested in two real case studies with 214 images, where 89.7% of the signs were correctly detected, of which 92.5% were correctly classified and 97.5% were located with an error of less than 0.5 m. The false positive rate per image is only 0.004. This sequence, which combines images for detection–classification and point clouds for georeferencing, in this order, optimizes processing time and allows this method to be included in a company's production process. The method runs automatically and takes advantage of the strengths of each data type.
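The final duplicate-filtering step can be sketched as a simple greedy merge: geolocalized detections of the same class that fall within a small radius are collapsed to their centroid. A hedged sketch; the paper's actual filtering criterion may differ:

```python
import numpy as np

def merge_detections(positions, labels, radius=0.5):
    """Merge duplicate geolocalized detections: detections of the same class
    closer than `radius` are grouped and replaced by their centroid.

    positions : (N, 2) array of map coordinates (e.g. metres)
    labels    : (N,) array of class ids
    """
    positions = np.asarray(positions, dtype=float)
    merged_pos, merged_lab = [], []
    used = np.zeros(len(positions), dtype=bool)
    for i in range(len(positions)):
        if used[i]:
            continue
        # greedily collect all same-class detections within the radius
        group = [i]
        used[i] = True
        for j in range(i + 1, len(positions)):
            if (not used[j] and labels[j] == labels[i]
                    and np.linalg.norm(positions[j] - positions[i]) < radius):
                used[j] = True
                group.append(j)
        merged_pos.append(positions[group].mean(axis=0))
        merged_lab.append(labels[i])
    return np.array(merged_pos), np.array(merged_lab)
```

The 0.5 m default here simply mirrors the localization error bound reported in the abstract.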
