Special Issue "Real-Time and Multi-Sensor Mobile Mapping Systems for ITS Applications"

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Engineering Remote Sensing".

Deadline for manuscript submissions: 31 August 2021.

Special Issue Editors

Prof. Kai-Wei Chiang
Guest Editor
Department of Geomatics, National Cheng Kung University, No.1, Ta-Hsueh Road, Tainan 701, Taiwan
Interests: inertial navigation system; optimal multi-sensor fusion; seamless mapping and navigation applications; artificial intelligence and collaborative mobile mapping technology
Dr. Chenglu Wen
Guest Editor
School of Informatics, Xiamen University, Xiamen 361005, China
Interests: laser scanning; point cloud processing; 3D vision; intelligent robots; SLAM; indoor mobile mapping; machine learning
Dr. Li-Ta Hsu
Guest Editor
Interdisciplinary Division of Aeronautical and Aviation Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon 999077, Hong Kong
Interests: GNSS; navigation; autonomous systems; sensor fusion; multipath; NLOS
Dr. Andrea Masiero
Guest Editor
Interdepartmental Research Center of Geomatics (CIRGEO), University of Padua, via dell'Università 16, 35020 Legnaro (PD), Italy
Interests: mobile mapping; geomatics; indoor and outdoor positioning and navigation; photogrammetry; laser scanning; machine learning; computer vision; remote sensing; adaptive optics; atmospheric turbulence

Special Issue Information

Dear Colleagues,

The recent growth of the market for geospatial data and its applications has increased the demand for collecting geospatial data efficiently and economically. Mobile mapping technologies, including multi-sensor integration and multi-platform mapping technology, have established a modern framework for efficient geospatial data acquisition in applications such as conventional mapping, rapid disaster response, smart cities, and autonomous vehicles. Among these, applying mobile mapping systems to build indoor maps for pedestrian navigation and High-Definition (HD) maps for autonomous vehicles and intelligent transportation are the most popular topics, driven by booming business opportunities in the geospatial community.

This Special Issue will cover relevant topics and trends in real-time and multi-sensor mobile mapping systems for intelligent transportation applications, and will also introduce emerging directions of this new paradigm in geospatial science.

We would like to invite you to contribute by submitting articles describing your recent research, experimental work, reviews, and/or case studies related to the field of mobile mapping. Contributions may address, but are not limited to, the following topics:

  • Multi-sensor calibration
  • Sensor cluster design and platform developments
  • Multi-sensor fusion for seamless mapping and navigation applications
  • Simultaneous localization and mapping with LiDAR or cameras
  • Point cloud processing
  • Deep learning for 3D spatial information processing
  • Feature extraction from georeferenced images and point clouds
  • Multi-sensor system assessment and quality validation
  • Validation in GNSS-denied and challenging environments
  • Airborne photogrammetric and LiDAR mapping for ITS applications
  • UAS photogrammetric and LiDAR mapping for ITS applications
  • LiDAR point cloud processing for ITS applications
  • Land-based photogrammetric and LiDAR mapping for ITS applications
  • Automated production and updating of static HD maps
  • Automated production and updating of dynamic HD maps
  • Third-party-certified low-cost multi-sensor mapping systems for ITS applications
  • Autonomous vehicle navigation with HD point cloud maps
  • Autonomous vehicle navigation with HD vector maps
  • Georeferenced 3D spatial information processing for road infrastructure
  • Georeferenced 3D spatial information manipulation for connected vehicles
  • Novel application cases

Prof. Dr. Kai-Wei Chiang
Dr. Chenglu Wen
Dr. Li-Ta Hsu
Dr. Andrea Masiero
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • MMS (mobile mapping system)
  • SLAM (simultaneous localization and mapping)
  • HD maps (high-definition maps)
  • ITS (intelligent transportation system)
  • Autonomous vehicles

Published Papers (10 papers)


Research

Article
Coarse-to-Fine Loosely-Coupled LiDAR-Inertial Odometry for Urban Positioning and Mapping
Remote Sens. 2021, 13(12), 2371; https://doi.org/10.3390/rs13122371 - 17 Jun 2021
Abstract
Accurate positioning and mapping are essential for autonomous systems with navigation requirements. This paper proposes a coarse-to-fine loosely-coupled (LC) LiDAR-inertial odometry (LC-LIO) that exploits the complementary characteristics of LiDAR and an inertial measurement unit (IMU) for real-time, accurate pose estimation of a ground vehicle in urban environments. Unlike existing tightly-coupled (TC) LiDAR-inertial fusion schemes, which directly use all the considered range and inertial measurements to optimize the vehicle pose, the proposed method performs loosely-coupled integrated optimization: high-frequency motion predictions, produced by IMU integration based on previously optimized results, serve as the initial guess for LiDAR odometry, bringing the LiDAR scan-to-map registration close to optimality. As one of the main contributions, thorough studies were conducted on the respective performance upper bounds of TC and LC LiDAR-inertial fusion schemes. Experimental verification shows that the proposed pipeline can fully exploit the potential of the LiDAR measurements (centimeter-level ranging accuracy) in a coarse-to-fine way without being disturbed by unexpected IMU bias. Moreover, an adaptive covariance estimation method applied during LC optimization is proposed to account for the uncertainty of LiDAR scan-to-map registration in dynamic scenarios. The effectiveness of the proposed system was validated on challenging real-world datasets, and the coarse-to-fine LiDAR scan-to-map registration process is presented in detail. Compared with existing state-of-the-art TC-LIO, the proposed LC-LIO achieves similar or better accuracy at reduced computational expense.
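The loosely-coupled idea described here — IMU integration supplying a high-frequency initial guess that scan-to-map registration then refines — can be illustrated with a toy 2D sketch. The helper names, the planar state, and the simple point-to-point ICP are ours for illustration; the paper's actual pipeline is 3D and considerably more elaborate:

```python
import numpy as np

def integrate_imu(pose, gyro_z, vel, dt):
    """Predict the next 2D pose (x, y, yaw) from gyro and speed readings.
    This prediction serves as the initial guess for scan-to-map registration."""
    x, y, yaw = pose
    yaw_new = yaw + gyro_z * dt
    return np.array([x + vel * np.cos(yaw_new) * dt,
                     y + vel * np.sin(yaw_new) * dt,
                     yaw_new])

def icp_refine(scan, local_map, init_pose, iters=20):
    """Refine the predicted pose by aligning scan points to their nearest map
    points (point-to-point ICP with a closed-form SVD update per iteration)."""
    x, y, yaw = init_pose
    R = np.array([[np.cos(yaw), -np.sin(yaw)], [np.sin(yaw), np.cos(yaw)]])
    t = np.array([x, y])
    for _ in range(iters):
        moved = scan @ R.T + t
        # nearest-neighbour correspondences (brute force for the sketch)
        d = np.linalg.norm(moved[:, None, :] - local_map[None, :, :], axis=2)
        corr = local_map[np.argmin(d, axis=1)]
        # closed-form rigid alignment (Procrustes/Umeyama)
        mu_s, mu_m = moved.mean(0), corr.mean(0)
        H = (moved - mu_s).T @ (corr - mu_m)
        U, _, Vt = np.linalg.svd(H)
        dR = Vt.T @ U.T
        if np.linalg.det(dR) < 0:     # guard against reflections
            Vt[-1] *= -1
            dR = Vt.T @ U.T
        shift = mu_m - dR @ mu_s
        R, t = dR @ R, dR @ t + shift
    return R, t
```

The sketch also shows why the initial guess matters: nearest-neighbour data association is only correct when the predicted pose is already close, which is exactly what the IMU prediction provides between LiDAR scans.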

Article
Robust Visual-Inertial Navigation System for Low Precision Sensors under Indoor and Outdoor Environments
Remote Sens. 2021, 13(4), 772; https://doi.org/10.3390/rs13040772 - 20 Feb 2021
Abstract
Simultaneous Localization and Mapping (SLAM) has been a focus of robot navigation for decades and has become a research hotspot in recent years. A SLAM system based on a vision sensor is vulnerable to environmental illumination and texture, and the problem of initial scale ambiguity still exists in monocular SLAM. Fusing a monocular camera with an inertial measurement unit (IMU) can effectively resolve the scale ambiguity, improve the robustness of the system, and achieve higher positioning accuracy. Based on the monocular visual-inertial navigation system VINS-mono, a state-of-the-art fusion of monocular vision and IMU, this paper designs a new initialization scheme that estimates the accelerometer bias as a variable during the initialization process, so that it can be applied to low-cost IMU sensors. In addition, to obtain better initialization accuracy, a feature-point-based visual matching method is used to assist the initialization process. After initialization, the system switches to optical-flow tracking for visual positioning to reduce computational complexity, thereby fusing the advantages of the feature-point and optical-flow methods. To our knowledge, this is the first system to use both methods, and it achieves a better balance of positioning accuracy and robustness with low-cost sensors. Experiments on the EuRoC dataset and in a campus environment show that the initial values obtained through the initialization process can efficiently launch the nonlinear visual-inertial state estimator, and the positioning accuracy of the improved VINS-mono is about 10% better than that of VINS-mono.
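The point about estimating accelerometer bias during initialization can be illustrated with a deliberately simplified sketch. VINS-mono's actual initialization jointly solves for gravity, scale, velocities, and bias via visual-inertial alignment; the static-and-level special case below only shows why an unmodelled bias on a low-cost IMU would otherwise leak into every subsequent integration step (the function name and level-platform assumption are ours):

```python
import numpy as np

GRAVITY = 9.81  # m/s^2, assumed local gravity magnitude

def estimate_accel_bias(static_samples):
    """Estimate a constant accelerometer bias from samples collected while the
    platform is static and level: such an IMU should measure only the gravity
    reaction [0, 0, +g], so any residual mean is attributed to bias."""
    mean = np.asarray(static_samples, dtype=float).mean(axis=0)
    return mean - np.array([0.0, 0.0, GRAVITY])
```

Once estimated, the bias is subtracted from every specific-force sample before integration, which is what makes the scheme usable with low-cost sensors.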

Article
Autonomous Vehicle Localization with Prior Visual Point Cloud Map Constraints in GNSS-Challenged Environments
Remote Sens. 2021, 13(3), 506; https://doi.org/10.3390/rs13030506 - 31 Jan 2021
Abstract
Accurate vehicle ego-localization is key for autonomous vehicles to complete high-level navigation tasks. State-of-the-art localization methods adopt visual and light detection and ranging (LiDAR) simultaneous localization and mapping (SLAM) to estimate the position of the vehicle. However, both may suffer from error accumulation during long-term running without loop optimization or prior constraints, and the vehicle cannot always return to a revisited location, so errors accumulate in Global Navigation Satellite System (GNSS)-challenged environments. To solve this problem, we propose a novel localization method with prior dense visual point cloud map constraints generated by a stereo camera. First, the semi-global block matching (SGBM) algorithm is adopted to estimate the visual point cloud of each frame, and stereo visual odometry provides the initial position for the current visual point cloud. Second, multiple filtering steps and adaptive prior-map segmentation are performed on the prior dense visual point cloud map for fast matching and localization. Then, the current visual point cloud is matched with the candidate sub-map by the normal distributions transform (NDT). Finally, the matching result is used to update the pose prediction based on the last frame for accurate localization. Comprehensive experiments validate the proposed method, showing root mean square errors (RMSEs) of translation and rotation of less than 5.59 m and 0.08°, respectively.
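The SGBM step produces a disparity map; turning it into the per-frame visual point cloud uses standard stereo back-projection, sketched below. This is a generic pinhole-stereo sketch (not the authors' code), assuming square pixels with fy = fx:

```python
import numpy as np

def disparity_to_points(disp, fx, cx, cy, baseline):
    """Back-project a disparity map into a camera-frame point cloud with the
    pinhole stereo model: Z = fx * B / d, X = (u - cx) * Z / fx,
    Y = (v - cy) * Z / fx."""
    v, u = np.nonzero(disp > 0)           # keep valid disparities only
    d = disp[v, u].astype(float)
    Z = fx * baseline / d
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fx
    return np.column_stack([X, Y, Z])
```

Each frame's cloud, expressed in the camera frame like this, is what the stereo-odometry initial pose then places into the map frame for NDT matching.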

Article
Detection of a Moving UAV Based on Deep Learning-Based Distance Estimation
Remote Sens. 2020, 12(18), 3035; https://doi.org/10.3390/rs12183035 - 17 Sep 2020
Abstract
Distance information about an obstacle is important for obstacle avoidance in many applications and can be used to determine the potential risk of collision. This study proposes the detection of a moving fixed-wing unmanned aerial vehicle (UAV) with deep learning-based distance estimation, as a feasibility study of sense and avoid (SAA) and mid-air collision avoidance for UAVs, using a monocular camera to detect and track an incoming UAV. A quadrotor is regarded as the ownship UAV and estimates the distance of an incoming fixed-wing intruder. The adopted object detection method is based on the you only look once (YOLO) object detector. Deep neural network (DNN) and convolutional neural network (CNN) methods are applied to examine their performance in the distance estimation of moving objects. Feature extraction for fixed-wing UAVs is based on the VGG-16 model, and the result is fed to the distance network to estimate the object distance. The proposed model is trained using synthetic images from animation software and validated using both synthetic and real flight videos. The results show that the proposed active vision-based scheme detects and tracks a moving UAV with high detection accuracy and low distance errors.
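As a point of reference for learned monocular distance estimation, the classical geometric baseline under a pinhole camera relates distance to the apparent size of an object of known physical width. This is a generic formula for comparison, not the paper's distance network:

```python
def pinhole_distance(focal_px, real_width_m, bbox_width_px):
    """Geometric monocular distance baseline: under the pinhole model, an
    object of known physical width W (metres) appearing w pixels wide at
    focal length f (pixels) lies at distance f * W / w."""
    return focal_px * real_width_m / bbox_width_px
```

A learned distance network avoids this formula's main weakness: the intruder's physical size and orientation are generally unknown, so the apparent bounding-box width alone is ambiguous.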

Article
Robust TOA-Based UAS Navigation under Model Mismatch in GNSS-Denied Harsh Environments
Remote Sens. 2020, 12(18), 2928; https://doi.org/10.3390/rs12182928 - 10 Sep 2020
Abstract
Global Navigation Satellite Systems (GNSS) are the technology of choice for outdoor positioning, but have many limitations when used in safety-critical applications such as Intelligent Transportation Systems (ITS) and Unmanned Autonomous Systems (UAS). Namely, their performance clearly degrades in harsh propagation conditions and is not reliable under possible attacks or interference. Moreover, GNSS signals may not be available in so-called GNSS-denied environments, such as deep urban canyons or indoors, and standard GNSS architectures do not provide the precision needed in ITS. Among the alternatives, cellular signals (LTE/5G) may provide coverage in constrained urban environments, and Ultra-Wideband (UWB) ranging is a promising solution for high positioning accuracy. The key points impacting any time-of-arrival (TOA)-based navigation system are (i) the transmitters' geometry, (ii) perfectly known transmitter positions, and (iii) the environment. In this contribution, we analyze the performance loss of alternative TOA-based navigation systems in real-life applications that involve transmitter-position mismatch, harsh propagation environments, and GNSS-denied conditions. In addition, we propose new robust filtering methods able to cope with these effects up to a certain extent. Illustrative results in realistic scenarios support the discussion and show the performance improvement brought by the new methodologies with respect to the state-of-the-art.
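One common way to make a TOA position fix robust to the NLOS biases and transmitter-position mismatch discussed here is iteratively reweighted Gauss-Newton with an M-estimator. The sketch below uses Huber weights and MAD residual scaling — our illustrative choice, not the paper's proposed robust filters:

```python
import numpy as np

def huber_weights(res, k=1.345):
    """Huber M-estimator weights: unit weight for small residuals,
    k/|r| beyond the threshold, so outliers lose influence gradually."""
    a = np.abs(res)
    w = np.ones_like(a)
    w[a > k] = k / a[a > k]
    return w

def robust_toa_fix(anchors, ranges, x0, iters=10):
    """Iteratively reweighted Gauss-Newton TOA positioning: ranges with
    large residuals (NLOS, anchor-position mismatch) are down-weighted."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        diff = x - anchors                    # (n, 2)
        pred = np.linalg.norm(diff, axis=1)
        res = ranges - pred
        J = diff / pred[:, None]              # Jacobian of predicted ranges
        scale = max(np.median(np.abs(res)) * 1.4826, 1e-9)  # robust MAD scale
        W = np.diag(huber_weights(res / scale))
        dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ res)
        x = x + dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x
```

With clean measurements the weights stay at one and this reduces to ordinary Gauss-Newton; with a single strongly biased range the MAD scale keeps the bias from corrupting the weights of the good anchors.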

Article
The Design of a TDCP-Smoothed GNSS/Odometer Integration Scheme with Vehicular-Motion Constraint and Robust Regression
Remote Sens. 2020, 12(16), 2550; https://doi.org/10.3390/rs12162550 - 7 Aug 2020
Abstract
The global navigation satellite system (GNSS) is widely regarded as the primary positioning solution for intelligent transportation system (ITS) applications. However, its performance can degrade due to signal outages and faulty-signal contamination, including multipath and non-line-of-sight reception. Considering the performance and computational-load limitations of mass-produced automotive products, this research investigates methods for enhancing GNSS-based solutions without significantly increasing the cost of the vehicular navigation system. The odometer measurements available in modern vehicle designs are selected to integrate with the GNSS information, without using an inertial navigation system. Three techniques are implemented to improve positioning accuracy: (a) a time-differenced carrier phase (TDCP)-based filter, in which a state-augmented extended Kalman filter incorporates TDCP measurements to maximize the effectiveness of phase smoothing; (b) odometer-aided constraints, where the odometer's forward speed together with a lateral constraint enhances the state estimation, and vehicular-motion information comprising a zero-velocity constraint, fault detection and exclusion, and dead reckoning maintains the stability of the positioning solution; and (c) robust regression, a weighted-least-squares-based measurement-quality assessment that adaptively adjusts the weights of the measurements. Experimental results in a GNSS-challenging environment indicate that, in single-point-positioning mode with an automotive-grade receiver, the combination of the proposed methods achieved root-mean-square errors of 2.51 m, 3.63 m, 1.63 m, and 1.95 m in the horizontal, vertical, forward, and lateral directions, with improvements of 35.1%, 49.6%, 45.3%, and 21.1%, respectively. The statistical analysis shows that 97.3% of epochs had horizontal errors of less than 5 m, indicating that with the proposed methods the positioning performance can fulfill the requirements of road-level applications.
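Two of the ingredients above are simple enough to state directly: the TDCP observable, where differencing carrier phase across epochs cancels the integer ambiguity and yields a very precise range change, and the non-holonomic odometer constraint, where forward speed plus heading pins down the velocity vector. A minimal sketch (function names are ours, and GPS L1 is assumed for the wavelength):

```python
import numpy as np

L1_WAVELENGTH = 0.19029  # m, GPS L1 carrier wavelength (assumed signal)

def tdcp_delta_range(phase_prev_cycles, phase_curr_cycles,
                     wavelength=L1_WAVELENGTH):
    """Time-differenced carrier phase: differencing phase across epochs
    cancels the integer ambiguity (absent cycle slips), so the scaled
    difference is a low-noise measurement of the change in range."""
    return (phase_curr_cycles - phase_prev_cycles) * wavelength

def odometer_velocity(forward_speed, yaw):
    """Non-holonomic vehicle-motion constraint: a wheeled vehicle has
    (nearly) zero lateral and vertical velocity, so forward speed plus
    heading determines the full navigation-frame velocity vector."""
    return np.array([forward_speed * np.cos(yaw),
                     forward_speed * np.sin(yaw),
                     0.0])
```

In a filter these become extra measurement rows: the TDCP delta-ranges constrain the epoch-to-epoch displacement, while the constraint velocity suppresses lateral drift between GNSS fixes.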

Article
On the Recursive Joint Position and Attitude Determination in Multi-Antenna GNSS Platforms
Remote Sens. 2020, 12(12), 1955; https://doi.org/10.3390/rs12121955 - 17 Jun 2020
Abstract
Global Navigation Satellite Systems' (GNSS) carrier phase observations are fundamental to the provision of precise navigation for modern intelligent transportation applications. Differential precise positioning requires a base station near the vehicle location, while attitude determination requires the vehicle to be equipped with multiple GNSS antennas. In the GNSS context, positioning and attitude determination have traditionally been tackled separately, losing valuable correlation information, and the latter has been addressed only in batch form. The main goal of this contribution is to shed some light on the recursive joint estimation of position and attitude in multi-antenna GNSS platforms. We propose a new formulation for joint positioning and attitude (JPA) determination using quaternion rotations. A Bayesian recursive formulation for JPA is proposed, for which we derive a Kalman filter-like solution. To support the discussion and assess the performance of the new JPA, the proposed methodology is compared to standard approaches with actual data collected from a dynamic scenario under severe multipath effects.
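The quaternion machinery underlying such a formulation reduces to two primitives — the Hamilton product and vector rotation by a unit quaternion — sketched here in the [w, x, y, z] convention. This is an illustrative sketch of the algebra, not the paper's filter:

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([
        pw*qw - px*qx - py*qy - pz*qz,
        pw*qx + px*qw + py*qz - pz*qy,
        pw*qy - px*qz + py*qw + pz*qx,
        pw*qz + px*qy - py*qx + pz*qw,
    ])

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q: v' = q ⊗ (0, v) ⊗ q*."""
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])   # conjugate of a unit quaternion
    return quat_mul(quat_mul(q, np.concatenate([[0.0], v])), qc)[1:]
```

Compared with Euler angles, the quaternion parameterization is singularity-free, which is one reason it suits recursive attitude estimation.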

Article
The Performance Analysis of INS/GNSS/V-SLAM Integration Scheme Using Smartphone Sensors for Land Vehicle Navigation Applications in GNSS-Challenging Environments
Remote Sens. 2020, 12(11), 1732; https://doi.org/10.3390/rs12111732 - 28 May 2020
Abstract
Modern smartphones contain embedded global navigation satellite system (GNSS) chipsets, inertial measurement units (IMUs), cameras, and other sensors capable of providing user position, velocity, and attitude. However, it is difficult to realize the full navigation performance of smartphones due to their low-cost, disparate sensors, the software technologies adopted by manufacturers, and the significant influence of environmental conditions. In this study, we proposed a scheme that integrates sensor data from smartphone IMUs, GNSS chipsets, and cameras using an extended Kalman filter (EKF) to enhance navigation performance. The visual data from the camera were preprocessed using ORB-SLAM (oriented FAST and rotated BRIEF simultaneous localization and mapping), rescaled by applying GNSS measurements, and converted to velocity data before being used to update the integration filter. To verify the performance of the integrated system, field test data were collected in a downtown area of Tainan City, Taiwan. Experimental results indicated that the visual data contributed significantly to navigation accuracy, with improvements of 43.0% and 51.3% in position and velocity, respectively. The proposed integrated system using smartphone sensor data proved efficient in increasing navigation accuracy in GNSS-challenging environments.
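One detail worth unpacking is the rescaling of monocular ORB-SLAM output with GNSS: a monocular trajectory is only known up to scale, and a least-squares ratio between GNSS and SLAM displacement magnitudes recovers the metric scale. A minimal sketch of that idea — our formulation, not necessarily the paper's exact procedure:

```python
import numpy as np

def recover_scale(slam_positions, gnss_positions):
    """Estimate the metric scale of an up-to-scale monocular SLAM trajectory
    by least-squares fitting the ratio between the magnitudes of consecutive
    GNSS displacements and the corresponding SLAM displacements."""
    d_slam = np.diff(np.asarray(slam_positions, dtype=float), axis=0)
    d_gnss = np.diff(np.asarray(gnss_positions, dtype=float), axis=0)
    n_slam = np.linalg.norm(d_slam, axis=1)
    n_gnss = np.linalg.norm(d_gnss, axis=1)
    # minimize sum (n_gnss - s * n_slam)^2  ->  s = Σ n_gnss·n_slam / Σ n_slam²
    return np.sum(n_gnss * n_slam) / np.sum(n_slam ** 2)
```

Multiplying the SLAM-derived velocities by this scale makes them commensurate with the IMU and GNSS states before the EKF update.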

Article
Robust Visual-Inertial Integrated Navigation System Aided by Online Sensor Model Adaption for Autonomous Ground Vehicles in Urban Areas
Remote Sens. 2020, 12(10), 1686; https://doi.org/10.3390/rs12101686 - 25 May 2020
Abstract
The visual-inertial integrated navigation system (VINS) has been extensively studied over the past decades to provide accurate and low-cost positioning solutions for autonomous systems. Satisfactory performance can be obtained in an ideal scenario with sufficient, static environmental features. However, deep urban areas usually contain numerous dynamic objects, and these moving objects can severely distort the feature-tracking process that is critical to feature-based VINS. One well-known mitigation is to detect vehicles using deep neural networks and remove the features belonging to surrounding vehicles. However, excessive feature exclusion can severely distort the geometry of the feature distribution, leaving limited visual measurements. Instead of directly eliminating the features from dynamic objects, this study adapts the visual measurement model based on the quality of feature tracking to improve the performance of the VINS. First, a self-tuning covariance estimation approach is proposed to model the uncertainty of each feature measurement by integrating two parts: (1) the geometry of the feature distribution (GFD); and (2) the quality of feature tracking. Second, an adaptive M-estimator is proposed to correct the measurement residual model and further mitigate the effects of outlier measurements, such as dynamic features. Unlike the conventional M-estimator, the proposed method effectively alleviates the reliance on excessive parameterization. Experiments conducted in typical urban areas of Hong Kong with numerous dynamic objects show that the proposed method effectively mitigates the effects of dynamic objects and improves the accuracy of the VINS compared with the conventional method.
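The self-tuning covariance idea — trusting long, consistently tracked features more than short, jittery ones instead of deleting suspect features — can be caricatured in a few lines. The quality heuristic below is entirely hypothetical, for illustration only; the paper combines feature-distribution geometry with tracking quality in its own model:

```python
import numpy as np

def adaptive_pixel_sigma(base_sigma, track_length, avg_reproj_err):
    """Hypothetical self-tuning measurement noise: features tracked over more
    frames with low reprojection error get a smaller standard deviation and
    pull harder on the optimizer; short, jittery tracks (often on dynamic
    objects) are inflated instead of being removed outright."""
    quality = track_length / (1.0 + avg_reproj_err ** 2)
    return base_sigma / np.sqrt(max(quality, 1e-6))
```

Inflating the covariance rather than excluding features preserves the geometry of the feature distribution, which is exactly the failure mode of hard exclusion that the abstract describes.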

Article
Navigation Engine Design for Automated Driving Using INS/GNSS/3D LiDAR-SLAM and Integrity Assessment
Remote Sens. 2020, 12(10), 1564; https://doi.org/10.3390/rs12101564 - 14 May 2020
Abstract
Automated driving has made considerable progress recently, and multisensor fusion is a game changer in making self-driving cars possible: in the near future, multisensor fusion will be necessary to meet the high accuracy needs of automated driving systems. This paper proposes a multisensor fusion design, including an inertial navigation system (INS), a global navigation satellite system (GNSS), and light detection and ranging (LiDAR), to implement 3D simultaneous localization and mapping (INS/GNSS/3D LiDAR-SLAM). The proposed fusion structure enhances the conventional INS/GNSS/odometer design by compensating for individual drawbacks such as INS drift and error-contaminated GNSS. First, a highly integrated INS-aided LiDAR-SLAM is presented to improve performance and increase robustness across varied environments using reliable initial values from the INS. Second, the proposed fault detection and exclusion (FDE) helps SLAM eliminate failure solutions such as local solutions or algorithm divergence. Third, a SLAM position-velocity-acceleration (PVA) model is used to handle highly dynamic movement. Finally, an integrity assessment allows the central fusion filter to keep failed measurements out of the update process based on information from the INS-aided SLAM, which increases reliability and accuracy. Consequently, the proposed multisensor design can deal with situations such as long-term GNSS outages, deep urban areas, and highways. The results show that the proposed method can achieve accuracy of under 1 m in challenging scenarios, demonstrating its potential to contribute to autonomous systems.
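FDE and integrity-assessment steps of this kind typically rest on an innovation (residual) gate: a measurement is excluded from the fusion filter when its normalized innovation squared exceeds a chi-square threshold. A minimal sketch — the gate design here is generic, not the paper's, and the thresholds are the standard 95% chi-square table values:

```python
import numpy as np

# 95% chi-square gates for innovation dimensions 1-3 (standard table values).
CHI2_GATE_95 = {1: 3.841, 2: 5.991, 3: 7.815}

def innovation_gate(z, z_pred, S, gate):
    """Fault detection via the normalized innovation squared (NIS) test:
    nu^T S^-1 nu is chi-square distributed under nominal conditions, so a
    measurement beyond the gate is excluded from the filter update."""
    nu = z - z_pred                          # innovation
    nis = float(nu @ np.linalg.solve(S, nu))
    return nis <= gate, nis
```

Gating before the update is what keeps a single failed SLAM or GNSS solution from dragging the central fusion filter away from the truth during an outage.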
