
3D Point Clouds for Intelligent Road Transportation Systems

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Remote Sensors".

Deadline for manuscript submissions: closed (15 April 2020) | Viewed by 15095

Special Issue Editors


Guest Editor
Normandie University, UNIROUEN, LITIS, 76000 Rouen, France
Interests: robotics; computer vision; 3D data analysis; LiDAR; intelligent transportation systems

Guest Editor
Normandie University, UNIROUEN, ESIGELEC, IRSEEM, 76000 Rouen, France
Interests: robotics; computer vision; 3D data analysis; LiDAR; intelligent transportation systems

Special Issue Information

Dear Colleagues,

Three-dimensional (3D) information is regarded as a key form of data for intelligent road transportation systems. Recent advances in 3D imaging systems, such as LiDAR and RGBD cameras, have enabled us to obtain denser point clouds at lower cost. Additionally, 3D perception systems can be fused with other sensors, such as IMUs, cameras, and radars, to provide augmented information.

This Special Issue aims to present recent advances in the processing and application of 3D data for intelligent transportation systems. This includes challenges in calibrating devices and building accurate 3D point clouds, as well as high-level applications such as localization, scene understanding, and behavior analysis.

Therefore, we kindly invite you to submit a paper to this Special Issue on “3D Point Clouds for Intelligent Road Transportation Systems”. Potential topics include, but are not limited to:

  • behavior analysis and trajectory prediction;
  • heterogeneous sensor calibration;
  • localization and SLAM (simultaneous localization and mapping);
  • mobile mapping and point cloud registration;
  • object detection and pattern recognition;
  • understanding road scenes.

Prof. Dr. Pascal VASSEUR
Dr. Yohan DUPUIS
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, you can go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • behavior analysis
  • heterogeneous sensor calibration
  • localization
  • mobile mapping
  • object detection
  • odometry
  • pattern recognition
  • point cloud registration
  • understanding road scenes
  • SLAM
  • trajectory prediction

Published Papers (3 papers)


Research

21 pages, 8049 KiB  
Article
Cross-Section Deformation Analysis and Visualization of Shield Tunnel Based on Mobile Tunnel Monitoring System
by Haili Sun, Shuang Liu, Ruofei Zhong and Liming Du
Sensors 2020, 20(4), 1006; https://doi.org/10.3390/s20041006 - 13 Feb 2020
Cited by 29 | Viewed by 3740
Abstract
With the ongoing developments in laser scanning technology, applications for describing tunnel deformation using rich point cloud data have become a significant topic of investigation. This study describes an independently developed mobile tunnel monitoring system, the second version of the Capital Normal University Tunnel Scan (CNU-TS-2), used for data acquisition; it has an electric system to control its forward speed and is compatible with various laser scanners, such as the Faro and Leica models. A comparison with corresponding data acquired by a total station demonstrates that the data collected by CNU-TS-2 are accurate. Following data acquisition, the overall and local deformation of the tunnel is determined by denoising and 360° deformation analysis of the point cloud data. To enhance the expression of the analysis results, this study proposes expanding the tunnel point cloud into a two-dimensional image via cylindrical projection, then expressing the tunnel deformation through color difference to visualize it. Compared with three-dimensional modeling methods of visualization, this method is easier to implement and facilitates storage. In addition, it is conducive to comprehensive analysis of problems such as water leakage in the tunnel, thereby achieving the effect of multiple uses for a single image.
(This article belongs to the Special Issue 3D Point Clouds for Intelligent Road Transportation Systems)
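The cylindrical unrolling described in the abstract can be illustrated with a minimal sketch. Everything here — the function name, grid resolutions, and the assumption that the tunnel axis is roughly the x-axis with the cross-section in the (y, z) plane — is our own illustration, not the paper's implementation.

```python
import numpy as np

def unroll_tunnel(points, n_angle=360, n_axial=100):
    """Unroll a tunnel point cloud into a 2D grid (axial bin x angle bin),
    storing the mean radial distance per cell. Deviation from the design
    radius can then be color-coded to visualize deformation."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    theta = np.arctan2(z, y)              # angle around the tunnel axis
    r = np.hypot(y, z)                    # radial distance from the axis
    a = ((theta + np.pi) / (2 * np.pi) * n_angle).astype(int) % n_angle
    b = np.clip(((x - x.min()) / (np.ptp(x) + 1e-9) * n_axial).astype(int),
                0, n_axial - 1)
    img = np.zeros((n_axial, n_angle))
    cnt = np.zeros((n_axial, n_angle))
    np.add.at(img, (b, a), r)             # accumulate radii per cell
    np.add.at(cnt, (b, a), 1)             # count points per cell
    return np.divide(img, cnt, out=np.zeros_like(img), where=cnt > 0)
```

For a perfectly cylindrical tunnel every filled cell holds the design radius, so a deformation map is simply the difference between this image and that constant.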

18 pages, 21183 KiB  
Article
Voxel-FPN: Multi-Scale Voxel Feature Aggregation for 3D Object Detection from LIDAR Point Clouds
by Hongwu Kuang, Bei Wang, Jianping An, Ming Zhang and Zehan Zhang
Sensors 2020, 20(3), 704; https://doi.org/10.3390/s20030704 - 28 Jan 2020
Cited by 125 | Viewed by 7408
Abstract
Object detection in point cloud data is one of the key components of computer vision systems, especially for autonomous driving applications. In this work, we present Voxel-Feature Pyramid Network, a novel one-stage 3D object detector that utilizes raw data from LIDAR sensors only. The core framework consists of an encoder network and a corresponding decoder, followed by a region proposal network. The encoder extracts and fuses multi-scale voxel information in a bottom-up manner, whereas the decoder fuses feature maps from various scales via a Feature Pyramid Network in a top-down way. Extensive experiments show that the proposed method performs better at extracting features from point data and demonstrate its superiority over several baselines on the challenging KITTI-3D benchmark, obtaining good performance in both speed and accuracy in real-world scenarios.
(This article belongs to the Special Issue 3D Point Clouds for Intelligent Road Transportation Systems)
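The multi-scale voxel idea can be sketched in miniature: voxelize the same point cloud at two resolutions, then upsample the coarse map (the top-down path) and stack it with the fine one. The function names, grid sizes, and the use of a bird's-eye-view grid with point height as a toy per-voxel feature are our assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def voxelize(points, voxel_size, grid):
    """Mean-pool a per-point feature (here, z) into a 2D bird's-eye-view grid."""
    idx = np.clip((points[:, :2] / voxel_size).astype(int),
                  0, np.array(grid) - 1)
    feat = np.zeros(grid)
    cnt = np.zeros(grid)
    np.add.at(feat, (idx[:, 0], idx[:, 1]), points[:, 2])
    np.add.at(cnt, (idx[:, 0], idx[:, 1]), 1)
    return np.divide(feat, cnt, out=np.zeros_like(feat), where=cnt > 0)

def multi_scale_features(points):
    """Bottom-up: voxelize at two scales; top-down: upsample the coarse map
    and stack it with the fine map into a fused multi-scale feature tensor."""
    fine = voxelize(points, 1.0, (8, 8))
    coarse = voxelize(points, 2.0, (4, 4))
    up = np.kron(coarse, np.ones((2, 2)))   # nearest-neighbor upsample
    return np.stack([fine, up])
```

In the actual detector the pooled features are learned embeddings and the top-down fusion uses convolutions, but the scale-pyramid structure is the same.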

17 pages, 1358 KiB  
Article
Rapid Motion Segmentation of LiDAR Point Cloud Based on a Combination of Probabilistic and Evidential Approaches for Intelligent Vehicles
by Kichun Jo, Sumyeong Lee, Chansoo Kim and Myoungho Sunwoo
Sensors 2019, 19(19), 4116; https://doi.org/10.3390/s19194116 - 23 Sep 2019
Cited by 5 | Viewed by 3106
Abstract
Point clouds from light detection and ranging (LiDAR) sensors represent increasingly important information for environmental object detection and classification in automated and intelligent vehicles. Objects in the driving environment can be classified as either dynamic or static depending on their movement characteristics. A LiDAR point cloud is likewise segmented into dynamic and static points based on the motion properties of the measured objects. The segmented motion information of a point cloud can be useful for various functions in automated and intelligent vehicles. This paper presents a fast motion segmentation algorithm that segments a LiDAR point cloud into dynamic and static points in real time. The segmentation algorithm classifies the motion of the latest point cloud based on the LiDAR's laser beam characteristics and the geometrical relationship between consecutive LiDAR point clouds. To accurately and reliably estimate the motion state of each LiDAR point in light of the measurement uncertainty, both probability theory and evidence theory are employed in the segmentation algorithm. The probabilistic and evidential algorithm segments the point cloud into three classes: dynamic, static, and unknown. Points are placed in the unknown class when the LiDAR point cloud is not sufficient for motion segmentation. The point motion segmentation algorithm was evaluated quantitatively and qualitatively through experimental comparisons with previous motion segmentation methods.
(This article belongs to the Special Issue 3D Point Clouds for Intelligent Road Transportation Systems)
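The evidential side of such an approach can be illustrated with Dempster's rule of combination on the frame {dynamic, static}, where mass assigned to the full frame plays the role of the unknown class. The mass values, decision threshold, and function names below are illustrative assumptions, not the authors' parameters.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions over {dynamic, static}; 'unknown' is the
    mass on the full frame (total ignorance). Conflicting mass (one source
    says dynamic, the other static) is discarded and renormalized out."""
    conflict = m1["dynamic"] * m2["static"] + m1["static"] * m2["dynamic"]
    combined = {}
    for k in ("dynamic", "static"):
        combined[k] = (m1[k] * m2[k]
                       + m1[k] * m2["unknown"]
                       + m1["unknown"] * m2[k]) / (1 - conflict)
    combined["unknown"] = m1["unknown"] * m2["unknown"] / (1 - conflict)
    return combined

def classify(m, threshold=0.6):
    """Assign a point to a class only when its mass clears the threshold."""
    if m["dynamic"] > threshold:
        return "dynamic"
    if m["static"] > threshold:
        return "static"
    return "unknown"
```

Fusing evidence from consecutive scans this way sharpens the belief in dynamic or static, while weak or conflicting evidence keeps a point in the unknown class.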
