Sensing and Semantic Perception in Autonomous Driving

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (1 November 2022) | Viewed by 9142

Special Issue Editors


Prof. Dr. Eren Erdal Aksoy
Guest Editor
School of Information Technology, Halmstad University, 30118 Halmstad, Sweden
Interests: semantic perception; autonomous robots; imitation learning

Prof. Dr. Abhinav Valada
Co-Guest Editor
Robot Learning Lab, University of Freiburg, Freiburg, Germany
Interests: robot learning; scene understanding; navigation

Prof. Dr. Jean-Emmanuel Deschaud
Co-Guest Editor
Department of Robotics and Computer Science, MINES ParisTech, PSL University, Paris, France
Interests: SLAM; 3D vision; deep learning

Special Issue Information

Dear Colleagues,

Sensing is an essential requirement for understanding the surroundings of autonomous systems. Semantic perception accelerates this contextual understanding by predicting a meaningful class label for each sensed data point. Fine-grained semantic perception of the sensed data stream can, to a great extent, help reach full autonomy in safety-critical systems such as autonomous vehicles.

This Special Issue aims to capture the latest developments in sensing technologies and semantic perception algorithms in the field of autonomous driving. We aim to compare state-of-the-art approaches to generic sensing and semantic perception from both the computer vision and robotics communities. Furthermore, this Special Issue will gather information about advanced deep neural network models for semantic environment perception. We therefore call for submissions of novel solutions for AI-based smart sensing methodologies and intelligent perception approaches.

Prof. Dr. Eren Erdal Aksoy
Prof. Dr. Abhinav Valada
Prof. Dr. Jean-Emmanuel Deschaud
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Sensing methodologies
  • Smart sensors
  • Semantic perception
  • Semantic segmentation
  • Panoptic segmentation
  • Object detection
  • Deep learning

Published Papers (3 papers)

Research

12 pages, 3339 KiB  
Article
Supervised Object-Specific Distance Estimation from Monocular Images for Autonomous Driving
by Yury Davydov, Wen-Hui Chen and Yu-Chen Lin
Sensors 2022, 22(22), 8846; https://doi.org/10.3390/s22228846 - 16 Nov 2022
Cited by 4 | Viewed by 2158
Abstract
Accurate distance estimation is a requirement for advanced driver assistance systems (ADAS) to provide drivers with safety-related functions such as adaptive cruise control and collision avoidance. Radars and lidars can be used to provide distance information; however, they are either expensive or provide poor object information compared to image sensors. In this study, we propose a lightweight convolutional deep learning model that can extract object-specific distance information from monocular images. We explore a variety of training settings and five structural settings of the model and conduct various tests on the KITTI dataset, evaluating seven different road agents, namely, person, bicycle, car, motorcycle, bus, train, and truck. Additionally, in all experiments, a comparison with the Monodepth2 model is carried out. Experimental results show that the proposed model outperforms Monodepth2 by 15% in terms of the average weighted mean absolute error (MAE).
(This article belongs to the Special Issue Sensing and Semantic Perception in Autonomous Driving)
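The paper's exact architecture is not reproduced on this page, but as a rough illustration of the general idea, object-specific distance regression from a monocular image, the following PyTorch sketch pairs a small convolutional backbone over an object crop with a class-conditioned regression head trained with an L1 (MAE-style) loss. All names, layer sizes, the 64x64 crop resolution, and the use of a class one-hot input are illustrative assumptions, not details from the paper.

    import torch
    import torch.nn as nn

    NUM_CLASSES = 7  # person, bicycle, car, motorcycle, bus, train, truck

    class DistanceRegressor(nn.Module):
        def __init__(self, num_classes: int = NUM_CLASSES):
            super().__init__()
            # Lightweight convolutional backbone over a fixed-size object crop.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            # Fuse image features with the object's class label so the model
            # can exploit class-specific size priors (a truck vs. a person).
            self.head = nn.Sequential(
                nn.Linear(64 + num_classes, 32), nn.ReLU(),
                nn.Linear(32, 1),  # predicted distance (assumed in metres)
            )

        def forward(self, crop: torch.Tensor, class_onehot: torch.Tensor) -> torch.Tensor:
            feat = self.backbone(crop).flatten(1)
            return self.head(torch.cat([feat, class_onehot], dim=1)).squeeze(1)

    # Supervised training with an L1 objective matches the MAE-based evaluation.
    model = DistanceRegressor()
    crops = torch.randn(8, 3, 64, 64)                               # dummy object crops
    labels = torch.eye(NUM_CLASSES)[torch.randint(0, NUM_CLASSES, (8,))]
    gt_dist = torch.rand(8) * 80                                    # dummy ground truth, 0-80 m
    loss = nn.functional.l1_loss(model(crops, labels), gt_dist)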

15 pages, 4986 KiB  
Article
An Efficient Ensemble Deep Learning Approach for Semantic Point Cloud Segmentation Based on 3D Geometric Features and Range Images
by Muhammed Enes Atik and Zaide Duran
Sensors 2022, 22(16), 6210; https://doi.org/10.3390/s22166210 - 18 Aug 2022
Cited by 8 | Viewed by 2421
Abstract
Mobile light detection and ranging (LiDAR) sensor point clouds are used in many fields, such as road network management, architecture and urban planning, and 3D High Definition (HD) city maps for autonomous vehicles. Semantic segmentation of mobile point clouds is critical for these tasks. In this study, we present a robust and effective deep learning-based point cloud semantic segmentation method. Semantic segmentation is applied to range images produced from the point cloud via spherical projection: irregular 3D mobile point clouds are transformed into a regular form by projecting them onto a plane to generate a 2D representation, which is fed to the proposed network. A local geometric feature vector is also calculated for each point. Optimum parameter experiments were performed to obtain the best semantic segmentation results. The proposed technique, called SegUNet3D, is an ensemble approach based on the combination of the U-Net and SegNet algorithms. SegUNet3D was compared with five different segmentation algorithms on two challenging datasets: SemanticPOSS covers an urban area, whereas RELLIS-3D covers an off-road environment. The study demonstrates that the proposed approach is superior to the other methods in terms of mean Intersection over Union (mIoU) on both datasets, improving mIoU by up to 15.9% on SemanticPOSS and up to 5.4% on RELLIS-3D.
(This article belongs to the Special Issue Sensing and Semantic Perception in Autonomous Driving)
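Spherical projection of a LiDAR sweep onto a range image is a standard operation, so a minimal sketch can make the abstract's pipeline concrete. Below, each 3D point is mapped to a range-image pixel via its yaw and pitch angles; the 64x1024 resolution and the vertical field of view are illustrative, Velodyne-like assumptions rather than values taken from the paper.

    import numpy as np

    def spherical_projection(points: np.ndarray,
                             h: int = 64, w: int = 1024,
                             fov_up: float = 3.0, fov_down: float = -25.0) -> np.ndarray:
        """Project an (N, 3) point cloud onto an (h, w) range image."""
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.linalg.norm(points, axis=1) + 1e-8       # range per point
        yaw = np.arctan2(y, x)                          # horizontal angle
        pitch = np.arcsin(z / r)                        # vertical angle

        fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
        u = 0.5 * (1.0 - yaw / np.pi) * w               # column index from yaw
        v = (1.0 - (pitch - fov_down_r) / (fov_up_r - fov_down_r)) * h

        u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
        v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

        image = np.zeros((h, w), dtype=np.float32)
        # Write farthest points first so nearer points overwrite them.
        order = np.argsort(r)[::-1]
        image[v[order], u[order]] = r[order]
        return image

    cloud = np.random.randn(10000, 3) * 10              # dummy point cloud
    range_image = spherical_projection(cloud)           # (64, 1024) range image

In a full pipeline of this kind, per-point attributes (intensity, the geometric feature vector, and so on) would be projected into additional channels of the same image before segmentation.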

17 pages, 31428 KiB  
Article
Fast and Accurate Lane Detection via Graph Structure and Disentangled Representation Learning
by Yulin He, Wei Chen, Chen Li, Xin Luo and Libo Huang
Sensors 2021, 21(14), 4657; https://doi.org/10.3390/s21144657 - 7 Jul 2021
Cited by 4 | Viewed by 3181
Abstract
It is desirable to maintain high accuracy and runtime efficiency at the same time in lane detection. However, due to the long, thin shape of lanes, extracting features with both strong discrimination and perception abilities requires a large amount of computation, which severely slows down inference. We therefore design a more efficient way to extract lane features, comprising two phases: (1) local feature extraction, which sets a series of predefined anchor lines and extracts local features at their locations; and (2) global feature aggregation, which treats the local features as nodes of a graph and builds a fully connected graph by adaptively learning the distance between nodes, so that the global feature can finally be aggregated through weighted summation. Another problem that limits performance is information loss during feature compression, mainly due to the huge dimensional gap, e.g., from 512 to 8. To handle this issue, we propose a feature compression module based on disentangled representation learning. This module effectively learns the statistical information and spatial relationships between features, so that redundancy is greatly reduced and more critical information is retained. Extensive experimental results show that our proposed method is both fast and accurate: on the TuSimple and CULane benchmarks, it achieves F1 values of 96.81% and 75.49%, respectively, at a running speed of 248 FPS.
(This article belongs to the Special Issue Sensing and Semantic Perception in Autonomous Driving)
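The global feature aggregation phase described above, a fully connected graph over anchor-line features with adaptively learned pairwise distances and weighted summation, follows the same general pattern as attention. The PyTorch sketch below shows that pattern only; the query/key projections, the feature dimension, and all names are illustrative assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    class GlobalGraphAggregation(nn.Module):
        def __init__(self, dim: int = 64):
            super().__init__()
            # Learned projections used to compute pairwise node affinities.
            self.query = nn.Linear(dim, dim)
            self.key = nn.Linear(dim, dim)

        def forward(self, nodes: torch.Tensor) -> torch.Tensor:
            """nodes: (B, N, dim) local features from N anchor lines."""
            q, k = self.query(nodes), self.key(nodes)
            # Adaptively learned "distance" between every pair of nodes,
            # normalised into aggregation weights over the full graph.
            affinity = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
            # Global feature per node: weighted sum over all nodes.
            return affinity @ nodes

    agg = GlobalGraphAggregation(dim=64)
    local_feats = torch.randn(2, 32, 64)   # 32 anchor-line features per image
    global_feats = agg(local_feats)        # same shape, globally aggregated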
