
Special Issue "Advanced Computer Vision Techniques for Autonomous Driving"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 31 August 2021.

Special Issue Editors

Prof. Dr. M. Hassaballah
Guest Editor
Department of Computer Science, Faculty of Computers and Information, South Valley University, Qena, Egypt
Interests: computer vision; image processing; object detection and tracking; scene understanding; deep learning; biometrics; security
Prof. Dr. Zhengming Ding
Guest Editor
Department of Computer Science, Tulane University, New Orleans, LA 70118, USA
Interests: transfer learning; domain adaptation; deep learning and multi-view learning; vehicle and person re-identification
Dr. Senthil Yogamani
Guest Editor
AI Architect, Autonomous Driving, Valeo Vision Systems, Ireland
Interests: autonomous driving; computer vision; deep learning; semantic segmentation; automated parking systems

Special Issue Information

Dear Colleagues,

Autonomous driving (AD) refers to self-driving vehicles or any transport system that moves without human intervention. Automotive systems are equipped with cameras and sensors to cover the full field of view and range. The sensor architecture in AD typically includes multiple sets of cameras, radars, and LiDARs, as well as GPS-GNSS for absolute localization and inertial measurement units that provide the 3D pose of the vehicle in space. A representation of the environment state, or scene understanding, is used by a decision-making system to produce the final driving policy; it can be achieved by combining several perception or computer vision tasks such as semantic segmentation, motion estimation, depth estimation, and soiling detection. Computer vision is thus a key technique in AD technologies, and there is a need to explore new and emerging trends in computer vision for autonomous driving. This Special Issue aims to address the most up-to-date impact of computer vision on the progress of autonomous driving research. Topics of interest include but are not limited to:

  • New trends on vision and sensors for autonomous driving;
  • Vision‐based traffic flow analysis and smart vehicle technologies;
  • Vehicle trajectory prediction in autonomous driving;
  • Vehicle classification and semantic segmentation;
  • Traffic sign detection, recognition, and scene understanding;
  • Detection, tracking, learning, and predicting on-road pedestrian behavior;
  • Object detection and tracking on Fisheye cameras for autonomous driving;
  • Unsupervised, weakly-supervised, and reinforcement deep learning.

Prof. Dr. M. Hassaballah
Prof. Dr. Zhengming Ding
Dr. Senthil Yogamani
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

 
 

Keywords

  • Autonomous driving
  • Computer vision
  • Automated parking
  • Scene understanding
  • Vehicle trajectory prediction
  • Object detection and tracking
  • Semantic segmentation
  • Tracking using a LiDAR sensor
  • Deep neural network models
  • Smart sensors
 
 

Published Papers (2 papers)


Research


Article
Transfer Learning Based Semantic Segmentation for 3D Object Detection from Point Cloud
Sensors 2021, 21(12), 3964; https://doi.org/10.3390/s21123964 - 08 Jun 2021
Abstract
Three-dimensional object detection from LiDAR point cloud data is an indispensable part of autonomous driving perception systems. Point cloud-based 3D object detection offers higher accuracy than cameras at nighttime. However, most LiDAR-based 3D object detection methods work in a supervised manner, which means their state-of-the-art performance relies heavily on large-scale, well-labeled datasets, while such annotated datasets can be expensive to obtain and are only available for limited scenarios. Transfer learning is a promising approach to reducing the requirement for large-scale training datasets, but existing transfer learning object detectors are primarily designed for 2D rather than 3D object detection. In this work, we use the 3D point cloud data more effectively by representing the scene as a bird's-eye-view (BEV) map and propose a transfer learning based point cloud semantic segmentation method for 3D object detection. The proposed model minimizes the need for large-scale training datasets and consequently reduces training time. First, a preprocessing stage filters the raw point cloud data into a BEV map within a specific field of view. Second, the transfer learning stage uses knowledge from a previously learned classification task (with more data for training) and generalizes it to the semantic segmentation based 2D object detection task. Finally, the 2D detection results from the BEV image are back-projected into 3D in the postprocessing stage. We verify the results on two datasets, the KITTI 3D object detection dataset and the Ouster LiDAR-64 dataset, demonstrating that the proposed method is highly competitive in terms of mean average precision (mAP up to 70%) while still running at more than 30 frames per second (FPS).
(This article belongs to the Special Issue Advanced Computer Vision Techniques for Autonomous Driving)
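The abstract above outlines a three-stage pipeline: rasterize the LiDAR point cloud into a BEV map, run a segmentation-based 2D detector on that map, and back-project the 2D boxes into 3D. As a rough illustration of the first stage only (this is not the authors' code; the field-of-view limits, grid resolution, and channel choices are assumptions), BEV rasterization can be sketched as follows:

```python
# Minimal sketch: rasterizing a LiDAR point cloud into a bird's-eye-view (BEV)
# map with max-height and point-density channels. Ranges and resolution are
# illustrative assumptions, not values from the paper.
import numpy as np

def point_cloud_to_bev(points, x_range=(0.0, 70.0), y_range=(-40.0, 40.0),
                       z_range=(-2.5, 1.5), resolution=0.1):
    """points: (N, 3) array of x, y, z in the LiDAR frame.
    Returns an (H, W, 2) BEV map."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]

    # Keep only points inside the chosen field of view.
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]) &
            (z >= z_range[0]) & (z < z_range[1]))
    x, y, z = x[mask], y[mask], z[mask]

    # Discretize metric coordinates into BEV pixel indices.
    cols = ((x - x_range[0]) / resolution).astype(np.int32)
    rows = ((y - y_range[0]) / resolution).astype(np.int32)
    H = int((y_range[1] - y_range[0]) / resolution)
    W = int((x_range[1] - x_range[0]) / resolution)

    bev = np.zeros((H, W, 2), dtype=np.float32)
    # Channel 0: maximum height per cell; channel 1: log point density.
    np.maximum.at(bev[:, :, 0], (rows, cols), z - z_range[0])
    np.add.at(bev[:, :, 1], (rows, cols), 1.0)
    bev[:, :, 1] = np.log1p(bev[:, :, 1])
    return bev
```

A 2D detector trained via transfer learning would then operate on this BEV image, and its boxes could be lifted back to 3D using the known grid resolution and per-cell height statistics.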

Review


Review
Automatic Number Plate Recognition: A Detailed Survey of Relevant Algorithms
Sensors 2021, 21(9), 3028; https://doi.org/10.3390/s21093028 - 26 Apr 2021
Abstract
Technologies and services for smart vehicles and Intelligent Transportation Systems (ITS) continue to revolutionize many aspects of human life. This paper presents a detailed survey of current techniques and advancements in Automatic Number Plate Recognition (ANPR) systems, with a comprehensive performance comparison of various real-time tested and simulated algorithms, including those involving computer vision (CV). ANPR technology is able to detect and recognize vehicles by their number plates using recognition techniques. Even with the best algorithms, a successful ANPR deployment may require additional hardware to maximize its accuracy. Number plate condition, non-standardized formats, complex scenes, camera quality, camera mount position, tolerance to distortion, motion blur, contrast problems, reflections, processing and memory limitations, environmental conditions, indoor/outdoor or day/night shots, and software or hardware based constraints may all undermine its performance. This inconsistency, challenging environments, and other complexities make ANPR an interesting field for researchers. The Internet of Things is beginning to shape the future of many industries and is paving new ways for ITS. ANPR can be well utilized by integrating it with RFID systems, GPS, Android platforms, and other similar technologies. Deep learning techniques are widely utilized in the CV field for better detection rates. This research aims to advance the state of knowledge in ITS (ANPR) built on CV algorithms by citing relevant prior work, analyzing and presenting a survey of extraction, segmentation, and recognition techniques, and providing guidelines on future trends in this area.
(This article belongs to the Special Issue Advanced Computer Vision Techniques for Autonomous Driving)
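For readers unfamiliar with the pipeline surveyed above, a classical ANPR front end typically localizes plate candidates before segmentation and character recognition. The sketch below illustrates only that localization step with OpenCV (it is not taken from the survey and assumes OpenCV 4); the filter sizes, Canny thresholds, and aspect-ratio limits are assumptions that would need tuning per camera and region:

```python
# Illustrative classical plate-localization stage for an ANPR pipeline.
import cv2

def locate_plate_candidates(bgr_image, min_area=1500, aspect_range=(2.0, 6.0)):
    """Return bounding boxes (x, y, w, h) of likely number-plate regions."""
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    gray = cv2.bilateralFilter(gray, 11, 17, 17)   # smooth while keeping edges
    edges = cv2.Canny(gray, 30, 200)               # edge map

    # Close small gaps so plate borders form connected contours.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)

    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h) if h > 0 else 0.0
        # Number plates are wide, roughly rectangular regions.
        if w * h >= min_area and aspect_range[0] <= aspect <= aspect_range[1]:
            candidates.append((x, y, w, h))
    return candidates
```

Each candidate crop would then pass to character segmentation and recognition (for example, a CNN classifier or an off-the-shelf OCR engine) to read the plate string, which is where most of the design variation discussed in the survey lies.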

Planned Papers

The below list represents only planned manuscripts. Some of these manuscripts have not been received by the Editorial Office yet. Papers submitted to MDPI journals are subject to peer-review.

 
 