Computer Vision and Scene Understanding for Autonomous Driving

A special issue of Journal of Imaging (ISSN 2313-433X). This special issue belongs to the section "Computer Vision and Pattern Recognition".

Deadline for manuscript submissions: closed (20 September 2022) | Viewed by 8543

Special Issue Editors


Dr. Andrew Bradley
Guest Editor
Autonomous Driving and Intelligent Transport Group, School of Engineering, Computing and Mathematics, Oxford Brookes University, Oxford OX33 1HX, UK
Interests: autonomous vehicles; computer vision; vehicle dynamics and control

Dr. Daniel Watzenig
Guest Editor
Institute of Automation and Control, Graz University of Technology, Graz, Austria
Interests: sense & control of automated vehicles; signal processing; multi-sensor data fusion; uncertainty estimation and quantification; robust optimization

Special Issue Information

Dear Colleagues,

Autonomous driving presents an exciting opportunity to revolutionise the transport industry, offering the potential to enhance road safety and provide efficient transport systems. Current industry focus and high levels of investment are promoting rapid advances in technology in all aspects of the autonomous driving system. However, the vehicle’s ability to identify and understand the environment remains critical to the safe and effective operation of autonomous vehicles.

A number of perception-related challenges to the widespread adoption of autonomous vehicles remain, including (but not limited to): efficient perception system hardware and software, lightweight processing algorithms, robust object detection in a variety of weather conditions, understanding of the actions (and intent) of other road users, and prediction of the future actions of other vehicles and pedestrians.

Contributions are requested from researchers working to enhance perception system capabilities, helping to accelerate the development and widespread adoption of autonomous vehicles on the world’s roads.

Dr. Andrew Bradley
Dr. Daniel Watzenig
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • efficient perception system hardware and software
  • object detection for autonomous driving
  • 3D object detection
  • multimodal object detection
  • action detection and prediction
  • advanced scene understanding
  • prediction of evolution of road events
  • robust perception in adverse weather or environmental conditions
  • data augmentation for robust object detection
  • fault-tolerant perception and scene understanding
  • use of simulation for training/enhancing perception system capability
  • other advances in image processing for autonomous driving applications
  • advances in imaging hardware for autonomous vehicles
  • fusion of sensor and image data
  • dealing with dirt accumulation on sensory equipment
  • attention-based perception systems
  • hazard identification techniques

Published Papers (3 papers)


Research

23 pages, 14607 KiB  
Article
LiDAR-Based Sensor Fusion SLAM and Localization for Autonomous Driving Vehicles in Complex Scenarios
by Kai Dai, Bohua Sun, Guanpu Wu, Shuai Zhao, Fangwu Ma, Yufei Zhang and Jian Wu
J. Imaging 2023, 9(2), 52; https://doi.org/10.3390/jimaging9020052 - 20 Feb 2023
Cited by 5 | Viewed by 4408
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) and online localization methods are widely used in autonomous driving and are key components of intelligent vehicles. However, current SLAM algorithms suffer from map drift, and localization algorithms based on a single sensor adapt poorly to complex scenarios. This paper proposes a SLAM and online localization method based on multi-sensor fusion, integrated into a general framework. In the mapping process, the front-end builds constraints from normal distributions transform (NDT) registration, loop closure detection and real-time kinematic (RTK) global navigation satellite system (GNSS) positions, while the back-end applies a pose graph optimization algorithm to produce an optimized map without drift. In the localization process, an error-state Kalman filter (ESKF) fuses LiDAR-based localization positions with vehicle states to achieve more robust and precise localization. The proposed method is tested on the open-source KITTI dataset and in field tests. The results show that the method achieves 5–10 cm mapping accuracy and 20–30 cm localization accuracy, and realizes online autonomous driving in complex scenarios.
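To illustrate the localization step described above, the sketch below implements a toy 2D error-state Kalman filter that fuses a vehicle-motion prediction with LiDAR map-matching pose fixes. It is a minimal illustration of the ESKF idea, not the authors' implementation: the state layout, noise covariances and direct-pose measurement model are assumptions chosen for brevity.

```python
import numpy as np

class MiniESKF:
    """Toy 2D error-state Kalman filter: fuses a vehicle-motion
    prediction with LiDAR map-matching pose fixes. State: [x, y, yaw]."""

    def __init__(self):
        self.x = np.zeros(3)                    # nominal state [x, y, yaw]
        self.P = np.eye(3) * 0.1                # error-state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])    # process noise (assumed)
        self.R = np.diag([0.05, 0.05, 0.02])    # LiDAR pose noise (assumed)

    def predict(self, v, yaw_rate, dt):
        """Propagate the nominal state with vehicle speed and yaw rate."""
        x, y, yaw = self.x
        self.x = np.array([x + v * np.cos(yaw) * dt,
                           y + v * np.sin(yaw) * dt,
                           yaw + yaw_rate * dt])
        # Jacobian of the motion model w.r.t. the error state
        F = np.array([[1.0, 0.0, -v * np.sin(yaw) * dt],
                      [0.0, 1.0,  v * np.cos(yaw) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, lidar_pose):
        """Correct with a LiDAR map-matching pose [x, y, yaw]."""
        H = np.eye(3)                           # pose is observed directly
        K = self.P @ H.T @ np.linalg.inv(H @ self.P @ H.T + self.R)
        dx = K @ (lidar_pose - self.x)          # error-state estimate
        self.x += dx                            # inject error into nominal state
        self.P = (np.eye(3) - K @ H) @ self.P   # (yaw wrapping omitted)

eskf = MiniESKF()
eskf.predict(v=5.0, yaw_rate=0.1, dt=0.1)
eskf.update(np.array([0.52, 0.03, 0.011]))
print(eskf.x)
```

A full pipeline of this kind would also carry velocity and bias states and handle yaw wrapping; both are omitted here for clarity.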

15 pages, 1899 KiB  
Article
Obscurant Segmentation in Long Wave Infrared Images Using GLCM Textures
by Mohammed Abuhussein and Aaron Robinson
J. Imaging 2022, 8(10), 266; https://doi.org/10.3390/jimaging8100266 - 30 Sep 2022
Cited by 2 | Viewed by 1446
Abstract
The benefits of autonomous image segmentation are readily apparent in many applications and garner interest from stakeholders in many fields. These benefits encompass applications ranging from medical diagnosis, where the shape of the grouped pixels increases diagnosis accuracy, to autonomous vehicles, where the grouping of pixels defines roadways, traffic signs, other vehicles, etc. Segmentation even proves beneficial in many phases of machine learning, where the result can be used as input to a network or as labels for training. The majority of available image segmentation algorithms and results focus on visible image modalities. In this treatment, therefore, the authors present the results of a study designed to identify and improve current semantic methods for infrared scene segmentation. Specifically, the goal is to propose a novel approach providing tile-based segmentation of occlusion clouds in Long Wave Infrared (LWIR) images. This work complements the collection of well-known semantic segmentation algorithms applicable to thermal images, which require vast datasets to provide accurate performance, and documents performance in applications where the distinction between dust-cloud tiles and clear tiles enables conditional processing. The authors therefore propose a Gray Level Co-Occurrence Matrix (GLCM) based method for infrared image segmentation. The main idea of the approach is that GLCM features are extracted from local tiles in the image and used to train a binary classifier that indicates tile occlusion. The method introduces a new texture analysis scheme that is more suitable for image segmentation than a solitary Gabor or Markov Random Field (MRF) scheme. Experimental results show that the algorithm performs well in terms of accuracy and achieves better inter-region homogeneity than pixel-based infrared image segmentation algorithms.
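The tile-based GLCM pipeline lends itself to a compact sketch. The following example, which assumes scikit-image (≥ 0.19) and scikit-learn, shows the general shape of the approach; the tile size, GLCM distances and angles, feature set and choice of SVM are illustrative assumptions rather than the authors' settings, and the random arrays stand in for a labelled LWIR tile set.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

TILE = 32  # assumed tile size in pixels

def tile_features(tile):
    """GLCM texture features for one 8-bit grayscale tile."""
    glcm = graycomatrix(tile, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def image_tiles(img):
    """Yield non-overlapping TILE x TILE patches of a grayscale image."""
    for r in range(0, img.shape[0] - TILE + 1, TILE):
        for c in range(0, img.shape[1] - TILE + 1, TILE):
            yield img[r:r + TILE, c:c + TILE]

# Train a binary classifier on labelled tiles (1 = occluded, 0 = clear).
# Random data stands in for a labelled LWIR tile set.
rng = np.random.default_rng(0)
train_tiles = rng.integers(0, 256, size=(40, TILE, TILE), dtype=np.uint8)
labels = rng.integers(0, 2, size=40)
clf = SVC(kernel="rbf").fit([tile_features(t) for t in train_tiles], labels)

# Segment a new frame: one occluded/clear decision per tile.
frame = rng.integers(0, 256, size=(128, 160), dtype=np.uint8)
mask = [clf.predict([tile_features(t)])[0] for t in image_tiles(frame)]
```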

13 pages, 21914 KiB  
Article
A Dataset for Temporal Semantic Segmentation Dedicated to Smart Mobility of Wheelchairs on Sidewalks
by Benoit Decoux, Redouane Khemmar, Nicolas Ragot, Arthur Venon, Marcos Grassi-Pampuch, Antoine Mauri, Louis Lecrosnier and Vishnu Pradeep
J. Imaging 2022, 8(8), 216; https://doi.org/10.3390/jimaging8080216 - 7 Aug 2022
Cited by 2 | Viewed by 2162
Abstract
In smart mobility, semantic segmentation of images is an important task for good understanding of the environment. In recent years, many studies have addressed this subject in the field of autonomous vehicles on roads, and several image datasets are available for training semantic segmentation models, leading to very good performance. However, for other types of autonomous mobile systems, such as Electric Wheelchairs (EW) on sidewalks, no specific dataset exists. The contribution presented in this article is twofold: (1) a new dataset of short sequences of outdoor street-scene images taken from viewpoints located on sidewalks, generated in a 3D virtual environment (CARLA); and (2) a convolutional neural network (CNN) adapted for temporal processing and including additional techniques to improve its accuracy. The dataset includes a smaller subset of image pairs taken from the same places in the maps of the virtual environment but from different viewpoints: one located on the road and the other on the sidewalk. This additional set is intended to show the importance of viewpoint in the result of semantic segmentation.
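The abstract does not detail the network architecture, so the sketch below shows one generic way a CNN can be adapted for temporal processing of short sequences: a per-frame encoder, a convolutional GRU-style recurrence over time, and a small decoder producing per-pixel class logits. All layer sizes and the class count are assumptions (PyTorch), not the authors' design.

```python
import torch
import torch.nn as nn

class TemporalSegNet(nn.Module):
    """Generic sketch: per-frame CNN encoder, convolutional GRU-style
    recurrence over time, decoder producing per-pixel class logits."""

    def __init__(self, n_classes=8, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU())
        # Update gate and candidate state, both conditioned on the
        # current frame's features and the previous hidden state.
        self.gate = nn.Conv2d(ch * 2, ch, 3, padding=1)
        self.cand = nn.Conv2d(ch * 2, ch, 3, padding=1)
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, n_classes, 1))

    def forward(self, seq):                     # seq: (B, T, 3, H, W)
        h = None
        for t in range(seq.shape[1]):
            f = self.encoder(seq[:, t])
            if h is None:
                h = torch.zeros_like(f)
            z = torch.sigmoid(self.gate(torch.cat([f, h], dim=1)))
            h = (1 - z) * h + z * torch.tanh(self.cand(torch.cat([f, h], dim=1)))
        return self.decoder(h)                  # logits: (B, n_classes, H, W)

model = TemporalSegNet()
clip = torch.randn(2, 4, 3, 128, 128)           # batch of 4-frame sequences
print(model(clip).shape)                        # torch.Size([2, 8, 128, 128])
```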
