
Sensors and Object Detection for Autonomous Driving in Adverse Conditions

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 8551

Special Issue Editor

Dr. Steven L. Waslander
Institute for Aerospace Studies, University of Toronto, Toronto, ON M3H 5T6, Canada
Interests: artificial intelligence; robotics; autonomous driving; unmanned aerial vehicles

Special Issue Information

Dear Colleagues,

Autonomous driving benchmarks for object detection have primarily focused on local regions and benign conditions, yet far greater diversity of environments and conditions will be required for these vehicles to reach the public market. Some detection benchmarks, such as the Waymo and CADC datasets, include precipitation and winter conditions but still rely on vision and lidar as the primary sensing modalities. This Special Issue seeks papers that directly address issues related to autonomous driving in adverse conditions, from the sensor, dataset, or algorithmic side, in order to shine a light on the myriad challenges that remain in achieving robust object detection across all weather conditions and driving environments.

Dr. Steven L. Waslander
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous driving
  • object detection
  • adverse weather
  • robust sensing

Published Papers (2 papers)


Research

15 pages, 3285 KiB  
Communication
Camera-LiDAR Fusion Method with Feature Switch Layer for Object Detection Networks
by Taek-Lim Kim and Tae-Hyoung Park
Sensors 2022, 22(19), 7163; https://doi.org/10.3390/s22197163 - 21 Sep 2022
Cited by 3 | Viewed by 1958
Abstract
Object detection is an important factor in the autonomous driving industry. Object detection for autonomous vehicles must be robust, because a wide variety of situations and environments must be handled. Sensor fusion is used to implement robust object detection. A network-based sensor fusion method should meld the two feature streams effectively; otherwise, performance can be substantially degraded. To use the sensors in autonomous vehicles effectively, data analysis is required. We investigated how camera and LiDAR data change in order to fuse them effectively. We propose a feature switch layer for a camera-LiDAR sensor fusion network for object detection. Object detection performance was improved by designing a feature switch layer that takes the surrounding environment into account during network feature fusion. The feature switch layer extracts and fuses features while accounting for environments in which the sensor data differ from those seen during training. We conducted an evaluation experiment using the Dense Dataset and confirmed that the proposed method improves object detection performance.
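The abstract above describes a gating ("feature switch") idea: weight each modality's features according to how reliable the current environment makes that sensor before fusing them. Below is a minimal sketch of that general gating pattern only, not the paper's actual architecture; the function name, the scalar `env_score` reliability input, and the sigmoid gate are all illustrative assumptions.

```python
import numpy as np

def feature_switch_fuse(cam_feat, lidar_feat, env_score):
    """Gated fusion sketch: env_score estimates how favorable the
    environment is for the camera (e.g., low in fog or at night).
    A sigmoid gate reweights each modality before concatenation."""
    cam_feat = np.asarray(cam_feat, dtype=float)
    lidar_feat = np.asarray(lidar_feat, dtype=float)
    gate = 1.0 / (1.0 + np.exp(-env_score))  # sigmoid in (0, 1)
    # camera features scaled up when the gate is high, LiDAR otherwise
    return np.concatenate([gate * cam_feat, (1.0 - gate) * lidar_feat])
```

For example, a neutral score of 0.0 yields a gate of 0.5, weighting both modalities equally; in a real network the gate would be learned and applied per feature channel rather than as one scalar.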

21 pages, 1978 KiB  
Article
Adopting the YOLOv4 Architecture for Low-Latency Multispectral Pedestrian Detection in Autonomous Driving
by Kamil Roszyk, Michał R. Nowicki and Piotr Skrzypczyński
Sensors 2022, 22(3), 1082; https://doi.org/10.3390/s22031082 - 30 Jan 2022
Cited by 34 | Viewed by 5351
Abstract
Detecting pedestrians in autonomous driving is a safety-critical task, and the decision to avoid a person has to be made with minimal latency. Multispectral approaches that combine RGB and thermal images are researched extensively, as they make it possible to gain robustness under varying illumination and weather conditions. State-of-the-art solutions employing deep neural networks offer high accuracy of pedestrian detection. However, the literature is short of works that evaluate multispectral pedestrian detection with respect to its feasibility in obstacle avoidance scenarios, taking into account the motion of the vehicle. Therefore, we investigated the real-time neural network detector architecture You Only Look Once, the latest version (YOLOv4), and demonstrated that this detector can be adapted to multispectral pedestrian detection. It can achieve accuracy on par with the state-of-the-art while being highly computationally efficient, thereby supporting low-latency decision making. The results achieved on the KAIST dataset were evaluated from the perspective of automotive applications, where low latency and a low number of false negatives are critical parameters. The middle fusion approach to YOLOv4 in its Tiny variant achieved the best accuracy to computational efficiency trade-off among the evaluated architectures.
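"Middle fusion" in the abstract above refers to running each modality (RGB, thermal) through its own early backbone stages and merging the intermediate feature maps before a shared detection head, as opposed to stacking the raw inputs (early fusion) or merging final detections (late fusion). The following is a toy sketch of that layout under stated assumptions: the `tiny_backbone` stand-in (spatial pooling plus a fixed random projection) is purely illustrative and is not the YOLOv4-Tiny architecture.

```python
import numpy as np

def tiny_backbone(x, out_dim, seed):
    # stand-in for a modality-specific backbone stem: pool each input
    # channel spatially, then apply a fixed random linear projection
    pooled = x.mean(axis=(1, 2))
    w = np.random.default_rng(seed).standard_normal((pooled.size, out_dim))
    return pooled @ w

def middle_fusion(rgb, thermal, feat_dim=8):
    # each modality keeps its own early layers; the intermediate
    # feature vectors are concatenated for the shared detection head
    f_rgb = tiny_backbone(rgb, feat_dim, seed=0)
    f_thermal = tiny_backbone(thermal, feat_dim, seed=1)
    return np.concatenate([f_rgb, f_thermal])
```

The design trade-off the paper evaluates follows from this structure: middle fusion duplicates only the early layers per modality, so it costs less than two full detectors (late fusion) while still letting each sensor learn its own low-level features, unlike early fusion.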
