Explainable Deep Architectures for Saliency-Based Autonomous Vehicle Driving Monitoring

A special issue of Drones (ISSN 2504-446X). This special issue belongs to the section "Innovative Urban Mobility".

Deadline for manuscript submissions: closed (28 February 2023)

Special Issue Editors


Prof. Dr. Francesco Rundo
Guest Editor
STMicroelectronics, ADG R&D Power and Discretes Division, Artificial Intelligence Team, Catania, Italy
Interests: deep learning systems; explainable deep learning for automotive and healthcare applications; medical imaging

Prof. Dr. Sebastiano Battiato
Guest Editor
Department of Mathematics and Computer Science, University of Catania, Viale A. Doria, 6, 95125 Catania, Italy
Interests: computer vision; multimedia; image processing; machine learning; digital forensics

Dr. Angelo Alberto Messina
Guest Editor
1. STMicroelectronics, IPA Division, Catania, Italy
2. Consiglio Nazionale delle Ricerche, Istituto per la Microelettronica e Microsistemi, Catania, Italy
Interests: deep learning solutions for industrial, healthcare, and automotive applications

Special Issue Information

Dear Colleagues,

The latest generation of autonomous vehicles must collect a wide variety of multi-modal data sampled by the sensing systems embedded in the car. The data most relevant to reducing the driving risk level are perceptual: vision, LiDAR, RADAR, accelerometer measurements, and so on. Advanced deep learning-based solutions can reconstruct the driving scene, the driving dynamics, and the driving risk level, allowing the safety level of the self-driving vehicle to be monitored constantly and the actions needed to minimize the driving risk to be determined. To better characterize the criteria analyzed by the deep learning engines embedded in self-driving cars, recent scientific research proposes explainable architectures that highlight the activations used by the networks to determine autonomous driving actions. Furthermore, visual analysis of driving scenarios based on the concept of saliency will make the autonomous driving system more efficient and robust.
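As an illustration of the kind of explainability targeted here, the minimal sketch below computes a Grad-CAM-style saliency map over a front-camera frame. It is only a sketch under stated assumptions: the backbone, the hooked layer, and the explained class are placeholders, not a reference implementation from any specific work in this Special Issue.

```python
# Minimal Grad-CAM-style saliency sketch (illustrative assumptions: ResNet-18
# backbone, last conv block as target layer, predicted class as the action to explain).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None).eval()
target_layer = model.layer4  # last conv block: coarse spatial activations

activations, gradients = {}, {}
target_layer.register_forward_hook(lambda m, i, o: activations.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: gradients.update(g=go[0]))

frame = torch.randn(1, 3, 224, 224)      # stand-in for a front-camera frame
scores = model(frame)
scores[0, scores.argmax()].backward()    # explain the predicted class/action

w = gradients["g"].mean(dim=(2, 3), keepdim=True)                 # channel importance
cam = F.relu((w * activations["a"]).sum(dim=1, keepdim=True))     # weighted activations
cam = F.interpolate(cam, size=frame.shape[-2:], mode="bilinear", align_corners=False)
saliency = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # [0, 1] heatmap
```

The resulting heatmap highlights the image regions whose activations most influenced the network's output, which is the kind of evidence an explainable driving monitor can expose to assess why an action was chosen.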

This Special Issue on “Explainable Deep Architectures for Saliency-Based Autonomous Vehicle Driving Monitoring” aims to collect studies on recent advances in explainable deep learning for autonomous driving across a wide range of topics, including (but not limited to) the following:

  • Car-driver vision explainable systems and physiological big data processing systems for self-driving vehicles;
  • Saliency-based visual scene understanding for autonomous vehicles;
  • Explainable deep solutions for autonomous vehicle sensing-data fusion;
  • Embedded deep learning software architectures for autonomous vehicles;
  • Vision, LiDAR, RADAR, near-infrared/thermal, and physiological data for explainable self-driving safety assessment;
  • Explainable Visual Transformers for autonomous driving applications;
  • Saliency-based perceptual visual assessment for personalized self-driving solutions;
  • Intelligent multi-modal bio-sensing data analysis and modeling for autonomous driving risk assessment;
  • Deep solutions for stabilization against adversarial attacks in self-driving applications;
  • Saliency-based domain adaptation for robust self-driving scene understanding/reconstruction;
  • Car-driver saliency-based scene reconstruction, object detection, and tracking in self-driving scenarios.

On behalf of Drones, we invite you to consider this Special Issue as an opportunity to publish your research results in the field of explainable deep learning for autonomous driving. We are looking forward to receiving your submissions.

Prof. Dr. Francesco Rundo
Prof. Dr. Sebastiano Battiato
Dr. Angelo Alberto Messina
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Drones is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (2 papers)


Research


Article
RREV: A Robust and Reliable End-to-End Visual Navigation
by Wenxiao Ou, Tao Wu, Junxiang Li, Jinjiang Xu and Bowen Li
Drones 2022, 6(11), 344; https://doi.org/10.3390/drones6110344 - 4 Nov 2022
Cited by 1
Abstract
With the development of deep learning, more and more attention has been paid to end-to-end autonomous driving. However, owing to the nature of deep learning, end-to-end autonomous driving currently faces several problems. First, due to the imbalance between the “junction” and “non-junction” samples of the road scene, the model overfits to the larger class of samples during training, resulting in insufficient learning of the ability to turn at intersections. Second, it is difficult to evaluate the confidence of a deep learning model, so it is impossible to determine whether the model output is reliable and then make further decisions, which is an important reason why end-to-end autonomous driving solutions are not widely accepted. Third, deep learning models are highly sensitive to disturbances, and the predictions of consecutive frames are prone to jumping. To this end, a more robust and reliable end-to-end visual navigation scheme (RREV navigation) is proposed in this paper to predict a vehicle’s future waypoints from front-view RGB images. First, the scheme adopts a dual-model learning strategy, using two models to independently learn “junctions” and “non-junctions” to eliminate the influence of sample imbalance. Second, exploiting the smoothness and continuity of waypoints, a model confidence quantification method called “Independent Prediction-Fitting Error” (IPFE) is proposed. Finally, IPFE is applied to weight the multi-frame output, eliminating the influence of prediction jumps and ensuring the coherence and smoothness of the output. The experimental results show that the RREV navigation scheme is more reliable and robust; in particular, the steering performance at intersections is greatly improved.
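The abstract does not spell out how IPFE is computed. As a hedged sketch of the general idea only (not the authors' code), the snippet below scores each frame's independently predicted waypoints by the residual of a smooth-curve fit and uses that score to weight predictions across frames; the quadratic fit, the exponential weighting, and a shared coordinate frame across frames are illustrative assumptions.

```python
# Hedged sketch of IPFE-style confidence weighting (not the published method):
# waypoints that fit a smooth curve poorly get a low weight when fusing frames.
import numpy as np

def fit_error(waypoints: np.ndarray) -> float:
    """RMS residual of a quadratic y = f(x) fit through predicted (x, y) waypoints."""
    x, y = waypoints[:, 0], waypoints[:, 1]
    coeffs = np.polyfit(x, y, deg=2)
    residual = y - np.polyval(coeffs, x)
    return float(np.sqrt(np.mean(residual ** 2)))

def fuse_frames(frame_waypoints: list[np.ndarray], temperature: float = 1.0) -> np.ndarray:
    """Confidence-weighted average of waypoint sets predicted on consecutive frames."""
    errors = np.array([fit_error(w) for w in frame_waypoints])
    weights = np.exp(-errors / temperature)          # low fit error -> high confidence
    weights /= weights.sum()
    stacked = np.stack(frame_waypoints)              # (n_frames, n_points, 2)
    return np.einsum("f,fpc->pc", weights, stacked)  # fused waypoint set
```

In this reading, the fit error acts as the confidence proxy the abstract describes, and the exponential weighting suppresses frames whose predictions jump away from a smooth trajectory.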

Review


Review
Application and Development of Autonomous Robots in Concrete Construction: Challenges and Opportunities
by Shun Zhao, Qiang Wang, Xinjun Fang, Wei Liang, Yu Cao, Changyi Zhao, Lu Li, Chunbao Liu and Kunyang Wang
Drones 2022, 6(12), 424; https://doi.org/10.3390/drones6120424 - 16 Dec 2022
Cited by 11
Abstract
Updated concrete construction robots are designed to optimize equipment operation, improve safety, enhance workspace awareness, and further ensure a proper working environment for construction workers. The importance of concrete construction robots has been constantly highlighted, as they have a profound impact on construction quality and efficiency. Autonomous vehicle driving monitoring has been widely employed in concrete construction robots; however, it often lacks a clear connection to the key functions of the building process. This paper aims to bridge this knowledge gap by systematically classifying and summarizing existing concrete construction robots, analyzing their current problems, and providing direction for their future development. The selection criteria for robots depend on the concrete construction process, which includes six common functional levels: distribution, leveling and compaction, floor finishing, surface painting, 3D printing, and surveillance. Misunderstood functions and the improper adjustment of construction robots may lead to increased cost, reduced effectiveness, and restricted application scenarios. Our review identifies current commercial and recently studied concrete construction robots to facilitate the standardization and optimization of robotic construction design. Moreover, this study may guide future research and technology development efforts for autonomous robots in concrete construction.
