Formal Verification of Imaging Algorithms for Autonomous System

A special issue of Journal of Imaging (ISSN 2313-433X).

Deadline for manuscript submissions: closed (31 January 2022) | Viewed by 15731

Special Issue Editors

Dr. Matteo Rucco
Guest Editor
Biocentis, Milan, Lombardy 20124, Italy
Interests: topological data analysis; complex autonomous systems; entropy; human–machine interaction; artificial intelligence; machine learning; imaging; pattern recognition; data analysis

Dr. Maurizio Mongelli
Guest Editor
IEIIT Institute, Consiglio Nazionale delle Ricerche (CNR), Piazzale Aldo Moro, 7, 00185 Rome, Italy
Interests: certification of AI; explainable AI; control of communication networks; cybersecurity

Dr. Anastasia Mavridou
Guest Editor
Robust Software Engineering group, NASA Ames Research Center, Moffett Field, CA 94035, USA

Special Issue Information

Dear Colleagues,

Imaging algorithms are being adopted at a fast pace and applied widely in complex industrial applications. Their development has accelerated significantly over the last decade thanks to concurrent technological advances, in particular increasingly powerful computational architectures and algorithms. Imaging techniques have become a natural building block of autonomous systems. For instance, they generate the input for complex functions in autonomous cars, e.g., self-driving, object detection, and obstacle avoidance. They are likewise required to enable autonomous airborne operations such as autonomous taxiing and autonomous landing. In other domains, such as manufacturing and maintenance, imaging algorithms are fundamental to enabling human–robot cooperation and to increasing human safety while improving operators’ performance. Also worth mentioning is the broad application of imaging algorithms in supporting medical doctors during prognostic and diagnostic tasks, particularly in robot-assisted surgery. In general, imaging analysis has achieved unprecedented performance in many fields by relying on artificial neural networks.

Despite this success, contemporary imaging algorithms still face fundamental challenges. To date, imaging algorithms, e.g., for pattern recognition or object detection, have been evaluated mainly by means of quantitative empirical analysis. The lack of methodologies and tools for learning formal assurances of the correctness, resiliency, robustness, and generalizability of imaging algorithms is hindering their certification, and this in turn is slowing down their commercialization in several fields.

We invite contributions presenting techniques (methods, tools, ideas, or even market evaluations) that will contribute to the future roadmap of formally verifiable imaging algorithms with applications in real-world domains. We welcome papers combining analytical approaches (formal robustness verification, scenario generation, formally verifiable training procedures, falsification, etc.) and data-driven approaches (e.g., statistical and topological analysis of artificial neural networks) that would support the formal verification of imaging algorithms. Scientifically grounded, innovative, and even speculative research lines are welcome for proposal and evaluation.

Dr. Matteo Rucco
Dr. Maurizio Mongelli
Dr. Anastasia Mavridou
Guest Editors
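
Purely as an illustration of the kind of analytical technique mentioned in the call (formal robustness verification), the following is a minimal sketch of interval bound propagation over a toy two-layer ReLU classifier. The network, its random weights, and all function names are assumptions made for the sake of the example, not a specific tool from the literature.

```python
# Minimal sketch of interval bound propagation (IBP) for robustness
# verification of a toy fully connected ReLU classifier (NumPy only).
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def verify_linf_robustness(x, eps, layers, true_label):
    """True if every input in the L-inf ball of radius eps around x is
    provably classified as true_label by the given ReLU network."""
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        lo, hi = affine_bounds(lo, hi, W, b)
        if i < len(layers) - 1:               # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    # Robust iff the true logit's lower bound beats every other upper bound.
    others = np.delete(hi, true_label)
    return bool(lo[true_label] > others.max())

# Toy usage with random weights (purely illustrative):
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(16, 64)) * 0.1, np.zeros(16)),
          (rng.normal(size=(3, 16)) * 0.1, np.zeros(3))]
x = rng.random(64)                            # a flattened 8x8 "image"
print(verify_linf_robustness(x, eps=0.01, layers=layers, true_label=0))
```

Practical verifiers scale this idea to convolutional architectures with tighter relaxations or SMT/MILP encodings, but the certification question is the same: does the true class provably dominate all others over the whole perturbation set?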

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • imaging algorithms
  • neural network
  • assurance learning
  • scenario generation
  • formal verification
  • robustness
  • industry 4.0
  • autonomous system
  • autonomous aircraft
  • cyber-pilot
  • medicine
  • CAD
  • autonomous car
  • perception system

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research


9 pages, 1842 KiB  
Article
Improving Performance of the PRYSTINE Traffic Sign Classification by Using a Perturbation-Based Explainability Approach
by Kaspars Sudars, Ivars Namatēvs and Kaspars Ozols
J. Imaging 2022, 8(2), 30; https://doi.org/10.3390/jimaging8020030 - 30 Jan 2022
Cited by 4 | Viewed by 2864
Abstract
Model understanding is critical in many domains, particularly those involved in high-stakes decisions, e.g., medicine, criminal justice, and autonomous driving. Explainable AI (XAI) methods are essential for working with black-box models such as convolutional neural networks. This paper evaluates the explainability of the Deep Neural Network (DNN) traffic sign classifier from the Programmable Systems for Intelligence in Automobiles (PRYSTINE) project. The explanation results were then used to compress the vague (low-impact) kernels of the PRYSTINE CNN classifier, and the precision of the classifier was evaluated under different pruning scenarios. The proposed methodology was realised by creating original traffic sign and traffic light classification and explanation code. First, the status of the kernels of the network was evaluated for explainability: a post hoc, local, meaningful-perturbation-based forward explanation method was integrated into the model to evaluate the status of each kernel, making it possible to distinguish high- and low-impact kernels in the CNN. Second, the vague kernels of the last layer before the fully connected layer were excluded by withdrawing them from the network. Third, the network’s precision was evaluated at different kernel compression levels. It is shown that, using this XAI approach to network kernel compression, pruning 5% of the kernels leads to a 2% loss in traffic sign and traffic light classification precision. The proposed methodology is especially relevant where execution time and processing capacity are at a premium.
(This article belongs to the Special Issue Formal Verification of Imaging Algorithms for Autonomous System)
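
A hedged sketch of the general perturbation-based kernel-scoring idea described in this abstract: each convolutional kernel is scored by the accuracy drop caused by zeroing its output channel, and the lowest-impact ("vague") kernels become pruning candidates. The PyTorch snippet below is illustrative only; the model, data, and function names are assumptions, not the authors' PRYSTINE code.

```python
# Score each convolutional kernel by the accuracy drop caused by masking
# ("perturbing") its output channel on a validation set.
import torch

def kernel_impact_scores(model, layer, val_loader, device="cpu"):
    """Accuracy drop caused by zeroing each output channel of `layer`."""
    def accuracy():
        correct = total = 0
        with torch.no_grad():
            for x, y in val_loader:
                pred = model(x.to(device)).argmax(dim=1)
                correct += (pred == y.to(device)).sum().item()
                total += y.numel()
        return correct / total

    baseline = accuracy()
    scores = []
    for k in range(layer.out_channels):
        # Forward hook that zeroes channel k of this layer's output.
        handle = layer.register_forward_hook(
            lambda m, inp, out, k=k: out.index_fill(
                1, torch.tensor([k], device=out.device), 0.0))
        scores.append(baseline - accuracy())   # large drop -> important kernel
        handle.remove()
    return scores

# Toy usage with a tiny random model and synthetic data (purely illustrative):
if __name__ == "__main__":
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset
    model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))
    data = TensorDataset(torch.randn(32, 3, 16, 16), torch.randint(0, 4, (32,)))
    scores = kernel_impact_scores(model, model[0], DataLoader(data, batch_size=8))
    print(scores)   # near-zero entries mark "vague" kernels, i.e., pruning candidates
```

Kernels whose score stays near zero can then be removed, trading a small loss in precision for a smaller, faster network, which is the trade-off the paper quantifies for the PRYSTINE classifier.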

18 pages, 18210 KiB  
Article
Automated Data Annotation for 6-DoF AI-Based Navigation Algorithm Development
by Javier Gibran Apud Baca, Thomas Jantos, Mario Theuermann, Mohamed Amin Hamdad, Jan Steinbrener, Stephan Weiss, Alexander Almer and Roland Perko
J. Imaging 2021, 7(11), 236; https://doi.org/10.3390/jimaging7110236 - 10 Nov 2021
Cited by 4 | Viewed by 3136
Abstract
Accurately estimating the six degree-of-freedom (6-DoF) pose of objects in images is essential for a variety of applications such as robotics, autonomous driving, and autonomous AI- and vision-based navigation for unmanned aircraft systems (UAS). Developing such algorithms requires large datasets; however, generating them is tedious, as it requires annotating the 6-DoF relative pose of each object of interest present in the image w.r.t. the camera. Therefore, this work presents a novel approach that automates the data acquisition and annotation process and thus reduces the annotation effort to the duration of the recording. To maximize the quality of the resulting annotations, we employ an optimization-based approach for determining the extrinsic calibration parameters of the camera. Our approach can handle multiple objects in the scene, automatically providing ground-truth labels for each object and taking occlusion effects between different objects into account. Moreover, our approach can not only be used to generate data for 6-DoF pose estimation and the corresponding 3D models but can also be extended to automatic dataset generation for object detection, instance segmentation, or volume estimation for any kind of object.
(This article belongs to the Special Issue Formal Verification of Imaging Algorithms for Autonomous System)
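
The core geometry behind this kind of automated annotation can be sketched as follows: given externally tracked poses of the camera and of an object in a common world frame, compute the object's 6-DoF pose with respect to the camera and project its 3D model points into the image with a pinhole model. The snippet is a hedged illustration with assumed names and an assumed calibration source, not the authors' pipeline.

```python
# Relative 6-DoF pose and pinhole projection for automated annotation (NumPy).
import numpy as np

def relative_pose(T_world_cam, T_world_obj):
    """Return T_cam_obj = inv(T_world_cam) @ T_world_obj (4x4 homogeneous)."""
    return np.linalg.inv(T_world_cam) @ T_world_obj

def project_points(T_cam_obj, model_points, K):
    """Project Nx3 object-frame points into pixel coordinates (Nx2)."""
    pts_h = np.hstack([model_points, np.ones((len(model_points), 1))])
    pts_cam = (T_cam_obj @ pts_h.T).T[:, :3]   # points in the camera frame
    uv = (K @ pts_cam.T).T                     # pinhole projection
    return uv[:, :2] / uv[:, 2:3]

# Toy usage: identity camera pose, a 20 cm cube 2 m in front of the camera.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
T_world_cam = np.eye(4)
T_world_obj = np.eye(4); T_world_obj[:3, 3] = [0.0, 0.0, 2.0]
cube = np.array([[x, y, z] for x in (-0.1, 0.1)
                            for y in (-0.1, 0.1)
                            for z in (-0.1, 0.1)])
print(project_points(relative_pose(T_world_cam, T_world_obj), cube, K))
```

The quality of such labels hinges on the accuracy of the extrinsic calibration that links the tracking system to the camera, which is why the paper treats that calibration as an optimization problem.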

12 pages, 1712 KiB  
Article
Optimizing the Simplicial-Map Neural Network Architecture
by Eduardo Paluzo-Hidalgo, Rocio Gonzalez-Diaz, Miguel A. Gutiérrez-Naranjo and Jónathan Heras
J. Imaging 2021, 7(9), 173; https://doi.org/10.3390/jimaging7090173 - 1 Sep 2021
Cited by 1 | Viewed by 2526
Abstract
Simplicial-map neural networks are a recent neural network architecture induced by simplicial maps defined between simplicial complexes. It has been proved that simplicial-map neural networks are universal approximators and that they can be refined to be robust to adversarial attacks. In this paper, the refinement toward robustness is optimized by reducing the number of simplices (i.e., nodes) needed. We have shown experimentally that such a refined neural network is equivalent to the original network as a classification tool but requires much less storage.
(This article belongs to the Special Issue Formal Verification of Imaging Algorithms for Autonomous System)

Review


24 pages, 1432 KiB  
Review
A Review of Modern Thermal Imaging Sensor Technology and Applications for Autonomous Aerial Navigation
by Tran Xuan Bach Nguyen, Kent Rosser and Javaan Chahl
J. Imaging 2021, 7(10), 217; https://doi.org/10.3390/jimaging7100217 - 19 Oct 2021
Cited by 33 | Viewed by 6125
Abstract
The limited navigation capabilities of many current robots and UAVs restrict their applications in GPS-denied areas. Large aircraft with complex navigation systems rely on a variety of sensors, including radio-frequency aids and high-performance inertial systems, rendering them somewhat resistant to GPS denial. The rapid development of computer vision has seen cameras incorporated into small drones. Vision-based systems, consisting of one or more cameras, could arguably satisfy both the size and weight constraints faced by UAVs. A new generation of thermal sensors is lighter, smaller, and more widely available than before. Thermal sensors are a solution for enabling navigation in difficult environments, including low light, dust, or smoke. The purpose of this paper is to present a comprehensive literature review of thermal sensors integrated into navigation systems. Furthermore, the physics and characteristics of thermal sensors are also presented to provide insight into the challenges of integrating thermal sensors in place of conventional visual-spectrum sensors.
(This article belongs to the Special Issue Formal Verification of Imaging Algorithms for Autonomous System)
