Explainable Artificial Intelligence for Biometrics 2021

A special issue of Computers (ISSN 2073-431X). This special issue belongs to the section "ICT Infrastructures for Cybersecurity".

Deadline for manuscript submissions: closed (15 August 2021) | Viewed by 16563

Special Issue Editor


Dr. Ana Filipa Sequeira
Guest Editor
INESC TEC - Institute for Systems and Computer Engineering, Technology and Science, Porto, Portugal
Interests: machine learning; computer vision; biometrics; explainable AI; cryptography; mathematics education

Special Issue Information

Dear Colleagues,

The "Workshop on Explainable & Interpretable Artificial Intelligence for Biometrics"—xAI4Biometrics Workshop—is embraced by the WACV 2021 Conference (http://wacv2021.thecvf.com/home). This workshop will be held on January 5, 2021, as a Fully Virtual Event. For more information about the workshop, please use the following link: http://vcmi.inesctec.pt/xai4biom_wacv/index.html.

Selected papers from among the works presented at the workshop will be invited to submit extended versions to this Special Issue of Computers. The invited papers will be free of charge if they are accepted after peer review. Papers submitted to the Special Issue should be extended from the original workshop paper to the length of a regular research or review article, with at least 50% new results. All submitted papers will undergo our standard peer-review procedure. Accepted papers will be published in open access format in Computers and collected together in this Special Issue. There are no page limitations for this journal.

We are also inviting original research work focused on biometrics that promotes the development of (a) methods to interpret biometric models, in order to validate their decisions, improve the models, and detect possible vulnerabilities; (b) quantitative methods to objectively assess and compare different explanations of automatic decisions; (c) methods to generate better explanations; and (d) more transparent algorithms.

The main topics include, but are not limited to, the following:

  • Methods to interpret biometric models, in order to validate their decisions, improve the models, and detect possible vulnerabilities;
  • Quantitative methods to objectively assess and compare different explanations of the automatic decisions;
  • Methods and metrics to study/evaluate the quality of explanations obtained by post-model approaches and improve the explanations;
  • Methods to generate model-agnostic explanations;
  • Transparency and fairness in AI algorithms avoiding bias;
  • Interpretable methods able to explain decisions of previously built and unconstrained (black box) models;
  • Inherently interpretable (white box) models;
  • Methods that use post-model explanations to improve the models’ training;
  • Methods to achieve/design inherently interpretable algorithms (rule-based, case-based reasoning, regularization methods);
  • Studies on causal learning, causal discovery, causal reasoning, causal explanations, and causal inference;
  • Natural language generation for explanatory models;
  • Methods for adversarial attack detection, explanation, and defense (“How can we interpret adversarial examples?”);
  • Theoretical approaches to explainability (“What makes a good explanation?”);
  • Applications of all of the above, including proofs of concept and demonstrators showing how to integrate explainable AI into real-world workflows and industrial processes;
  • Novel theories, innovative methods, and meaningful applications that can potentially lead to significant advances in the use of artificial neural networks for pattern recognition.

Dr. Ana Filipa Sequeira
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Computers is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Biometrics
  • AI Explainability and Interpretability
  • Machine Learning
  • Computer Vision

Published Papers (4 papers)

Research

15 pages, 887 KiB  
Article
Representation Learning for EEG-Based Biometrics Using Hilbert–Huang Transform
by Mikhail Svetlakov, Ilya Kovalev, Anton Konev, Evgeny Kostyuchenko and Artur Mitsel
Computers 2022, 11(3), 47; https://doi.org/10.3390/computers11030047 - 20 Mar 2022
Cited by 7 | Viewed by 2652
Abstract
A promising approach to overcome the various shortcomings of password systems is the use of biometric authentication, in particular the use of electroencephalogram (EEG) data. In this paper, we propose a subject-independent learning method for EEG-based biometrics using Hilbert spectrograms of the data. The proposed neural network architecture treats the spectrogram as a collection of one-dimensional series and applies one-dimensional dilated convolutions over them, and a multi-similarity loss was used as the loss function for subject-independent learning. The architecture was tested on the publicly available PhysioNet EEG Motor Movement/Imagery Dataset (PEEGMIMDB) with a 14.63% Equal Error Rate (EER) achieved. The proposed approach’s main advantages are subject independence and suitability for interpretation via created spectrograms and the integrated gradients method.
(This article belongs to the Special Issue Explainable Artificial Intelligence for Biometrics 2021)
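
As a rough illustration of the architecture described in the abstract, the following is a minimal PyTorch sketch, not the authors' implementation, of one plausible reading of the core idea: treating a Hilbert spectrogram as a stack of one-dimensional series (frequency bins as channels) and applying dilated 1-D convolutions to produce an embedding suitable for metric learning. All layer sizes and names are illustrative assumptions.

```python
# Minimal sketch: dilated 1-D convolutions over a Hilbert spectrogram.
# Shapes and widths are assumptions, not values from the paper.
import torch
import torch.nn as nn

class DilatedEEGEncoder(nn.Module):
    def __init__(self, freq_bins=64, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            # Each Conv1d slides along the time axis; frequency bins act as channels.
            nn.Conv1d(freq_bins, 128, kernel_size=3, dilation=1, padding=1),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, dilation=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, dilation=4, padding=4),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time dimension
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, spectrogram):  # (batch, freq_bins, time_steps)
        feats = self.net(spectrogram).squeeze(-1)
        emb = self.proj(feats)
        # L2-normalised embeddings are the usual input to a multi-similarity loss.
        return nn.functional.normalize(emb, dim=-1)

enc = DilatedEEGEncoder()
emb = enc(torch.randn(8, 64, 256))  # 8 spectrograms -> (8, 128) embeddings
```

In the paper's setting, a multi-similarity loss would then pull embeddings of the same subject together and push different subjects apart, which is what makes the encoder usable for subject-independent verification.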

26 pages, 872 KiB  
Article
Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning
by Alfonso Ortega, Julian Fierrez, Aythami Morales, Zilong Wang, Marina de la Cruz, César Luis Alonso and Tony Ribeiro
Computers 2021, 10(11), 154; https://doi.org/10.3390/computers10110154 - 17 Nov 2021
Cited by 6 | Viewed by 4232
Abstract
Machine learning methods are growing in relevance for biometrics and personal information processing in domains such as forensics, e-health, recruitment, and e-learning. In these domains, white-box (human-readable) explanations of systems built on machine learning methods become crucial. Inductive logic programming (ILP) is a subfield of symbolic AI aimed to automatically learn declarative theories about the processing of data. Learning from interpretation transition (LFIT) is an ILP technique that can learn a propositional logic theory equivalent to a given black-box system (under certain conditions). The present work takes a first step to a general methodology to incorporate accurate declarative explanations to classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool generated with machine learning methods for ranking Curricula Vitae that incorporates soft biometric information (gender and ethnicity). We show the expressiveness of LFIT for this specific problem and propose a scheme that can be applicable to other domains. In order to check the ability to cope with other domains no matter the machine learning paradigm used, we have done a preliminary test of the expressiveness of LFIT, feeding it with a real dataset about adult incomes taken from the US census, in which we consider the income level as a function of the rest of attributes to verify if LFIT can provide logical theory to support and explain to what extent higher incomes are biased by gender and ethnicity.
(This article belongs to the Special Issue Explainable Artificial Intelligence for Biometrics 2021)
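
To make the notion of a propositional theory that is input/output-equivalent to a black box concrete, here is a toy Python sketch. It is emphatically not LFIT itself (LFIT learns from interpretation transitions and does not rely on brute force); it simply queries a hypothetical Boolean black-box classifier exhaustively and prints an equivalent rule set in disjunctive normal form. The feature names and the black box are invented for illustration.

```python
# Toy sketch: recover a DNF theory equivalent to a Boolean black box
# by exhaustive querying (feasible only for a handful of features).
from itertools import product

def extract_dnf(black_box, feature_names):
    """Return the black box's positive decisions as a list of DNF terms."""
    rules = []
    for values in product([False, True], repeat=len(feature_names)):
        assignment = dict(zip(feature_names, values))
        if black_box(assignment):
            term = " AND ".join(
                name if val else f"NOT {name}"
                for name, val in assignment.items()
            )
            rules.append(term)
    return rules

# Hypothetical black box: "accept" iff degree and (experience or referral).
bb = lambda a: a["degree"] and (a["experience"] or a["referral"])
for rule in extract_dnf(bb, ["degree", "experience", "referral"]):
    print("accept IF", rule)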

18 pages, 4622 KiB  
Article
Feature Focus: Towards Explainable and Transparent Deep Face Morphing Attack Detectors
by Clemens Seibold, Anna Hilsmann and Peter Eisert
Computers 2021, 10(9), 117; https://doi.org/10.3390/computers10090117 - 18 Sep 2021
Cited by 3 | Viewed by 2710
Abstract
Detecting morphed face images has become an important task to maintain the trust in automated verification systems based on facial images, e.g., at automated border control gates. Deep Neural Network (DNN)-based detectors have shown remarkable results, but without further investigations their decision-making process is not transparent. In contrast to approaches based on hand-crafted features, DNNs have to be analyzed in complex experiments to know which characteristics or structures are generally used to distinguish between morphed and genuine face images or considered for an individual morphed face image. In this paper, we present Feature Focus, a new transparent face morphing detector based on a modified VGG-A architecture and an additional feature shaping loss function, as well as Focused Layer-wise Relevance Propagation (FLRP), an extension of LRP. FLRP in combination with the Feature Focus detector forms a reliable and accurate explainability component. We study the advantages of the new detector compared to other DNN-based approaches and evaluate LRP and FLRP regarding their suitability for highlighting traces of image manipulation from face morphing. To this end, we use partial morphs which contain morphing artifacts in predefined areas only and analyze how much of the overall relevance each method assigns to these areas.
(This article belongs to the Special Issue Explainable Artificial Intelligence for Biometrics 2021)
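
FLRP builds on Layer-wise Relevance Propagation (LRP). As background, the following NumPy sketch shows the standard LRP epsilon rule for dense ReLU layers; it is generic LRP, not the authors' FLRP extension, and the toy network and sizes are assumptions for illustration.

```python
# Minimal sketch of the standard LRP epsilon rule for dense ReLU layers.
import numpy as np

def lrp_epsilon(weights, activations, relevance_out, eps=1e-6):
    """Propagate relevance one layer back.

    weights:       (n_in, n_out) weight matrix of the layer
    activations:   (n_in,) input activations of the layer
    relevance_out: (n_out,) relevance assigned to the layer's outputs
    """
    z = activations @ weights                  # pre-activations, (n_out,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)  # epsilon stabiliser
    s = relevance_out / z                      # per-output scaling
    return activations * (weights @ s)         # (n_in,) relevance

# Toy two-layer network: relevance starts at the output score and is pushed
# back to the inputs; large values mark inputs that drove the decision.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(4, 1))
x = rng.normal(size=8)
h = np.maximum(x @ W1, 0.0)             # hidden ReLU activations
y = h @ W2                              # output score
R_h = lrp_epsilon(W2, h, y)             # output -> hidden
R_x = lrp_epsilon(W1, x, R_h)           # hidden -> input
print(R_x.round(3))
```

Relevance is approximately conserved (up to the epsilon stabiliser) as it flows backwards, so R_x indicates how much each input contributed to the output score; per the abstract, FLRP extends this propagation scheme.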

25 pages, 7059 KiB  
Article
Evaluating Impact of Race in Facial Recognition across Machine Learning and Deep Learning Algorithms
by James Coe and Mustafa Atay
Computers 2021, 10(9), 113; https://doi.org/10.3390/computers10090113 - 10 Sep 2021
Cited by 12 | Viewed by 5385
Abstract
The research aims to evaluate the impact of race in facial recognition across two types of algorithms. We give a general insight into facial recognition and discuss four problems related to facial recognition. We review our system design, development, and architectures and give an in-depth evaluation plan for each type of algorithm, dataset, and a look into the software and its architecture. We thoroughly explain the results and findings of our experimentation and provide analysis for the machine learning algorithms and deep learning algorithms. Concluding the investigation, we compare the results of two kinds of algorithms and compare their accuracy, metrics, miss rates, and performances to observe which algorithms mitigate racial bias the most. We evaluate racial bias across five machine learning algorithms and three deep learning algorithms using racially imbalanced and balanced datasets. We evaluate and compare the accuracy and miss rates between all tested algorithms and report that SVC is the superior machine learning algorithm and VGG16 is the best deep learning algorithm based on our experimental study. Our findings conclude the algorithm that mitigates the bias the most is VGG16, and all our deep learning algorithms outperformed their machine learning counterparts.
(This article belongs to the Special Issue Explainable Artificial Intelligence for Biometrics 2021)
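
The kind of per-group comparison the study reports is straightforward to express in code. Below is a hedged NumPy sketch, not the paper's evaluation code, that computes accuracy and miss (false-negative) rate per demographic group; the arrays and group labels are illustrative assumptions.

```python
# Sketch: break accuracy and miss rate down by demographic group.
import numpy as np

def per_group_metrics(y_true, y_pred, groups):
    """Accuracy and miss (false-negative) rate for each group label."""
    results = {}
    for g in np.unique(groups):
        m = groups == g
        acc = np.mean(y_pred[m] == y_true[m])
        positives = m & (y_true == 1)
        miss = np.mean(y_pred[positives] == 0) if positives.any() else float("nan")
        results[g] = {"accuracy": acc, "miss_rate": miss}
    return results

# Illustrative labels only; a real study would use classifier outputs
# on racially balanced and imbalanced test sets.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 0, 1])
groups = np.array(["A", "A", "A", "B", "B", "B", "B", "A"])
print(per_group_metrics(y_true, y_pred, groups))
```

Comparing these per-group numbers across classifiers is the basic mechanism behind the paper's conclusion about which algorithm mitigates bias the most.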
