Special Issue "Fairness and Explanation for Trustworthy AI"

A special issue of Machine Learning and Knowledge Extraction (ISSN 2504-4990).

Deadline for manuscript submissions: closed (15 May 2022)

Special Issue Editors

Dr. Jianlong Zhou
Guest Editor
Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia
Interests: AI ethics; AI fairness; AI explainability; behavior analytics; human–computer interaction
Prof. Dr. Fang Chen
Guest Editor
Data Science Institute, University of Technology Sydney, Ultimo, NSW 2007, Australia
Interests: machine learning; pattern recognition; human–machine interaction; behavior analytics; cognitive modelling

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) and machine learning (ML) increasingly shape our daily lives by making, or at least influencing, decisions with ethical and legal implications in a wide variety of application areas (from agriculture to zoology). However, due to biased input data and/or flawed algorithms, unfair AI-informed decision-making systems may reinforce discrimination, such as racial or gender bias, or cause harm in high-risk environments through incorrect decisions, e.g., in medical diagnoses. Furthermore, because of the black-box nature of deep learning, for example, the use of such algorithms requires verification and plausibility checks by experts, especially in high-risk areas such as health, not only for safety and ethical reasons but particularly for mandatory legal reasons. Meeting these requirements demands re-traceability, explainability, interpretability, and transparency of AI systems, which is technically challenging. AI explanations will become indispensable for interpreting black-box results and providing users with insight into a system's decision-making process. At the same time, fairness and explanation are key components in fostering trust and confidence in AI systems. In this Special Issue, we feature cutting-edge research in which fairness and explanations are presented for making trustworthy decisions in AI systems.

This Special Issue invites submissions featuring original research on designing, presenting, and evaluating approaches for fairness and explanation in AI systems, with the aim of improving human trust in these systems.

Dr. Jianlong Zhou
Prof. Dr. Andreas Holzinger
Prof. Dr. Fang Chen
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Machine Learning and Knowledge Extraction is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • role of fairness in trustworthy AI systems
  • role of explanation in trustworthy AI systems
  • role of both fairness and explanation in trustworthy AI systems
  • human judgement of fairness and explanations in AI systems
  • innovative methods and technologies in presenting fairness and explanations for boosting trustworthiness of AI systems
  • novel applications of user experience design and evaluation methods for trustworthy AI with fairness and explanations
  • social, ethical, and legal aspects of fairness in AI, fostering trustworthy AI

Published Papers (2 papers)

Research

Article
Fairness and Explanation in AI-Informed Decision Making
Mach. Learn. Knowl. Extr. 2022, 4(2), 556-579; https://doi.org/10.3390/make4020026 - 16 Jun 2022
Abstract
AI-assisted decision-making that impacts individuals raises critical questions about transparency and fairness in artificial intelligence (AI). Much research has highlighted the reciprocal relationship between transparency/explanation and fairness in AI-assisted decision-making. Considering their simultaneous impact on user trust and perceived fairness therefore benefits the responsible use of socio-technical AI systems, but this has so far received little attention. In this paper, we investigate the effects of AI explanations and fairness on human–AI trust and perceived fairness, respectively, in specific AI-based decision-making scenarios. A user study simulating AI-assisted decision-making in two scenarios, health insurance and medical treatment, provided important insights. Due to the global pandemic and its associated restrictions, the user studies were conducted as online surveys. From the perspective of participant trust, fairness was found to affect user trust only at a low fairness level, which reduced user trust. However, adding explanations helped users increase their trust in AI-assisted decision-making. From the perspective of perceived fairness, our work found that low levels of introduced fairness decreased users' perceptions of fairness, while high levels of introduced fairness increased them. The addition of explanations also increased the perception of fairness. Furthermore, we found that the application scenario influenced both trust and perceptions of fairness. The results show that the use of AI explanations and fairness statements in AI applications is complex: we need to consider not only the type of explanation and the degree of fairness introduced, but also the scenario in which AI-assisted decision-making is used.
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)
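As a rough illustration of the "degree of fairness introduced" discussed in the abstract above, one common way to quantify the group fairness of a set of AI-informed decisions is the statistical parity difference between decision rates for two groups. The sketch below is not taken from the paper (which manipulates fairness through study conditions); all names and data here are hypothetical.

```python
import numpy as np

def statistical_parity_difference(decisions, group):
    """Difference in favourable-decision rates between two groups.

    decisions : array of 0/1 AI-informed decisions (1 = favourable outcome)
    group     : array of 0/1 group membership (e.g., a protected attribute)

    A value near 0 indicates similar treatment of both groups (a higher
    fairness level); large absolute values indicate disparate treatment.
    """
    decisions = np.asarray(decisions)
    group = np.asarray(group)
    rate_a = decisions[group == 0].mean()
    rate_b = decisions[group == 1].mean()
    return rate_b - rate_a

# Hypothetical example: approval decisions for applicants from two groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
group =     [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
print(statistical_parity_difference(decisions, group))  # -0.2
```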

Article
Developing a Novel Fair-Loan Classifier through a Multi-Sensitive Debiasing Pipeline: DualFair
Mach. Learn. Knowl. Extr. 2022, 4(1), 240-253; https://doi.org/10.3390/make4010011 - 12 Mar 2022
Abstract
Machine learning (ML) models are increasingly being used for high-stakes applications that can greatly impact people’s lives. Sometimes, these models can be biased toward certain social groups on the basis of race, gender, or ethnicity. Many prior works have attempted to mitigate this “model discrimination” by updating the training data (pre-processing), altering the model learning process (in-processing), or manipulating the model output (post-processing). However, more work can be done in extending such approaches to intersectional fairness, where multiple sensitive parameters (e.g., race) and sensitive options (e.g., black or white) are considered, allowing for greater real-world usability. Prior work in fairness has also suffered from an accuracy–fairness trade-off that prevents both from being high. Moreover, the previous literature has not clearly presented holistic fairness metrics that work with intersectional fairness. In this paper, we address all three of these problems by (a) creating a bias mitigation technique called DualFair and (b) developing a new fairness metric (AWI, a measure of the bias of an algorithm based upon inconsistent counterfactual predictions) that can handle intersectional fairness. Lastly, we test our mitigation method using a comprehensive U.S. mortgage lending dataset and show that our classifier, or fair loan predictor, obtains relatively high fairness and accuracy metrics.
(This article belongs to the Special Issue Fairness and Explanation for Trustworthy AI)
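The abstract describes AWI only at a high level, as a measure of bias based on inconsistent counterfactual predictions. The following is a minimal, hypothetical sketch of that general idea, not the paper's actual metric or implementation: swap each sample's sensitive attributes for other valid values, re-predict, and report the fraction of samples whose prediction changes. Enumerating combinations of several sensitive columns is what makes the check intersectional.

```python
import itertools
import pandas as pd

def counterfactual_inconsistency(model, X, sensitive_cols, values):
    """Fraction of samples whose prediction flips when sensitive attributes
    are replaced by other valid values, with all other features held fixed.

    model          : fitted classifier exposing a .predict method
    X              : pandas DataFrame of samples
    sensitive_cols : sensitive attribute columns, e.g. ["race", "sex"]
    values         : dict mapping each sensitive column to its possible values
    """
    base_pred = model.predict(X)
    inconsistent = pd.Series(False, index=X.index)

    # Enumerate every combination of sensitive-attribute values
    # (intersectional groups, e.g. race x sex) and re-predict.
    for combo in itertools.product(*(values[c] for c in sensitive_cols)):
        X_cf = X.copy()
        for col, val in zip(sensitive_cols, combo):
            X_cf[col] = val
        inconsistent |= (model.predict(X_cf) != base_pred)

    # Lower values mean the classifier is more counterfactually consistent.
    return inconsistent.mean()
```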
