Privacy, Trust and Fairness in Data

A special issue of Applied Sciences (ISSN 2076-3417). This special issue belongs to the section "Computing and Artificial Intelligence".

Deadline for manuscript submissions: closed (22 April 2022)

Special Issue Editors


Dr. Maurice van Keulen
Guest Editor
Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, 7522 NB Enschede, The Netherlands
Interests: data quality; uncertainty in data; probabilistic databases; information extraction; data integration

Dr. Faiza Allah Bukhsh
Guest Editor
Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, 7522 NB Enschede, The Netherlands
Interests: compliance checking; process mining; information auditing

Prof. Dr. Christin Seifert
Guest Editor
Institute for Artificial Intelligence in Medicine, University of Duisburg-Essen, Essen, Germany
Interests: machine learning; natural language processing; explainable AI; medical data science; information visualization

Special Issue Information

Dear Colleagues,

The application of artificial intelligence in business, healthcare, engineering, education, and many other domains holds much potential for improved quality and efficiency. However, threats such as information misuse and algorithmic irregularities endanger this potential. Proper assurances and solutions are needed for society to trust and further depend on this technology across these application domains. Data analytics and artificial intelligence designed with privacy preservation, trust building, and fair data usage in mind can maximize this potential while minimizing risks.

This Special Issue of Applied Sciences (ISSN 2076-3417), ‘Privacy, Trust and Fairness in Data’, aims to collect research contributions from a wide range of disciplines and domains directly or indirectly related to the privacy, trust, and fairness aspects of artificial intelligence. We invite contributions ranging from theoretical and conceptual papers to technical and algorithmic ones, as well as applications and case studies. Topics include, but are not limited to:

  • Trustworthy artificial intelligence and machine learning;
  • Foundations and models for privacy, trust, and fairness;
  • Algorithms for privacy, trust, and fairness;
  • Application of machine learning for privacy, trust, and fairness;
  • Social influences on privacy, trust, and fairness;
  • Impact of issues with privacy, trust, and fairness;
  • Quality assurance of privacy, trust, and fairness;
  • Ethics of privacy, trust, and fairness;
  • Case studies in privacy, trust, and fairness;
  • Perception of privacy and trust;
  • Privacy preservation;
  • Privacy-utility trade-off;
  • Resiliency and robustness of algorithms against data quality and fairness issues;
  • Information and data quality measurement, curation, and assurance;
  • Bias, fairness, and integrity of algorithms;
  • Transparency, accountability, and explainability of algorithms and data processing;
  • Fairness and integrity in data utilization and organizational goals.

Dr. Maurice van Keulen
Dr. Faiza Allah Bukhsh
Prof. Dr. Christin Seifert
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Applied Sciences is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • privacy
  • data utility
  • trust
  • data quality
  • fairness
  • bias
  • transparency
  • explainability
  • trustworthy AI

Published Papers (4 papers)


Research

15 pages, 8342 KiB  
Article
Evaluation of Different Plagiarism Detection Methods: A Fuzzy MCDM Perspective
by Kamal Mansour Jambi, Imtiaz Hussain Khan and Muazzam Ahmed Siddiqui
Appl. Sci. 2022, 12(9), 4580; https://doi.org/10.3390/app12094580 - 30 Apr 2022
Cited by 7
Abstract
Due to the widespread accessibility of electronic materials on the internet, the availability and usage of computers in education have resulted in a growth in the incidence of plagiarism among students. A growing number of individuals at colleges around the globe appear to be presenting plagiarized papers to their professors for credit, while no specific details are collected on how much was plagiarized previously or how much is plagiarized currently. Supervisors, who are overburdened with huge responsibility, desire a simple way, similar to a litmus test, to rapidly screen out plagiarized papers so that they may focus their work on the remaining students. Plagiarism-checking software programs are useful for detecting plagiarism in examinations, projects, publications, and academic research. A number of recent research findings dedicated to evaluating and comparing plagiarism-checking methods have demonstrated that these have limitations in identifying complicated structures of plagiarism, such as extensive paraphrasing, as well as technical manipulations such as substituting original text with similar text containing foreign alphanumeric characters. Selecting the most reliable and efficient plagiarism-detection method from the many options available nowadays is a challenging task. This paper evaluates different academic plagiarism-detection methods using the fuzzy MCDM (multi-criteria decision-making) method and provides recommendations for the development of efficient plagiarism-detection systems. A hierarchy of evaluation is discussed, as well as an examination of the most promising plagiarism-detection methods that have the opportunity to resolve the constraints of current state-of-the-art tools. As a result, the study serves as a “blueprint” for constructing the next generation of plagiarism-checking tools.
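
As a rough, self-contained illustration of how a fuzzy MCDM ranking of detection methods can work, the Python sketch below scores three hypothetical methods on three weighted criteria using triangular fuzzy numbers and a fuzzy-TOPSIS-style closeness coefficient. The criteria, weights, and ratings are invented for illustration and are not taken from the paper.

```python
# Illustrative fuzzy-TOPSIS-style ranking of plagiarism-detection methods.
# All criteria, weights, and ratings below are hypothetical, not from the paper.
import numpy as np

# Triangular fuzzy ratings (low, mid, high) on a 1-9 linguistic scale for
# three benefit criteria: detection accuracy, paraphrase robustness, speed.
ratings = np.array([
    [[5, 7, 9], [3, 5, 7], [7, 9, 9]],   # method A
    [[7, 9, 9], [5, 7, 9], [3, 5, 7]],   # method B
    [[3, 5, 7], [1, 3, 5], [7, 9, 9]],   # method C
], dtype=float)
weights = np.array([[7, 9, 9], [5, 7, 9], [1, 3, 5]], dtype=float)

# Normalize by the largest upper bound per criterion, then apply the
# (scale-normalized) fuzzy weights component-wise.
norm = ratings / ratings[:, :, 2].max(axis=0)[None, :, None]
weighted = norm * (weights / 9.0)[None, :, :]

# Fuzzy positive/negative ideal solutions, taken component-wise per criterion.
fpis = weighted.max(axis=0)
fnis = weighted.min(axis=0)

def fuzzy_dist(a, b):
    """Vertex distance between triangular fuzzy numbers."""
    return np.sqrt(((a - b) ** 2).mean(axis=-1))

d_pos = fuzzy_dist(weighted, fpis[None, :, :]).sum(axis=1)
d_neg = fuzzy_dist(weighted, fnis[None, :, :]).sum(axis=1)
closeness = d_neg / (d_pos + d_neg)  # higher = closer to the ideal

for name, cc in zip("ABC", closeness):
    print(f"method {name}: closeness {cc:.3f}")
```

The method with the highest closeness coefficient ranks best; in a real evaluation the ratings would come from expert judgments on linguistic scales rather than made-up numbers.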

29 pages, 843 KiB  
Article
Framework for Assessing Ethical Aspects of Algorithms and Their Encompassing Socio-Technical System
by Xadya van Bruxvoort and Maurice van Keulen
Appl. Sci. 2021, 11(23), 11187; https://doi.org/10.3390/app112311187 - 25 Nov 2021
Cited by 4
Abstract
In the transition to a data-driven society, organizations have introduced data-driven algorithms that often apply artificial intelligence. In this research, an ethical framework was developed to ensure robustness and completeness and to avoid and mitigate potential public uproar. We take a socio-technical perspective, i.e., we view the algorithm embedded in an organization, with its infrastructure, rules, and procedures, as one to-be-designed system. The framework consists of five ethical principles: beneficence, non-maleficence, autonomy, justice, and explicability. It can be used during design to identify relevant concerns. The framework has been validated by applying it to real-world fraud-detection cases: Systeem Risico Indicatie (SyRI) of the Dutch government and the algorithm of the municipality of Amersfoort. The former is a controversial country-wide algorithm that was ultimately prohibited by the court; the latter is an algorithm still in development. In both cases, the framework proved effective in identifying all ethical risks. For SyRI, all concerns found in the media, which mainly focused on the transparency of the entire socio-technical system, were also identified by the framework. For the municipality of Amersfoort, the framework highlighted risks regarding the amount of sensitive data and the communication to and with the public, presenting a more thorough overview than the risks raised in the media.
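
To make concrete how such a principle-based framework can be used during design, the hypothetical Python sketch below encodes the five principles as a checklist and flags those that a system description has not yet addressed. The design questions and the example system are illustrative assumptions, not content from the paper.

```python
# Illustrative checklist use of the five ethical principles named in the paper.
# The questions and the example system description are invented for illustration.
PRINCIPLES = {
    "beneficence":     "Does the system demonstrably benefit the people it affects?",
    "non-maleficence": "Can the system harm individuals, e.g. via false positives?",
    "autonomy":        "Can affected citizens contest or opt out of decisions?",
    "justice":         "Are error rates and burdens distributed fairly across groups?",
    "explicability":   "Can the organization explain each decision and the process?",
}

def screen(system: dict) -> list[str]:
    """Return the principles whose design questions are not yet addressed."""
    return [p for p in PRINCIPLES if p not in system.get("addressed", [])]

fraud_detector = {
    "name": "municipal fraud-risk scoring",  # hypothetical case
    "addressed": ["beneficence", "explicability"],
}
for principle in screen(fraud_detector):
    print(f"open concern [{principle}]: {PRINCIPLES[principle]}")
```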

18 pages, 4573 KiB  
Article
Semantic Description of Explainable Machine Learning Workflows for Improving Trust
by Patricia Inoue Nakagawa, Luís Ferreira Pires, João Luiz Rebelo Moreira, Luiz Olavo Bonino da Silva Santos and Faiza Bukhsh
Appl. Sci. 2021, 11(22), 10804; https://doi.org/10.3390/app112210804 - 16 Nov 2021
Cited by 2
Abstract
Explainable Machine Learning comprises methods and techniques that enable users to better understand the functioning and the results of machine learning models. This work proposes an ontology that represents explainable machine learning experiments, allowing data scientists and developers to gain a holistic view and a better understanding of the explainable machine learning process, and to build trust. We developed the ontology by reusing an existing domain-specific ontology (ML-SCHEMA) and grounding it in the Unified Foundational Ontology (UFO), aiming at achieving interoperability. The proposed ontology is structured in three modules: (1) the general module, (2) the specific module, and (3) the explanation module. The ontology was evaluated using a case study in the scenario of the COVID-19 pandemic using healthcare data from patients, which are sensitive data. In the case study, we trained a Support Vector Machine to predict the mortality of patients infected with COVID-19 and applied existing explanation methods to generate explanations from the trained model. Based on the case study, we populated the ontology and queried it to ensure that it fulfills its intended purpose and to demonstrate its suitability.
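
As a minimal sketch of the kind of pipeline described in the case study, the code below trains an SVM on synthetic patient-like data and applies one common model-agnostic explanation method, permutation importance from scikit-learn. The features, data, and choice of explanation method are assumptions for illustration; the paper's actual dataset and explanation methods may differ.

```python
# Minimal sketch: train an SVM on synthetic data and derive a post hoc
# explanation. The data and feature names are invented, not the paper's dataset.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # hypothetical features: age, spO2, CRP, comorbidity
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

# One common model-agnostic explanation method: permutation importance,
# i.e. how much shuffling each feature degrades held-out accuracy.
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in zip(["age", "spO2", "CRP", "comorbidity"], result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

In the paper's setting, the experiment metadata (model, data, explanation outputs) would then be used to populate the proposed ontology.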

34 pages, 2103 KiB  
Article
Multilevel Privacy Assurance Evaluation of Healthcare Metadata
by Syeda Amna Sohail, Faiza Allah Bukhsh and Maurice van Keulen
Appl. Sci. 2021, 11(22), 10686; https://doi.org/10.3390/app112210686 - 12 Nov 2021
Cited by 2
Abstract
Healthcare providers are legally bound to ensure the privacy preservation of healthcare metadata. Privacy-related research usually focuses on providing technical and inter-/intra-organizational solutions in a fragmented manner. As a consequence, an overarching evaluation of the fundamental (technical, organizational, and third-party) privacy-preserving measures in healthcare metadata handling is missing. This research work therefore provides a multilevel privacy assurance evaluation of the privacy-preserving measures of the Dutch healthcare metadata landscape. The normative and empirical evaluation comprises content analysis as well as process mining discovery and conformance-checking techniques applied to real-world healthcare datasets. For clarity, we illustrate our evaluation findings using conceptual modeling frameworks, namely e3-value modeling and the REA ontology. These frameworks highlight the financial aspect of metadata sharing with a clear description of the vital stakeholders, their mutual interactions, and the respective exchange of information resources. The frameworks are further verified using experts' opinions. Based on our empirical and normative evaluations, we provide a multilevel privacy assurance evaluation indicating levels of privacy increase and decrease. Furthermore, we verify that the privacy-utility trade-off is crucial in shaping privacy increase/decrease, because data utility in healthcare is vital for efficient and effective healthcare services and for the financial facilitation of healthcare enterprises.
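
For readers unfamiliar with process mining, the sketch below shows what discovery followed by conformance checking can look like using pm4py's simplified API. The event-log file name is hypothetical, and the snippet is an assumption-laden illustration rather than the paper's actual analysis.

```python
# Hedged sketch of process-mining discovery plus conformance checking, in the
# spirit of the paper's evaluation, using pm4py's simplified API.
# "hospital_metadata_log.xes" is a hypothetical event log, not the paper's data.
import pm4py

log = pm4py.read_xes("hospital_metadata_log.xes")

# Discover a process model from the observed metadata-handling behaviour.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Check how well the log conforms to the discovered model; deviations can
# indicate metadata-handling steps that bypass the intended (privacy) process.
fitness = pm4py.fitness_token_based_replay(log, net, initial_marking, final_marking)
print("token-replay fitness:", fitness)  # dict incl. average trace fitness
```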
