The Role of Artificial Intelligence for Diversity, Equity, and Inclusion

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 31 October 2025

Special Issue Editor


Dr. Marco Rospocher
Guest Editor
Department of Foreign Languages and Literatures, University of Verona, Lungadige Porta Vittoria 41, 37129 Verona, Italy
Interests: artificial intelligence; knowledge extraction; semantic web; natural language processing; digital humanities

Special Issue Information

Dear Colleagues,

Artificial intelligence (AI) has emerged as a transformative force across sectors, influencing decision-making processes, shaping societal norms, and impacting economic landscapes. As AI technologies continue to evolve and integrate more deeply into our daily lives, concerns about AI bias and fairness have gained significant attention from researchers, policymakers, and industry leaders alike. Studies have shown that AI systems can perpetuate and amplify societal biases present in their training data, leading to discriminatory outcomes across domains including employment, healthcare, and criminal justice. Despite these challenges, AI also holds tremendous potential to advance diversity, equity, and inclusion (DEI) objectives. The possibilities are vast, from developing AI-driven tools that mitigate bias in hiring processes to leveraging AI for personalized education and healthcare. This Special Issue aims to investigate the role of AI technologies in both addressing and contributing to DEI.

We invite submissions that explore, but are not limited to, the following topics:

  • Bias and Fairness: Methods and techniques for detecting and mitigating bias in AI models and technologies.
  • Data Diversity and Representation: Strategies for enhancing diversity in training datasets.
  • Ethical AI Development: Frameworks and guidelines for developing AI systems that adhere to ethical principles, including transparency, accountability, and privacy.
  • Explainable AI (XAI): Techniques for enhancing the interpretability and explainability of AI models to promote trust and understanding among diverse stakeholders.
  • AI for Accessibility: Innovations in using AI to develop assistive technologies that enhance accessibility for people with disabilities, including natural language processing, computer vision, and gesture recognition.
  • Human–AI Collaboration: Studies on designing AI systems that facilitate inclusive collaboration between humans and machines, considering diverse user needs and perspectives.
  • Inclusive User Interface Design: Methods for designing AI-powered user interfaces that are accessible and inclusive, considering diverse user abilities and preferences.
  • AI-driven Social Network Analysis: Applications of AI in analyzing social networks to understand and address DEI-related issues, such as online harassment, echo chambers, and social polarization.

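To make the first topic concrete, the sketch below illustrates one common fairness check, demographic parity: comparing a classifier's positive-prediction (selection) rates across demographic groups. The data and group labels are synthetic, invented purely for illustration; this is one of many possible fairness criteria, not a prescribed method for submissions.

```python
def selection_rates(predictions, groups):
    """Return the positive-prediction rate for each demographic group."""
    rates = {}
    for g in set(groups):
        preds_g = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(preds_g) / len(preds_g)
    return rates

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rates between any two groups.

    A gap of 0 means all groups are selected at the same rate
    (demographic parity); larger gaps indicate greater disparity.
    """
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Synthetic example: 1 = positive outcome (e.g., shortlisted), 0 = negative.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(selection_rates(preds, groups))         # {'a': 0.75, 'b': 0.25}
print(demographic_parity_gap(preds, groups))  # 0.5
```

Demographic parity is only one operationalization of fairness; alternatives such as equalized odds condition on the true outcome and can conflict with it, which is precisely the kind of trade-off this topic invites submissions to examine.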
We encourage researchers and practitioners in computer science to contribute to this special issue by submitting original research papers, survey articles, and case studies. By advancing our understanding of the role of AI in promoting diversity, equity, and inclusion through rigorous computer science research, we can pave the way for the development of more ethical, inclusive, and trustworthy AI systems. Submissions will undergo a thorough peer review process, ensuring the highest quality and relevance to the special issue's theme.

Dr. Marco Rospocher
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and written in good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • natural language processing
  • human–AI collaboration
  • accessibility
  • explainability
  • fairness
  • diversity

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (2 papers)


Research

23 pages, 711 KiB  
Article
Comparison of Grammar Characteristics of Human-Written Corpora and Machine-Generated Texts Using a Novel Rule-Based Parser
by Simon Strübbe, Irina Sidorenko and Renée Lampe
Information 2025, 16(4), 274; https://doi.org/10.3390/info16040274 - 28 Mar 2025
Abstract
As the prevalence of machine-written texts grows, it has become increasingly important to distinguish between human- and machine-generated content, especially when such texts are not explicitly labeled. Current artificial intelligence (AI) detection methods primarily focus on human-like characteristics, such as emotionality and subjectivity. However, these features can be easily modified through AI humanization, which involves altering word choice. In contrast, altering the underlying grammar without affecting the conveyed information is considerably more challenging. Thus, the grammatical characteristics of a text can be used as additional indicators of its origin. To address this, we employ a newly developed rule-based parser to analyze the grammatical structures in human- and machine-written texts. Our findings reveal systematic grammatical differences between human- and machine-written texts, providing a reliable criterion for the determination of the text origin. We further examine the stability of this criterion in the context of AI humanization and translation to other languages.

14 pages, 2751 KiB  
Article
Gender Bias in Text-to-Image Generative Artificial Intelligence When Representing Cardiologists
by Geoffrey Currie, Christina Chandra and Hosen Kiat
Information 2024, 15(10), 594; https://doi.org/10.3390/info15100594 - 30 Sep 2024
Abstract
Introduction: While the global medical graduate and student population is approximately 50% female, only 13–15% of cardiologists and 20–27% of training fellows in cardiology are female. The potentially transformative use of text-to-image generative artificial intelligence (AI) could improve promotions and professional perceptions. In particular, DALL-E 3 offers a useful tool for promotion and education, but it could reinforce gender and ethnicity biases. Method: Responding to pre-specified prompts, DALL-E 3 via GPT-4 generated a series of individual and group images of cardiologists. Overall, 44 images were produced, including 32 images that contained individual characters and 12 group images that contained between 7 and 17 characters. All images were independently analysed by three reviewers for the characters’ apparent genders, ages, and skin tones. Results: Among all images combined, 86% (N = 123) of cardiologists were depicted as male. A light skin tone was observed in 93% (N = 133) of cardiologists. The gender distribution was not statistically different from that of actual Australian workforce data (p = 0.7342), but this represents a DALL-E 3 gender bias and the under-representation of females in the cardiology workforce. Conclusions: Gender bias associated with text-to-image generative AI when using DALL-E 3 among cardiologists limits its usefulness for promotion and education in addressing the workforce gender disparities.
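The comparison in this abstract (observed gender counts in generated images versus a workforce reference proportion) has the form of an exact binomial test. The sketch below shows that form only; the reference proportion used here is a placeholder assumption, not a value reported in the paper, and the paper's own p = 0.7342 comes from its authors' analysis, not from this code.

```python
from math import comb

def binom_pmf(k, n, p):
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def exact_binomial_test(k, n, p0):
    """Two-sided exact binomial test.

    Sums the probabilities of all outcomes no more likely than the
    observed one under the null proportion p0.
    """
    observed = binom_pmf(k, n, p0)
    total = sum(binom_pmf(i, n, p0) for i in range(n + 1)
                if binom_pmf(i, n, p0) <= observed + 1e-12)
    return min(1.0, total)

# Per the abstract: 123 of 143 depicted cardiologists were male (86%).
# The workforce proportion below is a HYPOTHETICAL placeholder, not a
# figure taken from the paper or from Australian workforce statistics.
k, n = 123, 143
workforce_male = 0.85  # placeholder reference proportion
print(exact_binomial_test(k, n, workforce_male))
```

A non-significant result in such a test, as the abstract notes, does not mean the model is unbiased; it means the model's skew mirrors an already skewed workforce, which is itself the DEI concern.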
