Women Leaders in AI: Insights, Challenges, and Transformative Practices

A special issue of Merits (ISSN 2673-8104).

Deadline for manuscript submissions: 31 December 2026

Special Issue Editors


Guest Editor
Institute for Social Innovation, Fielding Graduate University, Santa Barbara, CA 93105, USA
Interests: gender; leadership; commons; education; evaluation

Guest Editor
Regis University, Denver, CO 80221, USA
Interests: ethics and AI; ethics in business

Guest Editor
Eastern University, St. Davids, PA 19087-3696, USA
Interests: leadership; organizational behavior; mentoring; women in leadership

Special Issue Information

Dear Colleagues,

We are pleased to invite you to submit original academic and practical articles for our Special Issue, “Women Leaders in AI: Insights, Challenges, and Transformative Practices”.

Purpose of the Special Issue:

Artificial Intelligence (AI) is an integral part of our world. From helping doctors detect diseases earlier to enabling smart assistants that streamline our daily tasks, AI technologies embedded in everyday applications, along with generative AI, are changing the way we live and work. The rise of AI promises unprecedented innovation across nearly every industry. However, as AI continues to evolve, so do concerns about its ethical use, because the ways in which we engage with these technologies have psychological, social, and economic consequences.

Understanding both the benefits and the risks of AI is crucial to ensuring that we develop and apply these technologies in ways that truly benefit humanity. Women leaders and practitioners play a critical role in shaping AI at its development and implementation stages, helping to ensure that it becomes a tool for human thriving rather than one that reproduces the inequalities that currently exist in the world. That aim reflects our ever-evolving understanding of ethics as the way we relate to one another and to our technologies.

The purpose of this Special Issue of Merits is for leaders and scholars who focus on the contributions of women to share insights, challenges, and transformative practices related to AI. These contributions will draw on their research and experience in designing AI, using AI in organizations, studying AI, critiquing AI designs and companies, or interviewing key women in the AI world and asking them tough questions. We anticipate that this Special Issue will provoke controversial and important discussions and spark the actions necessary to ensure that AI supports, but does not supplant, humanity.

Background:

AI’s advantages are vast and are already visible in various fields. In healthcare, for example, AI-powered tools are being used to analyze medical images, detect cancer at early stages, and even predict patient outcomes. Systems like Google’s DeepMind have demonstrated remarkable accuracy in diagnosing eye diseases and breast cancer, sometimes outperforming human specialists (McKinney et al., 2020).

In the business world, AI streamlines operations by automating repetitive tasks, analyzing customer behavior, and enhancing decision making through predictive analytics. Retailers use AI to optimize supply chains and personalize customer experiences, while financial institutions use it to detect fraud and assess credit risk (PwC, 2018). These efficiencies can save time, reduce costs, and boost overall productivity.

Furthermore, AI can support environmental sustainability. Machine learning algorithms can help scientists model climate change, manage energy consumption, and monitor endangered species. AI-driven smart grids can balance energy loads and reduce waste, contributing to more sustainable urban development (Rolnick et al., 2019).

However, AI also presents a number of serious risks that demand careful attention. For example, AI data centers consume enormous amounts of energy, risking higher energy costs that will be borne by individuals and communities. Another immediate concern is job displacement. As AI systems become more capable, tasks that once required human intelligence are increasingly performed by machines. A 2023 Goldman Sachs report predicted that AI could affect up to 300 million full-time jobs worldwide (Goldman Sachs, 2023).

Algorithmic bias is another pressing issue. AI systems learn from data, and if that data reflects historical biases or lacks diversity, the system may reproduce or even amplify those biases. This has real-world consequences. For example, facial recognition technologies have been shown to perform poorly on people with darker skin tones, leading to false arrests and other injustices (Buolamwini & Gebru, 2018).

Because generative AI is a compilation of what has already been written, the racism, sexism, and ableism present in those texts are coded into the system. In More Than a Glitch, Meredith Broussard demonstrated that neutrality in tech is a myth and argued that those who create the algorithms need to be held accountable (Broussard, 2024).

Privacy and surveillance also pose significant challenges. AI systems often require vast amounts of personal data to function effectively. Without strong regulation, this data can be collected, stored, and exploited in ways that violate individuals’ privacy. In some countries, AI is already being used to enable mass surveillance. China's social credit system, which monitors and scores citizens based on their behavior, raises concerns about authoritarian control and the erosion of civil liberties (Creemers, 2018).

Long-term existential risks must also be considered. Because new AI technologies have the ability to program themselves and thus act in unpredictable ways, some experts warn that AI systems that surpass human intelligence and are not aligned with human values could produce harmful results. Figures like the late Stephen Hawking and Elon Musk have urged caution, calling for robust safeguards and international cooperation to prevent potentially catastrophic outcomes (Cellan-Jones, 2014; Clifford, 2017).

In Taming Silicon Valley, Gary Marcus, one of the most trusted voices in AI, explains how Big Tech is taking advantage of us, how AI could make things much worse, and, most importantly, what we can do to safeguard our democracy, our society, and our future. He lays out what is lacking in the current deployment of AI, what its greatest risks are, and how Big Tech has played both the public and the government and effectively captured policymakers, before examining why the US government has so far been ineffective at reining in Big Tech. He calls for government regulation and citizen action to protect democracy and ourselves (Marcus, 2024).

The future of how humanity uses AI is not predetermined; the choices we make today will shape how this emerging technology affects individuals and communities. To maximize the benefits of AI while minimizing its risks, we need responsible, ethical development guided by active engagement with change, the intentional cultivation of trust, inclusivity, attention to community safety, and the active imagination of a future in which all can flourish.

Everyone’s voice and participation are essential to balancing the benefits and burdens of these ever-evolving technologies. AI should not be developed in isolation from the people it affects. Open dialog and democratic oversight can help ensure that AI technologies serve the public good rather than narrow interests. Governments must establish clear regulations to ensure fair access and to set guidelines for safety. Companies must commit to building their stakeholders’ capacity for ethical use through training, fairness, and accountability. Educational institutions must prepare the next generation with the skills needed to thrive in an AI-driven economy. Individuals must use AI responsibly and thoughtfully.

As a tool, AI holds extraordinary potential to improve lives, solve complex problems, and drive progress. But like any powerful tool, it comes with risks that must not be ignored. By balancing innovation with responsibility and by ensuring that AI serves human values, we can build a future where technology enhances our shared well-being rather than undermining it. The choices we make now will determine whether AI becomes a force for good or a source of harm—and those choices are ours to make.

Authors who focus on women leaders in AI and/or how women are impacted by AI are invited to submit articles on topics relating to our three broad themes: (1) insights from the development and deployment of general AI technologies and generative AI, (2) challenges that arise as the potential of these technologies is actualized, and (3) transformative practices that will help leaders and practitioners creatively and responsibly integrate AI into their professional and personal lives.

These themes could include topics such as the following:

  • bias in algorithms that disproportionately excludes groups such as women, people of color, or lower socio-economic classes;
  • how AI is supporting organizations and teams;
  • the positives and negatives of using AI at each level of education;
  • the need for self-regulation and government regulation of AI;
  • how countries can cooperate on global standards for AI;
  • governance tools, inspired by insights into women’s leadership practices, that people can use in various settings to ensure the fair and positive use of AI;
  • challenges around the use of AI in women’s health applications;
  • other ethical issues in the development and deployment of AI;
  • creative ways AI has been implemented that support human flourishing;
  • other topics that relate to women’s involvement with the broad themes above.

We request that, prior to submitting a manuscript, interested authors submit a proposed title and an abstract of 200–300 words summarizing their intended contribution. Abstracts will be reviewed by the Guest Editors to ensure a proper fit within the scope of this Special Issue. Please send the proposed title and abstract to the Guest Editors or to the Managing Editor of Merits, Sandee Pan (sandee.pan@mdpi.com). Full manuscripts will undergo double-blind peer review.

We look forward to receiving your contributions.

References:

Anadiotis, G. (2020, November 12). What's next for AI: Gary Marcus talks about the journey toward robust artificial intelligence. ZDNet.

Benoît, G. (2019, November 26). Les machines ne savent pas gérer les situations imprévues [Machines do not know how to handle unforeseen situations]. Les Echos.

Bhuiyan, J. (2017, March 8). Uber's new head of its AI labs has stepped down from his role. Vox.

Broussard, M. (2024). More than a glitch: Confronting race, gender and ability bias in tech. The MIT Press.

Buolamwini, J., & Gebru, T. (2018). Gender shades: Intersectional accuracy disparities in commercial gender classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency.

Cellan-Jones, R. (2014, December 2). Stephen Hawking warns artificial intelligence could end mankind. BBC News.

Chavanne, Y. (2023, March 29). Bengio, Musk, Wozniak et des centaines d'autres experts appellent à mettre en pause le développement des IA [Bengio, Musk, Wozniak and hundreds of other experts call for a pause in AI development]. ICTjournal.

Clifford, C. (2017, July 17). Elon Musk: AI is a fundamental risk to the existence of human civilization. CNBC.

Creemers, R. (2018). China’s social credit system: An evolving practice of control. SSRN.

Feldman, A. (n.d.). Startup founded by cognitive scientist Gary Marcus and roboticist Rodney Brooks raises $15 million to make building smarter robots easier. Forbes.

Fried, I. (2017, March 8). The head of Uber's AI labs is latest to leave the company. Axios.

Goldman Sachs. (2023). The potentially large effects of artificial intelligence on economic growth.

Marcus, G. (2017, March). Am I human?: Researchers need new ways to distinguish artificial intelligence from the natural kind. Scientific American, 316(3), 58–63.

Marcus, G. (2022, August 7). Siri or Skynet? How to separate AI fact from fiction. The Observer.

Marcus, G. (2022, October). Artificial confidence: Even the newest, buzziest systems of artificial general intelligence are stymied by the same old problems. Scientific American, 327(4), 42–45.

Marcus, G. (2023, March 28). AI risk ≠ AGI risk. The Road to AI We Can Trust.

Marcus, G. F. (2024). Taming Silicon Valley: How we can ensure that AI works for us. MIT Press.

McKinney, S. M., Sieniek, M., Godbole, V., et al. (2020). International evaluation of an AI system for breast cancer screening. Nature, 577(7788), 89–94.

Pause giant AI experiments: An open letter. (2023). Future of Life Institute.

PwC. (2018). AI will transform the productivity and GDP potential of the global economy.

Rolnick, D., Donti, P. L., Kaack, L. H., et al. (2019). Tackling climate change with machine learning. arXiv preprint arXiv:1906.05433.

The world needs an international agency for artificial intelligence, say two AI experts. (2023). The Economist.

Dr. Randal Joy Thompson
Dr. Catharyn A. Baird
Dr. Mary Tabata
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 250 words) can be sent to the Editorial Office for assessment.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a double-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Merits is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • artificial intelligence
  • women
  • leadership

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers

This special issue is now open for submission.