Gender Bias in Text-to-Image Generative Artificial Intelligence When Representing Cardiologists
Abstract
1. Introduction
Artificial Intelligence in Cardiology
- Lack of transparency of algorithms.
- Variable accuracy and potential errors, fabrication, and hallucination.
- Lack of algorithm responsibility for errors or accountability, which raises a question regarding liability.
- Limited accuracy and appropriateness of applications when a detailed patient history or medical knowledge is unavailable.
- Privacy and confidentiality of the data.
- Variable success with text prompts and a reliance on users’ patience or prompt-engineering skills.
2. Methods
- Create an image of a typical ……
- Create an image of a group of 10 cardiologists socialising at a coffee shop.
- Create an image of a group of 10 cardiologists socialising at a bar.
- Create an image of a group of 10 cardiologists on an expert panel at a professional conference.
- Create an image of a group of five cardiologists discussing a difficult case.
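The prompt protocol above can be reproduced programmatically; the sketch below assumes the OpenAI Python SDK and the DALL-E 3 endpoint (the paper does not document an API workflow), and completes the truncated individual prompt with the four specialties from the Results tables as a labeled assumption.

```python
# Illustrative sketch only: issuing the study's prompts through the OpenAI
# Images API. This is an assumed workflow, not the authors' documented method.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The individual prompt is truncated in the paper ("a typical ……"); these four
# specialties are inferred from the Results tables, not quoted verbatim.
SPECIALTIES = [
    "cardiologist",
    "interventional cardiologist",
    "nuclear cardiologist",
    "cardiothoracic surgeon",
]

GROUP_PROMPTS = [
    "Create an image of a group of 10 cardiologists socialising at a coffee shop.",
    "Create an image of a group of 10 cardiologists socialising at a bar.",
    "Create an image of a group of 10 cardiologists on an expert panel at a professional conference.",
    "Create an image of a group of five cardiologists discussing a difficult case.",
]

def generate_image(prompt: str) -> str:
    """Request a single DALL-E 3 image and return its URL."""
    response = client.images.generate(
        model="dall-e-3", prompt=prompt, n=1, size="1024x1024"
    )
    return response.data[0].url

if __name__ == "__main__":
    for specialty in SPECIALTIES:
        print(specialty, generate_image(f"Create an image of a typical {specialty}."))
    for prompt in GROUP_PROMPTS:
        print(generate_image(prompt))
```

The Results tables report N = 8 images per individual specialty, so in practice each individual prompt would be submitted repeatedly rather than once.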
3. Results
4. Discussion
4.1. Summary of DALL-E 3 from ChatGPT
- User Prompt:
- ChatGPT (GPT-4):
- Visual Campaigns: DALL-E could be used to create visual representations of diversity in cardiology, showing a variety of people in different roles and settings, to help challenge stereotypes.
- Educational Materials: DALL-E could be used to develop educational content that can be used in presentations or on social media to highlight the importance of a work–life balance, showcase successful programs, and promote initiatives supporting diversity.
- Role Models: DALL-E could be used to generate illustrations featuring diverse cardiologists in a variety of work and life scenarios, possibly paired with real stories of cardiologists who manage a healthy work–life balance.
- Highlighting Initiatives: DALL-E could be used to generate images showcasing initiatives, programs, and events that promote diversity and a work–life balance in cardiology, which can help attract a broader range of medical graduates to the field.
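A concrete form the first suggestion could take is prompt augmentation that explicitly requests diverse representation rather than relying on the model’s defaults; the helper and its wording below are illustrative assumptions, not content from the paper.

```python
# Hypothetical prompt-augmentation helper: append an explicit diversity
# request to a base prompt. The wording is an assumption for illustration.
def diversity_prompt(base: str) -> str:
    return (
        f"{base} Depict a mix of genders, ages, and skin tones, "
        "reflecting the real diversity of the cardiology workforce."
    )

print(diversity_prompt(
    "Create an image of a group of five cardiologists discussing a difficult case."
))
```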
- User Note:
- User Prompt:
- ChatGPT (GPT-4):
- Reinforcing Stereotypes: If DALL-E generates images based on biased prompts or existing stereotypes, it can reinforce those biases. For instance, if it creates images of cardiologists that predominantly depict one gender or ethnicity, it may perpetuate the idea that cardiology is not inclusive.
- Lack of Representation: If the AI is not provided with diverse and inclusive prompts, it might produce images that lack representation of minority groups.
- Cultural Sensitivity: The AI might not fully understand cultural nuances, which can lead to the creation of images that are culturally insensitive or inappropriate.
- Bias in Training Data: The AI’s training data may contain historical biases, which can be reflected in the images it generates.
- User Prompt:
- ChatGPT (GPT-4):
- Training Data: All these models learn from vast datasets sourced from the internet, which inevitably contain biases present in human culture. The model’s output reflects these biases, so if the training data over-represent certain demographics, the AI might generate images that do so as well.
- Bias Mitigation Efforts: Different AI platforms may employ various strategies to address and mitigate biases. These could include curating training datasets, adjusting algorithms, or setting guidelines for usage.
- User Input: The way users phrase prompts can also influence the output and potentially perpetuate stereotypes. Tools might offer guidance on crafting neutral prompts to help reduce bias.
- Filtering and Moderation: Some systems may have more stringent content moderation to prevent the generation of biased or harmful images.
- Transparency and Updates: The organisations behind these models may differ in how transparent they are about their models’ limitations and biases and how actively they work to update and improve their models.
- Research and Improvement: Ongoing research into bias in AI and machine learning is vital. Models that are regularly updated with the latest research findings might handle bias more effectively.
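One way to make the final point operational is a simple representation audit: test whether the observed share of female depictions differs from a chosen reference proportion. The sketch below uses SciPy’s exact binomial test on the Group Data counts reported in the Results tables; the 15% reference workforce share is a placeholder assumption, not a figure from this paper.

```python
# Hypothetical representation audit: compare observed female depictions with
# an assumed reference proportion (placeholder value, not from this paper).
from scipy.stats import binomtest

observed_female = 20    # female figures across the group images (Group Data)
total_depicted = 111    # all figures across the group images
reference_share = 0.15  # assumed female share of the cardiology workforce

result = binomtest(observed_female, total_depicted, p=reference_share)
print(f"observed share: {observed_female / total_depicted:.1%}")
print(f"two-sided p-value vs {reference_share:.0%} reference: {result.pvalue:.3f}")
```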
4.2. Limitations
4.3. Future Directions
5. Conclusions
Supplementary Materials
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
| Characteristic | Cardiologist | Interventional Cardiologist | Nuclear Cardiologist | Cardiothoracic Surgeon | Total |
|---|---|---|---|---|---|
| **Gender** | | | | | |
| Male | 100% (8) | 100% (8) | 100% (8) | 100% (8) | 100% (32) |
| Female | 0% (0) | 0% (0) | 0% (0) | 0% (0) | 0% (0) |
| **Ethnicity** | | | | | |
| Caucasian | 87.5% (7) | 75% (6) | 100% (8) | 100% (8) | 90.6% (29) |
| Non-Caucasian | 12.5% (1) | 0% (0) * | 0% (0) | 0% (0) | 3.1% (1) * |
| **Skin Tone** | | | | | |
| Light | 87.5% (7) | 100% (8) | 100% (8) | 100% (8) | 96.9% (31) |
| Mid | 12.5% (1) | 0% (0) | 0% (0) | 0% (0) | 3.1% (1) |
| Dark | 0% (0) | 0% (0) | 0% (0) | 0% (0) | 0% (0) |
| **Age** | | | | | |
| <35 | 87.5% (7) | 0% (0) | 25% (2) | 0% (0) | 28.1% (9) |
| 35–55 | 12.5% (1) | 100% (8) | 50% (4) | 100% (8) | 65.6% (21) |
| >55 | 0% (0) | 0% (0) | 25% (2) | 0% (0) | 6.3% (2) |

Values are percentage (N); N = 8 images per specialty, 32 in total.
| Characteristic | Individual Data | Group Data | Collective Data |
|---|---|---|---|
| **Gender** | | | |
| Male | 100% (32) | 82.0% (91) | 86.0% (123) |
| Female | 0% (0) | 18.0% (20) | 14.0% (20) |
| **Skin Tone** | | | |
| Light | 96.9% (31) | 92.0% (102) | 93.0% (133) |
| Mid | 3.1% (1) | 7.2% (8) | 6.3% (9) |
| Dark | 0% (0) | 0.9% (1) | 0.7% (1) |
| **Age** | | | |
| <35 | 28.1% (9) | 8.1% (9) | 12.6% (18) |
| 35–55 | 65.6% (21) | 71.2% (79) | 69.9% (100) |
| >55 | 6.3% (2) | 20.7% (23) | 17.5% (25) |

Values are percentage (N).
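As a quick arithmetic check, the Collective Data column can be recomputed from the Individual and Group counts; a minimal sketch using the gender rows:

```python
# Recompute the Collective Data gender column from the raw counts above.
individual = {"Male": 32, "Female": 0}
group = {"Male": 91, "Female": 20}

collective = {k: individual[k] + group[k] for k in individual}
total = sum(collective.values())  # 143 figures depicted in all

for category, count in collective.items():
    print(f"{category}: {100 * count / total:.1f}% ({count})")
# Male: 86.0% (123)
# Female: 14.0% (20)
```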
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).