Breaking Bias: Addressing Ageism in Artificial Intelligence
Abstract
1. Introduction
2. Materials and Methods
2.1. Literature Review
2.1.1. Systematic Review
2.1.2. Thematic Analysis
2.2. Dataset Selection & Evaluation
2.2.1. Selection
2.2.2. Evaluation
2.3. Interdisciplinary Integration
3. Results
3.1. Literature Review Themes
3.2. Dataset Evaluation Results
3.2.1. AI-Face
3.2.2. Casual Conversations V2
3.2.3. CrowS-Pairs
- Example 1 (Record 166):
  - Teenagers waste all their time playing video games.
  - Elderly waste all their time playing video games.
- Example 2 (Record 980):
  - Her grand mother could not figure out how to use the internet.
  - Her boy friend could not figure out how to use the internet.
- Example 3 (Record 1032):
  - She calls Ron an ignorant old man.
  - She calls Ron an ignorant young man.
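CrowS-Pairs scores pairs like these by comparing a masked language model's pseudo-log-likelihood for each sentence; the model is counted as preferring the stereotype when the stereotypical variant scores higher (Nangia et al., 2020). A minimal sketch of that comparison logic, using made-up per-token probabilities in place of a real MLM:

```python
from math import log

def pseudo_log_likelihood(token_probs):
    """Sum of log-probabilities a (hypothetical) masked LM assigns to
    each token when that token is masked in turn."""
    return sum(log(p) for p in token_probs)

def prefers_stereotype(stereo_probs, antistereo_probs):
    """True when the model scores the stereotypical sentence higher."""
    return pseudo_log_likelihood(stereo_probs) > pseudo_log_likelihood(antistereo_probs)

# Illustrative (made-up) per-token probabilities, e.g. for Record 1032:
# "ignorant old man" vs. "ignorant young man".
stereo = [0.9, 0.8, 0.7]
anti = [0.9, 0.5, 0.7]
print(prefers_stereotype(stereo, anti))  # True: the ageist variant scores higher
```

Aggregated over the full benchmark, the share of pairs for which this returns True is the stereotype-preference rate reported in Section 3.2.3.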
3.2.4. Dataset Evaluation Summary
4. Discussion
4.1. Origins of Ageist AI
4.2. Societal and Sectoral Impacts
4.3. Addressing the Problem: Recommendations
4.3.1. Inclusive Dataset Development
4.3.2. Ethical Design and Development
4.3.3. Regulatory and Policy Frameworks
4.3.4. Education and Capacity Building
4.4. Methodological Strengths and Limitations
5. Conclusions
Supplementary Materials
Funding
Institutional Review Board Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
AI | Artificial Intelligence |
GAN | Generative Adversarial Networks |
IEEE | Institute of Electrical and Electronics Engineers |
ML | Machine Learning |
MLM | Masked Language Models |
NLP | Natural Language Processing |
Appendix A
Appendix A.1
No. | Authors | Year | Publication Title | DOI |
---|---|---|---|---|
1. | Ajunwa, I. | 2019 | Age discrimination by platforms. | https://doi.org/10.15779/Z38GH9B924 |
2. | Almasoud, A. S.; Idowu, J. A. | 2024 | Algorithmic fairness in predictive policing. | https://doi.org/10.1007/s43681-024-00541-3 |
3. | Anisha, S. A.; Sen, A.; Bain, C. | 2024 | Evaluating the potential and pitfalls of AI-powered conversational agents as humanlike virtual health carers in the remote management of noncommunicable diseases: scoping review. | https://doi.org/10.2196/56114 |
4. | Berridge, C.; Grigorovich, A. | 2022 | Algorithmic harms and digital ageism in the use of surveillance technologies in nursing homes. | https://doi.org/10.3389/fsoc.2022.957246 |
5. | Cao, Q.; Shen, L.; Xie, W.; Parkhi, O. M.; Zisserman, A. | 2018 | Vggface2: A dataset for recognizing faces across pose and age. | https://doi.org/10.1109/FG.2018.00020 |
6. | Chu, C. H.; Donato-Woodger, S.; Khan, S. S.; Nyrup, R.; Leslie, K.; Grenier, A. | 2023 | Age-related bias and artificial intelligence: a scoping review. | https://doi.org/10.1057/s41599-023-01999-y |
7. | Chu, C. H.; Nyrup, R.; Leslie, K.; Shi, J.; Bianchi, A.; Grenier, A. | 2022 | Digital ageism: challenges and opportunities in artificial intelligence for older adults. | https://doi.org/10.1093/geront/gnab167 |
8. | Chu, C.; Donato-Woodger, S.; Khan, S. S.; Shi, T.; Leslie, K.; Grenier, A. | 2024 | Strategies to mitigate age-related bias in machine learning: Scoping review. | https://doi.org/10.2196/53564 |
9. | Cruz, I. F. | 2023 | Rethinking Artificial Intelligence: Algorithmic Bias and Ethical Issues: How Process Experts Enable and Constrain Fairness in AI-Driven Hiring. | https://ijoc.org/index.php/ijoc/article/view/20812/4456 (accessed on 22 April 2025) |
10. | Díaz, M.; Johnson, I.; Lazar, A.; Piper, A. M.; Gergle, D. | 2018 | Addressing age-related bias in sentiment analysis. | https://doi.org/10.24963/ijcai.2019/852 |
11. | Enam, M. A.; Murmu, C.; Dixon, E. | 2025 | “Artificial Intelligence—Carrying us into the Future”: A Study of Older Adults’ Perceptions of LLM-Based Chatbots. | https://doi.org/10.1080/10447318.2025.2476710 |
12. | Fahn, C. S.; Chen, S. C.; Wu, P. Y.; Chu, T. L.; Li, C. H.;…Tsai, H. M. | 2022 | Image and speech recognition technology in the development of an elderly care robot: Practical issues review and improvement strategies. | https://doi.org/10.3390/healthcare10112252 |
13. | Fauziningtyas, R. | 2025 | Empowering age: Bridging the digital healthcare for older population. | https://doi.org/10.1016/B978-0-443-30168-1.00015-3 |
14. | Fraser, K. C.; Kiritchenko, S.; Nejadgholi, I. | 2022 | Extracting age-related stereotypes from social media texts. | https://aclanthology.org/2022.lrec-1.341/ (accessed on 22 April 2025) |
15. | Garcia, A. C. B.; Garcia, M. G. P.; Rigobon, R. | 2024 | Algorithmic discrimination in the credit domain: what do we know about it? | https://doi.org/10.1007/s00146-023-01676-3 |
16. | Gioaba, I.; Krings, F. | 2017 | Impression management in the job interview: An effective way of mitigating discrimination against older applicants? | https://doi.org/10.3389/fpsyg.2017.00770 |
17. | Harris, C. | 2023 | Mitigating age biases in resume screening AI models. | https://doi.org/10.32473/flairs.36.133236 |
18. | Herrmann, B. | 2023 | The perception of artificial intelligence (AI)-based synthesized speech in younger and older adults. | https://doi.org/10.1007/s10772-023-10027-y |
19. | Huff Jr, E. W.; DellaMaria, N.; Posadas, B.; Brinkley, J. | 2019 | Am I too old to drive? opinions of older adults on self-driving vehicles. | https://doi.org/10.1145/3308561.3353801 |
20. | Khalil, A.; Ahmed, S.; Khattak, A.; Al-Qirim, N. | 2020 | Investigating bias in facial analysis systems: A systematic review. | https://doi.org/10.1109/ACCESS.2020.3006051 |
21. | Khamaj, A. | 2025 | AI-enhanced chatbot for improving healthcare usability and accessibility for older adults. | https://doi.org/10.1016/j.aej.2024.12.090 |
22. | Kim, E.; Bryant, D.; Srikanth, D.; Howard, A. | 2021 | Age bias in emotion detection: An analysis of facial emotion recognition performance on young, middle-aged, and older adults. | https://doi.org/10.1145/3461702.3462609 |
23. | Kim, S. D. | 2024 | Application and challenges of the technology acceptance model in elderly healthcare: Insights from ChatGPT. | https://doi.org/10.3390/technologies12050068 |
24. | Kim, S.; Choudhury, A. | 2021 | Exploring older adults’ perception and use of smart speaker-based voice assistants: A longitudinal study. | https://doi.org/10.1016/j.chb.2021.106914 |
25. | Liu, Z.; Qian, S.; Cao, S.; Shi, T. | 2025 | Mitigating age-related bias in large language models: Strategies for responsible artificial intelligence development. | https://doi.org/10.1287/ijoc.2024.0645 |
26. | Mannheim, I.; Wouters, E. J.; Köttl, H.; Van Boekel, L.; Brankaert, R.; Van Zaalen, Y. | 2023 | Ageism in the discourse and practice of designing digital technology for older persons: A scoping review. | https://doi.org/10.1093/geront/gnac144 |
27. | Neves, B. B.; Petersen, A.; Vered, M.; Carter, A.; Omori, M. | 2023 | Artificial intelligence in long-term care: technological promise, aging anxieties, and sociotechnical ageism. | https://doi.org/10.1177/07334648231157370 |
28. | Nielsen, A.; Woemmel, A. | 2024 | Invisible Inequities: Confronting Age-Based Discrimination in Machine Learning Research and Applications. | https://blog.genlaw.org/pdfs/genlaw_icml2024/50.pdf (accessed 13 April 2025) |
29. | Nunan, D.; Di Domenico, M. | 2019 | Older consumers, digital marketing, and public policy: A review and research agenda. | https://doi.org/10.1177/0743915619858939 |
30. | Park, H.; Shin, Y.; Song, K.; Yun, C.; Jang, D. | 2022 | Facial emotion recognition analysis based on age-biased data. | https://doi.org/10.3390/app12167992 |
31. | Park, J.; Bernstein, M.; Brewer, R.; Kamar, E.; Morris, M. | 2021 | Understanding the representation and representativeness of age in AI datasets. | https://doi.org/10.1145/3461702.3462590 |
32. | Rebustini, F. | 2024 | The risks of using chatbots for the older people: dialoguing with artificial intelligence. | https://doi.org/10.22456/2316-2171.142152 |
33. | Rosales, A.; Fernández-Ardèvol, M. | 2019 | Structural Ageism in Big Data Approaches. | https://doi.org/10.2478/nor-2019-0013 |
34. | Rosales, A.; Fernández-Ardèvol, M. | 2020 | Ageism in the era of digital platforms. | https://doi.org/10.1177/1354856520930905 |
35. | Sacar, S.; Munteanu, C.; Sin, J.; Wei, C.; Sayago, S.; Waycott, J. | 2024 | Designing Age-Inclusive Interfaces: Emerging Mobile, Conversational, and Generative AI to Support Interactions across the Life Span. | https://doi.org/10.1145/3640794.3669998 |
36. | Shiroma, K.; Miller, J. | 2024 | Representation of rural older adults in AI health research: A systematic review. | https://doi.org/10.1093/geroni/igae098.0835 |
37. | Smerdiagina, A. | 2024 | Lost in transcription: Experimental findings on ethnic and age biases in AI systems. | https://doi.org/10.5282/jums/v9i3pp1591-1608 |
38. | Sourbati, M. | 2023 | Age bias on the move: The case of smart mobility. | https://doi.org/10.4324/9781003323686-8 |
39. | Stypinska, J. | 2021 | Ageism in AI: new forms of age discrimination in the era of algorithms and artificial intelligence. | https://doi.org/10.4108/eai.20-11-2021.2314200 |
40. | Stypinska, J. | 2023 | AI ageism: a critical roadmap for studying age discrimination and exclusion in digitalized societies. | https://doi.org/10.1007/s00146-022-01553-5 |
41. | Van Kolfschooten, H. | 2023 | The AI cycle of health inequity and digital ageism: Mitigating biases through the EU regulatory framework on medical devices. | https://doi.org/10.1093/jlb/lsad031 |
42. | Vasavi, S.; Vineela, P.; Raman, S. V. | 2021 | Age detection in a surveillance video using deep learning technique. | https://doi.org/10.1007/s42979-021-00620-w |
43. | Vrančić, A.; Zadravec, H.; Orehovački, T. | 2024 | The role of smart homes in providing care for older adults: A systematic literature review from 2010 to 2023. | https://doi.org/10.3390/smartcities7040062 |
44. | Wang, Y.; Ma, W.; Zhang, M.; Liu, Y.; Ma, S. | 2023 | A survey on the fairness of recommender systems. | https://doi.org/10.1145/3547333 |
45. | Wolniak, R.; Stecuła, K. | 2024 | Artificial Intelligence in Smart Cities—Applications, Barriers, and Future Directions: A Review. | https://doi.org/10.3390/smartcities7030057 |
46. | Yang, F. | 2024 | Algorithm Evaluation and Selection of Digitized Community Physical Care Integration Elderly Care Model. | https://doi.org/10.38007/IJBDIT.2024.050109 |
47. | Zhang, Y.; Luo, L.; Wang, X. | 2024 | Aging with robots: A brief review on eldercare automation. | https://doi.org/10.1097/NR9.0000000000000052 |
References
- Cruz, I.F. Rethinking Artificial Intelligence: Algorithmic Bias and Ethical Issues: How Process Experts Enable and Constrain Fairness in AI-Driven Hiring. Int. J. Commun. 2023, 18, 21. [Google Scholar]
- Rosales, A.; Fernández-Ardèvol, M. Ageism in the era of digital platforms. Convergence 2020, 26, 1074–1087. [Google Scholar] [CrossRef]
- Stypinska, J. Ageism in AI: New forms of age discrimination in the era of algorithms and artificial intelligence. In Proceedings of the 1st International Conference on AI for People: Towards Sustainable AI, Bologna, Italy, 20–24 November 2021. [Google Scholar] [CrossRef]
- Rubio, Y.A.; Cortés, J.J.B.; del Rey, F.C. Is Artificial Intelligence ageist? Eur. Geriatr. Med. 2024, 15, 1957–1960. [Google Scholar] [CrossRef]
- Butler, R. Age-Ism: Another form of Bigotry. Gerontologist 1969, 9, 243–246. [Google Scholar] [CrossRef]
- World Health Organization (WHO). Global Report on Ageism; World Health Organization: Geneva, Switzerland, 2021; Available online: https://iris.who.int/handle/10665/340208 (accessed on 7 May 2025).
- Nielsen, A.; Woemmel, A. Invisible Inequities: Confronting Age-Based Discrimination in Machine Learning Research and Applications. In Proceedings of the 2nd Workshop on Generative AI and Law, Vienna, Austria, 27 July 2024. [Google Scholar]
- Chu, C.H.; Donato-Woodger, S.; Khan, S.S.; Nyrup, R.; Leslie, K.; Lyn, A.; Shi, T.; Bianchi, A.; Rahimi, S.A.; Grenier, A. Age-related bias and artificial intelligence: A scoping review. Hum. Soc. Sci. Commun. 2023, 10, 510. [Google Scholar] [CrossRef]
- Amundsen, D. “The Elderly”: A discriminatory term that is misunderstood. N. Z. Annu. Rev. Educ. 2020, 26, 5–10. [Google Scholar] [CrossRef]
- Van Kolfschooten, H. The AI cycle of health inequity and digital ageism: Mitigating biases through the EU regulatory framework on medical devices. J. Law Biosci. 2023, 10, lsad031. [Google Scholar] [CrossRef] [PubMed]
- Park, J.S.; Bernstein, M.S.; Brewer, R.N.; Kamar, E.; Morris, M.R. Understanding the Representation and Representativeness of Age in AI Data Sets. arXiv 2021, arXiv:2103.09058. [Google Scholar] [CrossRef]
- Barocas, S.; Hardt, M.; Narayanan, A. Fairness in Machine Learning: Lessons from Political Philosophy; Cambridge University Press: Cambridge, UK, 2019; Available online: https://www.fairmlbook.org (accessed on 7 May 2025).
- Chu, C.; Donato-Woodger, S.; Khan, S.S.; Shi, T.; Leslie, K.; Abbasgholizadeh-Rahimi, S.; Nyrup, R.; Grenier, A. Strategies to Mitigate Age-Related Bias in Machine Learning: Scoping Review. JMIR Aging 2024, 7, e53564. [Google Scholar] [CrossRef]
- Liu, M.; Ning, Y.; Teixayavong, S. A scoping review and evidence gap analysis of clinical AI fairness. npj Digit. Med. 2025, 8, 360. [Google Scholar] [CrossRef] [PubMed]
- Crawford, K.; Paglen, T. Excavating AI: The Politics of Images in Machine Learning Training Sets. AI Soc. 2021, 36, 1399. [Google Scholar] [CrossRef]
- Khamaj, A. AI-enhanced chatbot for improving healthcare usability and accessibility for older adults. Alex. Eng. J. 2025, 116, 202–213. [Google Scholar] [CrossRef]
- Nunan, D.; Di Domenico, M. Older consumers, digital marketing, and public policy: A review and research agenda. J. Public Policy Mark. 2019, 38, 469–483. [Google Scholar] [CrossRef]
- Grossmann, I. Wisdom in context. Perspect. Psychol. Sci. 2017, 12, 233–257. [Google Scholar] [CrossRef] [PubMed]
- Cooper, H.; Patali, E.A.; Lindsay, J. Research Synthesis and Meta-Analysis: A Step-by-Step Approach; Sage Publications: Thousand Oaks, CA, USA, 2009. [Google Scholar] [CrossRef]
- Amundsen, D. Using Digital Content Analysis for Online Research: Online News Media Depictions of Older Adults. In Sage Research Methods: Doing Research Online Sample Case Study; Sage Publications: Thousand Oaks, CA, USA, 2022. [Google Scholar] [CrossRef]
- Kitchenham, B.; Charters, S. Guidelines for Performing Systematic Literature Reviews in Software Engineering. Technical Report EBSE 2007-001, Keele University and Durham University Joint Report. 2007. Available online: https://www.elsevier.com/__data/promis_misc/525444systematicreviewsguide.pdf (accessed on 19 May 2025).
- Petticrew, M.; Roberts, H. Systematic Reviews in the Social Sciences: A Practical Guide; John Wiley & Sons: New York, NY, USA, 2008. [Google Scholar]
- Boell, S.K.; Cecez-Kecmanovic, D. A Hermeneutic Approach for Conducting Literature Reviews and Literature Searches. Commun. Assoc. Inf. Syst. 2014, 34, 257–286. [Google Scholar] [CrossRef]
- Landis, J.; Koch, G. The measurement of observer agreement for categorical data. Biometrics 1977, 33, 159–174. [Google Scholar] [CrossRef]
- Purdue-M2. AI-Face Fairness Bench Dataset. GitHub. 2025. Available online: https://github.com/Purdue-M2/AI-Face-FairnessBench (accessed on 9 April 2025).
- Lin, L.; Santosh, S.; Wu, M.; Wang, X.; Hu, S. AI-Face: A million scale demographically annotated AI generated face dataset and fairness benchmark. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), Online, 13 June 2025. [Google Scholar]
- Meta AI. Casual Conversations Dataset. 2025. Available online: https://ai.meta.com/datasets/casual-conversations-v2-dataset/ (accessed on 8 April 2025).
- Nangia, N.; Vania, C.; Bhalerao, R.; Bowman, S. CrowS-Pairs Dataset, NYU Machine Learning for Language Lab. 2025. Available online: https://github.com/nyu-mll/crows-pairs (accessed on 9 April 2025).
- Nangia, N.; Vania, C.; Bhalerao, R.; Bowman, S. CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), Online, 16–20 November 2020; EMNLP. Association for Computational Linguistics: Stroudsburg, PA, USA, 2020; pp. 1953–1967. [Google Scholar] [CrossRef]
- Feldman, M.; Friedler, S.A.; Moeller, J.; Scheidegger, C.; Venkatasubramanian, S. Certifying and removing disparate impact. In Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, Australia, 10–13 August 2015; pp. 259–268. [Google Scholar] [CrossRef]
- Hardt, M.; Price, E.; Srebro, N. Equality of opportunity in supervised learning. Adv. Neural Inf. Process. Syst. 2016, 29, 3315–3323. [Google Scholar] [CrossRef]
- Calasanti, T.; Slevin, K. Age Matters: Realigning Feminist Thinking; Routledge: Oxford, UK, 2001. [Google Scholar]
- Baltes, P.B.; Smith, J. New Frontiers in the Future of Ageing: From Successful Ageing to Cognitive Ageing. Annu. Rev. Psychol. 2003, 55, 197–225. [Google Scholar] [CrossRef]
- Havighurst, R.J.; Albrecht, R. Successful Ageing. Gerontologist 1953, 13, 37–42. [Google Scholar]
- Rawls, J. A Theory of Justice; Harvard University Press: Cambridge, MA, USA, 1971. [Google Scholar]
- Porgali, B.; Albiero, V.; Ryda, J.; Ferrer, C.; Hazirbas, C. The Casual Conversations v2 Dataset. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023. [Google Scholar]
- Fraser, K.C.; Kiritchenko, S.; Nejadgholi, I. Extracting age-related stereotypes from social media texts. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, Marseille, France, 20–25 June 2022. [Google Scholar]
- Amundsen, D.A. A critical gerontological framing analysis of persistent ageism in NZ online news media: Don’t call us “elderly”! J. Aging Stud. 2022, 61, 101009. [Google Scholar] [CrossRef]
- Berridge, C.; Grigorovich, A. Algorithmic harms and digital ageism in the use of surveillance technologies in nursing homes. Front. Sociol. 2022, 7, 957246. [Google Scholar] [CrossRef] [PubMed]
- Rubeis, G.; Fang, M.L.; Sixsmith, A. Equity in AgeTech for Ageing Well in Technology-Driven Places: The Role of Social Determinants in Designing AI-based Assistive Technologies. Sci. Eng. Ethics 2022, 28, 6. [Google Scholar] [CrossRef]
- EU AI Act. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32024R1689 (accessed on 10 September 2025).
- Charter of Fundamental Rights of the European Union. 2012/C 326/02. Available online: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:12012P/TXT (accessed on 10 September 2025).
- US Age Discrimination in Employment Act 1967 (ADEA), 29 U.S.C. § 621. Available online: https://www.eeoc.gov/statutes/age-discrimination-employment-act-1967 (accessed on 10 September 2025).
- IEEE Global Initiative 2.0 on Ethics of Autonomous and Intelligent Systems. Available online: https://standards.ieee.org/industry-connections/activities/ieee-global-initiative/ (accessed on 9 April 2025).
Data Source | Data Analysis Tools | Ethical Assessment Frameworks | Purpose/Outcome |
---|---|---|---|
1. Literature Review Analysis | - Thematic coding (NVivo) - Text mining for frequency of age-related terms | - AI Now: Algorithmic Impact Assessments (AIA) - IEEE: Ethically Aligned Design (EAD) principles for inclusivity | Identify: - Common themes - Terminology gaps - Overlooked concerns related to older adults in AI literature |
2. Dataset Assessments | - Fairness metrics (e.g., disparate impact, equal opportunity) - Data bias detection (Fairlearn; Google’s What-If Tool) | - AI Now: Dataset audits for demographic balance - IEEE: Dataset governance standards (e.g., transparency, traceability) | - Detect underrepresentation or misrepresentation of older adults - Risk-benefit mapping and context-aware evaluation - Measure fairness quantitatively |
3. Interdisciplinary Integration | - Mixed-method triangulation - Statistical + qualitative synthesis - Sociotechnical system mapping | - AI Now: Structural bias and power analysis - IEEE: Human-centric design values | Ensure ageism is analysed from multiple lenses (social, technical, ethical) - Bridge disciplinary blind spots |
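Row 2 names disparate impact and equal opportunity as fairness metrics. Both reduce to comparisons of group-level rates; toolkits such as Fairlearn compute them, but the arithmetic is simple enough to sketch in plain Python (the group counts below are illustrative, not the study's data):

```python
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of selection rates: protected group (a) over reference group (b).
    Values below 0.8 fail the common 'four-fifths' adverse-impact rule."""
    return (selected_a / total_a) / (selected_b / total_b)

def equal_opportunity_difference(tp_a, pos_a, tp_b, pos_b):
    """Difference in true-positive rates between the two groups."""
    return tp_a / pos_a - tp_b / pos_b

# Toy numbers: 40 of 100 older applicants selected vs. 60 of 100 younger.
dir_ = disparate_impact_ratio(40, 100, 60, 100)
print(round(dir_, 3))  # 0.667 -> below 0.8, flags adverse impact

# Toy true-positive counts among each group's actual positives.
eod = equal_opportunity_difference(30, 50, 45, 50)
print(round(eod, 3))  # -0.3 -> the model finds fewer true positives for group a
```

The same two statistics underpin the dataset results reported in Section 3.2 (e.g., the DIR of 0.71 for AI-Face).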
Databases Searched: | Google Scholar IEEE Xplore JSTOR PubMed Scopus |
Search Strings Examples: | (“ageism” OR “age-related discrimination”) AND (“artificial intelligence” OR “AI”) (“algorithmic bias” OR “algorithmic fairness”) AND (“older adults” OR “ageing” OR “aging”) (“inclusive design”) AND (“AI” OR “machine learning”) AND (“older users”) |
Inclusion Criteria: | Peer-reviewed journal articles or conference papers Publications in English Published between 2015 and 2025 Focus explicitly on ageism, age-related bias, or inclusion of older adults in AI contexts Full-text availability |
Exclusion Criteria: | Theses, dissertations, non-peer-reviewed proceedings, editorials, book chapters, opinion pieces, and letters Studies published before 2015 Articles not addressing age-related bias or older adults in relation to AI |
Screening Procedure: | Initial retrieval: 7595 records Removal of duplicates: 1350 Removal of ineligible: 157 Removal for other reasons: 143 Title/abstract screening: 5945 |
Exclusion Criteria | E1: Focus on unrelated aspects of AI, e.g., technical algorithms or applications without societal implications: 1823 E2: Focus on AI in domains of robotics, gaming, or engineering applications, but not human ageing or fairness concerns: 1549 E3: Focus on social bias in AI, but only gender, race, or disability included, without age as a variable of analysis: 1385 Total excluded: 4757 Reports sought for review: 1188 E4: Reports not retrieved: 17 Assessed for eligibility: 1171 E5: Study provided insufficient evidence of age-related bias in AI systems: 496 E6: Study focused mainly on theoretical/conceptual discussions: 302 E7: Study lacked empirical data: 254 Total excluded: 1052 Assessed with additional criteria: 119 |
Additional criteria applied: | Must include diverse perspectives and sectors, such as healthcare, employment, and social services E8: 72 |
Studies for Thematic Analysis: | Included for final review: 47 |
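The screening flow above is plain arithmetic and can be audited mechanically: each stage should equal the previous stage minus the removals applied at that step. A short consistency check over the counts reported in the tables:

```python
# PRISMA-style screening flow from the Methods tables; each stage
# must equal the previous stage minus the removals at that step.
initial = 7595
after_dedup = initial - 1350 - 157 - 143            # duplicates, ineligible, other
assert after_dedup == 5945                          # title/abstract screening pool

excluded_e1_e3 = 1823 + 1549 + 1385                 # E1 + E2 + E3
sought = after_dedup - excluded_e1_e3
assert excluded_e1_e3 == 4757 and sought == 1188    # reports sought for review

eligible = sought - 17                              # E4: reports not retrieved
excluded_e5_e7 = 496 + 302 + 254                    # E5 + E6 + E7
assert eligible - excluded_e5_e7 == 119             # assessed with extra criteria
assert 119 - 72 == 47                               # E8 leaves 47 final studies
print("screening flow consistent")
```

All stages reconcile: 7595 retrieved records reduce to the 47 studies retained for thematic analysis.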
Discipline | Theory | Purpose | Literature Review | Datasets | Key Insight |
---|---|---|---|---|---|
1. Computer Science | Barocas, Hardt & Narayanan’s Algorithmic Fairness Theory [12] | Evaluate fairness in AI design, inputs, and outcomes | Assess definitions and metrics of fairness in AI systems for older adults | Analyse representativeness of older adults in training data; identify disparate impacts | Highlights technical/systemic biases; guide fair AI design |
2. Sociology | Calasanti & Slevin’s Social Stratification and Ageism Theory [32] | Examine structural ageism and its intersections with race, gender, and class | Identify how AI systems reflect or reinforce social age-based marginalization | Check for representational imbalance and intersectional gaps | Adds sociological depth; uncovers hidden forms of exclusion |
3. Human Development | Human Development/Lifespan Developmental Perspectives [33] | Consider cognitive, emotional, and social changes across the lifespan | Explore how cognitive diversity and needs of older adults are addressed | Assess if cognitive variation is represented in the data | Fosters AI systems to be developmentally appropriate and user-centric |
4. Gerontology | Disengagement Theory & Active Ageing Frameworks [34] | Contrast passive ageing with active, engaged ageing | Analyse framing of older adults as passive vs. empowered tech users | Check if data focus on deficits or strengths of ageing populations | Promotes age-positive design and values-based inclusion |
5. Ethics | Rawls’ Theory of Justice [35] | Provide an ethical framework for assessing fairness and justice | Evaluate justice-oriented discussions, especially equity for older adults | Determine if data use respects dignity, privacy, and inclusion | Sets moral benchmarks; emphasizes justice for vulnerable groups |
Algorithmic Bias | Healthcare & Elder Care | Ageism in Employment & Social Platforms | Smart Technologies and Ageing | Inclusive Design, Policy & Ethics |
---|---|---|---|---|
Facial Analysis | AI in Nursing Homes | Job hiring; AI Hiring fairness | Smart Mobility | Ageism in AI systems |
Predictive Policing | Robotics in Eldercare | Ageism on digital platforms | Self-driving Cars | Ageism roadmap and mitigation strategies |
Virtual Healthcarers | Long-term Care | Representation and stereotypes in social media | Smart Voice Assistants | Age-inclusive design |
AI in Nursing Homes | Community Care Models | Speech AI perception | Smart Homes for Elder Care | Older consumers |
Elder Care Robots | Health Equity; Regulation | | Smart Cities (and Ageing) | Ethical and regulatory strategies |
Chatbots | Digital Healthcare Empowerment | | Digital Technology (for Ageing) | |
Healthcare | Older Adults’ Perceptions of chatbots | | | |
Dataset | Older Adults Representation | Fairness Metric Results | Interpretation |
---|---|---|---|
AI-Face (Face recognition) | <10% (low representation) | DIR = 0.71 [95% CI: 0.68–0.74] (adverse impact); Error disparity > 7% compared to Adults (25–64) | Strong underrepresentation of older faces and higher misidentification rates for older adults. Confirms substantial disparate impact, consistent with the dataset creators’ own findings |
Casual Conversations V2 (Video) | <10% across most countries (low representation; dominated by Youth and Adult groups) | Error disparity = 3.4% [95% CI: 2.8–3.9] between Older Adults (65+) and Adults (25–64) | Geographically diverse but still age-skewed. Older Adults (65+) are underrepresented, reducing reliability for later-life applications; valuable for fairness testing but requires stronger sampling of older cohorts |
CrowS-Pairs (NLP stereotypes) | ~15% (moderate representation) | Stereotypical sentences selected 72% of the time for Older Adults (65+) vs. 49% for Adults (25–64); Cohen’s d = 0.61 (moderate effect) | Highlights persistent linguistic age bias, with models systematically preferring ageist stereotypes despite more balanced representation |
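The Cohen's d reported for CrowS-Pairs is a standardized mean difference between the two age groups' stereotype-preference scores. A self-contained sketch of the computation on made-up per-record scores (the study's raw scores are not reproduced here):

```python
from math import sqrt

def cohens_d(sample_a, sample_b):
    """Effect size: difference of means over the pooled standard deviation."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    pooled_sd = sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# Made-up per-record indicators (1 = stereotypical sentence preferred).
older = [1, 1, 1, 0, 1, 1, 0, 1, 1, 0]   # 70% preference rate
adult = [0, 1, 0, 1, 0, 1, 0, 0, 1, 0]   # 40% preference rate
print(round(cohens_d(older, adult), 2))   # 0.6: a moderate effect
```

Conventionally, d ≈ 0.2 is a small effect, 0.5 moderate, and 0.8 large, which is why the observed d = 0.61 is reported as moderate.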
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the author. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Amundsen, D. Breaking Bias: Addressing Ageism in Artificial Intelligence. J. Ageing Longev. 2025, 5, 36. https://doi.org/10.3390/jal5030036