Introducing SAFE-AI: A Behavioral Framework for Managing Ethical Dilemmas in AI-Driven Human Resource Practices
Abstract
“AI [Artificial Intelligence] will be the most transformative technology of the 21st century. It will affect every industry and aspect of our lives.” (Jensen Huang, CEO of NVIDIA)
1. Introduction
1.1. A Behavior-First Approach for Embedding AI-Ethics
1.2. Research Gap and Contributions
2. AI in HR Practice and the Ethics-to-Implementation Problem
2.1. AI-Ethics Failure Modes in Practice: Recruitment Bias and Data Exposure
2.1.1. Case #1: Amazon.com Inc. AI Recruiting Tool and Gender Bias
2.1.2. Case #2: Microsoft AI Research Team Exposes Sensitive Data
2.2. Bridging AI-Ethics with HR Practices
3. Conceptual Development
3.1. Integrating Ethical Philosophies in AI-Enhanced Organizational Practices
3.2. Linking Ethical Philosophies with Social Information Processing Theory and Heuristics
3.3. A Brief Argument for Heuristics
4. Conceptual Framework
4.1. Developing a Behavioral Framework for AI-Ethics
4.2. Enterprise Application of SAFE-AI and Boundary Conditions
4.3. Integrating AI Ethics with HR Practices
4.4. AI-Ethics in HR Governance
4.5. Ethical Considerations in AI-Enabled Recruitment and Selection
4.6. Ethical Considerations in Employee Evaluation and Performance Management
4.7. Ethical Integration into Organizational Decision-Making
5. Discussion
5.1. Interpreting the Contribution
5.1.1. What SAFE-AI Adds Beyond Risk Catalogs
5.1.2. SAFE-AI’s Core Mechanism
5.2. Leadership as an Enabling Condition and Moderator of SAFE-AI
5.3. Stage-Based Enactment in HR Practice
5.3.1. Moving In (Initiation): Ethics as Design Constraints
Decision rule (Heuristic #1): Do not progress from intent to deployment unless duties and outcomes can be jointly justified for affected stakeholders, and non-negotiable constraints (fairness, privacy, duty of care) are explicitly specified as adoption requirements.
5.3.2. Moving Through (Navigation): Interpretation Management and Feedback
Decision rule (Heuristic #2): Treat feedback and cue consistency as governance inputs: if stakeholder signals indicate drift, perceived injustice, or unanticipated harm, adapt the workflow, communication, and controls before scaling or normalizing use.
5.3.3. Moving Out (Culmination): Institutionalization and Learning Loops
Decision rule (Heuristic #3): Institutionalize ethics through enforceable accountability and learning loops; if standards cannot be monitored, audited, and remediated with visible consequences, ethical AI cannot be sustained.
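Taken together, the three decision rules form a stage-gated progression check. The sketch below is a minimal, illustrative encoding of that logic in Python; the stage names follow the framework, but the signal fields and their granularity are assumptions introduced here for illustration, not part of SAFE-AI itself.

```python
from dataclasses import dataclass

@dataclass
class GateSignals:
    """Illustrative inputs a review board might record; field names are assumptions."""
    duties_and_outcomes_justified: bool  # Heuristic #1: joint justification for stakeholders
    constraints_specified: bool          # Heuristic #1: fairness, privacy, duty of care stated
    stakeholder_drift_detected: bool     # Heuristic #2: signals of drift or perceived injustice
    controls_adapted: bool               # Heuristic #2: workflow/communication/controls revised
    standards_auditable: bool            # Heuristic #3: standards monitorable and auditable
    remediation_enforced: bool           # Heuristic #3: visible consequences and remediation

def may_progress(stage: str, s: GateSignals) -> bool:
    """Return True only if the SAFE-AI decision rule for the given stage is satisfied."""
    if stage == "moving_in":       # Heuristic #1: no deployment without joint justification
        return s.duties_and_outcomes_justified and s.constraints_specified
    if stage == "moving_through":  # Heuristic #2: adapt before scaling when drift is signaled
        return (not s.stakeholder_drift_detected) or s.controls_adapted
    if stage == "moving_out":      # Heuristic #3: institutionalize only with enforceable loops
        return s.standards_auditable and s.remediation_enforced
    raise ValueError(f"unknown stage: {stage}")
```

The point of the sketch is that each heuristic is a hard precondition on progression, not a score to be traded off: a system that fails its stage's rule does not advance, regardless of performance elsewhere.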
5.4. Enterprise Integration
5.4.1. HR Practice as a High-Leverage Node in an Enterprise Socio-Technical Chain
5.4.2. Strategy-to-Execution Linkage
- Corporate level (red lines and risk posture): define non-negotiable constraints for AI use in people-related decisions (e.g., nondiscrimination, privacy boundaries, accountable human authority) and specify tradeoff tolerances between efficiency gains and ethical exposure, recognizing that AI-enabled HR decisions shape reputation, regulatory scrutiny, and employee trust.
- Program level (gating and portfolio decisions): treat AI-enabled HR initiatives as managed programs with clear owners, budgets, and milestones, embedding SAFE-AI requirements into stage gates (approval to pilot, approval to scale, approval to institutionalize) so systems that cannot meet minimum ethical performance thresholds do not progress.
- Operating model level (rhythms, metrics, incentives): institutionalize SAFE-AI through recurring management cadences (planning, risk review, audit reporting), measurable indicators (e.g., validation completeness, adverse impact monitoring, incident rates, remediation lead time), and incentives that reward documentation quality, transparency, and corrective action rather than speed-only deployment.
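The program-level bullet describes stage gates with minimum ethical performance thresholds; the sketch below shows one way such gates could be encoded. The gate names mirror the bullets above, but the metric names, operators, and threshold values are illustrative assumptions (e.g., the 0.80 adverse impact floor echoes the common four-fifths rule of thumb), not values prescribed by SAFE-AI.

```python
# Minimal sketch of embedding SAFE-AI requirements into stage gates.
# Metric names and thresholds are illustrative assumptions, not prescribed values.
GATES = {
    "approval_to_pilot": [
        ("validation_completeness", ">=", 0.90),
    ],
    "approval_to_scale": [
        ("validation_completeness", ">=", 1.00),
        ("adverse_impact_ratio",    ">=", 0.80),  # four-fifths rule as an example floor
        ("open_incidents",          "<=", 0),
    ],
    "approval_to_institutionalize": [
        ("remediation_lead_time_days", "<=", 30),
        ("audit_coverage",             ">=", 0.95),
    ],
}

OPS = {">=": lambda a, b: a >= b, "<=": lambda a, b: a <= b}

def gate_decision(gate: str, metrics: dict) -> tuple[bool, list[str]]:
    """Return (passed, failures): a system missing any minimum does not progress."""
    failures = [
        f"{name} {op} {limit} (got {metrics[name]})"
        for name, op, limit in GATES[gate]
        if not OPS[op](metrics[name], limit)
    ]
    return (not failures, failures)
```

Returning the list of failed thresholds, rather than a bare pass/fail, supports the operating-model bullet: the same output feeds audit reporting and remediation tracking rather than only a go/no-go decision.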
5.4.3. Integrating SAFE-AI into Organizational Strategy
6. Implications for Practice and Research
6.1. Implications for Practice
6.2. Implications for Research
Future Research Directions
7. Limitations
8. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References

| Topic Area | What Prior Studies Converge On | SAFE-AI Alignment | SAFE-AI Divergence and Incremental Contribution |
|---|---|---|---|
| Algorithmic discrimination and fairness | AI-enabled HR decisions can reproduce or scale bias; fairness and adverse impact are central risks | Retains discrimination risk as a baseline hazard that must be monitored across the HR lifecycle | Adds a behavioral translation pathway: fairness becomes durable only when translated into repeatable routines and cues employees recognize as legitimate, not only technical mitigation. |
| Opacity, explainability, and intelligibility | Opacity undermines accountability and trust; explainability is often proposed as mitigation | Treats intelligibility as a core adoption requirement and maps it to stage-specific heuristics | Reframes transparency as a social process: explanations must function as sensegiving cues that remain consistent over time, not merely documentation artifacts. |
| Privacy, surveillance, and autonomy | People analytics and AI-enabled monitoring introduce privacy and autonomy risks | Aligns with privacy as a foundational ethical constraint and governance requirement | Extends the literature by embedding privacy into adoption stages (moving in/through/out), specifying where privacy drift occurs operationally and how accountable routines counter it. |
| Accountability and diffuse responsibility | AI systems can diffuse responsibility across vendors, HR, managers, and IT; accountability is often unclear | Keeps accountability as a central ethical requirement | Makes accountability implementable: clarifies decision authority, assigns owners across the socio-technical chain, and emphasizes cue consistency so employees can infer “who owns the decision” in practice. |
| Institutionalization and governance maturity | Responsible AI requires ongoing governance, monitoring, and adjustment, not one-time compliance | Aligns with a lifecycle view through staged implementation and feedback loops | Adds boundary conditions: durable implementation requires organizational capability (documentation discipline, cross-functional ownership, measurement maturity), distinguishing symbolic compliance from institutionalized practice. |
| Employee interpretation, legitimacy, and voice | Emerging work recognizes worker acceptance, perceived fairness, and legitimacy as adoption constraints | Centers interpretation and legitimacy as causal mechanisms | Differentiates by grounding the model in social information processing: ethical AI “works” when employees repeatedly observe credible cues (leader action, safe voice, corrective response), making ethics an organizational accomplishment rather than a policy claim. |

| Stage | Objective | Heuristic | Action | Implementation | Example |
|---|---|---|---|---|---|
| Stage 1: Moving In (Initiation) | Establish a foundation for ethical AI adoption by considering stakeholder interests, ethical principles, and potential biases. | Ethical Philosophies Guiding HR Practices | Apply ethical philosophies such as consequentialism and deontology to balance duties and outcomes pre-AI adoption. | Assess organizational readiness. Evaluate potential employee impacts. Establish ethical guidelines from the outset. | Any AI adoption plan must comply with fundamental ethical principles such as fairness and non-discrimination to be considered. |
| Stage 2: Moving Through (Navigation) | Navigate the AI adoption and execution process by continuously integrating social feedback and adapting strategies to uphold ethical standards. | Leveraging Social Processing Feedback | Adopt social information processing theory to understand and respond to employee perceptions of the organization’s ethical commitment. | Maintain transparency in decision-making. Gather ongoing feedback from employees and stakeholders. Adapt AI strategies to align with organizational values and ethics. | Assign weights to different decision dimensions (e.g., ethical compliance, operational efficiency) and calculate the total value to select the best alternative. |
| Stage 3: Moving Out (Culmination) | Ensure that ethical principles remain embedded in the organizational culture post-AI adoption and foster continuous ethical engagement. | Embedded Accountability for Ethical Principles | Develop mechanisms to uphold and monitor ethical standards within the organization. | Implement regular ethics training. Perform audits. Appoint ethics officers or committees for continuous ethical oversight. | Regularly review and adjust AI systems based on feedback from employees and stakeholders to ensure ongoing alignment with ethical principles. |
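The Stage 2 example above (assigning weights to decision dimensions and selecting the alternative with the highest total value) can be sketched as a simple weighted-sum calculation. The dimensions, weights, scores, and alternative names below are hypothetical illustrations, not values prescribed by the SAFE-AI framework:

```python
# Minimal sketch of the Stage 2 weighted-decision heuristic.
# Weights and dimensions are illustrative assumptions; an organization
# would set its own, e.g., weighting ethical compliance most heavily.
WEIGHTS = {"ethical_compliance": 0.5, "operational_efficiency": 0.3, "cost": 0.2}

def total_value(scores: dict) -> float:
    """Weighted sum across decision dimensions (scores on a common 0-10 scale)."""
    return sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)

# Two hypothetical AI adoption alternatives scored on each dimension.
alternatives = {
    "vendor_tool":    {"ethical_compliance": 6, "operational_efficiency": 9, "cost": 7},
    "in_house_model": {"ethical_compliance": 9, "operational_efficiency": 6, "cost": 5},
}

# Select the alternative with the highest total weighted value.
best = max(alternatives, key=lambda name: total_value(alternatives[name]))
```

Here the in-house model wins (7.3 vs. 7.1) because ethical compliance carries the largest weight, illustrating how the heuristic lets ethical considerations dominate efficiency gains when the organization weights them that way.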
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Carpenter, R.E.; Huyler, D.; Patole, S.R.; McWhorter, R. Introducing SAFE-AI: A Behavioral Framework for Managing Ethical Dilemmas in AI-Driven Human Resource Practices. Adm. Sci. 2026, 16, 85. https://doi.org/10.3390/admsci16020085

