Safety of Human–Artificial Intelligence Systems: Applying Safety Science to Analyze Loopholes in Interactions between Human Organizations, Artificial Intelligence, and Individual People
Abstract
1. Introduction
2. Sources of Loopholes between Human Organizations and Individual People
2.1. Emerging Loopholes from Changing Culture-Bound Social Choices
2.2. Exploiting Loopholes to Conform with Group Affiliations
2.3. Loopholes Amplified by Technologies
2.4. Loopholes in HAI Systems
3. Sources of Loopholes from AI
3.1. Deterministic AI
3.2. Probabilistic AI
3.3. Hybrid AI
4. Applying Safety Science to HAI Systems
4.1. The Swiss Cheese Model
4.2. Applying SCM to HAI System for Delivery Driving
4.2.1. Human Organizations
4.2.2. AI Implementation
4.2.3. Individual People
4.2.4. HAI System
5. Conclusions
5.1. Principal Contributions
5.2. Practical Implications
5.3. Directions for Future Research
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Fox, S.; Victores, J.G. Safety of Human–Artificial Intelligence Systems: Applying Safety Science to Analyze Loopholes in Interactions between Human Organizations, Artificial Intelligence, and Individual People. Informatics 2024, 11, 36. https://doi.org/10.3390/informatics11020036