Achieving Ethical Algorithmic Behaviour in the Internet of Things: A Review
Abstract
1. Introduction
- Internet or network connectivity (e.g., WiFi or Bluetooth enabled).
- Sensors (e.g., the sensors in a smartphone, but also sensors embedded in a fork to detect its movement and how fast people eat [1,2] (see https://www.hapilabs.com/product/hapifork, accessed on 1 July 2021)).
- Computational ability (e.g., with embedded AI [3] and cooperation protocols).
- Actuators, or the ability to affect the physical world, including taking action autonomously.
- Greater cooperation among IoT devices can now happen; devices that were previously not connected can now not only communicate (provided that the time and resource constraints allow), but also carry out cooperative behaviours. In fact, the work in [6] envisions universal machine-to-machine collaboration across manufacturers and industries by the year 2025, though this can be restricted due to proprietary data. Having a cooperation layer above the networking layer is an important development; the social IoT has been widely discussed [7,8,9,10];
- Network effects emerge. The value of a network depends on its size; the larger the network, the greater the value of joining or connecting to it, so device manufacturers may tend to favour cooperative IoT (e.g., see the Economics of the Internet of Things (https://www.technologyreview.com/s/527361/the-economics-of-the-internet-of-things/, accessed on 1 July 2021)). A device that can cooperate with many other devices could have greater value than one that cooperates with only a few—such cooperation among devices can be triggered by users directly or indirectly (if decided by a device autonomously), with a consequent impact on communication latency and delay. A simple way to quantify this intuition is sketched after this list.
- Devices that are connected to the internet are controllable via the internet, which means they are also vulnerable to (remote) hacking, in the same way that a computer being on the internet can be hacked.
- Sensors on such IoT devices gather significant amounts of data and, the devices being internet-enabled, such data are typically uploaded to a server (often a Cloud computing server); such data can raise concerns for privacy-conscious users (e.g., data from an internet-connected light bulb could indicate when someone is or is not home). A topic often linked to the IoT is data analytics, due to the need to process and analyse data from such sensing devices. There are also ethical and privacy issues in placing sensors in different areas (e.g., certain public areas), and there could be cultural sensitivities about where sensors are placed.
- IoT devices may be deployed over a long time (e.g., embedded in a building or as part of urban street lighting) so they need to be upgraded (or their software upgraded) over the internet as improvements are made, errors are found and fixed, and as security vulnerabilities are discovered and patched.
- Non-tech-savvy users might find working with internet-connected devices challenging (e.g., setup and maintenance), might be unaware of the security or privacy implications of such devices, and might feel a loss of control.
- Computation on such devices suggests that greater autonomy and more complex decision-making are possible (and devices with spare capacity can also be used to supplement other devices); in fact, autonomous behaviour in smart things is not new. Smart things detecting sensor-based context and responding autonomously (using approaches ranging from simple Event-Condition-Action rules to sophisticated reasoning in agent-based approaches) have been explored extensively in context-aware computing [11,12]; a minimal illustration of such a rule is sketched below.
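One common heuristic for quantifying the network-effect intuition above (an assumption added here for illustration, not a claim from the cited economics article) is a Metcalfe-style relation, in which the value of a network of n cooperating devices grows with the number of possible pairwise connections:

```latex
% Metcalfe-style heuristic (illustrative assumption): network value grows with
% the number of possible pairwise connections among n cooperating devices.
V(n) \propto \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2} \quad \text{for large } n.
```

Under this assumption, adding one more cooperative device to a network of n devices creates roughly n new potential pairings, which is consistent with manufacturers favouring devices that can cooperate widely.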
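To make the Event-Condition-Action idea concrete, the following is a minimal Python sketch of an ECA rule engine for a smart thing. The device, sensor names, and the example rule are hypothetical illustrations, not the API of any particular IoT platform.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

Context = Dict[str, object]  # latest sensor readings, e.g. {"motion": True, "hour": 21}

@dataclass
class Rule:
    """A single Event-Condition-Action rule."""
    event: str                             # name of the triggering event
    condition: Callable[[Context], bool]   # guard evaluated against the current context
    action: Callable[[Context], None]      # effect on the (simulated) actuators

class EcaEngine:
    def __init__(self) -> None:
        self.rules: List[Rule] = []

    def add_rule(self, rule: Rule) -> None:
        self.rules.append(rule)

    def handle_event(self, event: str, context: Context) -> None:
        """Fire every rule whose event matches and whose condition holds."""
        for rule in self.rules:
            if rule.event == event and rule.condition(context):
                rule.action(context)

# Hypothetical smart-home example: switch a light on when motion is sensed in the evening.
engine = EcaEngine()
engine.add_rule(Rule(
    event="motion_detected",
    condition=lambda ctx: ctx.get("hour", 0) >= 19,            # only after 7 pm
    action=lambda ctx: print("actuator: living-room light ON"),
))
engine.handle_event("motion_detected", {"hour": 21})
```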
1.1. Scope and Context
1.2. Organization
2. Ethical Concerns and Issues
2.1. Unsecured Consumer IoT Devices
2.2. Ethical Issues with Health Related IoT
- Personal privacy: This relates not just to the privacy of collected data, but to the notion that a person has the freedom not to be observed or to have his/her own personal space. Monitoring the inhabitants of smart spaces (e.g., a smart home or public spaces such as aged care facilities) raises the concern of continual observation of people, even if it is for their own good—being able to monitor individuals or groups can be substantially beneficial but presents issues of privacy and access.
- Informational privacy: This relates to one’s ability to control one’s own health data. It is not uncommon for organizations to ask consumers for private data with the promise that the data will not be misused—in fact, privacy laws can prohibit the use of data beyond its intended context. The issues are myriad (e.g., see [42]), including how one can access data that were collected by an IoT device but are now possibly owned by the company, how much an insurance company can demand of user health data (https://www.iothub.com.au/news/intel-brings-iot-to-health-insurance-411714, accessed on 1 July 2021), how one can share data in a controlled manner, how one can prove the veracity of personal health data, and how users can understand the privacy-utility trade-offs when using an IoT device.
- Risk of non-professional care: The notion of health self-monitoring and self-care facilitated by health IoT devices can create false optimism, reducing a patient’s condition to a narrow range of device-measurable parameters. Confidence in non-professional carers armed with IoT devices might be misplaced.
2.3. Hackable Vehicles and the Moral Dilemma for Autonomous Vehicles (AVs)
2.4. Roboethics
2.4.1. Robot Rights
2.4.2. Robotic Surgery
2.4.3. Social and Assistive Robots and Smart Things
- Social robots or IoT devices may be able to form bonds with humans, e.g., an elderly person or a child. A range of questions arises, such as whether such robots should be providing emotional support in place of humans, if they can be designed to do so. Another question regards the psychological and physical risk of humans forming such bonds with such devices or robots—when a user is emotionally attached to a thing, a concern is what would happen if the thing is damaged or no longer supported by the manufacturer, or if such things can be hacked to deceive the user. This question can be considered for smart things that have learnt and adapted to the person’s behaviour and are not easily replaced.
- Such social robots or IoT devices can be designed to have authority to provide reminders, therapy or rehabilitation to users. Ethical issues can arise when harm or injury is caused by interaction with such robots; for example, a malfunctioning robot might issue a medication reminder at the wrong time, with potentially fatal consequences. A similar concern carries over to a smart pill bottle (an IoT device) intended to track when a person has and has not taken medication, with an associated reminder system. There is also the question of harm being caused inadvertently, e.g., when an elderly person trips over a robot that approached too suddenly, or when a robot makes decisions on behalf of its owner without the owner’s full consent or before the owner can intervene.
“The principles of beneficence and non-maleficence state that caregivers should act in the best interests of the patient and should do nothing rather than take any action that may harm a patient.”
2.4.4. Robots in War
2.5. Algorithmic Bias and IoT
2.5.1. Racist Algorithms
2.5.2. Other Algorithmic Bias
2.6. Issues with Cooperative IoT
2.7. User Choice and Freedom
2.8. Summary and Discussion
- Communicate securely.
- Manage data in a way that is privacy-respecting.
- Act in a way that is sensitive to and aware of personal spaces and privacy concerns of users.
- Be able to make effective, justifiable moral decisions based on an appropriate ethical framework and guidelines.
- Act (if it is able to do so) in a way that is ethical when performing important functions that affect people (e.g., robots in health and social applications).
- Be deployed in an appropriate setting and environment in relation to its capabilities and function.
- Be free of algorithmic bias as far as this is relevant and feasible.
- Cooperate with other devices appropriately.
- Behave in ways that preserve user choice and freedom, and which invite and maintain user trust.
- Maintain transparency of operations as appropriate to different stakeholders.
3. Towards a Multi-Pronged Approach
3.1. Programming Ethical Behaviour
3.1.1. Rule-Based Ethics
3.1.2. Game-Theoretic Calculation of Ethics
3.1.3. Ethics Settings
3.1.4. Ethical by Design
3.2. Enveloping IoT Systems
3.3. White-Box Algorithms
3.3.1. Transparency
3.3.2. Detecting Algorithmic Bias
3.4. Black-Box Validation of Algorithmic Behaviour
“the study of machine behaviour will often require experimental intervention to study human–machine interactions in real-world settings”.
3.5. Algorithmic Social Contracts
3.6. Code of Ethics and Guidelines for IoT Developers
3.7. Summary and Discussion
- A multi-pronged approach: Table 1 summarises the above discussion, detailing the ideas and their main methods together with their key advantages and technical challenges. Each idea has advantages and challenges; they could complement each other, so that combinations of ideas may be a way forward. Combining process and artefact strategies would mean taking ethical guidelines and practices into account in the development of IoT devices and, where applicable, also building functionality into the device that allows it to behave ethically (according to agreed criteria) during operation. Devices can be built to work within the constraints of their enveloping environment, with user-informed limitations and clear expectations in terms of applicability, configurability, and behaviour. Developers could encode rules for ethical behaviour, but only after engagement and consultation with the community and stakeholders on what rules are relevant, based on a transparent and open process (e.g., consultative processes, technology trials, crowdsourcing viewpoints or online workshops). White- or gray-boxed devices could allow end-user intelligibility, consent and configurability, so that users retain a desired degree of control. Individual IoT devices should be secured against cyber-attacks, and the data they collect should be handled in a way that is intelligible and configurable by the user, according to best-practice standards. When they take action, it should be in agreement with acceptable social norms, and auditable.
- Context is key: Many people are involved in an IoT ecosystem, including developers, IoT device retailers, IoT system administrators, IoT system maintainers, end-users, the local community and society at large. Society and communities can be affected by the deployment of IoT systems in public spaces (e.g., autonomous vehicles and robots); therefore, the broader context of deployment needs to be considered. Moreover, what is considered ethical behaviour might depend on the context of operation and the application; a device’s actions might be considered ethical in one context but unethical in another, as also noted in [72] with regard to the use of location-based services. Broader contexts of operation include local culture, norms, or application domain (e.g., IoT in health, transport, or finance would have different rules for ethical behaviour); hence, multiple levels of norms and ethical rules would be required to guide the design and development of IoT devices and ecosystems. A basic ethical standard could apply (e.g., basic security built into devices, basic user-definable data handling options, and basic action tracking), with additional configurable options for context-specific ethical behaviour added on top; a minimal sketch of such a layered, user-configurable policy appears after this list.
- Ethical considerations with autonomy: Guidelines for developers and consideration of what is built into an artefact to achieve ethical algorithmic behaviour could incorporate features that take into account, at least, the following:
- Security of data and physical security as impacted by device actions.
- Privacy of user data and device actions that impinge on privacy.
- Consequences of over-reliance on, or human attachment to, IoT devices.
- Algorithmic bias and design bias, and fairness of device actions.
- The possible need to engage not just end-users, but anyone affected by the IoT deployment, e.g., via crowdsourcing viewpoints pre-development, and obtaining feedback from users and society at large post-deployment.
- User choice and freedom retained, including allowing user adjustments to ethical behaviour (e.g., opting in and out, an adequate range of options, and designing devices with ethics settings).
- End-user experience, including user intelligibility, scrutability and explainability when needed, usability not just for certain groups of people, user control over data management and device behaviour, and appropriate manual overrides (https://cacm.acm.org/blogs/blog-cacm/238899-the-autocracy-of-autonomous-systems/fulltext, accessed on 1 July 2021).
- Accountability for device actions, including legal and moral responsibilities, and support for traceability of actions.
- Implications and possible unintended effects of cooperation among devices, e.g., where physical actions from multiple devices could mutually interfere, and the extent of data sharing during communications.
- Deployment for long-term use (if applicable) and updatability, arising from security updates, improvements from feedback, adaptation to changing human needs, and policy changes.
- Ethical consequences of autonomous action in IoT deployments (from physical movements to driving in certain ways).
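As referenced in the discussion above, the following is a minimal Python sketch of a layered, user-configurable ethics policy for an IoT device: a base standard applies everywhere, context-specific rules are layered on top, and the user adjusts settings within developer-set bounds. All class, rule, and setting names here are hypothetical illustrations, not part of any existing IoT framework.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

Action = Dict[str, object]  # a proposed device action, e.g. {"type": "share_data", "audience": "insurer"}

@dataclass
class EthicsPolicy:
    """Layered policy: base rules always apply; context rules are added per deployment domain."""
    base_rules: List[Callable[[Action, Dict], bool]] = field(default_factory=list)
    context_rules: Dict[str, List[Callable[[Action, Dict], bool]]] = field(default_factory=dict)
    user_settings: Dict[str, object] = field(default_factory=dict)  # user-adjustable options

    def permits(self, action: Action, context: Dict) -> bool:
        rules = list(self.base_rules) + self.context_rules.get(context.get("domain", ""), [])
        # user settings are merged into the context seen by every rule
        return all(rule(action, {**context, **self.user_settings}) for rule in rules)

# Base standard (illustrative): never share data without recorded consent.
def consent_required(action: Action, ctx: Dict) -> bool:
    return action.get("type") != "share_data" or bool(ctx.get("consent_on_record", False))

# Health-domain rule (illustrative): sharing with insurers is off unless the user opted in.
def insurer_opt_in(action: Action, ctx: Dict) -> bool:
    if action.get("type") == "share_data" and action.get("audience") == "insurer":
        return bool(ctx.get("share_with_insurer", False))
    return True

policy = EthicsPolicy(
    base_rules=[consent_required],
    context_rules={"health": [insurer_opt_in]},
    user_settings={"share_with_insurer": False},  # user choice, defaulting to the restrictive option
)

proposed = {"type": "share_data", "audience": "insurer"}
print(policy.permits(proposed, {"domain": "health", "consent_on_record": True}))  # False: no opt-in
```

Auditing could be layered on the same structure by logging each decision together with the rule that denied it, supporting the traceability point above.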
4. Conclusions and Future Work
- Programming approaches to add ethical behaviour to devices, including adding moral reasoning capabilities to machines, and configuring devices with user ethics preferences.
- Detection and prevention of algorithmic bias, via accountability models and transparency.
- Behaviour-based validation techniques.
- The notion of algorithmic social contracts, and crowdsourcing solutions to ethical issues.
- The idea of enveloping systems, and
- Developing guidelines and proposals for regulations and codes of ethics, to encourage ethical developers and ethical development of IoT devices, and to require security and privacy measures in such devices. Suitable data privacy laws in the IoT context, as well as secure-by-design, privacy-by-design, ethical-by-design and design-for-responsibility principles, are also needed.
Funding
Conflicts of Interest
References
- Hermsen, S.; Frost, J.H.; Robinson, E.; Higgs, S.; Mars, M.; Hermans, R.C.J. Evaluation of a Smart Fork to Decelerate Eating Rate. J. Acad. Nutr. Diet. 2016, 116, 1066–1067. [Google Scholar] [CrossRef] [Green Version]
- Kadomura, A.; Li, C.Y.; Chen, Y.C.; Tsukada, K.; Siio, I.; Chu, H.H. Sensing Fork: Eating Behavior Detection Utensil and Mobile Persuasive Game. In Proceedings of the CHI’13 Extended Abstracts on Human Factors in Computing Systems; ACM: New York, NY, USA, 2013; pp. 1551–1556. [Google Scholar] [CrossRef]
- Wu, Q.; Ding, G.; Xu, Y.; Feng, S.; Du, Z.; Wang, J.; Long, K. Cognitive Internet of Things: A New Paradigm Beyond Connection. IEEE Internet Things J. 2014, 1, 129–143. [Google Scholar] [CrossRef] [Green Version]
- Minerva, R.; Biru, A.; Rotondi, D. IEEE Internet Initiative. Available online: https://internetinitiative.ieee.org/ (accessed on 1 July 2021).
- Pintus, A.; Carboni, D.; Piras, A. Paraimpu: A Platform for a Social Web of Things. In Proceedings of the 21st International Conference on World Wide Web, Lyon, France, 16–20 April 2012; Association for Computing Machinery: New York, NY, USA, 2012; pp. 401–404. [Google Scholar] [CrossRef]
- Taivalsaari, A.; Mikkonen, T. A Roadmap to the Programmable World: Software Challenges in the IoT Era. IEEE Softw. 2017, 34, 72–80. [Google Scholar] [CrossRef]
- Tripathy, B.K.; Dutta, D.; Tazivazvino, C. On the Research and Development of Social Internet of Things. In Internet of Things (IoT) in 5G Mobile Technologies; Mavromoustakis, C.X., Mastorakis, G., Batalla, J.M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 153–173. [Google Scholar] [CrossRef]
- Pticek, M.; Podobnik, V.; Jezic, G. Beyond the Internet of Things: The Social Networking of Machines. Int. J. Distrib. Sens. Netw. 2016, 12, 8178417. [Google Scholar] [CrossRef] [Green Version]
- Lin, Z.; Dong, L. Clarifying Trust in Social Internet of Things. CoRR 2017. [Google Scholar] [CrossRef] [Green Version]
- Farris, I.; Girau, R.; Militano, L.; Nitti, M.; Atzori, L.; Iera, A.; Morabito, G. Social Virtual Objects in the Edge Cloud. IEEE Cloud Comput. 2015, 2, 20–28. [Google Scholar] [CrossRef]
- Loke, S. Context-Aware Pervasive Systems; Auerbach Publications: Boston, MA, USA, 2006. [Google Scholar]
- Cristea, V.; Dobre, C.; Pop, F. Context-Aware Environments for the Internet of Things. In Internet of Things and Inter-Cooperative Computational Technologies for Collective Intelligence; Bessis, N., Xhafa, F., Varvarigou, D., Hill, R., Li, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 25–49. [Google Scholar] [CrossRef]
- Berman, F.; Cerf, V.G. Social and Ethical Behavior in the Internet of Things. Commun. ACM 2017, 60, 6–7. [Google Scholar] [CrossRef] [Green Version]
- Atlam, H.F.; Wills, G.B. IoT Security, Privacy, Safety and Ethics; Springer International Publishing: Cham, Switzerland, 2020; pp. 123–149. [Google Scholar]
- Calvo, P. The ethics of Smart City (EoSC): Moral implications of hyperconnectivity, algorithmization and the datafication of urban digital society. Ethics Inf. Technol. 2020, 22, 141–149. [Google Scholar] [CrossRef]
- Singh, M.P.; Chopra, A.K. The Internet of Things and Multiagent Systems: Decentralized Intelligence in Distributed Computing. In Proceedings of the 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), Atlanta, GA, USA, 5–8 June 2017; pp. 1738–1747. [Google Scholar] [CrossRef] [Green Version]
- Lipson, H.; Kurman, M. Driverless: Intelligent Cars and the Road Ahead; MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
- Simoens, P.; Dragone, M.; Saffiotti, A. The Internet of Robotic Things: A review of concept, added value and applications. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418759424. [Google Scholar] [CrossRef]
- Ray, P.P. Internet of Robotic Things: Concept, Technologies, and Challenges. IEEE Access 2016, 4, 9489–9500. [Google Scholar] [CrossRef]
- Yu, H.; Shen, Z.; Miao, C.; Leung, C.; Lesser, V.R.; Yang, Q. Building Ethics into Artificial Intelligence. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18, Stockholm, Sweden, 13–19 July 2018; pp. 5527–5533. [Google Scholar]
- Stahl, B.C.; Timmermans, J.; Mittelstadt, B.D. The Ethics of Computing: A Survey of the Computing-Oriented Literature. ACM Comput. Surv. 2016, 48, 55:1–55:38. [Google Scholar] [CrossRef] [Green Version]
- Allhoff, F.; Henschke, A. The Internet of Things: Foundational ethical issues. Internet Things 2018, 1–2, 55–66. [Google Scholar] [CrossRef]
- Karale, A. The Challenges of IoT Addressing Security, Ethics, Privacy, and Laws. Internet Things 2021, 15, 100420. [Google Scholar] [CrossRef]
- Sha, K.; Wei, W.; Yang, T.A.; Wang, Z.; Shi, W. On security challenges and open issues in Internet of Things. Future Gener. Comput. Syst. 2018, 83, 326–337. [Google Scholar] [CrossRef]
- Ge, M.; Hong, J.B.; Guttmann, W.; Kim, D.S. A framework for automating security analysis of the internet of things. J. Netw. Comput. Appl. 2017, 83, 12–27. [Google Scholar] [CrossRef]
- Sicari, S.; Rizzardi, A.; Grieco, L.; Coen-Porisini, A. Security, privacy and trust in Internet of Things: The road ahead. Comput. Netw. 2015, 76, 146–164. [Google Scholar] [CrossRef]
- Malina, L.; Hajny, J.; Fujdiak, R.; Hosek, J. On perspective of security and privacy-preserving solutions in the internet of things. Comput. Netw. 2016, 102, 83–95. [Google Scholar] [CrossRef]
- Atwady, Y.; Hammoudeh, M. A Survey on Authentication Techniques for the Internet of Things. In Proceedings of the International Conference on Future Networks and Distributed Systems, Cambridge, UK, 19–20 July 2017; ACM: New York, NY, USA, 2017. [Google Scholar] [CrossRef]
- Sfar, A.R.; Natalizio, E.; Challal, Y.; Chtourou, Z. A roadmap for security challenges in the Internet of Things. Digit. Commun. Netw. 2018, 4, 118–137. [Google Scholar] [CrossRef]
- Alaba, F.A.; Othman, M.; Hashem, I.A.T.; Alotaibi, F. Internet of Things security: A survey. J. Netw. Comput. Appl. 2017, 88, 10–28. [Google Scholar] [CrossRef]
- Trnka, M.; Cerny, T.; Stickney, N. Survey of Authentication and Authorization for the Internet of Things. Secur. Commun. Netw. 2018, 2018, 4351603. [Google Scholar] [CrossRef] [Green Version]
- Díaz, M.; Martín, C.; Rubio, B. State-of-the-art, challenges, and open issues in the integration of Internet of things and cloud computing. J. Netw. Comput. Appl. 2016, 67, 99–117. [Google Scholar] [CrossRef]
- Jayaraman, P.P.; Yang, X.; Yavari, A.; Georgakopoulos, D.; Yi, X. Privacy preserving Internet of Things: From privacy techniques to a blueprint architecture and efficient implementation. Future Gener. Comput. Syst. 2017, 76, 540–549. [Google Scholar] [CrossRef]
- Weinberg, B.D.; Milne, G.R.; Andonova, Y.G.; Hajjat, F.M. Internet of Things: Convenience vs. privacy and secrecy. Bus. Horizons 2015, 58, 615–624. [Google Scholar] [CrossRef]
- Caron, X.; Bosua, R.; Maynard, S.B.; Ahmad, A. The Internet of Things (IoT) and its impact on individual privacy: An Australian perspective. Comput. Law Secur. Rev. 2016, 32, 4–15. [Google Scholar] [CrossRef]
- Weber, R.H. Internet of things: Privacy issues revisited. Comput. Law Secur. Rev. 2015, 31, 618–627. [Google Scholar] [CrossRef]
- Vegh, L. A Survey of Privacy and Security Issues for the Internet of Things in the GDPR Era. In Proceedings of the 2018 International Conference on Communications (COMM), Bucharest, Romania, 14–16 June 2018; pp. 453–458. [Google Scholar] [CrossRef]
- Humayed, A.; Lin, J.; Li, F.; Luo, B. Cyber-Physical Systems Security—A Survey. IEEE Internet Things J. 2017, 4, 1802–1831. [Google Scholar] [CrossRef]
- Dutta, S. Striking a Balance between Usability and Cyber-Security in IoT Devices. Master’s Thesis, Massachusetts Institute of Technology, Cambridge, MA, USA, 2017. [Google Scholar]
- Halperin, D.; Heydt-Benjamin, T.S.; Ransford, B.; Clark, S.S.; Defend, B.; Morgan, W.; Fu, K.; Kohno, T.; Maisel, W.H. Pacemakers and Implantable Cardiac Defibrillators: Software Radio Attacks and Zero-Power Defenses. In Proceedings of the 2008 IEEE Symposium on Security and Privacy (SP 2008), Oakland, CA, USA, 18–22 May 2008; pp. 129–142. [Google Scholar]
- Mittelstadt, B. Ethics of the health-related internet of things: A narrative review. Ethics Inf. Technol. 2017, 19, 157–175. [Google Scholar] [CrossRef]
- Chamberlain, A.; Crabtree, A.; Haddadi, H.; Mortier, R. Special theme on privacy and the Internet of things. Pers. Ubiquitous Comput. 2017, 22, 289–292. [Google Scholar] [CrossRef] [Green Version]
- Popescul, D.; Georgescu, M. Internet of Things—Some Ethical Issues. USV Ann. Econ. Public Adm. 2013, 13, 210–216. [Google Scholar]
- Ali, M.S.; Dolui, K.; Antonelli, F. IoT Data Privacy via Blockchains and IPFS. In Proceedings of the Seventh International Conference on the Internet of Things, Linz, Austria, 22–25 October 2017; ACM: New York, NY, USA, 2017; pp. 14:1–14:7. [Google Scholar] [CrossRef]
- Fernandez-Carames, T.M.; Fraga-Lamas, P. A Review on the Use of Blockchain for the Internet of Things. IEEE Access 2018, 6, 32979–33001. [Google Scholar] [CrossRef]
- Griggs, K.N.; Ossipova, O.; Kohlios, C.P.; Baccarini, A.N.; Howson, E.A.; Hayajneh, T. Healthcare Blockchain System Using Smart Contracts for Secure Automated Remote Patient Monitoring. J. Med. Syst. 2018, 42, 130. [Google Scholar] [CrossRef]
- Reyna, A.; Martín, C.; Chen, J.; Soler, E.; Díaz, M. On blockchain and its integration with IoT. Challenges and opportunities. Future Gener. Comput. Syst. 2018, 88, 173–190. [Google Scholar] [CrossRef]
- Yu, B.; Wright, J.; Nepal, S.; Zhu, L.; Liu, J.; Ranjan, R. IoTChain: Establishing Trust in the Internet of Things Ecosystem Using Blockchain. IEEE Cloud Comput. 2018, 5, 12–23. [Google Scholar] [CrossRef]
- Fister, I.; Ljubič, K.; Suganthan, P.N.; Perc, M.; Fister, I. Computational intelligence in sports: Challenges and opportunities within a new research domain. Appl. Math. Comput. 2015, 262, 178–186. [Google Scholar] [CrossRef]
- Lima, A.; Rocha, F.; Völp, M.; Esteves-Veríssimo, P. Towards Safe and Secure Autonomous and Cooperative Vehicle Ecosystems. In Proceedings of the 2nd ACM Workshop on Cyber-Physical Systems Security and Privacy; ACM: New York, NY, USA, 2016; pp. 59–70. [Google Scholar] [CrossRef] [Green Version]
- Nasser, A.M.; Ma, D.; Muralidharan, P. An Approach for Building Security Resilience in AUTOSAR Based Safety Critical Systems. J. Cyber Secur. Mobil. 2017, 6, 271–304. [Google Scholar] [CrossRef]
- Nawrath, T.; Fischer, D.; Markscheffel, B. Privacy-sensitive data in connected cars. In Proceedings of the 2016 11th International Conference for Internet Technology and Secured Transactions (ICITST), Barcelona, Spain, 5–7 December 2016; pp. 392–393. [Google Scholar] [CrossRef]
- Bonnefon, J.F.; Shariff, A.; Rahwan, I. The social dilemma of autonomous vehicles. Science 2016, 352, 1573–1576. [Google Scholar] [CrossRef] [Green Version]
- Lin, P. Why Ethics Matters for Autonomous Cars. In Autonomous Driving: Technical, Legal and Social Aspects; Maurer, M., Gerdes, J.C., Lenz, B., Winner, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 69–85. [Google Scholar] [CrossRef] [Green Version]
- Sparrow, R.; Howard, M. When human beings are like drunk robots: Driverless vehicles, ethics, and the future of transport. Transp. Res. Part C Emerg. Technol. 2017, 80, 206–215. [Google Scholar] [CrossRef]
- Bagloee, S.A.; Tavana, M.; Asadi, M.; Oliver, T. Autonomous vehicles: Challenges, opportunities, and future implications for transportation policies. J. Mod. Transp. 2016, 24, 284–303. [Google Scholar] [CrossRef] [Green Version]
- Lin, P.; Abney, K.; Bekey, G.A. Robot Ethics: The Ethical and Social Implications of Robotics; The MIT Press: Cambridge, MA, USA, 2014. [Google Scholar]
- Lin, P.; Abney, K.; Jenkins, R. Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence; Oxford University Press: Oxford, UK, 2017; Available online: https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190652951.001.0001/oso-9780190652951 (accessed on 1 July 2021).
- Tzafestas, S.G. Roboethics: A Navigating Overview; Springer: Berlin/Heidelberg, Germany, 2016. [Google Scholar]
- Gunkel, D.J. The other question: Can and should robots have rights? Ethics Inf. Technol. 2017, 20, 87–99. [Google Scholar] [CrossRef] [Green Version]
- Cai, X.; Ning, H.; Dhelim, S.; Zhou, R.; Zhang, T.; Xu, Y.; Wan, Y. Robot and its living space: A roadmap for robot development based on the view of living space. Digit. Commun. Netw. 2020. Available online: https://www.sciencedirect.com/science/article/pii/S2352864820302881 (accessed on 1 July 2021).
- Enemark, C. Armed Drones and the Ethics of War: Military Virtue in a Post-Heroic Age; Routledge, 2013. Available online: https://www.semanticscholar.org/paper/Armed-Drones-and-the-Ethics-of-War%3A-Military-virtue-Enemark/7126533ca42895dada35ac0106d1c3956ab32e8b (accessed on 1 July 2021).
- Mittelstadt, B.; Allo, P.; Taddeo, M.; Wachter, S.; Floridi, L. The ethics of algorithms: Mapping the debate. Big Data Soc. 2016, 3, 2053951716679679. [Google Scholar] [CrossRef] [Green Version]
- O’Neil, C. Weapons of Math Destruction; Crown Books, 2016. Available online: https://www.amazon.com/Weapons-Math-Destruction-Increases-Inequality/dp/0553418815 (accessed on 1 July 2021).
- Eubanks, V. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor; St. Martin’s Press, 2018; Available online: https://virginia-eubanks.com/books/ (accessed on 1 July 2021).
- Noble, S.U. Algorithms of Oppression: How Search Engines Reinforce Racism; NYU Press, 2018. Available online: https://www.tandfonline.com/doi/abs/10.1080/01419870.2019.1635260?journalCode=rers20 (accessed on 1 July 2021).
- Danks, D.; London, A.J. Algorithmic Bias in Autonomous Systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence, Melbourne, Australia, 19–25 August 2017; pp. 4691–4697. [Google Scholar]
- Tschider, C.A. Regulating the IoT: Discrimination, Privacy, and Cybersecurity in the Artificial Intelligence Age. Denver Univ. Law Rev. 2018. [Google Scholar] [CrossRef]
- Kraemer, F.; van Overveld, K.; Peterson, M. Is there an ethics of algorithms? Ethics Inf. Technol. 2011, 13, 251–260. [Google Scholar] [CrossRef] [Green Version]
- Desai, P.; Loke, S.W.; Desai, A.; Singh, J. CARAVAN: Congestion Avoidance and Route Allocation Using Virtual Agent Negotiation. IEEE Trans. Intell. Transp. Syst. 2013, 14, 1197–1207. [Google Scholar] [CrossRef]
- May, T. The Concept of Autonomy. Am. Philos. Q. 1994, 31, 133–144. [Google Scholar]
- Abbas, R.; Michael, K.; Michael, M. Using a Social-Ethical Framework to Evaluate Location-Based Services in an Internet of Things World. Int. Rev. Inf. Ethics 2014, 22, 42–73. [Google Scholar]
- Anderson, M.; Anderson, S.L. Robot be good. Sci. Am. 2010, 303, 72–77. [Google Scholar] [CrossRef]
- Anderson, M.; Anderson, S.L. ETHEL: Toward a Principled Ethical Eldercare System. In Proceedings of the AI in Eldercare: New Solutions to Old Problems, Papers from the 2008 AAAI Fall Symposium, Arlington, VA, USA, 7–9 November 2008; pp. 4–11. [Google Scholar]
- Ajmeri, N.; Guo, H.; Murukannaiah, P.K.; Singh, M.P. Designing Ethical Personal Agents. IEEE Internet Comput. 2018, 22, 16–22. [Google Scholar] [CrossRef]
- Gerdes, J.C.; Thornton, S.M. Implementable Ethics for Autonomous Vehicles. In Autonomous Driving: Technical, Legal and Social Aspects; Maurer, M., Gerdes, J.C., Lenz, B., Winner, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 87–102. [Google Scholar] [CrossRef] [Green Version]
- Shalev-Shwartz, S.; Shammah, S.; Shashua, A. On a Formal Model of Safe and Scalable Self-driving Cars. CoRR 2017. Available online: https://export.arxiv.org/pdf/1708.06374 (accessed on 1 July 2021).
- Awad, E.; Dsouza, S.; Kim, R.; Schulz, J.; Henrich, J.; Shariff, A.; Bonnefon, J.F.; Rahwan, I. The Moral Machine experiment. Nature 2018, 563, 59–64. [Google Scholar] [CrossRef] [PubMed]
- Wallach, W.; Allen, C. Moral Machines: Teaching Robots Right from Wrong; Oxford University Press, Inc.: New York, NY, USA, 2010. [Google Scholar]
- Anderson, M.; Anderson, S.L. Machine Ethics, 1st ed.; Cambridge University Press: New York, NY, USA, 2011. [Google Scholar]
- Dennis, L.; Fisher, M.; Slavkovik, M.; Webster, M. Formal verification of ethical choices in autonomous systems. Robot. Auton. Syst. 2016, 77, 1–14. [Google Scholar] [CrossRef] [Green Version]
- Vanderelst, D.; Winfield, A. An architecture for ethical robots inspired by the simulation theory of cognition. Cogn. Syst. Res. 2018, 48, 56–66. [Google Scholar] [CrossRef]
- Leben, D. A Rawlsian Algorithm for Autonomous Vehicles. Ethics Inf. Technol. 2017, 19, 107–115. [Google Scholar] [CrossRef]
- Contissa, G.; Lagioia, F.; Sartor, G. The Ethical Knob: Ethically-customisable automated vehicles and the law. Artif. Intell. Law 2017, 25, 365–378. [Google Scholar] [CrossRef]
- Gogoll, J.; Müller, J.F. Autonomous Cars: In Favor of a Mandatory Ethics Setting. Sci. Eng. Ethics 2017, 23, 681–700. [Google Scholar] [CrossRef] [PubMed]
- Baldini, G.; Botterman, M.; Neisse, R.; Tallacchini, M. Ethical Design in the Internet of Things. Sci. Eng. Ethics 2016. [Google Scholar] [CrossRef] [PubMed]
- Sicari, S.; Rizzardi, A.; Miorandi, D.; Coen-Porisini, A. Dynamic Policies in Internet of Things: Enforcement and Synchronization. IEEE Internet Things J. 2017, 4, 2228–2238. [Google Scholar] [CrossRef]
- Seliem, M.; Elgazzar, K.; Khalil, K. Towards Privacy Preserving IoT Environments: A Survey. Wirel. Commun. Mob. Comput. 2018, 2018, 1032761. [Google Scholar] [CrossRef] [Green Version]
- Floridi, L. What the Near Future of Artificial Intelligence Could Be. Philos. Technol. 2019, 32, 1–15. [Google Scholar] [CrossRef] [Green Version]
- Wilson, J.; Wahby, R.S.; Corrigan-Gibbs, H.; Boneh, D.; Levis, P.; Winstein, K. Trust but Verify: Auditing the Secure Internet of Things. In Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services; ACM: New York, NY, USA, 2017; pp. 464–474. [Google Scholar] [CrossRef]
- Mirzamohammadi, S.; Chen, J.A.; Sani, A.A.; Mehrotra, S.; Tsudik, G. Ditio: Trustworthy Auditing of Sensor Activities in Mobile & IoT Devices. In Proceedings of the 15th ACM Conference on Embedded Network Sensor Systems, Delft, The Netherlands, 6–8 November 2017; ACM: New York, NY, USA, 2017; pp. 28:1–28:14. [Google Scholar] [CrossRef]
- Singh, J.; Millard, C.; Reed, C.; Cobbe, J.; Crowcroft, J. Accountability in the IoT: Systems, Law, and Ways Forward. Computer 2018, 51, 54–65. [Google Scholar] [CrossRef]
- Tan, S.; Caruana, R.; Hooker, G.; Lou, Y. Detecting Bias in Black-Box Models Using Transparent Model Distillation. arXiv 2017, arXiv:1710.06169. [Google Scholar]
- Rahwan, I.; Cebrian, M.; Obradovich, N.; Bongard, J.; Bonnefon, J.F.; Breazeal, C.; Crandall, J.; Christakis, N.; Couzin, I.; Jackson, M.; et al. Machine behaviour. Nature 2019, 568, 477–486. [Google Scholar] [CrossRef] [Green Version]
- Kufieta, K.; Ditze, M. A virtual environment for the development and validation of highly automated driving systems. In 17. Internationales Stuttgarter Symposium; Bargende, M., Reuss, H.C., Wiedemann, J., Eds.; Springer Fachmedien Wiesbaden: Wiesbaden, Germany, 2017; pp. 1391–1401. [Google Scholar]
- Ebert, C.; Weyrich, M. Validation of Automated and Autonomous Vehicles. ATZelectronics Worldw. 2019, 14, 26–31. [Google Scholar] [CrossRef]
- Ebert, C.; Weyrich, M. Validation of Autonomous Systems. IEEE Softw. 2019, 36, 15–23. [Google Scholar] [CrossRef]
- Naujoks, F.; Wiedemann, K.; Schömig, N.; Hergeth, S.; Keinath, A. Towards guidelines and verification methods for automated vehicle HMIs. Transp. Res. Part F Traffic Psychol. Behav. 2019, 60, 121–136. [Google Scholar] [CrossRef]
- Koopman, P.; Ferrell, U.; Fratrik, F.; Wagner, M. A Safety Standard Approach for Fully Autonomous Vehicles. In Computer Safety, Reliability, and Security; Romanovsky, A., Troubitsyna, E., Gashi, I., Schoitsch, E., Bitsch, F., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 326–332. [Google Scholar]
- Rahwan, I. Society-in-the-loop: Programming the algorithmic social contract. Ethics Inf. Technol. 2017. [Google Scholar] [CrossRef] [Green Version]
- Lieberman, H.; Dinakar, K.; Jones, B. Crowdsourced Ethics with Personalized Story Matching. In Proceedings of the CHI’13 Extended Abstracts on Human Factors in Computing Systems, Paris, France, 27 April–2 May 2013; ACM: New York, NY, USA, 2013; pp. 709–714. [Google Scholar] [CrossRef]
- Luetge, C. The German Ethics Code for Automated and Connected Driving. Philos. Technol. 2017, 30, 547–558. [Google Scholar] [CrossRef]
- Happa, J.; Nurse, J.R.C.; Goldsmith, M.; Creese, S.; Williams, R. An ethics framework for research into heterogeneous systems. In Proceedings of the Living in the Internet of Things: Cybersecurity of the IoT—2018, London, UK, 28–29 March 2018; pp. 1–8. [Google Scholar] [CrossRef]
- Rand, D.G.; Nowak, M.A. Human cooperation. Trends Cogn. Sci. 2013, 17, 413–425. [Google Scholar] [CrossRef] [PubMed]
- Burton-Chellew, M.N.; West, S.A. Prosocial preferences do not explain human cooperation in public-goods games. Proc. Natl. Acad. Sci. USA 2013, 110, 216–221. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Perc, M.; Jordan, J.J.; Rand, D.G.; Wang, Z.; Boccaletti, S.; Szolnoki, A. Statistical physics of human cooperation. Phys. Rep. 2017, 687, 1–51. [Google Scholar] [CrossRef] [Green Version]
- Stahl, B.C.; Coeckelbergh, M. Ethics of healthcare robotics: Towards responsible research and innovation. Robot. Auton. Syst. 2016, 86, 152–161. [Google Scholar] [CrossRef]
- Pagallo, U. The Laws of Robots—Crimes, Contracts, and Torts; Law Governance and Technology Series; Springer: Berlin/Heidelberg, Germany, 2013; Volume 10. [Google Scholar] [CrossRef]
- Nardin, L.G.; Balke-Visser, T.; Ajmeri, N.; Kalia, A.K.; Sichman, J.S.; Singh, M.P. Classifying sanctions and designing a conceptual sanctioning process model for socio-technical systems. Knowl. Eng. Rev. 2016, 31, 142–166. [Google Scholar] [CrossRef] [Green Version]
- Shahraki, A.; Haugen, Ø. Social ethics in Internet of Things: An outline and review. In Proceedings of the 2018 IEEE Industrial Cyber-Physical Systems (ICPS) Conference, Saint Petersburg, Russia, 15–18 May 2018; pp. 509–516. [Google Scholar] [CrossRef]
- Stead, M.; Coulton, P.; Lindley, J.; Coulton, C. The Little Book of SUSTAINABILITY for the Internet of Things; 2019; Available online: https://www.researchgate.net/publication/331114232_The_Little_Book_of_SUSTAINABILITY_for_the_Internet_of_Things (accessed on 1 July 2021).
- Bibri, S.E. The IoT for smart sustainable cities of the future: An analytical framework for sensor-based big data applications for environmental sustainability. Sustain. Cities Soc. 2018, 38, 230–253. [Google Scholar] [CrossRef]
- Kearns, M.; Roth, A. The Ethical Algorithm: The Science of Socially Aware Algorithm Design; Oxford University Press: Oxford, UK, 2019. [Google Scholar]
Table 1. Ideas towards ethical algorithmic behaviour in the IoT: methods, key advantages, key challenges, and selected related work.

| Strategy | Idea | Methods | Key Advantages | Key Challenges | Selected Related Work |
|---|---|---|---|---|---|
| Build Behaviour into Artefact & Validate | Designing and Programming Ethical Behaviour | rule-based, game-theoretic calculations, ethics settings, ethical design templates | algorithmic or declarative representation of ethical behaviour; user control explicitly considered in artefact design | difficult for a set of rules to be complete; data used in development (e.g., to train Machine Learning models used in IoT devices) might be inadequate; hard for situations to be quantified; raises the question of who decides what is ethical | [16,73,74,75,77,83,84,85,86,87] |
| | Enveloping | setting physical/cyber-physical boundaries of operation | reduces complexity of operating environments; sets expectations of behaviour and contexts of trustworthy operation | may be hard to create suitable envelopes that do not hinder functioning of IoT systems | [89] (though originally proposed to achieve better AI systems) |
| | White-box Algorithms | improve transparency, detect algorithmic bias | greater traceability and accountability (possibly allowing engagement with non-developers) | transparency does not equate to understandability; scrutability does not equate to user control | [90,91,92,93] |
| | Black-box Validation | cognitive testing, simulation, heuristic evaluation | applicable where white-boxing is difficult; basis for certification | difficult to consider all cases and situations | [95,97,98] |
| | Algorithmic Social Contracts | crowdsourcing ethics, processes for algorithmic regulation | wider engagement (possibly with non-developers) | complex; may be hard to create suitable efficient processes or to obtain adequate participation | [100,101] |
| Guide Developers | Code of Ethics and Guidelines for IoT Developers | formal guidelines, regulations, community best practice for developers | highlights ethical considerations in development | application- or domain-specific considerations required | German ethics code for automated and connected driving [102]; IoT data privacy guidelines and regulations [37]; (also, code of ethics for robotics engineers, Asilomar Principles, IoT design manifesto, IoT Alliance Australia Security Guideline, design-for-responsibility); RRI [107] |