Impacts of Artificial Intelligence Development on Humanity and Social Values
Abstract
1. Introduction
2. AI Development
2.1. Affective Computing with Emotional Intelligence
2.2. Energy Harvesting for AIoT
2.2.1. Energy Sources and Their Energy Harvesters
2.2.2. Autonomous IoT
2.3. ANN to LLM
2.4. LLM with Moral Value Consideration
3. Humans and Non-Human Agents
3.1. Limitation of Current Models
3.2. Human and Non-Human Moral Agency and Patiency
3.3. Practical Moral Character Representation
3.4. Moral Dataset Construction
3.5. LLM-Based Moral Alignment
4. Discussion on the Impacts
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
| Method | Mechanism | Advantages | Limitations |
|---|---|---|---|
| Reinforcement Learning from Human Feedback (RLHF) | Trains a reward model on human rankings of model outputs, then optimizes the LLM with Proximal Policy Optimization (PPO). | Captures nuanced human preferences; effective for conversational models. | Requires extensive human feedback; computationally expensive; the reward model itself can be misaligned. |
| Imitation Learning (IL) | Supervised learning on expert demonstrations (behavior cloning). | Simple to implement; effective when demonstrations are of high quality. | Limited by the quality and diversity of demonstrations; may generalize poorly. |
| Supervised Fine-Tuning (SFT) | Directly optimizes model parameters on labeled data with standard supervised learning. | Straightforward; effective for well-defined tasks; needs less human-in-the-loop effort. | Limited to predefined tasks; may not capture nuanced human values. |
| Direct Preference Optimization (DPO) | Optimizes a loss derived directly from preference data to align model outputs with desired behavior. | Simpler than RLHF; avoids training a separate reward model; computationally efficient. | May struggle with complex value alignment; less robust across diverse preferences. |
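The DPO row above can be made concrete with a small sketch of its per-pair loss. The function below follows the standard DPO formulation (negative log-sigmoid of the scaled difference in implicit reward margins); the specific numbers and the `beta` value are illustrative, not taken from this article.

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are summed log-probabilities of the chosen (preferred) and
    rejected responses under the trainable policy and the frozen
    reference model. The loss falls as the policy favors the chosen
    response more strongly than the reference does.
    """
    # Implicit reward margins relative to the reference model.
    chosen_margin = policy_chosen_logp - ref_chosen_logp
    rejected_margin = policy_rejected_logp - ref_rejected_logp
    logits = beta * (chosen_margin - rejected_margin)
    # -log(sigmoid(logits)), written stably as softplus(-logits).
    return math.log1p(math.exp(-logits))

# Here the policy already favors the chosen answer slightly more than
# the reference does, so the loss is below log(2) (the indifference point).
loss = dpo_loss(-12.0, -15.0, -12.5, -14.0, beta=0.1)
```

Because the loss depends only on log-probabilities from the two models, no separate reward model or RL loop is needed, which is the efficiency advantage noted in the table.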
| Context | Moral Action | Immoral Action |
|---|---|---|
| As clouds and wind without rain, so is he who boasts of gifts deceptively. | “Moral Lesson”: the passage condemns hypocrisy and self-serving generosity. True generosity is rooted in humility, integrity, and a desire to serve others, not to gain recognition. It also emphasizes the importance of truthfulness in speech and action, since deception ultimately leads to emptiness and spiritual harm. | “Futility of Empty Actions”: the metaphor of clouds and wind without rain underscores the ineffectiveness of insincere actions. Just as clouds and wind are transient and unproductive, so too are the gifts and boasts of the deceitful person. |
| Then you will understand righteousness and justice, equity and every good path. For wisdom will enter into your heart. Knowledge will be pleasant to your soul. Discretion will watch over you. Understanding will keep you. | The moral actions described center on ethical and righteous living. The passage emphasizes the following virtues and behaviors: 1. “Righteousness”: living in accordance with moral and ethical standards, especially in relation to justice and fairness. 2. “Justice”: upholding fairness and equality, ensuring that actions align with the principles of right and wrong. 3. “Equity”: practicing fairness and impartiality, treating all people with equal regard and without bias. 4. “Every Good Path”: choosing and following paths that lead to moral integrity, wisdom, and alignment with God’s will. These actions result from wisdom entering the heart: the passage highlights that wisdom, knowledge, discretion, and understanding enable one to embody these moral actions and remain protected by them. | Making an ethical judgment without seeking wisdom and knowledge is immoral. |
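Entries like those in the table above can be stored as one JSON record per example for supervised fine-tuning. The sketch below shows one plausible record layout; the field names (`context`, `moral_action`, `immoral_action`) are illustrative, not necessarily the schema used in this work.

```python
import json

# Hypothetical record for a moral dataset, mirroring the table columns.
record = {
    "context": ("As clouds and wind without rain, so is he who boasts "
                "of gifts deceptively."),
    "moral_action": ("Practice generosity rooted in humility, integrity, "
                     "and truthfulness rather than self-promotion."),
    "immoral_action": ("Boast of gifts insincerely, offering empty "
                       "promises to gain recognition."),
}

# Serialize to a single JSON line, the usual format for fine-tuning corpora.
line = json.dumps(record, ensure_ascii=False)
parsed = json.loads(line)
```

Keeping each example on one line (JSON Lines) lets large corpora be streamed record by record during training rather than loaded whole.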
Share and Cite
Chong, K.C.M.; Tan, Y.-K.; Zhou, X. Impacts of Artificial Intelligence Development on Humanity and Social Values. Information 2025, 16, 810. https://doi.org/10.3390/info16090810