Search Results (1,015)

Search Parameters:
Keywords = ethical implications

21 pages, 596 KB  
Review
Hashing in the Fight Against CSAM: Technology at the Crossroads of Law and Ethics
by Evangelia Daskalaki, Emmanouela Kokolaki and Paraskevi Fragopoulou
J. Cybersecur. Priv. 2025, 5(4), 92; https://doi.org/10.3390/jcp5040092 (registering DOI) - 31 Oct 2025
Abstract
Hashes are vital in limiting the spread of child sexual abuse material online, yet their use introduces unresolved technical, legal, and ethical challenges. This paper bridges a critical gap by analyzing both cryptographic and perceptual hashing, not only in terms of detection capabilities, but also their vulnerabilities and implications for privacy governance. Unlike prior work, it reframes CSAM detection as a multidimensional issue, at the intersection of cybersecurity, data protection law, and digital ethics. Three key contributions are made: first, a comparative evaluation of hashing techniques, revealing weaknesses, such as susceptibility to media edits, collision attacks, hash inversion, and data leakage; second, a call for standardized benchmarks and interoperable evaluation protocols to assess system robustness; and third, a legal argument that perceptual hashes qualify as personal data under EU law, with implications for transparency and accountability. Ethically, the paper underscores the tension faced by service providers in balancing user privacy with the duty to detect CSAM. It advocates for detection systems that are not only technically sound, but also legally defensible and ethically governed. By integrating technical analysis with legal insight, this paper offers a comprehensive framework for evaluating CSAM detection, within the broader context of digital safety and privacy. Full article
(This article belongs to the Section Cryptography and Cryptology)
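The cryptographic-vs-perceptual contrast this abstract draws can be made concrete with a toy sketch. Everything here is invented for illustration: the "image" is a synthetic list of 64 pixel values, and the average-hash is a simplified stand-in for real perceptual hashes (e.g., PhotoDNA or PDQ), not any deployed CSAM detection system.

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, thresholded at the mean."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

img = [10, 12, 200, 199, 30, 25, 220, 210] * 8   # 64 synthetic pixel values
edited = img[:]
edited[0] += 3                                   # small brightness tweak

# Cryptographic hash: any edit changes the digest completely (avalanche
# effect), so it only matches known material byte-for-byte.
h_orig = hashlib.sha256(bytes(img)).hexdigest()
h_edit = hashlib.sha256(bytes(edited)).hexdigest()

# Perceptual-style hash: the small edit leaves the bit string (almost)
# unchanged, so re-encoded or lightly edited copies still match.
dist = hamming(average_hash(img), average_hash(edited))
```

The same smoothness that makes the perceptual hash robust to media edits is also its attack surface: because nearby inputs map to nearby hashes, it admits the collision and inversion attacks the paper analyzes, whereas the cryptographic digest resists inversion but misses any edited copy.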
26 pages, 4427 KB  
Review
Digital Technology Integration in Risk Management of Human–Robot Collaboration Within Intelligent Construction—A Systematic Review and Future Research Directions
by Xingyuan Ding, Yinshuang Xu, Min Zheng, Weide Kang and Xiaer Xiahou
Systems 2025, 13(11), 974; https://doi.org/10.3390/systems13110974 (registering DOI) - 31 Oct 2025
Viewed by 24
Abstract
With the digital transformation of the construction industry toward intelligent construction, advanced digital technologies—including Artificial Intelligence (AI), Digital Twins (DTs), and Internet of Things (IoT)—increasingly support Human–Robot Collaboration (HRC), offering productivity gains while introducing new safety risks. This study presents a systematic review of digital technology applications and risk management practices in HRC scenarios within intelligent construction environments. Following the PRISMA protocol, this study retrieved 7640 publications from the Web of Science database. After screening, 70 high-quality studies were selected for in-depth analysis. This review identifies four core digital technologies central to current HRC research: multi-modal acquisition technology, artificial intelligence learning technology (AI learning technology), Digital Twins (DTs), and Augmented Reality (AR). Based on the findings, this study constructed a systematic framework for digital technology in HRC, consisting of data acquisition and perception, data transmission and storage, intelligent analysis and decision support, human–machine interaction and collaboration, and intelligent equipment and automation. The study highlights core challenges across risk management stages, including difficulties in multi-modal fusion (risk identification), lack of quantitative systems (risk assessment), real-time performance issues (risk response), and weak feedback loops in risk monitoring and continuous improvement. Moreover, future research directions are proposed, including trust in HRC, privacy and ethics, and closed-loop optimization. This research provides theoretical insights and practical recommendations for advancing digital safety systems and supporting the safe digital transformation of the construction industry. 
These research findings hold significant implications for advancing the digital transformation of the construction industry and enabling efficient risk management. Full article
17 pages, 221 KB  
Article
The Question of AI During the Papacy of Pope Francis: A Philosophical and Theological Analysis
by Justin Nnaemeka Onyeukaziri
Religions 2025, 16(11), 1379; https://doi.org/10.3390/rel16111379 - 29 Oct 2025
Viewed by 177
Abstract
The papacy of Pope Francis, like previous papacies, addressed several contemporary global issues, among which climate change, global poverty, war, and artificial intelligence (AI) received recurrent emphasis. Four elaborate documents were dedicated to the question of AI design and development with respect to their ethical, philosophical, theological, and socio-political implications. The aim of this study is to analyze the philosophical and theological intuitions that underpin the urgency and cogency with which these documents pronounce on AI. Additionally, it aims to examine the exhaustiveness and cohesiveness of the scientific and technological epistemological foundations that ground the documents’ arguments and the theoretical unification that underpins Pope Francis’s ecclesiology of AI. This will help to evaluate its contribution to the ongoing discourse on AI ethics and governance and expound the humanistic imagination on the reality of the co-existence of humans and AI as cognitive systems. Full article
(This article belongs to the Special Issue Religious Ethics and Theology in Contemporary Human Life)
16 pages, 211 KB  
Article
Towards a Socio-Theological Evaluation of Artificial Intelligence
by Hilary Ndu Okeke
Religions 2025, 16(11), 1372; https://doi.org/10.3390/rel16111372 - 29 Oct 2025
Viewed by 187
Abstract
Artificial intelligence has revolutionized multiple aspects of human existence and raised substantive questions regarding the ultimate purpose of the human person. Spiritual knowledge advances through scientific discovery, as understanding the universe contributes to knowledge of God. Theology, as a discipline that is both theocentric and anthropocentric, considers AI a product of human scientific ingenuity. Despite extensive debate over the decades concerning AI’s impact on the human person, few studies have resolved the complex theological and epistemological issues involved. This article contends that AI represents a significant development in creation, rapidly redefining established paradigms. How does AI as imago hominis reshape Christian anthropology, and what are the socio-theological implications? This core question guides our exploration. We will examine the human person as an individual endowed with intellect, sensibility, volition, as well as imago Dei. In contrast, AI is governed by code and programmers, and is characterized as imago hominis, subject to inherent limitations. By examining the relationship between AI as imago hominis and the human person as imago Dei, this paper succinctly addresses some key ethical and anthropological concerns. Full article
(This article belongs to the Special Issue Religious Ethics and Theology in Contemporary Human Life)
22 pages, 1329 KB  
Article
Voices of Researchers: Ethics and Artificial Intelligence in Qualitative Inquiry
by Juan Luis Cabanillas-García, María Cruz Sánchez-Gómez and Irene del Brío-Alonso
Information 2025, 16(11), 938; https://doi.org/10.3390/info16110938 - 28 Oct 2025
Viewed by 370
Abstract
The rapid emergence of Generative Artificial Intelligence (GenAI) has sparked a growing debate about its ethical, methodological, and epistemological implications for qualitative research. This study aimed to examine and deeply understand researchers’ perceptions regarding the use of GenAI tools in different phases of the qualitative research process. The study involved a sample of 214 researchers from diverse disciplinary areas, with publications indexed in Web of Science or Scopus that apply qualitative methods. Data collection was conducted using an open-ended questionnaire, and analysis was carried out using coding and thematic analysis procedures, which allowed us to identify patterns of perception, user experiences, and barriers. The findings show that, while GenAI is valued for its ability to optimize tasks such as corpus organization, initial coding, transcription, translation, and information synthesis, its implementation raises concerns regarding privacy, consent, authorship, the reliability of results, and the loss of interpretive depth. Furthermore, a dual ecosystem is observed, where some researchers already incorporate it, mainly generative text assistants like ChatGPT, while others have yet to use it or are unfamiliar with it. Overall, the results suggest that the most solid path is an assisted model, supported by clear ethical frameworks, adapted methodological guidelines, and critical training for responsible and humanistic use. Full article
(This article belongs to the Special Issue Generative AI Technologies: Shaping the Future of Higher Education)
25 pages, 2253 KB  
Entry
Artificial Intelligence in Higher Education: A State-of-the-Art Overview of Pedagogical Integrity, Artificial Intelligence Literacy, and Policy Integration
by Manolis Adamakis and Theodoros Rachiotis
Encyclopedia 2025, 5(4), 180; https://doi.org/10.3390/encyclopedia5040180 - 28 Oct 2025
Viewed by 524
Definition
Artificial Intelligence (AI), particularly Generative AI (GenAI) and Large Language Models (LLMs), is rapidly reshaping higher education by transforming teaching, learning, assessment, research, and institutional management. This entry provides a state-of-the-art, comprehensive, evidence-based synthesis of established AI applications and their implications within the higher education landscape, emphasizing mature knowledge aimed at educators, researchers, and policymakers. AI technologies now support personalized learning pathways, enhance instructional efficiency, and improve academic productivity by facilitating tasks such as automated grading, adaptive feedback, and academic writing assistance. The widespread adoption of AI tools among students and faculty members has created a critical need for AI literacy—encompassing not only technical proficiency but also critical evaluation, ethical awareness, and metacognitive engagement with AI-generated content. Key opportunities include the deployment of adaptive tutoring and real-time feedback mechanisms that tailor instruction to individual learning trajectories; automated content generation, grading assistance, and administrative workflow optimization that reduce faculty workload; and AI-driven analytics that inform curriculum design and early intervention to improve student outcomes. At the same time, AI poses challenges related to academic integrity (e.g., plagiarism and misuse of generative content), algorithmic bias and data privacy, digital divides that exacerbate inequities, and risks of “cognitive debt” whereby over-reliance on AI tools may degrade working memory, creativity, and executive function. The lack of standardized AI policies and fragmented institutional governance highlight the urgent necessity for transparent frameworks that balance technological adoption with academic values. 
Anchored in several foundational pillars (such as a brief description of AI higher education, AI literacy, AI tools for educators and teaching staff, ethical use of AI, and institutional integration of AI in higher education), this entry emphasizes that AI is neither a panacea nor an intrinsic threat but a “technology of selection” whose impact depends on the deliberate choices of educators, institutions, and learners. When embraced with ethical discernment and educational accountability, AI holds the potential to foster a more inclusive, efficient, and democratic future for higher education; however, its success depends on purposeful integration, balancing innovation with academic values such as integrity, creativity, and inclusivity. Full article
(This article belongs to the Collection Encyclopedia of Social Sciences)
22 pages, 1553 KB  
Article
Factors Influencing the Reported Intention of Higher Vocational Computer Science Students in China to Use AI After Ethical Training: A Study in Guangdong Province
by Huiwen Zou, Ka Ian Chan, Patrick Cheong-Iao Pang, Blandina Manditereza and Yi-Huang Shih
Educ. Sci. 2025, 15(11), 1431; https://doi.org/10.3390/educsci15111431 - 24 Oct 2025
Viewed by 301
Abstract
This paper reports a study conducting an in-depth analysis of the impacts of ethical training on the adoption of AI tools among computer science students in higher vocational colleges. These students will serve as the pivotal human factor for advancing the field of AI. Aiming to explore practical models for integrating AI ethics into computer science education, the research seeks to promote more responsible and effective AI application and therefore become a positive influence in the field. Employing a mixed-methods approach, the study included 105 students aged 20–24 from a vocational college in Guangdong Province, a developed region in China. Based on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model, a five-point Likert scale was used to evaluate the participants’ perceptions of AI tool usage based on ethical principles. The Structural Equation Modeling (SEM) results indicate that while participants are motivated to adopt AI technologies in certain aspects, performance expectancy negatively impacts their intention and actual usage. After systematically studying and understanding AI ethics, participants attribute a high proportion of responsibility (84.89%) to objective factors and prioritized safety (27.11%) among eight ethical principles. Statistical analysis shows that habit (β = 0.478, p < 0.001) and hedonic motivation (β = 0.239, p = 0.004) significantly influence behavioral intention. Additionally, social influence (β = 0.234, p = 0.008) affects use behavior. Findings regarding factors that influence AI usage can inform a strategic framework for the integration of ethical instruction in AI applications. These findings have significant implications for curriculum design, policy formulation, and the establishment of ethical guidelines for AI deployment in higher educational contexts. Full article
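As a rough illustration of what a standardized coefficient such as β = 0.478 means, the sketch below computes a single-predictor standardized coefficient (equivalent to Pearson's r) on invented 5-point Likert-style data. This is a toy, not the study's SEM: the `habit` and `intention` values are hypothetical, and a full structural model estimates many such paths jointly.

```python
import statistics as st

def standardize(xs):
    """Center to mean 0 and scale to (population) standard deviation 1."""
    m, s = st.fmean(xs), st.pstdev(xs)
    return [(x - m) / s for x in xs]

def std_beta(x, y):
    """Standardized coefficient for a single predictor (= Pearson r):
    a one-SD increase in x predicts a beta-SD change in y."""
    return st.fmean(a * b for a, b in zip(standardize(x), standardize(y)))

# Hypothetical 5-point Likert responses (not the study's data)
habit     = [2, 3, 3, 4, 5, 4, 2, 5]
intention = [2, 3, 4, 4, 5, 4, 3, 5]
beta = std_beta(habit, intention)
```

Standardization is what makes coefficients such as habit (0.478) and hedonic motivation (0.239) comparable: each is expressed in standard-deviation units rather than raw scale points.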
28 pages, 770 KB  
Review
Leveraging Artificial Intelligence and Modulation of Oxidative Stressors to Enhance Healthspan and Radical Longevity
by Donald D. Haines, Stephen Christopher Rose, Fred M. Cowan, Fadia F. Mahmoud, Albert A. Rizvanov and Arpad Tosaki
Biomolecules 2025, 15(11), 1501; https://doi.org/10.3390/biom15111501 - 24 Oct 2025
Viewed by 569
Abstract
This review explores the transformative potential of artificial intelligence (AI) in promoting healthspan and longevity. Healthspan focuses on enhancing quality of life free from chronic conditions, while longevity defines current lifespan limits within a particular species and encompasses biological aging at multiple levels. AI methodologies—including machine learning, deep learning, natural language processing, robotics, and data analytics—offer unprecedented tools to analyze complex biological data, accelerate biomarker discovery, optimize therapeutic interventions, and personalize medicine. Notably, AI has facilitated breakthroughs in identifying accurate biomarkers of biological age, developing precision medicine approaches, accelerating drug discovery, and enhancing genomic editing technologies such as CRISPR. Further, AI enables the analysis of endogenous cytoprotection, especially the activity of molecules such as heme oxygenase, with particular application to hemolytic diseases. AI-driven robotics and automated monitoring systems significantly improve elderly care, lifestyle interventions, and clinical trials, demonstrating considerable potential to extend both healthspan and lifespan. However, the integration of AI into longevity research poses ethical and societal challenges, including concerns over privacy, equitable access, and broader implications of extended human lifespans. Strategic interdisciplinary collaboration, transparent AI methodologies, standardized data frameworks, and equitable policy approaches are essential to responsibly harness AI’s full potential in transforming longevity science and improving human health. Full article
14 pages, 826 KB  
Article
Balancing Accuracy and Readability: Comparative Evaluation of AI Chatbots for Patient Education on Rotator Cuff Tears
by Ali Can Koluman, Mehmet Utku Çiftçi, Ebru Aloğlu Çiftçi, Başar Burak Çakmur and Nezih Ziroğlu
Healthcare 2025, 13(21), 2670; https://doi.org/10.3390/healthcare13212670 - 23 Oct 2025
Viewed by 210
Abstract
Background/Objectives: Rotator cuff (RC) tears are a leading cause of shoulder pain and disability. Artificial intelligence (AI)-based chatbots are increasingly applied in healthcare for diagnostic support and patient education, but the reliability, quality, and readability of their outputs remain uncertain. International guidelines (AMA, NIH, European health communication frameworks) recommend that patient materials be written at a 6th–8th grade reading level, yet most online and AI-generated content exceeds this threshold. Methods: We compared responses from three AI chatbots—ChatGPT-4o (OpenAI), Gemini 1.5 Flash (Google), and DeepSeek-V3 (Deepseek AI)—to 20 frequently asked patient questions about RC tears. Four orthopedic surgeons independently rated reliability and usefulness (7-point Likert) and overall quality (5-point Global Quality Scale). Readability was assessed using six validated indices. Statistical analysis included Kruskal–Wallis and ANOVA with Bonferroni correction; inter-rater agreement was measured using intraclass correlation coefficients (ICCs). Results: Inter-rater reliability was good to excellent (ICC 0.726–0.900). Gemini 1.5 Flash achieved the highest reliability and quality, ChatGPT-4o performed comparably but slightly lower in diagnostic content, and DeepSeek-V3 consistently scored lowest in reliability and quality but produced the most readable text (FKGL ≈ 6.5, within the 6th–8th grade target). None of the models reached a Flesch Reading Ease (FRE) score above 60, indicating that even the most readable outputs remained more complex than plain-language standards. Conclusions: Gemini 1.5 Flash and ChatGPT-4o generated more accurate and higher-quality responses, whereas DeepSeek-V3 provided more accessible content. No single model fully balanced accuracy and readability. 
Clinical Implications: Hybrid use of AI platforms—leveraging high-accuracy models alongside more readable outputs, with clinician oversight—may optimize patient education by ensuring both accuracy and accessibility. Future work should assess real-world comprehension and address the legal, ethical, and generalizability challenges of AI-driven patient education. Full article
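The readability thresholds cited in this entry (FKGL within grade 6–8, FRE above 60) come from standard formulas that are easy to sketch. The implementation below is a naive illustration, not the validated six-index battery the authors used: in particular, the syllable counter is a crude vowel-group heuristic.

```python
import re

def count_syllables(word):
    # Naive heuristic: count vowel groups, at least one per word.
    # (Real readability tools use dictionary-backed syllabification.)
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)     # words per sentence
    spw = syllables / len(words)          # syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl
```

Both formulas reward short sentences and short words, which is why chatbot answers rich in clinical terminology (long, polysyllabic words) tend to land above the 8th-grade target even when sentence length is controlled.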
17 pages, 982 KB  
Review
The Role of Gene Therapy and RNA-Based Therapeutic Strategies in Diabetes
by Mustafa Tariq Khan, Reem Emad Al-Dhaleai, Sarah M. Alayadhi, Zainab Alhalwachi and Alexandra E. Butler
Int. J. Mol. Sci. 2025, 26(21), 10264; https://doi.org/10.3390/ijms262110264 - 22 Oct 2025
Viewed by 427
Abstract
Gene therapy and RNA (ribonucleic acid)-based therapeutic strategies have emerged as promising alternatives to conventional diabetes treatments, significantly expanding the therapeutic landscape using viral and non-viral vectors, and RNA modalities such as mRNA (messenger ribonucleic acid), siRNA (small interfering ribonucleic acid) and miRNA (micro ribonucleic acid). Recent advancements in these fields have led to notable preclinical successes and ongoing clinical trials, yet they are accompanied by debates over safety, efficacy and ethical considerations that underscore the complexity of clinical translation. This review offers a comprehensive analysis of the underlying mechanisms by which these treatments target diabetes, critically evaluating the fundamental concepts and mechanistic insights that form their basis, while highlighting current research gaps, such as the challenges in long-term stability and efficient delivery of RNA-based therapies, and potential adverse effects associated with gene therapy techniques. By synthesizing diverse perspectives and controversies, the review outlines future directions and interdisciplinary approaches aimed at overcoming existing hurdles, ultimately setting the stage for innovative, personalized diabetes management and addressing the broader clinical and regulatory implications of these emerging therapeutic strategies. Full article
22 pages, 634 KB  
Review
What Are the Ethical Issues Surrounding Extended Reality in Mental Health? A Scoping Review of the Different Perspectives
by Marie-Hélène Goulet, Laura Dellazizzo, Simon Goyer, Stéphanie Dollé, Alexandre Hudon, Kingsada Phraxayavong, Marie Désilets and Alexandre Dumais
Behav. Sci. 2025, 15(10), 1431; https://doi.org/10.3390/bs15101431 - 21 Oct 2025
Viewed by 473
Abstract
As extended reality (XR) technologies such as virtual and augmented reality rapidly enter mental health care, ethical considerations lag behind and require urgent attention to safeguard patient safety, uphold research integrity, and guide clinical practice. This scoping review aims to map the current understanding of the main ethical issues arising from the use of XR in clinical psychiatry. Methods: Searches were conducted in 5 databases and included 29 studies. Relevant excerpts discussing ethical issues were documented and then categorized. Results: The analysis led to the identification of 5 core ethical challenges: (i) balancing beneficence and non-maleficence as a question of patient safety, (ii) altering autonomy by altering reality and information, (iii) data privacy risks and confidentiality concerns, (iv) clinical liability and regulation, and (v) fostering inclusiveness and equity in XR development. Most authors have raised ethical concerns primarily for the first two topics, whereas the remaining three themes were not consistently addressed across all papers. Conclusions: There remains a great research void regarding this important topic due to the limited number of empirical studies, the lack of involvement of those living with a mental health issue in the development of these XR-based technologies, and the lack of clear clinical and ethical guidelines regarding their use. Identifying the broader ethical implications of such novel technologies is crucial for best mental healthcare practices. Full article
(This article belongs to the Special Issue Digital Interventions for Addiction and Mental Health)
32 pages, 410 KB  
Article
Embedding AI Ethics in Technical Training: A Multi-Stakeholder Pilot Module Emphasizing Co-Design and Interdisciplinary Collaboration at Rome Technopole
by Giuseppe Esposito, Massimo Sanchez, Federica Fratini, Egidio Iorio, Lucia Bertuccini, Serena Cecchetti, Valentina Tirelli and Daniele Giansanti
Educ. Sci. 2025, 15(10), 1416; https://doi.org/10.3390/educsci15101416 - 21 Oct 2025
Viewed by 296
Abstract
Higher technical education plays a strategic role in equipping the workforce to navigate rapid technological advancements and evolving labor market demands. Within the Rome Technopole framework, Spoke 4 targets ITS Academies, promoting the development of flexible, modular programs that integrate advanced technical skills with ethical, legal, and societal perspectives. This study reports on a pilot training initiative on Artificial Intelligence (AI) co-designed by the Istituto Superiore di Sanità (ISS), aimed at exploring the ethical, practical, and educational relevance of AI in higher technical education. The module was developed and tested through a multi-stakeholder collaboration involving educators, institutional actors, and learners. A four-phase approach was adopted: (1) initial stakeholder consultation to identify needs and content directions, (2) collaborative design of the training module, (3) online delivery and engagement using a CAWI-based focus group, and (4) mixed-method evaluation, combining quantitative assessments and open-ended qualitative feedback. This design facilitated asynchronous participation and encouraged critical reflection on the real-world implications of AI. Through the four-phase approach, the pilot module was developed, delivered, and assessed with 37 participants. Quantitative analysis revealed high ratings for clarity, relevance, and perceived utility in terms of employability. Qualitative feedback highlighted the interdisciplinary design, the integration of ethical reasoning, and the module’s broad applicability across sectors—particularly Healthcare and Industry. Participants suggested including more real-world case studies and collaborative learning activities to enhance engagement. The findings support the feasibility and added value of embedding ethically informed, interdisciplinary AI education in professional technical training pathways. 
Developed within the Rome Technopole ecosystem, the pilot module offers a promising approach to fostering critical digital literacy and preparing learners for responsible engagement with emerging technologies. Full article
(This article belongs to the Special Issue AI Literacy: An Essential 21st Century Competence)
38 pages, 32547 KB  
Article
Recoding Reality: A Case Study of YouTube Reactions to Generative AI Videos
by Levent Çalli and Büşra Alma Çalli
Systems 2025, 13(10), 925; https://doi.org/10.3390/systems13100925 - 21 Oct 2025
Viewed by 867
Abstract
The mainstream launch of generative AI video platforms represents a major change to the socio-technical system of digital media, raising critical questions about public perception and societal impact. While research has explored isolated technical or ethical facets, a holistic understanding of the user experience of AI-generated videos—as an interrelated set of perceptions, emotions, and behaviors—remains underdeveloped. This study addresses this gap by conceptualizing public discourse as a complex system of interconnected themes. We apply a mixed-methods approach that combines quantitative LDA topic modeling with qualitative interpretation to analyze 11,418 YouTube comments reacting to AI-generated videos. The study’s primary contribution is the development of a novel, three-tiered framework that models user experience. This framework organizes 15 empirically derived topics into three interdependent layers: (1) Socio-Technical Systems and Platforms (the enabling infrastructure), (2) AI-Generated Content and Esthetics (the direct user-artifact interaction), and (3) Societal and Ethical Implications (the emergent macro-level consequences). Interpreting this systemic structure through the lens of the ABC model of attitudes, our analysis reveals the distinct Affective (e.g., the “uncanny valley”), Behavioral (e.g., memetic participation), and Cognitive (e.g., epistemic anxiety) dimensions that constitute the major elements of user experience. This empirically grounded model provides a holistic map of public discourse, offering actionable insights for managing the complex interplay between technological innovation and societal adaptation within this evolving digital system. Full article
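The LDA step in this entry's mixed-methods pipeline can be sketched with a minimal collapsed Gibbs sampler. This is a stdlib-only toy, not the authors' setup: the example "comments" and topic count are invented, and real analyses of 11,418 comments would use an optimized library and tune the number of topics.

```python
import random

def lda_gibbs(docs, n_topics, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Toy LDA via collapsed Gibbs sampling over tokenized documents."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    wid = {w: i for i, w in enumerate(vocab)}
    ndk = [[0] * n_topics for _ in docs]       # doc-topic counts
    nkw = [[0] * V for _ in range(n_topics)]   # topic-word counts
    nk = [0] * n_topics                        # tokens per topic
    z = []                                     # topic assignment per token
    for d, doc in enumerate(docs):             # random initialization
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k)
            ndk[d][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
        z.append(zs)
    for _ in range(n_iter):                    # resample each token's topic
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k, v = z[d][i], wid[w]
                ndk[d][k] -= 1; nkw[k][v] -= 1; nk[k] -= 1
                weights = [(ndk[d][t] + alpha) * (nkw[t][v] + beta)
                           / (nk[t] + V * beta) for t in range(n_topics)]
                r = rng.random() * sum(weights)
                acc = 0.0
                for t, wt in enumerate(weights):
                    acc += wt
                    if r <= acc:
                        k = t
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][v] += 1; nk[k] += 1
    top_words = {t: [vocab[i] for i in
                     sorted(range(V), key=lambda i: -nkw[t][i])[:3]]
                 for t in range(n_topics)}
    return top_words, ndk

# Invented comment tokens, two loose themes (AI video vs. music)
docs = [["ai", "video", "fake", "ai"], ["music", "song", "beat", "song"],
        ["video", "fake", "ai"], ["beat", "music", "song"]]
top, ndk = lda_gibbs(docs, n_topics=2, n_iter=50)
```

The per-topic top words are what a qualitative pass then interprets into themes; the paper's three-tiered framework is built on exactly that kind of human reading of machine-derived topics.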
19 pages, 1202 KB  
Article
Sustainable Leadership and Green HRM: Fostering Environmentally Responsible Organizational Cultures
by Megren Abdullah Altassan
Sustainability 2025, 17(20), 9331; https://doi.org/10.3390/su17209331 - 21 Oct 2025
Viewed by 510
Abstract
This study explores how sustainability leadership and Green Human Resource Management (Green HRM) practices interplay to cultivate an environmentally responsible culture in organizations based in Jeddah. Through thematic analysis of participant interviews, the research identifies key leadership behaviors, such as visionary communication, role modeling, and operational integration, that align with culturally grounded ethical values to drive sustainability. Green HRM practices, including green recruitment, targeted training, eco-friendly performance appraisals, and recognition systems, further reinforce these leadership efforts. The study highlights the importance of authentic alignment between leadership values and HRM policies to avoid perceptions of greenwashing and to institutionalize sustainable practices effectively. Findings emphasize that embedding sustainability within organizational culture requires a synergistic approach integrating leadership vision, HRM systems, and cultural context, fostering employee motivation and long-term environmental commitment. The implications provide valuable insights for organizations seeking to implement meaningful sustainability strategies aligned with both global goals and local values. Full article
(This article belongs to the Section Sustainable Management)

17 pages, 877 KB  
Article
Accountability Between Compliance and Legitimacy: Rethinking Governance for Corporate Sustainability
by Antonio Prencipe
Sustainability 2025, 17(20), 9305; https://doi.org/10.3390/su17209305 - 20 Oct 2025
Viewed by 465
Abstract
The concept of accountability is central to understanding how sustainable corporate governance (SCG) structures shape organizational behavior, legitimacy, and firm performance in the pursuit of sustainability goals. While widely invoked, accountability is often treated inconsistently across governance contexts—oscillating between technical compliance and ethical legitimacy. This paper provides a structured conceptual review of how accountability is framed and operationalized within sustainability governance, with a specific focus on its implications for sustainable performance, corporate sustainability strategies, and governance effectiveness. Based on a qualitative analysis of thirteen peer-reviewed articles published between 2006 and 2025, the study identifies three dominant conceptual clusters: compliance-oriented, legitimacy-oriented, and hybrid approaches. Each cluster reflects different accountability logics and governance mechanisms—ranging from ESG metrics and sustainability reporting frameworks to participatory forums and stakeholder engagement processes that support sustainable development. The article synthesizes theoretical contributions from institutional theory, stakeholder theory, and deliberative democracy to explore how accountability serves as a bridge between formal governance mechanisms and legitimacy claims. A conceptual framework is proposed to illustrate the tensions and complementarities between compliance-driven and legitimacy-driven governance models in sustainability contexts. By deepening the theoretical understanding of accountability in corporate sustainability, this review contributes to the literature on ESG governance, social and environmental reporting, and the legitimacy–performance nexus in corporate settings. The findings offer a foundation for advancing more inclusive, transparent, and sustainability-oriented corporate governance practices in response to global sustainability challenges and the Sustainable Development Goals (SDGs). Full article
(This article belongs to the Special Issue Sustainable Corporate Governance and Firm Performance)
