Search Results (200)

Search Parameters:
Keywords = ethical AI implications

15 pages, 508 KiB  
Review
The Role of Artificial Intelligence in the Diagnosis and Management of Diabetic Retinopathy
by Areeb Ansari, Nabiha Ansari, Usman Khalid, Daniel Markov, Kristian Bechev, Vladimir Aleksiev, Galabin Markov and Elena Poryazova
J. Clin. Med. 2025, 14(14), 5150; https://doi.org/10.3390/jcm14145150 - 20 Jul 2025
Abstract
Background/Objectives: Diabetic retinopathy (DR) is a progressive microvascular complication of diabetes mellitus and a leading cause of vision impairment worldwide. Early detection and timely management are critical in preventing vision loss, yet current screening programs face challenges, including limited specialist availability and variability in diagnoses, particularly in underserved areas. This literature review explores the evolving role of artificial intelligence (AI) in enhancing the diagnosis, screening, and management of diabetic retinopathy. It examines AI’s potential to improve diagnostic accuracy, accessibility, and patient outcomes through advanced machine-learning and deep-learning algorithms. Methods: We conducted a non-systematic review of the published literature to explore advancements in the diagnostics of diabetic retinopathy. Relevant articles were identified by searching the PubMed and Google Scholar databases. Studies focusing on the application of artificial intelligence in screening, diagnosis, and improving healthcare accessibility for diabetic retinopathy were included. Key information was extracted and synthesized to provide an overview of recent progress and clinical implications. Conclusions: Artificial intelligence holds transformative potential in diabetic retinopathy care by enabling earlier detection, improving screening coverage, and supporting individualized disease management. Continued research and ethical deployment will be essential to maximize AI’s benefits and address challenges in real-world applications, ultimately improving global vision health outcomes. Full article
(This article belongs to the Section Ophthalmology)

27 pages, 750 KiB  
Article
Ethical Leadership and Management of Small- and Medium-Sized Enterprises: The Role of AI in Decision Making
by Tjaša Štrukelj and Petya Dankova
Adm. Sci. 2025, 15(7), 274; https://doi.org/10.3390/admsci15070274 - 12 Jul 2025
Abstract
The integration of artificial intelligence (AI) within the decision-making processes of small- and medium-sized enterprises (SMEs) presents both significant opportunities and substantial ethical challenges. The aim of this paper is to provide a theoretical model depicting the interdependence of organisational decision-making levels and decision-making styles, with an emphasis on exploring the role of AI in organisations’ decision making, based on selected process dimension of the MER model of integral governance and management, particularly in relation to routine, analytical, and intuitive decision-making capabilities. The research methodology employs a comprehensive qualitative analysis of the scientific literature published between 2010 and 2024, focusing on AI implementation in SMEs, ethical decision making in integral management, and regulatory frameworks governing AI use in business contexts. The findings reveal that AI technologies influence decision making across business policy, strategic, tactical, and operative management levels, with distinct implications for intuitive, analytical, and routine decision-making approaches. The analysis demonstrates that while AI can enhance data processing capabilities and reduce human biases, it presents significant challenges for normative–ethical decision making, requiring human judgment and stakeholder consideration. We conclude that effective AI integration in SMEs requires a balanced approach where AI primarily serves as a tool for data collection and analysis rather than as an autonomous decision maker. These insights contribute to the discourse on responsible AI implementation in SMEs and provide practical guidance for leaders navigating the complex interplay between (non)technological capabilities, ethical considerations, and regulatory requirements in the evolving business landscape. Full article

25 pages, 2618 KiB  
Review
International Trends and Influencing Factors in the Integration of Artificial Intelligence in Education with the Application of Qualitative Methods
by Juan Luis Cabanillas-García
Informatics 2025, 12(3), 61; https://doi.org/10.3390/informatics12030061 - 4 Jul 2025
Abstract
This study offers a comprehensive examination of the scientific output related to the integration of Artificial Intelligence (AI) in education using qualitative research methods, which is an emerging intersection that reflects growing interest in understanding the pedagogical, ethical, and methodological implications of AI in educational contexts. Grounded in a theoretical framework that emphasizes the potential of AI to support personalized learning, augment instructional design, and facilitate data-driven decision-making, this study conducts a Systematic Literature Review and bibliometric analysis of 630 publications indexed in Scopus between 2014 and 2024. The results show a significant increase in scholarly output, particularly since 2020, with notable contributions from authors and institutions in the United States, China, and the United Kingdom. High-impact research is found in top-tier journals, and dominant themes include health education, higher education, and the use of AI for feedback and assessment. The findings also highlight the role of semi-structured interviews, thematic analysis, and interdisciplinary approaches in capturing the nuanced impacts of AI integration. The study concludes that qualitative methods remain essential for critically evaluating AI’s role in education, reinforcing the need for ethically sound, human-centered, and context-sensitive applications of AI technologies in diverse learning environments. Full article
(This article belongs to the Section Social Informatics and Digital Humanities)

47 pages, 1040 KiB  
Systematic Review
Impact of EU Regulations on AI Adoption in Smart City Solutions: A Review of Regulatory Barriers, Technological Challenges, and Societal Benefits
by Bo Nørregaard Jørgensen and Zheng Grace Ma
Information 2025, 16(7), 568; https://doi.org/10.3390/info16070568 - 2 Jul 2025
Abstract
This review investigates the influence of European Union regulations on the adoption of artificial intelligence in smart city solutions, with a structured emphasis on regulatory barriers, technological challenges, and societal benefits. It offers a comprehensive analysis of the legal frameworks in effect by 2025, including the Artificial Intelligence Act, General Data Protection Regulation, Data Act, and sector-specific directives governing mobility, energy, and surveillance. This study critically assesses how these regulations affect the deployment of AI systems across urban domains such as traffic optimization, public safety, waste management, and energy efficiency. A comparative analysis of regulatory environments in the United States and China reveals differing governance models and their implications for innovation, safety, citizen trust, and international competitiveness. The review concludes that although the European Union’s focus on ethics and accountability establishes a solid basis for trustworthy artificial intelligence, the complexity and associated compliance costs create substantial barriers to adoption. It offers recommendations for policymakers, municipal authorities, and technology developers to align regulatory compliance with effective innovation in the context of urban digital transformation. Full article
(This article belongs to the Special Issue Artificial Intelligence and Data Science for Smart Cities)

25 pages, 2109 KiB  
Article
Designing Artificial Intelligence: Exploring Inclusion, Diversity, Equity, Accessibility, and Safety in Human-Centric Emerging Technologies
by Matteo Zallio, Chiara Bianca Ike and Camelia Chivăran
AI 2025, 6(7), 143; https://doi.org/10.3390/ai6070143 - 2 Jul 2025
Abstract
Background: The implementation of artificial intelligence (AI) has become a pivotal interdisciplinary challenge, creating new opportunities for sharing information, driving innovation, and transforming societal interactions with technology. While AI offers numerous benefits, its rapid evolution raises critical concerns about its impact on inclusion, diversity, equity, accessibility, and safety (IDEAS). Method: This pilot study aimed to explore these issues and identify ways to embed the IDEAS principles into AI design. A qualitative study was conducted with industrial and academic experts in the field. Semi-structured interviews gathered insights into the opportunities, challenges, and future implications of AI from diverse professional and cultural perspectives. Result: Findings highlight uncertainties in AI’s trajectory and its profound cross-sector influence. Key issues emerged, including bias, data privacy, transparency, and accessibility. Participants stressed the need for greater awareness and structured dialogue to integrate the IDEAS principles throughout the AI lifecycle. Conclusion: This study underscores the urgency of addressing AI’s ethical and societal impacts. Embedding the IDEAS principles into its development can help mitigate risks and foster more inclusive, equitable, and accessible technologies. Full article

32 pages, 3625 KiB  
Article
Artificial Intelligence for Smart Cities: A Comprehensive Review Across Six Pillars and Global Case Studies
by Joel John, Rayappa David Amar Raj, Maryam Karimi, Rouzbeh Nazari, Rama Muni Reddy Yanamala and Archana Pallakonda
Urban Sci. 2025, 9(7), 249; https://doi.org/10.3390/urbansci9070249 - 1 Jul 2025
Abstract
Rapid urbanization in the twenty-first century has significantly accelerated the adoption of artificial intelligence (AI) technologies to address growing challenges in governance, mobility, energy, and urban security. This paper explores how AI is transforming smart city infrastructure, analyzing more than 92 academic publications published between 2012 and 2024. Key AI applications ranging from predictive analytics in e-governance to machine learning models in renewable energy management and autonomous mobility systems are synthesized domain-wise throughout this study. This paper highlights the benefits of AI-enabled decision making, finds current implementation barriers, and discusses the associated ethical implications. Furthermore, it presents a research agenda that stresses data interoperability, transparency, and human–AI collaboration to steer upcoming advancements in smart urban ecosystems. Full article

20 pages, 402 KiB  
Review
ChatGPT and Digital Transformation: A Narrative Review of Its Role in Health, Education, and the Economy
by Dag Øivind Madsen and David Matthew Toston
Digital 2025, 5(3), 24; https://doi.org/10.3390/digital5030024 - 28 Jun 2025
Abstract
ChatGPT, a prominent large language model developed by OpenAI, has rapidly become embedded in digital infrastructures across various sectors. This narrative review examines its evolving role and societal implications in three key domains: healthcare, education, and the economy. Drawing on recent literature and examples, the review explores ChatGPT’s applications, limitations, and ethical challenges in each context. In healthcare, the model is used to support patient communication and mental health services, while raising concerns about misinformation and privacy. In education, it offers new forms of personalized learning and feedback, but also complicates assessment and equity. In the economy, ChatGPT augments business operations and knowledge work, yet introduces risks related to job displacement, data governance, and automation bias. The review synthesizes these developments to highlight how ChatGPT is driving digital transformation while generating new demands for oversight, regulation, and critical inquiry. It concludes by outlining priorities for future research and policy, emphasizing the need for interdisciplinary collaboration, transparency, and inclusive access as generative AI continues to evolve. Full article

24 pages, 429 KiB  
Systematic Review
Advances in NLP Techniques for Detection of Message-Based Threats in Digital Platforms: A Systematic Review
by José Saias
Electronics 2025, 14(13), 2551; https://doi.org/10.3390/electronics14132551 - 24 Jun 2025
Abstract
Users of all ages face risks on social media and messaging platforms. When encountering suspicious messages, legitimate concerns arise about a sender’s malicious intent. This study examines recent advances in Natural Language Processing for detecting message-based threats in digital communication. We conducted a systematic review following PRISMA guidelines, to address four research questions. After applying a rigorous search and screening pipeline, 30 publications were selected for analysis. Our work assessed the NLP techniques and evaluation methods employed in recent threat detection research, revealing that large language models appear in only 20% of the reviewed works. We further categorized detection input scopes and discussed ethical and privacy implications. The results show that AI ethical aspects are not systematically addressed in the reviewed scientific literature. Full article
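Since large language models appear in only 20% of the works this review covers, most of the surveyed systems rely on other NLP techniques. As a rough illustration of that broader family, the sketch below builds a baseline message-threat classifier from TF-IDF features and logistic regression in scikit-learn; the toy messages and labels are invented for illustration and do not come from the review's corpus.

```python
# Baseline message-threat classifier: TF-IDF features + logistic regression.
# Toy data for illustration only; real systems need curated, labelled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Congratulations, you won! Send your bank details to claim the prize",
    "Meet me after class or you will regret it",
    "Hi, are we still on for lunch tomorrow?",
    "Reminder: your package will arrive on Tuesday",
]
labels = [1, 1, 0, 0]  # 1 = threatening/malicious, 0 = benign (toy labels)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(messages, labels)

# Score new, unseen messages.
print(clf.predict(["Send money now or something bad will happen"]))
print(clf.predict_proba(["See you at the meeting on Friday"]))
```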

18 pages, 2046 KiB  
Review
Ethics, Animal Welfare, and Artificial Intelligence in Livestock: A Bibliometric Review
by Taize Calvacante Santana, Cristiane Guiselini, Héliton Pandorfi, Ricardo Brauer Vigoderis, José Antônio Delfino Barbosa Filho, Rodrigo Gabriel Ferreira Soares, Maria de Fátima Araújo, Nicoly Farias Gomes, Leandro Dias de Lima and Paulo César da Silva Santos
AgriEngineering 2025, 7(7), 202; https://doi.org/10.3390/agriengineering7070202 - 24 Jun 2025
Abstract
This study presents a bibliometric review aimed at mapping and analyzing the scientific literature related to the ethical implications of artificial intelligence (AI) in livestock farming, which is a rapidly emerging yet still underexplored field in international research. Based on the Scopus database, 151 documents published between 2015 and 2025 were identified and analyzed using the VOSviewer version 1.6.20 and Biblioshiny for Bibliometrix (RStudio version 2023.12.1) tools. The results show a significant increase in publications from 2021 onwards, reflecting the growing maturity of discussions around the integration of digital technologies in the agricultural sector. Keyword co-occurrence and bibliographic coupling analyses revealed the formation of four main thematic clusters, covering technical applications in precision livestock farming as well as reflections on governance, animal welfare, and algorithmic justice. The most influential authors, high-impact journals, and leading countries in the field were also identified. As a key contribution, this study highlights the lack of robust ethical guidelines and proposes future research directions for the development of regulatory frameworks, codes of conduct, and interdisciplinary approaches. The findings underscore the importance of aligning technological innovation with ethical responsibility and social inclusion in the transition to digital livestock farming. Full article

15 pages, 218 KiB  
Article
Assessing Clinicians’ Legal Concerns and the Need for a Regulatory Framework for AI in Healthcare: A Mixed-Methods Study
by Abdullah Alanazi
Healthcare 2025, 13(13), 1487; https://doi.org/10.3390/healthcare13131487 - 21 Jun 2025
Abstract
Background: The rapid integration of artificial intelligence (AI) technologies into healthcare systems presents new opportunities and challenges, particularly regarding legal and ethical implications. In Saudi Arabia, the lack of legal awareness could hinder safe implementation of AI tools. Methods: A sequential explanatory mixed-methods design was employed. In Phase One, a structured electronic survey was administered to 357 clinicians across public and private healthcare institutions in Saudi Arabia, assessing legal awareness, liability concerns, data privacy, and trust in AI. In Phase Two, a qualitative expert panel involving health law specialists, digital health advisors, and clinicians was conducted to interpret survey findings and identify key regulatory needs. Results: Only 7% of clinicians reported high familiarity with AI legal implications, and 89% had no formal legal training. Confidence in AI compliance with data laws was low (mean score: 1.40/3). Statistically significant associations were found between professional role and legal familiarity (χ2 = 18.6, p < 0.01), and between legal training and confidence in AI compliance (t ≈ 6.1, p < 0.001). Qualitative findings highlighted six core legal barriers including lack of training, unclear liability, and gaps in regulatory alignment with national laws like the Personal Data Protection Law (PDPL). Conclusions: The study highlights a major gap in legal readiness among Saudi clinicians, which affects patient safety, liability, and trust in AI. Although clinicians are open to using AI, unclear regulations pose barriers to safe adoption. Experts call for national legal standards, mandatory training, and informed consent protocols. A clear legal framework and clinician education are crucial for the ethical and effective use of AI in healthcare. Full article
(This article belongs to the Special Issue Artificial Intelligence in Healthcare: Opportunities and Challenges)
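To make the reported statistics concrete: the χ² value tests whether legal familiarity is distributed differently across professional roles, and the t statistic compares mean confidence scores between clinicians with and without legal training. The sketch below reproduces that style of analysis with scipy on entirely hypothetical counts and scores; the group definitions and numbers are invented for illustration and are not the study's data.

```python
# Minimal sketch of the two kinds of tests reported in the abstract,
# run on hypothetical survey data (NOT the study's dataset).
import numpy as np
from scipy import stats

# Hypothetical contingency table: professional role (rows) vs.
# self-reported legal familiarity (columns: low / medium / high).
role_by_familiarity = np.array([
    [60, 30, 5],   # physicians
    [80, 25, 4],   # nurses
    [40, 20, 10],  # allied health / other
])
chi2, p_chi, dof, _ = stats.chi2_contingency(role_by_familiarity)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_chi:.4f}")

# Hypothetical confidence scores (1-3 scale) split by prior legal training.
conf_with_training = np.array([2.1, 1.9, 2.4, 2.0, 2.2, 1.8, 2.3])
conf_without_training = np.array([1.3, 1.5, 1.2, 1.4, 1.6, 1.1, 1.4, 1.3])
t_stat, p_t = stats.ttest_ind(conf_with_training, conf_without_training,
                              equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_t:.4f}")
```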
15 pages, 1003 KiB  
Systematic Review
Deep Learning Applications in Dental Image-Based Diagnostics: A Systematic Review
by Osama Khattak, Ahmed Shawkat Hashem, Mohammed Saad Alqarni, Raha Ahmed Shamikh Almufarrij, Amna Yusuf Siddiqui, Rabia Anis, Shahzad Ahmad, Muhammad Amber Fareed, Osama Shujaa Alothmani, Lama Habis Samah Alkhershawy, Wesam Waleed Zain Alabidin, Rakhi Issrani and Anshoo Agarwal
Healthcare 2025, 13(12), 1466; https://doi.org/10.3390/healthcare13121466 - 18 Jun 2025
Abstract
Background: AI has been adopted in dentistry for diagnosis, decision making, and therapy prognosis prediction. This systematic review aimed to identify AI models in dentistry, assess their performance, identify their shortcomings, and discuss their potential for adoption and integration into dental practice in the future. Methodology: Papers were sourced from the PubMed, Scopus, and Cochrane Library electronic databases. Of the 947 records identified, 20 studies met the inclusion criteria and were included in the present meta-analysis, which assessed diagnostic accuracy, predictive performance, and potential biases. Results: AI models demonstrated an overall diagnostic accuracy of 82%, primarily leveraging artificial neural networks (ANNs) and convolutional neural networks (CNNs). These models have significantly improved diagnostic precision for dental caries compared with traditional methods. Moreover, they have shown potential in detecting and managing conditions such as bone loss, malignant lesions, vertical root fractures, apical lesions, salivary gland disorders, and maxillofacial cysts, as well as in performing orthodontic assessments. However, the integration of AI systems into dentistry poses challenges, including potential data biases, cost implications, technical requirements, and ethical concerns such as patient data security and informed consent. AI models may also underperform when faced with limited or skewed datasets, underscoring the importance of robust training and validation procedures. Conclusions: AI has the potential to revolutionize dentistry by significantly improving diagnostic accuracy and treatment planning. However, before this tool is integrated into clinical practice, a critical assessment of its advantages, disadvantages, utility, and ethical issues must be established. Future studies should aim to remove existing barriers, improve model interpretability, and address challenges regarding cost and data protection, to ensure the effective utilization of AI in dental healthcare. Full article
(This article belongs to the Special Issue Artificial Intelligence in Healthcare: Opportunities and Challenges)

27 pages, 1935 KiB  
Review
Generative Artificial Intelligence and Transversal Competencies in Higher Education: A Systematic Review
by Angel Deroncele-Acosta, Rosa María Elizabeth Sayán-Rivera, Angel Deciderio Mendoza-López and Emerson Damián Norabuena-Figueroa
Appl. Syst. Innov. 2025, 8(3), 83; https://doi.org/10.3390/asi8030083 - 18 Jun 2025
Abstract
Generative AI is an emerging tool in higher education; however, its connection with transversal competencies, as well as its sustainable adoption, remains underexplored. The study aims to analyze the scientific and conceptual development of generative artificial intelligence in higher education to identify the most relevant transversal competencies, strategic processes for its sustainable implementation, and global trends in academic production. A systematic literature review (PRISMA) was conducted on the Web of Science, Scopus, and PubMed, analyzing 35 studies for narrative synthesis and 897 publications for bibliometric analysis. The transversal competencies identified were: Academic Integrity, Critical Thinking, Innovation, Ethics, Creativity, Communication, Collaboration, AI Literacy, Responsibility, Digital Literacy, AI Ethics, Autonomous Learning, Self-Regulation, Flexibility, and Leadership. The conceptual framework reflects the interdisciplinary nature of the field, and five key processes were identified for achieving the sustainable integration of generative AI in higher education oriented toward the development of transversal competencies: (1) critical and ethical appropriation, (2) institutional management of technological infrastructure, (3) faculty development, (4) curricular transformation, and (5) pedagogical innovation. In terms of bibliometric patterns, scientific articles predominate, with few systematic reviews; China leads in publication volume, and the social sciences are the most prominent area. It is concluded that generative artificial intelligence is key to the development of transversal competencies if it is adopted through a critical, ethical, and pedagogically intentional approach. Its implications and future projections in the field of higher education are discussed. Full article

22 pages, 706 KiB  
Article
Privacy Ethics Alignment in AI: A Stakeholder-Centric Framework for Ethical AI
by Ankur Barthwal, Molly Campbell and Ajay Kumar Shrestha
Systems 2025, 13(6), 455; https://doi.org/10.3390/systems13060455 - 9 Jun 2025
Abstract
The increasing integration of artificial intelligence (AI) in digital ecosystems has reshaped privacy dynamics, particularly for young digital citizens navigating data-driven environments. This study explores evolving privacy concerns across three key stakeholder groups—young digital citizens, parents/educators, and AI professionals—and assesses differences in data ownership, trust, transparency, parental mediation, education, and risk–benefit perceptions. Employing a grounded theory methodology, this research synthesizes insights from key participants through structured surveys, qualitative interviews, and focus groups to identify distinct privacy expectations. Young digital citizens emphasized autonomy and digital agency, while parents and educators prioritized oversight and AI literacy. AI professionals focused on balancing ethical design with system performance. The analysis revealed significant gaps in transparency and digital literacy, underscoring the need for inclusive, stakeholder-driven privacy frameworks. Drawing on comparative thematic analysis, this study introduces the Privacy–Ethics Alignment in AI (PEA-AI) model, which conceptualizes privacy decision-making as a dynamic negotiation among stakeholders. By aligning empirical findings with governance implications, this research provides a scalable foundation for adaptive, youth-centered AI privacy governance. Full article

25 pages, 325 KiB  
Article
AI Personalization and Its Influence on Online Gamblers’ Behavior
by Florin Mihai, Ofelia Ema Aleca and Daniel-Marius Iordache
Behav. Sci. 2025, 15(6), 779; https://doi.org/10.3390/bs15060779 - 4 Jun 2025
Abstract
Technological advancements in algorithmic personalization are widely believed to influence user behavior on online gambling platforms. This study explores how such developments, potentially including AI-driven mechanisms, may affect cognitive and motivational processes, especially in relation to risk perception, decision-making, and betting persistence. Using ordinary least squares (OLS) and panel regression models applied to behavioral data from a gambling platform, we examine patterns that are consistent with increased personalization between two distinct time periods, 2016 and 2021. The datasets do not contain any direct metadata regarding AI interventions. However, we interpret changes in user behavior over time as indicative of evolving personalization dynamics within a broader technological and contextual landscape. Accordingly, our conclusions about algorithmic personalization are inferential and exploratory, drawn from temporal comparisons between 2016 and 2021. Our findings show that users receiving personalized bonuses or making early cash-out decisions tend to adjust their stake sizes and betting frequency in systematic ways, which may reflect indirect effects of technological reinforcement strategies. These behavioral patterns raise important ethical and regulatory questions, particularly regarding user autonomy, algorithmic transparency, and the protection of at-risk users. This research contributes to the literature on how digital technologies influence gambling behavior by framing the analysis as observational and quasi-experimental, and it suggests that further studies use experimental and log-level data to analyze algorithmic effects more specifically. However, no causal claims can be made about AI influence: the temporal differences are interpreted as broad effects of technological development rather than measured algorithmic interventions. Further studies should also investigate the development of predictive models aimed at countering gambling addiction, evaluate the long-term ethical implications of algorithmic personalization, and discuss potential co-developed solutions to foster a responsible gambling climate. Full article
(This article belongs to the Special Issue The Impact of Technology on Human Behavior)
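For readers unfamiliar with the modelling approach mentioned above, the sketch below illustrates a pooled OLS regression and a simple user-fixed-effects panel specification with statsmodels. The column names (personalized_bonus, early_cashout, stake_size), the period dummy, and the synthetic data are assumptions made for illustration; they are not the authors' specification or dataset.

```python
# Minimal sketch of an OLS and a fixed-effects panel regression on
# synthetic betting data; variable names and data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_users, n_obs = 50, 8
df = pd.DataFrame({
    "user": np.repeat(np.arange(n_users), n_obs),
    "period_2021": np.tile([0] * (n_obs // 2) + [1] * (n_obs // 2), n_users),
    "personalized_bonus": rng.integers(0, 2, n_users * n_obs),
    "early_cashout": rng.integers(0, 2, n_users * n_obs),
})
# Synthetic outcome: stake size responding to bonuses, cash-outs, and period.
df["stake_size"] = (
    10 + 3 * df["personalized_bonus"] + 2 * df["period_2021"]
    - 1.5 * df["early_cashout"] + rng.normal(0, 2, len(df))
)

# Pooled OLS across both periods.
ols = smf.ols("stake_size ~ personalized_bonus + early_cashout + period_2021",
              data=df).fit()
print(ols.summary().tables[1])

# A simple fixed-effects specification via user dummies, one common way to
# absorb time-invariant user heterogeneity in panel data.
fe = smf.ols("stake_size ~ personalized_bonus + early_cashout + period_2021 "
             "+ C(user)", data=df).fit()
print(fe.params[["personalized_bonus", "early_cashout", "period_2021"]])
```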
15 pages, 1000 KiB  
Article
Integrating Large Language Models into Accessible and Inclusive Education: Access Democratization and Individualized Learning Enhancement Supported by Generative Artificial Intelligence
by Inigo Lopez-Gazpio
Information 2025, 16(6), 473; https://doi.org/10.3390/info16060473 - 3 Jun 2025
Abstract
This study explores the integration of large language models (LLMs) into educational environments, emphasizing enhanced accessibility, inclusivity, and individualized learning experiences. The study evaluates trends in the transformative potential of artificial intelligence (AI) technologies in their capacity to significantly mitigate traditional barriers related to language diversity, learning disabilities, cultural differences, and socioeconomic inequalities. The result of the analysis highlights how LLMs personalize instructional content and dynamically respond to each learner’s educational and emotional needs. The work also advocates for an instructor-guided deployment of LLMs as pedagogical catalysts rather than replacements, emphasizing educators’ role in ethical oversight, cultural sensitivity, and emotional support within AI-enhanced classrooms. Finally, while recognizing concerns regarding data privacy, potential biases, and ethical implications, the study argues that the proactive and responsible integration of LLMs by educators is necessary for democratizing access to education and to foster inclusive learning practices, thereby advancing the effectiveness and equity of contemporary educational frameworks. Full article
