Search Results (287)

Search Parameters:
Keywords = AI adoption in education

25 pages, 1935 KB  
Article
Innovation Flow: A Human–AI Collaborative Framework for Managing Innovation with Generative Artificial Intelligence
by Michelle Catta-Preta, Alex Trejo Omeñaca, Jan Ferrer i Picó and Josep Maria Monguet-Fierro
Appl. Sci. 2025, 15(22), 11951; https://doi.org/10.3390/app152211951 - 11 Nov 2025
Abstract
Conventional innovation management methodologies (IMMs) often struggle to respond to the complexity, uncertainty, and cognitive diversity that characterise contemporary innovation projects. This study introduces Innovation Flow (IF), a human-centred and adaptive framework grounded in Flow Theory and enhanced by Generative Artificial Intelligence (GenAI). At its core, IF operationalises Personalised Innovation Techniques (PInnTs)—adaptive variations of established methods tailored to project genetics and team profiles, generated dynamically through a GenAI-based system. Unlike traditional IMMs that rely on static toolkits and expert facilitation, Innovation Flow (IF) introduces a dynamic, GenAI-enhanced system capable of tailoring techniques in real time to each project’s characteristics and team profile. This adaptive model achieved a 60% reduction in ideation and prototyping time while maintaining high creative performance and autonomy. IF thus bridges the gap between human-centred design and AI augmentation, providing a scalable, personalised, and more inclusive pathway for managing innovation. Using a mixed-methods design that combines grounded theory with quasi-experimental validation, the framework was tested in 28 innovation projects across healthcare, manufacturing, and education. Findings show that personalisation improves application fidelity, engagement, and resilience, with 87% of cases achieving high efficacy. GenAI integration accelerated ideation and prototyping by more than 60%, reduced dependence on expert facilitators, and broadened participation by lowering the expertise barrier. Qualitative analyses emphasised the continuing centrality of human agency, as the most effective teams critically adapted rather than passively adopted AI outputs. The research establishes IF as a scalable methodology that augments, rather than replaces, human creativity, accelerating innovation cycles while reinforcing motivation and autonomy. Full article
(This article belongs to the Special Issue Advances in Human–Computer Interaction and Collaboration)

36 pages, 1027 KB  
Article
Initial Validation of the IMPACT Model: Technological Appropriation of ChatGPT by University Faculty
by Luz-M. Pereira-González, Andrea Basantes-Andrade, Miguel Naranjo-Toro and Mailevy Guia-Pereira
Educ. Sci. 2025, 15(11), 1520; https://doi.org/10.3390/educsci15111520 - 10 Nov 2025
Abstract
This study presents the initial validation of the IMPACT model, a psychometric tool developed to evaluate how university faculty adopt ChatGPT in higher education. It specifically addresses the existing gap in validated instruments designed for educators, as most prior research has focused on student-based adoption models. A total of 206 professors completed a 39-item Likert-scale questionnaire. Exploratory factor analysis using principal axis factoring with oblimin rotation identified the underlying structure of the instrument. Reliability and internal consistency were examined through Cronbach’s alpha and McDonald’s omega. The analysis revealed a five-factor structure comprising functional appropriation, ethical and academic concerns, cost and accessibility, facilitating conditions, and perceived reliability and trustworthiness. Intention to use and performance expectancy merged into a single factor, and social influence did not emerge as a determinant. The model demonstrated strong reliability and internal consistency across all dimensions. The IMPACT model offers a validated framework for understanding faculty adoption of ChatGPT, emphasizing functional, ethical, and infrastructural factors over social influence. These findings provide a foundation for confirmatory analyses and contribute to advancing theoretical and practical insights into AI integration in higher education teaching. Full article
(This article belongs to the Special Issue ChatGPT as Educative and Pedagogical Tool: Perspectives and Prospects)
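The IMPACT validation above reports reliability via Cronbach's alpha and McDonald's omega. As a minimal sketch of what the alpha statistic measures (the data below are invented Likert responses, not the study's 206-professor dataset):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point Likert responses: 5 respondents x 4 items
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 3))  # -> 0.936
```

Values near or above 0.7 are conventionally read as acceptable internal consistency; a real analysis would use a dedicated package and report omega alongside alpha.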

24 pages, 598 KB  
Article
Privacy Concerns in ChatGPT Data Collection and Its Impact on Individuals
by Leena Mohammad Alzamil, Alawiayyah Mohammed Alhasani and Suhair Alshehri
Future Internet 2025, 17(11), 511; https://doi.org/10.3390/fi17110511 - 10 Nov 2025
Abstract
With the rapid adoption of generative AI technologies across various sectors, it has become increasingly important to understand how these systems handle personal data. The study examines users’ awareness of the types of data collected, the risks involved, and their implications for privacy and security. A comprehensive literature review was conducted to contextualize the ethical, technical, and regulatory challenges associated with generative AI, followed by a pilot survey targeting ChatGPT users from a variety of demographics. The results of the study revealed a significant gap in users’ understanding of data practices, with many participants expressing concerns about unauthorized access to data, prolonged data retention, and a lack of transparency. Despite recognizing the benefits of ChatGPT in various applications, users expressed strong demands for greater control over their data, clearer consent mechanisms, and more transparent communication from developers. The study concludes by emphasizing the need for multi-dimensional solutions that combine technological innovation, regulatory reform, and user-centered design. Recommendations include implementing explainable AI, enhancing educational efforts, adopting privacy-by-design principles, and establishing robust governance frameworks. By addressing these challenges, developers, policymakers, and stakeholders can enhance trust, promote ethical AI deployment, and ensure that generative AI systems serve the public good while respecting individual rights and privacy. Full article

30 pages, 3613 KB  
Article
Redefining Organizational Resilience and Success: A Natural Language Analysis of Strategic Domains, Semantics, and AI Opportunities
by Olga Bucovețchi, Andreea Elena Voipan, Daniel Voipan and Radu D. Stanciu
Systems 2025, 13(11), 999; https://doi.org/10.3390/systems13110999 - 7 Nov 2025
Abstract
Organizational resilience and long-term success have become essential capabilities in turbulent and uncertain environments. This study redefines these concepts by applying a natural language analysis to a corpus of 1597 peer-reviewed publications retrieved from Web of Science and Scopus. The methodology adopts a three-level framework: first, a thematic clustering of literature into strategic domains; second, a semantic comparison of classical and emerging terms; and third, the mapping of artificial intelligence (AI) opportunities. The results identify five overarching domains: Health and Wellbeing; Organizations, HR and Leadership; Strategy, Innovation, and Culture; Education, Knowledge and Communities; and Society, Environment and Development. These domains illustrate how resilience and success are addressed at micro, meso, and macro levels. Semantically, the discourse expands from traditional notions such as robustness, risk management, and performance towards more human-centered, systemic, and digitally enabled perspectives. The study further highlights how AI functions both as a methodological tool and as a strategic enabler, with applications ranging from predictive health analytics and leadership support systems to foresight tools and sustainability monitoring. The findings contribute to organizational resilience theory and offer practitioners actionable pathways to strengthen resilience and competitiveness in the face of volatility, uncertainty, complexity, and ambiguity. Full article
(This article belongs to the Special Issue Strategic Management Towards Organisational Resilience)

13 pages, 906 KB  
Review
Artificial Intelligence in Breast Reconstruction: Enhancing Surgical Planning, Aesthetic Outcomes, and Patient-Centered Care
by Brianna M. Peet, Arianna Sidoti, Robert J. Allen, Jonas A. Nelson and Francis Graziano
J. Clin. Med. 2025, 14(21), 7821; https://doi.org/10.3390/jcm14217821 - 4 Nov 2025
Abstract
The integration of artificial intelligence (AI) is rapidly transforming the field of breast reconstruction, with applications spanning surgical planning, complication prediction, patient-reported outcome assessment, esthetic evaluation, and patient education. A comprehensive narrative review was performed to evaluate the integration of AI technologies in breast reconstruction, encompassing preoperative planning, intraoperative use, and postoperative care. Emerging evidence highlights AI’s growing utility across these domains. Machine learning algorithms can predict postoperative complications and patient-reported outcomes by leveraging clinical, surgical, and patient-specific factors. Neural networks provide objective assessments of breast esthetics following reconstruction, while large language models enhance patient education by guiding consultation questions and reinforcing in-clinic discussions with accessible medical information. As these tools continue to advance, their adoption in everyday practice is becoming increasingly relevant. Staying current with AI applications is essential for plastic surgeons, as AI is not only reshaping breast reconstruction today, but is also poised to become an integral component of routine clinical care. Full article

33 pages, 1523 KB  
Review
Early Detection of Lung Cancer: A Review of Innovative Milestones and Techniques
by Faisal M. Habbab, Eric L. R. Bédard, Anil A. Joy, Zarmina Alam, Aswin G. Abraham and Wilson H. Y. Roa
J. Clin. Med. 2025, 14(21), 7812; https://doi.org/10.3390/jcm14217812 - 3 Nov 2025
Abstract
Lung cancer is the most frequently diagnosed cancer and the leading cause of cancer death worldwide. Early detection of lung cancer can lead to identification of the cancer at its initial, treatable stages and improve survival. Low-dose CT scan (LDCT) is currently the gold standard for lung cancer screening in high-risk individuals. Despite the observed stage migration and consistently demonstrated disease-specific overall survival benefit, LDCT has inherent limitations, including false-positive results, radiation exposure, and low compliance. Recently, new techniques have been investigated for early detection of lung cancer. Several studies have shown that liquid biopsy biomarkers such as circulating cell-free DNA (cfDNA), microRNA molecules (miRNA), circulating tumor cells (CTCs), tumor-derived exosomes (TDEs), and tumor-educated platelets (TEPs), as well as volatile organic compounds (VOCs), have the power to distinguish lung cancer patients from healthy subjects, offering potential for minimally invasive and non-invasive means of early cancer detection. Furthermore, recent studies have shown that the integration of artificial intelligence (AI) with clinical, imaging, and laboratory data has provided significant advancements and can offer potential solutions to some challenges related to early detection of lung cancer. Adopting AI-based multimodality strategies, such as multi-omics liquid biopsy and/or VOC detection, with LDCT augmented by advanced AI, could revolutionize early lung cancer screening by improving accuracy, efficiency, and personalization, especially when combined with patient clinical data. However, challenges remain in validating, standardizing, and integrating these approaches into clinical practice. In this review, we describe these innovative milestones and methods, as well as their advantages and limitations in screening and early diagnosis of lung cancer. Full article
(This article belongs to the Section Oncology)

27 pages, 1108 KB  
Article
Deepfake-Style AI Tutors in Higher Education: A Mixed-Methods Review and Governance Framework for Sustainable Digital Education
by Hanan Sharif, Amara Atif and Arfan Ali Nagra
Sustainability 2025, 17(21), 9793; https://doi.org/10.3390/su17219793 - 3 Nov 2025
Abstract
Deepfake-style AI tutors are emerging in online education, offering personalized and multilingual instruction while introducing risks to integrity, privacy, and trust. This study aims to understand their pedagogical potential and governance needs for responsible integration. A PRISMA-guided, systematic review of 42 peer-reviewed studies (2015–early 2025) was conducted from 362 screened records, complemented by semi-structured questionnaires with 12 assistant professors (mean experience = 7 years). Thematic analysis using deductive codes achieved strong inter-coder reliability (κ = 0.81). Four major themes were identified: personalization and engagement, detection challenges and integrity risks, governance and policy gaps, and ethical and societal implications. The results indicate that while deepfake AI tutors enhance engagement, adaptability, and scalability, they also pose risks of impersonation, assessment fraud, and algorithmic bias. Current detection approaches based on pixel-level artifacts, frequency features, and physiological signals remain imperfect. To mitigate these challenges, a four-pillar governance framework is proposed, encompassing Transparency and Disclosure, Data Governance and Privacy, Integrity and Detection, and Ethical Oversight and Accountability, supported by a policy checklist, responsibility matrix, and risk-tier model. Deepfake AI tutors hold promise for expanding access to education, but fairness-aware detection, robust safeguards, and AI literacy initiatives are essential to sustain trust and ensure equitable adoption. These findings not only strengthen the ethical and governance foundations for generative AI in higher education but also contribute to the broader agenda of sustainable digital education. 
By promoting transparency, fairness, and equitable access, the proposed framework advances the long-term sustainability of learning ecosystems and aligns with the United Nations Sustainable Development Goal 4 (Quality Education) through responsible innovation and institutional resilience. Full article
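The deepfake-tutor review above reports inter-coder reliability of κ = 0.81 for its thematic analysis. A minimal sketch of Cohen's kappa, the statistic behind that figure (the theme labels and excerpts below are invented for illustration, not the study's data):

```python
from collections import Counter

def cohens_kappa(coder_a: list, coder_b: list) -> float:
    """Cohen's kappa: chance-corrected agreement between two coders."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of items where the coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement if each coder labeled at random with their own rates
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical theme codes assigned by two reviewers to ten excerpts
a = ["personalization", "integrity", "governance", "ethics", "integrity",
     "personalization", "governance", "ethics", "integrity", "personalization"]
b = ["personalization", "integrity", "governance", "ethics", "governance",
     "personalization", "governance", "ethics", "integrity", "ethics"]
print(round(cohens_kappa(a, b), 2))  # -> 0.74
```

Kappa discounts agreement expected by chance, which is why it is preferred over raw percent agreement when reporting coding reliability; values above roughly 0.8, as in the study, are usually described as strong.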

20 pages, 1226 KB  
Article
The Digital Centaur as a Type of Technologically Augmented Human in the AI Era: Personal and Digital Predictors
by Galina U. Soldatova, Svetlana V. Chigarkova and Svetlana N. Ilyukhina
Behav. Sci. 2025, 15(11), 1487; https://doi.org/10.3390/bs15111487 - 31 Oct 2025
Abstract
Industry 4.0 is steadily advancing a reality of deepening integration between humans and technology, a phenomenon aptly described by the metaphor of the “technologically augmented human”. This study identifies the digital and personal factors that predict a preference for the “digital centaur” strategy among adolescents and young adults. This strategy is defined as a model of human–AI collaboration designed to enhance personal capabilities. A sample of 1841 participants aged 14–39 completed measures assessing digital centaur preference and identification, emotional intelligence (EI), mindfulness, digital competence, technology attitudes, and AI usage, as well as AI-induced emotions and fears. The results indicate that 27.3% of respondents currently identify as digital centaurs, with an additional 41.3% aspiring to adopt this identity within the next decade. This aspiration was most prevalent among 18- to 23-year-olds. Hierarchical regression showed that interpersonal and intrapersonal EI and mindfulness are personal predictors of the digital centaur preference, while digital competence, technophilia, technopessimism (inversely), and daily internet use emerged as significant digital predictors. Notably, intrapersonal EI and mindfulness became non-significant when technology attitudes were included. Digital centaurs predominantly used AI functionally and reported positive emotions (curiosity, pleasure, trust, gratitude) but expressed concerns about human misuse of AI. These findings position the digital centaur as an adaptive and preadaptive strategy for the technologically augmented human. This has direct implications for education, highlighting the need to foster balanced human–AI collaboration. Full article
(This article belongs to the Section Social Psychology)

24 pages, 607 KB  
Article
How AI-Driven Personalization Shapes Green Purchasing Behavior Among Youth in Java Island
by Feliks Prasepta Sejahtera Surbakti, Hotma Antoni Hutahaean, Maria Magdalena Wahyuni Inderawati, Jovan Moreno Madjid, Leonard Edward Sely and Yann-May Yee
Sustainability 2025, 17(21), 9600; https://doi.org/10.3390/su17219600 - 28 Oct 2025
Abstract
Sustainable consumption has become a global priority, yet the factors that encourage people to adopt environmentally friendly purchasing behavior differ across cultures and technologies. This study explores how environmental knowledge, environmental attitude, and the perception of AI-driven personalization influence green purchasing intention and actual purchasing behavior among young consumers in Java, Indonesia. A survey of 517 university students was conducted, and the relationships among these factors were analyzed using structural equation modeling. The findings reveal that environmental knowledge strongly shapes environmental attitudes, which in turn enhance the intention and behavior to purchase green products. Perception of AI-driven personalization also strengthens green purchasing intention, although its direct effect on behavior is limited. These results suggest that digital platforms and marketers can promote sustainable consumption by combining environmental education with transparent and value-based AI personalization. The study contributes to understanding how psychological readiness and technological engagement together encourage greener consumption among youth in emerging economies. Full article
(This article belongs to the Section Economic and Business Aspects of Sustainability)

21 pages, 1209 KB  
Article
Sustainable Adoption of AIEd in Higher Education: Determinants of Students’ Willingness in China
by Qiang Song, Xiyin Gao and Wei Guo
Sustainability 2025, 17(21), 9598; https://doi.org/10.3390/su17219598 - 28 Oct 2025
Abstract
The sustainable integration of Artificial Intelligence in Education (AIEd) in higher education hinges on students’ prolonged and meaningful adoption. Grounded in the Acceptance of AI Device Usage (AIDUA) framework, this study extends the model by incorporating novelty value and trust to investigate the determinants of students’ willingness to use AIEd Tools sustainably. Data from 400 university students in China were analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results reveal that novelty value acts as a powerful catalyst, substantially boosting performance expectancy and diminishing effort expectancy. Furthermore, this study delineates a dual-pathway mechanism where performance and effort expectancies shape both emotions and trust, which in turn directly determine adoption intention—with emotion exhibiting the stronger influence. Theoretically, this research validates an extended AIDUA model, highlighting the critical roles of sustained innovation perception and cognitive-affective dual pathways. Practically, it advises higher education institutions to prioritize building robust trust through transparent practices and to design AIEd Tools that deliver lasting innovative value and positive learning experiences to foster sustainable adoption. Full article

25 pages, 2253 KB  
Entry
Artificial Intelligence in Higher Education: A State-of-the-Art Overview of Pedagogical Integrity, Artificial Intelligence Literacy, and Policy Integration
by Manolis Adamakis and Theodoros Rachiotis
Encyclopedia 2025, 5(4), 180; https://doi.org/10.3390/encyclopedia5040180 - 28 Oct 2025
Definition
Artificial Intelligence (AI), particularly Generative AI (GenAI) and Large Language Models (LLMs), is rapidly reshaping higher education by transforming teaching, learning, assessment, research, and institutional management. This entry provides a state-of-the-art, comprehensive, evidence-based synthesis of established AI applications and their implications within the higher education landscape, emphasizing mature knowledge aimed at educators, researchers, and policymakers. AI technologies now support personalized learning pathways, enhance instructional efficiency, and improve academic productivity by facilitating tasks such as automated grading, adaptive feedback, and academic writing assistance. The widespread adoption of AI tools among students and faculty members has created a critical need for AI literacy—encompassing not only technical proficiency but also critical evaluation, ethical awareness, and metacognitive engagement with AI-generated content. Key opportunities include the deployment of adaptive tutoring and real-time feedback mechanisms that tailor instruction to individual learning trajectories; automated content generation, grading assistance, and administrative workflow optimization that reduce faculty workload; and AI-driven analytics that inform curriculum design and early intervention to improve student outcomes. At the same time, AI poses challenges related to academic integrity (e.g., plagiarism and misuse of generative content), algorithmic bias and data privacy, digital divides that exacerbate inequities, and risks of “cognitive debt” whereby over-reliance on AI tools may degrade working memory, creativity, and executive function. The lack of standardized AI policies and fragmented institutional governance highlight the urgent necessity for transparent frameworks that balance technological adoption with academic values. 
Anchored in several foundational pillars (such as a brief description of AI higher education, AI literacy, AI tools for educators and teaching staff, ethical use of AI, and institutional integration of AI in higher education), this entry emphasizes that AI is neither a panacea nor an intrinsic threat but a “technology of selection” whose impact depends on the deliberate choices of educators, institutions, and learners. When embraced with ethical discernment and educational accountability, AI holds the potential to foster a more inclusive, efficient, and democratic future for higher education; however, its success depends on purposeful integration, balancing innovation with academic values such as integrity, creativity, and inclusivity. Full article
(This article belongs to the Collection Encyclopedia of Social Sciences)

27 pages, 1802 KB  
Perspective
Toward Artificial Intelligence in Oncology and Cardiology: A Narrative Review of Systems, Challenges, and Opportunities
by Visar Vela, Ali Yasin Sonay, Perparim Limani, Lukas Graf, Besmira Sabani, Diona Gjermeni, Andi Rroku, Arber Zela, Era Gorica, Hector Rodriguez Cetina Biefer, Uljad Berdica, Euxhen Hasanaj, Adisa Trnjanin, Taulant Muka and Omer Dzemali
J. Clin. Med. 2025, 14(21), 7555; https://doi.org/10.3390/jcm14217555 - 24 Oct 2025
Abstract
Background: Artificial intelligence (AI), the overarching field that includes machine learning (ML) and its subfield deep learning (DL), is rapidly transforming clinical research by enabling the analysis of high-dimensional data and automating the output of diagnostic and prognostic tests. As clinical trials become increasingly complex and costly, ML-based approaches (especially DL for image and signal data) offer promising solutions, although they require new approaches in clinical education. Objective: Explore current and emerging AI applications in oncology and cardiology, highlight real-world use cases, and discuss the challenges and future directions for responsible AI adoption. Methods: This narrative review summarizes various aspects of AI technology in clinical research, exploring its promise, use cases, and limitations. The review was based on a literature search in PubMed covering publications from 2019 to 2025. Search terms included “artificial intelligence”, “machine learning”, “deep learning”, “oncology”, “cardiology”, “digital twin”, and “AI-ECG”. Preference was given to studies presenting validated or clinically applicable AI tools, while non-English articles, conference abstracts, and gray literature were excluded. Results: AI demonstrates significant potential in improving diagnostic accuracy, facilitating biomarker discovery, and detecting disease at an early stage. In clinical trials, AI improves patient stratification, site selection, and virtual simulations via digital twins. However, challenges remain in data harmonization, model validation, cross-disciplinary training, fairness, explainability, and the robustness of the gold standards against which AI models are built. Conclusions: The integration of AI in clinical research can enhance efficiency, reduce costs, and streamline clinical research, as well as lead the way towards personalized medicine. 
Realizing this potential requires robust validation frameworks, transparent model interpretability, and collaborative efforts among clinicians, data scientists, and regulators. Interoperable data systems and cross-disciplinary education will be critical to enabling the integration of scalable, ethical, and trustworthy AI into healthcare. Full article
(This article belongs to the Section Clinical Research Methods)

22 pages, 1553 KB  
Article
Factors Influencing the Reported Intention of Higher Vocational Computer Science Students in China to Use AI After Ethical Training: A Study in Guangdong Province
by Huiwen Zou, Ka Ian Chan, Patrick Cheong-Iao Pang, Blandina Manditereza and Yi-Huang Shih
Educ. Sci. 2025, 15(11), 1431; https://doi.org/10.3390/educsci15111431 - 24 Oct 2025
Abstract
This paper reports a study conducting an in-depth analysis of the impacts of ethical training on the adoption of AI tools among computer science students in higher vocational colleges. These students will serve as the pivotal human factor for advancing the field of AI. Aiming to explore practical models for integrating AI ethics into computer science education, the research seeks to promote more responsible and effective AI application and therefore become a positive influence in the field. Employing a mixed-methods approach, the study included 105 students aged 20–24 from a vocational college in Guangdong Province, a developed region in China. Based on the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2) model, a five-point Likert scale was used to evaluate the participants’ perceptions of AI tool usage based on ethical principles. The Structural Equation Modeling (SEM) results indicate that while participants are motivated to adopt AI technologies in certain aspects, performance expectancy negatively impacts their intention and actual usage. After systematically studying and understanding AI ethics, participants attribute a high proportion of responsibility (84.89%) to objective factors and prioritized safety (27.11%) among eight ethical principles. Statistical analysis shows that habit (β = 0.478, p < 0.001) and hedonic motivation (β = 0.239, p = 0.004) significantly influence behavioral intention. Additionally, social influence (β = 0.234, p = 0.008) affects use behavior. Findings regarding factors that influence AI usage can inform a strategic framework for the integration of ethical instruction in AI applications. These findings have significant implications for curriculum design, policy formulation, and the establishment of ethical guidelines for AI deployment in higher educational contexts. Full article
23 pages, 3532 KB  
Review
Generative Artificial Intelligence in Healthcare: A Bibliometric Analysis and Review of Potential Applications and Challenges
by Vanita Kouomogne Nana and Mark T. Marshall
AI 2025, 6(11), 278; https://doi.org/10.3390/ai6110278 - 23 Oct 2025
Viewed by 992
Abstract
The remarkable progress of artificial intelligence (AI) in recent years has significantly extended its application possibilities within the healthcare domain. AI has become more accessible to a wider range of healthcare personnel and service users, in particular due to the proliferation of Generative AI (GenAI). This study presents a bibliometric analysis of GenAI in healthcare. By analysing the academic literature in the Scopus database, our study explores the knowledge structure, emerging trends, and challenges of GenAI in healthcare. The results show that GenAI is increasingly being adopted in developed countries, with major US institutions leading the way, and a large number of papers are being published on the topic in top-level academic venues. Our findings also show a focus on particular areas of healthcare: medical education and clinical decision-making see active research, while areas such as emergency medicine remain poorly explored. Finally, while much of the literature emphasises the benefits of GenAI for the healthcare industry, its limitations need to be acknowledged and addressed to facilitate its integration in clinical settings. The findings of this study can serve as a foundation for understanding the field, allowing academics, healthcare practitioners, educators, and policymakers to better understand the current focus within GenAI for healthcare, as well as highlighting potential application areas and the challenges around accuracy, privacy, and ethics that must be taken into account when developing healthcare-focused GenAI applications. Full article
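A bibliometric analysis of the kind described typically starts by counting keyword frequencies and co-occurrences across exported records. As a minimal, hypothetical sketch (the records below are invented, not drawn from the study's Scopus data):

```python
from collections import Counter
from itertools import combinations

# Toy stand-ins for Scopus export records (assumption: illustrative only).
records = [
    {"year": 2023, "keywords": ["generative ai", "medical education"]},
    {"year": 2024, "keywords": ["generative ai", "clinical decision-making"]},
    {"year": 2024, "keywords": ["generative ai", "medical education", "ethics"]},
]

# Keyword frequency: how often each term appears across all records.
freq = Counter(kw for r in records for kw in r["keywords"])

# Keyword co-occurrence: unordered pairs appearing in the same record,
# the raw input for a co-occurrence (knowledge-structure) network.
cooc = Counter(
    pair for r in records for pair in combinations(sorted(r["keywords"]), 2)
)
print(freq.most_common(2))
print(cooc.most_common(1))
```

Real analyses layer citation counts, author and institution affiliations, and temporal trends on top of these counts, usually via tools such as VOSviewer or Bibliometrix.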

19 pages, 257 KB  
Review
From Recall to Resilience: Reforming Assessment Practices in Saudi Theory-Based Higher Education to Advance Vision 2030
by Mubarak S. Aldosari
Sustainability 2025, 17(21), 9415; https://doi.org/10.3390/su17219415 - 23 Oct 2025
Viewed by 376
Abstract
Assessment practices are central to higher education and particularly critical in theory-based programs, where they facilitate the development of conceptual understanding and higher-order cognitive skills. They also support Saudi Arabia's Vision 2030 agenda, which aims to drive educational innovation. This narrative review examines assessment practices in theory-based programs at a Saudi public university, identifies discrepancies with learning objectives, and proposes potential solutions. The review synthesised peer-reviewed literature (2015–2025) from Scopus, Web of Science, ERIC, and Google Scholar, focusing on traditional and alternative assessments, barriers, progress, and comparisons with international standards. It found that traditional summative methods (quizzes, final exams) still dominate and emphasise memorisation, limiting the development of higher-order skills. Emerging techniques, such as projects, portfolios, oral presentations, and peer assessment, are gaining traction but face institutional constraints and resistance from faculty. Digital adoption is growing: 63% of students are satisfied with learning management system tools, and 75% find online materials easy to understand; yet advanced analytics and AI-based assessments remain rare. A comparative analysis reveals that international standards favour formative feedback, adaptive technologies, and holistic competencies. The misalignment between current practices and Vision 2030 highlights the need to broaden assessment portfolios, integrate technology, and provide faculty training. Saudi theory-based programs must transition from memory-oriented evaluations to student-centred, evidence-based assessments that foster critical thinking and real-world application. The review recommends adopting diverse assessments (projects, portfolios, peer reviews), investing in digital analytics and adaptive learning, aligning assessments with learning outcomes and Vision 2030 competencies, and implementing ongoing faculty development. The study offers practical pathways for reform and highlights strategic opportunities for achieving Saudi Arabia's national learning outcomes. Full article
(This article belongs to the Section Sustainable Education and Approaches)