Search Results (52)

Search Parameters:
Keywords = GenAI risks

15 pages, 453 KB  
Article
Healthcare Providers’ Perspectives on Generative Artificial Intelligence (GenAI) Adoption, Adaptation, Assimilation, and Use in the United States
by Obinna O. Oleribe, Marissa Brash, Adati Tarfa, Ricardo Izurieta and Simon D. Taylor-Robinson
Healthcare 2026, 14(6), 775; https://doi.org/10.3390/healthcare14060775 - 19 Mar 2026
Abstract
Background: Generative artificial intelligence (GenAI) is rapidly permeating healthcare; yet, U.S. clinicians still report mixed feelings about its reliability, impact on workflow, and ethical implications. Current data on provider sentiment are needed to guide safe, patient-centered AI implementation in healthcare. Objective: This study aimed to assess U.S. healthcare providers’ perceptions of generative AI adoption, perceived usefulness, training needs, barriers, and strategies for safe integration. Methods: A nationwide, IRB-approved, cross-sectional survey was administered to healthcare professionals using Qualtrics. A convenience sample of clinicians was recruited via professional listservs and e-mail invitations. The 20-page questionnaire captured demographics, GenAI exposure, organizational adoption status, perceived usefulness (5-point scale), barriers, and mitigation strategies. SPSS v27 and Microsoft Excel were used for statistical analysis. Results: Of 130 respondents, 109 completed the core survey (completion rate 83.8%). Participants were 38.5% physicians, 16.5% nurses, 12.8% allied professionals, and 32.2% other providers; 54.2% were women, and 64.8% were ≥50 years. Overall, 86.9% agreed that GenAI is useful in current patient care, rising to 92.9% when asked about future usefulness. Only 42.4% had received formal GenAI training, and just 23.2% reported that their organization had begun adopting AI. The top perceived benefits were improved documentation/clerking (57.0%) and error reduction (49.4%). Dominant barriers included limited AI knowledge (24.7%) and fear of job loss (16.9%). Despite concerns, 72% expressed willingness to support broader GenAI adoption, favoring human oversight (67.1%) and staff training (60.8%) as key safeguards. 
There were statistically significant findings in perceived AI usefulness by gender (χ2 = 29.2; p < 0.001); organizational adoption of AI (χ2 = 31.6; p = 0.047) and where AI is most useful (χ2 = 101.1; p < 0.001) by qualifications; and support for AI adoption by age (χ2 = 18.0; p = 0.02). Conclusions: U.S. clinicians in our survey viewed GenAI as useful but reported limited training and organizational infrastructure needed for confident use, while also expressing concerns regarding data privacy and ethical risk. Education programs and transparent, provider-led implementation strategies may accelerate responsible GenAI assimilation while addressing ethical and workforce concerns. Health administrators should also use the efficiency gains to improve provider–patient relationships and clinicians’ work–life balance while reducing clinician burnout rates. Full article
(This article belongs to the Section Artificial Intelligence in Healthcare)

21 pages, 398 KB  
Article
Infusing Gen Z’s Pro-Ecological Intentions: From AI Hallucinations to the Ethical Governance of Green Digital Footprints
by Mostafa Aboulnour Salem
Educ. Sci. 2026, 16(3), 431; https://doi.org/10.3390/educsci16030431 - 12 Mar 2026
Viewed by 153
Abstract
Green AI contributes to digital sustainability in higher education by encouraging computationally efficient technologies and responsible digital practices. Despite growing interest in sustainable AI, empirical evidence remains limited on how Gen Z students develop socially responsible intentions toward the use of sustainability-aligned AI, particularly within a single host-country higher-education context. This study examines these intentions among students enrolled in Saudi Arabia, using a culturally diverse sample of Saudi and international students while treating national origin as a demographic characteristic rather than a basis for cross-national comparison. The research also addresses emerging concerns related to AI hallucinations and ethical governance in educational settings. An integrated framework is employed that combines the instrumental appraisal logic of UTAUT with responsibility-oriented constructs. The model includes Sustainable Performance Value (SPV), Responsible Use Ease (RUE), Ethical Social Norms (ESN), Institutional Ethical Support (IES), Responsible AI Competence (RAC), AI Hallucination Awareness (AHA), and Green Digital Responsibility (GDR) as predictors of Socially Responsible Intentions (SRI). Data were collected through an anonymous survey of 1159 higher-education students residing and studying within the Saudi higher-education system. The study design reflects one institutional context rather than a multi-country comparison. The findings show strong explanatory and predictive capability (R2 = 0.64; Q2 = 0.43). SPV, RAC, AHA, and GDR are the strongest predictors of SRI, while RUE shows a moderate association and IES provides contextual support; ESN is not significant. The results highlight the importance of values, competence, and risk awareness in shaping the responsible use of AI. Implications focus on governance and curriculum strategies that support sustainability-aligned engagement with AI in higher education. Full article

25 pages, 920 KB  
Systematic Review
A Systematic Literature Review on the Pedagogical Implications and Impact of GenAI on Students’ Critical Thinking
by Trini Balart, Brayan Díaz and Kristi Shryock
Algorithms 2026, 19(3), 179; https://doi.org/10.3390/a19030179 - 27 Feb 2026
Viewed by 581
Abstract
Critical Thinking (CT) is recognized as a foundational competency for professional readiness, innovation, and ethical reasoning in higher education, enabling students to analyze information, evaluate evidence, and make reasoned decisions in complex environments. The rapid integration of Generative Artificial Intelligence (GenAI) tools, such as large language models, presents new opportunities and risks for CT development. This study conducts a systematic literature review to synthesize empirical evidence on the pedagogical implications and cognitive impact of GenAI on students’ CT. Following PRISMA guidelines and using search terms related to GenAI tools, critical thinking, and higher education across five major education research databases—Web of Science; Scopus; EBSCOhost (Education Source, ERIC, and APA PsycInfo); and Compendex and Inspec (Elsevier)—63 empirical studies published between January 2023 and April 2025 were analyzed across higher education contexts, disciplines, and intervention designs. Results indicate that GenAI offers notable cognitive affordances, including scaffolding reflective reasoning, promoting self-regulation, and facilitating iterative dialogue and argument evaluation. Pedagogical strategies clustered into four primary integration typologies: AI-based feedback prompts, dialogue simulation and reflection, AI-supported peer review, and critical engagement with AI-generated content. Nearly half of the studies reported statistically significant CT improvements, particularly when GenAI use was guided by structured prompts, reflective activities, and performance-based assessment. However, multiple risks persist, including cognitive offloading, uncritical acceptance of AI outputs, and diminished intellectual autonomy, especially in unguided or surface-level usage.
This review highlights the need for intentional pedagogical design, validated CT assessment tools, and longitudinal studies to ensure GenAI acts as a catalyst rather than a substitute for human reasoning. By identifying effective integration strategies and outlining potential pitfalls, this study provides evidence-informed guidance for educators and institutions aiming to responsibly leverage GenAI to strengthen students’ CT skills. Full article
(This article belongs to the Special Issue Artificial Intelligence in Education: Innovations and Implications)

27 pages, 623 KB  
Article
Generative Artificial Intelligence in HRM Practice: Patterns, Profiles, and Theoretical Insights
by Nuno Melão and João Reis
Adm. Sci. 2026, 16(3), 113; https://doi.org/10.3390/admsci16030113 - 27 Feb 2026
Viewed by 539
Abstract
Although Generative Artificial Intelligence (GenAI) has the potential to transform Human Resource Management (HRM), empirical research on its actual use is still rare. This study aims to investigate how HR professionals use GenAI in HRM, the benefits and challenges they associate with it, and how these patterns vary with organizational context. An exploratory cross-sectional survey of 150 HR professionals in the UK (n = 70) and the US (n = 80) was conducted to investigate usage patterns. Results show that GenAI is mainly applied in job analysis and design, training and development, and recruitment and selection, but concerns persist around operational and technical difficulties, privacy and ethics, output accuracy, and employee resistance. Cluster analysis revealed four user profiles that represent different ways of reconciling efficiency gains and risks. Viewed through the lens of Diffusion of Innovation, Technology–Organization–Environment, and Task–Technology Fit, the results highlight ethical and legal compatibility as a relevant condition for sustained use, point to the potential importance of the organization’s GenAI governance environment, and reveal a boundary condition when tasks involve consequential decisions. This study provides insights into early patterns of GenAI use in HRM and advances theory with propositions that can guide future confirmatory research on responsible and effective use. Full article

23 pages, 635 KB  
Article
Generative AI Recommendations for Environmental Sustainability: A Hybrid SEM–ANN Analysis of Gen Z Users in the Philippines
by Victor James C. Escolano, Yann-Mey Yee, Wei-Jung Shiang, Alexander A. Hernandez and Do Van Nang
Information 2026, 17(2), 203; https://doi.org/10.3390/info17020203 - 15 Feb 2026
Viewed by 643
Abstract
Generative AI offers promising potential to promote environmental sustainability through personalized recommendations that influence individual behavior. This study examines the factors influencing the adoption and actual use of generative AI recommendations for environmental sustainability among Gen Z users in the Philippines by integrating the Theory of Planned Behavior (TPB) and the Technology–Environmental, Economic, and Social Sustainability Theory (T-EESST) with key generative AI attributes, together with trust and perceived risk. Survey data were collected from 531 Gen Z users in higher education institutions in the National Capital Region (NCR), Philippines, and analyzed using a hybrid SEM and ANN approach. Results from SEM indicate that key AI attributes, namely perceived anthropomorphism, perceived intelligence, and perceived animacy, significantly influenced users’ attitude towards generative AI recommendations. Attitude, perceived behavioral control, and trust emerged as significant predictors of behavioral intention, which have an eventual positive relation to actual use and environmental sustainability outcomes. In contrast, subjective norms and perceived risk did not significantly affect behavioral intention, which may suggest that Gen Z users’ engagement with generative AI for environmental sustainability is primarily driven by internal evaluations, perceived capability, and trust rather than social pressure or risk concerns. Complementing these findings, the ANN analysis identified perceived behavioral control, attitude, and trust as the most important factors, reinforcing the robustness of the SEM results. Overall, this study integrates existing sustainability and technology-adoption literature by demonstrating how generative AI recommendations can support environmental sustainability among Gen Z users by combining behavioral theory, sustainability theory, and AI attributes through a hybrid SEM–ANN approach in the context of a developing country. 
Full article
(This article belongs to the Special Issue Artificial Intelligence Technologies for Sustainable Development)

21 pages, 358 KB  
Article
Adoption of Generative AI in Higher Education: Perceptions of Journalism Students
by Laura Alonso-Muñoz and Andreu Casero-Ripollés
Information 2026, 17(2), 189; https://doi.org/10.3390/info17020189 - 13 Feb 2026
Viewed by 575
Abstract
Higher education has undergone a profound transformation since the release of ChatGPT in November 2022. The introduction of this tool generated immediate interest among students while simultaneously provoking concern among faculty, who perceived it as an unparalleled pedagogical challenge. This study aims to analyze how university students use generative Artificial Intelligence (Gen AI). To this end, an online survey (n = 281) was administered to journalism students at the Universitat Jaume I de Castelló (Spain). Specifically, the study examined the frequency of use, academic applications, interaction patterns, evaluation of outcomes, and ethical perspectives regarding GenAI tools. The results indicate that 93% of students report using Gen AI, with significantly higher usage among advanced students (i.e., 3rd and 4th academic year Journalism degree students) [F(1, 279) = 11.09, p < 0.001, η2 = 0.038]. Moreover, 77.2% of respondents use it for learning or studying, while 44.2% use it to complete class assignments. Regarding motivation, the data show that students primarily turn to artificial intelligence to perform tasks more efficiently and effectively and to achieve better results. Although students acknowledge certain risks in the academic use of Gen AI, they perceive its benefits more clearly than its limitations. Additionally, they are aware that they need more AI literacy. These findings provide valuable insights for reorienting undergraduate curricula to address the challenges of generative AI and to educate students on its ethical and appropriate use. Full article
(This article belongs to the Special Issue Digital Technologies for Communication in the Age of AI)

34 pages, 2177 KB  
Article
Securing Generative AI Systems: Threat-Centric Architectures and the Impact of Divergent EU–US Governance Regimes
by Vijay Kanabar and Kalinka Kaloyanova
J. Cybersecur. Priv. 2026, 6(1), 27; https://doi.org/10.3390/jcp6010027 - 6 Feb 2026
Viewed by 1111
Abstract
Generative AI (GenAI) systems are increasingly deployed across high-impact sectors, introducing security risks that fundamentally differ from those of traditional software. Their probabilistic behavior, emergent failure modes, and expanded attack surface, particularly through retrieval and tool integration, complicate threat modeling and control assurance. This paper presents a threat-centric analysis that maps adversarial techniques to the core architectural layers of generative AI systems, including training pipelines, model behavior, retrieval mechanisms, orchestration, and runtime interaction. Using established taxonomies such as the OWASP LLM Top 10 and MITRE ATLAS alongside empirical research, we show that many GenAI security risks are structural rather than configurable, limiting the effectiveness of perimeter-based and policy-only controls. We additionally analyze the impact of regulatory divergence on GenAI security architecture and find that EU frameworks serve in practice as the highest common technical baseline for transatlantic deployments. Full article
(This article belongs to the Section Security Engineering & Applications)
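The threat-to-layer mapping the abstract describes can be sketched as a small lookup table. This is a minimal sketch only: the three OWASP LLM Top 10 (2023) entries and the layer each is assigned to below are illustrative choices, not the paper's actual mapping.

```python
# Architectural layers named in the abstract.
LAYERS = ("training", "model", "retrieval", "orchestration", "runtime")

# Illustrative assignments of OWASP LLM Top 10 entries to layers
# (an assumption for demonstration, not the paper's own table).
threat_map = {
    "LLM01: Prompt Injection": "runtime",
    "LLM03: Training Data Poisoning": "training",
    "LLM08: Excessive Agency": "orchestration",
}

def threats_for(layer):
    """Return the mapped threats touching a given architectural layer."""
    assert layer in LAYERS, f"unknown layer: {layer}"
    return sorted(t for t, l in threat_map.items() if l == layer)

print(threats_for("runtime"))
```

A fuller version would walk all ten OWASP entries and the MITRE ATLAS techniques against every layer; the point of the structure is that each threat can then be checked for a control at its layer rather than only at the perimeter.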

19 pages, 416 KB  
Article
Hybrid Intelligence in Requirements Education: Preserving Student Agency in Refining User Stories with Generative AI
by Leon Sterling and Eduardo Oliveira
Information 2026, 17(2), 166; https://doi.org/10.3390/info17020166 - 6 Feb 2026
Viewed by 315
Abstract
Generative Artificial Intelligence (Gen AI) offers significant potential to support requirements engineering (RE) education; however, its integration poses challenges regarding accuracy and student engagement. While Gen AI cannot independently specify requirements without hallucinating or overstepping scope, it can serve as a powerful partner in a hybrid intelligence workflow. In this paper, we address the challenge of translating high-level motivational models into detailed user stories, a process that is traditionally labour-intensive for novices. We introduce a structured, human-in-the-loop workflow that uses Gen AI to refine and polish user stories while strictly preserving student agency. By grounding the output from Gen AI in a validated motivational model, the workflow minimises the risk of metacognitive offloading, requiring students to actively critique and validate the initially generated requirements. Our analysis of instructional artefacts demonstrates that Gen AI helps in three ways: suggesting structural improvements, offering alternative professional phrasing, and enhancing readability. However, we also identify risks of intent drift and scope expansion, reinforcing the need for rigorous human oversight. The findings advocate for a pedagogical approach where the Gen AI system acts as a reflective assistant rather than an autonomous generator. Full article
(This article belongs to the Special Issue Using Generative Artificial Intelligence Within Software Engineering)

15 pages, 339 KB  
Article
Teacher Education Students’ Practices, Benefits, and Challenges in the Use of Generative AI Tools in Higher Education
by Stavros Athanassopoulos, Aggeliki Tzavara, Spyridon Aravantinos, Konstantinos Lavidas, Vassilis Komis and Stamatios Papadakis
Educ. Sci. 2026, 16(2), 228; https://doi.org/10.3390/educsci16020228 - 2 Feb 2026
Viewed by 739
Abstract
Despite the growing adoption of generative artificial intelligence (GenAI) tools in higher education, limited research has examined how future educators perceive and use these technologies in their academic practices. This study investigates the practices, perceived benefits, and challenges associated with the use of GenAI tools—such as ChatGPT—among undergraduate students enrolled in programs that confer teaching qualifications. Using a mixed-methods design, data were collected from 314 students from the Early Childhood Education, Philosophy, and Philology departments. The findings indicate that the majority of students use GenAI tools primarily for academic purposes, most commonly for information searching, data analysis, study advice, and exam preparation. Students reported several perceived benefits, including rapid access to information, time efficiency, improved comprehension of complex concepts, enhanced study organization, and support with assignments and research-related tasks such as summarizing or translating academic texts. At the same time, participants expressed notable concerns, particularly regarding over-reliance on AI, reduced personal effort, risks to academic integrity, diminished critical thinking, and weakened research skills. Additional challenges included misinformation, reduced creativity, improper use of AI-generated content, skill underdevelopment, and potential technological dependence. The study concludes that teacher education programs should systematically integrate AI literacy and responsible-use training to prepare future educators to address the pedagogical and ethical implications of GenAI in educational settings. Full article
(This article belongs to the Special Issue Unleashing the Potential of E-learning in Higher Education)

18 pages, 5241 KB  
Viewpoint
The Generative AI Paradox: GenAI and the Erosion of Trust, the Corrosion of Information Verification, and the Demise of Truth
by Emilio Ferrara
Future Internet 2026, 18(2), 73; https://doi.org/10.3390/fi18020073 - 1 Feb 2026
Viewed by 1103
Abstract
Generative AI (GenAI) now produces text, images, audio, and video that can be perceptually convincing at scale and at negligible marginal cost. While public debate often frames the associated harms as “deepfakes” or incremental extensions of misinformation and fraud, this view misses a broader socio-technical shift: GenAI enables synthetic realities—coherent, interactive, and potentially personalized information environments in which content, identity, and social interaction are jointly manufactured and mutually reinforcing. We argue that the most consequential risk is not merely the production of isolated synthetic artifacts, but the progressive erosion of shared epistemic ground and institutional verification practices as synthetic content, synthetic identity, and synthetic interaction become easy to generate and hard to audit. This paper (i) formalizes synthetic reality as a layered stack (content, identity, interaction, institutions), (ii) expands a taxonomy of GenAI harms spanning personal, economic, informational, and socio-technical risks, (iii) articulates the qualitative shifts introduced by GenAI (cost collapse, throughput, customization, micro-segmentation, provenance gaps, and trust erosion), and (iv) synthesizes recent risk realizations (2023–2025) into a compact case bank illustrating how these mechanisms manifest in fraud, elections, harassment, documentation, and supply-chain compromise. We then propose a mitigation stack that treats provenance infrastructure, platform governance, institutional workflow redesign, and public resilience as complementary rather than substitutable, and outline a research agenda focused on measuring epistemic security. We conclude with the Generative AI Paradox: as synthetic media becomes ubiquitous, societies may rationally discount digital evidence altogether, raising the cost of truth for everyday life and for democratic and economic institutions. Full article

15 pages, 1045 KB  
Systematic Review
AI at the Bedside of Psychiatry: Comparative Meta-Analysis of Imaging vs. Non-Imaging Models for Bipolar vs. Unipolar Depression
by Andrei Daescu, Ana-Maria Cristina Daescu, Alexandru-Ioan Gaitoane, Ștefan Maxim, Silviu Alexandru Pera and Liana Dehelean
J. Clin. Med. 2026, 15(2), 834; https://doi.org/10.3390/jcm15020834 - 20 Jan 2026
Viewed by 409
Abstract
Background: Differentiating bipolar disorder (BD) from unipolar major depressive disorder (MDD) at first episode is clinically consequential but challenging. Artificial intelligence/machine learning (AI/ML) may improve early diagnostic accuracy across imaging and non-imaging data sources. Methods: Following PRISMA 2020 and a pre-registered protocol on protocols.io, we searched PubMed, Scopus, Europe PMC, Semantic Scholar, OpenAlex, The Lens, medRxiv, ClinicalTrials.gov, and Web of Science (2014–8 October 2025). Eligible studies developed/evaluated supervised ML classifiers for BD vs. MDD at first episode and reported test-set discrimination. AUCs were meta-analyzed on the logit (GEN) scale using random effects (REML) with Hartung–Knapp adjustment and then back-transformed. Subgroup (imaging vs. non-imaging), leave-one-out (LOO), and quality sensitivity (excluding high risk of leakage) analyses were prespecified. Risk of bias was assessed using QUADAS-2 with PROBAST/AI considerations. Results: Of 158 records, 39 duplicates were removed and 119 records screened; 17 met qualitative criteria; and 6 had sufficient data for meta-analysis. The pooled random-effects AUC was 0.84 (95% CI 0.75–0.90), indicating above-chance discrimination, with substantial heterogeneity (I2 = 86.5%). Results were robust to LOO, exclusion of two high-risk-of-leakage studies (pooled AUC 0.83, 95% CI 0.72–0.90), and restriction to higher-rigor validation (AUC 0.83, 95% CI 0.69–0.92). Non-imaging models showed higher point estimates than imaging models; however, subgroup comparisons were exploratory due to the small number of studies: pooled AUC ≈ 0.90–0.92 with I2 = 0% vs. 0.79 with I2 = 64%; test for subgroup difference Q = 7.27, df = 1, p = 0.007. Given the small number of studies, funnel plot inspection and Egger/Begg tests could not reliably assess small-study effects or publication bias. Conclusions: AI/ML models provide good and robust discrimination of BD vs. MDD at first episode. 
Non-imaging approaches are promising due to higher point estimates in the available studies and practical scalability, but prospective evaluation is needed and conclusions about modality superiority remain tentative given the small number of non-imaging studies (k = 2). Full article
(This article belongs to the Special Issue How Clinicians See the Use of AI in Psychiatry)
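The logit-scale pooling step the abstract describes can be sketched in a few lines. This is a minimal sketch that substitutes the simpler DerSimonian–Laird between-study variance estimator for the REML with Hartung–Knapp adjustment the review actually uses; the AUCs and logit-scale variances in the example are made up for illustration.

```python
import math

def pool_auc_logit(aucs, variances):
    """Pool study AUCs on the logit scale with a random-effects model
    (DerSimonian-Laird tau^2, a stand-in for REML + Hartung-Knapp),
    then back-transform the pooled estimate to the AUC scale."""
    y = [math.log(a / (1 - a)) for a in aucs]            # logit(AUC) per study
    w = [1 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    mu_fe = sum(wi * yi for wi, yi in zip(w, y)) / sw    # fixed-effect mean
    # DerSimonian-Laird between-study variance tau^2
    q = sum(wi * (yi - mu_fe) ** 2 for wi, yi in zip(w, y))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(y) - 1)) / c)
    # random-effects weights and pooled logit
    w_re = [1 / (v + tau2) for v in variances]
    mu_re = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return 1 / (1 + math.exp(-mu_re))                    # back-transform

# illustrative (made-up) study AUCs and within-study logit-scale variances
print(round(pool_auc_logit([0.78, 0.85, 0.90, 0.81], [0.02, 0.03, 0.05, 0.04]), 3))
```

Because the pooled logit is a weighted mean of the study logits, the back-transformed estimate always lies between the smallest and largest study AUC, which is why the logit scale is a natural choice for bounded accuracy metrics.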

25 pages, 636 KB  
Article
K-12 Teachers’ Adoption of Generative AI for Teaching: An Extended TAM Perspective
by Ying Tang and Linrong Zhong
Educ. Sci. 2026, 16(1), 136; https://doi.org/10.3390/educsci16010136 - 15 Jan 2026
Viewed by 908
Abstract
This study investigates the factors influencing Chinese K-12 teachers’ adoption of generative artificial intelligence (GenAI) for instructional purposes by extending the Technology Acceptance Model (TAM) with pedagogical beliefs, perceived intelligence, perceived ethical risks, GenAI anxiety, and demographic moderators. Drawing on a theory-driven framework, survey data were collected from 218 in-service teachers across K-12 schools in China. The respondents were predominantly from urban schools and most had prior GenAI use experience. Eight latent constructs and fourteen hypotheses were tested using structural equation modeling and multi-group analysis. Results show that perceived usefulness and perceived ease of use are the strongest predictors of teachers’ intention to adopt GenAI. Constructivist pedagogical beliefs positively predict both perceived usefulness and intention, whereas transmissive beliefs negatively predict intention. Perceived intelligence exerts strong positive effects on perceived usefulness and ease of use but has no direct effect on intention. Perceived ethical risks significantly heighten GenAI anxiety, yet neither directly reduces adoption intention. Gender, teaching stage, and educational background further moderate key relationships, revealing heterogeneous adoption mechanisms across teacher subgroups. The study extends TAM for the GenAI era and highlights the need for professional development and policy initiatives that simultaneously strengthen perceived usefulness and ease of use, engage with pedagogical beliefs, and address ethical and emotional concerns in context-sensitive ways. Full article

21 pages, 573 KB  
Article
Ai-RACE as a Framework for Writing Assignment Design in Higher Education
by Amira El-Soussi and Dima Yousef
Educ. Sci. 2026, 16(1), 119; https://doi.org/10.3390/educsci16010119 - 13 Jan 2026
Viewed by 761
Abstract
Higher education continues to encounter the challenge of redesigning writing pedagogy beyond the rapid adoption of emerging technologies. This challenge is particularly evident in English writing courses, which play a role in developing students’ writing and research skills in universities across the United Arab Emirates (UAE). While generative artificial intelligence (GenAI) tools offer practical affordances for writing instruction, their growing use has also raised concerns about academic integrity, authenticity, and critical engagement. Although early discourse has focused on the risks and potential of GenAI, there remains a clear dearth of frameworks to guide instructors in designing meaningful and engaging writing assignments. This paper introduces Ai-RACE, an adaptable pedagogical framework for designing purposeful and innovative writing tasks. Grounded in classroom-based insights, principles of writing pedagogy, and constructivist and multimodal learning theories, Ai-RACE conceptualises assignment design around five interconnected components: AI integration, Relevance, Authenticity, the 4Cs, and Engagement. Employing a design-focused qualitative approach, the study uses instructional practices and student reflections to examine the implementation of Ai-RACE in writing contexts. Although situated within a specific institutional context, the study offers transferable guidelines for designing writing assignments across international higher education settings. By positioning Ai-RACE as a design heuristic, the study demonstrates its potential in supporting engagement, critical thinking, writing skills, and the ethical use of AI, and highlights the importance of rethinking writing pedagogy and professional development in AI-influenced contexts. Full article
29 pages, 522 KB  
Article
Crowdfunding as an E-Commerce Mechanism: A Deep Learning Approach to Predicting Success Using Reduced Generative AI Embeddings
by Hakan Gunduz, Muge Klein and Ela Sibel Bayrak Meydanoglu
J. Theor. Appl. Electron. Commer. Res. 2026, 21(1), 28; https://doi.org/10.3390/jtaer21010028 - 8 Jan 2026
Abstract
Crowdfunding platforms like Kickstarter have reshaped early-stage financing by allowing entrepreneurs to connect directly with potential supporters. As a fast-expanding part of digital commerce, crowdfunding offers significant opportunities but also substantial risks for both entrepreneurs and platform operators, making predictive analytics an essential capability. Although crowdfunding shares some operational features with traditional e-commerce, its mix of financial uncertainty, emotionally charged storytelling, and fast-evolving social interactions makes it a distinct and more challenging forecasting problem. Accurately predicting campaign outcomes is especially difficult because of the high dimensionality and diversity of the underlying textual and behavioral data. These factors highlight the need for scalable, intelligent data science methods that can jointly exploit structured and unstructured information. To address these issues, this study proposes a novel AI-based predictive framework that integrates a Convolutional Block Attention Module (CBAM)-enhanced symmetric autoencoder for compressing high-dimensional Generative AI (GenAI) BERT embeddings with meta-heuristic feature selection and advanced classification models. The framework systematically couples attention-driven feature compression with optimization techniques—Genetic Algorithm (GA), Jaya, and Artificial Rabbit Optimization (ARO)—and then applies Long Short-Term Memory (LSTM) and Gradient Boosting Machine (GBM) classifiers. Experiments on a large-scale Kickstarter dataset demonstrate that the proposed approach attains 77.8% accuracy while reducing feature dimensionality by more than 95%, surpassing standard baseline methods. In addition to its technical merits, the study yields practical insights for platform managers and campaign creators, enabling more informed choices in campaign design, promotional tactics, and backer targeting.
Overall, this work illustrates how advanced AI methodologies can strengthen predictive analytics in digital commerce, thereby enhancing the strategic impact and long-term sustainability of crowdfunding ecosystems. Full article
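The compression stage described in this abstract can be sketched in miniature: a plain linear autoencoder that squeezes 768-dimensional BERT-style embeddings down to 32 dimensions, a >95% reduction comparable to the one reported. This is an illustrative stand-in only — the CBAM attention module, symmetric architecture details, and meta-heuristic feature selection of the actual framework are omitted, and the data here are random placeholders, not Kickstarter campaigns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for 768-dim BERT embeddings of 200 campaign texts.
X = rng.normal(size=(200, 768))

# Linear autoencoder: 768 -> 32 -> 768, i.e. a >95% dimensionality
# reduction (32/768 ~ 4%), loosely mirroring the reported compression.
d_in, d_code = X.shape[1], 32
W_enc = rng.normal(scale=0.01, size=(d_in, d_code))
W_dec = rng.normal(scale=0.01, size=(d_code, d_in))

lr = 1e-3
for _ in range(50):
    Z = X @ W_enc        # encode to the compressed representation
    X_hat = Z @ W_dec    # decode back to the embedding space
    err = X_hat - X      # reconstruction error
    # Gradient steps on the mean squared reconstruction error.
    gW_dec = Z.T @ err / len(X)
    gW_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * gW_dec
    W_enc -= lr * gW_enc

# Compressed features that would then feed the LSTM/GBM classifiers.
codes = X @ W_enc
print(codes.shape)  # (200, 32)
```

In the paper's pipeline the compressed codes are further filtered by GA, Jaya, or ARO before classification; here the sketch stops at the encoding step.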
27 pages, 1142 KB  
Article
Digital Skills and Personal Innovativeness Shaping Stratified Use of ChatGPT in Polish Adults’ Education
by Robert Wolny, Kinga Hoffmann-Burdzińska, Magdalena Jaciow, Anna Sączewska-Piotrowska, Agata Stolecka-Makowska and Grzegorz Szojda
Sustainability 2026, 18(2), 619; https://doi.org/10.3390/su18020619 - 7 Jan 2026
Abstract
The development of generative artificial intelligence tools, including large language models, opens new opportunities for adult education while simultaneously posing the risk of deepening inequalities resulting from differences in digital competences and individual dispositions. The aim of this article is to examine how digital skills (DS) and personal innovativeness (PI) shape differentiated and advanced use of ChatGPT (UC) among adult learners in Poland, with particular attention to the moderating role of gender. The study was conducted using the CAWI method on a nationwide sample of 757 adult ChatGPT users engaged in upgrading their qualifications. Validated scales of DS, PI, and UC were applied, along with confirmatory factor analysis (CFA) and structural equation modeling (SEM) using the WLSMV estimator, as well as multigroup SEM for women and men. The results confirm that both digital skills (β ≈ 0.46) and personal innovativeness (β ≈ 0.37) significantly and positively predict advanced use of ChatGPT, jointly explaining approximately 41% of the variance in UC, with stronger effects observed among men than women. Attention is therefore drawn to the need to incorporate a gender perspective in further research on the use of GenAI in adult education. The findings point to a stratification of GenAI use in adult education and underscore the need to incorporate critical digital competences and AI literacy into sustainable education policies in order to limit the reproduction of existing inequalities. Full article
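The gender-moderation pattern this study reports can be illustrated with a much simpler stand-in: ordinary least squares with gender-interaction terms fitted to simulated data. The study itself fits multigroup SEM with the WLSMV estimator on validated latent scales; every number below (sample values, coefficients, noise level) is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 757  # matching the study's sample size

# Simulated stand-ins for the study's constructs (not real data):
# digital skills (DS), personal innovativeness (PI), gender (0 = women, 1 = men).
ds = rng.normal(size=n)
pi = rng.normal(size=n)
male = rng.integers(0, 2, size=n)

# Generate advanced ChatGPT use (UC) with somewhat stronger DS/PI effects
# for men, loosely echoing the reported moderation (betas are illustrative).
uc = (0.35 * ds + 0.25 * pi
      + 0.15 * male * ds + 0.12 * male * pi
      + rng.normal(scale=0.7, size=n))

# OLS with gender-interaction terms approximates the multigroup comparison:
# positive male:DS / male:PI coefficients indicate stronger effects for men.
Xmat = np.column_stack([np.ones(n), ds, pi, male, male * ds, male * pi])
beta, *_ = np.linalg.lstsq(Xmat, uc, rcond=None)
for name, b in zip(["intercept", "DS", "PI", "male", "male:DS", "male:PI"], beta):
    print(f"{name:10s} {b:+.3f}")
```

With real survey data one would instead estimate the measurement model (CFA) first and then compare structural paths across the two gender groups, as the article does.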