Search Results (131)

Search Parameters:
Keywords = ManyChat

22 pages, 2159 KB  
Article
Association of Mobile-Enhanced Remote Patient Monitoring with Blood Pressure Control in Hypertensive Patients with Comorbidities: A Multicenter Pre–Post Evaluation
by Ashfaq Ullah, Irfan Ahmad and Wei Deng
Diagnostics 2026, 16(2), 244; https://doi.org/10.3390/diagnostics16020244 - 12 Jan 2026
Viewed by 262
Abstract
Background and Objectives: Hypertension affects more than 27% of adults in China, and despite ongoing public health efforts, substantial gaps remain in awareness, treatment, and blood pressure control, particularly among older adults and patients with multiple comorbidities. Conventional clinic-based care often provides limited opportunity for frequent monitoring and timely treatment adjustment, which may contribute to persistent poor control in routine practice. The objective of this study was to evaluate changes in blood pressure control and related clinical indicators during implementation of a mobile-enhanced remote patient monitoring (RPM)–supported care model among hypertensive patients with comorbidities, including patterns of medication adjustment, adherence, and selected cardiometabolic parameters. Methods: We conducted a multicenter, pre–post evaluation of a mobile-enhanced remote patient monitoring (RPM) program among 6874 adults with hypertension managed at six hospitals in Chongqing, China. Participants received usual care during the pre-RPM phase (April–September 2024; clinic blood pressure measured using an Omron HEM-7136 device), followed by an RPM-supported phase (October 2024–March 2025; home blood pressure measured twice daily using connected A666G monitors with automated transmission via WeChat, medication reminders, and clinician follow-up). Given the use of different devices and measurement settings, blood pressure comparisons may be influenced by device- and setting-related measurement differences. Monthly blood pressure averages were calculated from all available readings. Subgroup analyses explored patterns by sex, age, baseline BP category, and comorbidity status. Results: The cohort was 48.9% male with a mean age of 66.9 ± 13.7 years. During the RPM-supported care period, the proportion meeting the study’s blood pressure control threshold increased from 62.4% (pre-RPM) to 90.1%. Mean systolic blood pressure decreased from 140 mmHg at baseline to 116–118 mmHg at 6 months during the more frequent monitoring and active treatment adjustment period supported by RPM (p < 0.001), alongside modest reductions in fasting blood glucose and total cholesterol. These achieved SBP levels are below commonly recommended office targets for many older adults (typically <140 mmHg for ages 65–79, with individualized lower targets only if well tolerated; and less stringent targets for adults ≥80 years) and therefore warrant cautious interpretation and safety contextualization. Medication adherence improved, and antihypertensive regimen intensity increased during follow-up, suggesting that more frequent monitoring and active treatment adjustment contributed to the early blood pressure decline. Subgroup patterns were broadly similar across age and baseline BP categories; observed differences by sex and comorbidity groups were exploratory. Conclusions: In this large multicenter pre–post study, implementation of an RPM-supported hypertension care model was associated with substantial improvements in blood pressure control and concurrent intensification of guideline-concordant therapy. Given the absence of a concurrent control group, clinic-to-home measurement differences, and concurrent medication changes, findings should be interpreted as associations observed during an intensified monitoring and treatment period rather than definitive causal effects of RPM technology alone. 
Pragmatic randomized evaluations with standardized measurement protocols, longer follow-up, and cost-effectiveness analyses are warranted. Full article
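The study above reports that monthly blood pressure averages were computed from all available readings and compared against a study-defined control threshold. A minimal sketch of that aggregation step is shown below, assuming a hypothetical long-format readings table with patient_id, date, sbp, and dbp columns and an illustrative control rule of SBP < 140 and DBP < 90 mmHg; the paper's actual threshold, data model, and code are not given here.

```python
# Minimal sketch (not the study's code): monthly BP averages per patient and an
# assumed control rule, from a hypothetical long-format table of readings.
import pandas as pd

readings = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "date": pd.to_datetime(["2024-10-01", "2024-10-15", "2024-11-02",
                            "2024-10-03", "2024-11-20"]),
    "sbp": [142, 138, 126, 150, 132],   # systolic, mmHg
    "dbp": [88, 84, 79, 95, 82],        # diastolic, mmHg
})

# Monthly averages per patient, using all available readings in each month.
monthly = (readings
           .assign(month=readings["date"].dt.to_period("M"))
           .groupby(["patient_id", "month"])[["sbp", "dbp"]]
           .mean()
           .reset_index())

# Illustrative control rule (assumption, not taken from the paper): <140/90 mmHg.
monthly["controlled"] = (monthly["sbp"] < 140) & (monthly["dbp"] < 90)
print(monthly)
print(monthly.groupby("month")["controlled"].mean())  # monthly control rate
```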

43 pages, 10782 KB  
Article
Nested Learning in Higher Education: Integrating Generative AI, Neuroimaging, and Multimodal Deep Learning for a Sustainable and Innovative Ecosystem
by Rubén Juárez, Antonio Hernández-Fernández, Claudia Barros Camargo and David Molero
Sustainability 2026, 18(2), 656; https://doi.org/10.3390/su18020656 - 8 Jan 2026
Viewed by 258
Abstract
Industry 5.0 challenges higher education to adopt human-centred and sustainable uses of artificial intelligence, yet many current deployments still treat generative AI as a stand-alone tool, neurophysiological sensing as largely laboratory-bound, and governance as an external add-on rather than a design constraint. This article introduces Nested Learning as a neuro-adaptive ecosystem design in which generative-AI agents, IoT infrastructures and multimodal deep learning orchestrate instructional support while preserving student agency and a “pedagogy of hope”. We report an exploratory two-phase mixed-methods study as an initial empirical illustration. First, a neuro-experimental calibration with 18 undergraduate students used mobile EEG while they interacted with ChatGPT in problem-solving tasks structured as challenge–support–reflection micro-cycles. Second, a field implementation at a university in Madrid involved 380 participants (300 students and 80 lecturers), embedding the Nested Learning ecosystem into regular courses. Data sources included EEG (P300) signals, interaction logs, self-report measures of engagement, self-regulated learning and cognitive safety (with strong internal consistency; α/ω ≥ 0.82), and open-ended responses capturing emotional experience and ethical concerns. In Phase 1, P300 dynamics aligned with key instructional micro-events, providing feasibility evidence that low-cost neuro-adaptive pipelines can be sensitive to pedagogical flow in ecologically relevant tasks. In Phase 2, participants reported high levels of perceived nested support and cognitive safety, and observed associations between perceived Nested Learning, perceived neuro-adaptive adjustments, engagement and self-regulation were moderate to strong (r = 0.41–0.63, p < 0.001). Qualitative data converged on themes of clarity, adaptive support and non-punitive error culture, alongside recurring concerns about privacy and cognitive sovereignty. We argue that, under robust ethical, data-protection and sustainability-by-design constraints, Nested Learning can strengthen academic resilience, learner autonomy and human-centred uses of AI in higher education. Full article

13 pages, 454 KB  
Review
Social Media Use and Sleep Quality in Adolescents and Young Adults: A Scoping Review of Reviews
by Awele Ndubisi, Felix Agyapong-Opoku and Belinda Agyapong
Children 2026, 13(1), 51; https://doi.org/10.3390/children13010051 - 30 Dec 2025
Viewed by 875
Abstract
Background: Social media use has grown rapidly and has been integrated into the lives of many adolescents and young adults worldwide. Research indicates that excessive social media engagement can negatively impact sleep quality through various mechanisms. Objective: This scoping review of reviews aims to explore the relationship between social media use and sleep quality among adolescents and young adults, synthesize existing evidence, identify research gaps, and highlight directions for future research. Methods: Arksey and O’Malley’s five-stage framework was used to conduct this scoping review. Searches were conducted in PubMed, Web of Science, Embase, Medline, and Scopus for articles published between 2020 and 2025. The inclusion criteria were systematic reviews or meta-analyses focused on adolescents and young adults, examining social media use in relation to sleep quality, and peer-reviewed articles written in English. Ten articles met all eligibility criteria and were included in the review. Results: The findings indicate a small but consistent negative effect of social media use on sleep quality. Problematic social media use showed a stronger association with poorer sleep than general social media use. Specific platforms such as Facebook and Twitter contributed most to shorter sleep duration, later bedtimes, and poorer sleep quality, while Snapchat and Instagram showed moderate effects, and WhatsApp and WeChat showed smaller effects. Conclusions: Problematic social media use is strongly associated with poorer sleep quality, while general use may have smaller effects. Future research focusing on longitudinal studies would help deepen the understanding of the effects of social media on sleep and guide targeted interventions. Encouraging responsible or healthy social media use is vital in reducing the risks of problematic use while highlighting the benefits as well. Full article
(This article belongs to the Section Pediatric Pulmonary and Sleep Medicine)

23 pages, 1575 KB  
Article
Developing Time Management Competencies for First-Year College Students Through Experiential Learning: Design-Based Research
by Kunyu Wang, Mingzhang Zuo, Xiaotang Zhou, Yunhan Wang, Pengxuan Tang and Heng Luo
Behav. Sci. 2026, 16(1), 27; https://doi.org/10.3390/bs16010027 - 22 Dec 2025
Viewed by 423
Abstract
Time management is a critical competency for first-year college students, yet many struggle with limited self-regulation, and existing interventions are often short-term and weakly grounded in theory. This study explored how a design-based research (DBR) approach integrating experiential learning and digital tools could strengthen students’ time management skills. From 2021 to 2023, 238 first-year students at a research university in central China participated in a three-month hybrid Freshman Orientation Seminar, with data collected from daily submissions via a WeChat mini-program. Over three iterative DBR cycles, the intervention combined experiential learning theory with authentic time management practice, guided by quantitative and qualitative evidence to refine the pedagogical model. The process yielded six design principles and a supporting digital tool. In the final iteration, students demonstrated substantial gains, including improved planning, greater task completion, more accurate time allocation, and higher satisfaction with time use. These findings suggest that sustained, theory-guided experiential learning, when supported by digital tools, can significantly enhance time management competencies. The study contributes practical strategies for embedding self-regulated learning into higher education through technology-enhanced experiential approaches. Full article
(This article belongs to the Special Issue The Promotion of Self-Regulated Learning (SRL) in the Classroom)

21 pages, 318 KB  
Article
Help Is Just a Message Away: Online Counselling Chat Services Bridging Gaps in Youth Mental Health?
by Alexis Dewaele, Elke Denayer, Maria Cabello, Irati Higuera-Lozano, Tuuli Pitkänen, Katalin Felvinczi, Zsuzsa Kaló, Siiri Soininvaara and Lien Goossens
Eur. J. Investig. Health Psychol. Educ. 2025, 15(12), 257; https://doi.org/10.3390/ejihpe15120257 - 15 Dec 2025
Viewed by 755
Abstract
Adolescents and young adults across Europe face growing mental health challenges, yet many do not seek professional help. Online counselling chat services (OCCS) offer anonymous, accessible, and youth-friendly support, but their varied aims, formats, and resources complicate evaluation and integration into formal care systems. This study aimed to identify shared priorities for the development, evaluation, and implementation of OCCS for youth. Eight focus groups were conducted with 38 stakeholders—including researchers, counsellors, and service coordinators—from eight European countries. Through qualitative content analysis, six key thematic domains emerged: usability and engagement, service quality and effectiveness, infrastructure and integration, sustainability, ethical considerations, and future visions. Participants highlighted OCCS as valuable tools for fostering emotional safety, trust, and accessibility, while also noting persistent challenges such as limited funding, fragile infrastructure, and ethical tensions around anonymity and safeguarding. Crucially, the need for flexible evaluation frameworks that reflect service diversity and for stronger cross-model collaboration was emphasized. These findings provide a strategic foundation for advancing inclusive, sustainable, and youth-centered digital mental health support across Europe. Full article
9 pages, 188 KB  
Brief Report
Pharmacy Students’ Perspectives on Integrating Generative AI into Pharmacy Education
by Kaitlin M. Alexander, Eli O. Jorgensen, Casey Rowe and Khoa Nguyen
Pharmacy 2025, 13(6), 183; https://doi.org/10.3390/pharmacy13060183 - 15 Dec 2025
Viewed by 444
Abstract
Objective: This study aims to evaluate pharmacy students’ perceptions regarding the integration of generative artificial intelligence (GenAI) into pharmacy curricula, providing evidence to inform future curriculum development. Methods: A cross-sectional survey of Doctor of Pharmacy (PharmD) students at a single U.S. College of Pharmacy was conducted in April 2025. Students from all four professional years (P1–P4) were invited to participate. The 10-item survey assessed four domains: (1) General GenAI Use, (2) Knowledge and Experience with GenAI Tools, (3) Learning Preferences with GenAI, and (4) Perspectives on GenAI in the curriculum. Results: A total of 110 students responded (response rate = 12.4%). Most were P1 students (56/110, 50.9%). Many reported using GenAI tools for personal (65/110, 59.1%) and school-related purposes (64/110, 58.1%) sometimes, often, or frequently. ChatGPT was the most used tool. While 40% (40/99) agreed or strongly agreed that GenAI could enhance their learning, 62.6% (62/99) preferred traditional teaching methods. Open-ended responses (n = 25) reflected a mix of positive, neutral, and negative views on GenAI in education. Conclusions: Many pharmacy students in this cohort reported using GenAI tools and demonstrated a basic understanding of GenAI functions, yet students also reported that they preferred traditional learning methods and expressed mixed views on incorporating GenAI into teaching. These findings provide valuable insights for faculty and schools of pharmacy as they develop strategies to integrate GenAI into pharmacy education. Full article
(This article belongs to the Special Issue AI Use in Pharmacy and Pharmacy Education)

43 pages, 7699 KB  
Review
Unveiling the Algorithm: The Role of Explainable Artificial Intelligence in Modern Surgery
by Sara Lopes, Miguel Mascarenhas, João Fonseca, Maria Gabriela O. Fernandes and Adelino F. Leite-Moreira
Healthcare 2025, 13(24), 3208; https://doi.org/10.3390/healthcare13243208 - 8 Dec 2025
Viewed by 1008
Abstract
Artificial Intelligence (AI) is rapidly transforming surgical care by enabling more accurate diagnosis and risk prediction, personalized decision-making, real-time intraoperative support, and postoperative management. Ongoing trends such as multi-task learning, real-time integration, and clinician-centered design suggest AI is maturing into a safe, pragmatic asset in surgical care. Yet, significant challenges, such as the complexity and opacity of many AI models (particularly deep learning), transparency, bias, data sharing, and equitable deployment, must be addressed to achieve clinical trust, ethical use, and regulatory approval of AI algorithms in healthcare. Explainable Artificial Intelligence (XAI) is an emerging field that plays an important role in bridging the gap between algorithmic power and clinical use as surgery becomes increasingly data-driven. The authors reviewed current applications of XAI in the context of surgery—preoperative risk assessment, surgical planning, intraoperative guidance, and postoperative monitoring—and highlighted the absence of these mechanisms in Generative AI (e.g., ChatGPT). XAI will allow surgeons to interpret, validate, and trust AI tools. XAI applied in surgery is not a luxury: it must be a prerequisite for responsible innovation. Model bias, overfitting, and user interface design are key challenges that need to be overcome and will be explored in this review to achieve the integration of XAI into the surgical field. Unveiling the algorithm is the first step toward a safe, accountable, transparent, and human-centered surgical AI. Full article
(This article belongs to the Section Artificial Intelligence in Healthcare)

15 pages, 2714 KB  
Article
Analyzing Global Attitudes Towards ChatGPT via Ensemble Learning on X (Twitter)
by Yassir Touhami Chahdi, Fouad Mohamed Abbou, Farid Abdi, Mohamed Bouhadda and Lamiae Bouanane
Algorithms 2025, 18(12), 748; https://doi.org/10.3390/a18120748 - 28 Nov 2025
Viewed by 377
Abstract
This research investigates global public attitudes towards ChatGPT by analyzing opinions on X (Twitter) to better understand societal perceptions of generative artificial intelligence (AI) applications. As conversational AI systems become increasingly integrated into daily life, evaluating public sentiment is crucial for informing responsible AI development and policymaking. Unlike many prior studies that adopt a binary (positive–negative) sentiment framework, this research presents a three-class classification scheme (positive, neutral, and negative), enabling a more comprehensive evaluation of public attitudes using X (Twitter) data. To achieve this, tweets referencing ChatGPT were collected and categorized into positive, neutral, and negative opinions. Several algorithms, including Naïve Bayes, Support Vector Machines (SVMs), Random Forest, and an Ensemble Learning model, were employed to classify sentiments. The Ensemble model demonstrated superior performance, achieving an accuracy of 86%, followed by SVM (84%), Random Forest (79%), and Naïve Bayes (66%). Notably, the Ensemble approach improved the classification of neutral sentiments, increasing recall from 73% (SVM) to 76%, underscoring its robustness in handling ambiguous or mixed opinions. These findings highlight the advantages of Ensemble Learning techniques in social media sentiment analysis and provide valuable insights for AI developers and policymakers seeking to understand and address public perspectives on emerging AI technologies such as ChatGPT. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
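As a rough illustration of the three-class setup and voting ensemble described in this abstract, the sketch below combines Naïve Bayes, a linear SVM, and a Random Forest over TF-IDF features with scikit-learn; the example tweets, labels, and hyperparameters are placeholders, not the authors' pipeline.

```python
# Sketch only: a three-class (negative/neutral/positive) voting ensemble over
# TF-IDF features, loosely mirroring the classifiers named in the abstract.
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

tweets = ["ChatGPT helped me draft this report",       # placeholder data
          "Not sure how I feel about ChatGPT yet",
          "ChatGPT keeps getting my question wrong"]
labels = ["positive", "neutral", "negative"]

model = Pipeline([
    ("tfidf", TfidfVectorizer(lowercase=True, stop_words="english")),
    ("vote", VotingClassifier(
        estimators=[
            ("nb", MultinomialNB()),
            ("svm", SVC(kernel="linear")),
            ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ],
        voting="hard",  # majority vote over the three base classifiers
    )),
])

model.fit(tweets, labels)
print(model.predict(["ChatGPT is surprisingly useful"]))
```

Hard voting keeps the example simple; a soft-voting or stacked variant would need probability outputs from every base classifier.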

16 pages, 3476 KB  
Article
ROboMC: A Portable Multimodal System for eHealth Training and Scalable AI-Assisted Education
by Marius Cioca and Adriana-Lavinia Cioca
Inventions 2025, 10(6), 103; https://doi.org/10.3390/inventions10060103 - 11 Nov 2025
Viewed by 884
Abstract
AI-based educational chatbots can expand access to learning, but many remain limited to text-only interfaces and fixed infrastructures, while purely generative responses raise concerns of reliability and consistency. In this context, we present ROboMC, a portable and multimodal system that combines a validated knowledge base with generative responses (OpenAI) and voice–text interaction, ensuring reliability and flexibility in diverse educational scenarios. The system, developed in Django, integrates two response pipelines: local search using normalized keywords and fuzzy matching in the LocalQuestion database, and fallback to the generative model GPT-3.5-Turbo (OpenAI, San Francisco, CA, USA) with a prompt adapted exclusively for Romanian and an explicit disclaimer. All interactions are logged in AutomaticQuestion for later analysis, supported by a semantic encoder (SentenceTransformer ‘paraphrase-multilingual-MiniLM-L12-v2’, Hugging Face Inc., New York, NY, USA) that ensures search tolerance to variations in phrasing. Voice interaction is managed through gTTS (Google LLC, Mountain View, CA, USA) with integrated audio playback, while portability is achieved through deployment on a Raspberry Pi 4B (Raspberry Pi Foundation, Cambridge, UK) with microphone, speaker, and battery power. Voice input is enabled through a cloud-based speech-to-text component, the Google Web Speech API (Google LLC, Mountain View, CA, USA; language = "ro-RO"), accessed via the Python SpeechRecognition library (Anthony Zhang, open-source project, USA), allowing users to interact by speaking. Preliminary tests showed average latencies of 120–180 ms for validated responses on laptop and 250–350 ms on Raspberry Pi, and 2.5–3.5 s on laptop versus 4–6 s on Raspberry Pi for generative responses, timings considered acceptable for real educational scenarios. A small-scale usability study (N ≈ 35) indicated good acceptability (SUS ~80/100), with participants valuing the balance between validated and generative responses, the voice integration, and the hardware portability. Although system validation was carried out in the eHealth context, its architecture allows extension to any educational field: depending on the content introduced into the validated database, ROboMC can be adapted to medicine, engineering, social sciences, or other disciplines, relying on ChatGPT only when no clear match is found in the local base, making it a scalable and interdisciplinary solution. Full article
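The two response pipelines described above (local fuzzy matching first, generative fallback second) can be sketched roughly as follows. This is an illustrative reconstruction rather than the authors' Django code: the local question store, similarity cutoff, and prompt wording are assumptions, with the standard-library difflib standing in for the keyword-normalization and fuzzy-matching step.

```python
# Illustrative sketch of a two-tier answer pipeline: fuzzy match against a
# local validated Q&A store, fall back to a generative model if no match.
import difflib
from openai import OpenAI  # requires OPENAI_API_KEY in the environment

LOCAL_QA = {  # placeholder validated knowledge base
    "what is telemedicine": "Telemedicine is the remote delivery of care ...",
    "what does ehealth mean": "eHealth covers digital tools for health ...",
}

client = OpenAI()

def answer(question: str, cutoff: float = 0.75) -> str:
    key = question.lower().strip("?! .")
    # Tier 1: fuzzy match against the locally validated questions.
    match = difflib.get_close_matches(key, LOCAL_QA.keys(), n=1, cutoff=cutoff)
    if match:
        return LOCAL_QA[match[0]]
    # Tier 2: generative fallback with an explicit disclaimer in the prompt.
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer in Romanian and state that the reply was "
                        "generated automatically and is not validated."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("What is telemedicine?"))
```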

24 pages, 598 KB  
Article
Privacy Concerns in ChatGPT Data Collection and Its Impact on Individuals
by Leena Mohammad Alzamil, Alawiayyah Mohammed Alhasani and Suhair Alshehri
Future Internet 2025, 17(11), 511; https://doi.org/10.3390/fi17110511 - 10 Nov 2025
Viewed by 3730
Abstract
With the rapid adoption of generative AI technologies across various sectors, it has become increasingly important to understand how these systems handle personal data. The study examines users’ awareness of the types of data collected, the risks involved, and their implications for privacy and security. A comprehensive literature review was conducted to contextualize the ethical, technical, and regulatory challenges associated with generative AI, followed by a pilot survey targeting ChatGPT users from a variety of demographics. The results of the study revealed a significant gap in users’ understanding of data practices, with many participants expressing concerns about unauthorized access to data, prolonged data retention, and a lack of transparency. Despite recognizing the benefits of ChatGPT in various applications, users expressed strong demands for greater control over their data, clearer consent mechanisms, and more transparent communication from developers. The study concludes by emphasizing the need for multi-dimensional solutions that combine technological innovation, regulatory reform, and user-centered design. Recommendations include implementing explainable AI, enhancing educational efforts, adopting privacy-by-design principles, and establishing robust governance frameworks. By addressing these challenges, developers, policymakers, and stakeholders can enhance trust, promote ethical AI deployment, and ensure that generative AI systems serve the public good while respecting individual rights and privacy. Full article

32 pages, 6188 KB  
Article
Siyasat: AI-Powered AI Governance Tool to Generate and Improve AI Policies According to Saudi AI Ethics Principles
by Dabiah Alboaneen, Shaikha Alhajri, Khloud Alhajri, Muneera Aljalal, Noura Alalyani, Hajer Alsaadan, Zainab Al Thonayan and Raja Alyafer
Computers 2025, 14(11), 452; https://doi.org/10.3390/computers14110452 - 22 Oct 2025
Viewed by 1615
Abstract
The rapid development of artificial intelligence (AI) and growing reliance on generative AI (GenAI) tools such as ChatGPT and Bing Chat have raised concerns about risks, including privacy violations, bias, and discrimination. AI governance is viewed as a solution, and in Saudi Arabia, the Saudi Data and Artificial Intelligence Authority (SDAIA) has introduced the AI Ethics Principles. However, many organizations face challenges in aligning their AI policies with these principles. This paper presents Siyasat, an Arabic web-based governance tool designed to generate and enhance AI policies based on SDAIA’s AI Ethics Principles. Powered by GPT-4-turbo and a Retrieval-Augmented Generation (RAG) approach, the tool uses a dataset of ten AI policies and SDAIA’s official ethics document. The results show that Siyasat achieved a BERTScore of 0.890 and Self-BLEU of 0.871 in generating AI policies, while in improving AI policies, it scored 0.870 and 0.980, showing strong consistency and quality. The paper contributes a practical solution to support public, private, and non-profit sectors in complying with Saudi Arabia’s AI Ethics Principles. Full article
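The retrieval-augmented generation step described above can be sketched generically as follows; this is not the Siyasat implementation. The principle excerpts, embedding model, and prompt are illustrative assumptions: short excerpts of a principles document are embedded with a SentenceTransformer, the most similar ones are retrieved for a request, and they are passed as grounding context to GPT-4-turbo.

```python
# Generic RAG sketch: retrieve relevant principle excerpts by embedding
# similarity, then ground the policy-generation prompt on them.
from sentence_transformers import SentenceTransformer, util
from openai import OpenAI

principles = [  # placeholder excerpts standing in for an ethics document
    "Fairness: AI systems should avoid bias and discrimination.",
    "Privacy and security: personal data must be protected across the AI lifecycle.",
    "Transparency: decisions made by AI should be explainable to stakeholders.",
]

encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
principle_emb = encoder.encode(principles, convert_to_tensor=True)

def generate_policy(request: str, top_k: int = 2) -> str:
    query_emb = encoder.encode(request, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, principle_emb, top_k=top_k)[0]
    context = "\n".join(principles[h["corpus_id"]] for h in hits)
    prompt = (f"Using only the following principles:\n{context}\n\n"
              f"Draft an AI policy section for: {request}")
    resp = OpenAI().chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(generate_policy("handling of customer data in a chatbot service"))
```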

35 pages, 1642 KB  
Article
Adopting Generative AI in Higher Education: A Dual-Perspective Study of Students and Lecturers in Saudi Universities
by Doaa M. Bamasoud, Rasheed Mohammad and Sara Bilal
Big Data Cogn. Comput. 2025, 9(10), 264; https://doi.org/10.3390/bdcc9100264 - 18 Oct 2025
Cited by 2 | Viewed by 2829
Abstract
The integration of Generative Artificial Intelligence (GenAI) tools, such as ChatGPT, into higher education has introduced new opportunities and challenges for students and lecturers alike. This study investigates the psychological, ethical, and institutional factors that shape the adoption of GenAI tools in Saudi Arabian universities, drawing on an extended Technology Acceptance Model (TAM) that incorporates constructs from Self-Determination Theory (SDT) and ethical decision-making. A cross-sectional survey was administered to 578 undergraduate students and 309 university lecturers across three major institutions in Southern Saudi Arabia. Quantitative analysis using Structural Equation Modelling (SmartPLS 4) revealed that perceived usefulness, intrinsic motivation, and ethical trust significantly predicted students’ intention to use GenAI. Perceived ease of use influenced intention both directly and indirectly through usefulness, while institutional support positively shaped perceptions of GenAI’s value. Academic integrity and trust-related concerns emerged as key mediators of motivation, highlighting the ethical tensions in AI-assisted learning. Lecturer data revealed a parallel set of concerns, including fear of overreliance, diminished student effort, and erosion of assessment credibility. Although many faculty members had adapted their assessments in response to GenAI, institutional guidance was often perceived as lacking. Overall, the study offers a validated, context-sensitive model for understanding GenAI adoption in education and emphasises the importance of ethical frameworks, motivation-building, and institutional readiness. These findings offer actionable insights for policy-makers, curriculum designers, and academic leaders seeking to responsibly integrate GenAI into teaching and learning environments. Full article

16 pages, 870 KB  
Systematic Review
Effects of AI-Assisted Feedback via Generative Chat on Academic Writing in Higher Education Students: A Systematic Review of the Literature
by Claudio Andrés Cerón Urzúa, Ranjeeva Ranjan, Eleazar Eduardo Méndez Saavedra, María Graciela Badilla-Quintana, Nancy Lepe-Martínez and Andrew Philominraj
Educ. Sci. 2025, 15(10), 1396; https://doi.org/10.3390/educsci15101396 - 18 Oct 2025
Cited by 2 | Viewed by 5560
Abstract
The use of generative chat in education has become widespread over the last four years, raising many questions about its use and the effects of AI on learning. The aim of the current systematic review is to analyze the main effects of feedback through the use of generative chat on the production of academic texts by university students. This research is defined as a systematic review of the literature according to the guidelines of the PRISMA statement. The search was conducted in three important international databases (Scopus, ERIC, and WoS), from which 12 articles were selected. The results highlighted that there are positive effects on university students’ writing when generative chat is used as a means of providing feedback. Among the main results, it was observed that feedback via chat helps to improve aspects mainly associated with the structure and organization of texts, allows for the proper use of grammatical conventions, and improves the fluency and cohesion of sentences, as well as the precision of ideas and vocabulary. In addition, other benefits were observed in the review, such as improved self-efficacy, self-regulation, proactivity, motivation, and reflection on writing, which promotes critical thinking not only about the text but also about AI, reducing anxiety and stress. Full article

23 pages, 506 KB  
Review
Evaluating the Effectiveness and Ethical Implications of AI Detection Tools in Higher Education
by Promethi Das Deep, William D. Edgington, Nitu Ghosh and Md. Shiblur Rahaman
Information 2025, 16(10), 905; https://doi.org/10.3390/info16100905 - 16 Oct 2025
Viewed by 10142
Abstract
The rapid rise of generative AI tools such as ChatGPT has prompted significant shifts in how higher education institutions approach academic integrity. Many universities have implemented AI detection tools like Turnitin AI, GPTZero, Copyleaks, and ZeroGPT to identify AI-generated content in student work. This qualitative evidence synthesis draws on peer-reviewed journal articles published between 2021 and 2024 to evaluate the effectiveness, limitations, and ethical implications of AI detection tools in academic settings. While AI detectors offer scalable solutions, they frequently produce false positives and lack transparency, especially for multilingual or non-native English speakers. Ethical concerns surrounding surveillance, consent, and fairness are central to the discussion. The review also highlights gaps in institutional policies, inconsistent enforcement, and limited faculty training. It calls for a shift away from punitive approaches toward AI-integrated pedagogies that emphasize ethical use, student support, and inclusive assessment design. Emerging innovations such as watermarking and hybrid detection systems are discussed, though implementation challenges persist. Overall, the findings suggest that while AI detection tools play a role in preserving academic standards, institutions must adopt balanced, transparent, and student-centered strategies that align with evolving digital realities and uphold academic integrity without compromising rights or equity. Full article
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)

17 pages, 352 KB  
Article
Promoting Reflection Skills of Pre-Service Teachers—The Power of AI-Generated Feedback
by Florian Hofmann, Tina-Myrica Daunicht, Lea Plößl and Michaela Gläser-Zikuda
Educ. Sci. 2025, 15(10), 1315; https://doi.org/10.3390/educsci15101315 - 3 Oct 2025
Viewed by 1270
Abstract
Reflection skills are a key but challenging element in teacher training. Feedback on reflective writing assignments can improve reflection skills, but providing it is hampered by practical challenges, such as high variability in judgments and the time investment required. AI-generated feedback offers a possible way to address these challenges. Therefore, the aim of this study was to examine the potential of AI-generated feedback compared to that provided by lecturers for developing reflective skills. A total of 93 randomly selected pre-service teachers (70% female) in a course at a German university wrote two reflections and received feedback from either lecturers or ChatGPT 4.0 based on the same prompts. Pre-service teachers’ written reflections were assessed, and an online questionnaire based on standard instruments was applied. Control variables included metacognitive learning strategies and reflection-related dispositions. Based on a linear mixed model, the main effects on reflective skills were identified for time (β̂ = 0.41, p = 0.003) and feedback condition (β̂ = −0.42, p = 0.032). Both forms of feedback similarly fostered reflective skills over time, with academic self-efficacy emerging as a pertinent disposition (β̂ = 0.25, p = 0.014). The limitations of this study and implications for teacher training are discussed. Full article
(This article belongs to the Special Issue The Role of Reflection in Teaching and Learning)
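A linear mixed model of the kind reported in this abstract, with fixed effects for time and feedback condition and repeated reflections nested within participants, could be fitted along the following lines; the simulated data, variable names, and random-intercept structure are illustrative assumptions, not the authors' analysis.

```python
# Illustrative sketch: reflective-skill scores modelled with fixed effects for
# time and feedback condition plus a random intercept per participant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 90                                        # placeholder cohort size
df = pd.DataFrame({
    "participant": np.repeat(np.arange(n), 2),
    "time": np.tile([0, 1], n),               # first vs. second reflection
    "condition": np.repeat(rng.choice(["AI", "lecturer"], size=n), 2),
    "self_efficacy": np.repeat(rng.normal(3.0, 0.5, size=n), 2),
})
person_effect = np.repeat(rng.normal(0.0, 0.3, size=n), 2)
df["reflection"] = (2.0 + 0.4 * df["time"] + 0.25 * df["self_efficacy"]
                    + person_effect + rng.normal(0.0, 0.3, size=2 * n))

# Random intercept per participant; fixed effects for time, condition, efficacy.
model = smf.mixedlm("reflection ~ time + C(condition) + self_efficacy",
                    data=df, groups=df["participant"])
print(model.fit().summary())
```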