Search Results (1,016)

Search Parameters:
Keywords = ChatGPT4

50 pages, 2360 KB  
Review
The Rise of Agentic AI: A Review of Definitions, Frameworks, Architectures, Applications, Evaluation Metrics, and Challenges
by Ajay Bandi, Bhavani Kongari, Roshini Naguru, Sahitya Pasnoor and Sri Vidya Vilipala
Future Internet 2025, 17(9), 404; https://doi.org/10.3390/fi17090404 - 4 Sep 2025
Abstract
Agentic AI systems are a recently emerged and important approach that goes beyond traditional AI, generative AI, and autonomous systems by focusing on autonomy, adaptability, and goal-driven reasoning. This study provides a clear review of agentic AI systems by bringing together their definitions, frameworks, and architectures, and by comparing them with related areas like generative AI, autonomic computing, and multi-agent systems. To do this, we reviewed 143 primary studies on current LLM-based and non-LLM-driven agentic systems and examined how they support planning, memory, reflection, and goal pursuit. Furthermore, we classified architectural models, input–output mechanisms, and applications based on their task domains where agentic AI is applied, supported using tabular summaries that highlight real-world case studies. Evaluation metrics were classified as qualitative and quantitative measures, along with available testing methods of agentic AI systems to check the system’s performance and reliability. This study also highlights the main challenges and limitations of agentic AI, covering technical, architectural, coordination, ethical, and security issues. We organized the conceptual foundations, available tools, architectures, and evaluation metrics in this research, which defines a structured foundation for understanding and advancing agentic AI. These findings aim to help researchers and developers build better, clearer, and more adaptable systems that support responsible deployment in different domains. Full article

8 pages, 609 KB  
Brief Report
AI-Generated Patient-Friendly MRI Fistula Summaries: A Pilot Randomised Study
by Easan Anand, Itai Ghersin, Gita Lingam, Theo Pelly, Daniel Singer, Chris Tomlinson, Robin EJ Munro, Rachel Capstick, Anna Antoniou, Ailsa L Hart, Phil Tozer, Kapil Sahnan and Phillip Lung
J. Imaging 2025, 11(9), 302; https://doi.org/10.3390/jimaging11090302 - 4 Sep 2025
Abstract
Perianal fistulising Crohn’s disease (pfCD) affects 1 in 5 Crohn’s patients and requires frequent MRI monitoring. Standard radiology reports are written for clinicians using technical language often inaccessible to patients, which can cause anxiety and hinder engagement. This study evaluates the feasibility and safety of AI-generated patient-friendly MRI fistula summaries to improve patient understanding and shared decision-making. MRI fistula reports spanning healed to complex disease were identified and used to generate AI patient-friendly summaries via ChatGPT-4. Six de-identified MRI reports and corresponding AI summaries were assessed by clinicians for hallucinations and readability (Flesch-Kincaid score). Sixteen patients with perianal fistulas were randomized to review either AI summaries or original reports and rated them on readability, comprehensibility, utility, quality, follow-up questions, and trustworthiness using Likert scales. Patients rated AI summaries significantly higher in readability (median 5 vs. 2, p = 0.011), comprehensibility (5 vs. 2, p = 0.007), utility (5 vs. 3, p = 0.014), and overall quality (4.5 vs. 4, p = 0.013), with fewer follow-up questions (3 vs. 4, p = 0.018). Clinicians found AI summaries more readable (mean Flesch-Kincaid 54.6 vs. 32.2, p = 0.005) and free of hallucinations. No clinically significant inaccuracies were identified. AI-generated patient-friendly MRI summaries have potential to enhance patient communication and clinical workflow in pfCD. Larger studies are needed to validate clinical utility, hallucination rates, and acceptability. Full article
(This article belongs to the Section Medical Imaging)

16 pages, 1471 KB  
Article
Leveraging Explainable AI for LLM Text Attribution: Differentiating Human-Written and Multiple LLM-Generated Text
by Ayat A. Najjar, Huthaifa I. Ashqar, Omar Darwish and Eman Hammad
Information 2025, 16(9), 767; https://doi.org/10.3390/info16090767 - 4 Sep 2025
Abstract
The development of generative AI Large Language Models (LLMs) raised the alarm regarding the identification of content produced by generative AI vs. humans. In one case, issues arise when students heavily rely on such tools in a manner that can affect the development of their writing or coding skills. Other issues of plagiarism also apply. This study aims to support efforts to detect and identify textual content generated using LLM tools. We hypothesize that LLM-generated text is detectable by machine learning (ML) and investigate ML models that can recognize and differentiate between texts generated by humans and multiple LLM tools. We used a dataset of student-written text in comparison with LLM-written text. We leveraged several ML and Deep Learning (DL) algorithms, such as Random Forest (RF) and Recurrent Neural Networks (RNNs) and utilized Explainable Artificial Intelligence (XAI) to understand the important features in attribution. Our method is divided into (1) binary classification to differentiate between human-written and AI-generated text and (2) multi-classification to differentiate between human-written text and text generated by five different LLM tools (ChatGPT, LLaMA, Google Bard, Claude, and Perplexity). Results show high accuracy in multi- and binary classification. Our model outperformed GPTZero (78.3%), with an accuracy of 98.5%. Notably, GPTZero was unable to recognize about 4.2% of the observations, but our model was able to recognize the complete test dataset. XAI results showed that understanding feature importance across different classes enables detailed author/source profiles, aiding in attribution and supporting plagiarism detection by highlighting unique stylistic and structural elements, thereby ensuring robust verification of content originality. Full article
(This article belongs to the Special Issue Generative AI Transformations in Industrial and Societal Applications)
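The attribution study above classifies text by learned features that differ between human and LLM writing. As a toy illustration only (not the authors' actual feature set or models), two classic stylometric signals that such classifiers often consume can be computed with the standard library:

```python
import re

def stylometric_features(text):
    """Toy stylometric profile: average sentence length and type-token ratio.

    These two signals are common inputs to authorship/attribution classifiers
    (e.g. a Random Forest); real pipelines use many more features.
    """
    # Naive sentence split on terminal punctuation.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Lowercased word tokens.
    words = re.findall(r"[A-Za-z']+", text.lower())
    avg_sent_len = len(words) / max(len(sentences), 1)
    type_token_ratio = len(set(words)) / max(len(words), 1)
    return {"avg_sentence_length": avg_sent_len,
            "type_token_ratio": type_token_ratio}
```

Feature vectors like this, stacked per document, would then be fed to a binary (human vs. AI) or multi-class (per-LLM) classifier.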

26 pages, 1428 KB  
Article
Investigation of Generative AI Adoption in IT-Focused Vocational Secondary School Programming Education
by Norbert Annuš
Educ. Sci. 2025, 15(9), 1152; https://doi.org/10.3390/educsci15091152 - 4 Sep 2025
Abstract
The application of artificial intelligence in education, particularly in learning programming, is gaining increasing significance. However, research on secondary school students specializing in IT at an early stage has received relatively little attention in this field. The aim of this study is to assess how vocational secondary school IT students utilize Generative artificial intelligence in learning programming. The study employed a survey-based methodology, where students with varying levels of knowledge were surveyed to understand their AI usage patterns. The sample consisted of students from vocational IT schools, and data were analyzed using descriptive statistics and independent samples t-tests. The results indicate that students with different levels of knowledge use AI tools differently, with ChatGPT being the most popular tool. The study further highlights that AI usage brings significant benefits, such as providing a personalized learning experience and enabling quick error correction. However, excessive reliance on AI tools may hinder students from acquiring fundamental programming skills. The findings support the idea that while AI can effectively complement teachers’ explanations, overdependence on it can be risky, potentially reducing students’ creativity and problem-solving abilities. The study emphasizes the crucial role of educators in teaching the responsible and ethical use of artificial intelligence. The results of this research offer new perspectives on the effective integration of Generative artificial intelligence into vocational secondary school programming education and suggest further studies to compare its applications at the university level. However, the study acknowledges certain limitations, such as the potential bias of self-reported data, which may affect the generalizability of the results. 
Unlike other studies, the age groups we surveyed and the cohorts formed from them are nearly evenly distributed, making our sample representative of the region in question. Full article
(This article belongs to the Special Issue Generative-AI-Enhanced Learning Environments and Applications)

19 pages, 276 KB  
Review
The Role of AI in Academic Writing: Impacts on Writing Skills, Critical Thinking, and Integrity in Higher Education
by Promethi Das Deep and Yixin Chen
Societies 2025, 15(9), 247; https://doi.org/10.3390/soc15090247 - 4 Sep 2025
Abstract
Artificial Intelligence (AI) tools have transformed academic writing and literacy development in higher education. Students can now receive instant feedback on grammar, coherence, style, and argumentation using AI-powered writing assistants, like Grammarly, ChatGPT, and QuillBot. Moreover, these writing assistants can quickly produce completed essays and papers, leaving little else for the student to do aside from reading and perhaps editing the content. Many teachers are concerned that this erodes critical thinking skills and undermines ethical considerations since students are not performing the work themselves. This study addresses this concern by synthesizing and evaluating peer-reviewed literature on the effectiveness of AI in supporting writing pedagogy. Studies were selected based on their relevance and scholarly merit, following the Scale for the Assessment of Narrative Review Articles (SANRA) guidelines to ensure methodological rigor and quality. The findings reveal that although AI tools can be detrimental to the development of writing skills, they can foster self-directed learning and improvement when carefully integrated into coursework. They can facilitate enhanced writing fluency, offer personalized tutoring, and reduce the cognitive load of drafting and revising. This study also compares AI-assisted and traditional writing approaches and discusses best practices for integrating AI tools into curricula while preserving academic integrity and creativity in student writing. Full article
18 pages, 1609 KB  
Article
Using Large Language Models to Extract Structured Data from Health Coaching Dialogues: A Comparative Study of Code Generation Versus Direct Information Extraction
by Sai Sangameswara Aadithya Kanduri, Apoorv Prasad and Susan McRoy
BioMedInformatics 2025, 5(3), 50; https://doi.org/10.3390/biomedinformatics5030050 - 4 Sep 2025
Abstract
Background: Virtual coaching can help people adopt new healthful behaviors by encouraging them to set specific goals and helping them review their progress. One challenge in creating such systems is analyzing clients’ statements about their activities. Limiting people to selecting among predefined answers detracts from the naturalness of conversations and user engagement. Large Language Models (LLMs) offer the promise of covering a wide range of expressions. However, using an LLM for simple entity extraction would not necessarily perform better than functions coded in a programming language, while creating higher long-term costs. Methods: This study uses a real data set of annotated human coaching dialogs to develop LLM-based models for two training scenarios: one that generates pattern-matching functions and the other which does direct extraction. We use models of different sizes and complexity, including Meta-Llama, Gemma, and ChatGPT, and calculate their speed and accuracy. Results: LLM-generated pattern-matching functions took an average of 10 milliseconds (ms) per item as compared to 900 ms. (ChatGPT 3.5 Turbo) to 5 s (Llama 2 70B). The accuracy for pattern matching was 99% on real data, while LLM accuracy ranged from 90% (Llama 2 70B) to 100% (ChatGPT 3.5 Turbo), on both real and synthetically generated examples created for fine-tuning. Conclusions: These findings suggest promising directions for future research that combines both methods (reserving the LLM for cases that cannot be matched directly) or that use LLMs to generate synthetic training data with more expressive variety which can be used to improve the coverage of either generated codes or fine-tuned models. Full article
(This article belongs to the Section Methods in Biomedical Informatics)
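The fast pattern-matching route the authors benchmark against LLM extraction can be pictured as a small regex extractor over coaching utterances. The pattern, activity vocabulary, and field names below are hypothetical illustrations, not taken from the paper:

```python
import re

# Hypothetical pattern for statements like
# "Last week I walked for 30 minutes on 4 days".
GOAL_PATTERN = re.compile(
    r"\b(?P<activity>walked|ran|swam|cycled)\b\D*?"
    r"(?P<minutes>\d+)\s*minutes\D*?"
    r"(?P<days>\d+)\s*days",
    re.IGNORECASE,
)

def extract_goal(utterance):
    """Extract (activity, minutes, days) from a progress statement, or None."""
    m = GOAL_PATTERN.search(utterance)
    if m is None:
        return None  # a hybrid system could fall back to an LLM here
    return {"activity": m.group("activity").lower(),
            "minutes": int(m.group("minutes")),
            "days": int(m.group("days"))}
```

Returning `None` on a miss is where the combined approach suggested in the conclusions would hand the utterance to an LLM.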

18 pages, 6356 KB  
Article
ChatGPT as a Virtual Peer: Enhancing Critical Thinking in Flipped Veterinary Anatomy Education
by Nieves Martín-Alguacil, Luis Avedillo, Rubén A. Mota-Blanco, Mercedes Marañón-Almendros and Miguel Gallego-Agúndez
Int. Med. Educ. 2025, 4(3), 34; https://doi.org/10.3390/ime4030034 - 3 Sep 2025
Abstract
Artificial intelligence is transforming higher education, particularly in flipped classroom settings, in which students learn independently prior to class and collaborate during in-person sessions. This study examines the role of ChatGPT as a virtual peer in a veterinary anatomy course centered on cardiovascular and respiratory systems. Over two academic years (2023–2025), 297 first-year veterinary students worked in small groups to explore anatomy through structured prompts in English and Spanish using ChatGPT versions 3.5 and 4. Activities involved analyzing AI output, evaluating anatomical accuracy, and suggesting alternative names for vascular variations. Learning outcomes were assessed using Bloom’s Taxonomy-based questions, and student perceptions were captured via online surveys. Progressive performance improvement was noted across three instructional phases, particularly in higher-level cognitive tasks (Bloom level 4). Responses to English prompts were more accurate than those to Spanish prompts. While students appreciated ChatGPT’s role in reinforcing knowledge and sparking discussion, they also flagged inaccuracies and emphasized the need for critical evaluation. Peer collaboration was found to be more influential than chatbot input. Conclusions: ChatGPT can enrich flipped anatomy instruction when paired with structured guidance. It supports content review, fosters group learning and promotes reflective thinking. However, developing digital literacy and ensuring expert oversight are essential to maximizing the educational value of AI. Full article

19 pages, 1153 KB  
Article
ChatGPT in Early Childhood Science Education: Can It Offer Innovative Effective Solutions to Overcome Challenges?
by Mustafa Uğraş, Zehra Çakır, Georgios Zacharis and Michail Kalogiannakis
Computers 2025, 14(9), 368; https://doi.org/10.3390/computers14090368 - 3 Sep 2025
Abstract
This study explores the potential of ChatGPT to address challenges in Early Childhood Science Education (ECSE) from the perspective of educators. A qualitative case study was conducted with 33 Early Childhood Education (ECE) teachers in Türkiye, using semi-structured interviews. Data were analyzed through content analysis with MAXQDA 24 software. The results indicate that ECE teachers perceive ChatGPT as a partial solution to the scarcity of educational resources, appreciating its ability to propose alternative material uses and creative activity ideas. Participants also recognized its potential to support differentiated instruction by suggesting activities tailored to children’s developmental needs. Furthermore, ChatGPT was seen as a useful tool for generating lesson plans and activity options, although concerns were expressed that overreliance on the tool might undermine teachers’ pedagogical skills. Additional limitations highlighted include dependence on technology, restricted access to digital tools, diminished interpersonal interactions, risks of misinformation, and ethical concerns. Overall, while educators acknowledged ChatGPT’s usefulness in supporting ECSE, they emphasized that its integration into teaching practice should be cautious and balanced, considering both its educational benefits and its limitations. Full article
(This article belongs to the Special Issue STEAM Literacy and Computational Thinking in the Digital Era)

41 pages, 966 KB  
Review
ChatGPT’s Expanding Horizons and Transformative Impact Across Domains: A Critical Review of Capabilities, Challenges, and Future Directions
by Taiwo Raphael Feyijimi, John Ogbeleakhu Aliu, Ayodeji Emmanuel Oke and Douglas Omoregie Aghimien
Computers 2025, 14(9), 366; https://doi.org/10.3390/computers14090366 - 2 Sep 2025
Abstract
The rapid proliferation of Chat Generative Pre-trained Transformer (ChatGPT) marks a pivotal moment in artificial intelligence, eliciting responses from academic shock to industrial awe. As these technologies advance from passive tools toward proactive, agentic systems, their transformative potential and inherent risks are magnified globally. This paper presents a comprehensive, critical review of ChatGPT’s impact across five key domains: natural language understanding (NLU), content generation, knowledge discovery, education, and engineering. While ChatGPT demonstrates profound capabilities, significant challenges remain in factual accuracy, bias, and the inherent opacity of its reasoning—a core issue termed the “Black Box Conundrum”. To analyze these evolving dynamics and the implications of this shift toward autonomous agency, this review introduces a series of conceptual frameworks, each specifically designed to illuminate the complex interactions and trade-offs within these domains: the “Specialization vs. Generalization” tension in NLU; the “Quality–Scalability–Ethics Trilemma” in content creation; the “Pedagogical Adaptation Imperative” in education; and the emergence of “Human–LLM Cognitive Symbiosis” in engineering. The analysis reveals an urgent need for proactive adaptation across sectors. Educational paradigms must shift to cultivate higher-order cognitive skills, while professional practices (including practices within education sector) must evolve to treat AI as a cognitive partner, leveraging techniques like Retrieval-Augmented Generation (RAG) and sophisticated prompt engineering. Ultimately, this paper argues for an overarching “Ethical–Technical Co-evolution Imperative”, charting a forward-looking research agenda that intertwines technological innovation with vigorous ethical and methodological standards to ensure responsible AI development and integration. 
Ultimately, the analysis reveals that the challenges of factual accuracy, bias, and opacity are interconnected and acutely magnified by the emergence of agentic systems, demanding a unified, proactive approach to adaptation across all sectors. Full article
(This article belongs to the Special Issue Natural Language Processing (NLP) and Large Language Modelling)

23 pages, 1259 KB  
Article
Maieutic, Natural, and Artificial Forms in Automatic Control Case Study
by Luigi Fortuna and Adriano Scibilia
Information 2025, 16(9), 761; https://doi.org/10.3390/info16090761 - 2 Sep 2025
Abstract
Maieutics is a remarkable method for discovering new insights through deep dialogue. Defined as “relating to or resembling the Socratic method of eliciting new ideas from another”, the term originates from the Greek word for “midwifery”—as noted in the Merriam-Webster Dictionary. Recently, maieutics has gained renewed relevance in advanced discussions about artificial intelligence, the nature of the mind, and scientific inquiry. This contribution presents a real and extended dialogue, illustrating the power of the maieutic method in addressing key developments in the field of Automatic Control. Over the past 40 years, the authors have followed a unique intellectual path shaped by this method. Inspired by recent research, they have also applied maieutics in interaction with AI systems—particularly ChatGPT. This experiment aimed to replicate, in a condensed timeframe, the long intellectual journey taken over decades. The preliminary results suggest that although AI systems can retrieve historical information, they struggle to capture the deeper, guiding principles of this journey. The authors also identify a significant concern: while the maieutic approach with ChatGPT can serve as a valuable educational tool, it must be complemented by a strong knowledge of dynamical systems leading to innovative paradigms of learning. Full article
(This article belongs to the Special Issue Learning and Knowledge: Theoretical Issues and Applications)
26 pages, 3949 KB  
Article
An AI-Based Risk Analysis Framework Using Large Language Models for Web Log Security
by Hoseong Jeong and Inwhee Joe
Electronics 2025, 14(17), 3512; https://doi.org/10.3390/electronics14173512 - 2 Sep 2025
Abstract
Web log data analysis is essential for monitoring and securing modern software systems. However, traditional manual analysis methods struggle to cope with the rapidly growing volumes and complexity of log data, resulting in inefficiencies and potential security risks. To address these challenges, this paper proposes an AI-driven log analysis framework utilizing advanced natural language processing techniques from large language models (LLMs), specifically ChatGPT. The framework aims to automate log data normalization, anomaly detection, and risk assessment, enabling the real-time identification and mitigation of security threats. Our objectives include reducing dependency on human analysis, enhancing the accuracy and speed of threat detection, and providing a scalable solution suitable for diverse web service environments. Through extensive experimentation with realistic log scenarios, we demonstrate the effectiveness of the proposed framework in swiftly identifying and responding to web-based security threats, ultimately improving both security posture and operational efficiency. Full article
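The log-normalization step such a framework automates can be sketched as a regex parse of common-log-format lines plus a naive keyword flag. The pattern and keyword list below are illustrative assumptions, and the LLM risk-assessment stage the paper describes is not shown:

```python
import re

# Hypothetical normalizer for Apache common-log-format lines; a real pipeline
# would pass the normalized entries to an LLM for risk assessment.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})'
)

# Crude indicators of path traversal, XSS, and SQL injection attempts.
SUSPICIOUS = ("../", "/etc/passwd", "<script", "union select")

def normalize_and_flag(line):
    """Parse one access-log line into fields and flag suspicious paths."""
    m = LOG_PATTERN.match(line)
    if m is None:
        return None
    entry = m.groupdict()
    entry["suspicious"] = any(tok in entry["path"].lower() for tok in SUSPICIOUS)
    return entry
```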

26 pages, 883 KB  
Article
Insights into EFL Students’ Perceptions of the ‘ChatGPT Essentials’ Training Course for Language Learning
by Maha Alghasab
Educ. Sci. 2025, 15(9), 1138; https://doi.org/10.3390/educsci15091138 - 1 Sep 2025
Abstract
This paper introduces ‘ChatGPT essentials’, a pedagogically driven training course for pre-service English as a Foreign language (EFL) teachers at the College of Basic Education (CBE) in Kuwait. It responds to current ethical and academic integrity issues by empowering students to use ChatGPT both effectively and ethically. Prior to ChatGPT essentials training sessions, semi-structured interviews were conducted with twenty-five male undergraduate students in a Computer Assisted Language Learning (CALL) course to assess their familiarity with ChatGPT for language learning, followed by pedagogical and practical training sessions and a subsequent evaluation. The quantitative analysis indicates that the students generally valued the training on four levels (i.e., reaction, learning, behaviors, and results). Their perceptions and experiences have changed positively, indicating general positive attitudes towards using ChatGPT as a tool to develop their language skills. Qualitative data from post-training interviews and students’ reflective journals revealed that students valued the practical guidance on ethical usage and critical evaluation of ChatGPT practices, which enhanced their digital literacy skills and fostered responsible ChatGPT use. Such findings point to the benefits of implementing pedagogical training to enhance students’ ChatGPT usage. Full article
(This article belongs to the Section Technology Enhanced Education)

13 pages, 2559 KB  
Article
Artificial Intelligence Versus Professional Standards: A Cross-Sectional Comparative Study of GPT, Gemini, and ENT UK in Delivering Patient Information on ENT Conditions
by Ali Alabdalhussein, Nehal Singhania, Shazaan Nadeem, Mohammed Talib, Derar Al-Domaidat, Ibrahim Jimoh, Waleed Khan and Manish Mair
Diseases 2025, 13(9), 286; https://doi.org/10.3390/diseases13090286 - 1 Sep 2025
Abstract
Objective: Patient information materials are sensitive and, if poorly written, can cause misunderstanding. This study evaluated and compared the readability, actionability, and quality of patient education materials on laryngology topics generated by ChatGPT, Google Gemini, and ENT UK. Methods: We obtained patient information from ENT UK and generated equivalent content with ChatGPT-4-turbo and Google Gemini 2.5 Pro for six laryngology conditions. We assessed readability (Flesch–Kincaid Grade Level, FKGL; Flesch Reading Ease, FRE), quality (DISCERN), and patient engagement (PEMAT-P for understandability and actionability). Statistical comparisons involved using ANOVA, Tukey’s HSD, and Kruskal–Wallis tests. Results: ENT UK showed the highest readability (FRE: 64.6 ± 8.4) and lowest grade level (FKGL: 7.4 ± 1.5), significantly better than that of ChatGPT (FRE: 38.8 ± 10.5, FKGL: 11.0 ± 1.5) and Gemini (FRE: 38.3 ± 8.5, FKGL: 11.9 ± 1.2) (all p < 0.001). DISCERN scores did not differ significantly (ENT UK: 21.3 ± 7.5, GPT: 24.7 ± 9.1, Gemini: 29.5 ± 4.6; p > 0.05). PEMAT-P understandability results were similar (ENT UK: 72.7 ± 8.3%, GPT: 79.1 ± 5.8%, Gemini: 78.5 ± 13.1%), except for lower GPT scores on vocal cord paralysis (p < 0.05). Actionability was also comparable (ENT UK: 46.7 ± 16.3%, GPT: 41.1 ± 24.0%, Gemini: 36.7 ± 19.7%). Conclusion: GPT and Gemini produce patient information of comparable quality and engagement to ENT UK but require higher reading levels and fall short of recommended literacy standards. Full article
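The FRE and FKGL scores reported above come from fixed formulas over sentence, word, and syllable counts. A minimal sketch, using a crude vowel-group syllable heuristic (so scores will deviate from dedicated readability tools):

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, dropping a silent trailing 'e'."""
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def flesch_scores(text):
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    words_per_sentence = len(words) / max(len(sentences), 1)
    syllables_per_word = syllables / max(len(words), 1)
    fre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    fkgl = 0.39 * words_per_sentence + 11.8 * syllables_per_word - 15.59
    return fre, fkgl
```

Short, monosyllabic sentences push FRE above 100 and FKGL below first-grade level, which is why the AI-generated materials' FRE near 38 corresponds to college-level reading.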

17 pages, 485 KB  
Article
Harnessing Self-Control and AI: Understanding ChatGPT’s Impact on Academic Wellbeing
by Metin Besalti
Behav. Sci. 2025, 15(9), 1181; https://doi.org/10.3390/bs15091181 - 29 Aug 2025
Abstract
The rapid integration of generative AI, particularly ChatGPT, into academic settings has prompted urgent questions regarding its impact on students’ psychological and academic outcomes. Although generative AI holds considerable potential to transform educational practices, its effects on individual traits such as self-control and academic wellbeing remain insufficiently explored. This study addresses this gap through a sequential two-phase design. In the first phase, the ChatGPT Usage Scale was adapted and validated for a Turkish university student population (N = 413). Using confirmatory factor analysis and item response theory, the scale was confirmed as a psychometrically valid and reliable one-factor instrument. In the second phase, a separate sample (N = 449) was used to examine the relationships between ChatGPT usage, self-control, and academic wellbeing through a mediation model. The findings revealed that higher ChatGPT usage was significantly associated with lower levels of both self-control and academic wellbeing. Additionally, mediation analysis demonstrated that self-control partially mediates the negative relationship between ChatGPT usage and academic wellbeing. The study concludes that while generative AI tools are valuable, their integration into education presents a double-edged sword, highlighting the critical need to foster students’ self-regulatory skills to ensure they can harness these tools responsibly without compromising their academic and psychological health. Full article
(This article belongs to the Special Issue Artificial Intelligence and Educational Psychology)

20 pages, 661 KB  
Article
An Analysis of Students’ Attitudes Toward Artificial Intelligence—ChatGPT, in Particular—In Relation to Personality Traits, Coping Strategies, and Personal Values
by Simona Maria Glaveanu and Roxana Maier
Behav. Sci. 2025, 15(9), 1179; https://doi.org/10.3390/bs15091179 - 29 Aug 2025
Abstract
The general objective of this research was to investigate the attitudes of Bucharest students toward artificial intelligence (AI), ChatGPT in particular, in relation to their personality traits, coping strategies, and personal values, in order to identify psychosocial approaches that support students' effective engagement with this AI product. As no instrument had been validated and calibrated for Romanian students, the scale constructed by Acosta-Enriquez et al. in 2024 was adapted for students from Bucharest (N = 508). Following the item analysis, the adapted scale was reduced to 16 items, and, following the exploratory factor analysis (EFA; 0.81 < α < 0.91), the three-factor structure (cognitive, affective, and behavioral components), explaining 53% of the variation in Bucharest students' attitudes toward ChatGPT, was retained in light of the confirmatory factor analysis (CFA) results (χ2(79) = 218.345, p < 0.001; CMIN/DF = 2.486; CFI = 0.911; TLI = 0.900; RMSEA = 0.058, 90% CI: 0.050–0.065). The study showed that 85.53% of the participants had used ChatGPT at least once, of whom 24.11% held a positive/open attitude toward ChatGPT, and that there are correlations (p < 0.01; 0.23 < r2 < 0.50) between students' attitudes toward ChatGPT and several personality traits, coping strategies, and personal values. It also shows that the three components of the attitude toward ChatGPT (cognitive, affective, and behavioral) are each correlated with a series of personality traits, coping strategies, and personal values. Although the general objective was achieved and the adapted scale has adequate psychometric qualities, the authors propose expanding the sample in future studies so that the scale can be validated for the Romanian population as a whole.
Finally, several concrete approaches are proposed for supporting students' effective engagement with this AI product, approaches that, beyond the ethical challenges, also recognize the benefits of technology in the evolution of education. Full article
(This article belongs to the Special Issue Artificial Intelligence and Educational Psychology)
