Search Results (102)

Search Parameters:
Keywords = ChatGPT adoption

20 pages, 798 KB  
Article
Evaluating Generative AI for HTML Development
by Ahmad Salah Alahmad and Hasan Kahtan
Technologies 2025, 13(10), 445; https://doi.org/10.3390/technologies13100445 - 1 Oct 2025
Viewed by 323
Abstract
The adoption of generative Artificial Intelligence (AI) tools in web development implementation tasks is increasing exponentially. This paper evaluates the performance of five leading Generative AI models: ChatGPT-4.0, DeepSeek-V3, Gemini-1.5, Copilot (March 2025 release), and Claude-3, in building HTML components. This study presents a structured evaluation of AI-generated HTML code produced by leading Generative AI models. We have designed a set of prompts for popular tasks to generate five standardized HTML components: a contact form, a navigation menu, a blog post layout, a product listing page, and a dashboard interface. The responses were evaluated across five dimensions: semantic structure, accessibility, efficiency, readability, and search engine optimization (SEO). Results show that while AI-generated HTML can achieve high validation scores, deficiencies remain in semantic structuring and accessibility, with measurable differences between models. The results show variation in the quality and structure of the generated HTML. These results provide practical insights into the limitations and strengths of the current use of AI tools in HTML development. Full article
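
For readers who want a concrete sense of what such checks can look like, here is a minimal, hypothetical sketch (not the authors' evaluation pipeline) touching the semantic-structure and accessibility dimensions named in the abstract; the HTML snippet and the specific checks are illustrative only.

```python
# Illustrative audit of (hypothetical) AI-generated HTML: count semantic tags,
# flag images without alt text, and compare inputs against labels.
from html.parser import HTMLParser

SEMANTIC_TAGS = {"header", "nav", "main", "section", "article", "aside", "footer"}

class SimpleAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.semantic_tags = []
        self.images_missing_alt = 0
        self.inputs = 0
        self.labels = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in SEMANTIC_TAGS:
            self.semantic_tags.append(tag)
        if tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1
        if tag == "input":
            self.inputs += 1
        if tag == "label":
            self.labels += 1

html_snippet = """
<main><section><h1>Contact</h1>
<form><label for="email">Email</label><input id="email" type="email">
<input type="submit" value="Send"></form>
<img src="logo.png"></section></main>
"""

audit = SimpleAudit()
audit.feed(html_snippet)
print("Semantic tags used:", audit.semantic_tags)
print("Images missing alt text:", audit.images_missing_alt)
print("Inputs vs. labels:", audit.inputs, "/", audit.labels)
```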

14 pages, 1127 KB  
Article
Dental Age Estimation from Panoramic Radiographs: A Comparison of Orthodontist and ChatGPT-4 Evaluations Using the London Atlas, Nolla, and Haavikko Methods
by Derya Dursun and Rumeysa Bilici Geçer
Diagnostics 2025, 15(18), 2389; https://doi.org/10.3390/diagnostics15182389 - 19 Sep 2025
Viewed by 385
Abstract
Background: Dental age (DA) estimation, which is widely used in orthodontics, pediatric dentistry, and forensic dentistry, predicts chronological age (CA) by assessing tooth development and maturation. Most methods rely on radiographic evaluation of tooth mineralization and eruption stages to assess DA. With the increasing adoption of large language models (LLMs) in medical sciences, use of ChatGPT has extended to processing visual data. The aim of this study, therefore, was to evaluate the performance of ChatGPT-4 in estimating DA from panoramic radiographs using three conventional methods (Nolla, Haavikko, and London Atlas) and to compare its accuracy against both orthodontist assessments and CA. Methods: In this retrospective study, panoramic radiographs of 511 Turkish children aged 6–17 years were assessed. DA was estimated using the Nolla, Haavikko, and London Atlas methods by both orthodontists and ChatGPT-4. The DA–CA difference and mean absolute error (MAE) were calculated, and statistical comparisons were performed to assess accuracy and sex differences and reach an agreement between the evaluators, with significance set at p < 0.05. Results: The mean CA of the study population was 12.37 ± 2.95 years (boys: 12.39 ± 2.94; girls: 12.35 ± 2.96). Using the London Atlas method, the orthodontists overestimated CA with a DA–CA difference of 0.78 ± 1.26 years (p < 0.001), whereas ChatGPT-4 showed no significant DA–CA difference (0.03 ± 0.93; p = 0.399). Using the Nolla method, the orthodontist showed no significant DA–CA difference (0.03 ± 1.14; p = 0.606), but ChatGPT-4 underestimated CA with a DA–CA difference of −0.40 ± 1.96 years (p < 0.001). Using the Haavikko method, the evaluators underestimated CA (orthodontist: −0.88; ChatGPT-4: −1.18; p < 0.001). The lowest MAE for ChatGPT-4 was obtained when using the London Atlas method (0.59 ± 0.72), followed by Nolla (1.33 ± 1.28) and Haavikko (1.51 ± 1.41). For the orthodontists, the lowest MAE was achieved when using the Nolla method (0.86 ± 0.75). Agreement between the orthodontists and ChatGPT-4 was highest when using the London Atlas method (ICC = 0.944, r = 0.905). Conclusions: ChatGPT-4 showed the highest accuracy with the London Atlas method, with no significant difference from CA for either sex or the lowest prediction error. When using the Nolla and Haavikko methods, both ChatGPT-4 and the orthodontist tended to underestimate age, with higher errors. Overall, ChatGPT-4 performed best when using visually guided methods and was less accurate when using multi-stage scoring methods. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
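
As a worked illustration of the accuracy metrics reported above, the following sketch computes the signed DA–CA difference and the mean absolute error (MAE) on invented values; it uses neither the study's data nor its code.

```python
# Signed dental-age minus chronological-age difference (DA - CA) and MAE,
# computed on hypothetical ages in years.
import numpy as np

chronological_age = np.array([8.2, 11.5, 13.0, 15.7])     # hypothetical CA
estimated_dental_age = np.array([8.6, 11.3, 13.4, 15.2])  # hypothetical DA

signed_diff = estimated_dental_age - chronological_age     # DA - CA
mae = np.mean(np.abs(signed_diff))

print(f"Mean DA-CA difference: {signed_diff.mean():+.2f} years")
print(f"MAE: {mae:.2f} years")
```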

26 pages, 653 KB  
Article
Relative Advantage and Compatibility as Drivers of ChatGPT Adoption in Latin American Higher Education: A PLS SEM Study Towards Sustainable Digital Education
by Juana Beatriz Vargas Bernuy, Marco A. Nolasco-Mamani, Norma C. Velásquez Rodríguez, Renza L. Gambetta Quelopana, Ana N. Martinez Valdivia and Sam M. Espinoza Vidaurre
Sustainability 2025, 17(18), 8329; https://doi.org/10.3390/su17188329 - 17 Sep 2025
Viewed by 728
Abstract
As Latin American universities pursue digitally and environmentally sustainable teaching models, understanding why students adopt generative AI is essential. We analyzed data from undergraduate students (n = 792) across five Latin American countries (Peru, Chile, Bolivia, Argentina, and Colombia). Grounded in the diffusion of innovations theory, the study evaluated the effects of relative advantage, compatibility, complexity, trialability, and observability on attitudes towards ChatGPT and examined the effect of attitude on intention to use among higher education students in the region. The reliability and validity of the measurement scale were confirmed, and structural relationships were tested using partial least squares structural equation modeling (PLS-SEM). The model explained 58.1% of the variance in attitude: relative advantage (β = 0.247) and compatibility (β = 0.246) exerted the largest effects, followed by trialability (β = 0.223) and observability (β = 0.167); complexity showed a weaker yet significant effect (β = 0.118). Attitude strongly predicted the intention to use ChatGPT (β = 0.777), accounting for 60.4% of its variance. All paths were significant (p < 0.001), and psychometric indicators exceeded recommended thresholds. These findings indicate that student adoption is driven more by perceived academic benefits and alignment with existing learning routines than by technical ease. Highlighting concrete, ethically delineated use cases and providing guided institutional spaces for experimentation may accelerate the responsible, long-term adoption of generative AI in quality higher education. Full article

23 pages, 25963 KB  
Article
AI-Assisted Landscape Character Assessment: A Structured Framework for Text Generation, Scenario Building, and Stakeholder Engagement Using ChatGPT
by Ghieth Alkhateeb, Martti Veldi, Joanna Tamar Storie and Mart Külvik
Land 2025, 14(9), 1842; https://doi.org/10.3390/land14091842 - 10 Sep 2025
Viewed by 419
Abstract
Landscape Character Assessments (LCAs) support planning decisions by offering structured descriptions of landscape character. However, producing these texts is often resource-intensive and shaped by subjective judgement. This study explores whether Generative Artificial Intelligence (GenAI), specifically ChatGPT, can support the drafting of LCA descriptions using a structured, prompt-based framework. Applied to Harku Municipality in Estonia, the method integrates spatial input, reference material, and standardised prompts to generate consistent descriptions of landscape character areas (LCAs) and facilitate scenario building. The results show that ChatGPT outputs align with core LCA components and maintain internal coherence, although variations in terminology and ecological specificity require expert review. A stakeholder role play using ChatGPT highlighted its potential for enhancing early-stage planning, education, and participatory dialogue. The limitations include the reliance on prompt quality, static inputs, and the absence of real-time community validation. Recommendations include piloting AI-assisted workflows in education and practice, adopting prompt protocols, and prioritising human oversight, both experts and stakeholders, to ensure contextual relevance and build trust. This research proposes a practical framework for embedding GenAI into planning processes while preserving the social and interpretive dimensions central to landscape governance. Full article
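
To make the "standardised prompts" idea tangible, here is a hypothetical template sketch: the field names, wording, and example values are invented for illustration and are not the prompt protocol used in the paper.

```python
# Hypothetical standardized-prompt builder: a fixed template combines spatial
# attributes and reference material so every area is described the same way.
PROMPT_TEMPLATE = (
    "You are assisting with a Landscape Character Assessment.\n"
    "Area name: {name}\n"
    "Key spatial attributes: {attributes}\n"
    "Reference notes: {references}\n"
    "Write a concise character description covering landform, land cover, "
    "settlement pattern, and perceptual qualities."
)

def build_prompt(name: str, attributes: list[str], references: str) -> str:
    return PROMPT_TEMPLATE.format(
        name=name,
        attributes="; ".join(attributes),
        references=references,
    )

print(build_prompt(
    "Coastal pine heath",
    ["flat coastal plain", "pine forest with heath understorey", "scattered summer cottages"],
    "Municipal comprehensive plan, 2023 habitat survey",
))
```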

31 pages, 2854 KB  
Article
ForestGPT and Beyond: A Trustworthy Domain-Specific Large Language Model Paving the Way to Forestry 5.0
by Florian Ehrlich-Sommer, Benno Eberhard and Andreas Holzinger
Electronics 2025, 14(18), 3583; https://doi.org/10.3390/electronics14183583 - 10 Sep 2025
Viewed by 914
Abstract
Large language models (LLMs) such as Chat Generative Pre-Trained Transformer (ChatGPT) are increasingly used across domains, yet their generic training data and propensity for hallucination limit reliability in safety-critical fields like forestry. This paper outlines the conception and prototype of ForestGPT, a domain-specialised assistant designed to support forest professionals while preserving expert oversight. It addresses two looming risks: unverified adoption of generic outputs and professional mistrust of opaque algorithms. We propose a four-level development path: (1) pre-training a transformer on curated forestry literature to create a baseline conversational tool; (2) augmenting it with Retrieval-Augmented Generation to ground answers in local and time-sensitive documents; (3) coupling growth simulators for scenario modeling; and (4) integrating continuous streams from sensors, drones and machinery for real-time decision support. A Level-1 prototype, deployed at Futa Expo 2025 via a mobile app, successfully guided multilingual visitors and demonstrated the feasibility of lightweight fine-tuning on open-weight checkpoints. We analyse technical challenges, multimodal grounding, continual learning, safety certification, and social barriers including data sovereignty, bias and change management. Results indicate that trustworthy, explainable, and accessible LLMs can accelerate the transition to Forestry 5.0, provided that human-in-the-loop guardrails remain central. Future work will extend ForestGPT with full RAG pipelines, simulator coupling and autonomous data ingestion. Whilst exemplified in forestry, a complex, safety-critical, and ecologically vital domain, the proposed architecture and development path are broadly transferable to other sectors that demand trustworthy, domain-specific language models under expert oversight. Full article
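
As a rough sketch of the Level-2 step (grounding answers via Retrieval-Augmented Generation), the snippet below retrieves the most relevant local document for a question before it would be passed to the model; TF-IDF similarity stands in for a real embedding model, and the documents and query are invented.

```python
# Minimal retrieval step of a RAG pipeline: pick the best-matching local
# document, which would then be inserted into the model prompt as context.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Bark beetle monitoring guidelines for Norway spruce stands.",
    "Thinning schedules for mixed beech and fir forests.",
    "Regional regulations on harvesting near protected water bodies.",
]
query = "When should I check spruce stands for bark beetle damage?"

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, doc_matrix).ravel()
best = scores.argmax()
print(f"Retrieved context (score {scores[best]:.2f}): {documents[best]}")
# The retrieved passage would be prepended to the user's question in the prompt.
```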

18 pages, 1609 KB  
Article
Using Large Language Models to Extract Structured Data from Health Coaching Dialogues: A Comparative Study of Code Generation Versus Direct Information Extraction
by Sai Sangameswara Aadithya Kanduri, Apoorv Prasad and Susan McRoy
BioMedInformatics 2025, 5(3), 50; https://doi.org/10.3390/biomedinformatics5030050 - 4 Sep 2025
Viewed by 943
Abstract
Background: Virtual coaching can help people adopt new healthful behaviors by encouraging them to set specific goals and helping them review their progress. One challenge in creating such systems is analyzing clients’ statements about their activities. Limiting people to selecting among predefined answers detracts from the naturalness of conversations and user engagement. Large Language Models (LLMs) offer the promise of covering a wide range of expressions. However, using an LLM for simple entity extraction would not necessarily perform better than functions coded in a programming language, while creating higher long-term costs. Methods: This study uses a real data set of annotated human coaching dialogs to develop LLM-based models for two training scenarios: one that generates pattern-matching functions and the other which does direct extraction. We use models of different sizes and complexity, including Meta-Llama, Gemma, and ChatGPT, and calculate their speed and accuracy. Results: LLM-generated pattern-matching functions took an average of 10 milliseconds (ms) per item as compared to 900 ms. (ChatGPT 3.5 Turbo) to 5 s (Llama 2 70B). The accuracy for pattern matching was 99% on real data, while LLM accuracy ranged from 90% (Llama 2 70B) to 100% (ChatGPT 3.5 Turbo), on both real and synthetically generated examples created for fine-tuning. Conclusions: These findings suggest promising directions for future research that combines both methods (reserving the LLM for cases that cannot be matched directly) or that use LLMs to generate synthetic training data with more expressive variety which can be used to improve the coverage of either generated codes or fine-tuned models. Full article
(This article belongs to the Section Methods in Biomedical Informatics)
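
A minimal sketch of the two compared approaches, using an invented coaching utterance: a generated pattern-matching function extracts the fields directly, while direct LLM extraction (stubbed in a comment) would send the same utterance to a model and parse its reply. This is illustrative only, not the study's code.

```python
# Pattern-matching extraction of activity duration from a client utterance.
import re

def extract_activity(utterance: str) -> dict | None:
    """Pattern-matching extractor of the kind an LLM could be asked to generate."""
    pattern = re.compile(
        r"(?:walked|ran|swam|cycled)\s+(?:for\s+)?(\d+)\s*(minutes|min|hours)",
        re.IGNORECASE,
    )
    match = pattern.search(utterance)
    if not match:
        return None
    return {"duration": int(match.group(1)), "unit": match.group(2).lower()}

print(extract_activity("I walked for 30 minutes after dinner on Tuesday"))
# -> {'duration': 30, 'unit': 'minutes'}

# Direct LLM extraction (not runnable here) would replace the regex with an API
# call asking the model to return the same JSON fields, which the paper found
# slower (roughly 900 ms to 5 s per item) but comparably accurate.
```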

17 pages, 1256 KB  
Systematic Review
Integrating Artificial Intelligence into Orthodontic Education: A Systematic Review and Meta-Analysis of Clinical Teaching Application
by Carlos M. Ardila, Eliana Pineda-Vélez and Anny Marcela Vivares Builes
J. Clin. Med. 2025, 14(15), 5487; https://doi.org/10.3390/jcm14155487 - 4 Aug 2025
Cited by 1 | Viewed by 1077
Abstract
Background/Objectives: Artificial intelligence (AI) is rapidly emerging as a transformative force in healthcare education, including orthodontics. This systematic review and meta-analysis aimed to evaluate the integration of AI into orthodontic training programs, focusing on its effectiveness in improving diagnostic accuracy, learner engagement, and the perceived quality of AI-generated educational content. Materials and Methods: A comprehensive literature search was conducted across PubMed, Scopus, Web of Science, and Embase through May 2025. Eligible studies involved AI-assisted educational interventions in orthodontics. A mixed-methods approach was applied, combining meta-analysis and narrative synthesis based on data availability and consistency. Results: Seven studies involving 1101 participants—including orthodontic students, clinicians, faculty, and program directors—were included. AI tools ranged from cephalometric landmarking platforms to ChatGPT-based learning modules. A fixed-effects meta-analysis using two studies yielded a pooled Global Quality Scale (GQS) score of 3.69 (95% CI: 3.58–3.80), indicating moderate perceived quality of AI-generated content (I2 = 64.5%). Due to methodological heterogeneity and limited statistical reporting in most studies, a narrative synthesis was used to summarize additional outcomes. AI tools enhanced diagnostic skills, learner autonomy, and perceived satisfaction, particularly among students and junior faculty. However, barriers such as limited curricular integration, lack of training, and faculty skepticism were recurrent. Conclusions: AI technologies, especially ChatGPT and digital cephalometry tools, show promise in orthodontic education. While learners demonstrate high acceptance, full integration is hindered by institutional and perceptual challenges. Strategic curricular reforms and targeted faculty development are needed to optimize AI adoption in clinical training. Full article
(This article belongs to the Special Issue Orthodontics: State of the Art and Perspectives)
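
For readers unfamiliar with the pooling step behind the reported Global Quality Scale estimate, the sketch below applies standard inverse-variance (fixed-effect) weighting to two hypothetical study results; the means and standard errors are invented, not the included studies' data.

```python
# Fixed-effect (inverse-variance) pooling of two hypothetical GQS estimates.
import numpy as np

means = np.array([3.6, 3.8])         # hypothetical per-study GQS means
std_errors = np.array([0.08, 0.07])  # hypothetical standard errors

weights = 1.0 / std_errors**2                       # inverse-variance weights
pooled = np.sum(weights * means) / np.sum(weights)  # fixed-effect estimate
pooled_se = np.sqrt(1.0 / np.sum(weights))
ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"Pooled GQS: {pooled:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```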

18 pages, 271 KB  
Article
AI Pioneers and Stragglers in Greece: Challenges, Gaps, and Opportunities for Journalists and Media
by Sotirios Triantafyllou, Andreas M. Panagopoulos and Panagiotis Kapos
Societies 2025, 15(8), 209; https://doi.org/10.3390/soc15080209 - 28 Jul 2025
Viewed by 1288
Abstract
Media organizations are experiencing ongoing transformation, increasingly driven by the advancement of AI technologies. This development has begun to link journalists with generative systems and synthetic technologies. Although newsrooms worldwide are exploring AI adoption to improve information sourcing, news production, and distribution, a gap exists between resource-rich organizations and those with limited means. Since ChatGPT 3.5 was released on 30 November 2022, Greek media and journalists have been able to use and explore AI technology. In this study, we examine the use of AI in Greek newsrooms, as well as journalists' reflections and concerns. Through qualitative analysis, our findings indicate that the adoption and integration of these tools in Greek newsrooms are marked by the lack of formal institutional policies, leading to a predominantly self-directed and individualized use of these technologies by journalists. Greek journalists engage with AI tools both professionally and personally, often without organizational guidance or formal training. In the absence of established guidelines, this may compromise the quality of journalism: individuals may produce content that is inconsistent with the media outlet's identity or that disseminates misinformation. Age, gender, and newsroom roles do not constitute limiting factors for this "experimentation", as survey participants showed familiarity with the technology. In some cases, however, the weaker quality of specific tools' output in Greek inhibits further exploration and use. All of this points to the need for immediate training, literacy efforts, and ethical frameworks.
17 pages, 1035 KB  
Article
Whether and When Could Generative AI Improve College Student Learning Engagement?
by Fei Guo, Lanwen Zhang, Tianle Shi and Hamish Coates
Behav. Sci. 2025, 15(8), 1011; https://doi.org/10.3390/bs15081011 - 25 Jul 2025
Viewed by 1161
Abstract
Generative AI (GenAI) technologies have been widely adopted by college students since the launch of ChatGPT in late 2022. While the debate about GenAI’s role in higher education continues, there is a lack of empirical evidence regarding whether and when these technologies can improve the learning experience for college students. This study utilizes data from a survey of 72,615 undergraduate students across 25 universities and colleges in China to explore the relationships between GenAI use and student learning engagement in different learning environments. The findings reveal that over sixty percent of Chinese college students use GenAI technologies in Academic Year 2023–2024, with academic use exceeding daily use. GenAI use in academic tasks is related to more cognitive and emotional engagement, though it may also reduce active learning activities and learning motivation. Furthermore, this study highlights that the role of GenAI varies across learning environments. The positive associations of GenAI and student engagement are most prominent for students in “high-challenge and high-support” learning contexts, while GenAI use is mostly negatively associated with student engagement in “low-challenge, high-support” courses. These findings suggest that while GenAI plays a valuable role in the learning process for college students, its effectiveness is fundamentally conditioned by the instructional design of human teachers. Full article
(This article belongs to the Special Issue Artificial Intelligence and Educational Psychology)

21 pages, 1745 KB  
Article
AI and Q Methodology in the Context of Using Online Escape Games in Chemistry Classes
by Markéta Dobečková, Ladislav Simon, Lucia Boldišová and Zita Jenisová
Educ. Sci. 2025, 15(8), 962; https://doi.org/10.3390/educsci15080962 - 25 Jul 2025
Viewed by 572
Abstract
The contemporary digital era has fundamentally reshaped pupil education, transforming learning into a dynamic environment with enhanced access to information. The focus shifts to the educator, who must employ teaching strategies, practices, and methods that engage and motivate pupils. New possibilities are emerging for adopting active pedagogical approaches, one example being the use of educational online escape games. In the theoretical part of this paper, we present online escape games as a tool that broadens pedagogical opportunities in primary school chemistry education. These activities are known to foster pupils' transversal or soft skills. We investigate the practical dimension of implementing escape games in education. This pilot study aims to analyse primary school teachers' perceptions of online escape games. We collected data using Q methodology and conducted the Q-sort through digital technology. Data analysis utilised both the PQMethod programme and ChatGPT-4o, with a subsequent comparison of their respective outputs. Although some numerical differences appeared between the ChatGPT and PQMethod analyses, both methods yielded the same factor saturation and overall results.
(This article belongs to the Special Issue Innovation in Teacher Education Practices)
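
As a small illustration of the factor-extraction step that tools like PQMethod automate, the sketch below correlates invented Q-sorts person-by-person and decomposes the correlation matrix; the participants, statements, and rankings are hypothetical.

```python
# Q-methodology factor extraction: correlate participants' Q-sorts, then take
# principal components of the person-by-person correlation matrix.
import numpy as np

# Rows = participants, columns = ranks assigned to statements (4 x 6, invented).
q_sorts = np.array([
    [ 2,  1,  0, -1, -2,  0],
    [ 2,  0,  1, -2, -1,  0],
    [-2,  0, -1,  1,  2,  0],
    [ 1,  2,  0, -1, -2,  0],
])

corr = np.corrcoef(q_sorts)                  # person-by-person correlations
eigvals, eigvecs = np.linalg.eigh(corr)      # principal-components extraction
order = np.argsort(eigvals)[::-1]
eigvals = np.clip(eigvals[order], 0, None)   # guard against tiny negative values
loadings = eigvecs[:, order] * np.sqrt(eigvals)

print("Variance explained by factor 1:", round(eigvals[0] / eigvals.sum(), 2))
print("Participant loadings on factor 1:", np.round(loadings[:, 0], 2))
```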

24 pages, 327 KB  
Article
Trust in Generative AI Tools: A Comparative Study of Higher Education Students, Teachers, and Researchers
by Elena Đerić, Domagoj Frank and Marin Milković
Information 2025, 16(7), 622; https://doi.org/10.3390/info16070622 - 21 Jul 2025
Cited by 5 | Viewed by 3542
Abstract
Generative AI (GenAI) tools, including ChatGPT, Microsoft Copilot, and Google Gemini, are rapidly reshaping higher education by transforming how students, educators, and researchers engage with learning, teaching, and academic work. Despite their growing presence, the adoption of GenAI remains inconsistent, largely due to the absence of universal guidelines and trust-related concerns. This study examines how trust, defined across three key dimensions (accuracy and relevance, privacy protection, and nonmaliciousness), influences the adoption and use of GenAI tools in academic environments. Using survey data from 823 participants across different academic roles, this study employs multiple regression analysis to explore the relationship between trust, user characteristics, and behavioral intention. The results reveal that trust is primarily experience-driven. Frequency of use, duration of use, and self-assessed proficiency significantly predict trust, whereas demographic factors, such as gender and academic role, have no significant influence. Furthermore, trust emerges as a strong predictor of behavioral intention to adopt GenAI tools. These findings reinforce trust calibration theory and extend the UTAUT2 framework to the context of GenAI in education. This study highlights that fostering appropriate trust through transparent policies, privacy safeguards, and practical training is critical for enabling responsible, ethical, and effective integration of GenAI into higher education. Full article
(This article belongs to the Section Artificial Intelligence)
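
An illustrative regression in the spirit of the analysis described above, predicting behavioral intention from trust and usage experience; the data are synthetic, the variable names are ours, and ordinary least squares stands in for the study's full model.

```python
# Synthetic multiple regression: behavioral intention ~ trust + usage frequency.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
trust = rng.normal(3.5, 0.7, n)             # 1-5 scale, synthetic
use_frequency = rng.integers(1, 6, n)       # 1-5 scale, synthetic
intention = 0.8 * trust + 0.1 * use_frequency + rng.normal(0, 0.5, n)

X = sm.add_constant(np.column_stack([trust, use_frequency]))
model = sm.OLS(intention, X).fit()
print(model.params)  # [intercept, trust coefficient, use_frequency coefficient]
print(f"R-squared: {model.rsquared:.2f}")
```
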
15 pages, 2948 KB  
Review
A Comprehensive Review of ChatGPT in Teaching and Learning Within Higher Education
by Samkelisiwe Purity Phokoye, Siphokazi Dlamini, Peggy Pinky Mthalane, Mthokozisi Luthuli and Smangele Pretty Moyane
Informatics 2025, 12(3), 74; https://doi.org/10.3390/informatics12030074 - 21 Jul 2025
Viewed by 3231
Abstract
Artificial intelligence (AI) has become an integral component of various sectors, including higher education. AI, particularly in the form of advanced chatbots like ChatGPT, is increasingly recognized as a valuable tool for engagement in higher education institutions (HEIs). This growing trend highlights the potential of AI to enhance student engagement and subsequently improve academic performance. Given this development, it is crucial for HEIs to delve deeper into the potential integration of AI-driven chatbots into educational practices. The aim of this study was to conduct a comprehensive review of the use of ChatGPT in teaching and learning within higher education. To offer a comprehensive viewpoint, it had two primary objectives: to identify the key factors influencing the adoption and acceptance of ChatGPT in higher education, and to investigate the roles of institutional policies and support systems in the acceptance of ChatGPT in higher education. A bibliometric analysis methodology was employed in this study, and a PRISMA diagram was used to explain the papers included in the analysis. The findings reveal the increasing adoption of ChatGPT within the higher education sector while also identifying the challenges faced during its implementation, ranging from technical issues to educational adaptations. Moreover, this review provides guidelines for various stakeholders to effectively integrate ChatGPT into higher education. Full article

18 pages, 465 KB  
Article
From Struggle to Mastery: AI-Powered Writing Skills in ESL Education
by John Jairo Jaramillo, Andrés Chiappe and Fabiola Sáez Delgado
Appl. Sci. 2025, 15(14), 8079; https://doi.org/10.3390/app15148079 - 21 Jul 2025
Viewed by 2705
Abstract
Despite reaching intermediate English proficiency, many bilingual secondary students in Colombia struggle with academic writing due to difficulties in organizing ideas and expressing arguments coherently. To address this issue, this study explores the integration of AI-powered tools—Grammarly and ChatGPT—within the Writing Workshop Instructional Model (WWIM) to enhance students’ writing skills. Conducted at a bilingual secondary school, the intervention targeted 10th grade ESL learners and focused on improving grammar accuracy, textual coherence, and organizational structure. Drawing on Galbraith’s model of writing as content generation, the study adopted a design-based research methodology, incorporating iterations of implementation, feedback, and refinement. The results indicate that combining WWIM with AI feedback significantly improved students’ academic writing performance. Learners reported greater confidence and engagement when revising drafts using automated suggestions. These findings highlight the pedagogical potential of integrating AI tools into writing instructions and offer practical implications for enhancing academic writing curricula in secondary ESL contexts. Full article
(This article belongs to the Special Issue Development of Advanced Models in Information Systems)

14 pages, 320 KB  
Article
Evaluating Large Language Models in Cardiology: A Comparative Study of ChatGPT, Claude, and Gemini
by Michele Danilo Pierri, Michele Galeazzi, Simone D’Alessio, Melissa Dottori, Irene Capodaglio, Christian Corinaldesi, Marco Marini and Marco Di Eusanio
Hearts 2025, 6(3), 19; https://doi.org/10.3390/hearts6030019 - 19 Jul 2025
Viewed by 3106
Abstract
Background: Large Language Models (LLMs) such as ChatGPT, Claude, and Gemini are being increasingly adopted in medicine; however, their reliability in cardiology remains underexplored. Purpose of the study: To compare the performance of three general-purpose LLMs in response to cardiology-related clinical queries. Study design: Seventy clinical prompts stratified by diagnostic phase (pre or post) and user profile (patient or physician) were submitted to ChatGPT, Claude, and Gemini. Three expert cardiologists, who were blinded to the model’s identity, rated each response on scientific accuracy, completeness, clarity, and coherence using a 5-point Likert scale. Statistical analysis included Kruskal–Wallis tests, Dunn’s post hoc comparisons, Kendall’s W, weighted kappa, and sensitivity analyses. Results: ChatGPT outperformed both Claude and Gemini across all criteria (mean scores: 3.7–4.2 vs. 3.4–4.0 and 2.9–3.7, respectively; p < 0.001). The inter-rater agreement was substantial (Kendall’s W: 0.61–0.71). Pre-diagnostic and patient-framed prompts received higher scores than post-diagnostic and physician-framed ones. Results remained robust across sensitivity analyses. Conclusions: Among the evaluated LLMs, ChatGPT demonstrated superior performance in generating clinically relevant cardiology responses. However, none of the models achieved maximal ratings, and the performance varied by context. These findings highlight the need for domain-specific fine-tuning and human oversight to ensure a safe clinical deployment. Full article
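
To show the shape of the headline comparison, here is a minimal Kruskal–Wallis sketch over invented 5-point ratings for the three models; the values are not the study's data, and a significant result would be followed by Dunn's post hoc tests as in the paper.

```python
# Kruskal-Wallis test across three groups of hypothetical Likert ratings.
from scipy.stats import kruskal

chatgpt_scores = [4, 5, 4, 4, 3, 5, 4, 4]
claude_scores  = [4, 3, 4, 3, 4, 3, 4, 3]
gemini_scores  = [3, 3, 2, 4, 3, 3, 2, 3]

statistic, p_value = kruskal(chatgpt_scores, claude_scores, gemini_scores)
print(f"Kruskal-Wallis H = {statistic:.2f}, p = {p_value:.4f}")
```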

18 pages, 529 KB  
Article
Learners’ Acceptance of ChatGPT in School
by Matthias Conrad and Henrik Nuebel
Educ. Sci. 2025, 15(7), 904; https://doi.org/10.3390/educsci15070904 - 16 Jul 2025
Viewed by 1595
Abstract
The rapid development of generative artificial intelligence (AI) systems such as ChatGPT (GPT-4) could transform teaching and learning. Yet, integrating these tools requires insight into what drives students to adopt them. Research on ChatGPT acceptance has so far focused on university settings, leaving school contexts underexplored. This study addresses the gap by surveying 506 upper secondary students in Baden-Württemberg, Germany, using the Unified Theory of Acceptance and Use of Technology 2 (UTAUT2). Performance expectancy, habit and hedonic motivation emerged as strong predictors of behavioral intention to use ChatGPT for school purposes. Adding personality traits and personal values such as conscientiousness or preference for challenge raised the model’s explanatory power only marginally. The findings suggest that students’ readiness to employ ChatGPT reflects the anticipated learning benefits and enjoyment rather than the avoidance of effort. The original UTAUT2 is therefore sufficient to explain students’ acceptance of ChatGPT in school contexts. The results could inform educators and policy makers aiming to foster the reflective and effective use of generative AI in instruction. Full article
(This article belongs to the Special Issue Dynamic Change: Shaping the Schools of Tomorrow in the Digital Age)
