Search Results (61)

Search Parameters:
Keywords = AI-assisted content generation

20 pages, 2555 KB  
Article
Joint Learning of Emotion and Singing Style for Enhanced Music Style Understanding
by Yuwen Chen, Jing Mao and Rui-Feng Wang
Sensors 2025, 25(24), 7575; https://doi.org/10.3390/s25247575 - 13 Dec 2025
Viewed by 244
Abstract
Understanding music styles is essential for music information retrieval, personalized recommendation, and AI-assisted content creation. However, existing work typically addresses tasks such as emotion classification and singing style classification independently, thereby neglecting the intrinsic relationships between them. In this study, we introduce a multi-task learning framework that jointly models these two tasks to enable explicit knowledge sharing and mutual enhancement. Our results indicate that joint optimization consistently outperforms single-task counterparts, demonstrating the value of leveraging inter-task correlations for more robust singing style analysis. To assess the generality and adaptability of the proposed framework, we evaluate it across various backbone architectures, including Transformer, TextCNN, and BERT, and observe stable performance improvements in all cases. Experiments on a benchmark dataset, which was self-constructed and collected with professional recording devices, further show that the framework not only achieves the best accuracy on both tasks under a singer-wise split, but also yields interpretable insights into the interplay between emotional expression and stylistic characteristics in vocal performance.
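The joint optimization this abstract describes can be sketched as a weighted sum of the two task losses over outputs of a shared backbone; the loss shapes, toy probabilities, and the `alpha` weight below are illustrative assumptions, not details from the paper:

```python
import math

def cross_entropy(probs, label):
    # Negative log-likelihood of the correct class.
    return -math.log(probs[label])

def joint_loss(emotion_probs, emotion_label, style_probs, style_label, alpha=0.5):
    # Multi-task objective: a weighted sum of the emotion-classification
    # and singing-style-classification losses. `alpha` is a hypothetical
    # balancing weight; both task heads would share one backbone.
    l_emotion = cross_entropy(emotion_probs, emotion_label)
    l_style = cross_entropy(style_probs, style_label)
    return alpha * l_emotion + (1 - alpha) * l_style

# Example: softmax outputs of the two task heads for one audio clip.
loss = joint_loss([0.7, 0.2, 0.1], 0, [0.6, 0.4], 1, alpha=0.5)
```

Minimizing this combined objective is what lets gradients from one task shape the shared representation used by the other.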

21 pages, 2831 KB  
Article
The Psychological Effects of AI Learning Assistants in Immersive Virtual Reality Environments
by Avgoustos Tsinakos, Nikoletta Teazi and Styliani Tsinakou
Information 2025, 16(12), 1062; https://doi.org/10.3390/info16121062 - 3 Dec 2025
Viewed by 542
Abstract
Artificial Intelligence (AI) and Virtual Reality (VR) are increasingly integrated into education, yet their combined psychological effects remain underexplored. This paper investigates the potential benefits and risks of AI-powered learning assistants within immersive VR environments. The study builds on insights from a previous pilot involving a virtual tour guide for Athens and proposes a case study with 52 high school students. In groups of three, students would use Oculus headsets with an AI assistant (in pre-programmed and AI-generated modes), explore content for a week, and complete questionnaires on usability, trust, and psychological impact. The analysis is expected to reveal a balance between positive outcomes, including greater engagement, motivation, and autonomy, and negative ones such as over-reliance, diminished critical thinking, and social isolation. The paper also identifies key psychological dynamics, including the critical role of social influence and teacher-led adoption, and the nuanced nature of student trust in AI-generated information. Ethical implications, such as data privacy and the digital divide, are also discussed. The study concludes that AI-VR can enrich learning, especially in cultural contexts, but requires safeguards for trust, ethics, and accessibility, along with further research on long-term effects, psychological impact, and cross-cultural and linguistic nuances.
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)

25 pages, 1910 KB  
Review
Natural Language Processing in Generating Industrial Documentation Within Industry 4.0/5.0
by Izabela Rojek, Olga Małolepsza, Mirosław Kozielski and Dariusz Mikołajewski
Appl. Sci. 2025, 15(23), 12662; https://doi.org/10.3390/app152312662 - 29 Nov 2025
Viewed by 570
Abstract
Deep learning (DL) methods have revolutionized natural language processing (NLP), enabling industrial documentation systems to process and generate text with high accuracy and fluency. Modern deep learning models, such as transformers and recurrent neural networks (RNNs), learn contextual relationships in text, making them ideal for analyzing and creating complex industrial documentation. Transformer-based architectures, such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), are ideally suited for tasks such as text summarization, content generation, and question answering, which are crucial for documentation systems. Pre-trained language models, tuned to specific industrial datasets, support domain-specific vocabulary, ensuring the generated documentation complies with industry standards. Deep learning-based systems can use sequential models, such as those used in machine translation, to generate documentation in multiple languages, promoting accessibility and global collaboration. Using attention mechanisms, these models identify and highlight critical sections of input data, resulting in the generation of accurate and concise documentation. Integration with optical character recognition (OCR) tools enables DL-based NLP systems to digitize and interpret legacy documents, streamlining the transition to automated workflows. Reinforcement learning and human feedback loops can enhance a system’s ability to generate consistent and contextually relevant text over time. These approaches are particularly effective in creating dynamic documentation that is automatically updated based on data from sensors, registers, or other sources in real time. The scalability of DL techniques enables industrial organizations to efficiently produce massive amounts of documentation, reducing manual effort and improving overall efficiency.
NLP has become a fundamental technology for automating the generation, maintenance, and personalization of industrial documentation within the Industry 4.0, 5.0, and emerging Industry 6.0 paradigms. Recent advances in large language models, search-assisted generation, and multimodal architectures have significantly improved the accuracy and contextualization of technical manuals, maintenance reports, and compliance documents. However, persistent challenges such as domain-specific terminology, data scarcity, and the risk of hallucinations highlight the limitations of current approaches in safety-critical manufacturing environments. This review synthesizes state-of-the-art methods, comparing rule-based, neural, and hybrid systems while assessing their effectiveness in addressing industrial requirements for reliability, traceability, and real-time adaptation. Human–AI collaboration and the integration of knowledge graphs are transforming documentation workflows as factories evolve toward cognitive and autonomous systems. The review included 32 articles published between 2018 and 2025. These bibliometric findings suggest that the high percentage of conference papers (69.6%) may indicate a field still in its conceptual phase, which contextualizes the literature’s emphasis on proposed architectures rather than on their industrial validation. Most research was conducted in computer science, suggesting early stages of technological maturity. The leading countries were China and India, though neither had large publication counts, and no leading researchers or affiliations were observed, suggesting significant research dispersion. However, the most frequently observed SDGs indicate a clear health context, focusing on “industry innovation and infrastructure” and “good health and well-being”.
(This article belongs to the Special Issue Emerging and Exponential Technologies in Industry 4.0)

23 pages, 1463 KB  
Article
Imagined Geographies of Sustainability: Rethinking Responsible Tourism Consumption Through the Utopias of Generation Z
by Semra Günay, Deniz Ateş Akkaya and Öznur Akgiş İlhan
Sustainability 2025, 17(22), 10280; https://doi.org/10.3390/su172210280 - 17 Nov 2025
Viewed by 501
Abstract
This study explores how Generation Z imagines sustainable tourism and how these imaginaries reflect values and norms associated with responsible tourism consumption. Data were collected from 59 university students in Türkiye who created written utopian narratives and AI-assisted visuals depicting their visions of sustainable destinations. Using thematic and visual content analysis, the findings reveal three dominant axes: (i) nature-integrated living practices, (ii) environmentally and community-oriented sustainability, and (iii) futuristic utopian visions. The results demonstrate that Generation Z imagines tourism not merely as consumption but as a lifestyle embedded in ecological harmony, collective participation, and cultural continuity. Their dual orientation, combining nostalgic “return to nature” imaginaries with techno-utopian futures, illustrates how young people reconcile local identity with technological innovation. By bridging the frameworks of tourism imaginaries and responsible tourism consumption, the study introduces an “imagination–consumption bridge,” conceptualizing imaginaries as cognitive and normative mediators that translate values into practices. Methodologically, the integration of AI-assisted visualization offers an innovative approach to capturing mental models and prototyping sustainable futures. Practically, the emphasis on equity, accessibility, and participatory governance provides insights for designing more inclusive and ethically grounded tourism policies. The study thus contributes theoretically, methodologically, and practically to advancing sustainable tourism research.
(This article belongs to the Special Issue Sustainable Consumption and Tourism Market Management)

20 pages, 1296 KB  
Article
Learning Path Recommendation Enhanced by Knowledge Tracing and Large Language Model
by Yunxuan Lin and Zhengyang Wu
Electronics 2025, 14(22), 4385; https://doi.org/10.3390/electronics14224385 - 10 Nov 2025
Viewed by 1358
Abstract
With the development of large language model (LLM) technology, AI-assisted education systems are coming into widespread use. Learning Path Recommendation (LPR) is an important task in personalized instructional scenarios. AI-assisted LPR is gaining traction for its ability to generate learning content based on a student’s personalized needs. However, a native LLM is prone to hallucination, which may prevent it from generating valid learning content; in addition, an LLM’s evaluations of a student’s knowledge state are usually conservative and carry a large margin of error. To address these issues, this work proposes a novel approach to LPR enhanced by knowledge tracing (KT) and an LLM. Our method operates in a “generate-and-retrieve” manner: the LLM acts as a pedagogical planner that generates contextual reference exercises based on the student’s needs. Subsequently, a retrieval mechanism constructs the concrete learning path by retrieving the top-N most semantically similar exercises from an established exercise bank, ensuring the recommendations are both pedagogically sound and practically available. The KT model plays the role of an evaluator in the iterative process. Rather than generating semantic instructions directly, it provides a quantitative, structured performance metric. Specifically, given a candidate learning path generated by the LLM, the KT model simulates the student’s knowledge state after completing the path and computes a knowledge promotion score. This score quantitatively measures the effectiveness of the proposed path for the current student, thereby guiding the refinement of subsequent recommendations. This iterative interaction between the KT model and the LLM continuously refines the candidate learning items until an optimal learning path is generated. Experimental validations on public datasets demonstrate that our model surpasses baseline methods.
(This article belongs to the Special Issue Data Mining and Recommender Systems)
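The “generate-and-retrieve” step above can be sketched as a top-N similarity search over an exercise bank; the 3-d toy embeddings, exercise IDs, and plain cosine ranking below are illustrative assumptions rather than the paper’s implementation:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def retrieve_top_n(reference_embedding, exercise_bank, n=2):
    # Rank the bank by semantic similarity to the LLM-generated
    # reference exercise and keep the N closest real exercises,
    # so every recommended item actually exists in the bank.
    ranked = sorted(exercise_bank.items(),
                    key=lambda kv: cosine(reference_embedding, kv[1]),
                    reverse=True)
    return [ex_id for ex_id, _ in ranked[:n]]

# Hypothetical embeddings for exercises in an established bank.
bank = {
    "ex_fractions_01": [0.9, 0.1, 0.0],
    "ex_algebra_07": [0.1, 0.9, 0.2],
    "ex_fractions_02": [0.8, 0.2, 0.1],
}
path = retrieve_top_n([1.0, 0.1, 0.0], bank, n=2)
```

Grounding the path in retrieved items, rather than in raw LLM text, is what guards against hallucinated exercises.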

21 pages, 2761 KB  
Article
The Development and Evaluation of a Retrieval-Augmented Generation Large Language Model Virtual Assistant for Postoperative Instructions
by Syed Ali Haider, Srinivasagam Prabha, Cesar Abraham Gomez Cabello, Ariana Genovese, Bernardo Collaco, Nadia Wood, James London, Sanjay Bagaria, Cui Tao and Antonio Jorge Forte
Bioengineering 2025, 12(11), 1219; https://doi.org/10.3390/bioengineering12111219 - 7 Nov 2025
Viewed by 1372
Abstract
Background: During postoperative recovery, patients and their caregivers often lack crucial information, leading to numerous repetitive inquiries that burden healthcare providers. Traditional discharge materials, including paper handouts and patient portals, are often static, overwhelming, or underutilized, contributing to unnecessary ER visits and overall healthcare overutilization. Conversational chatbots offer a solution, but Natural Language Processing (NLP) systems are often inflexible and limited in understanding, while powerful Large Language Models (LLMs) are prone to generating “hallucinations”. Objective: To combine the deterministic framework of traditional NLP with the probabilistic capabilities of LLMs, we developed the AI Virtual Assistant (AIVA) Platform. This system utilizes a retrieval-augmented generation (RAG) architecture, integrating Gemini 2.0 Flash with a medically verified knowledge base via Google Vertex AI, to safely deliver dynamic, patient-facing postoperative guidance grounded in validated clinical content. Methods: The AIVA Platform was evaluated through 750 simulated patient interactions derived from 250 unique postoperative queries across 20 high-frequency recovery domains. Three blinded physician reviewers assessed formal system performance, evaluating classification metrics (accuracy, precision, recall, F1-score), relevance (SSI Index), completeness, and consistency (5-point Likert scale). Safety guardrails were tested with 120 out-of-scope queries and 30 emergency escalation scenarios. Additionally, groundedness, fluency, and readability were assessed using automated LLM metrics. Results: The system achieved 98.4% classification accuracy (precision 1.0, recall 0.98, F1-score 0.9899). Physician reviews showed high completeness (4.83/5), consistency (4.49/5), and relevance (SSI Index 2.68/3). Safety guardrails successfully identified 100% of out-of-scope and escalation scenarios. Groundedness evaluations demonstrated strong context precision (0.951), recall (0.910), and faithfulness (0.956), with 95.6% verification agreement. While fluency and semantic alignment were high (BERTScore F1 0.9013, ROUGE-1 0.8377), readability was at an 11th-grade level (Flesch–Kincaid 46.34). Conclusion: Simulated testing demonstrated strong technical accuracy, safety, and clinical relevance in postoperative care. The architecture effectively balances flexibility and safety, addressing key limitations of standalone NLP and LLMs. While readability remains a challenge, these findings establish a solid foundation, demonstrating readiness for clinical trials and real-world testing within surgical care pathways.
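The reported classification metrics are internally consistent: F1 is the harmonic mean of precision and recall, which can be checked directly against the abstract’s figures:

```python
def f1_score(precision, recall):
    # F1 is the harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)

# Precision and recall reported for the AIVA classification evaluation.
f1 = f1_score(1.0, 0.98)
```

With precision 1.0 and recall 0.98, the formula reproduces the stated F1-score of 0.9899 to four decimal places.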

23 pages, 8644 KB  
Article
Understanding What the Brain Sees: Semantic Recognition from EEG Responses to Visual Stimuli Using Transformer
by Ahmed Fares
AI 2025, 6(11), 288; https://doi.org/10.3390/ai6110288 - 7 Nov 2025
Viewed by 1127
Abstract
Understanding how the human brain processes and interprets multimedia content represents a frontier challenge in neuroscience and artificial intelligence. This study introduces a novel approach to decode semantic information from electroencephalogram (EEG) signals recorded during visual stimulus perception. We present DCT-ViT, a spatial–temporal transformer architecture that pioneers automated semantic recognition from brain activity patterns, advancing beyond conventional brain state classification to interpret higher-level cognitive understanding. Our methodology addresses three fundamental innovations: First, we develop a topology-preserving 2D electrode mapping that, combined with temporal indexing, generates 3D spatial–temporal representations capturing both anatomical relationships and dynamic neural correlations. Second, we integrate discrete cosine transform (DCT) embeddings with standard patch and positional embeddings in the transformer architecture, enabling frequency-domain analysis that quantifies activation variability across spectral bands and enhances attention mechanisms. Third, we introduce the Semantics-EEG dataset comprising ten semantic categories extracted from visual stimuli, providing a benchmark for brain-perceived semantic recognition research. The proposed DCT-ViT model achieves 72.28% recognition accuracy on Semantics-EEG, substantially outperforming LSTM-based and attention-augmented recurrent baselines. Ablation studies demonstrate that DCT embeddings contribute meaningfully to model performance, validating their effectiveness in capturing frequency-specific neural signatures. Interpretability analyses reveal neurobiologically plausible attention patterns, with visual semantics activating occipital–parietal regions and abstract concepts engaging frontal–temporal networks, consistent with established cognitive neuroscience models.
To address systematic misclassification between perceptually similar categories, we develop a hierarchical classification framework with boundary refinement mechanisms. This approach substantially reduces confusion between overlapping semantic categories, elevating overall accuracy to 76.15%. Robustness evaluations demonstrate superior noise resilience, effective cross-subject generalization, and few-shot transfer capabilities to novel categories. This work establishes the technical foundation for brain–computer interfaces capable of decoding semantic understanding, with implications for assistive technologies, cognitive assessment, and human–AI interaction. Both the Semantics-EEG dataset and DCT-ViT implementation are publicly released to facilitate reproducibility and advance research in neural semantic decoding.
(This article belongs to the Special Issue AI in Bio and Healthcare Informatics)
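The frequency-domain embedding at the heart of DCT-ViT rests on the discrete cosine transform; a minimal orthonormal DCT-II over a short 1-D signal is sketched below. The normalization convention is an assumption, since the abstract does not specify one:

```python
import math

def dct_ii(signal):
    # Orthonormal DCT-II: projects a 1-D signal onto cosine basis
    # functions, yielding one coefficient per frequency band.
    n = len(signal)
    coeffs = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        coeffs.append(scale * s)
    return coeffs

# A constant signal concentrates all energy in the DC (k = 0) coefficient;
# an oscillating EEG-like signal would spread energy to higher bands.
flat = dct_ii([1.0, 1.0, 1.0, 1.0])
```

Per-band coefficients of this kind are what allow a transformer’s attention to be conditioned on spectral, not just temporal, structure.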

25 pages, 2253 KB  
Entry
Artificial Intelligence in Higher Education: A State-of-the-Art Overview of Pedagogical Integrity, Artificial Intelligence Literacy, and Policy Integration
by Manolis Adamakis and Theodoros Rachiotis
Encyclopedia 2025, 5(4), 180; https://doi.org/10.3390/encyclopedia5040180 - 28 Oct 2025
Viewed by 4950
Definition
Artificial Intelligence (AI), particularly Generative AI (GenAI) and Large Language Models (LLMs), is rapidly reshaping higher education by transforming teaching, learning, assessment, research, and institutional management. This entry provides a state-of-the-art, comprehensive, evidence-based synthesis of established AI applications and their implications within the higher education landscape, emphasizing mature knowledge aimed at educators, researchers, and policymakers. AI technologies now support personalized learning pathways, enhance instructional efficiency, and improve academic productivity by facilitating tasks such as automated grading, adaptive feedback, and academic writing assistance. The widespread adoption of AI tools among students and faculty members has created a critical need for AI literacy—encompassing not only technical proficiency but also critical evaluation, ethical awareness, and metacognitive engagement with AI-generated content. Key opportunities include the deployment of adaptive tutoring and real-time feedback mechanisms that tailor instruction to individual learning trajectories; automated content generation, grading assistance, and administrative workflow optimization that reduce faculty workload; and AI-driven analytics that inform curriculum design and early intervention to improve student outcomes. At the same time, AI poses challenges related to academic integrity (e.g., plagiarism and misuse of generative content), algorithmic bias and data privacy, digital divides that exacerbate inequities, and risks of “cognitive debt” whereby over-reliance on AI tools may degrade working memory, creativity, and executive function. The lack of standardized AI policies and fragmented institutional governance highlight the urgent necessity for transparent frameworks that balance technological adoption with academic values.
Anchored in several foundational pillars (such as a brief description of AI in higher education, AI literacy, AI tools for educators and teaching staff, ethical use of AI, and institutional integration of AI in higher education), this entry emphasizes that AI is neither a panacea nor an intrinsic threat but a “technology of selection” whose impact depends on the deliberate choices of educators, institutions, and learners. When embraced with ethical discernment and educational accountability, AI holds the potential to foster a more inclusive, efficient, and democratic future for higher education; however, its success depends on purposeful integration, balancing innovation with academic values such as integrity, creativity, and inclusivity.
(This article belongs to the Collection Encyclopedia of Social Sciences)

22 pages, 1749 KB  
Review
How to Conduct AI-Assisted (Large Language Model-Assisted) Content Analysis in Information Science and Cyber Security Research
by Monica Therese Whitty
Electronics 2025, 14(20), 4104; https://doi.org/10.3390/electronics14204104 - 20 Oct 2025
Viewed by 1299
Abstract
The advent of Large Language Models (LLMs) has revolutionised natural language processing, providing unprecedented capabilities in text generation and analysis. This paper examines the utility of Artificial-Intelligence-assisted (AI-assisted) content analysis (CA), supported by LLMs, as a methodological tool for research in Information Science (IS) and Cyber Security. It reviews current applications, methodological practices, and challenges, illustrating how LLMs can augment traditional approaches to qualitative data analysis. Key distinctions between CA and other qualitative methods are outlined, alongside the traditional steps involved in CA. To demonstrate relevance, examples from Information Science and Cyber Security are highlighted, along with a new example detailing the steps involved. A hybrid workflow is proposed that integrates human oversight with AI capabilities, grounded in the principles of Responsible AI. Within this model, human researchers remain central to guiding research design, interpretation, and ethical decision-making, while LLMs support efficiency and scalability. Both deductive and inductive AI-assisted frameworks are introduced. Overall, AI-assisted CA is presented as a valuable approach for advancing rigorous, replicable, and ethical scholarship in Information Science and Cyber Security. Building on prior LLM-assisted coding work, the paper proposes that this hybrid model is preferable to fully manual content analysis.
(This article belongs to the Special Issue Trends in Information Systems and Security)
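The hybrid deductive workflow described above can be sketched as machine-proposed codes that a human reviewer confirms or overrides; the keyword codebook, segment text, and empty-result review rule below are illustrative assumptions, not the paper’s protocol:

```python
def propose_codes(segment, codebook):
    # Deductive pass: assign every code whose keywords appear in the
    # segment. A segment matching no code is routed to the human coder,
    # keeping the researcher central to interpretation.
    segment_lower = segment.lower()
    codes = [code for code, keywords in codebook.items()
             if any(k in segment_lower for k in keywords)]
    return {"segment": segment,
            "codes": codes,
            "needs_human_review": not codes}

# Hypothetical cyber-security codebook for a deductive analysis.
codebook = {
    "phishing": ["phishing", "spoofed email"],
    "credential_theft": ["password", "credentials"],
}
coded = propose_codes("The attacker sent a spoofed email asking for my password.",
                      codebook)
```

In a real AI-assisted pipeline an LLM would replace the keyword matcher, but the oversight structure (machine proposal, human adjudication) stays the same.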
19 pages, 2260 KB  
Systematic Review
Enhancing Systematic Review Efficiency with AIGC: Applications of Perception Data in Built Environment Audits
by Anjun Tao, Zhijie Yang and Wenbo Ou
Buildings 2025, 15(20), 3684; https://doi.org/10.3390/buildings15203684 - 13 Oct 2025
Viewed by 497
Abstract
With the growing use of human perception data streams in audits of the built environment, their value for enhancing objectivity and human-centeredness has become increasingly evident. This review synthesizes 63 publications through July 2024, providing a comprehensive analysis of perception data types, collection modalities, and spatial strategies. It introduces an Artificial Intelligence (AI)-enabled framework and utilizes Artificial Intelligence-Generated Content (AIGC) to assist literature retrieval and analysis, improving efficiency and transparency. The results indicate that heart rate and mood are currently the most frequently used perception data types in built-environment audits. Existing audit practices primarily focus on roads, green spaces, and residential areas in community- and block-scale settings, with data choices varying by spatial typology. This review advances a systematic understanding of the application of perception data streams in built-environment audits and offers evidence-based recommendations for data collection, thereby providing stronger data support for future research.
(This article belongs to the Section Architectural Design, Urban Science, and Real Estate)

16 pages, 452 KB  
Article
Students’ Trust in AI and Their Verification Strategies: A Case Study at Camilo José Cela University
by David Martín-Moncunill and Daniel Alonso Martínez
Educ. Sci. 2025, 15(10), 1307; https://doi.org/10.3390/educsci15101307 - 2 Oct 2025
Viewed by 3431
Abstract
Trust plays a pivotal role in individuals’ interactions with technological systems, and those incorporating artificial intelligence present significantly greater challenges than traditional systems. The current landscape of higher education is increasingly shaped by the integration of AI assistants into students’ classroom experiences. Their appropriate use is closely tied to the level of trust placed in these tools, as well as the strategies adopted to critically assess the accuracy of AI-generated content. However, scholarly attention to this dimension remains limited. To explore these dynamics, this study applied the POTDAI evaluation framework to a sample of 132 engineering and social sciences students at Camilo José Cela University in Madrid, Spain. The findings reveal a general lack of trust in AI assistants despite their extensive use, common reliance on inadequate verification methods, and a notable skepticism regarding professors’ ability to detect AI-related errors. Additionally, students demonstrated a concerning misperception of the capabilities of different AI models, often favoring less advanced or less appropriate tools. These results underscore the urgent need to establish a reliable verification protocol accessible to both students and faculty, and to further investigate the reasons why students opt for limited tools over the more powerful alternatives made available to them.

18 pages, 1151 KB  
Article
Expanding the Team: Integrating Generative Artificial Intelligence into the Assessment Development Process
by Toni A. May, Kathleen Provinzano, Kristin L. K. Koskey, Connor J. Sondergeld, Gregory E. Stone, James N. Archer and Naorah Rimkunas
Appl. Sci. 2025, 15(18), 9976; https://doi.org/10.3390/app15189976 - 11 Sep 2025
Viewed by 1012
Abstract
Effective assessment development requires collaboration between multidisciplinary team members, and the process is often time-intensive. This study illustrates a framework for integrating generative artificial intelligence (GenAI) as a collaborator in assessment design, rather than a fully automated tool. The context was the development of a 12-item multiple-choice test for social work interns in a school-based training program, guided by design-based research (DBR) principles. Using ChatGPT to generate draft items, psychometricians refined outputs through structured prompts and then convened a panel of five subject matter experts to evaluate content validity. Results showed that while most AI-assisted items were relevant, 75% required modification, with revisions focused on response option clarity, alignment with learning objectives, and item stems. These findings provide initial evidence that GenAI can serve as a productive collaborator in assessment development when embedded in a human-in-the-loop process, while underscoring the need for continued expert oversight and further validation research.
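Expert-panel relevance ratings of the kind used above are commonly summarized with an item-level content validity index (I-CVI): the share of experts rating an item relevant. The 4-point scale, threshold, and sample ratings below are conventional assumptions for illustration; the paper does not report its exact criterion:

```python
def item_cvi(ratings, relevant_threshold=3):
    # I-CVI: proportion of experts rating the item as relevant
    # (conventionally 3 or 4 on a 4-point relevance scale).
    relevant = sum(1 for r in ratings if r >= relevant_threshold)
    return relevant / len(ratings)

# Hypothetical ratings from a five-expert panel for one draft item.
cvi = item_cvi([4, 4, 3, 2, 4])
```

An item falling below an agreed I-CVI cutoff would be flagged for the kind of revision the study reports for 75% of its AI-assisted items.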

28 pages, 2443 KB  
Article
Exploring the Impact of Generative AI ChatGPT on Critical Thinking in Higher Education: Passive AI-Directed Use or Human–AI Supported Collaboration?
by Nesma Ragab Nasr, Chih-Hsiung Tu, Jennifer Werner, Tonia Bauer, Cherng-Jyh Yen and Laura Sujo-Montes
Educ. Sci. 2025, 15(9), 1198; https://doi.org/10.3390/educsci15091198 - 11 Sep 2025
Cited by 2 | Viewed by 12331
Abstract
Generative AI is weaving into the fabric of many aspects of human life through its transformative power to mimic human-generated content. It is not a mere technology; it functions as a generative virtual assistant, raising concerns about its impact on cognition and critical thinking. This mixed-methods study investigates how GenAI ChatGPT affects critical thinking across cognitive presence (CP) phases. Forty students from a four-year university in the southwestern United States completed a survey; six provided their ChatGPT scripts, and two engaged in semi-structured interviews. Students’ self-reported survey responses suggested that GenAI ChatGPT improved triggering events (M = 3.60), exploration (M = 3.70), and integration (M = 3.60); however, responses remained neutral during the resolution stage. Analysis of students’ ChatGPT scripts revealed two modes of interaction: passive, AI-directed use and collaborative, AI-supported interaction. A resolution gap was identified; nonetheless, the interview results revealed that when GenAI ChatGPT was used with guidance, all four stages of cognitive presence were completed, leading to enhanced critical thinking and a reconceptualization of ChatGPT as a more knowledgeable other. This research suggests that the effective use of GenAI in education depends on the quality of human–AI interaction. Future directions should position human and machine intelligence not as a substitution but as co-participation, opening new epistemic horizons while reconfiguring assessment practices so that human oversight, critical inquiry, and reflective thinking remain at the center of learning.
(This article belongs to the Section Technology Enhanced Education)

17 pages, 1256 KB  
Systematic Review
Integrating Artificial Intelligence into Orthodontic Education: A Systematic Review and Meta-Analysis of Clinical Teaching Application
by Carlos M. Ardila, Eliana Pineda-Vélez and Anny Marcela Vivares Builes
J. Clin. Med. 2025, 14(15), 5487; https://doi.org/10.3390/jcm14155487 - 4 Aug 2025
Cited by 1 | Viewed by 1992
Abstract
Background/Objectives: Artificial intelligence (AI) is rapidly emerging as a transformative force in healthcare education, including orthodontics. This systematic review and meta-analysis aimed to evaluate the integration of AI into orthodontic training programs, focusing on its effectiveness in improving diagnostic accuracy, learner engagement, and the perceived quality of AI-generated educational content. Materials and Methods: A comprehensive literature search was conducted across PubMed, Scopus, Web of Science, and Embase through May 2025. Eligible studies involved AI-assisted educational interventions in orthodontics. A mixed-methods approach was applied, combining meta-analysis and narrative synthesis based on data availability and consistency. Results: Seven studies involving 1101 participants—including orthodontic students, clinicians, faculty, and program directors—were included. AI tools ranged from cephalometric landmarking platforms to ChatGPT-based learning modules. A fixed-effects meta-analysis using two studies yielded a pooled Global Quality Scale (GQS) score of 3.69 (95% CI: 3.58–3.80), indicating moderate perceived quality of AI-generated content (I² = 64.5%). Due to methodological heterogeneity and limited statistical reporting in most studies, a narrative synthesis was used to summarize additional outcomes. AI tools enhanced diagnostic skills, learner autonomy, and perceived satisfaction, particularly among students and junior faculty. However, barriers such as limited curricular integration, lack of training, and faculty skepticism were recurrent. Conclusions: AI technologies, especially ChatGPT and digital cephalometry tools, show promise in orthodontic education. While learners demonstrate high acceptance, full integration is hindered by institutional and perceptual challenges. Strategic curricular reforms and targeted faculty development are needed to optimize AI adoption in clinical training.
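The pooled GQS score reported in this abstract comes from inverse-variance fixed-effects pooling, the standard two-study meta-analysis calculation. A minimal sketch of how such a pooled estimate, 95% CI, and I² are computed (the per-study means and standard errors below are hypothetical, since the abstract does not report the individual study inputs):

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance (fixed-effects) pooling with a 95% CI and I^2."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    # Cochran's Q measures between-study spread; I^2 expresses the share of
    # total variability beyond what sampling error alone would produce.
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    i_squared = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return pooled, ci, i_squared

# Hypothetical GQS means and standard errors for two studies
pooled, ci, i2 = fixed_effect_pool([3.6, 3.8], [0.08, 0.08])
```

Note that a fixed-effects model with an I² around 65%, as reported here, is a debatable modeling choice; many analysts would switch to a random-effects model at that level of heterogeneity.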
(This article belongs to the Special Issue Orthodontics: State of the Art and Perspectives)

25 pages, 5488 KB  
Article
Biased by Design? Evaluating Bias and Behavioral Diversity in LLM Annotation of Real-World and Synthetic Hotel Reviews
by Maria C. Voutsa, Nicolas Tsapatsoulis and Constantinos Djouvas
AI 2025, 6(8), 178; https://doi.org/10.3390/ai6080178 - 4 Aug 2025
Cited by 3 | Viewed by 3158
Abstract
As large language models (LLMs) gain traction among researchers and practitioners, particularly in digital marketing for tasks such as customer feedback analysis and automated communication, concerns remain about the reliability and consistency of their outputs. This study investigates annotation bias in LLMs by comparing human and AI-generated annotation labels across sentiment, topic, and aspect dimensions in hotel booking reviews. Using the HRAST dataset, which includes 23,114 real user-generated review sentences and a synthetically generated corpus of 2000 LLM-authored sentences, we evaluate inter-annotator agreement between a human expert and three LLMs (ChatGPT-3.5, ChatGPT-4, and ChatGPT-4-mini) as a proxy for assessing annotation bias. Our findings show high agreement among LLMs, especially on synthetic data, but only moderate to fair alignment with human annotations, particularly in sentiment and aspect-based sentiment analysis. LLMs display a pronounced neutrality bias, often defaulting to neutral sentiment in ambiguous cases. Moreover, annotation behavior varies notably with task design, as manual, one-to-one prompting produces higher agreement with human labels than automated batch processing. The study identifies three distinct AI biases—repetition bias, behavioral bias, and neutrality bias—that shape annotation outcomes. These findings highlight how dataset complexity and annotation mode influence LLM behavior, offering important theoretical, methodological, and practical implications for AI-assisted annotation and synthetic content generation.
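Inter-annotator agreement of the kind this abstract describes (and whose "moderate to fair" qualifiers echo the conventional kappa benchmarks) is commonly quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch with hypothetical sentiment labels; the abstract does not state which agreement statistic the authors used:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators match
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement: chance overlap given each annotator's label rates
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(counts_a) | set(counts_b)
    )
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Hypothetical human vs. LLM sentiment labels for four review sentences
kappa = cohens_kappa(["pos", "neg", "neu", "pos"],
                     ["pos", "neg", "pos", "neg"])
```

The neutrality bias described above would show up in such a comparison as the LLM's label distribution skewing toward "neu", inflating expected agreement on that class and depressing kappa.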
(This article belongs to the Special Issue AI Bias in the Media and Beyond)
