Search Results (109)

Search Parameters:
Keywords = artificial intelligence-assisted language learning

18 pages, 2029 KB  
Review
Artificial Intelligence in Head and Neck Surgical Oncology: A State-of-the-Art Review
by Steven X. Chen, Maria Feucht, Aditya Bhatt and Janice L. Farlow
J. Clin. Med. 2026, 15(7), 2767; https://doi.org/10.3390/jcm15072767 - 6 Apr 2026
Viewed by 306
Abstract
Artificial intelligence (AI) is rapidly reshaping head and neck surgical oncology by augmenting decision-making across the full perioperative continuum. This state-of-the-art review aims to provide head and neck surgical oncologists with a conceptual framework for understanding and critically appraising AI tools entering clinical practice, summarizing how machine learning, deep learning, and generative AI are being integrated into contemporary surgical workflows. Preoperative applications include detection of occult nodal metastasis and extranodal extension. Intraoperative innovations include augmented reality-assisted navigation, real-time margin assessment, and improved visual clarity and tissue handling on robotic platforms. Postoperatively, AI can predict complications such as free flap failure and oncologic outcomes. Large language models are being operationalized for clinician-facing applications such as documentation and inbox support, as well as patient-facing education. Despite promising results, broad clinical deployment remains limited by concerns about privacy, validation, reliability, safety, and ethics. Widespread adoption will require prospective clinical trials, robust governance, and human-centered workflows that ensure AI remains a safe, assistive copilot. Full article
(This article belongs to the Special Issue Clinical Advances in Head and Neck Cancer Diagnostics and Treatment)

22 pages, 1060 KB  
Systematic Review
Artificial Intelligence in EFL Speaking Instruction: A Systematic Review of Pedagogical Design, Affective Conditions and Instructional Input
by Sareen Kaur Bhar
Encyclopedia 2026, 6(4), 74; https://doi.org/10.3390/encyclopedia6040074 - 27 Mar 2026
Viewed by 566
Abstract
Speaking proficiency remains one of the most challenging skills for learners of English as a Foreign Language (EFL), particularly in contexts where sustained spoken interaction is limited. This systematic review synthesises 36 empirical studies (2015–2025) identified through a PRISMA-guided Scopus search to examine how artificial intelligence (AI)-mediated instruction supports EFL speaking development. The included studies were analysed according to AI modality, pedagogical integration, instructional input characteristics, and linguistic and affective outcomes. Findings indicate that AI tools—such as chatbots, automatic speech recognition systems, and large language models—consistently support affective outcomes, including reduced speaking anxiety and increased willingness to communicate. Improvements in fluency, pronunciation, and accuracy were frequently reported, particularly when AI tools were embedded within task-based and pedagogically structured instructional designs. However, evidence for sustained development of higher-order communicative competence was more variable. The review proposes a mediated input framework conceptualising AI as a design-sensitive instructional resource rather than an autonomous teaching agent. Full article
(This article belongs to the Section Arts & Humanities)

24 pages, 5711 KB  
Article
Image Captioning Through Deep Learning: An Adaptation of the BLIP-2 Model to Arabic
by Ahmed Fathy Abdelaal, Enrique Costa-Montenegro, Silvia García-Méndez, Hatem Mohamed Noaman and Mohammed Kayed
Appl. Sci. 2026, 16(7), 3226; https://doi.org/10.3390/app16073226 - 26 Mar 2026
Viewed by 365
Abstract
Image captioning using deep learning bridges computer vision and natural language processing, enabling machines to generate human-like textual descriptions for images. While significant progress has been made in English, in Arabic, the image captioning field remains under-explored due to the language’s morphological complexity, right-to-left script, and scarcity of annotated datasets. This paper addresses this gap by adapting the BLIP-2 (Bootstrapped Language—Image Pre-training) model for Arabic caption generation, leveraging machine-translated datasets, like Flickr 30k, to overcome resource limitations. BLIP-2 combines a vision transformer (ViT) for image encoding and a CamelBERT large language model (LLM) for text generation, enhanced by a lightweight Querying Transformer (Q-Former) for cross-modal alignment. Despite challenges such as translation artifacts and linguistic nuances, our experiments demonstrate promising results in generating coherent Arabic captions. In short, this study highlights the potential of BLIP-2 for multilingual applications while underscoring the need for native Arabic datasets and further optimization. Ultimately, this work contributes to advancing inclusive artificial intelligence technologies for Arabic-speaking communities, with applications in assistive tools, education, and content creation. Full article
(This article belongs to the Section Computing and Artificial Intelligence)

35 pages, 2917 KB  
Article
Generative AI-Assisted Automation of Clinical Data Processing: A Methodological Framework for Streamlining Behavioral Research Workflows
by Marta Lilia Eraña-Díaz, Alejandra Rosales-Lagarde, Iván Arango-de-Montis and José Alejandro Velázquez-Monzón
Informatics 2026, 13(4), 48; https://doi.org/10.3390/informatics13040048 - 25 Mar 2026
Viewed by 639
Abstract
This article presents a methodological framework for automating clinical data processing workflows using Generative Artificial Intelligence (AI) as an interactive co-developer. We demonstrate how Large Language Models (LLMs), specifically ChatGPT and Claude, can assist researchers in designing, implementing, and deploying complete ETL (Extract, Transform, Load) pipelines without requiring advanced programming or DevOps expertise. Using a dataset of 102 participants from a nonverbal expression study as a proof-of-concept, we show how AI-assisted automation transforms FaceReader video analysis outputs during the Cyberball paradigm into structured, analysis-ready datasets through containerized workflows orchestrated via Docker and n8n. The resulting framework successfully processes all 102 datasets, generating machine learning outputs to validate pipeline execution stability (rather than clinical predictivity), and deploys interactive visualization dashboards, tasks that would normally require significant manual effort and specialized technical expertise. This work establishes a replicable methodology for integrating Generative AI into research data management workflows, with implications for accelerating scientific discovery across behavioral and medical research domains. Full article
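The ETL pattern the framework automates can be sketched in a few lines; the schema and field names below are hypothetical illustrations, not FaceReader's actual export format or the paper's pipeline:

```python
import csv
import io

def etl(raw_csv, min_confidence=0.5):
    """Extract rows from a CSV export, transform them (filter by a
    hypothetical confidence field, normalize identifiers), and load
    them into a list of analysis-ready dicts."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    out = []
    for row in reader:
        if float(row["confidence"]) >= min_confidence:
            out.append({
                "participant": row["id"].strip().upper(),
                "emotion": row["emotion"].strip().lower(),
                "confidence": float(row["confidence"]),
            })
    return out
```

In the paper's setting this kind of transform would run inside a Docker container triggered by n8n; the sketch shows only the transform step itself.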

20 pages, 1088 KB  
Article
Intelligent Assistant with Artificial Intelligence for Language Learning
by Diego De-La-Cruz-Salcedo, Edgar Peña-Casas, Monica Salcedo-Hernandez, Myriam Guichard-Huasasquiche and Jose Salcedo-Hernandez
Appl. Sci. 2026, 16(6), 3072; https://doi.org/10.3390/app16063072 - 22 Mar 2026
Viewed by 441
Abstract
An intelligent assistant with artificial intelligence (AI) for language learning was developed. A dynamic user interface and a supervised feedback module were designed using web technologies (HTML5, CSS3, and JavaScript ES6+) and Python v3.9 to support language learning. The dynamic user interface was then integrated with an AI-powered application programming interface (API). A comparative evaluation of the intelligent assistant against the traditional language learning method showed an improvement in the learning stage. Full article

17 pages, 602 KB  
Review
Artificial Intelligence Applications in Gastric Cancer Surgery: Bridging Early Diagnosis and Responsible Precision Medicine
by Silvia Malerba, Miljana Vladimirov, Aman Goyal, Audrius Dulskas, Augustinas Baušys, Tomasz Cwalinski, Sergii Girnyi, Jaroslaw Skokowski, Ruslan Duka, Robert Molchanov, Bojan Jovanovic, Francesco Antonio Ciarleglio, Alberto Brolese, Kebebe Bekele Gonfa, Abdi Tesemma Demmo, Zilvinas Dambrauskas, Adolfo Pérez Bonet, Mario Testini, Francesco Paolo Prete, Valentin Calu, Natale Calomino, Vikas Jain, Aleksandar Karamarkovic, Karol Polom, Adel Abou-Mrad, Rodolfo J. Oviedo, Yogesh Vashist and Luigi Marano
J. Clin. Med. 2026, 15(6), 2208; https://doi.org/10.3390/jcm15062208 - 13 Mar 2026
Viewed by 988
Abstract
Background: Artificial intelligence is emerging as a promising tool in surgical oncology, with growing evidence suggesting potential applications in diagnostic support, intraoperative guidance, and perioperative risk assessment. In gastric cancer surgery, emerging applications range from AI-assisted endoscopic detection to data-driven perioperative risk prediction, while some technological developments, particularly in robotic autonomy, derive from broader surgical or experimental models that may inform future gastric procedures. Methods: A narrative review was conducted following established methodological standards, including the Scale for the Assessment of Narrative Review Articles (SANRA) and the Search–Appraisal–Synthesis–Analysis (SALSA) framework. English-language studies indexed in PubMed, Scopus, Embase, and Web of Science up to October 2025 were included. Evidence was synthesized thematically across five domains: AI-assisted anatomical recognition and lymphadenectomy support, autonomous robotic systems, early cancer detection, perioperative predictive and frailty models, and ethical and regulatory considerations. Results: AI-based computer vision and deep learning algorithms have demonstrated promising capabilities for real-time anatomical recognition, surgical phase classification, and intraoperative guidance, although evidence of direct patient-level benefit remains limited. In diagnostic settings, AI-assisted endoscopy and Raman spectroscopy have been shown to improve early lesion detection and reduce dependence on operator experience. Predictive models, including MySurgeryRisk and AI-driven frailty assessments, may support individualized prehabilitation planning and perioperative risk stratification. Persistent limitations include small and heterogeneous datasets, insufficient external validation, and unresolved concerns related to data privacy, algorithmic interpretability, and medico-legal responsibility. 
Conclusions: Artificial intelligence is progressively emerging as a promising tool in gastric cancer surgery, integrating automation, advanced analytics, and human clinical reasoning. Its safe and ethical adoption requires robust validation, transparent governance, and continuous surgeon oversight. When developed within human-centered and ethically grounded frameworks, AI can augment, rather than replace, surgical expertise, potentially advancing precision, safety, and equity in oncologic care. Full article

25 pages, 639 KB  
Article
AI-Assisted Value Investing: A Human-in-the-Loop Framework for Prompt-Guided Financial Analysis and Decision Support
by Andrea Caridi, Marco Giovannini and Lorenzo Ricciardi Celsi
Electronics 2026, 15(6), 1155; https://doi.org/10.3390/electronics15061155 - 10 Mar 2026
Viewed by 750
Abstract
Value investing remains grounded in intrinsic value estimation, margin-of-safety reasoning, and disciplined fundamental analysis, but its practical execution is increasingly constrained by the scale, heterogeneity, and velocity of modern financial information. Recent advances in artificial intelligence (AI), particularly large language models and automated information-extraction systems, create new opportunities to accelerate financial analysis; however, their outputs remain probabilistic, context-dependent, and potentially error-prone, making governance and verification essential. This article proposes an AI-assisted value investing framework that integrates automated extraction, valuation modeling, explainability, and human-in-the-loop (HITL) supervision into a unified decision-support architecture. The framework is organized into three layers: (i) a data layer for traceable extraction and normalization of structured and unstructured financial information; (ii) a modeling layer for automated key performance indicator (KPI) computation, forecasting support, and discounted cash flow (DCF) valuation; and (iii) an explainability and governance layer for traceability, verification, model-risk control, and analyst oversight. A central contribution of the paper is the operational characterization of prompt literacy as a determinant of analytical reliability, showing that structured, context-aware prompts materially affect extraction correctness, usability, and verification effort. The framework is evaluated through a case study using Rivanna AI on three large U.S. beverage firms—namely, The Coca-Cola Company, PepsiCo, and Keurig Dr Pepper—selected as a controlled, information-rich setting for comparative analysis. 
The results indicate that the proposed workflow can reduce end-to-end analysis time from approximately 25–40 h in a traditional manual process to approximately 8–12 h in an AI-assisted setting, including citation/source verification, unit and period reconciliation, and review of key valuation assumptions. The reported hour savings should be interpreted as conservative estimates from the initial deployment phase; additional efficiency gains are expected as operational maturity increases, driven by learning-economy effects. Rather than eliminating analyst effort, AI shifts it from manual information processing toward verification, adjudication, and interpretation. Overall, the findings position AI not as an autonomous decision-maker, but as a governed reasoning accelerator whose effectiveness depends on structured human guidance, traceability, and disciplined validation. These results redefine the role of the financial analyst from manual data processor to reasoning architect, responsible for designing, guiding, and validating AI-assisted analytical workflows. Full article
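The discounted cash flow (DCF) valuation in the framework's modeling layer rests on standard discounting arithmetic; a minimal sketch follows, where the function name and example inputs are illustrative, not the paper's implementation:

```python
def dcf_value(free_cash_flows, discount_rate, terminal_growth):
    """Discount a list of projected free cash flows, plus a
    Gordon-growth terminal value, back to the present."""
    # Present value of each explicitly forecast year
    pv = sum(fcf / (1 + discount_rate) ** t
             for t, fcf in enumerate(free_cash_flows, start=1))
    # Terminal value one year past the forecast horizon
    terminal = (free_cash_flows[-1] * (1 + terminal_growth)
                / (discount_rate - terminal_growth))
    pv_terminal = terminal / (1 + discount_rate) ** len(free_cash_flows)
    return pv + pv_terminal
```

With five years of flat 100 free cash flow, a 10% discount rate, and 2% terminal growth, this yields roughly 1170.8.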
(This article belongs to the Special Issue Feature Papers in Artificial Intelligence)

23 pages, 940 KB  
Review
AI-Driven Drug Discovery: Focus on Targets for Solid Tumors
by Jialong Wu, Jide He, Qianyang Ni, Zi’ang Li, Xiushi Lin, Zhenkun Zhao, Lei Qiu, Hongyin Wang, Sijie Li, Chengdong Shi, Yunyi Zhang, Huile Gao and Jian Lu
Pharmaceutics 2026, 18(3), 329; https://doi.org/10.3390/pharmaceutics18030329 - 6 Mar 2026
Viewed by 800
Abstract
In the field of anti-tumor drug development, target identification remains a key component of innovative therapeutic strategies. Solid malignancies have posed significant challenges to conventional target discovery approaches due to their distinct genetic heterogeneity, complex tumor microenvironment, and highly individualized evolutionary trajectories. In recent years, artificial intelligence (AI) has emerged as a revolutionary force in drug discovery. Technological advances from machine learning and deep learning to large language models (LLMs) have enabled the comprehensive integration and analysis of multi-omics biological data and real-world evidence, thereby promoting every stage of the drug discovery process. Thus, this article begins with an overview of the biological characteristics of tumors and the limitations of traditional strategies. It then delves into recent advances, particularly from the past three years, in the application of AI to drug discovery, especially LLMs. The main focus is on the current landscape of AI-assisted target identification. Furthermore, the article examines key challenges such as multimodal data integration and the interpretability of AI models, and envisions the future path towards integrated AI systems in precision oncology. Full article
(This article belongs to the Section Drug Targeting and Design)

17 pages, 306 KB  
Article
Multimodal AI Screening of Developmental Language Disorder in Tunisian Arabic Children: Clinical Markers and Computational Detection
by Faten Bouhajeb, Redha Touati and Selçuk Güven
Behav. Sci. 2026, 16(3), 375; https://doi.org/10.3390/bs16030375 - 6 Mar 2026
Viewed by 400
Abstract
Developmental Language Disorder (DLD) is a common neurodevelopmental condition that affects language acquisition in children. However, standardized diagnostic tools for Tunisian Arabic, a widely spoken yet underrepresented dialect, are still lacking. This study presents a multimodal biomedical informatics framework that integrates clinical assessments, speech recordings, and artificial intelligence (AI) for early DLD detection. Three linguistic tasks (the CLT Task, the Arabic Verb Evaluation Task, and the Nonword Repetition Task) were adapted for Tunisian Arabic, and spontaneous speech samples were collected from children with typical development and those with DLD. Statistical analyses revealed significant deficits in verb production, past-tense morphology, and phonological memory in the DLD group. For automated screening, we developed two systems: a Random Forest classifier based on structured clinical and linguistic features and a multimodal deep learning model using Wav2Vec2 acoustic embeddings. The best model achieved an F1 score of 0.85, demonstrating the feasibility of AI-assisted DLD screening. This work introduces the first standardized dataset and computational baseline for DLD in Tunisian Arabic, providing clinically relevant tools for early identification and supporting research on underrepresented Arabic dialects. This work also highlights future implications, including potential applications in early screening, the integration of acoustic markers, and the development of culturally adapted assessment tools for underrepresented languages. Full article
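The reported F1 score of 0.85 is the harmonic mean of precision and recall; a minimal sketch of the metric itself (the counts in the usage note are illustrative, not the study's confusion matrix):

```python
def f1_score(tp, fp, fn):
    """F1 from raw confusion-matrix counts:
    tp = true positives, fp = false positives, fn = false negatives."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

For example, 85 true positives with 15 false positives and 15 false negatives gives precision = recall = 0.85, hence F1 = 0.85.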
22 pages, 3288 KB  
Article
An Intelligent Real-Time System for Sentence-Level Recognition of Continuous Saudi Sign Language Using Landmark-Based Temporal Modeling
by Adel BenAbdennour, Mohammed Mukhtar, Osama Almolike, Bilal A. Khawaja and Abdulmajeed M. Alenezi
Sensors 2026, 26(5), 1652; https://doi.org/10.3390/s26051652 - 5 Mar 2026
Viewed by 447
Abstract
A persistent challenge for Deaf and Hard-of-Hearing individuals is the communication gap between sign language users and the hearing community, particularly in regions with limited automated translation resources. In Saudi Arabia, this gap is amplified by the reliance on Saudi Sign Language (SSL) and the scarcity of real-time, sentence-level translation systems. This paper presents a real-time system for sentence-level recognition of continuous SSL and direct mapping to natural spoken Arabic. The proposed system operates end-to-end on live video streams or pre-recorded content, extracting spatio-temporal landmark features using the MediaPipe Holistic framework. For classification, the input feature vector consists of 225 features derived from hand and body pose landmarks. These features are processed by a Bidirectional Long Short-Term Memory (BiLSTM) network trained on the ArabSign (ArSL) dataset to perform direct sentence-level classification over a vocabulary of 50 continuous Arabic sign language sentences, supported by an idle-based segmentation mechanism that enables natural, uninterrupted signing. Experimental evaluation demonstrates robust generalization: under a Leave-One-Signer-Out (LOSO) cross-validation protocol, the model attains a mean sentence-level accuracy of 94.2%, outperforming the fixed signer-independent split baseline of 92.07%, while maintaining real-time performance suitable for interactive use. To enhance linguistic fluency, an optional post-recognition refinement stage is incorporated using a large language model (LLM), followed by text-to-speech synthesis to produce audible Arabic output; this refinement operates strictly as post-processing and is not included in the reported recognition accuracy metrics. The results demonstrate that direct sentence-level modeling, combined with landmark-based feature extraction and real-time segmentation, provides an effective and practical solution for continuous SSL sentence recognition in real-time. 
Full article
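The 225-dimensional input described above is consistent with MediaPipe Holistic's 33 pose landmarks plus 21 landmarks per hand, each contributing x, y, and z coordinates: (33 + 21 + 21) × 3 = 225. A hedged sketch of the per-frame flattening step follows; the zero-padding convention for missed detections is an assumption, not necessarily the authors':

```python
# MediaPipe Holistic landmark counts (pose + two hands), x/y/z per landmark
POSE_LANDMARKS = 33
HAND_LANDMARKS = 21

def feature_vector(pose, left_hand, right_hand):
    """Flatten one frame's landmarks into a single 225-feature vector.
    Each argument is a list of (x, y, z) tuples, or None when the
    detector missed that body part (padded with zeros here)."""
    def flat(points, n):
        pts = points if points is not None else [(0.0, 0.0, 0.0)] * n
        return [coord for point in pts for coord in point]
    return (flat(pose, POSE_LANDMARKS)
            + flat(left_hand, HAND_LANDMARKS)
            + flat(right_hand, HAND_LANDMARKS))
```

Sequences of such vectors, one per frame, would then feed the BiLSTM classifier.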
(This article belongs to the Special Issue Sensor Systems for Gesture Recognition (3rd Edition))

16 pages, 775 KB  
Review
ChatMicroscopy: A Perspective Review of Large Language Models for Next-Generation Optical Microscopy
by Giuseppe Sancataldo
Appl. Sci. 2026, 16(5), 2502; https://doi.org/10.3390/app16052502 - 5 Mar 2026
Viewed by 425
Abstract
Optical microscopy is a fundamental tool in the physical, chemical, and life sciences, enabling direct investigation of structure, dynamics, and function across multiple spatial and temporal scales. Advances in optical design, detectors, and computational techniques have greatly enhanced performance, but have also increased the complexity of modern microscopes, which are now software-driven and embedded in data-intensive workflows. Artificial intelligence has become an important component of this landscape, particularly through task-specific machine learning approaches for image analysis, optimization, and limited instrument control. While effective, these solutions are often fragmented and lack the ability to integrate experimental intent, contextual knowledge, and multi-step reasoning. Recent progress in large language models (LLMs) offers a new paradigm for intelligent microscopy. As foundation models trained on large-scale text and code, LLMs exhibit emergent capabilities in reasoning, abstraction, and tool coordination, allowing them to act as natural interfaces between users and complex experimental systems. This perspective highlights how LLMs can function as cognitive and orchestration layers that connect experiment design, instrument control, data analysis, and knowledge integration. Emerging applications include conversational microscope control, workflow supervision, and scientific assistance for data exploration and hypothesis generation, alongside important technical, ethical, and governance challenges. Full article
(This article belongs to the Special Issue Biomedical Optics and Imaging: Latest Advances and Prospects)

41 pages, 2707 KB  
Article
Prompt Engineering and Multimodal Tasks in AI-Supported EFL Education: A Mixed Methods Study
by Debopriyo Roy, George F. Fragulis and Adya Surbhi
Sustainability 2026, 18(5), 2415; https://doi.org/10.3390/su18052415 - 2 Mar 2026
Viewed by 642
Abstract
The rapid integration of artificial intelligence (AI) into higher education is reshaping how learners develop academic, linguistic, and research competencies. This mixed-methods study examines how second-year EFL computer science students employ prompt engineering techniques across four task domains—research summarization, academic video note-taking, style transformation, and concept mapping—within a smart learning environment. Sixty-nine students completed a structured survey requiring AI-assisted draft generation followed by student-led revision. Quantitative analyses included descriptive statistics, chi-square tests, Cramer's V, t-tests, ANOVA, Kruskal–Wallis tests, and three text-similarity measures (cosine, Jaccard, and Levenshtein). Qualitative evidence was drawn from students' revised outputs and reflective responses. Results indicate that students consistently preserved semantic meaning while significantly rephrasing AI-generated text, demonstrating moderate conceptual alignment but substantial lexical and structural transformation. Frequent AI users reported stronger searching and revising skills, but prompt type had little effect on revision depth or learning outcomes. Iterative prompting and revision emerged as central drivers of metacognitive growth, academic language development, and sustainable learning behaviors. Across tasks, students viewed AI prompts as effective scaffolds for organizing information and synthesizing multimodal input, though reliance varied by learner. The findings underscore that sustainable AI use in EFL technical education depends not on AI output alone, but on structured prompting, iterative human revision, and critical engagement—practices that cultivate autonomy, digital literacy, and long-term academic resilience. Full article
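The three text-similarity measures named above are standard and easy to reproduce; a self-contained sketch follows, with whitespace tokenization as a simplification (the study's preprocessing is not specified here):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between token-count vectors of two strings."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[t] * cb[t] for t in ca)
    norm = (math.sqrt(sum(v * v for v in ca.values()))
            * math.sqrt(sum(v * v for v in cb.values())))
    return dot / norm

def jaccard(a, b):
    """Jaccard similarity between the token sets of two strings."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def levenshtein(a, b):
    """Character-level edit distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ch_a in enumerate(a, 1):
        curr = [i]
        for j, ch_b in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,       # insertion
                            prev[j - 1] + (ch_a != ch_b)))  # substitution
        prev = curr
    return prev[-1]
```

Applied to an AI draft and a student revision, high cosine with low Jaccard and a large Levenshtein distance would match the reported pattern of preserved meaning with substantial lexical rewording.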
(This article belongs to the Special Issue AI for Sustainable and Creative Learning in Education)

27 pages, 1542 KB  
Article
The Application of AI Chatbot System Based on CLIL Concept in the Teaching of Artificial Intelligence Courses
by Ziqi Liu and Qian Wang
Appl. Sci. 2026, 16(3), 1633; https://doi.org/10.3390/app16031633 - 5 Feb 2026
Viewed by 514
Abstract
The interdisciplinary nature of artificial intelligence courses forces non-computer science majors to contend with the simultaneous challenges of terminology comprehension and language cognition. To increase the efficiency of terminology teaching, this project develops and deploys an OpenAI-based AI chatbot teaching system that incorporates the concept of content and language integrated learning (CLIL). The system creates a dual-track “terminology layer–cognition layer” framework that includes term recognition, multi-level explanation (contextual examples and conceptual associations), task-driven dialogues, and conversation memory bank (CMB) modules. It then guides students through natural language interactions to master the core AI terms in context. The system’s effectiveness was confirmed in a controlled experiment with 98 participants (including computer and non-computer majors) divided into two groups: experimental (chatbot teaching) and control (conventional PPT teaching). In terms of terminology mastery, the experimental group’s posttest score (86.0 ± 5.33) was considerably higher than that of the control group (66.98 ± 5.6). Non-computer science major students showed a more significant improvement effect (83.29 ± 4.5 vs. 63.62 ± 4.68 for the control group). Non-computing students rated the clarity of systematic terminology explanation (4.33 ± 0.76) and the effectiveness of contextual assistance (4.21 ± 0.88) as the most important aspects of their learning experience. These experimental results show that the CLIL-integrated AI chatbot teaching system developed in this study can improve teaching efficiency while effectively reducing cognitive load, and that the task-guided and immediate feedback mechanism can significantly increase students’ learning engagement. Full article
(This article belongs to the Special Issue Application of Smart Learning in Education)
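The abstract names a conversation memory bank (CMB) module but does not publish its internals. The following is a hedged sketch of what such a module might look like: a rolling window of dialogue turns plus a term-recognition lookup. The names `ConversationMemoryBank`, `GLOSSARY`, and `explain_term` are illustrative assumptions, and no actual LLM API call is made, although `as_messages` produces the role-tagged message list a chat-completion endpoint would expect.

```python
from dataclasses import dataclass, field

@dataclass
class ConversationMemoryBank:
    """Hypothetical CMB: keeps a rolling window of dialogue turns so the
    chatbot can refer back to terms it has already explained."""
    max_turns: int = 8
    turns: list = field(default_factory=list)

    def add(self, role: str, text: str) -> None:
        self.turns.append({"role": role, "content": text})
        # Drop the oldest turns once the window is full.
        self.turns = self.turns[-self.max_turns:]

    def as_messages(self, system_prompt: str) -> list:
        # Prepend the CLIL-style system instruction to the remembered turns.
        return [{"role": "system", "content": system_prompt}] + self.turns

# Toy term glossary standing in for the system's terminology layer.
GLOSSARY = {
    "backpropagation": "the algorithm that propagates error gradients "
                       "backwards through a network to update its weights",
}

def explain_term(term: str, cmb: ConversationMemoryBank) -> str:
    """Term-recognition step: look up the term and log both turns in the CMB."""
    definition = GLOSSARY.get(term.lower(), f"no entry for '{term}'")
    cmb.add("user", f"What does '{term}' mean?")
    cmb.add("assistant", definition)
    return definition
```

In a deployed system the list returned by `as_messages` would be sent to the chat model on each turn, so the window size trades recall of earlier explanations against prompt length.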

19 pages, 715 KB  
Article
Large Language Models and Innovative Work Behavior in Higher Education Curriculum Development
by Ibrahim A. Elshaer, Chokri Kooli, Alaa M. S. Azazz and Mansour Alyahya
Adm. Sci. 2026, 16(1), 56; https://doi.org/10.3390/admsci16010056 - 22 Jan 2026
Viewed by 737
Abstract
The growth of generative artificial intelligence (GAI), notably Large Language Models (LLMs) such as ChatGPT, is transforming the educational environment by enabling intelligent, data-driven education and curriculum design innovation. This study aimed to assess the integration of LLMs into higher education to foster curriculum design, learning outcomes, and innovative work behaviour (IWB). Specifically, this study investigated how LLMs’ perceived usefulness (PU) and perceived ease of use (PEOU) can support educators’ engagement in IWB—idea generation (IG), idea promotion (IP), opportunity exploration (OE), and reflection (Relf)—using a web-based survey targeting faculty members. A total of 493 valid responses were obtained and analysed with partial least squares structural equation modelling (PLS-SEM). The results indicated that PU and PEOU have a significant positive impact on the four dimensions of IWB in the context of LLMs for curriculum development. The evaluated model can assist in bridging the gap between AI technology acceptance and educational strategy by offering practical evidence and implications for university leaders and policymakers. Additionally, this study offered a data-driven pathway to advance higher education IWB through the adoption of LLMs. Full article

23 pages, 3985 KB  
Article
Enabling Humans and AI Systems to Retrieve Information from System Architectures in Model-Based Systems Engineering
by Vincent Quast, Georg Jacobs, Simon Dehn and Gregor Höpfner
Systems 2026, 14(1), 83; https://doi.org/10.3390/systems14010083 - 12 Jan 2026
Viewed by 1253
Abstract
The complexity of modern cyber–physical systems is steadily increasing as their functional scope expands and as regulations become more demanding. To cope with this complexity, organizations are adopting methodologies such as model-based systems engineering (MBSE). By creating system models, MBSE promises significant advantages such as improved traceability, consistency, and collaboration. However, the adoption of MBSE faces challenges in both the introduction and operational use phases. In the introduction phase, challenges include high initial effort and steep learning curves. In the operational use phase, challenges arise from the difficulty of retrieving and reusing information stored in system models. Research on the support of MBSE through artificial intelligence (AI), especially generative AI, has so far focused mainly on easing the introduction phase, for example, by using large language models (LLMs) to assist in creating system models. However, generative AI could also support the operational use phase by helping stakeholders access the information embedded in existing system models. This study introduces an LLM-based multi-agent system that applies a Graph Retrieval-Augmented Generation (GraphRAG) strategy to access and utilize information stored in MBSE system models. The system’s capabilities are demonstrated through a chatbot that answers questions about the underlying system model. This solution reduces the complexity and effort involved in retrieving system model information and improves accessibility for stakeholders who lack advanced knowledge in MBSE methodologies. The chatbot was evaluated using the architecture of a battery electric vehicle as a reference model and a set of 100 curated questions and answers. When tested across four large language models, the best-performing model achieved an accuracy of 93 percent in providing correct answers. Full article
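The retrieval step of a GraphRAG strategy can be illustrated with a toy sketch: match seed elements of a miniature system-model graph against the user's question, expand their neighbours, and serialize the resulting subgraph as context for an LLM. The graph contents, hop-based expansion, and function names (`retrieve_subgraph`, `build_context`) are illustrative assumptions, not the authors' implementation.

```python
from collections import deque

# Toy system-model graph: element -> (description, neighbours).
MODEL = {
    "Battery":  ("stores electrical energy", ["Inverter"]),
    "Inverter": ("converts DC to AC for the motor", ["Battery", "Motor"]),
    "Motor":    ("converts electrical to mechanical energy", ["Inverter", "Gearbox"]),
    "Gearbox":  ("adapts motor speed and torque", ["Motor"]),
}

def retrieve_subgraph(query: str, hops: int = 1) -> list:
    """Name-match seed elements, then expand `hops` levels of neighbours,
    mimicking the graph-retrieval step of GraphRAG."""
    seeds = [n for n in MODEL if n.lower() in query.lower()]
    seen, frontier = set(seeds), deque((s, 0) for s in seeds)
    while frontier:
        node, depth = frontier.popleft()
        if depth >= hops:
            continue
        for nb in MODEL[node][1]:
            if nb not in seen:
                seen.add(nb)
                frontier.append((nb, depth + 1))
    return sorted(seen)

def build_context(nodes: list) -> str:
    """Serialize the retrieved subgraph into prompt context for the LLM."""
    return "\n".join(f"{n}: {MODEL[n][0]}" for n in nodes)
```

In the paper's setting the graph would be the MBSE system model itself and the serialized context would ground the chatbot's answer; here, a question about the motor pulls in its electrical and mechanical neighbours.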
