Search Results (597)

Search Parameters:
Keywords = second language research

45 pages, 2954 KB  
Review
A Review of Fault Diagnosis Methods: From Traditional Machine Learning to Large Language Model Fusion Paradigm
by Qingwei Nie, Junsai Geng and Changchun Liu
Sensors 2026, 26(2), 702; https://doi.org/10.3390/s26020702 - 21 Jan 2026
Viewed by 177
Abstract
Fault diagnosis is a core technology for ensuring the safe and efficient operation of industrial systems, and the field has shifted from traditional signal analysis toward intelligent, algorithm-driven approaches. In recent years, large language models, digital twins, and knowledge graphs have been introduced, marking a new stage of intelligent integration characterized by data-driven methods, knowledge guidance, and physical–virtual fusion. This paper systematically reviews the evolution of fault diagnosis technologies, focusing on the theoretical methods and application practices of traditional machine learning, digital twins, knowledge graphs, and large language models. First, the research background, core objectives, and development history of fault diagnosis are described. Second, the principles, industrial applications, and limitations of supervised and unsupervised learning are analyzed. Third, innovative uses are examined, including physical–virtual mapping in digital twins, knowledge modeling in knowledge graphs, and feature learning in large language models. A multi-dimensional comparison framework is then constructed to analyze the performance indicators, applicable scenarios, and collaborative potential of these technologies. Finally, the key challenges in the field, namely data quality, model generalization, and knowledge reuse, are summarized, and future directions driven by the fusion of large language models, digital twins, and knowledge graphs are outlined. The review provides fault diagnosis researchers with a comprehensive, up-to-date technical map intended to support both theoretical innovation and engineering deployment of intelligent fault diagnosis. Full article
(This article belongs to the Section Fault Diagnosis & Sensors)

18 pages, 799 KB  
Review
Implementing Universal Design for Learning to Transform Science Education
by Noëlle Fabre-Mitjans and Gregorio Jiménez-Valverde
Encyclopedia 2026, 6(1), 24; https://doi.org/10.3390/encyclopedia6010024 - 19 Jan 2026
Viewed by 210
Abstract
This review critically examines the implementation of Universal Design for Learning (UDL) in science education, providing an integrative overview of research, methodologies, and disciplinary applications. The first section explores UDL across educational stages—from early childhood to higher education—highlighting how age-specific adaptations, such as play-based and outdoor learning in early years or language- and problem-focused strategies in secondary education, enhance engagement and equity. The second section analyses science-specific pedagogies, including inquiry-based science education, the 5E model (Engage, Explore, Explain, Elaborate, Evaluate), STEM/STEAM approaches, and gamification, demonstrating how their alignment with UDL principles fosters motivation, creativity, and metacognitive development. The third section addresses the application of UDL across scientific disciplines—biology, physics, chemistry, geosciences, environmental education, and the Nature of Science—illustrating discipline-oriented adaptations and inclusive practices. Finally, a section on multiple scenarios of diversity synthesizes UDL responses to physical, sensory, and learning difficulties, neurodivergence, giftedness, and socio-emotional barriers. The review concludes by calling for enhanced teacher preparation and providing key ideas for professionals who want to implement UDL in science contexts. Full article
(This article belongs to the Collection Encyclopedia of Social Sciences)

36 pages, 3276 KB  
Article
Robot Planning via LLM Proposals and Symbolic Verification
by Drejc Pesjak and Jure Žabkar
Mach. Learn. Knowl. Extr. 2026, 8(1), 22; https://doi.org/10.3390/make8010022 - 16 Jan 2026
Viewed by 339
Abstract
Planning in robotics represents an ongoing research challenge, as it requires the integration of sensing, reasoning, and execution. Although large language models (LLMs) provide a high degree of flexibility in planning, they often introduce hallucinated goals and actions and consequently lack the formal reliability of deterministic methods. In this paper, we address this limitation by proposing a hybrid Sense–Plan–Code–Act (SPCA) framework that combines perception, LLM-based reasoning, and symbolic planning. Within the proposed approach, sensory information is first transformed into a symbolic description of the world in Planning Domain Definition Language (PDDL) using an LLM. A heuristic planner is then used to generate a valid plan, which is subsequently converted to code by a second LLM. The generated code is first validated syntactically through compilation and then semantically in simulation. When errors are detected, local corrections can be applied and the process is repeated as necessary. The proposed method is evaluated in the OpenAI Gym MiniGrid reinforcement learning environment and in a Gazebo simulation on a UR5 robotic arm using a curriculum of tasks with increasing complexity. The system successfully completes approximately 71–75% of tasks across environments with a relatively low number of simulation iterations. Full article
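
A minimal sketch of the Sense–Plan–Code–Act loop summarized above. The callables (perceive, llm_to_pddl, plan, llm_to_code, simulate) are hypothetical stand-ins for the paper's perception stack, LLMs, heuristic planner, and simulator, not its actual interfaces; only the compile-then-simulate verification order follows the abstract.

    # Sketch of a Sense-Plan-Code-Act episode with symbolic verification.
    from typing import Callable, Optional

    def spca_episode(perceive: Callable[[], str],
                     llm_to_pddl: Callable[[str], str],
                     plan: Callable[[str], Optional[list]],
                     llm_to_code: Callable[[list], str],
                     simulate: Callable[[str], bool],
                     max_repairs: int = 3) -> Optional[str]:
        """Sense the scene, build a PDDL model, plan symbolically, then verify generated code."""
        observation = perceive()                 # Sense: raw scene description
        pddl_problem = llm_to_pddl(observation)  # LLM #1: symbolic world model in PDDL
        for _ in range(max_repairs):
            steps = plan(pddl_problem)           # heuristic planner over the PDDL model
            if steps is None:                    # no valid plan: give up on this model
                return None
            code = llm_to_code(steps)            # LLM #2: translate the plan into code
            try:
                compile(code, "<plan>", "exec")  # syntactic validation by compilation
            except SyntaxError:
                continue                         # regenerate the code and retry
            if simulate(code):                   # semantic validation in simulation
                return code                      # Act: hand the verified code to the robot
        return None                              # repair budget exhausted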

58 pages, 606 KB  
Review
The Pervasiveness of Digital Identity: Surveying Themes, Trends, and Ontological Foundations
by Matthew Comb and Andrew Martin
Information 2026, 17(1), 85; https://doi.org/10.3390/info17010085 - 13 Jan 2026
Viewed by 245
Abstract
Digital identity operates as the connective infrastructure of the digital age, linking individuals, organisations, and devices into networks through which services, rights, and responsibilities are transacted. Despite this centrality, the field remains fragmented, with technical solutions, disciplinary perspectives, and regulatory approaches often developing in parallel without interoperability. This paper presents a systematic survey of digital identity research, drawing on a Scopus-indexed baseline corpus of 2551 publications spanning full years 2005–2024, complemented by a recent stratum of 1241 publications (2023–2025) used to surface contemporary thematic structure and inform the ontology-oriented synthesis. The survey contributes in three ways. First, it provides an integrated overview of the digital identity landscape, tracing influential and widely cited works, historical developments, and recent scholarship across technical, legal, organisational, and cultural domains. Second, it applies natural language processing and subject metadata to identify thematic patterns, disciplinary emphases, and influential authors, exposing trends and cross-field connections difficult to capture through manual review. Third, it consolidates recurring concepts and relationships into ontological fragments (illustrative concept maps and subgraphs) that surface candidate entities, processes, and contexts as signals for future formalisation and alignment of fragmented approaches. By clarifying how digital identity has been conceptualised and where gaps remain, the study provides a foundation for progress toward a universal digital identity that is coherent, interoperable, and socially inclusive. Full article
(This article belongs to the Section Information and Communications Technology)
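
The survey's thematic analysis is described only at a high level, so the snippet below is merely an illustrative stand-in: TF-IDF plus k-means over a toy four-document corpus to surface candidate themes. The corpus, cluster count, and term lists are invented and do not reproduce the authors' Scopus pipeline.

    # Illustrative theme surfacing with TF-IDF and k-means on a toy corpus.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.cluster import KMeans

    docs = [  # placeholder abstracts, not the 2551-publication baseline corpus
        "self-sovereign identity wallets and verifiable credentials",
        "biometric authentication and privacy regulation",
        "federated identity management for cloud services",
        "digital identity, inclusion and socio-legal frameworks",
    ]
    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(docs)
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    terms = tfidf.get_feature_names_out()
    for c in range(km.n_clusters):                       # top terms per candidate theme
        top = km.cluster_centers_[c].argsort()[::-1][:3]
        print(f"theme {c}:", [terms[i] for i in top])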

31 pages, 1534 KB  
Article
Causal Reasoning and Large Language Models for Military Decision-Making: Rethinking the Command Structures in the Era of Generative AI
by Dimitrios Doumanas, Andreas Soularidis and Konstantinos Kotis
AI 2026, 7(1), 14; https://doi.org/10.3390/ai7010014 - 7 Jan 2026
Viewed by 568
Abstract
Military decision-making is inherently complex and highly critical, requiring commanders to assess multiple variables in real time, anticipate second-order effects, and adapt strategies based on continuously evolving battlefield conditions. Traditional approaches rely on domain expertise, experience, and intuition, often supported by decision-support systems designed by military experts. With the rapid advancement of Large Language Models (LLMs) such as ChatGPT, Claude, and DeepSeek, a new research question emerges: can LLMs perform causal reasoning at a level that could meaningfully replace human decision-makers, or should they remain human-led decision-support tools in high-stakes environments? This paper explores the causal reasoning capabilities of LLMs for operational and strategic military decisions. Unlike conventional AI models that rely primarily on correlation-based predictions, LLMs are now able to engage in multi-perspective reasoning, intervention analysis, and scenario-based assessments. We introduce a structured empirical evaluation framework to assess LLM performance through 10 de-identified real-world-inspired battle scenarios, ensuring models reason over provided inputs rather than memorized data. Critically, LLM outputs are systematically compared against a human expert baseline, composed of military officers across multiple ranks and years of operational experience. The evaluation focuses on precision, recall, causal reasoning depth, adaptability, and decision soundness. Our findings provide a rigorous comparative assessment of whether carefully prompted LLMs can assist, complement, or approach expert-level performance in military planning. While fully autonomous AI-led command remains premature, the results suggest that LLMs can offer valuable support in complex decision processes when integrated as part of hybrid human-AI decision-support frameworks. Since our evaluation directly tests this capability, this paradigm shift raises a fundamental question: can high-ranking officers/commanders be fully replaced in leading critical military operations, or should AI-driven tools remain decision-support systems that enhance human-driven battlefield strategies? Full article
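
The evaluation metrics named in the abstract (precision and recall against a human expert baseline) can be illustrated with a toy scenario; the action labels below are invented and bear no relation to the study's de-identified scenarios or scoring rubric.

    # Toy comparison of LLM-proposed actions against an expert baseline for one scenario.
    expert_actions = {"secure bridge", "hold ridge", "request recon"}
    llm_actions = {"secure bridge", "request recon", "night assault"}

    agreed = len(expert_actions & llm_actions)
    precision = agreed / len(llm_actions)    # share of LLM proposals the experts endorse
    recall = agreed / len(expert_actions)    # share of the expert baseline the LLM recovers
    print(f"precision={precision:.2f} recall={recall:.2f}")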

31 pages, 3199 KB  
Article
Hierarchical Decoupling Digital Twin Modeling Method for Topological Systems: A Case Study of Water Purification Systems
by Xubin Wu, Guoqiang Wu, Xuewei Zhang, Qiliang Yang and Liqiang Xie
Technologies 2026, 14(1), 42; https://doi.org/10.3390/technologies14010042 - 6 Jan 2026
Viewed by 206
Abstract
Digital twins (DTs) have seen widespread application across industries, enabling deep integration of cyber–physical systems. However, previous research has largely focused on domain-specific DTs and lacks a universal, cross-industry modeling framework, resulting in high development costs and low reusability. To address these challenges, this study proposes a DT modeling method based on hierarchical decoupling and topological connections. First, the system is decomposed top–down into three levels—system, subsystem, and component—through hierarchical functional decoupling, reducing system complexity and supporting independent component development. Second, a method for constructing component-level DTs using standardized information sets is introduced, employing the JSON-LD language to uniformly describe and encapsulate component information. Finally, a topological connection mechanism abstracts the relationships between components into an adjacency matrix and assembles components and subsystems bottom–up using graph theory, ultimately forming the system-level DT. The effectiveness of the proposed method was validated using a typical surface water purification system as a case study, where the system was decomposed into four functional subsystems and 12 types of components. Experimental results demonstrate that the method efficiently enables automated integration of DTs from standardized components to subsystems and the complete system. Compared with conventional monolithic modeling approaches, it significantly reduces system complexity, supports efficient component development, and accelerates system integration. For example, when the number of components exceeds 300, the proposed method generates topology connections 44.69% faster than direct information set traversal. Consequently, this approach provides a novel and effective solution to the challenges of low reusability and limited generality in DT models, laying a theoretical foundation and offering technical support for establishing a universal cross-industry DT modeling framework. Full article
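
A compact sketch of the bottom-up assembly idea: component information sets written in a JSON-LD-like form and wired together through an adjacency matrix. The @context URL, keys, and component names are placeholders, not the paper's actual information-set schema.

    # Sketch: placeholder component info sets plus adjacency-matrix topology assembly.
    import json
    import numpy as np

    components = [
        {"@context": "https://example.org/dt-context", "@id": "pump-01",
         "@type": "Component", "ports": ["out"]},
        {"@context": "https://example.org/dt-context", "@id": "filter-01",
         "@type": "Component", "ports": ["in", "out"]},
        {"@context": "https://example.org/dt-context", "@id": "uv-01",
         "@type": "Component", "ports": ["in"]},
    ]
    # adjacency[i][j] = 1 means component i feeds component j
    adjacency = np.array([[0, 1, 0],
                          [0, 0, 1],
                          [0, 0, 0]])

    edges = [(components[i]["@id"], components[j]["@id"])
             for i, j in zip(*np.nonzero(adjacency))]
    system_twin = {"@type": "System", "components": components, "topology": edges}
    print(json.dumps(system_twin, indent=2))

Reading edges off the nonzero matrix entries is one way to avoid traversing every information set, which is the kind of saving the abstract reports for large component counts.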

15 pages, 871 KB  
Article
The Concurrent and Longitudinal Contributions of Linguistic and Cognitive Skills to L2 Writing Quality
by Aiping Zhao, Fangzhu Chen and Xiang Li
J. Intell. 2026, 14(1), 11; https://doi.org/10.3390/jintelligence14010011 - 6 Jan 2026
Viewed by 279
Abstract
Research on second language (L2) writing has primarily focused on linguistic skills, with limited attention to higher-order cognitive skills such as inference making. This study expands prior research by examining both concurrent and longitudinal effects of linguistic skills (vocabulary, grammatical knowledge, and morphological awareness) and inference making on L2 English writing quality among 135 Chinese high school English learners. Students’ linguistic skills, inference making, and writing were assessed in Grade 10 and Grade 11. Regression analyses showed that, in Grade 10, vocabulary, grammatical knowledge, and inference making significantly predicted writing quality, whereas in Grade 11, morphological awareness, grammatical knowledge, and inference making were significant predictors. Longitudinally, Grade 10 morphological awareness uniquely contributed to L2 writing quality in Grade 11 after controlling for the autoregressive effect of L2 writing quality in Grade 10. These findings highlight the key role of inference making in writing development and reveal that linguistic skills contribute to writing differently across grades. Pedagogically, the results underscore the importance of targeting grade-specific skills to support higher-quality English writing. Full article
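
The longitudinal claim (Grade 10 morphological awareness predicting Grade 11 writing after controlling for Grade 10 writing) corresponds to a standard autoregressive regression. The sketch below runs that model on simulated data of the same shape; variable names and effect sizes are invented, not the study's.

    # Sketch of the longitudinal analysis: Grade 11 writing regressed on Grade 10
    # skills while controlling for Grade 10 writing (the autoregressor).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 135
    df = pd.DataFrame({
        "vocab_g10": rng.normal(size=n),
        "grammar_g10": rng.normal(size=n),
        "morph_g10": rng.normal(size=n),
        "inference_g10": rng.normal(size=n),
    })
    df["writing_g10"] = 0.4 * df.vocab_g10 + 0.3 * df.grammar_g10 + rng.normal(scale=0.5, size=n)
    df["writing_g11"] = 0.5 * df.writing_g10 + 0.3 * df.morph_g10 + rng.normal(scale=0.5, size=n)

    model = smf.ols("writing_g11 ~ writing_g10 + vocab_g10 + grammar_g10 "
                    "+ morph_g10 + inference_g10", data=df).fit()
    print(model.summary().tables[1])   # unique contributions after the autoregressor
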
20 pages, 1735 KB  
Article
AI-Enhanced CLIL for Embodied Learning: Applying the CLPS Framework in Secondary Physical Education
by Cristina Ramírez-Aroca and Arash Javadinejad
Educ. Sci. 2026, 16(1), 62; https://doi.org/10.3390/educsci16010062 - 2 Jan 2026
Viewed by 404
Abstract
This study examines how Artificial Intelligence (AI) can enhance Content and Language Integrated Learning (CLIL) through embodied, multimodal instruction in secondary Physical Education (PE). Drawing on Fernández Fontecha’s Content and Language Processing Sequence (CLPS) model, four AI-supported CLIL modules were designed and partially implemented in a Spanish secondary school. The exploratory, design-based study involved 25 students (aged 13–14) enrolled in second-year secondary education (2° ESO). Data were collected through a student perception survey and structured teacher observations to examine learners’ perceived content understanding, language use, engagement, and embodied participation in AI-supported CLIL tasks. Results indicate high levels of student engagement and positive perceptions of learning, particularly regarding vocabulary use, task comprehension, and the integration of physical movement with language use. Students reported that AI tools such as NaturalReader and Gliglish supported pronunciation practice, comprehension, and interactive language use when embedded within guided CLIL tasks. The findings highlight the pedagogical potential of AI as a mediating scaffold in embodied CLIL contexts, while underscoring the importance of teacher guidance and task design. The study contributes to emerging research on AI-enhanced CLIL by offering empirically grounded insights into the affordances and limitations of integrating AI in Physical Education. Full article

33 pages, 3147 KB  
Review
Perception–Production of Second-Language Mandarin Tones Based on Interpretable Computational Methods: A Review
by Yujiao Huang, Zhaohong Xu, Xianming Bei and Huakun Huang
Mathematics 2026, 14(1), 145; https://doi.org/10.3390/math14010145 - 30 Dec 2025
Viewed by 448
Abstract
We survey recent advances in second-language (L2) Mandarin lexical tones research and show how an interpretable computational approach can deliver parameter-aligned feedback across perception–production (P ↔ P). We synthesize four strands: (A) conventional evaluations and tasks (identification, same–different, imitation/read-aloud) that reveal robust tone-pair asymmetries and early P ↔ P decoupling; (B) physiological and behavioral instrumentation (e.g., EEG, eye-tracking) that clarifies cue weighting and time course; (C) audio-only speech analysis, from classic F0 tracking and MFCC–prosody fusion to CNN/RNN/CTC and self-supervised pipelines; and (D) interpretable learning, including attention and relational models (e.g., graph neural networks, GNNs) opened with explainable AI (XAI). Across strands, evidence converges on tones as time-evolving F0 trajectories, so movement, turning-point timing, and local F0 range are more diagnostic than height alone, and the contrast between Tone 2 (rising) and Tone 3 (dipping/low) remains the persistent difficulty; learners with tonal vs. non-tonal language backgrounds weight these cues differently. Guided by this synthesis, we outline a tool-oriented framework that pairs perception and production on the same items, jointly predicts tone labels and parameter targets, and uses XAI to generate local attributions and counterfactual edits, making feedback classroom-ready. Full article
(This article belongs to the Section E1: Mathematics and Computer Science)
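
The review's point that movement, turning-point timing, and local F0 range are more diagnostic than pitch height can be illustrated with a simple contour-feature extraction on a synthetic dipping glide; the signal, pitch bounds, and feature names are illustrative and do not come from any of the surveyed systems.

    # Extract F0-trajectory features (range, turning-point timing, final slope)
    # from a synthetic dipping-then-rising glide, loosely Tone-3-like.
    import numpy as np
    import librosa

    sr = 16000
    t = np.linspace(0, 0.6, int(sr * 0.6), endpoint=False)
    freq = 180 - 60 * np.sin(2 * np.pi * t / 1.2)     # dips from 180 Hz to 120 Hz and back
    y = np.sin(2 * np.pi * np.cumsum(freq) / sr).astype(np.float32)

    f0, voiced, _ = librosa.pyin(y, fmin=80, fmax=400, sr=sr)
    f0 = f0[voiced]                                    # keep voiced frames only
    turning_idx = int(np.argmin(f0))                   # turning point = lowest F0 frame
    features = {
        "f0_range_hz": float(np.max(f0) - np.min(f0)),          # local F0 range
        "turning_point_rel": turning_idx / len(f0),             # relative timing in [0, 1]
        "final_rise_hz_per_frame": float(f0[-1] - f0[turning_idx])
                                   / max(1, len(f0) - 1 - turning_idx),
    }
    print(features)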

16 pages, 321 KB  
Article
‘A Dead Person Cannot Carry a Dead Person’: Health, Social Support and Language Learning Among Syrian Refugees in Norway
by Ayan B. Sheikh-Mohamed, Esperanza Diaz, Melanie Straiton and Arnfinn Jomar Andersen
Int. J. Environ. Res. Public Health 2026, 23(1), 47; https://doi.org/10.3390/ijerph23010047 - 29 Dec 2025
Viewed by 470
Abstract
Second language acquisition (SLA) is critical for refugee integration and a determinant of health and health care access. Although numerous studies have examined language barriers and health communication, the reciprocal relationship between health and second language acquisition remains underexplored in public health research. This qualitative study draws on interviews with twenty Syrian refugees (nine men and eleven women, aged 22–65) resettled in Norway. Data were collected through semi-structured interviews and analysed using reflexive thematic analysis. Two overarching themes were identified: (1) Learning under strain: health problems and post-migratory stressors constrained SLA; and (2) Relational support: reciprocal interactions with neighbours, colleagues, and volunteers enabled both language learning and functional health. These social arenas acted as low-threshold, health-promoting settings that mitigated isolation and strengthened belonging. The study highlights that language operates as a social determinant of health: inclusive, relational spaces facilitate both SLA and health by enhancing communicative participation and access to care. Refugee integration policy should therefore support accessible community spaces outside formal education to strengthen social inclusion, health literacy and refugees’ ability to navigate health and welfare services. Full article
(This article belongs to the Section Global Health)
27 pages, 5235 KB  
Article
AI-Assisted Arbitrator Selection in Construction Disputes: An Expert-Calibrated Large Language Model Framework
by Mohammad Mobadersani, Ali Bedii Candas, Murat Kuruoğlu and Onur Behzat Tokdemir
Buildings 2026, 16(1), 120; https://doi.org/10.3390/buildings16010120 - 26 Dec 2025
Viewed by 359
Abstract
Arbitration efficiency is widely recognized as a factor influencing outcomes in construction disputes. To increase the chance of finding and designating the best-fit arbitrator, a large number of candidate profiles must be investigated, which is an overwhelming, time-consuming process. This study develops and evaluates a large language model (LLM)-enabled framework for arbitrator selection based on dispute details and predefined expert criteria. To reach this goal, 500 standardized, anonymized arbitrator resumes were evaluated using a unified scoring structure. These resumes were scored and classified using two GPT-5 models with different levels of detail in their prompts. The results of these models were then compared with expert evaluations to assess their ability to replicate human decision-making patterns in resume evaluation and classification. According to the results, the second model, with a high level of detail in its prompt structure, achieved an accuracy of 84%, while the first model, with a concise prompt that provides only a brief description of the experts’ expectations, achieved an overall accuracy of 53%. The accuracy of the LLM-assisted resume analysis framework thus improves when guided by a detailed, expert-aligned prompt structure. From a research perspective, this study’s results highlight the importance of prompt engineering in an AI-assisted decision-support system for professional evaluation tasks. Since this framework is limited to resumes in English, future research should examine the effectiveness of LLMs in evaluating and classifying resumes in languages other than English. Moreover, future studies might consider replicating this study using other large language models to compare precision and accuracy across different LLMs. Full article
(This article belongs to the Section Construction Management, and Computers & Digitization)
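
A sketch of the concise-versus-detailed prompt comparison. The classify callable stands in for an actual LLM call, and the prompts, criteria, and threshold are invented examples of "expert-aligned" detail, not the study's prompt structures or resume data.

    # Compare prompt variants by their agreement with expert labels.
    from typing import Callable, List
    from sklearn.metrics import accuracy_score

    CONCISE_PROMPT = "Classify this arbitrator resume as fit or unfit for the dispute."
    DETAILED_PROMPT = (
        "You are assisting arbitrator selection for a construction dispute.\n"
        "Score the resume 0-10 on construction-law experience, sector expertise,\n"
        "language match, and prior arbitration awards; classify as fit if the total >= 25."
    )

    def evaluate(classify: Callable[[str, str], str], prompt: str,
                 resumes: List[str], expert_labels: List[str]) -> float:
        """Fraction of resumes where the prompted model matches the expert label."""
        predictions = [classify(prompt, resume) for resume in resumes]
        return accuracy_score(expert_labels, predictions)

    # Usage idea: evaluate(llm_call, CONCISE_PROMPT, ...) versus
    # evaluate(llm_call, DETAILED_PROMPT, ...) mirrors the 53% vs. 84% comparison in spirit.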

22 pages, 624 KB  
Article
Development and Validation of Human-Computer Collaborative Classroom Second Language Learning Engagement Scale
by Yanshuang Jiang and Yuxuan Liu
Behav. Sci. 2026, 16(1), 46; https://doi.org/10.3390/bs16010046 - 25 Dec 2025
Viewed by 447
Abstract
This study developed and validated the Human–Computer Collaborative Classroom Second Language Learning Engagement Scale among 710 junior high school students studying in Mongolian. Initially, the scale’s conceptual framework was developed through a review of pertinent literature and interviews, drawing on self-determination theory and socio-constructivist perspectives to define engagement in human–computer collaborative second language learning contexts. The study adopted a sequential mixed-methods design: in Phase 1, item analysis and exploratory factor analysis (EFA) were conducted using data from 437 students, resulting in a preliminary five-factor structure; in Phase 2, confirmatory factor analysis (CFA) was performed using data from the remaining 273 students to validate the factor structure. The final scale comprises five core dimensions: (1) higher-order thinking, (2) student–teacher interaction, (3) human–computer interaction, (4) active collaborative learning, and (5) learning enthusiasm. Structural equation modeling confirmed a robust five-factor model, with all fit indices indicating satisfactory model fit (e.g., CFI = 0.981, TLI = 0.977, RMSEA = 0.041). The scale demonstrates strong internal consistency (Cronbach’s α = 0.959) and construct validity. These findings highlight the reliability and efficacy of this psychometric tool for evaluating students’ engagement in second language learning within human–computer collaborative classroom environments, offering valuable insights for educators and researchers. Full article
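
The Phase 1 analyses (exploratory factor analysis on one split, internal consistency on the item set) can be sketched on simulated data. The item count, loading pattern, and rotation below are assumptions for illustration; the real scale items and the Phase 2 CFA are not reproduced.

    # Sketch: EFA with a five-factor solution plus Cronbach's alpha on simulated items.
    import numpy as np
    import pandas as pd
    from factor_analyzer import FactorAnalyzer

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(437, 5))                         # 5 latent engagement factors
    items = np.repeat(latent, 4, axis=1) * 0.8 \
            + rng.normal(scale=0.6, size=(437, 20))            # 4 items per factor, plus noise
    df = pd.DataFrame(items, columns=[f"item{i+1}" for i in range(20)])

    fa = FactorAnalyzer(n_factors=5, rotation="promax")
    fa.fit(df)
    print(np.round(fa.loadings_[:5], 2))                       # pattern matrix, first items

    def cronbach_alpha(data: pd.DataFrame) -> float:
        """Classic alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
        k = data.shape[1]
        item_var = data.var(axis=0, ddof=1).sum()
        total_var = data.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    print(f"alpha = {cronbach_alpha(df):.3f}")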

23 pages, 473 KB  
Article
A General Framework for Activation Function Optimization Based on Mollification Theory
by Wentao Zhang, Yutong Zhang, Yuxin Zheng and Wentao Mo
Mathematics 2026, 14(1), 72; https://doi.org/10.3390/math14010072 - 25 Dec 2025
Viewed by 512
Abstract
The deep learning paradigm is progressively shifting from non-smooth activation functions, exemplified by ReLU, to smoother alternatives such as GELU and SiLU. This transition is motivated by the fact that non-differentiability introduces challenges for gradient-based optimization, while an expanding body of research demonstrates that smooth activations yield superior convergence, improved generalization, and enhanced training stability. A central challenge, however, is how to systematically transform widely used non-smooth functions into smooth counterparts that preserve their proven representational strengths while improving differentiability and computational efficiency. To address this, we propose a general activation smoothing framework grounded in mollification theory. Leveraging the Epanechnikov kernel, the framework achieves statistical optimality and computational tractability, thereby combining theoretical rigor with practical utility. Within this framework, we introduce Smoothed ReLU (S-ReLU), a novel second-order continuously differentiable (C2) activation derived from ReLU that inherits its favorable properties while mitigating inherent drawbacks. Extensive experiments on CIFAR-10, CIFAR-100, and ImageNet-1K with Vision Transformers and ConvNeXt consistently demonstrate the superior performance of S-ReLU over existing ReLU variants. Beyond computer vision, large-scale fine-tuning experiments on language models further show that S-ReLU surpasses GELU, underscoring its broad applicability across both vision and language domains and its potential to enhance stability and scalability. Full article
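
Because the abstract names the construction explicitly (mollify ReLU with the Epanechnikov kernel), the recipe can be written out in closed form; the derivation below is an independent worked example under that reading and need not match the authors' exact S-ReLU parameterization or normalization.

    % Mollified ReLU with the Epanechnikov kernel of bandwidth h > 0
    \[
      K_h(t) = \frac{3}{4h}\Bigl(1 - \frac{t^2}{h^2}\Bigr)\,\mathbf{1}_{[-h,h]}(t),
      \qquad
      f_h(x) = (\mathrm{ReLU} * K_h)(x) = \int_{-h}^{h} \max(x - t,\, 0)\, K_h(t)\,\mathrm{d}t .
    \]
    % Evaluating the convolution piecewise, with a = x/h, gives
    \[
      f_h(x) =
      \begin{cases}
        0, & x \le -h, \\[4pt]
        \dfrac{h}{16}\bigl(-a^4 + 6a^2 + 8a + 3\bigr), & -h < x < h, \\[4pt]
        x, & x \ge h,
      \end{cases}
      \qquad
      f_h''(x) = K_h(x) \ \text{on } (-h, h),
    \]
    % so f_h is C^2, coincides with ReLU outside [-h, h], and recovers ReLU as h -> 0.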

32 pages, 5768 KB  
Article
Digital Human Teachers with Personalized Identity: Enhancing Accessibility and Long-Term Engagement in Sustainable Language Education
by Qi Deng, Yixuan Zhang, Yuehan Xiao and Changzeng Fu
Sustainability 2026, 18(1), 220; https://doi.org/10.3390/su18010220 - 25 Dec 2025
Viewed by 440
Abstract
Sustainable language education necessitates scalable, accessible learning environments that foster long-term learner autonomy and reduce educational inequality. While online courses have democratized access to language learning globally, persistent deficiencies in instructor-student interaction and learner engagement compromise their sustainability. The “face effect,” denoting the influence of instructor facial appearance on learning outcomes, remains underexplored as a resource-efficient mechanism for enhancing engagement in digital environments. Furthermore, effective measures linking psychological engagement to sustained learning experiences are notably absent. This study addresses three research questions within a sustainable education framework: (1) How does instructor identity, particularly facial appearance, affect second language learners’ outcomes and interactivity in scalable online environments? (2) How can digital human technology dynamically personalize instructor appearance to support diverse learner populations in resource-efficient ways? (3) How does instructor identity influence learners’ flow state, a critical indicator of intrinsic motivation and self-directed learning capacity? Two controlled experiments with Japanese language learners examined three instructor identity conditions: real teacher identity, learner self-identity, and idol-inspired identity. Results demonstrated that the self-identity condition significantly enhanced oral performance and flow state dimensions, particularly concentration and weakened self-awareness. These findings indicate that identity-adaptive digital human instructors cultivate intrinsic motivation and learner autonomy, which are essential competencies for lifelong learning. This research advances Sustainable Development Goal 4 (Quality Education) by demonstrating that adaptive educational technology can simultaneously improve learning outcomes and psychological engagement in scalable, cost-effective online environments. The personalization capabilities of digital human instructors provide a sustainable pathway to reduce educational disparities while maintaining high-quality, engaging instruction accessible to diverse global populations. Full article
(This article belongs to the Special Issue Sustainable Education in the Age of Artificial Intelligence (AI))

22 pages, 306 KB  
Article
The Importance of the Teacher–Researcher–Artist in Curriculum Design, Development and Assessment in Vocational Education in England
by Margaret (Maggie) Gregson
Educ. Sci. 2026, 16(1), 24; https://doi.org/10.3390/educsci16010024 - 24 Dec 2025
Viewed by 251
Abstract
Set in the vocational education and training sector in England, this article draws attention to how top-down, centre–periphery approaches to curriculum design and development in vocational education fail for at least three reasons. First, they misconstrue the nature of knowledge. Second, they lead to perfunctory and fragmented approaches to curriculum design, coupled with mechanistic measures of quality and achievement, which often require little more than “one-off” and superficially assessed demonstrations of performance. Finally, they underplay the role and importance of the teacher as researcher and artist in putting the cultural resources of society to work in creative curriculum design and pedagogy. Teacher artistry is pivotal in animating and heightening the vitality of vocational curricula. It is through this artistry that teachers make theories, ideas and concepts in vocational subjects and disciplines accessible and meaningful to all learners in coherent ways in the contexts of their learning and their lives. The consequences of the epistemic faux pas underpinning centre-to-periphery models of curriculum design and development are highlighted in this article in vocational tutors’ accounts of experiences of problems and issues in curriculum design, development and assessment encountered in their practice. Participants in the research teach in a variety of vocational education settings, including Apprenticeships and Higher-Level Technical Education; English Language at General Certificate of Secondary Education (GCSE) level; Health and Social Care; Information and Communications Technology; Construction (Plumbing); Digital Production, Design and Development and High-Tech Precision Engineering. Data are analysed and reported through systematic thematic analysis. This article draws upon qualitative data derived from a study funded by the Education and Training Foundation (ETF) in England over a two-year period from 2021 to 2023. The research population consists of a group of eight practitioner–researchers working in three colleges of Further Education (FE) and one Industry Training Centre (ITC) in England. All of the teachers of vocational education reported here volunteered to participate in the study. Research methods include semi-structured interviews, analysis of critical incidents and case studies produced by practitioner–researchers from across the FE and Skills sector in England. Full article