
Search Results (1,137)

Search Parameters:
Keywords = any language

10 pages, 418 KB  
Article
Empirical Analysis of Internal Hallucination Detection in Quantized LLMs: Layer Dynamics and White-Box Benchmarks
by Haohua Liu and Jinli Xu
Electronics 2026, 15(9), 1802; https://doi.org/10.3390/electronics15091802 - 23 Apr 2026
Abstract
As large language models (LLMs) move onto resource-constrained devices, maintaining factual reliability without adding another expensive decoding pass becomes a practical inference problem. Instead of introducing another complex hallucination detector, this paper presents an empirical study of which low-cost white-box features remain useful under a controlled single-pass benchmark. Across repeated candidate-answer reruns on Qwen2.5-1.5B-Instruct and Llama-3.2-1B-Instruct, truthful and incorrect internal states are most separable in the middle-to-late layers, with the peak consistently falling at 50–70% of total network depth across both model families. The depth-relative pattern is more stable than any single detector ranking: simple residual-space baselines, including Mahalanobis scoring, remain competitive with more elaborate residual-plus-spectral fusion features under the same protocol, although detector ranking still changes by task. A separate preliminary two-seed Qwen2.5-7B-Instruct BF16 probe under that same white-box benchmark reproduces the same middle-to-late peak, and auxiliary Int8 checks on Qwen2.5-1.5B and Qwen2.5-7B remain consistent with that same localization under moderate quantization. Taken together, the results point away from detector complexity and toward a more reproducible question of where hallucination cues emerge, which internal statistics remain reliable, and how cautiously such conclusions should be transferred to deployment settings. Full article
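The Mahalanobis baseline named in the abstract can be sketched in a few lines: fit a mean and covariance on hidden states collected from truthful answers, then score a new state by its distance from that distribution. This is an illustrative sketch, not the authors' implementation; the ridge term, dimensions, and thresholding are assumptions.

```python
import numpy as np

def fit_mahalanobis(truthful_states):
    """Fit mean and regularized inverse covariance on truthful hidden states.

    truthful_states: (n_samples, hidden_dim) array of mid-layer residuals.
    """
    mu = truthful_states.mean(axis=0)
    cov = np.cov(truthful_states, rowvar=False)
    # Small ridge term keeps the inverse stable when n_samples is modest
    # relative to hidden_dim (an assumption of this sketch).
    cov += 1e-3 * np.eye(cov.shape[0])
    return mu, np.linalg.inv(cov)

def mahalanobis_score(state, mu, cov_inv):
    """Larger distance = further from the truthful-state distribution."""
    d = state - mu
    return float(np.sqrt(d @ cov_inv @ d))
```

Per the abstract's localization finding, such a score would typically be computed on residual states drawn from roughly 50–70% of network depth.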
23 pages, 85141 KB  
Article
A Movement Description Language for Functional Training Exercise Analysis
by Lúcia Sousa, Daniel Canedo, Pedro Santos and António Neves
J. Funct. Morphol. Kinesiol. 2026, 11(2), 162; https://doi.org/10.3390/jfmk11020162 - 21 Apr 2026
Viewed by 109
Abstract
Objective: Functional training exercises involve complex multi-joint movements that challenge traditional rule-based or data-driven recognition systems. This paper introduces a Movement Description Language (MDL) designed to formally represent, analyze, and evaluate such exercises using camera-based pose estimation and interpretable, composable structures. Methods: The proposed MDL models each exercise as a finite-state machine defined by pose-derived angle proxy transitions, allowing movements to be described in a modular and reusable way. The framework is demonstrated with MediaPipe landmark extraction from monocular video, although the MDL remains compatible with any pose estimation algorithm, and focuses on exercise phase detection and repetition counting. Experimental validation was conducted on a dataset of 1513 videos of 12 functional exercises (squats, deadlifts, lunges, shoulder presses, planks, push-ups, pull-ups, bent-over rows, box jumps, thrusters, overhead squats, and burpees) obtained from public pose datasets, competition footage, and recordings of 9 participants in real-world environments. Results: Automated repetition counts were compared against manually annotated ground truth, showing an overall repetition-counting accuracy of 97.2%, with a mean per-exercise accuracy of 98.8% (range 95–100%). The MDL successfully handled both simple and compound exercises, maintaining reliable phase detection despite variations in execution speed, camera perspective, and environmental conditions. Conclusion: The system was implemented using real-time pose estimation to demonstrate the practical execution of the MDL framework. The proposed MDL provides a transparent, extensible, and computationally efficient framework for functional exercise analysis. By bridging human-readable movement semantics with executable motion logic, it enables interpretable automatic repetition counting and phase detection, offering an alternative to black-box recognition approaches.
The results support its potential for scalable deployment in training, monitoring and movement analysis applications. The proposed system is not intended for biomechanical measurement or clinical-grade kinematic analysis, but rather for interpretable modeling of exercise structure and repetition detection using approximate pose-derived signals. Full article
(This article belongs to the Section Kinesiology and Biomechanics)
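The finite-state treatment of an exercise can be illustrated with a minimal two-state machine over a single pose-derived angle signal. The thresholds, signal, and state names here are hypothetical simplifications; the paper's MDL composes richer, reusable phase descriptions.

```python
def count_repetitions(angles, down_thresh=90.0, up_thresh=160.0):
    """Count reps with a two-state machine over a joint-angle series.

    A rep is one full down->up cycle; the hysteresis gap between the two
    thresholds filters jitter around a single boundary angle.
    """
    state, reps = "up", 0
    for a in angles:
        if state == "up" and a < down_thresh:
            state = "down"                   # descent phase detected
        elif state == "down" and a > up_thresh:
            state, reps = "up", reps + 1     # ascent completes the rep
    return reps
```

A real MDL description would chain several such angle transitions per exercise, which is what makes compound movements like burpees tractable.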
37 pages, 3613 KB  
Article
Evaluating the Efficacy of Large Language Models in Stock Market Decision-Making: A Decision-Focused, Price-Only, Multi-Country Analysis Using Historical Price Data
by Maria C. Mariani, Sourav Malakar, Amrita Bagchi, Subhrajyoti Basu, Saptarsi Goswami, Osei Kofi Tweneboah, Sarbadeep Biswas, Ankit Dey and Ankit Sinha
Mach. Learn. Knowl. Extr. 2026, 8(4), 104; https://doi.org/10.3390/make8040104 - 17 Apr 2026
Viewed by 163
Abstract
This study provides a comparative evaluation of three state-of-the-art large language models (LLMs), namely OpenAI’s (San Francisco, CA, USA) GPT-4.0, Google’s (Google LLC, Mountain View, CA, USA) Gemini 2.0 Flash, and Meta’s (Meta Platforms, Menlo Park, CA, USA) LLaMA-4-Scout-17B-16E, in a decision-oriented framework in which the models generate structured outputs based only on historical closing-price data. The evaluation covers 150 stocks sampled from three countries (India, the United States, and South Africa) across ten economic sectors, including Information Technology, Banking, and Pharmaceuticals. Unlike many prior studies that combine numerical and textual inputs, this study relies solely on three years of numerical time series data and examines model responses in terms of decision labels such as buy, sell, or hold. The LLMs were provided with historical closing-price sequences and prompted with three types of finance-related questions: (a) whether to buy a stock, (b) whether to sell or hold a stock, and (c) in a pairwise comparison, which stock to buy or hold. These prompts were evaluated across two investment horizons: 1 month and 3 months. Model outputs were compared against realized market outcomes during the corresponding test periods. Performance was assessed across four key dimensions: country, sector, annualized volatility, and question type. The models were not given any supplementary financial information or instructions on specific analytical methods. The results indicate that GPT-4.0 achieves the highest average accuracy (56%), followed by LLaMA-4-Scout-17B-16E (48%) and Gemini 2.0 Flash (39%). Overall performance remains moderate and varies across market conditions, with relatively higher accuracy observed in high-volatility regimes (51%). 
This work evaluates how LLMs behave when presented with structured numerical price sequences in a controlled decision-labeling setting and contributes to the broader discussion on the potential and limitations of LLMs for numerical decision tasks in finance. Full article
33 pages, 3322 KB  
Review
Evolution of Dysphagia Rehabilitation in Japan Since the 1980s: Expanding Dental Roles in Interprofessional Care—A Narrative Review
by Mika Miyaoka, Kosuke Muraoka, Shuji Awano and Wataru Fujii
Healthcare 2026, 14(8), 1060; https://doi.org/10.3390/healthcare14081060 - 16 Apr 2026
Viewed by 292
Abstract
Background/Objectives: Japan, the world’s first super-aged society, has confronted rapid population aging and increasing healthcare demands earlier than any other country. In this context, dysphagia rehabilitation has become a critical issue affecting quality of life and survival. With nearly 30% of the population aged ≥65 years, Japan has developed a distinctive dysphagia rehabilitation model characterized by interprofessional collaboration and dental involvement. This narrative review describes its historical evolution and structural characteristics. Methods: This narrative review employed a structured literature search of PubMed and Ichushi-Web, supplemented by manual searches of policy documents and professional guidelines. Publications from 1980 to January 2026 were included if they addressed dysphagia rehabilitation systems or dental involvement in Japan. Both English- and Japanese-language sources were analyzed using thematic synthesis. Results: Japan’s dysphagia rehabilitation model evolved alongside population aging and is embedded within the universal health insurance and long-term care insurance systems. A prominent characteristic is the sustained involvement of dental professionals, who contributed to the foundational development of the field and remain actively involved across care settings, particularly within community- and home-based care. The system is further supported by certification frameworks, a triadic model integrating rehabilitation, nutrition, and oral health, and institutionalized interprofessional education. Conclusions: Previous studies have examined specific aspects of dysphagia care in Japan, but few have examined the overall structure of the system. This review maps the fundamental structure of Japan’s dysphagia rehabilitation model within its historical and policy context, offering insights relevant to dysphagia care in other aging societies. Full article
(This article belongs to the Section Healthcare Organizations, Systems, and Providers)
59 pages, 8251 KB  
Review
IMGT® Nomenclature of Immunoglobulins (IG) or Antibodies and T Cell Receptors (TR): A Common Language for Immunoinformatics and Artificial Intelligence (AI)
by Marie-Paule Lefranc and Gérard Lefranc
Antibodies 2026, 15(2), 35; https://doi.org/10.3390/antib15020035 - 15 Apr 2026
Viewed by 152
Abstract
The immunoglobulins (IG) or antibodies and the T cell receptors (TR) are the antigen receptors of the adaptive immune responses (AIR) of jawed vertebrates (Gnathostomata). IMGT®, the international ImMunoGeneTics information system®, was created in 1989 by Marie-Paule Lefranc (Laboratoire d’ImmunoGénétique Moléculaire (LIGM), Université de Montpellier and CNRS) to deal with and to manage the huge diversity of IG or antibodies and TR. The founding of IMGT® marked the advent of immunoinformatics, a new science which emerged at the interface between immunogenetics and bioinformatics. For the first time, the IG and TR variable (V), diversity (D), joining (J) and constant (C) genes were officially recognized as ‘genes’, as were the conventional genes. The IMGT-ONTOLOGY CLASSIFICATION axiom and the concepts of classification have generated the IMGT nomenclature and the IMGT Scientific chart rules for assigning IMGT names to IG and TR genes and alleles of Homo sapiens and of any other jawed vertebrate species. The IMGT nomenclature is used for genes in locus, in sequences (genomic or rearranged, expressed or not) and in structures, enabling comparative immunology, evolutionary immunogenetics, and standardized analysis and comparison of IG and TR repertoires in normal or pathologic situations. The IMGT nomenclature is used in basic, veterinary, and medical research, in clinical applications (mutation analysis in leukemia and lymphoma), and in therapeutic antibody design, engineering and humanization. By providing consistent and high-standard biocuration for the description of the IG and TR loci, genes and alleles, and for the analysis of the IG or antibody and TR-expressed rearranged sequences and proteins and structures, the IMGT nomenclature is the common language for immunoinformatics and artificial intelligence (AI). Full article
(This article belongs to the Section Antibody Discovery and Engineering)
32 pages, 1364 KB  
Article
XRL-LLM: Explainable Reinforcement Learning Framework for Voltage Control
by Shrenik Jadhav, Birva Sevak and Van-Hai Bui
Energies 2026, 19(7), 1789; https://doi.org/10.3390/en19071789 - 6 Apr 2026
Viewed by 460
Abstract
Reinforcement learning (RL) agents are increasingly deployed for voltage control in power distribution networks. However, their opaque decision-making creates a significant trust barrier, limiting their adoption in safety-sensitive operational settings. This paper presents XRL-LLM, a novel framework that generates natural language explanations for RL control decisions by combining game-theoretic feature attribution (KernelSHAP) with large language model (LLM) reasoning grounded in power systems domain knowledge. We deployed a Proximal Policy Optimization (PPO) agent on an IEEE 33-bus network to coordinate capacitor banks and on-load tap changers, successfully reducing voltage violations by 90.5% across diverse loading conditions. To make these decisions interpretable, KernelSHAP identifies the most influential state features. These features are then processed by a domain-context-engineered LLM prompt that explicitly encodes network topology, device specifications, and ANSI C84.1 voltage limits. Evaluated via G-Eval across 30 scenarios, XRL-LLM achieves an explanation quality score of 4.13/5. This represents a 33.7% improvement over template-based generation and a 67.9% improvement over raw SHAP outputs, delivering statistically significant gains in accuracy, actionability, and completeness (p<0.001, Cohen’s d values up to 4.07). Additionally, a physics-grounded counterfactual verification procedure, which perturbs the underlying power flow model, confirms a causal faithfulness of 0.81 under critical loading. Finally, five ablation studies yield three broader insights. First, structured domain context engineering produces synergistic quality gains that exceed any single knowledge component, demonstrating that prompt composition matters more than the choice of foundational model. Second, even an open-source 8B-parameter model outperforms templates given the same prompt, confirming the framework’s backbone-agnostic value.
Most importantly, counterfactual faithfulness increases alongside load severity, indicating that post hoc attributions are most reliable in the high-stakes regimes where trustworthy explanations matter most. Full article
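The game-theoretic attribution that KernelSHAP approximates can be made concrete with an exact Shapley computation over a small feature set; KernelSHAP exists precisely because this enumeration is exponential in the number of features. The payoff function below is a stand-in for illustration, not the paper's voltage-control policy.

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values for a set-valued payoff function.

    value_fn maps a frozenset of feature indices to a payoff. Feasible
    only for small n; KernelSHAP replaces this enumeration with a
    weighted-regression approximation over sampled coalitions.
    """
    phi = [0.0] * n_features
    features = range(n_features)
    for i in features:
        others = [j for j in features if j != i]
        for r in range(len(others) + 1):
            for subset in combinations(others, r):
                s = frozenset(subset)
                # Classic Shapley weight |S|! (n-|S|-1)! / n!
                weight = (factorial(len(s)) * factorial(n_features - len(s) - 1)
                          / factorial(n_features))
                phi[i] += weight * (value_fn(s | {i}) - value_fn(s))
    return phi
```

For an additive payoff the Shapley value of each feature equals its own contribution, a standard sanity check for any attribution code.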
14 pages, 245 KB  
Article
Exploring Strategies to Detect and Mitigate Bias in AI in Education: Students’ Perceptions and Didactic Approaches
by María Ribes-Lafoz, Borja Navarro-Colorado and José Rovira-Collado
Trends High. Educ. 2026, 5(2), 33; https://doi.org/10.3390/higheredu5020033 - 3 Apr 2026
Viewed by 585
Abstract
The increasing integration of Generative AI (GenAI) into higher education, particularly in the domain of language teaching, presents both opportunities and challenges. While AI-powered tools such as ChatGPT-5 can support language learning by generating personalised content which enables real-time interaction and feedback, they also risk perpetuating biases embedded in training data. These biases can appear in linguistic, cultural or socio-political forms, reinforcing stereotypes and influencing language norms. Therefore, equipping students and educators with strategies to critically assess AI outputs is essential for ethical and responsible AI use in language education. While recent research highlights the risks of algorithmic bias, less attention has been given to the perceptions and attitudes of pre-service teachers, whose future practice will shape classroom uses of these technologies. This exploratory pilot study adopts a survey-based approach to examine pre-service teachers’ baseline awareness of bias in artificial intelligence, with particular attention to linguistic and cultural dimensions. Data were collected through an online questionnaire administered to 65 undergraduate students enrolled in Primary Education degree programmes. The study documents baseline perceptions prior to any instructional intervention and provides preliminary empirical evidence to inform the future design of pedagogical strategies aimed at developing critical AI literacy in teacher education. Full article
30 pages, 1286 KB  
Article
Large Language Model Recommendations for Empiric Antibiotics Versus Clinician Prescribing: A Non-Interventional Paired Retrospective Antimicrobial Stewardship Analysis
by Ninel Iacobus Antonie, Vlad Alexandru Ionescu, Gina Gheorghe, Loredana-Crista Tiucă and Camelia Cristina Diaconu
Antibiotics 2026, 15(4), 368; https://doi.org/10.3390/antibiotics15040368 - 2 Apr 2026
Viewed by 428
Abstract
Background/Objectives: Antimicrobial resistance (AMR) remains a major global health threat, strengthening the case for antimicrobial stewardship strategies that limit unnecessary broad-spectrum empiric therapy while preserving timely escalation when clinically warranted. Before any clinical deployment of large language model (LLM)-based antibiotic decision support can be considered, structured offline evaluation is needed to assess whether model outputs align with auditable stewardship constraints under real-world admission contexts. We therefore evaluated whether post hoc LLM-generated empiric antibiotic recommendations showed greater concordance with a pre-specified stewardship benchmarking framework than clinician-initiated regimens in a retrospective shadow-mode setting. Methods: Single-center retrospective paired evaluation at Clinical Emergency Hospital of Bucharest (Internal Medicine, 2020–2024). The unit of analysis was the admission (N = 493), with paired 24 h empiric regimens (clinician-prescribed vs. post hoc LLM-recommended via OpenAI API; not visible to clinicians; no influence on care). Local laboratory-derived epidemiology was precomputed from microbiology exports and provided as structured prompt context to approximate information parity with clinicians’ implicit local ecology knowledge. Primary (prespecified) endpoint: any contextual guardrail violation (unjustified carbapenem/antipseudomonal/anti-MRSA under prespecified structured severity/MDR-risk rules), exact McNemar. Key secondary (prespecified): Δ contextual guardrail penalty (LLM − Clin), sign test and Wilcoxon signed-rank (ties reported). Ethics committee approval was obtained. Results: Guardrail violations occurred in 17.0% of clinician regimens vs. 4.9% of LLM regimens (paired RD −12.2%; matched OR 0.216, 95% CI 0.127–0.367; McNemar exact p = 1.60 × 10−10). Δ penalty had median 0 with 398/493 ties; among non-ties, improvements (Δ < 0) exceeded adverse shifts (79 vs. 16; sign-test p = 3.47 × 10−11). Conclusions: In this offline, non-interventional paired evaluation, LLM-generated empiric regimens showed greater concordance with a pre-specified stewardship benchmarking framework than clinician empiric regimens for the same admissions. These findings should not be interpreted as evidence of clinical superiority, patient safety, or causal effectiveness, but rather as process-level benchmarking within a rule-based stewardship construct. As such, reproducible guardrail-based benchmarking may serve as an early pre-implementation step to identify alignment and potential failure modes before prospective, safety-governed evaluation. Full article
(This article belongs to the Section Antibiotics Use and Antimicrobial Stewardship)
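The exact McNemar test used for the primary endpoint depends only on the discordant pairs; a minimal version can be written from the binomial tail directly. The counts passed in below are illustrative, since the abstract reports percentages rather than the discordant-cell counts.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value from the discordant pair counts.

    b: pairs where only method A violates; c: pairs where only method B
    violates. Concordant pairs (ties) do not enter the statistic.
    """
    n = b + c
    k = min(b, c)
    # Binomial tail under H0: discordant pairs split 50/50.
    tail = sum(comb(n, i) for i in range(k + 1)) * 0.5 ** n
    return min(1.0, 2.0 * tail)
```

The sign test reported for the Δ-penalty endpoint is the analogous binomial tail over the 79 vs. 16 non-tied pairs (one- or two-sided as prespecified).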
24 pages, 6161 KB  
Article
Just-in-Time Historical State Reconstruction for Low-Latency Financial Trading with Large Language Models
by Dong Hoang Van, Md Monjurul Karim and Qiang Qu
AI 2026, 7(4), 117; https://doi.org/10.3390/ai7040117 - 27 Mar 2026
Viewed by 1127
Abstract
This paper introduces Historical State Reconstruction, a novel framework for low-latency financial decision-making using Large Language Models. While agentic systems have demonstrated potential in synthesizing complex financial narratives, they typically rely on Retrieval-Augmented Generation or memory-based architectures. These paradigms introduce significant latency and risk look-ahead bias during real-time inference, rendering them unsuitable for high-frequency trading environments where milliseconds determine profitability. This proposed framework resolves this bottleneck by decoupling the heavy computational cost of context acquisition from the latency-sensitive critical path of decision-making. We propose a system that proactively compiles unstructured regulatory filings (10-K, 10-Q, 8-K) into a structured, bitemporal database. By pre-computing complex state facets, such as financial health ratios, governance structures, and insider trading signals offline, the system allows trading agents to “time travel” to a reconstructed state at any historical moment t with O(1) snapshot retrieval plus O(k) delta application complexity. We implement this approach on the top 50 companies in the S&P 500 ranked by market capitalization, processing over 12,000 filings to demonstrate a pipeline that transforms high-dimensional financial narratives into compact, prompt-ready context. Our evaluation shows that the system reduces context retrieval latency by over 97% compared to traditional baselines while achieving a 300:1 compression ratio for financial health data. Furthermore, the bitemporal architecture guarantees strict temporal integrity, eliminating the risk of data leakage in backtesting and satisfying the reproducibility requirements of regulatory frameworks like SR 11-7. Full article
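The "time travel" mechanism, one snapshot lookup plus replay of the k deltas recorded since, can be sketched with an in-memory store. The field names and the linear delta scan are illustrative simplifications of the paper's bitemporal database; a production store would index deltas by time to honor the O(k) bound.

```python
import bisect

class BitemporalStore:
    """Reconstruct entity state at any past time t from snapshots + deltas."""

    def __init__(self):
        self.snap_times = []   # sorted snapshot timestamps
        self.snapshots = {}    # t -> full state dict
        self.deltas = []       # sorted (t, key, value) updates

    def snapshot(self, t, state):
        bisect.insort(self.snap_times, t)
        self.snapshots[t] = dict(state)

    def record(self, t, key, value):
        bisect.insort(self.deltas, (t, key, value))

    def as_of(self, t):
        """State visible at time t -- no look-ahead past t."""
        i = bisect.bisect_right(self.snap_times, t) - 1
        if i < 0:
            state, start = {}, float("-inf")
        else:
            start = self.snap_times[i]
            state = dict(self.snapshots[start])
        # Linear scan for clarity; only deltas in (start, t] are applied.
        for dt, key, value in self.deltas:
            if start < dt <= t:
                state[key] = value
        return state
```

The strict `dt <= t` cutoff is what eliminates look-ahead bias in backtests: a query as of time t can never see a filing recorded after t.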
22 pages, 668 KB  
Data Descriptor
Kula Toponyms: Preserving the Cultural–Linguistic Landscape of Eastern Alor
by Hanjun Hua and Francesco Perono Cacciafoco
Data 2026, 11(3), 61; https://doi.org/10.3390/data11030061 - 17 Mar 2026
Viewed by 483
Abstract
Toponyms, i.e., place names, are fundamental for reconstructing the diachronic development of communities without written records, encoding unique historical and cultural data of any civilisation; however, they are vulnerable to loss as languages decline. This also happens for the scarcely documented language Kula (or Tanglapui), a Papuan Alor-Pantar language (Trans-New Guinea macro-family) from Eastern Alor, Southeastern Indonesia (Alor-Pantar Archipelago, Timor area). The spatial knowledge encapsulated in Kula toponyms has been critically threatened by resettlement since the 1960s, alongside its declining daily usage. To preserve this heritage, this article presents a systemised dataset of Kula place names derived from oral traditions, documented for the first time during fieldwork between 2023 and 2026. Data collection followed established language documentation methodologies, utilising semi-structured interviews and community verification with elder native speakers and local consultants to ensure adherence to ethical standards and cultural accuracy of recording practices. The dataset comprises 31 entries of place names, each detailing toponymic variants, glosses/folk etymologies, associated natural resources, stories/historical elements, settlement type, location, habitation status, and internal and external tribal links when information is available. This paper fills a critical gap in Timor-Alor-Pantar linguistics, offering an open-access resource for reconstructing migration patterns and preserving the Kula people’s collective memory against accelerating language endangerment. Full article
(This article belongs to the Section Information Systems and Data Management)
27 pages, 1023 KB  
Article
MoRe: LLM-Based Domain Model Generation with Hybrid Self-Refinement
by Ru Chen, Jingwei Shen and Xiao He
Electronics 2026, 15(6), 1239; https://doi.org/10.3390/electronics15061239 - 17 Mar 2026
Viewed by 498
Abstract
Generating domain models from requirements is a vital and complex challenge in automated software engineering. Although large language models (LLMs) have exhibited significant competence in this area, their propensity for hallucination frequently results in models that are redundant, inconsistent, or structurally unsound. To enhance the quality of automatically generated models, this paper introduces MoRe, an LLM-based approach to domain model generation with self-refinement. Within our approach, an LLM is first tasked with producing an initial domain model draft. Subsequently, a hybrid refinement—combining LLMs with a rule-based scanner—is employed to identify and correct common issues in the model. An empirical study was conducted using 30 domain modeling problems and four open-source LLMs. The results indicate that MoRe significantly improves the quality of generated domain models. This paper advocates for incorporating a self-refinement phase as a standard component in any automated modeling workflow. Full article
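The rule-based half of the hybrid refinement can be illustrated by a scanner over a toy domain-model representation. The schema and the three rules below are hypothetical, since the abstract does not enumerate MoRe's actual checks; issues found this way would be fed back to the LLM for correction.

```python
def scan_domain_model(model):
    """Flag common structural issues in a generated domain model.

    model: {"classes": [{"name": ..., "attributes": [...]}, ...],
            "associations": [{"source": ..., "target": ...}, ...]}
    Returns human-readable issue strings for the refinement step.
    """
    issues = []
    names = [c["name"] for c in model.get("classes", [])]
    for name in set(names):
        if names.count(name) > 1:           # redundancy check
            issues.append(f"duplicate class: {name}")
    known = set(names)
    for assoc in model.get("associations", []):
        for end in ("source", "target"):    # consistency check
            if assoc[end] not in known:
                issues.append(f"dangling association end: {assoc[end]}")
    for c in model.get("classes", []):
        attrs = c.get("attributes", [])
        if len(attrs) != len(set(attrs)):   # structural-soundness check
            issues.append(f"repeated attribute in class: {c['name']}")
    return issues
```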
18 pages, 748 KB  
Review
Integrating Theoretical Perspectives: A Narrative Review of How Music Training Enhances Word Reading
by William Choi
Educ. Sci. 2026, 16(3), 449; https://doi.org/10.3390/educsci16030449 - 16 Mar 2026
Viewed by 418
Abstract
In recent decades, there has been considerable evidence suggesting that music training can improve word reading. However, the mechanisms through which music training enhances word reading remain an open question. Although various claims and hypotheses have been proposed, they are scattered across the literature. By reviewing existing research, this narrative review article identifies three possible theoretical accounts: the perceptual account, the cognitive account, and the music reading account. The central argument is that these accounts are not mutually exclusive and can be integrated into a comprehensive framework. Specifically, this article proposes an integrated model suggesting that music training improves word reading by enhancing phonological awareness, cognitive skills, and music reading abilities. Rather than dismissing any single account in favour of others, this article advocates for their integration and calls for an evaluation of their relative contributions to music-to-language transfer. Additionally, future research directions for investigating this transfer process are discussed. Full article
(This article belongs to the Special Issue Music Education and Cultures)
28 pages, 817 KB  
Article
Compositional Incrementality Based on Polish Reveal-Type Verbs and Verbal Nouns
by Karolina Zuchewicz
Languages 2026, 11(3), 52; https://doi.org/10.3390/languages11030052 - 16 Mar 2026
Viewed by 296
Abstract
This article focuses on the realization of incrementality in Polish verbal and nominal constructions. The object of investigation is clause-embedding reveal-type concepts like ‘prove’, ‘reveal’, or ‘show’. In Slavic languages, incremental relations have traditionally been examined in direct relation to (im)perfectivity, with imperfective verbs enforcing partial affectedness of events and objects, and perfective verbs enforcing their total affectedness. In the present paper, I take a closer look at the incremental output within the reveal-type concept. I investigate whether an incremental event comes with a fixed incremental path that remains intact independently of any morphological or syntactic modifications. My research question is: Is an incremental feature specified in the lexicon as is the aspectual value ‘(im)perfective’, or does it rather arise compositionally? To answer this question, I analyze the impact of the dative argument and the nominalization on the incremental output of clause-embedding reveal-type predicates. I demonstrate that incremental meanings are affected by the properties of an entire construction. Based on that, I propose to distinguish between two types of incrementality: the non-modifiable (im)perfectivity-dependent partial and total integration requirement, and the compositional incrementality that arises as an interplay between lexical semantics, argument structure, and the morphological shape of the respective lexeme. Full article
23 pages, 2679 KB  
Article
Morphology-Aware Deep Features and Frozen Filters for Surgical Instrument Segmentation with LLM-Based Scene Summarization
by Adnan Haider, Muhammad Arsalan and Kyungeun Cho
J. Clin. Med. 2026, 15(6), 2227; https://doi.org/10.3390/jcm15062227 - 15 Mar 2026
Abstract
Background/Objectives: The rise of artificial intelligence is bringing intelligent assistance into the healthcare sector, including surgery. Vision-based intelligent systems that assist surgical procedures can significantly increase productivity, safety, and effectiveness during surgery. Surgical instruments are central components of any surgical intervention, yet detecting and locating them during live surgeries remains challenging due to adverse imaging conditions such as blood occlusion, smoke, blur, glare, low contrast, instrument scale variation, and other artifacts. Methods: To address these challenges, we developed an advanced segmentation architecture termed the frozen-filters-based morphology-aware segmentation network (FFMS-Net). Accurate surgical instrument segmentation strongly depends on edge and morphology information; however, in conventional neural networks, this spatial information is progressively degraded during processing. FFMS-Net introduces a frozen and learnable feature pipeline (FLFP) that simultaneously exploits frozen edge representations and learnable features. Within FLFP, Sobel and Laplacian filters are frozen to preserve edge and orientation information, which is subsequently fused with learnable initial spatial features. Moreover, a tri-atrous blending (TAB) block is employed at the end of the encoder to fuse multi-receptive-field contextual information, preserving instrument morphology and improving robustness under challenging conditions such as blur, blood occlusion, and smoke. Datasets focused on surgical instruments often suffer from severe class imbalance and poor instrument visibility. To mitigate these issues, FFMS-Net incorporates a progressively structure-preserving decoder (PSPD) that aggregates dilated and standard spatial information after each upsampling stage to maintain class structure. 
Multi-scale spatial features from different encoder levels are further fused using light skip paths (LSPs) to project channels with task-relevant patterns. Results/Conclusions: FFMS-Net is extensively evaluated on three challenging datasets: UW-Sinus-surgery-live, UW-Sinus-cadaveric, and CholecSeg8k. The proposed method demonstrates promising performance compared with state-of-the-art approaches while requiring only 1.5 million trainable parameters. In addition, an open-source large language model is integrated for non-clinical summarization of the surgical scene based on the predicted mask and deterministic descriptors derived from it. Full article
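The core idea of the frozen-filter pipeline, fixed Sobel and Laplacian kernels whose weights are never updated during training, can be sketched as follows. This is an illustrative minimal sketch only: the kernel choices match the standard definitions of those filters, but the fusion and architecture details of FFMS-Net are not reproduced here, and `frozen_edge_features` is a hypothetical helper name.

```python
import numpy as np

# Frozen edge-extraction kernels: fixed during training, never learned.
# These are the standard 3x3 Sobel and Laplacian definitions.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def conv2d_valid(img, kernel):
    """Plain 'valid' 2D cross-correlation (no padding, stride 1)."""
    h, w = img.shape
    kh, kw = kernel.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def frozen_edge_features(img):
    """Stack frozen Sobel/Laplacian responses into a 3-channel feature map,
    ready to be concatenated with learnable convolutional features."""
    return np.stack([conv2d_valid(img, k) for k in (SOBEL_X, SOBEL_Y, LAPLACIAN)])

# A vertical step edge: Sobel-x responds strongly at the transition,
# while a constant image produces an all-zero response.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
feats = frozen_edge_features(img)
```

Because the kernels are frozen, edge and orientation cues survive unchanged regardless of how the learnable branch evolves during training, which is the property the abstract attributes to FLFP.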
(This article belongs to the Special Issue Artificial Intelligence and Machine Learning in Clinical Practice)
26 pages, 1536 KB  
Article
GraphGPT-Patent: Time-Aware Graph Foundation Modeling on Semantic Similarity Document Graphs for Grant-Time Economic Impact Prediction
by Tianhui Fang, Junru Si, Chi Ye and Hailong Shi
Appl. Sci. 2026, 16(6), 2737; https://doi.org/10.3390/app16062737 - 12 Mar 2026
Abstract
Predicting the future impact of technical economic documents at release time is challenging due to delayed supervision signals, long-tailed label distributions, and time- and domain-dependent shifts in language and topics. Moreover, similarity graphs derived from text embeddings can be noisy due to boilerplate and evolve under temporal drift, making robustness and leakage-free evaluation essential. We formulate grant-time patent impact prediction as a node classification and within-domain ranking problem on a large-scale semantic similarity document graph built from patent text embeddings, avoiding any future citation leakage. The document graph is constructed via ANN Top-K retrieval and similarity thresholding, enabling scalable and reproducible sparsification on hundreds of thousands of nodes. We propose GraphGPT-Patent, which adapts a reversible graph-to-sequence foundation backbone to local subgraphs extracted from the similarity network. The model incorporates time- and domain-conditioned edge reliability to suppress drift-induced and template-driven pseudo-similarity, and optimizes a joint objective coupling high-impact classification with ranking consistency within comparable groups. Experiments on USPTO granted patents (2000–2022) across three high-volume CPC domains and three evaluation horizons show consistent gains over text-only and GNN baselines, achieving up to 0.94 recall for the positive class and improved macro-average recall across nine settings. Temporal shift analyses further quantify the effect of training-data freshness, while explanation subgraphs provide auditable structural evidence of model decisions. The proposed framework offers an effective graph-based learning pipeline for scalable impact prediction and downstream triage under strict information constraints. Full article
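The graph-construction step described above, Top-K nearest-neighbor retrieval over text embeddings followed by similarity thresholding, can be sketched in a few lines. This is a minimal illustration: brute-force cosine search stands in for the ANN index the paper uses at scale, and the function name and default parameters are assumptions, not the paper's settings.

```python
import numpy as np

def build_similarity_graph(embeddings, k=3, threshold=0.5):
    """Sparsify a document similarity graph: for each node, keep at most
    its k most cosine-similar neighbors, and only those whose similarity
    clears the threshold. Returns a sorted list of undirected edges."""
    # Row-normalize so the dot product equals cosine similarity.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sims = X @ X.T
    np.fill_diagonal(sims, -np.inf)  # exclude self-loops

    edges = set()
    for i in range(len(X)):
        # Brute-force Top-K retrieval; an ANN index would replace this step.
        top = np.argsort(sims[i])[::-1][:k]
        for j in top:
            if sims[i, j] >= threshold:
                j = int(j)
                edges.add((min(i, j), max(i, j)))  # undirected edge
    return sorted(edges)
```

With a high threshold, only genuinely similar documents are linked, which is how thresholding suppresses boilerplate-driven pseudo-similarity; the Top-K cap keeps per-node degree bounded so the graph stays sparse on hundreds of thousands of nodes.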