Search Results (6,208)

Search Parameters:
Keywords = language generation

23 pages, 417 KB  
Review
A Review of the Effectiveness of Hand Gestures in Second Language Phonetic Training
by Xiaotong Xi and Peng Li
Languages 2026, 11(3), 43; https://doi.org/10.3390/languages11030043 (registering DOI) - 4 Mar 2026
Abstract
This narrative review synthesizes 24 empirical studies on the role of four types of pedagogical gestures (beat, durational, pitch, and articulatory) in second language (L2) phonetic training since 2010. We reviewed studies involving training interventions to assess the efficacy, mediating factors, and robustness of multimodal training. The findings confirm that gestural training is a powerful tool, yielding the most robust positive effects for L2 speech production and the acquisition of suprasegmental features. Crucially, the effectiveness is highly dependent on gesture-sound consistency and visual saliency of the target phonetic/prosodic feature. However, results are mixed regarding perceptual learning and the generalization of gains to untrained items or novel contexts. While the literature supports the value of gestural training, there are gaps in determining the optimal training paradigm (observing gestures vs. performing gestures), accounting for individual learner differences, and establishing long-term retention and ecological validity. Future research should incorporate longitudinal designs and neurophysiological methods to fully illuminate the cognitive mechanisms that drive the body–mind link in L2 speech acquisition. Full article
32 pages, 4122 KB  
Article
Navigating the Seas of AI: Effectiveness of Small Language Models on Edge Devices for Maritime Applications
by Nicolò Guainazzo, Giorgio Delzanno, Davide Ancona and Daniele D’Agostino
Sensors 2026, 26(5), 1590; https://doi.org/10.3390/s26051590 (registering DOI) - 3 Mar 2026
Abstract
This paper explores the feasibility of employing small language models (SLMs) on battery-powered edge devices in environments with limited or no internet connectivity. SLMs offer significant advantages in such scenarios due to their lower resource requirements compared with large language models. The use case in this study is maritime navigation—in particular, the Sailing Directions (Enroute) documentation of the World Port Index (WPI) provided by the National Geospatial-Intelligence Agency (NGA), which provides information that cannot be shown graphically on nautical charts and is not readily available elsewhere. In this environment, response immediacy is not critical, as users have sufficient time to query information while navigating and planning activities, making edge devices ideal for running these models. Response quality, by contrast, is essential. For this reason, given the constrained knowledge of SLMs in maritime contexts, we investigate the use of the retrieval-augmented generation (RAG) methodology, integrating external information from sailing directions. A comparative analysis is presented to evaluate the performance of various state-of-the-art SLMs, focusing on response quality, the effectiveness of the RAG component, and inference times. Full article
(This article belongs to the Special Issue Energy Harvesting and Machine Learning in IoT Sensors)
34 pages, 66116 KB  
Article
Frequency-Domain Trajectory Planning for Autonomous Driving in Highly Dynamic Scenarios
by Jie Xia, Zhuo Kong, Xiaodong Wu, Boran Shi, Yuanbo Han and Min Xu
Appl. Sci. 2026, 16(5), 2447; https://doi.org/10.3390/app16052447 - 3 Mar 2026
Abstract
Trajectory planning is a central problem in autonomous driving, requiring long-horizon reasoning, strict safety guarantees, and robustness to rare but critical events. Recent learning-based planners increasingly formulate planning as an autoregressive sequence generation problem, analogous to large language models, where future motions are discretized into action tokens and predicted by Transformer-based neural sequence models. Despite promising empirical results, most existing approaches adopt time-domain action representations, in which consecutive actions are highly correlated. When combined with autoregressive decoding, this design induces degenerate generation behavior in learning-based planners, encouraging local action continuation and leading to rapid error accumulation during closed-loop execution, particularly in safety-critical corner cases such as sudden pedestrian emergence. To address this limitation of time-domain autoregressive planning, we propose a unified trajectory planning framework built upon three core ideas: (1) explicit action tokenization for long-horizon planning, (2) transformation of the action space from the time domain to the frequency domain, and (3) a hybrid learning paradigm that combines imitation learning with reinforcement learning. By representing future motion using compact frequency-domain action coefficients rather than per-timestep actions, the proposed planner is encouraged to reason about global motion intent before refining local details. This change in action representation fundamentally alters the inductive bias of learning-based autoregressive planning, mitigates exposure bias, and enables earlier and more decisive responses in complex and safety-critical environments. We present the model formulation, learning objectives, and training strategy, and outline a comprehensive experimental protocol. Full article
(This article belongs to the Section Robotics and Automation)
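The frequency-domain action representation described in the abstract above can be illustrated in a few lines: a per-timestep signal is projected onto a truncated cosine basis so that a handful of coefficients capture the global motion shape before any local detail. This is a toy sketch, not the paper's tokenizer; the signal, horizon length, and coefficient count are invented for illustration.

```python
import numpy as np

# Toy illustration (not the paper's actual tokenizer): re-express a
# per-timestep action signal as K cosine-basis (DCT-II) coefficients,
# then synthesize it back. With K << T, the leading coefficients already
# describe the global shape of the motion rather than the next local step.

T, K = 50, 6                                   # horizon, kept coefficients
n = np.arange(T)
basis = np.cos(np.pi * np.outer(np.arange(K), n + 0.5) / T)  # DCT-II rows

traj = 0.5 * n / T + 0.1 * np.sin(2 * np.pi * n / T)  # toy 1-D action signal
coeffs = (2.0 / T) * (basis @ traj)            # analysis (forward DCT)
coeffs[0] *= 0.5                               # DC-term scaling for inversion
recon = basis.T @ coeffs                       # synthesis from only K coeffs

print(float(np.max(np.abs(recon - traj))))     # small reconstruction error
```

Because the signal is smooth, six coefficients reconstruct fifty timesteps closely; a planner emitting `coeffs` as tokens commits to a global trajectory shape first, which is the inductive bias the abstract argues for.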
16 pages, 20925 KB  
Article
RewriteGen: Autonomous Query Optimization for Retrieval-Augmented Large Language Models via Reinforcement Learning
by Yixuan Zhao, Zihao Fan, Yingying Cao, Zhengjia Lyu and Jingyuan Li
Electronics 2026, 15(5), 1058; https://doi.org/10.3390/electronics15051058 - 3 Mar 2026
Abstract
Large Language Models (LLMs) have achieved substantial progress in knowledge-intensive tasks, particularly through Retrieval-Augmented Generation (RAG) frameworks. However, existing RAG systems often suffer from performance degradation when input queries are misaligned with retrieval requirements, and effectively coordinating retrieval with reasoning remains challenging—especially for multi-hop questions requiring iterative retrieval steps. To address these challenges, we propose ReWriteGen, a unified framework that integrates query rewriting, retrieval augmentation, and complementary generation within a coordinated architecture, optimized using reinforcement learning (Group Relative Policy Optimization, GRPO) and Direct Preference Optimization (DPO). ReWriteGen introduces a retrieval-aware query rewriting mechanism to better align input queries with external knowledge. The framework optimizes retrieval-augmented answers without requiring supervised reasoning annotations. Our experiments show that ReWriteGen consistently outperforms traditional RAG baselines across three multi-hop QA benchmarks: HotpotQA, MuSiQue, and 2Wiki. On HotpotQA, ReWriteGen achieves improvements of 5.32 and 5.10 percentage points in EM and LLM-based evaluation, respectively, compared to the strongest baseline. Corresponding gains of 11.90 and 7.18 are observed on MuSiQue, and 15.45 and 18.60 on 2Wiki. ReWriteGen enhances the coordination between retrieval and reasoning in LLMs, delivering consistent performance gains while reducing reliance on supervised reasoning annotations and extensive task-specific engineering. Full article
(This article belongs to the Special Issue AI for Industry)
20 pages, 1100 KB  
Review
Educational Applications of AI-Based Chatbots in Nursing: A Scoping Review
by Francisco Fernandes, Rúben Encarnação, José Alves, Carla Pais-Vieira, Suzinara Beatriz Soares de Lima and Paulo Alves
Nurs. Rep. 2026, 16(3), 87; https://doi.org/10.3390/nursrep16030087 (registering DOI) - 3 Mar 2026
Abstract
Background/Objectives: The rapid expansion of generative artificial intelligence (AI) and large language model-based chatbots has accelerated their adoption in higher education, including nursing. This scoping review mapped the use of AI-based chatbots in nursing education, including curricular domains, pedagogical approaches, educational outcomes, and implementation challenges. Methods: A scoping review was conducted following the Joanna Briggs Institute methodology and reported in accordance with the PRISMA-ScR guideline. Searches were performed across major bibliographic databases and grey literature sources. Quantitative, qualitative, and mixed-methods studies addressing the use of AI chatbots in nursing education or professional training were included. Data were extracted using a standardized instrument and synthesized through descriptive statistics and qualitative content analysis. Results: Sixty-six studies (2019–2025) were included, with significant growth observed after 2023. Most studies employed quasi-experimental designs (37.9%) and were implemented in academic settings (83.3%). Application formats varied across online, hybrid, simulation-based, and classroom models. Reported benefits included improved learning performance, clinical reasoning, and student engagement. Key challenges involved the reliability of AI outputs, academic integrity, data protection, and limited institutional governance. Conclusions: AI-based chatbots represent promising tools to enhance nursing education, particularly when integrated into structured pedagogical strategies with active faculty supervision. Their use can support the development of clinical reasoning, student engagement, and personalized learning. However, methodological heterogeneity, ethical concerns, and governance gaps highlight the need for careful implementation and further rigorous research to ensure safe, effective, and pedagogically sound integration. Full article
15 pages, 736 KB  
Article
Reducing Energy Footprint of LLM Inference Through FPGA-Based Heterogeneous Computing Platforms
by Thiago Cormie Monteiro and Andrea Guerrieri
Electronics 2026, 15(5), 1052; https://doi.org/10.3390/electronics15051052 - 3 Mar 2026
Abstract
Artificial Intelligence (AI) has emerged as a transformative force, increasingly integrated into diverse aspects of modern society, from healthcare and education to business and entertainment. Among the most influential AI technologies are large language models (LLMs), such as generative pretrained transformers (GPTs). These models are designed to process vast amounts of data and perform complex computations, enabling advanced capabilities in natural language understanding and generation. However, deployment and operation of such systems requires significant computational resources, leading to substantial energy consumption. While general-purpose hardware such as GPUs is limited by fixed-precision architectures, field-programmable gate arrays (FPGAs) offer the bit-level reconfigurability needed to exploit ultra-low-bitwidth representations. This allows power-intensive multiplications to be replaced by streamlined logic-based accumulations, maximizing the energy benefits of model quantization. This paper addresses the problem of the energy impact of LLMs by leveraging innovative FPGA-based heterogeneous computing platforms. Results demonstrate that ternary matrix multiplication (MatMul) achieves a 23% speedup and a remarkable 96% reduction in digital signal processor (DSP) utilization. Furthermore, the final optimized design shows a 52% reduction in total energy consumption compared to the baseline, making heterogeneous computing a compelling solution for power- and resource-constrained embedded applications. Full article
(This article belongs to the Special Issue New Trends for Power Optimizations in FPGA-Based Embedded Systems)
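The ternary matrix multiplication (MatMul) mentioned above is easy to illustrate: when weights are constrained to {-1, 0, +1}, every output element reduces to a sum and difference of activations, with no multiplications at all — which is what allows an FPGA to replace DSP multiplier blocks with plain adder logic. The sketch below is a hypothetical NumPy illustration of that idea, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch of ternary matrix multiplication: with weights in
# {-1, 0, +1}, each output element is formed by adding the activations
# whose weight is +1 and subtracting those whose weight is -1.

def ternary_matmul(x, w_ternary):
    """x: (n, k) activations; w_ternary: (k, m) with entries in {-1, 0, 1}."""
    out = np.zeros((x.shape[0], w_ternary.shape[1]))
    for j in range(w_ternary.shape[1]):
        plus = w_ternary[:, j] == 1    # activation columns to add
        minus = w_ternary[:, j] == -1  # activation columns to subtract
        out[:, j] = x[:, plus].sum(axis=1) - x[:, minus].sum(axis=1)
    return out

x = np.array([[1.0, 2.0, 3.0]])
w = np.array([[1], [0], [-1]])
print(ternary_matmul(x, w))  # → [[-2.]]  (equals x @ w)
```

On an FPGA, the two masked sums map directly to accumulation trees, which is consistent with the 96% DSP-utilization reduction the abstract reports.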
13 pages, 2278 KB  
Article
Opportunities and Challenges of Visual Large Language Models in Imaging Diagnostics: Lessons from Brain Metastasis Detection in Clinical MRI
by Christian Nelles, Nour Abou Zeid, Robert Terzis, Andra-Iza Iuga, Lukas Görtz, Marvin A. Spurek, David Maintz, Simon Lennartz and Jonathan Kottlors
Diagnostics 2026, 16(5), 749; https://doi.org/10.3390/diagnostics16050749 - 3 Mar 2026
Abstract
Background/Objectives: To evaluate the diagnostic accuracy of two visual large language models (vLLMs), GPT-4o (OpenAI) and Claude Sonnet 3.5 (Anthropic), for detecting brain metastases in routine MRI using combined imaging and textual input. Methods: This retrospective study included 31 patients with and 46 without brain metastases with underlying melanoma (n = 24), lung cancer (n = 23), breast cancer (n = 17), or renal cell carcinoma (n = 13). In total, 100 MRI examinations (50 with, 50 without metastases) were provided to both vLLMs using a single representative slice per sequence, together with clinical history and the referring question. The generated free-text reports were evaluated for detection accuracy, overdiagnosis, correct sequence recognition, anatomical localization, lesion laterality, and lesion size estimation. Results: Both vLLMs showed perfect sensitivity (100% for both) but very low specificity (GPT-4o: 8%, Sonnet 3.5: 4%; p = 0.625), resulting in low diagnostic accuracy (GPT-4o: 54%, Sonnet 3.5: 52%; p = 0.625). Sequence identification was highly accurate in both models, with GPT-4o performing significantly better (100% vs. 93%; p < 0.05). Identification of the anatomical brain region (70% vs. 72%; p = 1.00) and lesion laterality (62% vs. 76%; p = 0.189) was comparable. Both models hallucinated additional lesions in 12% of cases. Lesion size measurements showed no significant differences between the models or in comparison with the radiologist. Conclusions: GPT-4o and Claude Sonnet 3.5 can generate radiological reports and detect brain metastases with excellent sensitivity, but their very low specificity, frequent hallucinations, and limited spatial reliability currently preclude clinical application. Future work should address how the balance between visual and textual input influences diagnostic behavior in vLLMs. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
23 pages, 703 KB  
Article
CPES: A Comprehensive Method for Automatic Evaluation of Paraphrased Sentences
by Haya Rabih Alsulami and Amal Abdullah Almansour
Appl. Sci. 2026, 16(5), 2427; https://doi.org/10.3390/app16052427 - 2 Mar 2026
Abstract
Paraphrasing is the process of transforming a given text into another text using alternative lexical or syntactic forms while preserving its original meaning. Paraphrasing significantly affects several Natural Language Processing (NLP) applications, such as machine translation (MT) and data augmentation. Paraphrasing lacks a specifically designed evaluation metric, and most research adopts metrics developed for other NLP purposes. Paraphrase evaluation remains challenging due to the limitations of surface-level similarity metrics such as BLEU and ROUGE. Therefore, this research aims to develop a new metric for paraphrase generation, the Comprehensive Paraphrasing Evaluation Score (CPES). Furthermore, the CPES requires lexical language resources; thus, the research uses an Arabic corpus and produces a new Arabic lexical dictionary (the Rabih dictionary). The CPES considers major paraphrasing criteria, including sentence structure, changes in word forms, synonym substitution, and paraphrased-sentence lexical diversity (LD). The CPES supports interpretability by enabling decomposition into the criteria that drive the final result. The research finds that (1) the CPES effectively measures the modification ratio between original and paraphrased sentences, and (2) the text category impacts CPES values. Full article
(This article belongs to the Special Issue Applications of Natural Language Processing to Data Science)
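The abstract describes CPES as a composite of several paraphrasing criteria that can be decomposed to show which criterion drives the final score. The exact CPES formula is not given in the abstract; the sketch below only illustrates that composite-with-breakdown pattern, using two invented toy criteria and invented weights.

```python
# Generic composite-score pattern (illustrative only, not the CPES formula):
# several criteria are scored separately, combined by weights, and the
# per-criterion breakdown is returned alongside the total so the result
# stays interpretable.

def paraphrase_score(original, paraphrase, weights=(0.5, 0.5)):
    o, p = original.lower().split(), paraphrase.lower().split()
    # Criterion 1: lexical change ratio (share of paraphrase words not in original)
    change = sum(w not in o for w in p) / len(p)
    # Criterion 2: length preservation (penalize large length drift)
    length = 1.0 - abs(len(p) - len(o)) / max(len(p), len(o))
    parts = {"lexical_change": change, "length_preservation": length}
    total = weights[0] * change + weights[1] * length
    return total, parts

score, parts = paraphrase_score("the cat sat on the mat",
                                "a feline rested on the rug")
print(round(score, 2), parts)
```

Returning `parts` alongside the total is what makes decomposition possible: a low score can be traced to insufficient lexical change or to length drift rather than reported as a single opaque number.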
18 pages, 1182 KB  
Article
Co-MedGraphRAG: A Collaborative Large–Small Model Medical Question-Answering Framework Enhanced by Knowledge Graph Reasoning
by Sizhe Chen and Tao Chen
Information 2026, 17(3), 247; https://doi.org/10.3390/info17030247 - 2 Mar 2026
Abstract
Large language models (LLMs) have demonstrated significant capabilities in natural language processing (NLP), but they often encounter challenges in the medical domain. This can result in insufficient alignment between generated answers and user intent, as well as factual deviations. To address these issues, we propose Co-MedGraphRAG, a novel framework combining knowledge graph reasoning with large–small model collaboration, aimed at improving the structural grounding and interpretability of medical responses. The framework operates through a multi-stage collaborative mechanism to augment question answering. First, a large language model constructs a question-specific knowledge graph (KG) containing pending entities (denoted as “none”) to explicitly define known and unknown variables. Subsequently, a hybrid reasoning strategy is employed to populate the pending entities, thereby completing the question-specific knowledge graph. Finally, this graph serves as critical structured evidence, combined with the original question, to augment the large language model in generating the final answer, implemented using Qwen2.5-7B and GLM4-9B in this paper. To evaluate the generated answers, we introduce a larger-parameter LLM (GPT-4o) to assess performance across five dimensions and compute an overall score. Experiments on three medical datasets demonstrate that Co-MedGraphRAG achieves consistent improvements in relevance, practicality, and structured knowledge support compared with mainstream Retrieval-Augmented Generation (RAG) frameworks. This work serves as a reference for researchers and developers designing medical question-answering frameworks and exploring decision-support applications. Full article
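The "pending entity" mechanism described above can be sketched as a two-step process: build a question-specific graph with unknown slots left empty, then fill them from an external knowledge step before handing the completed graph back as evidence. Everything below is invented for illustration — the triples, the lookup table, and the stubbed LLM step stand in for the framework's actual components.

```python
# Illustrative sketch of the pending-entity pattern (triples and lookup
# are invented): unknown slots are marked None at construction time and
# resolved by a separate reasoning step, stubbed here as a dict lookup.

def build_question_kg(question):
    # Stub for the LLM construction step: known variables are filled in,
    # unknowns are left pending (None).
    return [("metformin", "treats", None),
            ("metformin", "class", "biguanide")]

def fill_pending(kg, knowledge):
    completed = []
    for head, rel, tail in kg:
        if tail is None:
            tail = knowledge.get((head, rel))  # hybrid-reasoning stand-in
        completed.append((head, rel, tail))
    return completed

knowledge = {("metformin", "treats"): "type 2 diabetes"}
kg = fill_pending(build_question_kg("What does metformin treat?"), knowledge)
print(kg[0])  # pending slot now resolved
```

Marking unknowns explicitly, rather than letting the model answer free-form, is what lets the completed graph serve as checkable structured evidence for the final generation step.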
24 pages, 1344 KB  
Systematic Review
Personalised Nutrition in Obesity and Prediabetes: Do Genotypes Matter?
by Magdalena Bossowska, Filip Bossowski, Edyta Adamska-Patruno, Katarzyna Maliszewska and Adam Krętowski
Nutrients 2026, 18(5), 815; https://doi.org/10.3390/nu18050815 (registering DOI) - 2 Mar 2026
Abstract
Background/Objectives: Obesity and prediabetes are overlapping global epidemics. This systematic review synthesises evidence on gene-diet interactions in adults with obesity, prediabetes, or related cardiometabolic risks. It evaluates Mediterranean and DASH dietary patterns, macronutrient quality, and energy restriction across both single-variant and polygenic score approaches. Methods: PubMed was searched for English language papers published in the last 5 years (last run: 31 October 2025). Fewer than 200 studies were retained after excluding those lacking explicit statistical testing for gene-diet interactions or relevant endpoints. Results: Evidence supports restricting saturated fat and preserving carbohydrate quality as general baseline targets, with associations heterogeneous by genotype. Effect modification was observed: healthy dietary patterns were associated with lower risk in high polygenic-risk strata (OR~0.53) but little or no benefit in low-risk groups. TCF7L2 variants were associated with macronutrient thresholds (e.g., protein > 18%, carbohydrate < 48%) affecting visceral adiposity, while APOA2 variants showed genotype-dependent inflammation, including paradoxical increases in markers with higher dietary antioxidant capacity. Interpretation was limited by underpowered interaction tests, multiplicity, and uneven ancestry representation (e.g., unique SLC16A11 and CREBRF signals). Conclusions: While anti-inflammatory dietary substitutions improve biomarkers irrespective of some variants (e.g., TCF7L2), genotype-informed nutrition appears to yield the largest absolute risk reduction in high-risk populations. Clinical implementation should therefore combine baseline diet-quality guidance with targeted strategies for genotype-specific response patterns (e.g., APOA2 antioxidant heterogeneity and TCF7L2 carbohydrate thresholds), rather than rely on uniform recommendations alone. 
Future progress requires preregistered, genotype-stratified trials and locally trained polygenic scores to address ancestry-specific genetic architecture. Full article
24 pages, 1346 KB  
Systematic Review
Artificial Intelligence in Cadastre: A Systematic Review of Methods, Applications, and Trends
by Jingshu Chen, Majid Nazeer, Bo Sum Lee and Man Sing Wong
Land 2026, 15(3), 411; https://doi.org/10.3390/land15030411 - 2 Mar 2026
Abstract
Surveying and registration are core functions of land administration and are essential to socio-economic development, where accuracy and efficiency are critical. To date, customary land surveying and registration have relied on manual work, which undermines efficiency and is prone to errors in data handling. During the last decade, the rapid growth of artificial intelligence (AI), in particular geospatial artificial intelligence (GeoAI), has provided new methodologies that can overcome these deficiencies. This review examines AI in cadastral management by analyzing technical solutions and trends across three areas: data collection, modeling, and common applications. It aims to provide a comprehensive survey of the current use of AI in cadastral management and to define future research avenues. Based on a comprehensive review of the literature, this study reaches three conclusions. (1) Automated extraction of parcel boundaries has been achieved through deep learning in data collection and processing, removing the bottlenecks of manual interpretation. Models such as convolutional neural networks (CNNs) and Transformers have been used for pixel-level semantic segmentation of high-resolution remote sensing images, leading to significant improvements in efficiency and accuracy. (2) Non-spatial data have been processed with natural language processing techniques to automatically extract information and construct relationships, thus overcoming the limitations of paper-based archives and traditional relational databases. (3) Deep learning models have been applied to automatically detect parcel changes and to enable integrated analysis of spatial and non-spatial data, which has supported the transition of cadastral management from two-dimensional to three-dimensional. However, several challenges remain, including differences in multi-temporal data processing, spatial semantic ambiguity, and the lack of large-scale, high-quality annotated data. Future research should focus on improving model generalization, advancing cross-modal data fusion, and providing recommendations for the development of a reliable and practical intelligent cadastral system. Full article
20 pages, 2213 KB  
Article
The Development of a Large Language Model-Powered Chatbot to Advance Fairness in Machine Learning
by Pedro Henrique Ribeiro Santiago, Xiangqun Ju, Xavier Vasquez, Heidi Shen, Lisa Jamieson and Hawazin W. Elani
AI 2026, 7(3), 90; https://doi.org/10.3390/ai7030090 (registering DOI) - 2 Mar 2026
Abstract
Background: Machine learning (ML) has been widely adopted in decision-making, making fairness a central ethical and scientific priority. We developed the Themis chatbot, a Large Language Model (LLM) system designed to explain concepts of ML fairness in an accessible, conversational format. Methods: The development followed four stages: (1) curating a document corpus of 286 peer-reviewed publications on ML fairness; (2) development of Themis by combining a modern LLM (OpenAI’s GPT-4o) with Retrieval Augmented Generation (RAG); (3) creation of a 340-item benchmark dataset, the FairnessQA; and (4) evaluating performance against state-of-the-art non-augmented LLMs (DeepSeek R1, GPT-4o, GPT-5, and Grok 3). Results: For the multiple-choice questions, Themis achieved an accuracy of 96.7%, outperforming DeepSeek R1 (90.0%), GPT-4o (89.3%), GPT-5 (92.0%), and Grok 3 (86.7%), and the overall difference was statistically significant (χ2(4) = 10.1, p = 0.038). In the closed-ended questions, Themis achieved the highest accuracy (96.7%), while competing models ranged from 78.0% to 84.0%, and the overall difference was significant (χ2(4) = 23.9, p < 0.001). In the open-ended questions, Themis achieved the highest mean scores for correctness (M = 4.62), completeness (M = 4.59), and usefulness (M = 4.56), and differences were statistically significant (correctness: F(4, 195) = 20.91, p < 0.001; completeness: F(4, 195) = 7.76, p < 0.001; usefulness: F(4, 195) = 2.90, p < 0.001). By consolidating scattered research into an interactive assistant, Themis makes fairness concepts more accessible to educators, researchers, and policymakers. This work demonstrates that retrieval-augmented systems can enhance the public understanding of machine learning fairness at scale. Full article
18 pages, 5140 KB  
Article
BERT-Based Schema Matching for Integrating Heterogeneous Flood Data: A Case Study in Korea
by Taeyoung Choe, Mincheol Shin, Kwangyoung Kim, Myungseok Yang, Ka Lok Man and Mucheol Kim
Systems 2026, 14(3), 267; https://doi.org/10.3390/systems14030267 - 2 Mar 2026
Abstract
Integrating flood-response datasets across municipalities is often hindered by heterogeneous and non-standard variable names, a challenge amplified in Korean by local naming conventions and linguistic variation. This study addresses scalable schema alignment to standardize municipal flood datasets with reduced manual effort while maintaining semantic consistency for downstream modeling. We propose a BERT-based schema matching framework that augments standardized attribute names with paraphrases generated by a generative language model and filtered to reduce semantic drift. Both standardized and target variable names are encoded using a flood-domain-adapted Korean BERT model, and candidate correspondences are retrieved via cosine-similarity ranking to produce top-k match suggestions for automated or human-in-the-loop alignment. Experiments on real flood-related tables from Busan and Incheon, evaluated jointly to diversify variable expressions, show that augmentation substantially improves top-k retrieval accuracy. In the combined evaluation, Hit@5 improves from 0.71 to 0.95, supporting more reliable schema harmonization for simulation-ready inputs. Full article
(This article belongs to the Section Supply Chain Management)
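The retrieval step described above (encode both name sets, rank standardized names by cosine similarity to each target name, return top-k candidates, score Hit@k) can be sketched in a few lines. The vectors below are toy stand-ins for the flood-domain-adapted Korean BERT encodings, and the variable names are hypothetical, not from the paper's datasets.

```python
# Cosine-similarity top-k schema matching (illustrative sketch).
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k_matches(target_vec, standard_vecs, k=5):
    """Rank standardized attribute names by similarity to a target name."""
    scored = [(name, cosine(target_vec, vec))
              for name, vec in standard_vecs.items()]
    return sorted(scored, key=lambda s: s[1], reverse=True)[:k]

def hit_at_k(ranked, gold):
    """Hit@k: is the gold standardized name among the top-k suggestions?"""
    return any(name == gold for name, _ in ranked)

# Toy embeddings standing in for BERT encodings of standardized names:
standard = {
    "rainfall_mm":   [0.9, 0.1, 0.0],
    "water_level_m": [0.1, 0.9, 0.2],
    "pump_capacity": [0.0, 0.2, 0.9],
}
target = [0.85, 0.15, 0.05]   # toy encoding of a municipal rainfall variable
ranked = top_k_matches(target, standard, k=2)
```

In the paper this ranking feeds either automated alignment or a human-in-the-loop review of the top-k suggestions; the paraphrase augmentation step would add extra encoded variants of each standardized name before ranking.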

19 pages, 18999 KB  
Article
TFS Point-on-Hand Sign Recognition Using Part Affinity Fields
by Jinnavat Sanalohit and Tatpong Katanyukul
Appl. Sci. 2026, 16(5), 2416; https://doi.org/10.3390/app16052416 - 2 Mar 2026
Abstract
Our study investigates the application of a bottom-up design for keypoint regression, Part Affinity Fields (PAFs), to sign language recognition. Automatic sign language recognition could facilitate communication between deaf people and the hearing majority. Sign languages generally employ both semantic and finger-spelling signing. Semantic signing includes acting out to convey meaning, while finger spelling complements signing by spelling out proper names. Specifically, this article addresses an automatic recognition framework for the static point-on-hand (PoH) signing of Thai Finger Spelling (TFS), the finger-spelling part of Thai Sign Language (TSL). From a pattern recognition perspective, PoH signing is quite distinct among signing schemes in its requirement of precise localization of key parts on the signing hands. A recent study addressed PoH using an off-the-shelf version of MediaPipe Hands (MPH) and found shortcomings, particularly when there was a high degree of hand-to-hand interaction; MPH's top-down design was hypothesized to be the culprit. Our study therefore investigates a bottom-up design, PAFs, along with an examination of the related factors. The results support the MPH study's hypothesis that a high degree of hand-to-hand interaction is problematic. However, the overall performance of the PAF-based approach is only modestly effective (72% accuracy vs. 58% and 47% for the MPH- and X-Pose-based approaches), and its generalization is shown to be lacking. Thus, TFS point-on-hand sign recognition remains a challenge. Full article
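The core PAF idea referenced above is that a candidate connection between two detected keypoints is scored by sampling the predicted 2-D vector field along the segment between them and averaging its alignment with the segment's unit direction. The sketch below illustrates that general scoring rule with hand-made field values, not network output, and is not the authors' exact pipeline.

```python
# Part Affinity Field limb scoring (illustrative sketch with toy data).
import math

def paf_score(field, p1, p2, samples=10):
    """Average dot product of field vectors with the p1->p2 unit vector."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy)
    ux, uy = dx / norm, dy / norm          # unit direction of the candidate limb
    total = 0.0
    for i in range(samples):
        t = i / (samples - 1)              # evenly spaced points on the segment
        x = int(round(p1[0] + t * dx))
        y = int(round(p1[1] + t * dy))
        vx, vy = field[y][x]               # field stores one 2-D vector per pixel
        total += vx * ux + vy * uy
    return total / samples

# Toy 5x5 field pointing uniformly in +x: a horizontal candidate limb aligns
# with the field, a vertical one does not.
field = [[(1.0, 0.0) for _ in range(5)] for _ in range(5)]
score_aligned = paf_score(field, (0, 2), (4, 2))     # along the field
score_orthogonal = paf_score(field, (2, 0), (2, 4))  # across the field
```

In a full bottom-up system, these scores decide which detected keypoints get grouped into one hand, which is what distinguishes this design from MPH's top-down detect-then-refine approach.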

19 pages, 1473 KB  
Article
AI-Assisted Analysis of Future-Oriented Discourses: Institutional Narratives and Public Reactions on Social Media
by Galina V. Gradoselskaya, Inga V. Zheltikova, Maria Pilgun, Alexey N. Raskhodchikov and Andrey N. Yazykayev
Journal. Media 2026, 7(1), 49; https://doi.org/10.3390/journalmedia7010049 - 2 Mar 2026
Abstract
This study explores how digital media ecosystems shape collective visions of the future under conditions of rapid technological innovation and the growing influence of artificial intelligence (AI). Drawing on a large corpus of social media content comprising 50,036,592 tokens, the research examines institutional narratives and user-generated responses through a hybrid methodological framework. This framework combines information-wave detection, network analysis, semantic and associative modeling (TextAnalyst 2.32), and interpretation supported by a large language model (GPT-5). The methodological contribution of the study lies in the integration of network-based and semantic algorithms with AI-driven analytical tools for the examination of large-scale textual data. The findings indicate that media discourses about the future operate as key mechanisms through which societies interpret the environmental, social, and economic consequences of technological change. Institutional actors promote multiple future-oriented models that often conflict with one another at both discursive and practical levels. In contrast, user-generated content reflects widespread fear, skepticism, and distrust. Prominent themes include nostalgia for the past, anxiety about socio-economic and environmental consequences, and concerns related to expanding forms of digital control. The analysis also reveals divergent perspectives on urban development. Positive narratives emphasize ecological balance, a comfortable urban environment, thoughtfully designed mixed-use development, and solutions to transportation challenges. Negative narratives, by contrast, focus on over-densification, environmental degradation, and the erosion of privacy in technologically saturated urban spaces. Full article
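The abstract does not specify the information-wave detection algorithm. One simple, common operationalization is to flag days whose post volume spikes beyond a z-score threshold; the sketch below is purely illustrative and should not be read as the authors' method.

```python
# Illustrative "information wave" detector: flag days whose message volume
# exceeds the series mean by more than z_threshold standard deviations.
# This is a generic spike heuristic, not the paper's algorithm.
import math

def detect_waves(daily_counts, z_threshold=2.0):
    n = len(daily_counts)
    mean = sum(daily_counts) / n
    var = sum((c - mean) ** 2 for c in daily_counts) / n
    std = math.sqrt(var)
    return [i for i, c in enumerate(daily_counts)
            if std > 0 and (c - mean) / std > z_threshold]

counts = [40, 38, 45, 42, 300, 41, 39, 44, 43, 40]   # toy daily post counts
waves = detect_waves(counts)                         # flags the spike on day 4
```

Detected wave windows could then be handed to the network and semantic analyses the abstract describes, so that modeling effort concentrates on periods of unusual discursive activity.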
