Search Results (15,867)

Search Parameters:
Keywords = model interpretation

14 pages, 286 KB  
Article
Trusted Yet Flexible: High-Level Runtimes for Secure ML Inference in TEEs
by Nikolaos-Achilleas Steiakakis and Giorgos Vasiliadis
J. Cybersecur. Priv. 2026, 6(1), 23; https://doi.org/10.3390/jcp6010023 (registering DOI) - 27 Jan 2026
Abstract
Machine learning inference is increasingly deployed on shared and cloud infrastructures, where both user inputs and model parameters are highly sensitive. Confidential computing promises to protect these assets using Trusted Execution Environments (TEEs), yet existing TEE-based inference systems remain fundamentally constrained: they rely almost exclusively on low-level, memory-unsafe languages to enforce confinement, sacrificing developer productivity, portability, and access to modern ML ecosystems. At the same time, mainstream high-level runtimes, such as Python, are widely considered incompatible with enclave execution due to their large memory footprints and unsafe model-loading mechanisms that permit arbitrary code execution. To bridge this gap, we present the first Python-based ML inference system that executes entirely inside Intel SGX enclaves while safely supporting untrusted third-party models. Our design enforces standardized, declarative model representations (ONNX), eliminating deserialization-time code execution and confining model behavior through interpreter-mediated execution. The entire inference pipeline (including model loading, execution, and I/O) remains enclave-resident, with cryptographic protection and integrity verification throughout. Our experimental results show that Python incurs modest overheads for small models (≈17%) and outperforms a low-level baseline on larger workloads (97% vs. 265% overhead), demonstrating that enclave-resident high-level runtimes can achieve competitive performance. Overall, our findings indicate that Python-based TEE inference is practical and secure, enabling the deployment of untrusted models with strong confidentiality and integrity guarantees while maintaining developer productivity and ecosystem advantages. Full article
(This article belongs to the Section Security Engineering & Applications)

27 pages, 3594 KB  
Article
Machine Learning-Driven Personalized Risk Prediction: Developing an Explainable Sarcopenia Model for Older European Adults with Arthritis
by Xiao Xu
J. Clin. Med. 2026, 15(3), 1022; https://doi.org/10.3390/jcm15031022 (registering DOI) - 27 Jan 2026
Abstract
Objectives: This study aimed to develop and validate an explainable machine learning (ML) model to predict the risk of sarcopenia in older European adults with arthritis, providing a practical tool for early and precise screening in clinical settings. Methods: We analyzed data from the English Longitudinal Study of Aging (ELSA) and the Survey of Health, Aging and Retirement in Europe (SHARE). The final analysis included 1959 participants aged ≥65 years. The ELSA dataset was divided into a training set (n = 1371) and an internal validation set (n = 588), while the SHARE dataset (n = 1001) served as an independent external test cohort. From an initial pool of 33 variables, nine core predictors were identified using ensemble feature selection techniques. Six ML algorithms were compared, with model performance evaluated using the Area Under the Curve (AUC) and calibration analysis. Model interpretability was enhanced via SHapley Additive exPlanations (SHAP). Results: The Decision Tree model demonstrated the optimal balance between performance and interpretability. It achieved an AUC of 0.921 (95% CI: 0.848–0.988) in the internal validation set and maintained robust generalizability in the external SHARE cohort with an AUC of 0.958 (95% CI: 0.931–0.985). The nine key predictors identified were stroke history, BMI, HDL, loneliness, walking speed, disease duration, age, recall summary score, and total cholesterol. SHAP analysis visualized the specific contribution of these features to individual risk. Conclusions: This study successfully developed a high-performance, explainable, lightweight ML model for sarcopenia risk prediction. By inputting only nine readily available clinical indicators via an online tool, individualized risk assessment can be generated. This facilitates early identification and risk stratification of sarcopenia in older European arthritis patients, thereby providing valuable decision support for implementing precision interventions. Full article

14 pages, 719 KB  
Article
In Vitro Investigation of the PneumoWave Biosensor for the Identification of Central Sleep Apnea in Pediatrics
by Burcu Kolukisa Birgec, Ross Langley, Jennifer Miller, Osian Meredith, Beyza Toprak and Alexander Balfour Mullen
Biosensors 2026, 16(2), 77; https://doi.org/10.3390/bios16020077 (registering DOI) - 27 Jan 2026
Abstract
The interpretation and diagnosis of central sleep apnea in pediatrics by nocturnal polysomnography is challenging due to its technical complexity, which involves the simultaneous recording of multiple physiological parameters related to sleep and wakefulness. Furthermore, the unfamiliar environment of a sleep laboratory can hinder sleep evaluation, and diagnostic backlogs are common due to restricted capacity at specialist tertiary centers. The ability to undertake home sleep studies in a familiar environment using simple, robust, and low-cost technology is attractive. The potential to repurpose the PneumoWave biosensor, a UKCA Class 1 device, registered as an accelerometer-based monitoring device that is intended to capture and store chest motion data continuously over a period of time for retrospective analysis, was explored in an in vitro model of central sleep apnea. The PneumoWave system contains a biosensor (PW010), which was able to record simulated apnea episodes of 5 to 20 s across physiologically relevant pediatric breathing rates using an in vitro manikin model and manual annotation. The findings confirm that the PneumoWave biosensor could be a useful technology to support home sleep apnea testing and warrant further exploration. Full article
(This article belongs to the Section Biosensors and Healthcare)
35 pages, 5590 KB  
Article
Value Positioning and Spatial Activation Path of Modern Chinese Industrial Heritage: Social Media Data-Based Perception Analysis of Huaxin Cement Plant via the Four-Quadrant Model
by Zhengcong Wei, Yongning Xiong and Yile Chen
Buildings 2026, 16(3), 519; https://doi.org/10.3390/buildings16030519 (registering DOI) - 27 Jan 2026
Abstract
Industrial heritage—particularly large modern cement plants—serves as a crucial witness to the architectural and technological evolution of modern urbanization. In Europe, North America, and East Asia, many decommissioned cement factories have been transformed into cultural venues, creative districts, or urban landmarks, while a greater number of sites still face the risks of functional decline and spatial disappearance. In China, early large-scale cement plants have received limited attention in international industrial heritage research, and their conservation and adaptive reuse practices remain underdeveloped. This study takes the Huaxin Cement Plant, founded in 1907, as the research object. As the birthplace of China’s modern cement industry, it preserves the world’s only complete wet-process rotary kiln production line, representing exceptional rarity and typological significance. Combining social media perception analysis with the Hidalgo-Giralt four-quadrant model, the study aims to clarify the plant’s value positioning and propose a design-oriented pathway for spatial activation. Based on 378 short videos and 75,001 words of textual data collected from five major platforms, the study conducts a value-tag analysis of public perceptions across five dimensions—historical, technological, social, aesthetic, and economic. Two composite indicators, Cultural Representativeness (CR) and Utilization Intensity (UI), are further established to evaluate the relationship between heritage value and spatial performance. 
The findings indicate that (1) historical and aesthetic values dominate public perception, whereas social and economic values are significantly underrepresented; (2) the Huaxin Cement Plant falls within the “high cultural representativeness/low utilization intensity” quadrant, revealing concentrated heritage value but insufficient spatial activation; (3) the gap between value cognition and spatial transformation primarily arises from limited public accessibility, weak interpretive narratives, and a lack of immersive experience. In response, the study proposes five optimization strategies: expanding public access, building a multi-layered interpretive system, introducing immersive and interactive design, integrating into the Yangtze River Industrial Heritage Corridor, and encouraging community co-participation. As a representative case of modern Chinese industrial heritage distinguished by its integrity and scarcity, the Huaxin Cement Plant not only enriches the understanding of industrial heritage typology in China but also provides a methodological paradigm for the “value positioning–spatial utilization–heritage activation” framework, bearing both international comparability and disciplinary methodological significance. Full article
27 pages, 1594 KB  
Review
Toward Clinically Dependable AI for Brain Tumors: A Unified Diagnostic–Prognostic Framework and Triadic Evaluation Model
by Mohammed A. Atiea, Mona Gafar, Shahenda Sarhan and Abdullah M. Shaheen
BioMedInformatics 2026, 6(1), 7; https://doi.org/10.3390/biomedinformatics6010007 (registering DOI) - 27 Jan 2026
Abstract
Artificial intelligence (AI) has shown promising performance in brain tumor diagnosis and prognosis; however, most reported advances remain difficult to translate into clinical practice due to limited interpretability, inconsistent evaluation protocols, and weak generalization across datasets and institutions. In this work, we present a critical synthesis of recent brain tumor AI studies (2020–2025) guided by two novel conceptual tools: a unified diagnostic-prognostic framework and a triadic evaluation model emphasizing interpretability, computational efficiency, and generalizability as core dimensions of clinical readiness. Following PRISMA 2020 guidelines, we screened and analyzed over 100 peer-reviewed studies. A structured analysis of reported metrics reveals systematic trends and trade-offs—for instance, between model accuracy and inference latency—rather than providing a direct performance benchmark. This synthesis exposes critical gaps in current evaluation practices, particularly the under-reporting of interpretability validation, deployment-level efficiency, and external generalization. By integrating conceptual structuring with evidence-driven analysis, this work provides a framework for more clinically grounded development and evaluation of AI systems in neuro-oncology. Full article

21 pages, 1967 KB  
Article
Unified Promptable Panoptic Mapping with Dynamic Labeling Using Foundation Models
by Mohamad Al Mdfaa, Raghad Salameh, Geesara Kulathunga, Sergey Zagoruyko and Gonzalo Ferrer
Robotics 2026, 15(2), 31; https://doi.org/10.3390/robotics15020031 - 27 Jan 2026
Abstract
Panoptic maps enable robots to reason about both geometry and semantics. However, open-vocabulary models repeatedly produce closely related labels that split panoptic entities and degrade volumetric consistency. The proposed UPPM advances open-world scene understanding by leveraging foundation models to introduce a panoptic Dynamic Descriptor that reconciles open-vocabulary labels with a unified category structure and geometric size priors. The fusion of such dynamic descriptors is performed within a multi-resolution multi-TSDF map using language-guided open-vocabulary panoptic segmentation and semantic retrieval, resulting in a persistent and promptable panoptic map without additional model training. In our evaluation experiments, UPPM shows the best overall performance in terms of map reconstruction accuracy and panoptic segmentation quality. The ablation study investigates the contribution of each component of UPPM (custom NMS, blurry-frame filtering, and unified semantics) to the overall system performance. Consequently, UPPM preserves open-vocabulary interpretability while delivering strong geometric and panoptic accuracy. Full article
(This article belongs to the Section AI in Robotics)

45 pages, 1232 KB  
Review
Predicting Intrapartum Acidemia: A Review of Approaches Based on Fetal Heart Rate
by Gabriele Varisco, Giulio Steyde, Elisabetta Peri, Iris Hoogendoorn, Maria G. Signorini, Judith O. E. H. van Laar, Massimo Mischi and Marieke B. van der Hout-van der Jagt
Bioengineering 2026, 13(2), 146; https://doi.org/10.3390/bioengineering13020146 - 27 Jan 2026
Abstract
Fetal acidemia, caused by impaired gas exchange between the fetus and the mother, is a leading cause of stillbirth and neurologic complications. Early prediction is therefore essential to guide timely clinical intervention. Several strategies rely on cardiotocography (CTG), which combines fetal heart rate (fHR) with uterine contractions and has led to the development of clinical guidelines for CTG interpretation and the introduction of different fHR features. Additionally, ST event analysis, investigating changes in the ST segments of the fetal electrocardiogram (fECG), has been proposed as a complementary tool. This narrative review adopts a systematic approach, with comprehensive searches in Embase and PubMed to ensure full coverage of the available literature, and summarizes findings from 30 studies. Clinical guidelines for CTG interpretation frequently lead to intermediate risk-level annotations, leaving the final decision regarding fetal management to clinical experience. In contrast, various fHR features can successfully discriminate between fetuses developing acidemia and healthy controls. Evidence regarding the added value of ST events derived from the scalp electrode remains conflicting, due to concerns about invasiveness. Recent studies on machine learning models highlight their ability to integrate multiple fHR features and improve predictive performance, suggesting a promising direction for enhancing acidemia prediction during labor. Full article

27 pages, 1633 KB  
Review
Transformer Models, Graph Networks, and Generative AI in Gut Microbiome Research: A Narrative Review
by Yan Zhu, Yiteng Tang, Xin Qi and Xiong Zhu
Bioengineering 2026, 13(2), 144; https://doi.org/10.3390/bioengineering13020144 - 27 Jan 2026
Abstract
Background: The rapid advancement in artificial intelligence (AI) has fundamentally reshaped gut microbiome research by enabling high-resolution analysis of complex, high-dimensional microbial communities and their functional interactions with the human host. Objective: This narrative review aims to synthesize recent methodological advances in AI-driven gut microbiome research and to evaluate their translational relevance for therapeutic optimization, personalized nutrition, and precision medicine. Methods: A narrative literature review was conducted using PubMed, Google Scholar, Web of Science, and IEEE Xplore, focusing on peer-reviewed studies published between approximately 2015 and early 2025. Representative articles were selected based on relevance to AI methodologies applied to gut microbiome analysis, including machine learning, deep learning, transformer-based models, graph neural networks, generative AI, and multi-omics integration frameworks. Additional seminal studies were identified through manual screening of reference lists. Results: The reviewed literature demonstrates that AI enables robust identification of diagnostic microbial signatures, prediction of individual responses to microbiome-targeted therapies, and design of personalized nutritional and pharmacological interventions using in silico simulations and digital twin models. AI-driven multi-omics integration—encompassing metagenomics, metatranscriptomics, metabolomics, proteomics, and clinical data—has improved functional interpretation of host–microbiome interactions and enhanced predictive performance across diverse disease contexts. For example, AI-guided personalized nutrition models have achieved AUC exceeding 0.8 for predicting postprandial glycemic responses, while community-scale metabolic modeling frameworks have accurately forecast individualized short-chain fatty acid production. 
Conclusions: Despite substantial progress, key challenges remain, including data heterogeneity, limited model interpretability, population bias, and barriers to clinical deployment. Future research should prioritize standardized data pipelines, explainable and privacy-preserving AI frameworks, and broader population representation. Collectively, these advances position AI as a cornerstone technology for translating gut microbiome data into actionable insights for diagnostics, therapeutics, and precision nutrition. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Complex Diseases)

20 pages, 1319 KB  
Article
Complexity and Persistence of Electrical Brain Activity Estimated by Higuchi Fractal Dimension
by Pierpaolo Croce and Filippo Zappasodi
Fractal Fract. 2026, 10(2), 88; https://doi.org/10.3390/fractalfract10020088 - 27 Jan 2026
Abstract
Brain electrical activity, as recorded through electroencephalography (EEG), displays scale-free temporal fluctuations indicative of fractal behavior and complex dynamics. This study explores the use of the Higuchi Fractal Dimension (HFD) as a proxy of two complementary aspects of EEG temporal organization: local signal irregularity, interpreted within a Kolmogorov-type framework, and persistence related to temporal structure, associated with statistical complexity. The latter can be used to evidence persistence in the EEG signal, serving as an alternative to previously used approaches for estimating the Hurst exponent. Thirty-eight healthy participants underwent resting-state EEG recordings in open- and closed-eyes conditions. HFD was computed for the original signals to assess Kolmogorov complexity and for the signals’ cumulative envelopes to evaluate statistical complexity and, consequently, persistence. The results confirmed that HFD values align with theoretical expectations: higher for random noise in the Kolmogorov model (~2) and lower in the statistical model (~1.5). EEG data showed condition-dependent and topographically specific variations in HFD, with parieto-occipital regions exhibiting greater complexity and persistence. The HFD values in the statistical model fall within the 1–1.5 range, indicating long-term correlation. These findings support HFD as a reliable tool for assessing both the local roughness and global temporal structure of brain activity, with implications for physiological modeling and clinical applications. Full article
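The abstract above rests on the Higuchi Fractal Dimension. A minimal sketch of the standard Higuchi estimator follows (not the authors' implementation; the function name and the default `kmax` are illustrative): for each scale k, the mean normalized curve length L(k) is computed over k decimated sub-series, and the HFD is the slope of log L(k) against log(1/k).

```python
import numpy as np

def higuchi_fd(x, kmax=10):
    """Estimate the Higuchi Fractal Dimension of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_inv_k, log_l = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                     # k decimated sub-series
            idx = np.arange(m, n, k)           # x[m], x[m+k], x[m+2k], ...
            if len(idx) < 2:
                continue
            dist = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)   # Higuchi normalization
            lengths.append(dist * norm / k)
        log_inv_k.append(np.log(1.0 / k))
        log_l.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_inv_k, log_l, 1)      # HFD = regression slope
    return slope
```

Consistent with the theoretical values quoted in the abstract, white noise yields an HFD near 2 and a smooth deterministic trend an HFD near 1.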

15 pages, 1728 KB  
Article
Reframing BIM: Toward Epistemic Resilience in Existing-Building Representation
by Ciera Hanson, Xiaotong Liu and Mike Christenson
Infrastructures 2026, 11(2), 40; https://doi.org/10.3390/infrastructures11020040 - 27 Jan 2026
Abstract
Conventional uses of building information modeling (BIM) in existing-building representation tend to prioritize geometric consistency and efficiency, but often at the expense of interpretive depth. This paper challenges BIM’s tendency to promote epistemic closure by proposing a method to foreground relational ambiguity, transforming view reconciliation from a default automated process into a generative act of critical inquiry. The method, implemented in Autodesk Revit, introduces a parametric reference frame within BIM sheets that foregrounds and manipulates reciprocal relationships between orthographic views (e.g., plans and sections) to promote interpretive ambiguity. Through a case study, the paper demonstrates how parameterized view relationships can resist oversimplification and encourage conflicting interpretations. By intentionally sacrificing efficiency for epistemic resilience, the method aims to expand BIM’s role beyond documentation, positioning it as a tool for architectural knowledge production. The paper concludes with implications for software development, pedagogy, and future research at the intersection of critical representation and computational tools. Full article
(This article belongs to the Special Issue Modern Digital Technologies for the Built Environment of the Future)

36 pages, 6008 KB  
Article
Continuous Authentication Through Touch Stroke Analysis with Explainable AI (xAI)
by Muhammad Nadzmi Mohd Nizam, Shih Yin Ooi, Soodamani Ramalingam and Ying Han Pang
Electronics 2026, 15(3), 542; https://doi.org/10.3390/electronics15030542 - 27 Jan 2026
Abstract
Mobile authentication is crucial for device security; however, conventional techniques such as PINs and swipe patterns are susceptible to social engineering attacks. This work explores the integration of touch stroke analysis and Explainable AI (xAI) to address these vulnerabilities. Unlike static methods that require intervention at specific intervals, continuous authentication offers dynamic security by utilizing distinct user touch dynamics. This study aggregates touch stroke data from 150 participants to create comprehensive user profiles, incorporating novel biometric features such as mid-stroke pressure and mid-stroke area. These profiles are analyzed using machine learning methods, where the Random Tree classifier achieved the highest accuracy of 97.07%. To enhance interpretability and user trust, xAI methods such as SHAP and LIME are employed to provide transparency into the models’ decision-making processes, demonstrating how integrating touch stroke dynamics with xAI produces a visible, trustworthy, and continuous authentication system. Full article

18 pages, 758 KB  
Article
An Adaptive Task Difficulty Model for Personalized Reading Comprehension in AI-Based Learning Systems
by Aray M. Kassenkhan, Mateus Mendes and Akbayan Bekarystankyzy
Algorithms 2026, 19(2), 100; https://doi.org/10.3390/a19020100 - 27 Jan 2026
Abstract
This article proposes an interpretable adaptive control model for dynamically regulating task difficulty in Artificial intelligence (AI)-augmented reading-comprehension learning systems. The model adjusts, on the fly, the level of task complexity associated with reading comprehension and post-text analytical tasks based on learner performance, with the objective of maintaining an optimal difficulty level. Grounded in adaptive control theory and learning theory, the proposed algorithm updates task difficulty according to the deviation between observed learner performance and a predefined target mastery rate, modulated by an adaptivity coefficient. A simulation study involving heterogeneous learner profiles demonstrates stable convergence behavior and a strong positive correlation between task difficulty and learning performance (r = 0.78). The results indicate that the model achieves a balanced trade-off between learner engagement and cognitive load while maintaining low computational complexity, making it suitable for real-time integration into intelligent learning environments. The proposed approach contributes to AI-supported education by offering a transparent, control-theoretic alternative to heuristic difficulty adjustment mechanisms commonly used in e-learning systems. Full article
(This article belongs to the Section Algorithms for Multidisciplinary Applications)
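The update rule the abstract above describes, where task difficulty changes with the deviation between observed performance and a target mastery rate, scaled by an adaptivity coefficient, can be sketched as a proportional controller. The names, default values, and [0, 1] clamping range below are assumptions, not the paper's notation:

```python
def update_difficulty(d, performance, target=0.7, gain=0.2,
                      d_min=0.0, d_max=1.0):
    """One proportional update step: raise difficulty when observed
    performance exceeds the target mastery rate, lower it when it
    falls short, and clamp the result to the allowed range."""
    d_next = d + gain * (performance - target)
    return max(d_min, min(d_max, d_next))

# Toy learner whose success rate declines as difficulty rises (an
# assumption for illustration); iterating drives difficulty toward
# the level at which performance matches the target.
d = 0.2
for _ in range(50):
    success_rate = 1.0 - 0.6 * d
    d = update_difficulty(d, success_rate)
```

With these toy numbers the fixed point is where 1 − 0.6d = 0.7, i.e. d = 0.5, illustrating the stable convergence behavior the simulation study reports.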

30 pages, 612 KB  
Article
A KNN-Based Bilingual Book Recommendation System with Gamification and Learning Analytics
by Aray Kassenkhan
Information 2026, 17(2), 120; https://doi.org/10.3390/info17020120 - 27 Jan 2026
Abstract
The article reports on a bilingual and interpretable book recommendation platform for schoolchildren. This platform uses a lightweight K-Nearest Neighbors algorithm combined with gamification and learning analytics. This application has been designed for a bilingual learning environment in Kazakhstan, supporting learning in the Kazakh and Russian languages, and is intended to improve reading engagement through culturally adjusted personalization. The recommendation engine combines content-based and collaborative filtering in that it leverages structured book data (genres, target age ranges, authors, languages, and semantics) and learner attributes (language of instruction, preferences, and learner history). A hybrid ranking function combines the similarity to the user and the item similarity to produce top-N recommendations, whereas gamification elements (points, achievements, and reading challenges) are used to foster sustained activity. Teacher dashboards show learners' overall reading activity and progress through real-time data visualization. The initial calibration of the model was carried out using an open-source book collection consisting of 5197 items. Thereafter, the model was adapted for a curated bilingual collection of 600 books intended for use in educational institutions in the Kazakh and Russian languages. The validation experiment was carried out as a pilot test involving 156 children. The experimental outcome suggests a stable level of recommendation quality, with Precision@10 and Recall@10 values of 0.71 and 0.63, respectively. The computational complexity remained low. Moreover, the bilingual normalization technique increased the relevance of recommendations of non-majority-language items by 12.4%. In conclusion, the proposed approach presents a scalable and transparent framework for AI-assisted reading personalization in bilingual e-learning systems. Future research will focus on transparent recommendation interfaces and more adaptive learner modeling.
Full article
(This article belongs to the Special Issue Trends in Artificial Intelligence-Supported E-Learning)
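The hybrid top-N ranking and Precision@10/Recall@10 evaluation described in the abstract can be sketched as follows. This is a minimal illustration, not the platform's actual code: the function names (`hybrid_rank`, `precision_recall_at_k`), the blending weight `alpha`, and the toy feature vectors are all assumptions introduced here.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def hybrid_rank(user_profile, item_features, liked_items, alpha=0.6, n=10):
    """Score every item as a weighted blend of content similarity
    (user profile vs. item feature vector) and item-item similarity
    to books the user already liked, then return the top-n unseen items."""
    content = np.array([cosine(user_profile, f) for f in item_features])
    collab = np.array([
        max((cosine(f, item_features[j]) for j in liked_items), default=0.0)
        for f in item_features
    ])
    scores = alpha * content + (1 - alpha) * collab
    ranked = np.argsort(scores)[::-1]
    return [i for i in ranked if i not in liked_items][:n]

def precision_recall_at_k(recommended, relevant, k=10):
    """Standard Precision@k / Recall@k over a ranked recommendation list."""
    hits = len(set(recommended[:k]) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall
```

A real system would compute the collaborative term from co-interaction data rather than item features; the max-over-liked-items form here is just one simple choice.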

18 pages, 11087 KB  
Article
GWAS and Machine Learning Screening of Genomic Determinants Underlying Host Adaptation in Swine and Chicken Salmonella Typhimurium Isolates
by Yifan Liu, Yuhao Wang, Yaxi Wang, Xiao Liu, Shuang Wang, Yao Peng, Ziyu Liu, Zhenpeng Li, Xin Lu and Biao Kan
Microorganisms 2026, 14(2), 293; https://doi.org/10.3390/microorganisms14020293 - 27 Jan 2026
Abstract
Salmonella Typhimurium is a major zoonotic pathogen, with pigs and chickens serving as key reservoirs for human infection, yet the genomic determinants of its host adaptation remain incompletely understood. This study integrated comparative genomics, genome-wide association studies (GWASs), and interpretable machine learning across 1654 high-quality genomes of swine- and chicken-origin S. Typhimurium isolates to identify host-associated genetic features. Phylogenetic analysis revealed host-preferred lineages and significantly lower genetic diversity within chicken-adapted subpopulations. Meta-analysis identified distinct host-associated profiles of antimicrobial resistance genes (e.g., a higher prevalence of floR and blaTEM-1 in swine) and virulence factors (e.g., enrichment of allB and the yersiniabactin system in chickens). The GWASs pinpointed 1878 host-associated genes and multiple SNPs/indels, functionally enriched in metabolism, regulation, and cellular processes. A two-stage Random Forest model built on the most discriminative features accurately distinguished swine from chicken origins (AUC = 0.974). These findings systematically reveal the genomic signatures of host adaptation in S. Typhimurium and provide a prioritized set of candidate markers for experimental validation. Full article
(This article belongs to the Section Food Microbiology)
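The two-stage Random Forest procedure described in the abstract — rank features by importance in a first fit, then refit on the top-ranked subset and evaluate by AUC — can be sketched as below. This is not the authors' pipeline: the synthetic matrix stands in for their gene presence/absence data, and the feature-count cutoff of 30 is an arbitrary assumption.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a genomes-by-genes matrix with a binary host label.
X, y = make_classification(n_samples=600, n_features=200, n_informative=15,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=42)

# Stage 1: fit on all features, rank by impurity-based importance.
stage1 = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
top = np.argsort(stage1.feature_importances_)[::-1][:30]

# Stage 2: refit on the top-ranked features only, score on held-out genomes.
stage2 = RandomForestClassifier(n_estimators=200, random_state=42).fit(
    X_tr[:, top], y_tr)
auc = roc_auc_score(y_te, stage2.predict_proba(X_te[:, top])[:, 1])
print(f"Held-out AUC: {auc:.3f}")
```

With real genomic data, importance estimates are usually stabilized (e.g., via permutation importance or cross-validation) before the second stage.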

23 pages, 653 KB  
Article
From Access to Impact: A Three-Level Model of ICT Use, Digital Feedback, and Students’ Achievement in Lithuanian Schools
by Julija Melnikova, Sigitas Balčiūnas, Eglė Pranckūnienė and Liudmila Rupšienė
Educ. Sci. 2026, 16(2), 193; https://doi.org/10.3390/educsci16020193 - 27 Jan 2026
Abstract
This study develops and validates a three-level model of digital learning conditions that traces the progression from ICT accessibility (“access”) through pedagogical use (“use”) to student learning outcomes (“impact”). Drawing on secondary analysis of the PISA 2022 ICT Familiarity Questionnaire and applying complex-sample regression together with the logic of structural equation modelling (SEM), the study examines how ICT resources, usage practices, and digital feedback (ICTFEED) interact and how they are associated with Lithuanian fifteen-year-olds’ achievement in mathematics, reading, and science. The three-level model comprises (1) ICT infrastructure: access to technology at home and at school and students’ perceived quality of technological resources; (2) ICT learning practices: use of digital tools in subject lessons, inquiry-based activities, and school-related work outside the classroom; and (3) digital feedback and its relationship with academic achievement. Results show that neither home nor school ICT availability predicts students’ experience of receiving digital feedback; the only significant infrastructure-level predictor is the perceived quality of school ICT resources (ICTQUAL). Digital feedback is most strongly predicted by ICT use in inquiry-based learning and by ICT-supported schoolwork outside the classroom, whereas ICT use in subject lessons has only a minimal effect. Across all domains, digital feedback is negatively associated with student achievement, even when ICT access, resource quality, learning-use variables, and digital leisure are controlled for. This pattern suggests that ICTFEED functions primarily as a compensatory mechanism, being used more frequently with lower-achieving students rather than serving as a direct enhancer of academic performance. The proposed three-level model offers a structured framework for interpreting students’ digital learning experiences and highlights the key components of school ICT ecosystems that shape digital assessment practices and learning outcomes. Full article
(This article belongs to the Section Technology Enhanced Education)
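The compensatory-mechanism pattern the abstract describes — a negative regression coefficient on digital feedback once other ICT variables are controlled for — can be illustrated with a toy OLS fit. Everything here is invented for illustration: the data-generating process (feedback targeted at lower achievers) and the variable names are assumptions mirroring the study's constructs, not PISA data or the authors' complex-sample estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical variables mirroring the study's constructs.
ictqual = rng.normal(size=n)        # perceived quality of school ICT
inquiry_use = rng.normal(size=n)    # ICT use in inquiry-based learning
achievement = rng.normal(size=n)

# Compensatory story: weaker students receive more digital feedback.
ictfeed = 0.3 * inquiry_use - 0.5 * achievement + rng.normal(scale=0.5, size=n)

# OLS: achievement ~ intercept + ICTFEED + ICTQUAL + inquiry use.
X = np.column_stack([np.ones(n), ictfeed, ictqual, inquiry_use])
beta, *_ = np.linalg.lstsq(X, achievement, rcond=None)
print(f"ICTFEED coefficient: {beta[1]:.3f}")  # negative under this story
```

The point of the sketch is interpretive: a negative coefficient on ICTFEED is exactly what feedback-as-remediation produces, even with no causal harm from feedback itself.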
