Information, Volume 17, Issue 1 (January 2026) – 111 articles

Cover Story: This paper explores the novel application of large language models (LLMs) as evaluators for structured scientific summaries—a task where traditional natural language evaluation metrics may not readily apply. Leveraging the Open Research Knowledge Graph (ORKG) as a repository of human-curated properties, we augment a gold-standard dataset by generating corresponding properties using three distinct LLMs—Llama, Mistral, and Qwen—under three contextual settings: context-lean (research problem only), context-rich (research problem with title and abstract), and context-dense (research problem with multiple similar papers). To assess the quality of these properties, we employ LLM evaluators (Deepseek, Mistral, and Qwen) to rate them on criteria including similarity, relevance, factuality, informativeness, coherence, and specificity.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open it.
17 pages, 6316 KB  
Article
Research on a Lightweight Real-Time Facial Expression Recognition System Based on an Improved Mini-Xception Algorithm
by Xuchen Sun, Jianfeng Yang and Yi Zhou
Information 2026, 17(1), 111; https://doi.org/10.3390/info17010111 - 22 Jan 2026
Viewed by 125
Abstract
This paper proposes a lightweight facial expression recognition model based on an improved Mini-Xception algorithm to address the issue of deploying existing models on resource-constrained devices. The model achieves lightweight facial expression recognition, particularly for elder-oriented applications, by introducing depthwise separable convolutions, residual connections, and a four-class expression reconstruction. These designs significantly reduce the number of parameters and computational complexity while maintaining high accuracy. The model achieves an accuracy of 79.96% on the FER2013 dataset, outperforming various other popular models, and enables efficient real-time inference in standard CPU environments.
(This article belongs to the Section Artificial Intelligence)
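For orientation, here is a minimal PyTorch sketch of the two ingredients the abstract credits for the model's efficiency, a depthwise separable convolution plus a residual connection; the layer sizes and the 1x1 shortcut are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative sketch (not the authors' code): a depthwise separable
# convolution block with a residual connection.
import torch
import torch.nn as nn

class SeparableResidualBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            # Depthwise: one 3x3 filter per input channel (groups=in_ch).
            nn.Conv2d(in_ch, in_ch, kernel_size=3, padding=1, groups=in_ch, bias=False),
            # Pointwise: 1x1 convolution mixes channels cheaply.
            nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # 1x1 shortcut so the residual addition matches shapes.
        self.skip = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x) + self.skip(x)

block = SeparableResidualBlock(32, 64)
print(block(torch.randn(1, 32, 48, 48)).shape)  # torch.Size([1, 64, 48, 48])
```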

17 pages, 1377 KB  
Article
GemSP: An Ensemble Model for User Story Point Estimation Using Gemini Embeddings
by Imad Moufidi, Safaa Achour and Mohammed Benattou
Information 2026, 17(1), 110; https://doi.org/10.3390/info17010110 - 22 Jan 2026
Viewed by 87
Abstract
Accurately estimating story points in Agile Scrum environments remains a challenging task, as traditional models often struggle to capture the complex relationships between user stories and their corresponding effort estimations. In this study, we leverage Gemini’s embedding representations to enhance the modeling of user stories within a story point estimation dataset. To improve prediction performance, we propose GemSP, an ensemble regression model that integrates two complementary regression techniques applied to the Gemini embeddings. Our approach aims to exploit the rich semantic representations of user stories while benefiting from the robustness of ensemble learning. Experimental results show that, when instantiated with Gemini embeddings, the proposed GemSP framework achieves lower prediction error than selected baseline models (GPT-2, Deep-SE, and GPT2SP) under cross-project evaluation on JIRA datasets. These results illustrate the practical benefit of decoupling semantic representation learning from regression, enabling effective integration of stronger embedding models within lightweight ensemble predictors.
(This article belongs to the Special Issue Using Generative Artificial Intelligence Within Software Engineering)
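A hedged sketch of the two-stage idea: precomputed text embeddings feed an ensemble of two regressors. Ridge regression and gradient boosting are stand-ins chosen for illustration; the abstract does not name GemSP's actual base learners.

```python
# Sketch: ensemble regression over precomputed embeddings (stand-in data).
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, VotingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 768))                              # stand-in for Gemini embeddings
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=500)   # toy story-point target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ensemble = VotingRegressor([
    ("ridge", Ridge(alpha=1.0)),
    ("gbr", GradientBoostingRegressor(random_state=0)),
])
ensemble.fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, ensemble.predict(X_te)))
```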

21 pages, 1683 KB  
Article
Method of Estimating Wave Height from Radar Images Based on Genetic Algorithm Back-Propagation (GABP) Neural Network
by Yang Meng, Jinda Wang, Zhanjun Tian, Fei Niu and Yanbo Wei
Information 2026, 17(1), 109; https://doi.org/10.3390/info17010109 - 22 Jan 2026
Viewed by 94
Abstract
In the domain of marine remote sensing, the real-time monitoring of ocean waves is a research hotspot that employs acquired X-band radar images to retrieve wave information. To enhance the accuracy of the classical spectrum method using the signal-to-noise ratio (SNR) extracted from an image sequence, data from the preferred analysis area around the upwind direction are required. Additionally, the accuracy requires further improvement in cases of low wind speed and swell. For shore-based radar, access to the preferred analysis area cannot be guaranteed in practice, which limits the measurement accuracy of the spectrum method. In this paper, a method using extracted SNRs and an optimized genetic algorithm back-propagation (GABP) neural network model is proposed to enhance the inversion accuracy of significant wave height. The SNRs extracted from multiple selected analysis regions, the included angles, and the wind speed are employed to construct a feature vector as the input of the GABP neural network. Because the relationship between wave height and the SNR derived from radar images is not strictly linear, the GABP network model is used to fit it. Compared with the classical SNR-based method, the correlation coefficient using the GABP neural network is improved by 0.14, and the root mean square error is reduced by 0.20 m.
(This article belongs to the Section Information Processes)
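The GABP idea in miniature: a genetic search over candidate initializations is followed by back-propagation fine-tuning. The sketch below uses a toy fitness step (selection only, without crossover or mutation) and invented feature sizes.

```python
# Sketch of GABP: a genetic stage picks good initial weights for a small
# back-propagation network, which gradient descent then fine-tunes.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(6, 16), nn.Tanh(), nn.Linear(16, 1))

X = torch.randn(256, 6)                 # [SNR_1..4, included angle, wind speed]
y = X[:, :4].mean(dim=1, keepdim=True)  # toy wave-height target
loss_fn = nn.MSELoss()

# Genetic stage: evaluate a population of random initializations and keep the
# fittest (lowest loss); a full GA would also apply crossover and mutation.
population = [make_net() for _ in range(20)]
best = min(population, key=lambda net: loss_fn(net(X), y).item())

# Back-propagation stage: refine the selected individual.
opt = torch.optim.Adam(best.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(best(X), y)
    loss.backward()
    opt.step()
print("final MSE:", loss.item())
```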

26 pages, 1051 KB  
Article
Neural Signatures of Speed and Regular Reading: A Machine Learning and Explainable AI (XAI) Study of Sinhalese and Japanese
by Thishuli Walpola, Namal Rathnayake, Hoang Ngoc Thanh, Niluka Dilhani and Atsushi Senoo
Information 2026, 17(1), 108; https://doi.org/10.3390/info17010108 - 21 Jan 2026
Viewed by 131
Abstract
Reading speed is hypothesized to have distinct neural signatures across orthographically diverse languages, yet cross-linguistic evidence remains limited. We investigated this by classifying speed readers versus regular readers among Sinhalese and Japanese adults (n = 142) using task-based fMRI and 35 supervised machine learning classifiers. Functional activation was extracted from 12 reading-related cortical regions. We introduced Fuzzy C-Means (FCM) clustering for data augmentation and Shapley additive explanations (SHAP) for model interpretability, enabling evaluation of region-wise contributions to reading speed classification. The best model, an FT-TABPFN network with FCM augmentation, achieved 81.1% test accuracy in the Combined cohort. In the Japanese-only cohort, Quadratic SVM and Subspace KNN each reached 85.7% accuracy. SHAP analysis revealed that the angular gyrus (AG) and inferior frontal gyrus (triangularis) were the strongest contributors across cohorts. Additionally, the anterior supramarginal gyrus (ASMG) emerged as a higher contributor in the Japanese-only cohort, while the posterior superior temporal gyrus (PSTG) contributed strongly to both cohorts separately. However, the posterior middle temporal gyrus (PMTG) showed little or no contribution to model classification in either cohort. These findings demonstrate the effectiveness of interpretable machine learning for decoding reading speed, highlighting both universal neural predictors and language-specific differences. Our study provides a novel, generalizable framework for cross-linguistic neuroimaging analysis of reading proficiency.
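A minimal sketch of the interpretability step, assuming a scikit-learn classifier and the shap package; the region names, data, and model below are placeholders rather than the study's pipeline.

```python
# Sketch: mean |SHAP| per brain region as a region-contribution estimate.
import numpy as np
import shap
from sklearn.svm import SVC

rng = np.random.default_rng(0)
regions = ["AG", "IFG_tri", "ASMG", "PSTG", "PMTG"]  # subset, for illustration
X = rng.normal(size=(142, len(regions)))             # activation per region
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # speed vs. regular reader

model = SVC(kernel="rbf").fit(X, y)
# KernelExplainer over the decision function (single output for binary SVC).
explainer = shap.KernelExplainer(model.decision_function, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:10])          # shape (10, n_regions)

importance = np.abs(shap_values).mean(axis=0)        # mean |SHAP| per region
for name, imp in zip(regions, importance):
    print(f"{name}: {imp:.3f}")
```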

18 pages, 587 KB  
Article
Bridging the Engagement–Regulation Gap: A Longitudinal Evaluation of AI-Enhanced Learning Attitudes in Social Work Education
by Duen-Huang Huang and Yu-Cheng Wang
Information 2026, 17(1), 107; https://doi.org/10.3390/info17010107 - 21 Jan 2026
Viewed by 116
Abstract
The rapid adoption of generative artificial intelligence (AI) in higher education has intensified a pedagogical dilemma: while AI tools can increase immediate classroom engagement, they do not necessarily foster the self-regulated learning (SRL) capacities required for ethical and reflective professional practice, particularly in human-service fields. In this two-time-point, pre-post cohort-level (repeated cross-sectional) evaluation, we examined a six-week AI-integrated curriculum incorporating explicit SRL scaffolding among social work undergraduates at a Taiwanese university (pre-test N = 37; post-test N = 35). Because the surveys were administered anonymously and individual responses could not be linked across time, pre-post comparisons were conducted at the cohort level using independent samples. The participating students completed the AI-Enhanced Learning Attitude Scale (AILAS), a 30-item instrument grounded in the Technology Acceptance Model, Attitude Theory, and SRL frameworks, assessing six dimensions of AI-related learning attitudes. Prior pilot evidence suggested an engagement–regulation gap, characterized by relatively strong learning process engagement but weaker learning planning and learning habits. Accordingly, the curriculum incorporated weekly goal-setting activities, structured reflection tasks, peer accountability mechanisms, explicit instructor modeling of SRL strategies, and simple progress tracking tools. The psychometric analyses demonstrated excellent internal consistency for the total scale at the post-test stage (Cronbach’s α = 0.95). Independent-samples t-tests indicated that, at the post-test stage, the cohort reported higher mean scores across most dimensions, with the largest cohort-level differences in Learning Habits (Cohen’s d = 0.75, p = 0.003) and Learning Process (Cohen’s d = 0.79, p = 0.002). After Bonferroni adjustment, improvements in the Learning Desire, Learning Habits, and Learning Process dimensions and the Overall Attitude scores remained statistically robust. In contrast, the Learning Planning dimension demonstrated only marginal improvement (d = 0.46, p = 0.064), suggesting that higher-order planning skills may require longer or more sustained instructional support. No statistically significant gender differences were identified at the post-test stage. Taken together, the findings offer preliminary, design-consistent evidence that SRL-oriented pedagogical scaffolding, rather than AI technology itself, may help narrow the engagement–regulation gap, while the consolidation of autonomous planning capacities remains an ongoing instructional challenge.
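The cohort-level statistics reduce to a few lines of SciPy; the sketch below shows the independent-samples t-test, pooled-SD Cohen's d, and Bonferroni threshold on simulated data (the group means and SDs are invented).

```python
# Worked sketch: independent-samples t-test, Cohen's d, Bonferroni threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.normal(3.5, 0.6, 37)    # simulated pre-test cohort, N = 37
post = rng.normal(3.9, 0.6, 35)   # simulated post-test cohort, N = 35

t, p = stats.ttest_ind(pre, post)
pooled_sd = np.sqrt(((len(pre) - 1) * pre.std(ddof=1) ** 2 +
                     (len(post) - 1) * post.std(ddof=1) ** 2) /
                    (len(pre) + len(post) - 2))
d = (post.mean() - pre.mean()) / pooled_sd

n_tests = 6  # one comparison per AILAS dimension
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}, "
      f"Bonferroni-adjusted alpha = {0.05 / n_tests:.4f}")
```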

24 pages, 3185 KB  
Article
A Hybrid Optimization Approach for Multi-Generation Intelligent Breeding Decisions
by Mingxiang Yang, Ziyu Li, Jiahao Li, Bingling Huang, Xiaohui Niu, Xin Lu and Xiaoxia Li
Information 2026, 17(1), 106; https://doi.org/10.3390/info17010106 - 20 Jan 2026
Viewed by 177
Abstract
Multi-generation intelligent breeding (MGIB) decision-making is a technique used by plant breeders to select mating individuals to produce new generations and to allocate resources for each generation. However, existing research on the dynamic optimization of resources under limited budget and time constraints remains scarce. Inspired by advances in reinforcement learning (RL), a framework that integrates evolutionary algorithms with deep RL is proposed to fill this gap. The framework combines two modules: an Improved Look-Ahead Selection (ILAS) module and a Deep Q-Network (DQN) module. The former employs a simulated annealing-enhanced estimation of distribution algorithm to make mating decisions. Based on the selected mating individuals, the latter module learns multi-generation resource allocation policies using DQN. To evaluate our framework, numerical experiments were conducted on two realistic breeding datasets, i.e., Corn2019 and CUBIC. ILAS outperformed LAS on Corn2019, increasing the maximum and mean population Genomic Estimated Breeding Value (GEBV) by 9.1% and 7.7%, respectively. ILAS-DQN consistently outperformed the baseline methods, achieving significant and practical improvements in both top-performing and elite-average GEBVs across two independent datasets. The results demonstrate that our method outperforms traditional baselines in both generalization and effectiveness for complex agricultural problems with delayed rewards.
(This article belongs to the Section Artificial Intelligence)
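For readers unfamiliar with DQN, here is a minimal temporal-difference update of the kind the resource-allocation module performs; state and action dimensions are placeholders, not the paper's breeding-specific design.

```python
# Sketch: one DQN update step for a resource-allocation policy.
import torch
import torch.nn as nn

state_dim, n_actions = 8, 4   # e.g., budget left, generation index, GEBV stats
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99

# One batch nominally sampled from a replay buffer (random stand-ins here).
s = torch.randn(32, state_dim)
a = torch.randint(0, n_actions, (32, 1))
r = torch.randn(32, 1)                   # delayed GEBV-based reward
s_next = torch.randn(32, state_dim)

with torch.no_grad():
    target = r + gamma * target_net(s_next).max(dim=1, keepdim=True).values
loss = nn.functional.mse_loss(q_net(s).gather(1, a), target)
opt.zero_grad()
loss.backward()
opt.step()
print("TD loss:", loss.item())
```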

11 pages, 586 KB  
Article
Executive Functions and Adaptation in Vulnerable Contexts: Effects of a Digital Strategy-Based Intervention
by Alberto Aguilar-González, María Vaíllo Rodríguez, Claudia Poch and Nuria Camuñas
Information 2026, 17(1), 105; https://doi.org/10.3390/info17010105 - 20 Jan 2026
Viewed by 160
Abstract
Childhood and adolescence are critical periods for the development of Executive Functions (EF), which underpin self-control, planning, and social adaptation, and are often compromised in children growing up in psychosocially vulnerable contexts. This study examined the effects of STap2Go, a fully digital, strategy-based EF training, on EF performance and self-perceived maladjustment in 36 at-risk children and adolescents compared with 32 controls. Participants completed pre- and post-intervention assessments using the Neuropsychological Assessment Battery of Executive Functions (BANFE-3) and the Multifactorial Self-Evaluative Test for Child Adaptation (TAMAI). Results showed a significant effect of training on global EF and on General Maladjustment, with improvements only in the intervention group. These findings support the inclusion of scalable, avatar-guided EF stimulation programs such as STap2Go within social inclusion pathways for youth in vulnerable situations.
(This article belongs to the Special Issue Human–Computer Interactions and Computer-Assisted Education)

16 pages, 2424 KB  
Article
Development and Accessibility of the INCE App to Assess the Gut–Brain Axis in Individuals with and Without Autism
by Agustín E. Martínez-González
Information 2026, 17(1), 104; https://doi.org/10.3390/info17010104 - 20 Jan 2026
Viewed by 266
Abstract
In recent years, there has been increasing interest in the study of the gut–brain axis. Furthermore, there appears to be a relationship between abdominal pain, selective eating patterns, emotional instability, and intestinal disorders in Autism Spectrum Disorder (ASD). This work describes the development and accessibility evaluation of the INCE mobile app, which allows users to obtain levels of gut–brain interaction severity using two scientifically validated scales: the Gastrointestinal Symptom Severity Scale (GSSS) and the Pain and Sensitivity Reactivity Scale (PSRS). The validity of both instruments was established in previous studies in neurotypical and autistic populations. Statistically significant improvements were found following post-design changes in the use and accessibility of the INCE app (.NET MAUI 9) reported by professionals (p = 0.013), families (p = 0.011), and adolescents (p = 0.004). INCE represents an important contribution to evidence-based applications and has clear translational value for society.
(This article belongs to the Special Issue Information Technology in Society)

29 pages, 13806 KB  
Article
DCAM-DETR: Dual Cross-Attention Mamba Detection Transformer for RGB–Infrared Anti-UAV Detection
by Zemin Qin and Yuheng Li
Information 2026, 17(1), 103; https://doi.org/10.3390/info17010103 - 19 Jan 2026
Viewed by 287
Abstract
The proliferation of unmanned aerial vehicles (UAVs) poses escalating security threats across critical infrastructures, necessitating robust real-time detection systems. Existing vision-based methods predominantly rely on single-modality data and exhibit significant performance degradation under challenging scenarios. To address these limitations, we propose DCAM-DETR, a novel multimodal detection framework that fuses RGB and thermal infrared modalities through an enhanced RT-DETR architecture integrated with state space models. Our approach introduces four innovations: (1) a MobileMamba backbone leveraging selective state space models for efficient long-range dependency modeling with linear complexity O(n); (2) Cross-Dimensional Attention (CDA) and Cross-Path Attention (CPA) modules capturing intermodal correlations across spatial and channel dimensions; (3) an Adaptive Feature Fusion Module (AFFM) dynamically calibrating multimodal feature contributions; and (4) a Dual-Attention Decoupling Module (DADM) enhancing detection head discrimination for small targets. Experiments on Anti-UAV300 demonstrate state-of-the-art performance with 94.7% mAP@0.5 and 78.3% mAP@0.5:0.95 at 42 FPS. Extended evaluations on FLIR-ADAS and KAIST datasets validate the generalization capacity across diverse scenarios.
(This article belongs to the Special Issue Computer Vision for Security Applications, 2nd Edition)
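A hedged sketch of the core mechanism behind cross-modal fusion: RGB tokens attend to infrared tokens through standard multi-head attention. The paper's CDA/CPA modules are more elaborate; only the underlying operation is shown, with invented sizes.

```python
# Sketch: RGB features attend to infrared features (query = RGB, key/value = IR).
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        fused, _ = self.attn(query=rgb, key=ir, value=ir)
        return self.norm(rgb + fused)   # residual keeps the RGB stream intact

rgb = torch.randn(2, 400, 256)   # batch x tokens x dim (e.g., a 20x20 feature map)
ir = torch.randn(2, 400, 256)
print(CrossModalFusion()(rgb, ir).shape)  # torch.Size([2, 400, 256])
```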

33 pages, 7152 KB  
Article
DRADG: A Dynamic Risk-Adaptive Data Governance Framework for Modern Digital Ecosystems
by Jihane Gharib and Youssef Gahi
Information 2026, 17(1), 102; https://doi.org/10.3390/info17010102 - 19 Jan 2026
Viewed by 193
Abstract
In today’s volatile digital environments, conventional data governance practices fail to adequately address the dynamic, context-sensitive, and risk-laden nature of data use. This paper introduces DRADG (Dynamic Risk-Adaptive Data Governance), a new paradigm that unites risk-aware decision-making with adaptive data governance mechanisms to enhance resilience, compliance, and trust in complex data environments. Drawing on the convergence of existing data governance models, best-practice risk management (DAMA-DMBOK, NIST, and ISO 31000), and real-world enterprise experience, the framework provides a modular, expandable approach to dynamically aligning governance strategy with evolving contextual factors and threats in data management. The contribution takes the form of a multi-layered paradigm combining static policies with dynamic risk indicators through data sensitivity categorization, contextual risk scoring, and feedback loops for continuous adaptation. The technical contribution is a governance-risk matrix mapping data lifecycle stages (acquisition, storage, use, sharing, and archival) to corresponding risk mitigation mechanisms, embedded in a semi-automated rules-based engine capable of modifying governance controls based on predetermined thresholds and evolving data contexts. Validation was obtained through simulation-based exercises in cross-border data sharing, regulatory adherence, and cloud-based data management. Findings indicate that DRADG enhances governance responsiveness, reduces exposure to compliance risks, and provides a basis for sustainable data accountability. The paper concludes with guidelines for implementation and avenues for future research in AI-driven governance automation and policy learning. DRADG sets a precedent for placing intelligence and responsiveness at the heart of the data governance operations of modern digital enterprises.
(This article belongs to the Special Issue Information Management and Decision-Making)
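A toy illustration of the semi-automated rules engine described above: a weighted contextual risk score, computed per lifecycle stage, is mapped to governance controls through predetermined thresholds. All weights, thresholds, and control names are invented for illustration.

```python
# Toy sketch of a threshold-based governance rules engine (all values invented).
def risk_score(sensitivity: int, exposure: int, threat: int) -> float:
    """Each factor on a 1-5 scale; the weights are illustrative assumptions."""
    return 0.5 * sensitivity + 0.3 * exposure + 0.2 * threat

def controls_for(score: float) -> list[str]:
    if score >= 4.0:
        return ["encrypt-at-rest", "dual-approval-sharing", "audit-log"]
    if score >= 2.5:
        return ["encrypt-at-rest", "audit-log"]
    return ["audit-log"]

for stage, ctx in {
    "acquisition": (3, 2, 2),
    "sharing":     (5, 4, 3),   # cross-border sharing raises exposure
    "archival":    (3, 1, 1),
}.items():
    s = risk_score(*ctx)
    print(f"{stage}: score={s:.1f} -> {controls_for(s)}")
```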

27 pages, 2766 KB  
Article
Explainable Reciprocal Recommender System for Affiliate–Seller Matching: A Two-Stage Deep Learning Approach
by Hanadi Almutairi and Mourad Ykhlef
Information 2026, 17(1), 101; https://doi.org/10.3390/info17010101 - 19 Jan 2026
Viewed by 140
Abstract
This paper presents a two-stage explainable recommendation system for reciprocal affiliate–seller matching that uses machine learning and data science to handle voluminous data and generate personalized ranking lists for each user. In the first stage, a representation learning model was trained to create dense embeddings for affiliates and sellers, ensuring efficient identification of relevant pairs. In the second stage, a learning-to-rank approach was applied to refine the recommendation list based on user suitability and relevance. Diversity-enhancing reranking (maximal marginal relevance/explicit query aspect diversification) and popularity penalties were also implemented, and their effects on accuracy and provider-side diversity were quantified. Model interpretability techniques were used to identify which features affect a recommendation. The system was evaluated on a fully synthetic dataset that mimics the high-level statistics generated by affiliate platforms, and the results were compared against classical baselines (ALS, Bayesian personalized ranking) and ablated variants of the proposed model. While the reported ranking metrics (e.g., normalized discounted cumulative gain at 10 (NDCG@10)) are close to 1.0 under controlled conditions, potential overfitting, synthetic data limitations, and the need for further validation on real-world datasets are addressed. Attributions based on Shapley additive explanations were computed offline for the ranking model and excluded from the online latency budget, which was dominated by approximate nearest neighbors-based retrieval and listwise ranking. Our work demonstrates that high top-K accuracy, diversity-aware reranking, and post hoc explainability can be integrated within a single recommendation pipeline. While initially validated under synthetic evaluation, the pipeline was further assessed on a public real-world behavioral dataset, highlighting deployment challenges in affiliate–seller platforms and revealing practical constraints related to incomplete metadata.
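The maximal marginal relevance reranking mentioned in the abstract can be sketched in a few lines: each pick trades relevance against similarity to items already selected. Embeddings here are random stand-ins.

```python
# Sketch of maximal marginal relevance (MMR) reranking over embeddings.
import numpy as np

def mmr(query, items, k=5, lam=0.7):
    """lam=1 is pure relevance; lower values favour diversity."""
    sim = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    candidates = list(range(len(items)))
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            redundancy = max((sim(items[i], items[j]) for j in selected), default=0.0)
            return lam * sim(items[i], query) - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

rng = np.random.default_rng(0)
query = rng.normal(size=64)            # stand-in affiliate embedding
sellers = rng.normal(size=(100, 64))   # stand-in candidate seller embeddings
print(mmr(query, sellers))             # indices of the reranked top-5
```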

17 pages, 468 KB  
Article
A Traceable Ring Signcryption Scheme Based on SM9 for Privacy Protection
by Liang Qiao, Xuefeng Zhang and Beibei Li
Information 2026, 17(1), 100; https://doi.org/10.3390/info17010100 - 19 Jan 2026
Viewed by 182
Abstract
To address the issues of insufficient privacy protection, lack of confidentiality, and absence of traceability mechanisms in resource-constrained application scenarios such as IoT nodes or mobile network group communications, this paper proposes a traceable ring signcryption privacy protection scheme based on the SM9 algorithm. In detail, the ring signcryption structure is designed based on the SM9 identity-based cryptography algorithm framework. Additionally, the scheme introduces a dynamic accumulator to compress ciphertext length and optimizes the algorithm to improve computational efficiency. Under the random oracle model, it is proved that the scheme has unforgeability, confidentiality, and conditional anonymity, and it is also demonstrated that conditional anonymity can be used to trace the identity of the actual signcryptor in the event of a dispute. Performance analysis shows that, compared with related schemes, this scheme improves the efficiency of signcryption, and the size of the signcryption ciphertext remains at a constant level.
(This article belongs to the Special Issue Privacy-Preserving Data Analytics and Secure Computation)

21 pages, 2529 KB  
Article
Continual Learning for Saudi-Dialect Offensive-Language Detection Under Temporal Linguistic Drift
by Afefa Asiri and Mostafa Saleh
Information 2026, 17(1), 99; https://doi.org/10.3390/info17010099 - 18 Jan 2026
Viewed by 188
Abstract
Offensive-language detection systems that perform well at a given point in time often degrade as linguistic patterns evolve, particularly in dialectal Arabic social media, where new terms emerge and familiar expressions shift in meaning. This study investigates temporal linguistic drift in Saudi-dialect offensive-language detection through a systematic evaluation of continual-learning approaches. Building on the Saudi Offensive Dialect (SOD) dataset, we designed test scenarios incorporating newly introduced offensive terms, context-shifting expressions, and varying proportions of historical data to assess both adaptation and knowledge retention. Eight continual-learning configurations—Experience Replay (ER), Elastic Weight Consolidation (EWC), Low-Rank Adaptation (LoRA), and their combinations—were evaluated across five test scenarios. Results show that models without continual learning experience a 13.4-percentage-point decline in F1-macro on evolved patterns. In our experiments, Experience Replay achieved a relatively favorable balance, maintaining 0.812 F1-macro on historical data and 0.976 on contemporary patterns (KR = −0.035; AG = +0.264), though with increased memory and training time. EWC showed moderate retention (KR = −0.052) with comparable adaptation (AG = +0.255). On the SimuReal test set—designed with realistic class imbalance and only 5% drift terms—ER achieved 0.842 and EWC achieved 0.833, compared to the original model’s 0.817, representing modest improvements under realistic conditions. LoRA-based methods showed lower adaptation in our experiments, likely reflecting the specific LoRA configuration used in this study. Further investigation with alternative settings is warranted.
(This article belongs to the Special Issue Social Media Mining: Algorithms, Insights, and Applications)
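A minimal sketch of Experience Replay, the best-balanced configuration in the study: batches from the new period are mixed with retained historical samples, which is what protects performance on older patterns. The model, data, and replay ratio are placeholders.

```python
# Sketch: experience replay for continual fine-tuning of a text classifier.
import random
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Retained examples from earlier time periods (random stand-ins here).
replay_buffer = [(torch.randn(128), torch.randint(0, 2, ()).item())
                 for _ in range(500)]

def train_step(new_x, new_y, replay_ratio=0.5):
    n_replay = int(len(new_x) * replay_ratio)
    old = random.sample(replay_buffer, n_replay)
    old_x = torch.stack([x for x, _ in old])
    old_y = torch.tensor([y for _, y in old])
    x = torch.cat([new_x, old_x])          # mixed batch: new + historical
    y = torch.cat([new_y, old_y])
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

print(train_step(torch.randn(32, 128), torch.randint(0, 2, (32,))))
```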

25 pages, 3597 KB  
Article
Social Engineering Attacks Using Technical Job Interviews: Real-Life Case Analysis and AI-Assisted Mitigation Proposals
by Tomás de J. Mateo Sanguino
Information 2026, 17(1), 98; https://doi.org/10.3390/info17010098 - 18 Jan 2026
Viewed by 285
Abstract
Technical job interviews have become a vulnerable environment for social engineering attacks, particularly when they involve direct interaction with malicious code. In this context, the present manuscript investigates an exploratory case study, aiming to provide an in-depth analysis of a single incident rather than seeking to generalize statistical evidence. The study examines a real-world covert attack conducted through a simulated interview, identifying the technical and psychological elements that contribute to its effectiveness, assessing the performance of artificial intelligence (AI) assistants in early detection and proposing mitigation strategies. To this end, a methodology was implemented that combines discursive reconstruction of the attack, code exploitation and forensic analysis. The experimental phase, primarily focused on evaluating 10 large language models (LLMs) against a fragment of obfuscated code, reveals that the malware initially evaded detection by 62 antivirus engines, while assistants such as GPT 5.1, Grok 4.1 and Claude Sonnet 4.5 successfully identified malicious patterns and suggested operational countermeasures. The discussion highlights how the apparent legitimacy of platforms like LinkedIn, Calendly and Bitbucket, along with time pressure and technical familiarity, act as catalysts for deception. Based on these findings, the study suggests that LLMs may play a role in the early detection of threats, offering a potentially valuable avenue to enhance security in technical recruitment processes by enabling the timely identification of malicious behavior. To the best of available knowledge, this represents the first academically documented case of its kind analyzed from an interdisciplinary perspective.

22 pages, 2315 KB  
Article
Fuzzy-Based MCDA Technique Applied in Multi-Risk Problems Involving Heatwave Risks in Pandemic Scenarios
by Rosa Cafaro, Barbara Cardone, Ferdinando Di Martino, Cristiano Mauriello and Vittorio Miraglia
Information 2026, 17(1), 97; https://doi.org/10.3390/info17010097 - 18 Jan 2026
Viewed by 119
Abstract
Assessing the increased impacts and risks of urban heatwaves generated by stressors such as a pandemic, like the one experienced during COVID-19, is complicated by the lack of comprehensive information that would allow an analytical determination of the alteration such stressors produce in climate risks and impacts. On the other hand, it is essential for decision makers to understand the complex interactions between climate risks and the environmental and socioeconomic conditions generated by pandemics in an urban context, where specific restrictions on citizens’ livability are in place to protect their health. This study aims to address this need by proposing a fuzzy multi-criteria decision-making framework in a GIS environment that intuitively allows experts to assess the increase in heatwave risk factors for the population generated by pandemics. This assessment is accomplished by varying the values in the pairwise comparison matrices of the criteria that contribute to the construction of physical and socioeconomic vulnerability, exposure, and the hazard scenario. The framework was tested to assess heatwave impacts and risks on the population in the study area, which includes the municipalities of the metropolitan city of Naples, Italy, an urban area with high residential density where numerous summer heatwaves have been recorded over the last decade. The findings indicate a rise in impacts and risks during pandemic times, particularly in municipalities with the greatest resident population density, situated close to Naples.
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis, 3rd Edition)
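The pairwise-comparison step can be illustrated with a classical AHP-style weight extraction: criterion weights are the principal eigenvector of the comparison matrix, and a pandemic scenario is modelled by editing the judgments. The 3x3 matrix, criteria names, and values are invented for illustration.

```python
# Sketch: criterion weights from a pairwise comparison matrix (AHP-style).
import numpy as np

def ahp_weights(M):
    vals, vecs = np.linalg.eig(M)
    w = np.real(vecs[:, np.argmax(np.real(vals))])  # principal eigenvector
    return w / w.sum()

criteria = ["elderly density", "building quality", "green cover"]
baseline = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 3.0],
                     [1/5, 1/3, 1.0]])
pandemic = baseline.copy()
# Pandemic scenario: population vulnerability judged more critical.
pandemic[0, 1] = 5.0
pandemic[1, 0] = 1/5

for label, M in [("baseline", baseline), ("pandemic", pandemic)]:
    print(label, dict(zip(criteria, np.round(ahp_weights(M), 3))))
```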

24 pages, 536 KB  
Systematic Review
Dynamic Difficulty Adjustment in Serious Games: A Literature Review
by Lucia Víteková, Christian Eichhorn, Johanna Pirker and David A. Plecher
Information 2026, 17(1), 96; https://doi.org/10.3390/info17010096 - 17 Jan 2026
Viewed by 298
Abstract
This systematic literature review analyzes the role of dynamic difficulty adjustment (DDA) in serious games (SGs) to provide an overview of current trends and identify research gaps. The purpose of the study is to contextualize how DDA is being employed in SGs to enhance their learning outcomes, effectiveness, and game enjoyment. The review included studies published over the past five years that implemented specific DDA methods within SGs. Publications were identified through Google Scholar (searched up to 10 November 2025) and screened for relevance, resulting in 75 relevant papers. No formal risk-of-bias assessment was conducted. These studies were analyzed by publication year, source, application domain, DDA type, and effectiveness. The results indicate a growing interest in adaptive SGs across domains, including rehabilitation and education, with DDA methods ranging from rule-based (e.g., fuzzy logic) and player modeling (using performance, physiological, or emotional metrics) to various machine learning techniques (reinforcement learning, genetic algorithms, neural networks). Newly emerging trends, such as the integration of generative artificial intelligence for DDA, were also identified. Evidence suggests that DDA can enhance learning outcomes and game experience, although study differences, limited evaluation metrics, and unexplored opportunities for adaptive SGs highlight the need for further research.
(This article belongs to the Special Issue Serious Games, Games for Learning and Gamified Apps)

22 pages, 570 KB  
Article
Machines Prefer Humans as Literary Authors: Evaluating Authorship Bias in Large Language Models
by Marco Rospocher, Massimo Salgaro and Simone Rebora
Information 2026, 17(1), 95; https://doi.org/10.3390/info17010095 - 16 Jan 2026
Viewed by 299
Abstract
Automata and artificial intelligence (AI) have long occupied a central place in cultural and artistic imagination, and the recent proliferation of AI-generated artworks has intensified debates about authorship, creativity, and human agency. Empirical studies show that audiences often perceive AI-generated works as less authentic or emotionally resonant than human creations, with authorship attribution strongly shaping esthetic judgments. Yet little attention has been paid to how AI systems themselves evaluate creative authorship. This study investigates how large language models (LLMs) evaluate literary quality under different framings of authorship—Human, AI, or Human+AI collaboration. Using a questionnaire-based experimental design, we prompted four instruction-tuned LLMs (ChatGPT 4, Gemini 2, Gemma 3, and LLaMA 3) to read and assess three short stories in Italian, originally generated by ChatGPT 4 in the narrative style of Roald Dahl. For each story × authorship condition × model combination, we collected 100 questionnaire completions, yielding 3600 responses in total. Across esthetic, literary, and inclusiveness dimensions, the stated authorship systematically conditioned model judgments: identical stories were consistently rated more favorably when framed as human-authored or human–AI co-authored than when labeled as AI-authored, revealing a robust negative bias toward AI authorship. Model-specific analyses further indicate distinctive evaluative profiles and inclusiveness thresholds across proprietary and open-source systems. Our findings extend research on attribution bias into the computational realm, showing that LLM-based evaluations reproduce human-like assumptions about creative agency and literary value. We publicly release all materials to facilitate transparency and future comparative work on AI-mediated literary evaluation.
(This article belongs to the Special Issue Emerging Research in Computational Creativity and Creative Robotics)

21 pages, 4290 KB  
Article
Information Modeling of Asymmetric Aesthetics Using DCGAN: A Data-Driven Approach to the Generation of Marbling Art
by Muhammed Fahri Unlersen and Hatice Unlersen
Information 2026, 17(1), 94; https://doi.org/10.3390/info17010094 - 15 Jan 2026
Viewed by 369
Abstract
Traditional Turkish marbling (Ebru) art is an intangible cultural heritage characterized by highly asymmetric, fluid, and non-reproducible patterns, making its long-term preservation and large-scale dissemination challenging. It is highly sensitive to environmental conditions, making it enormously difficult to mass produce while maintaining its original aesthetic qualities. A data-driven generative model is therefore required to create unlimited, high-fidelity digital surrogates that safeguard this UNESCO heritage against physical loss and enable large-scale cultural applications. This study introduces a deep generative modeling framework for the digital reconstruction of traditional Turkish marbling (Ebru) art using a Deep Convolutional Generative Adversarial Network (DCGAN). A dataset of 20,400 image patches, systematically derived from 17 original marbling works, was used to train the proposed model. The framework aims to mathematically capture the asymmetric, fluid, and stochastic nature of Ebru patterns, enabling the reproduction of their aesthetic structure in a digital medium. The generated images were evaluated using multiple quantitative and perceptual metrics, including Fréchet Inception Distance (FID), Kernel Inception Distance (KID), Learned Perceptual Image Patch Similarity (LPIPS), and PRDC-based indicators (Precision, Recall, Density, Coverage). For experimental validation, the proposed DCGAN framework is additionally compared against a Vanilla GAN baseline trained under identical conditions, highlighting the advantages of convolutional architectures for modeling marbling textures. The results show that the DCGAN model achieved a high level of realism and diversity without mode collapse or overfitting, producing images that were perceptually close to authentic marbling works. In addition to the quantitative evaluation, expert qualitative assessment by a traditional Ebru artist confirmed that the model reproduced the organic textures, color dynamics, and compositional asymmetry characteristic of real marbling art. The proposed approach demonstrates the potential of deep generative models for the digital preservation, dissemination, and reinterpretation of intangible cultural heritage recognized by UNESCO.
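A compact sketch of a DCGAN generator for 64x64 texture patches of the kind the paper trains on; the channel counts follow the original DCGAN recipe rather than the authors' configuration.

```python
# Sketch: DCGAN generator mapping a latent vector to a 64x64 RGB patch.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim: int = 100):
        super().__init__()
        def up(cin, cout, stride=2, pad=1):
            return nn.Sequential(
                nn.ConvTranspose2d(cin, cout, 4, stride, pad, bias=False),
                nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
        self.net = nn.Sequential(
            up(z_dim, 512, stride=1, pad=0),     # 1x1 -> 4x4
            up(512, 256),                        # -> 8x8
            up(256, 128),                        # -> 16x16
            up(128, 64),                         # -> 32x32
            nn.ConvTranspose2d(64, 3, 4, 2, 1),  # -> 64x64 RGB
            nn.Tanh())

    def forward(self, z):
        return self.net(z)

z = torch.randn(8, 100, 1, 1)
print(Generator()(z).shape)  # torch.Size([8, 3, 64, 64])
```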

12 pages, 12633 KB  
Article
Point Cloud Quality Assessment via Complexity-Driven Patch Sampling and Attention-Enhanced Swin-Transformer
by Xilei Shen, Qiqi Li, Renwei Tu, Yongqiang Bai, Di Ge and Zhongjie Zhu
Information 2026, 17(1), 93; https://doi.org/10.3390/info17010093 - 15 Jan 2026
Viewed by 193
Abstract
As an emerging immersive media format, point clouds (PC) inevitably suffer from distortions such as compression and noise, where even local degradations may severely impair perceived visual quality and user experience. It is therefore essential to accurately evaluate the perceived quality of PC. In this paper, a no-reference point cloud quality assessment (PCQA) method that uses complexity-driven patch sampling and an attention-enhanced Swin-Transformer is proposed to accurately assess the perceived quality of PC. Given that projected PC maps effectively capture distortions and that the quality-related information density varies significantly across local patches, a complexity-driven patch sampling strategy is proposed. By quantifying patch complexity, regions with higher information density are preferentially sampled to enhance subsequent quality-sensitive feature representation. Given that the indistinguishable response strengths between key and redundant channels during feature extraction may dilute effective features, an Attention-Enhanced Swin-Transformer is proposed to adaptively reweight critical channels, thereby improving feature extraction performance. Given that traditional regression heads typically use a single-layer linear mapping, which overlooks the heterogeneous importance of information across channels, a gated regression head is designed to enable adaptive fusion of global and statistical features via a statistics-guided gating mechanism. Experiments on the SJTU-PCQA dataset demonstrate that the proposed method consistently outperforms representative PCQA methods.
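The complexity-driven sampling idea, sketched with local variance as a simple stand-in for the paper's complexity measure: patches of the projected map are scored and the most informative ones retained.

```python
# Sketch: score patches of a projected point-cloud map by local variance
# and keep the highest-scoring (most information-dense) ones.
import numpy as np

def sample_patches(image, patch=32, keep=8):
    h, w = image.shape[:2]
    scored = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch]
            scored.append((p.var(), (y, x)))   # higher variance = denser info
    scored.sort(reverse=True, key=lambda t: t[0])
    return [pos for _, pos in scored[:keep]]

rng = np.random.default_rng(0)
projection = rng.random((256, 256))            # stand-in projected PC map
print(sample_patches(projection))              # origins of the top-8 patches
```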

18 pages, 1323 KB  
Article
AI-Enhanced Modular Information Architecture for Cultural Heritage: Designing Cognitive-Efficient and User-Centered Experiences
by Fotios Pastrakis, Markos Konstantakis and George Caridakis
Information 2026, 17(1), 92; https://doi.org/10.3390/info17010092 - 15 Jan 2026
Viewed by 261
Abstract
Digital cultural heritage platforms face a dual challenge: preserving rich historical information while engaging an audience with declining attention spans. This paper addresses that challenge by proposing a modular information architecture designed to mitigate cognitive overload in cultural heritage tourism applications. We begin by examining evidence of diminishing sustained attention in digital user experience and its specific ramifications for cultural heritage sites, where dense content can overwhelm users. Grounded in cognitive load theory and principles of user-centered design, we outline a theoretical framework linking mental models, findability, and modular information architecture. We then present a user-centric modeling methodology that elicits visitor mental models and tasks (via card sorting, contextual inquiry, etc.), informing the specification of content components and semantic metadata (leveraging standards like Dublin Core and CIDOC-CRM). A visual framework is introduced that maps user tasks to content components, clusters these into UI components with progressive disclosure, and adapts them into screen instances suited to context, illustrated through a step-by-step walkthrough. Using this framework, we comparatively evaluate personalization and information structuring strategies in three platforms—TripAdvisor, Google Arts and Culture, and Airbnb Experiences—against criteria of cognitive load mitigation and user engagement. We also discuss how this modular architecture provides a structural foundation for human-centered, explainable AI–driven personalization and recommender services in cultural heritage contexts. The analysis reveals gaps in current designs (e.g., overwhelming content or passive user roles) and highlights best practices (such as tailored recommendations and progressive reveal of details). We conclude with implications for designing cultural heritage experiences that are cognitively accessible yet richly informative, summarizing contributions and suggesting future research in cultural UX, component-based design, and adaptive content delivery.
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)

17 pages, 297 KB  
Article
Potential of Different Machine Learning Methods in Cost Estimation of High-Rise Construction in Croatia
by Ksenija Tijanić Štrok
Information 2026, 17(1), 91; https://doi.org/10.3390/info17010091 - 15 Jan 2026
Viewed by 265
Abstract
The fundamental goal of a construction project is to complete the construction phase within budget, but in practice, planned cost estimates are often exceeded. Overruns can be caused by insufficient project preparation and planning, changes during construction, the activation of risk events, and similar factors. Moreover, construction costs are often calculated based on experience rather than scientifically grounded approaches. Given these challenges, this paper investigates the potential of several machine learning methods (linear regression, decision tree forest, support vector machine, and general regression neural network) for estimating construction costs. The methods were implemented on a database of recent high-rise construction projects in the Republic of Croatia. Results confirmed the potential of the selected estimation methods; in particular, the support vector machine stands out in terms of accuracy metrics. The established machine learning models contribute to a deeper understanding of real construction costs, their optimization, and more effective cost management during the construction phase.
(This article belongs to the Section Artificial Intelligence)

25 pages, 462 KB  
Article
ARIA: An AI-Supported Adaptive Augmented Reality Framework for Cultural Heritage
by Markos Konstantakis and Eleftheria Iakovaki
Information 2026, 17(1), 90; https://doi.org/10.3390/info17010090 - 15 Jan 2026
Viewed by 222
Abstract
Artificial Intelligence (AI) is increasingly reshaping how cultural heritage institutions design and deliver digital visitor experiences, particularly through adaptive Augmented Reality (AR) applications. However, most existing AR deployments in museums and galleries remain static, rule-based, and insufficiently responsive to visitors’ contextual, behavioral, and emotional diversity. This paper presents ARIA (Augmented Reality for Interpreting Artefacts), a conceptual and architectural framework for AI-supported, adaptive AR experiences in cultural heritage settings. ARIA is designed to address current limitations in personalization, affect-awareness, and ethical governance by integrating multimodal context sensing, lightweight affect recognition, and AI-driven content personalization within a unified system architecture. The framework combines Retrieval-Augmented Generation (RAG) for controlled, knowledge-grounded narrative adaptation, continuous user modeling, and interoperable Digital Asset Management (DAM), while embedding Human-Centered Design (HCD) and Fairness, Accountability, Transparency, and Ethics (FATE) principles at its core. Emphasis is placed on accountable personalization, privacy-preserving data handling, and curatorial oversight of narrative variation. ARIA is positioned as a design-oriented contribution rather than a fully implemented system. Its architecture, data flows, and adaptive logic are articulated through representative museum use-case scenarios and a structured formative validation process including expert walkthrough evaluation and feasibility analysis, providing a foundation for future prototyping and empirical evaluation. The framework aims to support the development of scalable, ethically grounded, and emotionally responsive AR experiences for next-generation digital museology.
(This article belongs to the Special Issue Artificial Intelligence Technologies for Sustainable Development)
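The RAG component ARIA relies on reduces, at its simplest, to retrieve-then-prompt; the sketch below uses cosine similarity over placeholder embeddings, with `embed` and the final generator call left abstract rather than tied to any specific API.

```python
# Sketch: retrieval-augmented generation over curatorial passages.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding function (deterministic per text within a run)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=64)

def retrieve(query, corpus, k=2):
    q = embed(query)
    sims = [(np.dot(q, embed(d)) /
             (np.linalg.norm(q) * np.linalg.norm(embed(d))), d) for d in corpus]
    return [d for _, d in sorted(sims, reverse=True)[:k]]

corpus = ["The amphora dates to the 5th century BCE.",
          "Conservation notes: restored handle, 1962.",
          "Donor record: acquired at auction in 1921."]
context = retrieve("How old is the amphora?", corpus)
prompt = "Answer only from this context:\n" + "\n".join(context) + \
         "\nQ: How old is the amphora?"
print(prompt)  # in a full system, `prompt` would be passed to the generator model
```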

25 pages, 8224 KB  
Article
QWR-Dec-Net: A Quaternion-Wavelet Retinex Framework for Low-Light Image Enhancement with Applications to Remote Sensing
by Vladimir Frants, Sos Agaian, Karen Panetta and Artyom Grigoryan
Information 2026, 17(1), 89; https://doi.org/10.3390/info17010089 - 14 Jan 2026
Viewed by 255
Abstract
Computer vision and deep learning are essential in diverse fields such as autonomous driving, medical imaging, face recognition, and object detection. However, enhancing low-light remote sensing images remains challenging for both research and real-world applications. Low illumination degrades image quality due to sensor limitations and environmental factors, weakening visual fidelity and reducing performance in vision tasks. Common issues such as insufficient lighting, backlighting, and limited exposure create low contrast, heavy shadows, and poor visibility, particularly at night. We propose QWR-Dec-Net, a quaternion-based Retinex decomposition network tailored for low-light image enhancement. QWR-Dec-Net consists of two key modules: a decomposition module that separates illumination and reflectance, and a denoising module that fuses a quaternion holistic color representation with wavelet multi-frequency information. This structure jointly improves color constancy and noise suppression. Experiments on low-light remote sensing datasets (LSCIDMR and UCMerced) show that QWR-Dec-Net outperforms current methods in PSNR, SSIM, LPIPS, and classification accuracy. The model’s accurate illumination estimation and stable reflectance make it well-suited for remote sensing tasks such as object detection, video surveillance, precision agriculture, and autonomous navigation.
(This article belongs to the Section Artificial Intelligence)
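For context, here is classical single-scale Retinex decomposition, which QWR-Dec-Net replaces with a learned, quaternion-wavelet version: illumination is approximated by a Gaussian blur and reflectance by a log-domain difference. This toy version omits the paper's denoising entirely.

```python
# Sketch: classical single-scale Retinex decomposition (not the paper's network).
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex(image, sigma=30.0, eps=1e-6):
    illumination = gaussian_filter(image, sigma)              # smooth lighting field
    reflectance = np.log(image + eps) - np.log(illumination + eps)
    return illumination, reflectance

rng = np.random.default_rng(0)
low_light = rng.random((128, 128)) * 0.2   # stand-in dark image with values in [0, 0.2]
L, R = retinex(low_light)
print(L.mean(), R.std())
```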

13 pages, 219 KB  
Review
Flourishing Considerations for AI
by Tyler J. VanderWeele and Jonathan D. Teubner
Information 2026, 17(1), 88; https://doi.org/10.3390/info17010088 - 14 Jan 2026
Viewed by 574
Abstract
Artificial intelligence (AI) is transforming countless aspects of society, including possibly even who we are as persons. AI technologies may affect our flourishing for good or for ill. In this paper, we put forward principled considerations concerning flourishing and AI that are oriented towards ensuring AI technologies are conducive to human flourishing, rather than impeding it. The considerations are intended to help guide discussions around the development of, and engagement with, AI technologies so as to orient them towards the promotion of individual and societal flourishing. Five sets of considerations around flourishing and AI are discussed concerning: (i) the output provided by large language models; (ii) the specific AI product design; (iii) our engagement with those products; (iv) the effects this is having on human knowledge; and (v) the effects this is having on the self-realization of the human person. While not exhaustive, it is argued that each of these sets of considerations must be taken seriously if these technologies are to help promote, rather than impede, flourishing. We suggest that we should ultimately frame all of our thinking on AI technologies around flourishing.
(This article belongs to the Special Issue Advances in Human-Centered Artificial Intelligence)

40 pages, 2156 KB  
Article
The Art Nouveau Path: From Gameplay Logs to Learning Analytics in a Mobile Augmented Reality Game for Sustainability Education
by João Ferreira-Santos and Lúcia Pombo
Information 2026, 17(1), 87; https://doi.org/10.3390/info17010087 - 14 Jan 2026
Viewed by 125
Abstract
Mobile augmented reality games (MARGs) generate rich digital traces of how students engage with complex, place-based learning tasks. This study analyses gameplay logs from the Art Nouveau Path, a location-based MARG within the EduCITY Digital Teaching and Learning Ecosystem (DTLE), to develop a learning analytics workflow that turns detailed gameplay logs into evidence for sustainability-focused educational design. During the post-game segment of a repeated cross-sectional intervention, 439 students in 118 collaborative groups completed 36 quiz tasks at 8 Art Nouveau heritage Points of Interest (POIs). Group-level logs (4248 group-item responses) capturing correctness, AR-specific scores, session duration, and pacing were transformed into interpretable indicators, combined with error mapping and cluster analysis, and triangulated with post-game open-ended reflections. Results show high overall feasibility (mean accuracy 85.33%) alongside a small subset of six conceptually demanding items with lower accuracy (mean 68.36%, range 58.47% to 72.88%) concentrated in specific path segments and media types. Cluster analysis yields three collaborative gameplay profiles, labeled ‘fast but fragile’, ‘slow but moderate’, and ‘thorough and successful’, which differ systematically in accuracy, pacing, and engagement with AR-mediated tasks. The study proposes a replicable event-based workflow that links mobile AR gameplay logs to design decisions for heritage-based education for sustainability. Full article
(This article belongs to the Collection Augmented Reality Technologies, Systems and Applications)
Show Figures

Graphical abstract
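
The cluster-analysis step in the abstract above can be illustrated in a few lines of scikit-learn: standardize group-level indicators, then partition groups into three profiles. The indicator names and the synthetic data below are assumptions for demonstration only; the paper derives its indicators from 4248 real group-item log responses.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic stand-in for 118 group-level indicator rows (column names are illustrative).
groups = pd.DataFrame({
    "accuracy": rng.uniform(0.55, 1.0, 118),         # share of correct quiz responses
    "mean_item_seconds": rng.uniform(20, 120, 118),  # pacing per quiz task
    "ar_score": rng.uniform(0.0, 1.0, 118),          # engagement with AR-mediated tasks
})
X = StandardScaler().fit_transform(groups)           # put indicators on a common scale
groups["profile"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(groups.groupby("profile").mean())              # inspect the three gameplay profiles
```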

21 pages, 1394 KB  
Article
Optimization and Application of Generative AI Algorithm Based on Transformer Architecture in Adaptive Learning
by Xuan Liu and Zhi Li
Information 2026, 17(1), 86; https://doi.org/10.3390/info17010086 - 13 Jan 2026
Viewed by 346
Abstract
Generative AI currently suffers from insufficient content-generation accuracy, weak personalized response, and low inference efficiency in adaptive learning scenarios, which limits its deeper application in intelligent teaching. To address this, this paper proposes a Transformer fine-tuning method based on low-rank adaptation, which achieves efficient parameter updates of the pre-trained model by inserting low-rank matrices and combines an instruction fine-tuning strategy to perform domain-adaptation training on a constructed educational-scenario dataset. A dynamic prompt construction mechanism is also introduced to strengthen the model’s awareness of individual learners’ behavioral context, enabling precise alignment and personalized control of the generated content. The model further embeds “wrong question guidance” and “knowledge graph embedding” mechanisms, providing intelligent feedback based on student errors and promoting deeper understanding of subject knowledge through knowledge graphs. Experimental results show that the method scores above 0.9 on BLEU and ROUGE-L with low average response latency, significantly outperforming traditional fine-tuning. The method demonstrates good adaptability and practicality in fusing generative AI with adaptive learning and provides a generalizable optimization path and application solution for intelligent education systems. Full article
(This article belongs to the Special Issue Deep Learning Approach for Time Series Forecasting)
Show Figures

Figure 1
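
For readers unfamiliar with low-rank adaptation, the wrapper below is a minimal PyTorch sketch of the core idea the abstract describes: freeze the pre-trained weight and train only a low-rank update ΔW = BA, scaled by α/r. The class name, initialization, and hyperparameters are illustrative, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank residual: y = base(x) + x(BA)^T * (alpha/r)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                              # pre-trained weights stay fixed
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16)
y = layer(torch.randn(4, 768))   # only A and B receive gradients during fine-tuning
```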

58 pages, 606 KB  
Review
The Pervasiveness of Digital Identity: Surveying Themes, Trends, and Ontological Foundations
by Matthew Comb and Andrew Martin
Information 2026, 17(1), 85; https://doi.org/10.3390/info17010085 - 13 Jan 2026
Viewed by 298
Abstract
Digital identity operates as the connective infrastructure of the digital age, linking individuals, organisations, and devices into networks through which services, rights, and responsibilities are transacted. Despite this centrality, the field remains fragmented, with technical solutions, disciplinary perspectives, and regulatory approaches often developing in parallel without interoperability. This paper presents a systematic survey of digital identity research, drawing on a Scopus-indexed baseline corpus of 2551 publications spanning full years 2005–2024, complemented by a recent stratum of 1241 publications (2023–2025) used to surface contemporary thematic structure and inform the ontology-oriented synthesis. The survey contributes in three ways. First, it provides an integrated overview of the digital identity landscape, tracing influential and widely cited works, historical developments, and recent scholarship across technical, legal, organisational, and cultural domains. Second, it applies natural language processing and subject metadata to identify thematic patterns, disciplinary emphases, and influential authors, exposing trends and cross-field connections difficult to capture through manual review. Third, it consolidates recurring concepts and relationships into ontological fragments (illustrative concept maps and subgraphs) that surface candidate entities, processes, and contexts as signals for future formalisation and alignment of fragmented approaches. By clarifying how digital identity has been conceptualised and where gaps remain, the study provides a foundation for progress toward a universal digital identity that is coherent, interoperable, and socially inclusive. Full article
(This article belongs to the Section Information and Communications Technology)
Show Figures

Figure 1
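
To give a flavour of the NLP step the survey describes (surfacing thematic patterns from publication text and metadata), here is a tiny TF-IDF plus NMF topic sketch. The four stand-in "abstracts" are invented; the real corpus is the 2551-publication Scopus baseline, and the survey's actual pipeline may differ.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

# Invented mini-corpus standing in for Scopus abstracts.
docs = [
    "self-sovereign identity and verifiable credentials on blockchain",
    "federated identity management and single sign-on protocols",
    "privacy regulation and governance of digital identity schemes",
    "biometric authentication for identity assurance",
]
tfidf = TfidfVectorizer(stop_words="english")
X = tfidf.fit_transform(docs)
nmf = NMF(n_components=2, random_state=0).fit(X)   # factor into 2 latent themes
terms = tfidf.get_feature_names_out()
for k, comp in enumerate(nmf.components_):
    top = [terms[i] for i in comp.argsort()[-4:][::-1]]
    print(f"theme {k}: {top}")                      # top terms characterize each theme
```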

24 pages, 9146 KB  
Article
A Model for a Serialized Set-Oriented NoSQL Database Management System
by Alexandru-George Șerban and Alexandru Boicea
Information 2026, 17(1), 84; https://doi.org/10.3390/info17010084 - 13 Jan 2026
Viewed by 295
Abstract
Recent advancements in data management highlight the increasing focus on large-scale integration and analytics, with the management of duplicate information becoming a more resource-intensive and costly task. Existing SQL and NoSQL systems inadequately address the semantic constraints of set-based data, either by compromising relational fidelity or through inefficient deduplication mechanisms. This paper presents a set-oriented centralized NoSQL database management system (DBMS) that enforces uniqueness by construction, thereby reducing downstream deduplication and enhancing result determinism. The system utilizes in-memory execution with binary serialized persistence, achieving O(1) time complexity for exact-match CRUD operations while maintaining ACID-compliant transactional semantics through explicit commit operations. A comparative performance evaluation against Redis and MongoDB highlights the trade-offs between consistency guarantees and latency. The results reveal that enforced set uniqueness completely eliminates duplicates, incurring only moderate latency trade-offs compared to in-memory performance measures. The model can be extended for fuzzy queries and imprecise data by retrieving the membership function information. This work demonstrates that the set-oriented DBMS design represents a distinct architectural paradigm that addresses data integrity constraints inadequately handled by contemporary database systems. Full article
(This article belongs to the Section Information Systems)
Show Figures

Figure 1
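
The design described above (uniqueness by construction, in-memory execution, binary serialized persistence behind an explicit commit) can be miniaturized as the toy Python class below. This is a sketch of the architectural idea only, not the authors' DBMS: records must be hashable, and Python's built-in set supplies the O(1) average-case exact-match behavior.

```python
import pickle

class SetStore:
    """Toy set-oriented store: duplicates impossible by construction, durable only on commit."""
    def __init__(self, path="store.bin"):
        self.path = path
        self._data = set()            # in-memory working set

    def insert(self, record):
        self._data.add(record)        # re-inserting an existing record is a no-op

    def contains(self, record):
        return record in self._data   # O(1) average-case exact match via hashing

    def delete(self, record):
        self._data.discard(record)

    def commit(self):
        # Binary serialized persistence; changes are durable only at explicit commit points.
        with open(self.path, "wb") as f:
            pickle.dump(self._data, f)

    def __len__(self):
        return len(self._data)

store = SetStore()
store.insert(("user", 42))
store.insert(("user", 42))            # the duplicate is silently absorbed
print(store.contains(("user", 42)), len(store))   # True 1
store.commit()
```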

30 pages, 8378 KB  
Article
Fund Similarity: A Use of Bipartite Graphs
by Ren-Raw Chen, Liangbingyan Luo, Yihui Wang and Xiaohu Zhang
Information 2026, 17(1), 83; https://doi.org/10.3390/info17010083 - 13 Jan 2026
Viewed by 165
Abstract
Fund similarity is important for investors when constructing diversified portfolios. Because mutual funds do not always adhere closely to their stated investment policies, investors may unintentionally hold funds with overlapping exposures, reducing diversification and instead causing “diworsification”, the investment term for added complexity that leads to worse results. As a result, various quantitative methods have been proposed in the literature to investigate fund similarity, primarily using portfolio holdings. Recently, machine learning tools such as clustering and graph theory have been introduced to capture fund similarity. This paper builds on this literature by applying bipartite graphs and node2vec embeddings to a comprehensive dataset covering 3874 funds over a nearly 6-year period. Our empirical evidence suggests that bipartiteness is not preserved for non-index (active) funds. Furthermore, while graph embeddings yield higher similarity scores than holding-based measures, they do not necessarily outperform holding-based similarity in explaining returns. These findings suggest that graph-based embeddings capture structural relationships among funds that are distinct from direct portfolio overlap but are not sufficient substitutes when similarity is evaluated solely through returns. As a result, we recommend a more comprehensive similarity measure that includes important risk metrics such as volatility risk, liquidity risk, and systemic risk. Full article
Show Figures

Graphical abstract
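
To make the bipartite construction concrete, the sketch below builds a small fund-holding graph with networkx and computes the holdings-based cosine similarity that serves as the baseline against which embedding-based similarity is compared; the fund names and weights are invented. The paper's node2vec embedding step is omitted here, as it requires a separate random-walk embedding library.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import bipartite

# Invented fund -> holding weight map; funds and stocks form the two node sets.
holdings = {
    "FundA": {"AAPL": 0.5, "MSFT": 0.5},
    "FundB": {"AAPL": 0.4, "GOOG": 0.6},
    "FundC": {"TSLA": 1.0},
}
G = nx.Graph()
for fund, positions in holdings.items():
    for stock, w in positions.items():
        G.add_edge(fund, stock, weight=w)   # weighted fund-stock edge

funds = list(holdings)
M = bipartite.biadjacency_matrix(G, row_order=funds).toarray()  # funds x stocks weights
Mn = M / np.linalg.norm(M, axis=1, keepdims=True)
S = Mn @ Mn.T                               # pairwise holdings-overlap (cosine) similarity
print(np.round(S, 2))
```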

15 pages, 3341 KB  
Article
Probabilistic Modeling and Pattern Discovery-Based Sindhi Information Retrieval System
by Dil Nawaz Hakro, Abdullah Abbasi, Anjum Zameer Bhat, Saleem Raza, Muhammad Babar and Osama Al Rahbi
Information 2026, 17(1), 82; https://doi.org/10.3390/info17010082 - 13 Jan 2026
Viewed by 171
Abstract
Natural language processing (NLP) enables interaction with computers in human languages. A closely related technology is Information Retrieval (IR), in which a user searches a stored collection for required documents; documents are retrieved according to their relevance to the user’s query and presented in descending order of relevance. Many languages have their own IR systems, whereas a dedicated IR system for Sindhi still needs attention. Sindhi is an old language with a rich history and literature and therefore merits such a system. Developing Sindhi IR requires a document database from which documents can be retrieved; accordingly, many Sindhi documents were collected from various sources, such as books, journals, magazines, and newspapers, and identified as suitable for indexing and further processing. Probabilistic modeling and pattern discovery were used to find patterns and to support effective retrieval and relevance ranking. The results for the Sindhi Information Retrieval system are promising, showing more than 90% relevancy. Elapsed query time ranged from 0.2 to 4.8 s for a single word and from 0.2 to 4.6 s for a Sindhi sentence. The IR system can be fine-tuned for other languages with similar characteristics that use the Arabic script. Full article
Show Figures

Graphical abstract
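
As one concrete instance of the probabilistic modeling the abstract mentions, the snippet below implements Okapi BM25, a standard probabilistic ranking function, over toy tokenized documents. This is a generic stand-in under the assumption that a BM25-style model fits the paper's description; the tokens and parameter values are illustrative.

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document against the query with Okapi BM25."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N          # average document length
    df = Counter(t for d in docs for t in set(d))  # document frequency per term
    scores = []
    for d in docs:
        tf = Counter(d)
        s = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            idf = math.log((N - df[t] + 0.5) / (df[t] + 0.5) + 1)   # smoothed IDF
            s += idf * tf[t] * (k1 + 1) / (tf[t] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(s)
    return scores   # rank documents by descending score

docs = [["sindhi", "poetry", "history"], ["sindhi", "newspaper"], ["arabic", "script"]]
print(bm25_scores(["sindhi", "history"], docs))
```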
