Journal Description
Information
Information is a scientific, peer-reviewed, open access journal of information science and technology, data, knowledge, and communication, published monthly online by MDPI. The International Society for the Study of Information (IS4SI) is affiliated with Information, and its members receive discounts on the article processing charges.
- Open Access: free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, dblp, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Information Systems) / CiteScore - Q2 (Information Systems)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 20.9 days after submission; acceptance to publication takes 3.6 days (median values for papers published in this journal in the second half of 2025).
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
- Journal Cluster of Information Systems and Technology: Analytics, Applied System Innovation, Cryptography, Data, Digital, Informatics, Information, Journal of Cybersecurity and Privacy, and Multimedia.
Impact Factor: 2.9 (2024); 5-Year Impact Factor: 3.0 (2024)
Latest Articles
Neural Signatures of Speed and Regular Reading: A Machine Learning and Explainable AI (XAI) Study of Sinhalese and Japanese
Information 2026, 17(1), 108; https://doi.org/10.3390/info17010108 - 21 Jan 2026
Abstract
Reading speed is hypothesized to have distinct neural signatures across orthographically diverse languages, yet cross-linguistic evidence remains limited. We investigated this by classifying speed readers versus regular readers among Sinhalese and Japanese adults ( ) using task-based fMRI and 35 supervised machine learning classifiers. Functional activation was extracted from 12 reading-related cortical regions. We introduced Fuzzy C-Means (FCM) clustering for data augmentation and Shapley additive explanations (SHAP) for model interpretability, enabling evaluation of region-wise contributions to reading speed classification. The best model, an FT-TABPFN network with FCM augmentation, achieved 81.1% test accuracy in the Combined cohort. In the Japanese-only cohort, Quadratic SVM and Subspace KNN each reached 85.7% accuracy. SHAP analysis revealed that the angular gyrus (AG) and inferior frontal gyrus (triangularis) were the strongest contributors across cohorts. Additionally, the anterior supramarginal gyrus (ASMG) appeared as a higher contributor in the Japanese-only cohort, while the posterior superior temporal gyrus (PSTG) contributed strongly to both cohorts separately. However, the posterior middle temporal gyrus (PMTG) showed little or no contribution to the model classification in each cohort. These findings demonstrate the effectiveness of interpretable machine learning for decoding reading speed, highlighting both universal neural predictors and language-specific differences. Our study provides a novel, generalizable framework for cross-linguistic neuroimaging analysis of reading proficiency.
Full article
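The Fuzzy C-Means step used for augmentation above is a standard clustering algorithm. A minimal NumPy sketch of its alternating membership/centroid updates (a generic illustration with hypothetical parameters `c` and `m`, not the authors' implementation):

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal Fuzzy C-Means: returns (centroids, membership matrix U).

    U[i, k] is the degree to which sample i belongs to cluster k; each
    row of U sums to 1, which is what makes the clustering 'fuzzy'.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial memberships, normalized so each row sums to 1.
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # Centroids: membership-weighted means of the samples.
        centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distances from every sample to every centroid (epsilon avoids /0).
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1)).
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centroids, U
```

For augmentation, synthetic samples can then be drawn near each centroid in proportion to the soft memberships, which is one plausible reading of the FCM-based scheme described above.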
Open Access Article
Bridging the Engagement–Regulation Gap: A Longitudinal Evaluation of AI-Enhanced Learning Attitudes in Social Work Education
by
Duen-Huang Huang and Yu-Cheng Wang
Information 2026, 17(1), 107; https://doi.org/10.3390/info17010107 - 21 Jan 2026
Abstract
The rapid adoption of generative artificial intelligence (AI) in higher education has intensified a pedagogical dilemma: while AI tools can increase immediate classroom engagement, they do not necessarily foster the self-regulated learning (SRL) capacities required for ethical and reflective professional practice, particularly in human-service fields. In this two-time-point, pre-post cohort-level (repeated cross-sectional) evaluation, we examined a six-week AI-integrated curriculum incorporating explicit SRL scaffolding among social work undergraduates at a Taiwanese university (pre-test N = 37; post-test N = 35). Because the surveys were administered anonymously and individual responses could not be linked across time, pre-post comparisons were conducted at the cohort level using independent samples. The participating students completed the AI-Enhanced Learning Attitude Scale (AILAS), a 30-item instrument grounded in the Technology Acceptance Model, Attitude Theory, and SRL frameworks, assessing six dimensions of AI-related learning attitudes. Prior pilot evidence suggested an engagement–regulation gap, characterized by relatively strong learning process engagement but weaker learning planning and learning habits. Accordingly, the curriculum incorporated weekly goal-setting activities, structured reflection tasks, peer accountability mechanisms, explicit instructor modeling of SRL strategies, and simple progress-tracking tools. The psychometric analyses demonstrated excellent internal consistency for the total scale at the post-test stage (Cronbach's α = 0.95). The independent-samples t-tests indicated that, at the post-test stage, the cohorts reported higher mean scores across most dimensions, with the largest cohort-level differences in Learning Habits (Cohen's d = 0.75, p = 0.003) and Learning Process (Cohen's d = 0.79, p = 0.002). After Bonferroni adjustment, improvements in the Learning Desire, Learning Habits, and Learning Process dimensions and the Overall Attitude scores remained statistically robust. In contrast, the Learning Planning dimension demonstrated only marginal improvement (d = 0.46, p = 0.064), suggesting that higher-order planning skills may require longer or more sustained instructional support. No statistically significant gender differences were identified at the post-test stage. Taken together, the findings offer preliminary, design-consistent evidence that SRL-oriented pedagogical scaffolding, rather than AI technology itself, may help narrow the engagement–regulation gap, while the consolidation of autonomous planning capacities remains an ongoing instructional challenge.
Full article
(This article belongs to the Special Issue Advancing AI Applications in Education and Engineering: A Multidisciplinary Perspective)
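The Cohen's d values reported above are standard pooled-SD effect sizes for two independent samples. A self-contained NumPy sketch of the computation (the arrays in the test are illustrative, not the study's data):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * np.var(a, ddof=1) +
                  (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
    return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

def welch_t(a, b):
    """Welch's t statistic for two independent samples (no equal-variance assumption)."""
    va, vb = np.var(a, ddof=1) / len(a), np.var(b, ddof=1) / len(b)
    return (np.mean(a) - np.mean(b)) / np.sqrt(va + vb)
```

By convention, |d| around 0.2 is a small effect, 0.5 medium, and 0.8 large, so the reported d = 0.75 and d = 0.79 sit at the medium-to-large boundary.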
Open Access Article
A Hybrid Optimization Approach for Multi-Generation Intelligent Breeding Decisions
by
Mingxiang Yang, Ziyu Li, Jiahao Li, Bingling Huang, Xiaohui Niu, Xin Lu and Xiaoxia Li
Information 2026, 17(1), 106; https://doi.org/10.3390/info17010106 - 20 Jan 2026
Abstract
Multi-generation intelligent breeding (MGIB) decision-making is a technique used by plant breeders to select mating individuals to produce new generations and allocate resources for each generation. However, existing research remains scarce on the dynamic optimization of resources under limited budget and time constraints. Inspired by advances in reinforcement learning (RL), a framework that integrates evolutionary algorithms with deep RL was proposed to fill this gap. The framework combines two modules: the Improved Look-Ahead Selection (ILAS) module and the Deep Q-Networks (DQNs) module. The former employs a simulated annealing-enhanced estimation of distribution algorithm to make mating decisions. Based on the selected mating individuals, the latter module learns multi-generation resource allocation policies using DQN. To evaluate our framework, numerical experiments were conducted on two realistic breeding datasets, i.e., Corn2019 and CUBIC. ILAS outperformed LAS on Corn2019, increasing the maximum and mean population Genomic Estimated Breeding Value (GEBV) by 9.1% and 7.7%, respectively. ILAS-DQN consistently outperformed the baseline methods, achieving significant and practical improvements in both top-performing and elite-average GEBVs across two independent datasets. The results demonstrate that our method outperforms traditional baselines in both generalization and effectiveness for complex agricultural problems with delayed rewards.
Full article
(This article belongs to the Section Artificial Intelligence)
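The core idea of the DQN module, learning a resource-allocation policy under delayed rewards, can be illustrated with a tabular Q-learning toy. The square-root reward below is a hypothetical stand-in for diminishing per-generation GEBV gain, and the state/action encoding is invented for the example; this is not the paper's DQN:

```python
import math
import random

def q_learn_allocation(budget=4, gens=2, episodes=20000,
                       alpha=0.2, gamma=1.0, eps=0.2, seed=0):
    """Tabular Q-learning toy: split an integer budget across generations.

    State = (generation, budget left); action = units spent now.
    sqrt(a) reward gives diminishing returns, so an even split is optimal.
    """
    random.seed(seed)
    Q = {}
    for _ in range(episodes):
        left = budget
        for g in range(gens):
            later = gens - g - 1               # generations still to fund
            legal = list(range(1, left - later + 1))  # keep 1 unit per later gen
            s = (g, left)
            if random.random() < eps:          # epsilon-greedy exploration
                a = random.choice(legal)
            else:
                a = max(legal, key=lambda x: Q.get((s, x), 0.0))
            r = math.sqrt(a)
            left -= a
            if later:                          # bootstrap from the next state
                nxt = range(1, left - (later - 1) + 1)
                target = r + gamma * max(Q.get(((g + 1, left), x), 0.0)
                                         for x in nxt)
            else:                              # terminal generation
                target = r
            Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * target
    return Q
```

With a budget of 4 over 2 generations, the learned greedy first action is 2, i.e., an even split, which is what the concave reward predicts.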
Open Access Article
Executive Functions and Adaptation in Vulnerable Contexts: Effects of a Digital Strategy-Based Intervention
by
Alberto Aguilar-González, María Vaíllo Rodríguez, Claudia Poch and Nuria Camuñas
Information 2026, 17(1), 105; https://doi.org/10.3390/info17010105 - 20 Jan 2026
Abstract
Childhood and adolescence are critical periods for the development of Executive Functions (EF), which underpin self-control, planning, and social adaptation, and are often compromised in children growing up in psychosocially vulnerable contexts. This study examined the effects of STap2Go, a fully digital, strategy-based EF training, on EF performance and self-perceived maladjustment in 36 at-risk children and adolescents compared with 32 controls. Participants completed pre- and post-intervention assessments using the Neuropsychological Assessment Battery of Executive Functions (BANFE-3) and the Multifactorial Self-Evaluative Test for Child Adaptation (TAMAI). Results showed a significant effect of training on global EF and on General Maladjustment, with improvements only in the intervention group. These findings support the inclusion of scalable, avatar-guided EF stimulation programs such as STap2Go within social inclusion pathways for youth in vulnerable situations.
Full article
(This article belongs to the Special Issue Human–Computer Interactions and Computer-Assisted Education)
Open Access Article
Development and Accessibility of the INCE App to Assess the Gut–Brain Axis in Individuals with and Without Autism
by
Agustín E. Martínez-González
Information 2026, 17(1), 104; https://doi.org/10.3390/info17010104 - 20 Jan 2026
Abstract
In recent years, there has been increasing interest in the study of the gut–brain axis. Furthermore, there appears to be a relationship between abdominal pain, selective eating patterns, emotional instability, and intestinal disorders in Autism Spectrum Disorder (ASD). This work describes the development and accessibility evaluation of the INCE mobile app. The app estimates the severity of gut–brain interaction problems using two previously validated scales: the Gastrointestinal Symptom Severity Scale (GSSS) and the Pain and Sensitivity Reactivity Scale (PSRS). The validity of both instruments was established in previous studies in neurotypical and autistic populations. Statistically significant improvements in the usability and accessibility of the INCE app (built with .NET MAUI 9) were found following post-design changes, as reported by professionals (p = 0.013), families (p = 0.011), and adolescents (p = 0.004). INCE represents an important contribution to evidence-based applications with clear societal relevance.
Full article
(This article belongs to the Special Issue Information Technology in Society)
Open Access Article
DCAM-DETR: Dual Cross-Attention Mamba Detection Transformer for RGB–Infrared Anti-UAV Detection
by
Zemin Qin and Yuheng Li
Information 2026, 17(1), 103; https://doi.org/10.3390/info17010103 - 19 Jan 2026
Abstract
The proliferation of unmanned aerial vehicles (UAVs) poses escalating security threats across critical infrastructures, necessitating robust real-time detection systems. Existing vision-based methods predominantly rely on single-modality data and exhibit significant performance degradation under challenging scenarios. To address these limitations, we propose DCAM-DETR, a novel multimodal detection framework that fuses RGB and thermal infrared modalities through an enhanced RT-DETR architecture integrated with state space models. Our approach introduces four innovations: (1) a MobileMamba backbone leveraging selective state space models for efficient long-range dependency modeling with linear complexity; (2) Cross-Dimensional Attention (CDA) and Cross-Path Attention (CPA) modules capturing intermodal correlations across spatial and channel dimensions; (3) an Adaptive Feature Fusion Module (AFFM) dynamically calibrating multimodal feature contributions; and (4) a Dual-Attention Decoupling Module (DADM) enhancing detection head discrimination for small targets. Experiments on Anti-UAV300 demonstrate state-of-the-art performance with 94.7% mAP@0.5 and 78.3% mAP@0.5:0.95 at 42 FPS. Extended evaluations on the FLIR-ADAS and KAIST datasets validate the generalization capacity across diverse scenarios.
Full article
(This article belongs to the Special Issue Computer Vision for Security Applications, 2nd Edition)
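The mAP@0.5 figure above counts a detection as correct when its intersection-over-union (IoU) with a ground-truth box reaches 0.5. The IoU itself is a standard computation (this is the generic metric, not the authors' evaluation code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Coordinates of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

mAP@0.5:0.95 then averages the AP over IoU thresholds from 0.5 to 0.95 in steps of 0.05, which is why it is the stricter of the two numbers reported.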
Open Access Article
DRADG: A Dynamic Risk-Adaptive Data Governance Framework for Modern Digital Ecosystems
by
Jihane Gharib and Youssef Gahi
Information 2026, 17(1), 102; https://doi.org/10.3390/info17010102 - 19 Jan 2026
Abstract
In today’s volatile digital environments, conventional data governance practices fail to adequately address the dynamic, context-sensitive, and risk-prone nature of data use. This paper introduces DRADG (Dynamic Risk-Adaptive Data Governance), a new paradigm that unites risk-aware decision-making with adaptive data governance mechanisms to enhance resilience, compliance, and trust in complex data environments. Drawing on the convergence of existing data governance models, risk management best practices (DAMA-DMBOK, NIST, and ISO 31000), and real-world enterprise experience, this framework provides a modular, expandable approach to dynamically aligning governance strategy with evolving contextual factors and threats in data management. The contribution is a multi-layered paradigm combining static policies with dynamic risk indicators, applying data sensitivity categorization, contextual risk scoring, and feedback loops for continuous adaptation. The technical contribution is a governance-risk matrix mapping data lifecycle stages (acquisition, storage, use, sharing, and archival) to corresponding risk mitigation mechanisms. This matrix is embedded in a semi-automated rules-based engine capable of modifying governance controls based on predetermined thresholds and evolving data contexts. Validation was obtained through simulation-based exercises in cross-border data sharing, regulatory adherence, and cloud-based data management. Findings indicate that DRADG enhances governance responsiveness, reduces exposure to compliance risks, and provides a basis for sustainable data accountability. The paper concludes with implementation guidelines and avenues for future research in AI-driven governance automation and policy learning. DRADG sets a precedent for embedding intelligence and responsiveness at the heart of the data governance operations of modern digital enterprises.
Full article
(This article belongs to the Special Issue Information Management and Decision-Making)
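The semi-automated rules-based engine described above can be pictured as a small decision function: lifecycle stage and sensitivity select baseline controls, and the contextual risk score escalates them past predetermined thresholds. The stage names, sensitivity labels, thresholds, and control names below are illustrative placeholders, not DRADG's actual vocabulary:

```python
def select_controls(stage, sensitivity, risk_score):
    """Hypothetical DRADG-style rule engine.

    Baseline controls come from the lifecycle stage; the contextual risk
    score (0..1) escalates them when it crosses preset thresholds.
    """
    baseline = {
        "acquisition": ["consent-check"],
        "storage":     ["encryption-at-rest"],
        "use":         ["access-logging"],
        "sharing":     ["dlp-scan"],
        "archival":    ["retention-policy"],
    }
    controls = list(baseline[stage])
    # Escalation rules: restricted data or high risk triggers the strictest tier.
    if sensitivity == "restricted" or risk_score >= 0.8:
        controls += ["manual-approval", "tokenization"]
    elif risk_score >= 0.5:
        controls += ["step-up-authentication"]
    return controls
```

The feedback loop in the framework would then feed observed incidents back into `risk_score`, which is what makes the governance "dynamic" rather than a static policy table.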
Open Access Article
Explainable Reciprocal Recommender System for Affiliate–Seller Matching: A Two-Stage Deep Learning Approach
by
Hanadi Almutairi and Mourad Ykhlef
Information 2026, 17(1), 101; https://doi.org/10.3390/info17010101 - 19 Jan 2026
Abstract
This paper presents a two-stage explainable recommendation system for reciprocal affiliate–seller matching that uses machine learning and data science to handle voluminous data and generate personalized ranking lists for each user. In the first stage, a representation learning model was trained to create dense embeddings for affiliates and sellers, ensuring efficient identification of relevant pairs. In the second stage, a learning-to-rank approach was applied to refine the recommendation list based on user suitability and relevance. Diversity-enhancing reranking (maximal marginal relevance/explicit query aspect diversification) and popularity penalties were also implemented, and their effects on accuracy and provider-side diversity were quantified. Model interpretability techniques were used to identify which features affect a recommendation. The system was evaluated on a fully synthetic dataset that mimics the high-level statistics generated by affiliate platforms, and the results were compared against classical baselines (ALS, Bayesian personalized ranking) and ablated variants of the proposed model. While the reported ranking metrics (e.g., normalized discounted cumulative gain at 10 (NDCG@10)) are close to 1.0 under controlled conditions, potential overfitting, synthetic data limitations, and the need for further validation on real-world datasets are addressed. Attributions based on Shapley additive explanations were computed offline for the ranking model and excluded from the online latency budget, which was dominated by approximate nearest neighbors-based retrieval and listwise ranking. Our work demonstrates that high top-K accuracy, diversity-aware reranking, and post hoc explainability can be integrated within a single recommendation pipeline. 
While initially validated under synthetic evaluation, the pipeline was further assessed on a public real-world behavioral dataset, highlighting deployment challenges in affiliate–seller platforms and revealing practical constraints related to incomplete metadata.
Full article
(This article belongs to the Special Issue 2nd Edition of Modern Recommender Systems: Approaches, Challenges and Applications)
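NDCG@10, the ranking metric cited above, discounts relevance gains logarithmically by rank position and normalizes by the ideal ordering, so a perfect ranking scores exactly 1.0. A compact sketch of the standard formula (not the paper's evaluation code):

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for a ranked list of graded relevance scores."""
    def dcg(rels):
        # Discounted cumulative gain: position i contributes r / log2(i + 2).
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

Scores near 1.0 on synthetic data, as the abstract notes, are exactly the regime where overfitting to the generator's statistics must be ruled out.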
Open Access Article
A Traceable Ring Signcryption Scheme Based on SM9 for Privacy Protection
by
Liang Qiao, Xuefeng Zhang and Beibei Li
Information 2026, 17(1), 100; https://doi.org/10.3390/info17010100 - 19 Jan 2026
Abstract
To address the issues of insufficient privacy protection, lack of confidentiality, and absence of traceability mechanisms in resource-constrained application scenarios such as IoT nodes or mobile network group communications, this paper proposes a traceable ring signcryption privacy protection scheme based on the SM9 algorithm. In detail, the ring signcryption structure is designed based on the SM9 identity-based cryptography algorithm framework. Additionally, the scheme introduces a dynamic accumulator to compress ciphertext length and optimizes the algorithm to improve computational efficiency. Under the random oracle model, it is proved that the scheme has unforgeability, confidentiality, and conditional anonymity, and it is also demonstrated that conditional anonymity can be used to trace the identity of the actual signcryptor in the event of a dispute. Performance analysis shows that, compared with related schemes, this scheme improves the efficiency of signcryption, and the size of the signcryption ciphertext remains at a constant level.
Full article
(This article belongs to the Special Issue Privacy-Preserving Data Analytics and Secure Computation)
Open Access Article
Continual Learning for Saudi-Dialect Offensive-Language Detection Under Temporal Linguistic Drift
by
Afefa Asiri and Mostafa Saleh
Information 2026, 17(1), 99; https://doi.org/10.3390/info17010099 - 18 Jan 2026
Abstract
Offensive-language detection systems that perform well at a given point in time often degrade as linguistic patterns evolve, particularly in dialectal Arabic social media, where new terms emerge and familiar expressions shift in meaning. This study investigates temporal linguistic drift in Saudi-dialect offensive-language detection through a systematic evaluation of continual-learning approaches. Building on the Saudi Offensive Dialect (SOD) dataset, we designed test scenarios incorporating newly introduced offensive terms, context-shifting expressions, and varying proportions of historical data to assess both adaptation and knowledge retention. Eight continual-learning configurations, spanning Experience Replay (ER), Elastic Weight Consolidation (EWC), Low-Rank Adaptation (LoRA), and their combinations, were evaluated across five test scenarios. Results show that models without continual learning experience a 13.4-percentage-point decline in F1-macro on evolved patterns. In our experiments, Experience Replay achieved a relatively favorable balance, maintaining 0.812 F1-macro on historical data and 0.976 on contemporary patterns (KR = −0.035; AG = +0.264), though with increased memory and training time. EWC showed moderate retention (KR = −0.052) with comparable adaptation (AG = +0.255). On the SimuReal test set, designed with realistic class imbalance and only 5% drift terms, ER achieved 0.842 and EWC achieved 0.833, compared to the original model's 0.817, representing modest improvements under realistic conditions. LoRA-based methods showed lower adaptation in our experiments, likely reflecting the specific LoRA configuration used in this study. Further investigation with alternative settings is warranted.
Full article
(This article belongs to the Special Issue Social Media Mining: Algorithms, Insights, and Applications)
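The EWC configuration evaluated above penalizes drift in parameters that were important for the old task. Its penalty term and gradient in minimal NumPy form, as a generic sketch of the regularizer rather than the study's setup (the Fisher values in the test are illustrative):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta_star are the parameters after the previous task and fisher holds
    per-parameter Fisher-information estimates (how much each parameter
    mattered for that task).
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def ewc_grad(theta, theta_star, fisher, lam=1.0):
    """Gradient of the penalty: pulls 'important' parameters back hardest."""
    return lam * fisher * (theta - theta_star)
```

During fine-tuning on drifted data, this gradient is simply added to the task-loss gradient, which is why EWC retains old-task knowledge without replaying stored examples the way ER does.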
Open Access Article
Social Engineering Attacks Using Technical Job Interviews: Real-Life Case Analysis and AI-Assisted Mitigation Proposals
by
Tomás de J. Mateo Sanguino
Information 2026, 17(1), 98; https://doi.org/10.3390/info17010098 - 18 Jan 2026
Abstract
Technical job interviews have become a vulnerable environment for social engineering attacks, particularly when they involve direct interaction with malicious code. In this context, the present manuscript investigates an exploratory case study, aiming to provide an in-depth analysis of a single incident rather than seeking to generalize statistical evidence. The study examines a real-world covert attack conducted through a simulated interview, identifying the technical and psychological elements that contributed to its effectiveness, assessing the performance of artificial intelligence (AI) assistants in early detection, and proposing mitigation strategies. To this end, a methodology was implemented that combines discursive reconstruction of the attack, code exploitation, and forensic analysis. The experimental phase, primarily focused on evaluating 10 large language models (LLMs) against a fragment of obfuscated code, reveals that the malware initially evaded detection by 62 antivirus engines, while assistants such as GPT 5.1, Grok 4.1, and Claude Sonnet 4.5 successfully identified malicious patterns and suggested operational countermeasures. The discussion highlights how the apparent legitimacy of platforms like LinkedIn, Calendly, and Bitbucket, along with time pressure and technical familiarity, acts as a catalyst for deception. Based on these findings, the study suggests that LLMs may play a role in the early detection of threats, offering a potentially valuable avenue to enhance security in technical recruitment processes by enabling the timely identification of malicious behavior. To the best of our knowledge, this represents the first academically documented case of its kind analyzed from an interdisciplinary perspective.
Full article
(This article belongs to the Special Issue Emerging Research in Artificial Intelligence for Code Analysis and Security)
Open Access Article
Fuzzy-Based MCDA Technique Applied in Multi-Risk Problems Involving Heatwave Risks in Pandemic Scenarios
by
Rosa Cafaro, Barbara Cardone, Ferdinando Di Martino, Cristiano Mauriello and Vittorio Miraglia
Information 2026, 17(1), 97; https://doi.org/10.3390/info17010097 - 18 Jan 2026
Abstract
Assessing the increased impacts and risks of urban heatwaves under stressors such as a pandemic, like the one experienced during COVID-19, is complicated by the lack of comprehensive information that would allow an analytical determination of how such stressors alter climate risks and impacts. At the same time, it is essential for decision makers to understand the complex interactions between climate risks and the environmental and socioeconomic conditions generated by pandemics in an urban context, where specific restrictions on citizens' livability are in place to protect their health. This study addresses this need by proposing a fuzzy multi-criteria decision-making framework in a GIS environment that intuitively allows experts to assess the increase in heatwave risk factors for the population generated by pandemics. This assessment is accomplished by varying the values in the pairwise comparison matrices of the criteria that contribute to the construction of physical and socioeconomic vulnerability, exposure, and the hazard scenario. The framework was tested by assessing heatwave impacts and risks on the population of a study area comprising the municipalities of the metropolitan city of Naples, Italy, an urban area with high residential density where numerous summer heatwaves have been recorded over the last decade. The findings indicate a rise in impacts and risks during pandemic times, particularly in the municipalities with the greatest resident population density, situated close to Naples.
Full article
(This article belongs to the Special Issue New Applications in Multiple Criteria Decision Analysis, 3rd Edition)
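The pairwise-comparison matrices mentioned above are the core AHP ingredient: criterion weights are recovered from expert judgments of relative importance. One common approximation to the principal eigenvector uses row geometric means; the 2x2 matrix in the test is a toy example, not the study's matrices:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise-comparison matrix.

    Uses the row geometric-mean approximation to the principal
    eigenvector; entry A[i][j] is how much more important criterion i
    is than criterion j (with A[j][i] = 1 / A[i][j]).
    """
    A = np.asarray(pairwise, dtype=float)
    gm = A.prod(axis=1) ** (1.0 / A.shape[1])  # geometric mean of each row
    return gm / gm.sum()                        # normalize to sum to 1
```

Varying these matrix entries across pandemic and non-pandemic scenarios, as the abstract describes, shifts the resulting weights and hence the mapped vulnerability, exposure, and hazard layers.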
Open Access Systematic Review
Dynamic Difficulty Adjustment in Serious Games: A Literature Review
by
Lucia Víteková, Christian Eichhorn, Johanna Pirker and David A. Plecher
Information 2026, 17(1), 96; https://doi.org/10.3390/info17010096 - 17 Jan 2026
Abstract
This systematic literature review analyzes the role of dynamic difficulty adaptation (DDA) in serious games (SGs) to provide an overview of current trends and identify research gaps. The purpose of the study is to contextualize how DDA is being employed in SGs to enhance their learning outcomes, effectiveness, and game enjoyment. The review included studies published over the past five years that implemented specific DDA methods within SGs. Publications were identified through Google Scholar (searched up to 10 November 2025) and screened for relevance, resulting in 75 relevant papers. No formal risk-of-bias assessment was conducted. These studies were analyzed by publication year, source, application domain, DDA type, and effectiveness. The results indicate a growing interest in adaptive SGs across domains, including rehabilitation and education, with DDA methods ranging from rule-based (e.g., fuzzy logic) and player modeling (using performance, physiological, or emotional metrics) to various machine learning techniques (reinforcement learning, genetic algorithms, neural networks). Newly emerging trends, such as the integration of generative artificial intelligence for DDA, were also identified. Evidence suggests that DDA can enhance learning outcomes and game experience, although study differences, limited evaluation metrics, and unexplored opportunities for adaptive SGs highlight the need for further research.
Full article
(This article belongs to the Special Issue Serious Games, Games for Learning and Gamified Apps)
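At the rule-based end of the DDA spectrum surveyed above, the adjustment can be as simple as a success-rate thermostat that keeps the player inside a target band. The target, band, and step values below are hypothetical, chosen only to illustrate the pattern:

```python
def adjust_difficulty(level, success_rate, target=0.7, band=0.1, step=0.1):
    """Rule-based DDA sketch: nudge difficulty to hold the player's
    success rate near a target (the 'flow channel'); clamp to [0, 1]."""
    if success_rate > target + band:
        level += step          # player is cruising: make it harder
    elif success_rate < target - band:
        level -= step          # player is struggling: ease off
    return min(1.0, max(0.0, level))
```

Fuzzy-logic and learning-based DDA methods reviewed in the paper generalize this idea by replacing the hard thresholds with fuzzy membership functions or a learned policy over richer player models.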
Open Access Article
Machines Prefer Humans as Literary Authors: Evaluating Authorship Bias in Large Language Models
by
Marco Rospocher, Massimo Salgaro and Simone Rebora
Information 2026, 17(1), 95; https://doi.org/10.3390/info17010095 - 16 Jan 2026
Abstract
Automata and artificial intelligence (AI) have long occupied a central place in cultural and artistic imagination, and the recent proliferation of AI-generated artworks has intensified debates about authorship, creativity, and human agency. Empirical studies show that audiences often perceive AI-generated works as less authentic or emotionally resonant than human creations, with authorship attribution strongly shaping esthetic judgments. Yet little attention has been paid to how AI systems themselves evaluate creative authorship. This study investigates how large language models (LLMs) evaluate literary quality under different framings of authorship—Human, AI, or Human+AI collaboration. Using a questionnaire-based experimental design, we prompted four instruction-tuned LLMs (ChatGPT 4, Gemini 2, Gemma 3, and LLaMA 3) to read and assess three short stories in Italian, originally generated by ChatGPT 4 in the narrative style of Roald Dahl. For each story × authorship condition × model combination, we collected 100 questionnaire completions, yielding 3600 responses in total. Across esthetic, literary, and inclusiveness dimensions, the stated authorship systematically conditioned model judgments: identical stories were consistently rated more favorably when framed as human-authored or human–AI co-authored than when labeled as AI-authored, revealing a robust negative bias toward AI authorship. Model-specific analyses further indicate distinctive evaluative profiles and inclusiveness thresholds across proprietary and open-source systems. Our findings extend research on attribution bias into the computational realm, showing that LLM-based evaluations reproduce human-like assumptions about creative agency and literary value. We publicly release all materials to facilitate transparency and future comparative work on AI-mediated literary evaluation.
Full article
(This article belongs to the Special Issue Emerging Research in Computational Creativity and Creative Robotics)
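The core comparison in such a design — ratings of identical stories under different authorship labels — can be sketched as a simple aggregation. The Likert-style ratings below are invented for illustration and do not reproduce the study's data:

```python
from statistics import mean

# Hypothetical 1-7 ratings for the SAME story under three authorship
# framings, mimicking the study's design; all values are invented.
ratings = {
    "Human":    [6, 5, 6, 5],
    "Human+AI": [5, 5, 6, 5],
    "AI":       [4, 3, 4, 4],
}

means = {framing: mean(r) for framing, r in ratings.items()}
# A negative AI-authorship bias shows up as a lower mean under the "AI" label.
bias = means["Human"] - means["AI"]
```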
Open Access Article
Information Modeling of Asymmetric Aesthetics Using DCGAN: A Data-Driven Approach to the Generation of Marbling Art
by
Muhammed Fahri Unlersen and Hatice Unlersen
Information 2026, 17(1), 94; https://doi.org/10.3390/info17010094 - 15 Jan 2026
Abstract
Traditional Turkish marbling (Ebru) art is an intangible cultural heritage characterized by highly asymmetric, fluid, and non-reproducible patterns, making its long-term preservation and large-scale dissemination challenging. It is highly sensitive to environmental conditions, making it enormously difficult to mass produce while maintaining its original aesthetic qualities. A data-driven generative model is therefore required to create unlimited, high-fidelity digital surrogates that safeguard this UNESCO heritage against physical loss and enable large-scale cultural applications. This study introduces a deep generative modeling framework for the digital reconstruction of traditional Turkish marbling (Ebru) art using a Deep Convolutional Generative Adversarial Network (DCGAN). A dataset of 20,400 image patches, systematically derived from 17 original marbling works, was used to train the proposed model. The framework aims to mathematically capture the asymmetric, fluid, and stochastic nature of Ebru patterns, enabling the reproduction of their aesthetic structure in a digital medium. The generated images were evaluated using multiple quantitative and perceptual metrics, including Fréchet Inception Distance (FID), Kernel Inception Distance (KID), Learned Perceptual Image Patch Similarity (LPIPS), and PRDC-based indicators (Precision, Recall, Density, Coverage). For experimental validation, the proposed DCGAN framework is additionally compared against a Vanilla GAN baseline trained under identical conditions, highlighting the advantages of convolutional architectures for modeling marbling textures. The results show that the DCGAN model achieved a high level of realism and diversity without mode collapse or overfitting, producing images that were perceptually close to authentic marbling works. 
In addition to the quantitative evaluation, expert qualitative assessment by a traditional Ebru artist confirmed that the model reproduced the organic textures, color dynamics, and asymmetric compositional characteristics of real marbling art. The proposed approach demonstrates the potential of deep generative models for the digital preservation, dissemination, and reinterpretation of intangible cultural heritage recognized by UNESCO.
Full article
(This article belongs to the Topic Advanced Development and Applications of AI-Generated Content (AIGC))
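One of the evaluation metrics above, Kernel Inception Distance (KID), is an unbiased MMD² estimate with a cubic polynomial kernel. A stdlib sketch on placeholder feature vectors might look like the following; real implementations run the kernel over Inception network features of real versus generated patches, which are only stand-in lists here:

```python
def poly_kernel(x, y):
    """Cubic polynomial kernel (dot(x, y) / dim + 1)^3 used by KID."""
    d = len(x)
    return (sum(a * b for a, b in zip(x, y)) / d + 1.0) ** 3

def kid(real, fake):
    """Unbiased MMD^2 estimate between two sets of feature vectors.
    Diagonal (i == j) terms are excluded from the within-set sums."""
    m, n = len(real), len(fake)
    k_rr = sum(poly_kernel(a, b) for i, a in enumerate(real)
               for j, b in enumerate(real) if i != j) / (m * (m - 1))
    k_ff = sum(poly_kernel(a, b) for i, a in enumerate(fake)
               for j, b in enumerate(fake) if i != j) / (n * (n - 1))
    k_rf = sum(poly_kernel(a, b) for a in real for b in fake) / (m * n)
    return k_rr + k_ff - 2.0 * k_rf
```

Because the estimator is unbiased it can go negative on tiny samples; scores near zero indicate matched distributions.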
Open Access Article
Point Cloud Quality Assessment via Complexity-Driven Patch Sampling and Attention-Enhanced Swin-Transformer
by
Xilei Shen, Qiqi Li, Renwei Tu, Yongqiang Bai, Di Ge and Zhongjie Zhu
Information 2026, 17(1), 93; https://doi.org/10.3390/info17010093 - 15 Jan 2026
Abstract
As an emerging immersive media format, point clouds (PC) inevitably suffer from distortions such as compression and noise, where even local degradations may severely impair perceived visual quality and user experience. It is therefore essential to accurately evaluate the perceived quality of PC. In this paper, a no-reference point cloud quality assessment (PCQA) method that uses complexity-driven patch sampling and an attention-enhanced Swin-Transformer is proposed to accurately assess the perceived quality of PC. Given that projected PC maps effectively capture distortions and that the quality-related information density varies significantly across local patches, a complexity-driven patch sampling strategy is proposed. By quantifying patch complexity, regions with higher information density are preferentially sampled to enhance subsequent quality-sensitive feature representation. Given that the indistinguishable response strengths between key and redundant channels during feature extraction may dilute effective features, an Attention-Enhanced Swin-Transformer is proposed to adaptively reweight critical channels, thereby improving feature extraction performance. Given that traditional regression heads typically use a single-layer linear mapping, which overlooks the heterogeneous importance of information across channels, a gated regression head is designed to enable adaptive fusion of global and statistical features via a statistics-guided gating mechanism. Experiments on the SJTU-PCQA dataset demonstrate that the proposed method consistently outperforms representative PCQA methods.
Full article
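The complexity-driven sampling idea can be sketched with intensity variance as a simple stand-in for the paper's complexity measure; the `sample_patches` helper and the patch data below are illustrative assumptions, not the authors' implementation:

```python
from statistics import pvariance

def sample_patches(patches, k):
    """Complexity-driven sampling sketch: rank patches by intensity
    variance (a stand-in for quality-related information density) and
    keep the k most complex ones. `patches` maps id -> flat pixel list."""
    ranked = sorted(patches, key=lambda p: pvariance(patches[p]),
                    reverse=True)
    return ranked[:k]

# Invented 2x2 patches from a projected point-cloud map.
patches = {
    "sky":   [200, 201, 200, 199],   # flat region, low complexity
    "edge":  [0, 255, 0, 255],       # high-contrast region
    "grass": [90, 110, 100, 95],
}
picked = sample_patches(patches, 2)  # textured patches win
```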
Open Access Article
AI-Enhanced Modular Information Architecture for Cultural Heritage: Designing Cognitive-Efficient and User-Centered Experiences
by
Fotios Pastrakis, Markos Konstantakis and George Caridakis
Information 2026, 17(1), 92; https://doi.org/10.3390/info17010092 - 15 Jan 2026
Abstract
Digital cultural heritage platforms face a dual challenge: preserving rich historical information while engaging an audience with declining attention spans. This paper addresses that challenge by proposing a modular information architecture designed to mitigate cognitive overload in cultural heritage tourism applications. We begin by examining evidence of diminishing sustained attention in digital user experience and its specific ramifications for cultural heritage sites, where dense content can overwhelm users. Grounded in cognitive load theory and principles of user-centered design, we outline a theoretical framework linking mental models, findability, and modular information architecture. We then present a user-centric modeling methodology that elicits visitor mental models and tasks (via card sorting, contextual inquiry, etc.), informing the specification of content components and semantic metadata (leveraging standards like Dublin Core and CIDOC-CRM). A visual framework is introduced that maps user tasks to content components, clusters these into UI components with progressive disclosure, and adapts them into screen instances suited to context, illustrated through a step-by-step walkthrough. Using this framework, we comparatively evaluate personalization and information structuring strategies in three platforms—TripAdvisor, Google Arts and Culture, and Airbnb Experiences—against criteria of cognitive load mitigation and user engagement. We also discuss how this modular architecture provides a structural foundation for human-centered, explainable AI–driven personalization and recommender services in cultural heritage contexts. The analysis reveals gaps in current designs (e.g., overwhelming content or passive user roles) and highlights best practices (such as tailored recommendations and progressive reveal of details). 
We conclude with implications for designing cultural heritage experiences that are cognitively accessible yet richly informative, summarizing contributions and suggesting future research in cultural UX, component-based design, and adaptive content delivery.
Full article
(This article belongs to the Special Issue Intelligent Interaction in Cultural Heritage)
Open Access Article
Potential of Different Machine Learning Methods in Cost Estimation of High-Rise Construction in Croatia
by
Ksenija Tijanić Štrok
Information 2026, 17(1), 91; https://doi.org/10.3390/info17010091 - 15 Jan 2026
Abstract
The fundamental goal of a construction project is to complete the construction phase within budget, but in practice, planned cost estimates are often exceeded. The causes of overruns can be due to insufficient preparation and planning of the project, changes during construction, activation of risky events, etc. Also, construction costs are often calculated based on experience rather than scientifically based approaches. Due to these challenges, this paper investigates the potential of several different machine learning methods (linear regression, decision tree forest, support vector machine and general regression neural network) for estimating construction costs. The methods were implemented on a database of recent high-rise construction projects in the Republic of Croatia. Results confirmed the potential of the selected assessment methods; in particular, the support vector machine stands out in terms of accuracy metrics. Established machine learning models contribute to a deeper understanding of real construction costs, their optimization, and more effective cost management during the construction phase.
Full article
(This article belongs to the Section Artificial Intelligence)
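The simplest of the compared methods, linear regression, can be sketched in closed ordinary-least-squares form. The project areas and costs below are invented, and the paper's better-performing models (notably the support vector machine) are more elaborate:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, the univariate form of
    the linear-regression baseline among the compared methods."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical projects: gross floor area (m^2) vs. cost (EUR, thousands).
area = [500, 1000, 1500, 2000]
cost = [600, 1100, 1600, 2100]
a, b = fit_line(area, cost)
estimate = a + b * 1200   # predicted cost for a 1200 m^2 building
```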
Open Access Article
ARIA: An AI-Supported Adaptive Augmented Reality Framework for Cultural Heritage
by
Markos Konstantakis and Eleftheria Iakovaki
Information 2026, 17(1), 90; https://doi.org/10.3390/info17010090 - 15 Jan 2026
Abstract
Artificial Intelligence (AI) is increasingly reshaping how cultural heritage institutions design and deliver digital visitor experiences, particularly through adaptive Augmented Reality (AR) applications. However, most existing AR deployments in museums and galleries remain static, rule-based, and insufficiently responsive to visitors’ contextual, behavioral, and emotional diversity. This paper presents ARIA (Augmented Reality for Interpreting Artefacts), a conceptual and architectural framework for AI-supported, adaptive AR experiences in cultural heritage settings. ARIA is designed to address current limitations in personalization, affect-awareness, and ethical governance by integrating multimodal context sensing, lightweight affect recognition, and AI-driven content personalization within a unified system architecture. The framework combines Retrieval-Augmented Generation (RAG) for controlled, knowledge-grounded narrative adaptation, continuous user modeling, and interoperable Digital Asset Management (DAM), while embedding Human-Centered Design (HCD) and Fairness, Accountability, Transparency, and Ethics (FATE) principles at its core. Emphasis is placed on accountable personalization, privacy-preserving data handling, and curatorial oversight of narrative variation. ARIA is positioned as a design-oriented contribution rather than a fully implemented system. Its architecture, data flows, and adaptive logic are articulated through representative museum use-case scenarios and a structured formative validation process including expert walkthrough evaluation and feasibility analysis, providing a foundation for future prototyping and empirical evaluation. The framework aims to support the development of scalable, ethically grounded, and emotionally responsive AR experiences for next-generation digital museology.
Full article
(This article belongs to the Special Issue Artificial Intelligence Technologies for Sustainable Development)
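The retrieval step of a RAG pipeline such as ARIA describes can be sketched with bag-of-words overlap; the artefact descriptions and the `retrieve` helper below are invented placeholders for a museum knowledge base, not part of the framework itself:

```python
def retrieve(query, docs, k=1):
    """Retrieval step of a RAG sketch: rank artefact descriptions by
    term overlap with the visitor's query. The top document would then
    ground the generated narrative, keeping it knowledge-anchored."""
    q = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q & set(docs[d].lower().split())),
                    reverse=True)
    return scored[:k]

# Invented artefact knowledge base.
docs = {
    "amphora": "clay amphora used to store wine and olive oil",
    "kouros":  "marble statue of a standing youth",
}
best = retrieve("how was wine stored", docs)
```

Production systems replace the overlap score with dense embeddings, but the grounding contract is the same: generate only from retrieved text.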
Open Access Article
QWR-Dec-Net: A Quaternion-Wavelet Retinex Framework for Low-Light Image Enhancement with Applications to Remote Sensing
by
Vladimir Frants, Sos Agaian, Karen Panetta and Artyom Grigoryan
Information 2026, 17(1), 89; https://doi.org/10.3390/info17010089 - 14 Jan 2026
Abstract
Computer vision and deep learning are essential in diverse fields such as autonomous driving, medical imaging, face recognition, and object detection. However, enhancing low-light remote sensing images remains challenging for both research and real-world applications. Low illumination degrades image quality due to sensor limitations and environmental factors, weakening visual fidelity and reducing performance in vision tasks. Common issues such as insufficient lighting, backlighting, and limited exposure create low contrast, heavy shadows, and poor visibility, particularly at night. We propose QWR-Dec-Net, a quaternion-based Retinex decomposition network tailored for low-light image enhancement. QWR-Dec-Net consists of two key modules: a decomposition module that separates illumination and reflectance, and a denoising module that fuses a quaternion holistic color representation with wavelet multi-frequency information. This structure jointly improves color constancy and noise suppression. Experiments on low-light remote sensing datasets (LSCIDMR and UCMerced) show that QWR-Dec-Net outperforms current methods in PSNR, SSIM, LPIPS, and classification accuracy. The model’s accurate illumination estimation and stable reflectance make it well-suited for remote sensing tasks such as object detection, video surveillance, precision agriculture, and autonomous navigation.
Full article
(This article belongs to the Section Artificial Intelligence)
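The classic Retinex decomposition that QWR-Dec-Net learns end-to-end can be illustrated on a 1-D signal. The moving-average illumination estimate below is a simplification of the underlying idea, not the network's quaternion or wavelet modules:

```python
def retinex_decompose(signal, window=3):
    """Single-scale Retinex sketch on a 1-D intensity signal:
    illumination is estimated by a moving average, and reflectance is
    the ratio signal / illumination (so I = L * R elementwise)."""
    half = window // 2
    illum = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        illum.append(sum(signal[lo:hi]) / (hi - lo))
    reflect = [s / l for s, l in zip(signal, illum)]
    return illum, reflect

dark_edge = [10, 10, 40, 40]           # a dim signal with one step edge
illum, reflect = retinex_decompose(dark_edge)
```

Enhancement then brightens the illumination component while leaving the reflectance, which carries scene content, intact.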
Topics
Topic in
AI, Computers, Electronics, Information, MAKE, Signals
Recent Advances in Label Distribution Learning
Topic Editors: Xin Geng, Ning Xu, Liangxiao Jiang
Deadline: 31 January 2026
Topic in
Applied Sciences, Computers, Electronics, Information, J. Imaging
Visual Computing and Understanding: New Developments and Trends
Topic Editors: Wei Zhou, Guanghui Yue, Wenhan Yang
Deadline: 31 March 2026
Topic in
Applied Sciences, Information, Systems, Technologies, Electronics, AI
Challenges and Opportunities of Integrating Service Science with Data Science and Artificial Intelligence
Topic Editors: Dickson K. W. Chiu, Stuart So
Deadline: 30 April 2026
Topic in
Electronics, Future Internet, Technologies, Telecom, Network, Microwave, Information, Signals
Advanced Propagation Channel Estimation Techniques for Sixth-Generation (6G) Wireless Communications
Topic Editors: Han Wang, Fangqing Wen, Xianpeng Wang
Deadline: 31 May 2026
Special Issues
Special Issue in
Information
Machine Learning for the Blockchain
Guest Editors: Georgios Alexandridis, Thanasis Papaioannou, Georgios Siolas, Paraskevi Tzouveli
Deadline: 31 January 2026
Special Issue in
Information
Emerging Applications of Machine Learning in Healthcare, Industry, and Beyond
Guest Editors: Francesco Isgrò, Huiyu Zhou, Daniele Ravi
Deadline: 31 January 2026
Special Issue in
Information
Selected Papers of the 10th North American International Conference on Industrial Engineering and Operations Management
Guest Editors: Luis Rabelo, Shahram Taj
Deadline: 31 January 2026
Special Issue in
Information
Transformative Technologies in Healthcare: Harnessing Machine Learning, Deep Learning and Large Language Models in Health Informatics
Guest Editors: Balu Bhasuran, Kalpana Raja
Deadline: 31 January 2026
Topical Collections
Topical Collection in
Information
Knowledge Graphs for Search and Recommendation
Collection Editors: Pierpaolo Basile, Annalina Caputo
Topical Collection in
Information
Augmented Reality Technologies, Systems and Applications
Collection Editors: Ramon Fabregat, Jorge Bacca-Acosta, N.D. Duque-Mendez
Topical Collection in
Information
Natural Language Processing and Applications: Challenges and Perspectives
Collection Editor: Diego Reforgiato Recupero